
A furious political leader screams a message of hate to an adoring crowd. A child cries over the massacre of her family. Emaciated men in prison uniforms, starved to the edge of death because of their identity. As you read each sentence, specific images likely appear in your mind, seared into your memory and our collective consciousness through documentaries, textbooks, news media and museum visits.
We understand the significance of pivotal historical images like these, images we must learn from in order to move forward, in large part because they captured something real about the world at moments when we were not there to see it with our own eyes.
As documentary archival producers and co-directors of the Archival Producers Alliance, we are deeply concerned about what may happen when we can no longer trust that such images reflect reality. We are not alone: ahead of this year's Academy Awards, it was reported that the Motion Picture Academy is considering requiring contenders to disclose their use of generative AI.
Although such disclosure may be important for feature films, it is critical for documentaries. In the spring of 2023, we began to see synthetic images and audio used in the historical documentaries we were working on. With no transparency standards in place, we fear that this blending of the real and the unreal will discredit the nonfiction genre and the indispensable role it plays in our shared history.
In February 2024, OpenAI previewed its new text-to-video platform, Sora, with a clip called “Historical footage of California during the gold rush.” The video was convincing: a flowing stream full of the promise of riches. A blue sky and rolling hills. A thriving town. Men on horseback. It looked like a western where the good guy wins and rides off into the sunset. It looked authentic, but it was fake.
OpenAI presented “Historical footage of California during the gold rush” to show how Sora, officially released in December 2024, creates videos from user prompts using AI that “understands and simulates reality.” But that clip is not reality. It is a haphazard blend of real and Hollywood-imagined imagery, along with the historical biases of the industry and its archives. Sora, like other generative programs such as Runway and Luma Dream Machine, is trained on internet content and other digital material. As a result, these platforms simply recycle the limitations of online media, and no doubt amplify its biases. Watching the clip, we understood how audiences could be fooled. Film is powerful that way.
Some in the film world have greeted the arrival of generative AI tools with open arms. We and others see something deeply troubling on the horizon. If our faith in the veracity of visual media is destroyed, powerful and important films may lose their claim to the truth, even if they use no AI-generated material at all.
Transparency, something akin to the labeling that tells consumers what is in the food they eat, could be a small step forward. But AI regulation does not appear to be riding over the next hill, coming to save us.
Generative AI companies are ushering in a world in which anyone can create audiovisual material. This is deeply worrying when it is applied to representations of history. The proliferation of synthetic images makes the work of documentarians and researchers all the more urgent: safeguarding the integrity of primary source materials, digging through archives and presenting history accurately. It is human work that cannot be replicated or replaced. One need only look at this year's Oscar-nominated documentary “Sugarcane” to see the power of careful research, meticulous archival imagery and well-reported personal narration to expose hidden history, in this case the abuse of First Nations children at Canadian residential schools.
The speed at which new AI models are released and new content is produced makes the technology impossible to ignore. While it may be fun to use these tools to imagine and experiment, the result is not a true work of documentation, of humans bearing witness. It is only a remix.
In response, we need to build strong AI media literacy in our industry and among the general public. At the Archival Producers Alliance, we published a set of guidelines, endorsed by more than 50 industry organizations, for the use of generative AI in documentaries, practices our colleagues have begun integrating into their work. We have also issued a call for case studies of AI use in documentary film. Our goal is to help the film industry ensure that documentaries deserve that title and that the collective memory they record will be protected.
We do not live in a classic western. No one is coming to save us from the threat of unregulated generative AI. We must act, individually and collectively, to preserve the integrity and the diverse perspectives of our real history. Accurate visual records not only document what happened in the past; they help us understand it, learn from its details and, perhaps most important at this historical moment, believe it.
When we can no longer look to and learn from what came before, the future we share may turn out to be little more than a random remix as well.
Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli are co-directors of the Archival Producers Alliance.
Insights
L.A. Times Insights delivers AI-generated analysis on Voices content to offer all points of view. Insights does not appear on any news articles.
The following AI-generated content is powered by Perplexity. The Los Angeles Times editorial staff does not create or edit the content.
Ideas expressed in the piece
- The article argues that AI-generated historical reconstructions threaten the integrity of visual evidence, recycling biases from existing media and archives while presenting synthetic content as authentic. For example, OpenAI's Sora produced a fabricated gold rush-era video that blends Hollywood tropes with historical inaccuracies, risking public confusion about real events.[8]
- Transparency about AI use is crucial for documentaries, because undisclosed synthetic content could erode trust in nonfiction media. The authors emphasize that films like “Sugarcane,” which exposed abuses at Canadian residential schools, depend on verified archival material to preserve historical truth.[8]
- Human oversight of primary sources remains indispensable, with the Archival Producers Alliance advocating ethical guidelines and AI literacy to combat misinformation. They warn that unchecked generative AI could reduce collective memory to a “random remix” detached from real accountability.[8]
Different views on the topic
- AI democratizes historical storytelling, allowing creators to visualize poorly documented events or marginalized perspectives. Tools such as Midjourney enable amateur historians to generate images of ancient Rome or Soviet-era Prague, sparking public interest in history despite gaps in authenticity.[2][4]
- Technological advances can coexist with safeguards, such as forensic detection of AI manipulation. For example, researchers note that AI-generated images still show telltale flaws (e.g., deformed hands), though skeptics warn that detection will only grow harder as the technology improves.[2][7]
- AI reflects existing methodological biases rather than creating new ones, mirroring the historical distortions already embedded in colonial archives. Critics argue that decolonizing AI requires redefining its purpose, not just diversifying its training data, to prioritize Indigenous knowledge systems over algorithmic reproduction.[3][6]
- Regulation risks stifling innovation, as shown by creative applications such as the re-creation of Val Kilmer's voice for “Top Gun: Maverick.” Proponents argue that AI's potential for artistic expression and education outweighs the risks of misuse, provided users maintain critical awareness.[1][5]