Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0
Generative artificial intelligence (AI) tools like OpenAI’s ChatGPT, Stability AI’s Stable Diffusion, and Midjourney have repeatedly sparked waves of panic and excitement since their release. News reports have understandably focused on how these tools make it easier and cheaper to create high-quality synthetic media, including images, video, and audio, and distribute it at scale in multiple languages with hyper-local targeting.
These tools, if misused, have the potential to dramatically destabilize our shared sense of reality. Technologists and scholars already use the phrase “liar’s dividend” for the inverse threat: political actors dismissing real content as deepfakes in order to avoid accountability. Now an even more serious and dangerous erosion of truth is emerging. We call it “artificial history” – the ability of generative AI to create synthetic histories that are both highly credible and completely fictional.
For example, a user named Epentibi runs a YouTube channel with nearly 30,000 subscribers. The account is a kind of fan page, where Epentibi uses generative AI to create fake documentaries, newsreels and other clips related to “The New Order: Last Days of Europe,” a video game set in an alternate timeline in which Nazi Germany won World War II.
Epentibi is talented, and the clips are well made, complete with maps, graphics and highly produced “nightly news” segments voiced by familiar, reassuring figures like Walter Cronkite and his European broadcasting contemporaries. The results are nearly indistinguishable from actual broadcasts of the period.
As another example, TikTok account @what.if_ai creates alternate reality videos that flip the history of colonization onto the colonizers. The account has more than 88,000 followers, and its videos have garnered more than 2.2 million likes. The content is well-intentioned. @what.if_ai creator Gerard Marin told Rest of World: “My audience comes from all over the world. Some live in former colonies like Somalia, India, Ireland, and they often want to visualize a different history…. I want to contribute to this ongoing dialogue about the legacy of colonialism and the possibilities for a more just future.”
But Marin’s next observation hints at the risk: “Now someone like me can create something that gets millions of views, using only my imagination,” he says. “Anyone can do it.”
Of course, historical fabrications have existed throughout recorded history, from hoaxes such as the “Piltdown Man,” a forged fossil presented as a missing evolutionary link between humans and apes, to novelists who imagined a Nazi victory in World War II.
What’s exciting and frightening now is the powerful verisimilitude of these new generative AI tools and the ability of social media platforms to serve up these synthetic artifacts, where truth and fiction are nearly indistinguishable, to the world at the click of a button. In the right hands, these tools can enhance human creativity and ingenuity. But in the wrong hands, they can empower bad actors to distort the truth and confuse, radicalize, or recruit audiences.
Take, for example, Russia’s invasion of Ukraine in 2022. Before that invasion, as before its annexation of Crimea in 2014, Russian state actors waged information warfare campaigns casting Russia as a heroic liberator. Russian President Vladimir Putin called Ukraine a far-right extremist state and claimed the invasion was an attempt to “denazify” the country. Russian state media claimed its soldiers were being welcomed in Ukraine as heroes who had freed the country from its Nazi leadership and the Western powers behind it. Legions of state-sponsored or state-affiliated media outlets, paid internet trolls, and bot accounts spread these messages on social media, with apparent spikes in the days before each invasion. Lacking actual evidence, the campaign relied heavily on Russia’s ability to fabricate conspiracy theories that exploited long-standing ethnic and regional grievances.
Now, imagine that Russia’s claims about a Nazi state in Ukraine came complete with a web of synthetic artifacts substantiating that fictitious history: retroactive “evidence” spun from fake first-hand testimony, fabricated on-the-ground footage, and deepfaked trusted messengers. If we can retroactively rewrite our origin story, we can make a different argument for where we should be heading. In the past, it was difficult to erase or amplify a particular perspective, because creating fakes to contest historical narratives took time, and distributing those falsehoods through education and the media was difficult. Generative AI unlocks the ability to rapidly create artificial history, and social media allows it to be distributed instantly around the world.
The legal and regulatory challenges posed by generative AI are very real today and require no speculation (even if some effective altruists talk about Terminator-esque doomsday scenarios). The number of deepfakes online increased tenfold from 2022 to 2023. In the US, AI tools were used to create fake images of former President Trump being arrested and to place robocalls impersonating President Biden to roughly 20,000 New Hampshire residents. Outside the US, deepfake audio clips were used to disrupt Slovakia’s parliamentary election, aiming to tilt the vote toward a more pro-Moscow candidate.
It is also true that the solutions we need to counter these threats are similar to those we need to protect against artificial history. Tools that allow policymakers, researchers, and users to verify the context and history of digital media (such as the C2PA standard) are critical and need to be tested at scale. As a recent report from the Center for News, Technology, and Innovation highlighted, social media companies and news media organizations need to ensure that their internal policies apply to all maliciously manipulated media, including “shallow fakes” and “cheap fakes” that don’t require advanced technical tools.
While internal platform changes are necessary, social media platforms that regularly spread misinformation and deepfakes need greater oversight and regulation. Yet so far, they have responded to these threats by firing key integrity teams and rolling back policies on election misinformation. While the onus should not be placed on users, public awareness, media literacy, and civic education programs can also help citizens distinguish accurate information from disinformation (including how to use new content provenance tools), especially during close elections.
AI tools have been around far longer than ChatGPT, but we are closer to the beginning of the AI conversation than the end. If we can imagine good examples of how these tools can enhance creativity, we also need to imagine how they might be misused. One way to do that is by rewriting the past.