Conspiracy Theories Have a New Best Friend

Generative AI programs like ChatGPT threaten to revolutionize how disinformation spreads online.


History has long been a theater of war, the past serving as a proxy in conflicts over the present. Ron DeSantis is warping history by banning books on racism from Florida’s schools; people remain divided about the right approach to repatriating Indigenous objects and remains; the leak of the Pentagon Papers exposed how the government had misled the public about the Vietnam War. The Nazis seized power in part by manipulating the past—they used propaganda about the burning of the Reichstag, the German parliament building, to justify persecuting political rivals and assuming dictatorial authority. That specific example weighs on Eric Horvitz, Microsoft’s chief scientific officer and a leading AI researcher, who tells me that the apparent AI revolution could not only provide a new weapon to propagandists, as social media did earlier this century, but entirely reshape the historiographic terrain, perhaps laying the groundwork for a modern-day Reichstag fire.

The advances in question, including language models such as ChatGPT and image generators such as DALL-E 2, loosely fall under the umbrella of “generative AI.” These are powerful and easy-to-use programs that produce synthetic text, images, video, and audio, all of which can be used by bad actors to fabricate events, people, speeches, and news reports to sow disinformation. You may have seen one-off examples of this type of media already: fake videos of Ukrainian President Volodymyr Zelensky surrendering to Russia; mock footage of Joe Rogan and Ben Shapiro arguing about the film Ratatouille. As this technology advances, piecemeal fabrications could give way to coordinated campaigns—not just synthetic media but entire synthetic histories, as Horvitz called them in a paper late last year. And a new breed of AI-powered search engines, led by Microsoft and Google, could make such histories easier to find and all but impossible for users to detect.

Even though similar fears about social media, TV, and radio proved somewhat alarmist, there is reason to believe that AI could really be the new variant of disinformation that makes lies about future elections, protests, or mass shootings both more contagious and more resistant to debunking. Consider, for example, the raging bird-flu outbreak, which has not yet begun spreading from human to human. A political operative—or a simple conspiracist—could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake “leaked” documents, audio and video recordings, and expert commentary. A synthetic history in which the government weaponized bird flu would then be ready to go if avian flu ever began circulating among humans. A propagandist could simply connect the news to their entirely fabricated—but fully formed and seemingly well-documented—backstory seeded across the internet, spreading a fiction that could consume the nation’s politics and public-health response. The power of AI-generated histories, Horvitz told me, lies in “deepfakes on a timeline intermixed with real events to build a story.”

It’s also possible that synthetic histories will change the kind, but not the severity, of the already rampant disinformation online. People are happy to believe the bogus stories they see on Facebook, Rumble, Truth Social, YouTube, wherever. Before the web, propaganda and lies about foreigners, wartime enemies, aliens, and Bigfoot abounded. And where synthetic media or “deepfakes” are concerned, existing research suggests that they offer surprisingly little benefit compared with simpler manipulations, such as mislabeling footage or writing fake news reports. You don’t need advanced technology for people to believe a conspiracy theory. Still, Horvitz believes we are at a precipice: The speed at which AI can generate high-quality disinformation will be overwhelming.

Automated disinformation produced at a heightened pace and scale could enable what he calls “adversarial generative explanations.” In a parallel of sorts to the targeted content you’re served on social media, which is tested and optimized according to what people engage with, propagandists could run small tests to determine which parts of an invented narrative are more or less convincing, and use that feedback along with social-psychology research to iteratively improve that synthetic history. For instance, a program could revise and modulate a fabricated expert’s credentials and quotes to land with certain demographics. Language models like ChatGPT, too, threaten to drown the internet in similarly conspiratorial and tailored Potemkin text—not targeted advertising, but targeted conspiracies.
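The feedback loop described above can be reduced to a skeleton. The sketch below is purely illustrative—the audiences, variants, and scoring function are all made up—but it shows the basic mechanism: generate variants of a claim, measure which one lands with a test audience, and keep the winner.

```python
# Toy sketch (hypothetical, for illustration only) of the iterative
# message-testing loop the article describes. Engagement here is simulated
# as keyword overlap with each audience member's interests; a real operation
# would substitute actual click or share metrics.

def engagement(variant, audience):
    """Fraction of the audience whose interest keywords appear in the variant."""
    words = set(variant.lower().split())
    return sum(1 for interests in audience if words & interests) / len(audience)

def optimize(variants, audience, rounds=3):
    """Greedy selection: each round, cull the weaker half of the pool."""
    pool = list(variants)
    for _ in range(rounds):
        pool.sort(key=lambda v: engagement(v, audience), reverse=True)
        pool = pool[: max(1, len(pool) // 2)]
    return pool[0]

# Hypothetical test audience and candidate narratives.
audience = [{"lab", "leak"}, {"expert", "documents"}, {"lab", "documents"}]
variants = [
    "Officials deny everything",
    "Leaked lab documents cited by expert",
    "A quiet week in public health",
]
best = optimize(variants, audience)
```

The point of the sketch is not the trivial scoring rule but the shape of the loop: because generation is now cheap, a propagandist can run this cycle continuously, which is what distinguishes "adversarial generative explanations" from one-off fakes.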

Big Tech’s plan to replace traditional internet search with chatbots could increase this risk substantially. The AI language models being integrated into Bing and Google are notoriously bad at fact-checking and prone to fabricating information, which may make them potent vectors for fake histories. Although many of the early versions of chatbot-based search give Wikipedia-style responses with footnotes, the whole point of a synthetic history is to provide an alternative and convincing set of sources. And the entire premise of chatbots is convenience—for people to trust them without checking.

If this disinformation doomsday sounds familiar, that’s because it is. “The claim about [AI] technology is the same claim that people were making yesterday about the internet,” says Joseph Uscinski, a political scientist at the University of Miami who studies conspiracy theories. “Oh my God, lies travel farther and faster than ever, and everyone’s gonna believe everything they see.” But he has found no evidence that beliefs in conspiracy theories have increased alongside social-media use, or even throughout the coronavirus pandemic; the research into common narratives such as echo chambers is also shaky.

People buy into alternative histories not because new technologies make them more convincing, Uscinski says, but for the same reason they believe anything else—maybe the conspiracy confirms their existing beliefs, matches their political persuasion, or comes from a source they trust. He referenced climate change as an example: People who believe in anthropogenic warming, for the most part, have “not investigated the data themselves. All they’re doing is listening to their trusted sources, which is exactly what the climate-change deniers are doing too. It’s the same exact mechanism; it’s just in this case the Republican elites happen to have it wrong.”

Of course, social media did change how people produce, spread, and consume information. Generative AI could do the same, but with new stakes. “In the past, people would try things out by intuition,” Horvitz told me. “But the idea of iterating faster, with more surgical precision on manipulating minds, is a new thing. The fidelity of the content, the ease with which it can be generated, the ease with which you can post multiple events onto timelines”—all are substantive reasons to worry. Already, in the lead-up to the 2020 election, Donald Trump seeded claims of voter fraud that bolstered the “Stop the Steal” campaign once he lost. As November 2024 approaches, like-minded political operatives could use AI to create fake personas and election officials, fabricate videos of voting-machine manipulation and ballot-stuffing, and write false news stories, all of which would come together into an airtight synthetic history in which the election was stolen.

Deepfake campaigns could send us further into “a post-epistemic world, where you don’t know what’s real or fake,” Horvitz said. A businessperson accused of wrongdoing could call incriminating evidence AI-generated; a politician could plant documented but entirely false character assassinations of rivals. Or perhaps, in the same way Truth Social and Rumble provide conservative alternatives to Twitter and YouTube, a far-right alternative to AI-powered search, trained on a wealth of conspiracies and synthetic histories, will ascend in response to fears about Google, Bing, and “WokeGPT” being too progressive. “There’s nothing in my mind that would stop that from happening in search capacity,” says Renée DiResta, the research manager of the Stanford Internet Observatory, who recently wrote a paper on language models and disinformation. “It’s going to be seen as a fantastic market opportunity for somebody.” RightWingGPT and a conservative-Christian AI are already under discussion, and Elon Musk is reportedly recruiting talent to build a conservative rival to OpenAI.

Preparing for such deepfake campaigns, Horvitz said, will require a variety of strategies, including media-literacy efforts, enhanced detection methods, and regulation. Most promising might be creating a standard to establish the provenance of any piece of media—a log of where a photo was taken and all the ways it has been edited attached to the file as metadata, like a chain of custody for forensic evidence—which Adobe, Microsoft, and several other companies are working on. But people would still need to understand and trust that log. “You have this moment of both proliferation of content and muddiness about how things are coming to be,” says Rachel Kuo, a media-studies professor at the University of Illinois at Urbana-Champaign. Provenance, detection, or other debunking methods might still rely largely on people listening to experts, whether it be journalists, government officials, or AI chatbots, who tell them what is and isn’t legitimate. And even with such silicon chains of custody, simpler forms of lying—over cable news, on the floor of Congress, in print—will continue.
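The chain-of-custody idea can be made concrete with a toy hash chain. This sketch is not the standard Adobe, Microsoft, and others are developing—real provenance systems also rely on cryptographic signatures and trusted hardware—but it shows the core property: every edit record includes a hash of the record before it, so rewriting any step of a file’s history breaks the chain.

```python
import hashlib
import json

# Illustrative toy version of a provenance log: a hash-chained list of edit
# records of the sort that could ride along with a media file as metadata.
# (Hypothetical structure; real standards add signatures and identity.)

def add_entry(log, action):
    """Append an edit record whose hash covers the previous record's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = {"action": action, "prev": prev}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    log.append({"action": action, "prev": prev, "hash": digest})

def verify(log):
    """Recompute every hash in order; any altered entry breaks the chain."""
    prev = "genesis"
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"action": entry["action"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
add_entry(log, "captured: camera X, 2023-03-01")
add_entry(log, "cropped to 4:3")
ok_before = verify(log)                         # untouched log verifies
log[0]["action"] = "captured: somewhere else"   # tamper with the history
ok_after = verify(log)                          # chain no longer verifies
```

As the article notes, the hard part is not the cryptography but the trust: a verified chain only helps if people check it and believe the institutions that anchor it.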

Framing technology as the driving force behind disinformation and conspiracy implies that technology is a sufficient, or at least necessary, solution. But emphasizing AI could be a mistake. If we’re primarily worried “that someone is going to deep-fake Joe Biden, saying that he is a pedophile, then we’re ignoring the reason why a piece of information like that would be resonant,” Alice Marwick, a media-studies professor at the University of North Carolina at Chapel Hill, told me. And to argue that new technologies, whether social media or AI, are primarily or solely responsible for bending the truth risks reifying the power of Big Tech’s advertisements, algorithms, and feeds to determine our thoughts and feelings. As the reporter Joseph Bernstein has written: “It is a model of cause and effect in which the information circulated by a few corporations has the total power to justify the beliefs and behaviors of the demos. In a way, this world is a kind of comfort. Easy to explain, easy to tweak, and easy to sell.”

The messier story might contend with how humans, and maybe machines, are not always very rational; with what might need to be done for writing history to no longer be a war. The historian Jill Lepore has said that “the footnote saved Wikipedia,” suggesting that transparent sourcing helped the website become, or at least appear to be, a premier source for fairly reliable information. But maybe now the footnote, that impulse and impetus to verify, is about to sink the internet—if it has not done so already.