The Conspiracies Are Coming From Inside the House

After 2016, Americans are alert to Russian election interference, but domestic influencers are spreading discord on their own.

[Illustration: the pyramid eye over the stripes of the American flag. Steve Allen / Getty / The Atlantic]

About the author: Renée DiResta is the technical research manager at the Stanford Internet Observatory.

Four years ago, when Russia’s internet trolls wanted the American electorate to lose confidence in democracy, they had to work hard at it—by recirculating cynical postings from obscure social-media accounts, or by making up their own.

The message then was that everything in American society had been rigged: elections, football games, the stock market, primaries, polls, the media, “the system.” But this litany of conspiratorial messages bubbled up from the lower reaches of the social-media universe—for instance, from Twitter accounts whose Russian owners had worked painstakingly to gain followers. According to one spelling-challenged troll on Twitter, unspecified software that was “RIGGED AGAINTS BERNIE” somehow “stole votes in ALL Hillary won counties” at the behest of the Democratic National Committee. The Internet Research Agency encouraged distrust and paranoia on Facebook and Instagram, too: “Funny how Clinton was favored before the vote was even cast, and how the Democratic Party has been working against Bernie to ensure this corrupt puppet’s victory,” declared one posting. To encourage its discovery by disaffected Americans, the author also included a barrage of hashtags from #theteaparty to #nobama to #hillary4prison.

In 2020, though, the vitriol, conspiracies, and incessant allegations of rigging aren’t coming from outsiders. They’re being driven by real influencers in the United States—by verified users, many from within the media, and by passionate hyper-partisan fan groups that band together to drive the public conversation.

The bungled vote count at the Iowa caucus last month revealed the blazing incompetence of that state’s Democratic Party and Shadow Inc., the contractor it hired to design a vote-counting app. But it also revealed something far more troubling: deep suspicion and pervasive anger. Almost immediately after the announcement that results would be delayed, unfounded allegations proliferated on Twitter. Even blue-check Twitter users—people with verified identities and, often, affiliations with credible media institutions—quickly resorted to conspiratorial speculation about nefarious plots. Several high-profile Sanders surrogates claimed that the party was stalling because it was unhappy that results showed Bernie Sanders winning; others went a step further, suggesting that local party apparatchiks were outright rigging results for Pete Buttigieg. Some of these insinuations were retweeted by high-profile social-media accounts, including that of a sitting member of Congress.

Iowa wasn’t a one-off: After Joe Biden’s surprisingly strong performance in Tuesday’s primary, the hashtags #RiggedPrimary and #RiggedElection began trending on Twitter.

The key lesson from 2016 isn’t that Russia ran an online manipulation operation; it’s that, on an internet designed for sensationalism and virality, influence itself has evolved. When propaganda is democratized, when publishing costs nothing, when velocity and virality drive the information ecosystem, and when provocateurs face no consequences, literally everyone has the power to promote conspiracy theories and other forms of disinformation. Today, everyone is on alert for outside agitators ginning up unrest. But the most divisive activity in American politics is overwhelmingly homegrown.

I was one of the researchers who investigated the Internet Research Agency’s social-media manipulation tactics from 2014 to 2017; my team and I reviewed 10.4 million tweets from 3,841 Twitter accounts, 1,100 YouTube videos from 17 channels, 116,000 Instagram posts from 133 accounts, and 61,500 unique Facebook posts from 81 pages. Strikingly, only about 10 percent of the content that Russian trolls circulated during the three-year period was overtly political to the point of mentioning specific candidates; the rest was intended to galvanize people around group identities, to exacerbate distrust, and to sow social divisions around fundamental questions of who and what America is for.

Even in the 2016 influence operation, many of the conspiratorial and hyper-partisan tweets and memes that the trolls selected to power their outrage machine were created by Americans. The Internet Research Agency simply amplified them by reposting or rebranding them. Indeed, appropriating real content allowed the Russian meddlers to operate subtly—to the point that the extent of their influence stayed concealed for a full year after the 2016 election. Yet while Moscow’s trolls had convincingly pretended to be something they weren’t, other bad actors—most notably the Islamic State—had already quite visibly demonstrated the power of computational propaganda on social networks. This kind of manipulation was already becoming the new normal, and no one had any idea what to do about it.

By the mid-2010s, a wide range of domestic groups, composed of very real people, had realized that coordinated online activism could have significant effects offline. Even small groups could influence the prevailing narrative—about social-justice issues, pop bands, or the latest Star Wars movie—by dominating the online conversation. Once disparaged as “slacktivism” or “clicktivism,” coordinated online activism was gaining respect as a powerful tool for garnering mainstream attention. At the time, reputable news organizations were willing to cover almost any viral topic as news, so getting a topic to trend on Twitter and other sites—by knowing how to game simple algorithms—could produce mainstream broadcast and print coverage that reached millions of people. Soon, new apps allowed people to create the illusion of popularity even more easily, by automating coordinated posts on specific topics. At the time, these tactics—especially when deployed by advertisers or entertainers—were seen less as manipulation than as guerrilla marketing. It was just how entities competed for attention in the noisy social ecosystem.

These behaviors soon attracted the attention of data scientists and other researchers. I myself looked at anti-vaccine activists and conspiracy theorists, watching them expand their online audiences by cross-pollinating messages and memes into other groups’ hashtags in the hope of attracting new recruits. Liberal anti-vaccine activists were finding common cause with libertarian Second Amendment activists in the #2A hashtag, and with conspiracy theorists in #pizzagate. The tactics were remarkably similar across a huge range of groups globally—ISIS used the same attention-by-hashtag strategies as cancer charities. Network graphs—visualizations that revealed who was (and wasn’t) talking to whom—became popular in news coverage. They were a clear visual representation of what the internet had become: a collection of networked factions.

Social platforms were built, in part, to help the like-minded find each other. By the time of the political campaigns of 2016, people had already begun to signal their online and offline factional allegiances in their Twitter handles and Facebook profiles, posting hashtags in their bios and, soon after, telltale emojis in their usernames. A frog was an allusion to Pepe, the cartoon amphibian that had become a mascot for alt-right Trump supporters. Soon after, red roses came to symbolize democratic socialists. This trend gradually extended to devotees of highly specific issues—globes for people concerned about climate change, bikes for critics of single-occupancy motor vehicles, bees for fans of Beyoncé.

Savvy political actors—including, yes, the Russians—had a deep understanding of the factional nature of the social internet, and how to take advantage of it. They understood that participation in an online community, where people are passionate and eager to evangelize, was akin to rooting for a sports team or being comrades-in-arms in a war. The Internet Research Agency segmented American society along these lines. Each of the Facebook pages that it created—with names such as “United Muslims of America” and “LGBT United”—appealed to a particular subgroup. The Russian trolls had a remarkable grasp of societal nuance, such as what type of Republican someone was—older Reaganite, or young Pepe sympathizer?—and what messaging would appeal to which specific identity. The overwhelming majority of the content they created or shared sought to solidify their followers’ allegiance to and pride in their group.

The Russian trolls didn’t invent rancorous identity politics; they acted more as provocateurs, infiltrating existing factions to encourage more chaos. In fact, they modeled some of their nastiest, most provocative accounts after the observable behavior of a subset of real American activists. In 2016, the #MAGA contingent was particularly adept at the emergent factional warfare. They called the presidential election the “Great Meme War”—partly with tongue in cheek, but also quite seriously. Only a small number of these Trump fans made content, but many took on the mission of sharing it. The so-called Bernie bros behaved similarly. Clinton had her own share of partisans among the most aggressive #ImWithHer tweeters. These factions were adept at creating online moments that drew in observers, often resulting in mainstream press coverage and mass attention.

The dynamics of political fandom are perhaps even more visible in 2020. Again, foreign provocateurs are likely somewhere in the mix. But the vast majority of the people driving the rancor, and impulsively retweeting false or misleading content that confirms their own political biases, are Americans, including many who should know better. This is, after all, the era of influencers—people who have made a career of being popular online and have active fan bases and massive reach that most Russian trolls can only aspire to. Misspelled social-media postings from pseudonymous accounts only go so far. But when prominent journalists and political figures with huge Twitter followings give in to baseless speculation and push narratives that delegitimize elections, the effect is exponentially greater.

As vote-counting delays wore on after the Iowa caucuses last month, the conspiracists settled on a scapegoat: Reporters for The Intercept and Rolling Stone began to speculate about whether Robby Mook, Hillary Clinton’s 2016 campaign manager, was somehow responsible for the situation. The insinuations sent Mook’s name trending. He denied any involvement in the debacle, but, as usually happens, the correction did not travel as quickly as the false accusation; the conspiratorially minded Twitter users it did reach attacked Mook anyway. Blue-check influencers on the right, such as Charlie Kirk and other Republican activists, began to amplify the conspiracy theories and decry Democratic incompetence. A handful of prominent accounts wondered if the Russians were behind all the discord, or if they’d hacked the Iowa Democrats’ vote-counting app.

This response to a delayed election result differed markedly from how a similar situation played out in 2012. There was a snafu in the Iowa caucus tallying that year, too: Mitt Romney was initially declared the victor, but amended counts released a few days later favored Rick Santorum. Social-media posts and news articles from the time suggest that very few people believed Romney or the Republican National Committee had rigged Iowa. The majority of the commentary in press and Facebook-post archives abided by Hanlon’s razor—the notion that one should never attribute to malice something that is adequately explained by incompetence.

The precipitous decline in trust and the ever-louder perpetual-outrage machine should scare Americans. Hanlon’s razor has dulled; pervasive paranoia and sensationalism perform well in an environment in which wild accusations spread rapidly and success is determined by the number of clicks, likes, comments, and reposts. This is partly a function of the incessant, internet-speed news cycle: When there is no new information to be had, irresponsible speculation fills the void. Yet simply to condemn this tendency is to ignore something important about online activist communities, which provide a real sense of camaraderie and fulfillment to their members. Being part of a faction—regardless of whether it’s built around K-pop, Star Wars, or Elizabeth Warren—gives people a sense of meaning. This is what modern activism looks like.

The question is, what can society do about the harmful downsides of these dynamics? Since 2016, regulators and tech platforms alike have primarily focused on subversive state-run agitators—the Russian trolls who manipulate public sentiment by posing as aggrieved Americans. Tech platforms, initially resistant to the idea that disinformation could affect elections, spent two years after 2016 shutting down specific loopholes that the Internet Research Agency had exploited. Facebook changed its policies for political advertising, requiring users to have their identities verified before they can buy targeted ads. Twitter changed its trending algorithm to make bots less effective at manipulating the discourse. After tech platforms spent months disclaiming any responsibility to act as “an arbiter of the truth” for any given piece of content, and academic researchers, media commentators, and advocacy groups spent months trying to change the companies’ minds, the conversation eventually shifted from whether postings are true to whether they are authentic. Those of us who participated in those early talks pointed out that large tech platforms such as Google and Facebook very clearly could be the arbiters of whether certain accounts were behaving suspiciously, whether a given piece of content had come from a sketchy domain, and whether its dissemination pattern was anomalous. Under the authenticity standards, the Internet Research Agency’s activity was impermissible because it was executed by a foreign state actor pretending to be something it wasn’t—a Russian troll masquerading as a Texas secessionist, for example. But if real Americans had put up the same Facebook pages, tweets, or memes, almost all of the content would have remained up.

Platforms can only do so much in response to hyperactive partisan factions or wild conspiracies furthered by blue-check influencers. Being, say, an actual Texas secessionist falls squarely within the bounds of free expression, and the tech companies are not going to intervene in legitimate domestic political expression (nor should they). Most of the verified activists on social media appear to genuinely believe even the conspiracies they spread. But no one can watch the recurring viral spread of blatant lies, borderline-manipulative videos, and articles from grifter blogs and come away with the sense that America’s information ecosystem is healthy.

Unfortunately, internet users are likely to see a huge degree of subjective policy enforcement in the near future. The tech platforms are regularly caught by surprise by crafty tricks they’ve never seen before, and they face partisan blowback depending on which side is harmed by the policy calls they make. And while some campaign-finance laws apply to political coordination, and some consumer-protection guidelines address influencer marketing, Americans are largely reliant on informal norms to decide which tactics cross the line. In our current hyperpolarized country, no one has the obvious moral authority to set those new norms for everyone.

The stakes are high. Narratives about stolen, manipulated, and rigged elections play very well in our current climate of distrust. Unchecked, they will lead to a loss of confidence in democratic institutions. In 2016, Russian troll accounts encouraged armed insurrection if Hillary Clinton “stole the election” from Donald Trump. Real Americans amplified that content. Hyper-partisan, angry, conspiratorial factions are unlikely to be more discerning this time around.

Foreign influence executed by inauthentic accounts—as challenging as it is to detect and as troubling as it is to discover—is at least clearly impermissible under emerging internet norms. But there are very few demarcations between acceptable influence and manipulative behavior for real American candidates or political activists. The influence machine itself, with its viral conspiracies, factional power struggles, and democratized propaganda, isn’t going anywhere. Everyone on the internet has been given extraordinary power to influence public discourse, with no responsibility to use that power wisely or even consciously. Until Americans define boundaries between legitimate activism and harmful manipulation, and unless media organizations and individual influencers become far more cognizant of the responsibility they bear in today’s information environment, the agents of chaos will prevail.