On the morning of September 21, 2020, three trays of United States mail were discovered in a ditch in Greenville, Wisconsin. The local sheriff’s office reported that the mail dump included several absentee ballots. When a U.S. Postal Service spokesperson made a similar assertion two days later, a local Fox affiliate, WLUK, reported the statement on its website. And then a national network of conservative commentators and influencers did something that happened again and again last fall: They picked up a bare-bones news story and made it sound nefarious.
Within hours, Jim Hoft, the combative founder and editor of The Gateway Pundit, a conservative media outlet, came across the story. A consortium of researchers working together on an effort called the Election Integrity Partnership (which included my team at the Stanford Internet Observatory) had by this point begun to track false and misleading voting-related information, particularly claims about ballot and mail fraud, as it moved across the social-media ecosystem. Our partnership began 100 days before the election and continued for a few weeks following Election Day. In that time, The Gateway Pundit would become a primary driver in dozens of instances in which false information or misleading narratives went viral. “We report the truth,” a banner on the site noted, as its pages regaled readers with stories of malfunctioning voting machines in Michigan, ballot boxes stuffed into cars, and “miraculous” fake ballots marked for Joe Biden. In our data set tracking the spread of misleading claims, The Gateway Pundit’s stories racked up more than 800,000 retweets on Twitter and at least 4 million views on YouTube over a four-month period.
The process of producing viral misinformation hits followed a familiar pattern throughout the 2020 campaign: Prominent pro–Donald Trump influencers or hyper-partisan conservative outlets would pick up a real-world event—in many cases an isolated incident that bubbled into the national conversation via social media—and shoehorn it into a far broader narrative. Many of the narratives involved hints of conspiracy. So it was with the wayward Wisconsin ballots: Soon after WLUK published its story, Hoft dashed off an article that added four lines of original content atop the station’s reporting:
Democrats are stealing the 2020 election.
Two trays of US mail were discovered in a ditch near Greenville, a rural area north of Appleton, Wisconsin.
According to local officials the mail included mail-in ballots.
The USPS unions support Joe Biden.
Soon after the story’s publication to The Gateway Pundit’s website and social-media pages, Charlie Kirk, a radio talk-show host and Twitter-verified conservative activist with 1.7 million followers, reposted it, as did Breitbart News. By noon the following day, more than 40,000 individual Twitter accounts had retweeted the story, reaching millions of viewers. That afternoon, White House Press Secretary Kayleigh McEnany cited the Greenville ballots as evidence that mail-in voting was fraudulent—an overarching theme that was one of 2020’s misinformation super-stories.
Within days of its discovery, the discarded mail had been weaponized into an attack on the integrity of the U.S. voting system. The story was challenging for fact-checkers. Mail had been discovered in a ditch in rural Wisconsin. Local officials had claimed that the mail included ballots. The Postal Service’s union had endorsed Biden. But the overall impression that Hoft’s story created—that USPS workers were part of a Democratic plot to steal the election—was decidedly false. A few days after the story went viral, Wisconsin elections officials clarified that the discarded mail “did not include any Wisconsin ballots.” (Subsequent local news coverage, from February, noted that seven Minnesota ballots had been turned over to that state.) The Democrats had not, in fact, tried to steal the election. But by that point, the facts didn’t matter. The outrage machine had moved on, drawing its audience’s attention to other manufactured grievances.
Research teams participating in the Election Integrity Partnership saw this process play out repeatedly, via many of the same accounts. One team, at the University of Washington’s Center for an Informed Public, looked at which accounts were involved in specific viral misinformation “incidents”—for example, claims that Arizona voters had been improperly given Sharpies to mark their ballots, that Republican poll watchers were illegally excluded from Philadelphia vote-counting sites, that dead people had voted in Michigan. The researchers noted that 21 prominent influencers, including the actor James Woods, Donald Trump Jr., a couple of QAnon leaders, and former President Trump himself, had each amplified misinformation about at least 10 incidents. The University of Washington team also examined the domains of articles that were shared in voting-related viral misinformation incidents. The Gateway Pundit topped the list. It and Breitbart News are among the hyper-partisan media outlets that bundle small kernels of truth—such as the Greenville mail discovery—within concentric layers of falsehood.
The distinct behavior of serial spreaders of misinformation should theoretically make them easy for Facebook or Twitter to identify. Platforms that place warning labels on false or misleading content could penalize accounts that repeatedly create it; after an account earned a certain number of strikes, the platform’s algorithms could suspend it or limit users’ ability to share its posts. But platforms also want to appear politically neutral. Inconveniently for them, our research found that although some election-related misinformation circulated on the left, the pattern of the same accounts repeatedly spreading false or misleading claims about voting, or about the legitimacy of the election itself, occurred almost exclusively among pro-Trump influencers, QAnon boosters, and other outlets on the right. We were not the only ones to observe this; researchers at Harvard described the former president and the right-wing media as driving a “disinformation campaign” around mail-in voter fraud during the 2020 election; the researchers’ prior work had meticulously detailed a “propaganda feedback loop” within the closely linked right-wing media ecosystem.
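The strikes-based enforcement idea described above can be made concrete with a minimal sketch. The thresholds, action names, and `Account` structure below are illustrative assumptions for the sake of the example, not any platform’s actual policy or code:

```python
# Toy sketch of a strikes-based moderation policy: repeated creation of
# labeled misinformation first throttles an account's reach, then suspends it.
# The thresholds and actions are assumed for illustration only.
from dataclasses import dataclass

LIMIT_SHARING_AFTER = 3  # strikes before shares are throttled (assumed)
SUSPEND_AFTER = 5        # strikes before suspension (assumed)

@dataclass
class Account:
    handle: str
    strikes: int = 0
    sharing_limited: bool = False
    suspended: bool = False

    def record_strike(self) -> None:
        """Apply one strike for a post labeled false or misleading."""
        if self.suspended:
            return  # no further escalation once suspended
        self.strikes += 1
        if self.strikes >= SUSPEND_AFTER:
            self.suspended = True
        elif self.strikes >= LIMIT_SHARING_AFTER:
            self.sharing_limited = True

acct = Account("serial_spreader")
for _ in range(4):
    acct.record_strike()
print(acct.sharing_limited, acct.suspended)  # throttled, but not yet suspended
```

In this sketch the graduated response (throttle first, suspend later) mirrors the escalation the paragraph describes; real platforms would also need strike expiry, appeals, and cross-format detection, none of which are modeled here.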
If the problem were more evenly distributed, policy changes would be harder to miscast as anti-conservative bias. Tech companies are in a bind. They recognize that inaction on certain crucial types of misinformation puts them at greater risk of regulation by a Democratic administration and investigation by a Democratic Congress. And yet, if any single platform acts too forcefully, it risks provoking the wrath of the hyper-partisan influencers, who take to competing platforms to decry their supposed mistreatment. Social-media companies find themselves in the position of having to act decisively and collectively—and yet, collective action begets further allegations of collusion to silence conservatives.
The question now is what to do about the problem. Online influencers and hyper-partisan micro-media properties don’t all possess robust distribution channels of their own. The Gateway Pundit and Donald Trump Jr. achieve their reach, and their ability to promote viral lies, because social networks allow them to. The platforms—Facebook, Twitter, YouTube, Instagram, WhatsApp, Parler—offer an audience of millions of users, sophisticated targeting, and curation algorithms that amplify precisely the kind of wildly sensational, high-engagement content that these influencers traffic in. Likes and shares and retweet buttons are the means by which their content spreads; algorithmically produced echo chambers entrench the fan base that they rely on to maintain influence (and, for some influencers, an income stream). The relationship is symbiotic up to a point; tech companies have benefited from the engagement that top influencers generate. But the worst of the repeat misinformation spreaders need Big Tech infrastructure, and have therefore worked hard to frame access to it as a fundamental right. And so these creatures of social media have come to regard the platforms’ growing distaste for high-impact misinformation as an existential threat.
Amid the right-wing effort to deny Trump’s election loss—and its explosive culmination in the Capitol riot—social-media companies felt compelled to step in. The insurrection pushed companies collectively to take policy actions, such as banning Trump from Twitter and Facebook, and eliminating tens of thousands of QAnon accounts and groups, that individually might have left each platform vulnerable to accusations of censorship from the right. But the post–January 6 status quo is unstable.
Because each platform has its own standards for labeling misinformation, hyper-partisan influencers can play the companies off against one another. A pro-Trump or QAnon group, for example, might tweet a screenshot of a YouTube or TikTok video that wouldn’t meet Twitter’s standards. An accompanying URL might lead to Instagram. Misinformation is networked; content moderation is not. In the past, Facebook has taken no action against some content that Twitter has labeled as factually dubious; TikTok has removed content that YouTube left up; Twitter recently banned The Gateway Pundit for violating the platform’s “civic integrity policy,” but the outlet remains active on Facebook.
As the COVID-19 vaccination rollout accelerates nationwide, the newest misinformation battle is upon us: vaccine misinformation. Many of its contours will closely resemble those of the election battle. Some media outlets will sow doubts and promote baseless conspiracy theories to audiences unsure of what to believe. Social-media influencers will shape and retransmit those messages to ever-wider audiences. Social-media companies, worried about government regulation or public discontent, will enforce content standards haphazardly. The influencers on the wrong side of those standards will assail them as illegitimate, dismiss the fact-checkers as biased, and label any attempt to limit the reach of false information as censorship. All the while, receptive segments of the public will be dragged deeper into bespoke realities and hyper-partisan echo chambers.
In other words: Misinformation has entered its industrial era. It has always existed, but now it is girded by structures, moves via clear pathways, and can be redirected at new targets. It is no longer the province of conspiracist novitiates and social-media amateurs. Yesterday’s “election fraud” is today’s “dangerous vaccines.” The dynamic is predictable, but seemingly unpreventable.
The Election Integrity Partnership came away from its voting-misinformation research with the conviction that key groups across the U.S. information ecosystem can take steps not just to address each topical crisis as it arises, but to diminish the likelihood and severity of future crises. Government officials should presume that, during major emergencies and high-stakes political moments, foreign and domestic actors will make elaborate attempts to deceive the American public. Traditional media outlets can develop policies on when their journalists should report on misinformation, balancing the benefit of debunking with the risk of elevating it to new audiences. Social-media platforms enjoy more power than any other group to handle the problem. They can move faster and more decisively to address policy violations, open up their data to external researchers, more closely scrutinize influential accounts, and invest in “prebunking” emerging misinformation campaigns (or supporting the civil-society organizations that do) before the claims become ubiquitous. Any of these actors can move forward independently of the others. Responding on all fronts won’t be simple—especially when the super-spreaders are vocal political partisans—but it has never been more necessary.