How ‘Big Disinformation’ Can Overcome Its Skeptics

Lies can threaten democracy. So can flawed efforts to combat them.



During a recent conference at the University of Chicago, former President Barack Obama reflected on the role disinformation played during his presidency. He was subject to flagrant lies—that he was born in Kenya, for instance, and put “death panels” in his health-care overhaul. But he served relatively early in the era of the smartphone and social media, and he now believes that he underestimated the vulnerability of democracies to false information that is intended to mislead.

The premise that disinformation is among the biggest threats to democracy is now ubiquitous. The conference where Obama was interviewed by The Atlantic’s editor in chief, Jeffrey Goldberg, “Disinformation and the Erosion of Democracy,” was co-hosted by The Atlantic and the University of Chicago’s Institute of Politics, which is led by David Axelrod. Various other official events, initiatives, and reports addressing this issue are sponsored by the European Union, UNESCO, Harvard, Yale, Princeton, Stanford, UC Berkeley, the Brookings Institution, New America, the Center for American Progress, the Clinton Foundation, the Aspen Institute, The New York Times, the Knight Foundation, the Ford Foundation, and more.

Seeing so many powerful institutions elevate roughly the same narrative raises concerns that what skeptics call “Big Disinformation” or “the Disinformation Industrial Complex” is trendy groupthink that could itself distort national priorities or perceptions of reality––and perhaps lead to infringements on free speech and freedom of the press. Abroad, disinformation is regularly invoked as a pretext to suppress dissent. “The concept is undefined and open to abuse,” says Irene Khan, the United Nations’ special rapporteur on the promotion and protection of the right to freedom of opinion, “and because the size and nature of the problem is contested in the absence of sufficient data and research, state responses have often been problematic and heavy-handed and had a detrimental impact on human rights.”

Skeptical scrutiny of disinformation claims is prudent, especially as the work of disinformation initiatives inspires legislation—something that has already begun. Lies can threaten democracy. So can flawed efforts to combat them.

Still, the case for concern over disinformation is persuasive. Our constant connection to internet discourse and the platforms that mediate it are recent developments, as destabilizing in their own way as the rise of the printing press, television, and radio were in earlier eras.

Today’s ever-changing algorithms would probably sow confusion and polarization in civic debates even if we were all consuming exactly the same feeds. But everyone’s digital reality is unique. And foreign governments, scammers, and outrage-entrepreneurs are trying to harm, trick, or manipulate us, taking advantage of powerful new tools such as deepfakes and artificial intelligence as quickly as they advance. How can a free country respond at scale, with due epistemic modesty and without infringing on civil liberties or otherwise doing more harm than good?

Obama has some good instincts on the subject. Perhaps cognizant of how “disinformation” can be invoked to undermine civic deliberation, he prefaced his remarks by emphasizing his unwavering support for a free-speech culture. “I am close to a First Amendment absolutist,” he told Goldberg. “I believe in the idea of not just free speech, but also that you deal with bad speech with good speech, that the exceptions to that are very narrow.” What’s more, he said, he wants to avoid a society of manners where “we feel like our feelings are hurt” and that we will “wilt” because of the words of others. “I want us all, as citizens, to be in the habit of hearing things that we disagree with,” he said, “and be able to answer with our words.”

Then, reflecting an emerging consensus in the Democratic Party, he called for new laws to be imposed on digital-communications platforms such as Facebook, Twitter, and YouTube. Their designs “monetize anger, resentment, conflict, division,” he alleged––yet are opaque, embedding nontransparent editorial choices that sometimes spark violence. He wants us all to understand those choices better. What algorithms do these platforms use? Are botnets gaming them? How do they microtarget ads? “A democracy can rightly expect them to show us,” Obama insisted, noting, for example, our expectation that meat-processing plants open their doors to food-safety inspectors.

The most concerning downsides of anti-disinformation laws arguably disappear if they merely better inform us about the information flows we consume and refrain from infringing on the free exchange of ideas (including obvious misinformation, such as ivermectin being a near-perfect COVID-19 prophylactic). But if Big Disinformation is to benefit Western democracies and justify the resources being lavished on it, rather than merely avoiding the worst harms done in the name of fighting disinformation elsewhere in the world, it must clear additional hurdles, some of which may prove especially difficult in establishment institutions with ideological monocultures.

Here are four of those hurdles:

1. Define terms rigorously. The leaders of nonprofit organizations aimed at combatting disinformation, and the journalists assigned to cover a “disinformation beat,” may be tempted, or perhaps unconsciously inclined, to treat more and more social ills as disinformation problems. The struggle against that distorting tendency requires a clear delineation between objections to falsehoods intended to mislead and various other objections. For example, if in 2024 a foreign government covertly buys YouTube ads telling undecided voters that Kamala Harris was born outside of the United States, that would fall under disinformation. But if the ads instead declared that Harris presided over efforts to block the release of a wrongly convicted man from prison on procedural grounds, that would not be disinformation––it is true, though one could characterize it as unlawful foreign interference.

Obama is right that social media monetizes anger while making a lot of users angry. But is “disinformation” the right label for that design? Most tweets that make me angry aren’t willful falsehoods. If all false tweets were eliminated from the platform tomorrow, Twitter could still run an algorithm that optimizes engagement and therefore winds up elevating polarizing opinions, profiting off anger every bit as much in the bargain. Conversely, Twitter could presumably elevate factually false tweets that make most people happy.

2. Study alternative accounts of what ails us. Many attendees at the Chicago conference blamed the January 6 insurrection on disinformation spread by tech companies. They noted that Donald Trump’s lies about the 2020 election spread partly through social media, helping to fuel the “Stop the Steal” rally. However, any president who shouted for months that an election was stolen could have rallied a similar number of allies to the Capitol––with or without modern social networks. The significant problem was electing an unpatriotic narcissist as president, not bad algorithms spreading willful lies on social media (many people spreading false claims about Election 2020 really believed what they were saying). In light of Karen Stenner’s thesis in The Authoritarian Dynamic, it may even be that merely by spreading true but polarizing news and diverse perspectives, social media activates latent predispositions toward authoritarianism––an account of rising polarization and violence that has very different implications than a disinformation problem.

The more carefully one defines disinformation and analyzes it alongside other factors, the more unclear it becomes that fighting disinformation is a solution to a given ill. Better outcomes may require focusing elsewhere––for example, on fielding better anti-authoritarian candidates.

3. Earn back trust with a bigger tent. Disinformation seems to be a bigger problem on the right than on the left in the Trump era. The storming of the Capitol, dying from COVID because of lack of vaccination, and the Q phenomenon have no analogues of equal consequence on the left. Still, the left has significant disinformation and misinformation problems too, and any solution to disinformation will require cooperation beyond the center-left.

And many outside the center-left may be skeptical of Big Disinformation because of the dearth of ideological diversity at many anti-disinformation efforts. Diversity of thought would make these efforts less error prone, less vulnerable to ideological capture, and likelier to gain broader buy-in. Skepticism is further fueled by the denigration, as “disinformation,” of assertions that turn out to be true, like the New York Post article on Hunter Biden’s laptop; by supposed fact-checking efforts that fail to rigorously distinguish among facts, analysis, and opinion; and by the invocation of subject-area expertise to disguise value judgments, as some in the public-health community did during the George Floyd protests.

The timing of Big Disinformation’s rise is also suggestive of double standards that narrow its appeal. Neither lies nor misinformation nor their distribution at scale is new, so it’s noteworthy that disinformation became public enemy number one not after (say) the absence of Ahmed Chalabi’s promised weapons of mass destruction in Iraq, the CIA torture cover-up, lies about mass surveillance, or mortgage-backed securities dubiously rated AAA, but because of a series of populist challenges to establishment actors. Among the many factors that perhaps help to explain Trump’s election, Brexit, the January 6 insurrection, and vaccine hesitancy, centering “disinformation” implies liars and greed-motivated algorithms are to blame––so why reckon with establishment failures? If the people knew the truth, this framework implies, they’d have behaved differently! Even now that Big Disinformation is here, you don’t see its adherents talking much about years of deliberately misleading reports from Afghanistan, a flagrant undermining of democracy.

And additional efforts are needed to reassure Americans that the center-left isn’t trying to invoke disinformation in order to narrow democratic debate. Consider an exchange at the conference in Chicago, where a young woman posed this question to Senator Amy Klobuchar:

You introduced the bill today that would punish social-media companies like Facebook and Twitter for having health misinformation on their platforms. And I’m going to ask you, if I were to say that there are only two sexes, male and female, would that be considered misinformation that you think should be banned speech on social-media platforms?

Here is Klobuchar’s answer:

Okay, I’m not going to get into what misinformation––first of all, I think the bill you’re talking about is different than the one we’ve mostly been talking about, so I want to make that clear. We’ve been talking about the competition bill, but there is another bill that I have on vaccine misinformation––it is that specific––in a public-health crisis. You wonder why you get that specific? It’s because we’re trying to find carve-outs. That’s what I did with [U.S. Senator] Ben Ray Luján, that you can’t have immunity as a social-media company if you are broadcasting vaccine misinformation. There is another bill that Mark Warner did that is about just misinformation in general and hate speech and those kinds of things.

And I think one of the things Deval [Patrick] is getting at is that, a lot of times, the content fight—and Kara [Swisher] was getting at this—starts to dominate the world here, and one of the things I’ve been so heartened by is some of my Republican co-sponsors on this bill who have different views than me on some of the internet content issues have united that this is a good place to start, and have not turned it into some of these disputes about the internet. So that’s why we have focused on competition policy.

Are you clear on her position?

A more reassuring answer would have been, “No, of course I don’t think the government should punish a social-media company for a user arguing that there are only two sexes, male and female. We always want Americans to be freely able to discuss contested issues of our time.”

To overcome all this skepticism and earn broader trust, Big Disinformation should cultivate a reputation for free-speech values, nonpartisanship, and ideological neutrality––for example, caring as much about willful falsehoods spread in service of outcomes the establishment likes, such as staying in Afghanistan, as about outcomes they don’t, such as vaccine hesitancy. The attitude can’t be, Stop disinformation to stop Trump in 2024. It must be, Stop disinformation as an end in itself, as doing so will be better on the whole.

4. Rebuild a culture of critical thinking. Some Americans are taught to prioritize separating fact from appeals to emotion, looking for evidence to support claims, identifying errors in chains of reasoning, separating the truth of an argument from the identity of the person making it, and evaluating the plausibility of all arguments. Such habits of mind help people stay resilient against disinformation, but competing approaches are increasingly favored. Other young people are acculturated to prioritize moral clarity and outrage at injustice, or “cultural competencies” such as “reading the room,” avoiding microaggressions, and centering the identity of the speaker, perhaps by applying privilege or intersectional analysis and deferring as “allies” to the purportedly marginalized.

The latter outlooks are not without insights, but they are not especially helpful in staying resilient against disinformation––especially if bad actors pose as marginalized people, which is not an imagined hypothetical but a documented Russian-troll tactic. “These malicious accounts tweeted a mixture of sentiments to cultivate followers and manipulate U.S. narratives about race, racial tensions and police conduct,” The Washington Post reported two summers ago. I’ve wondered if they are partly responsible for the fact that although a couple dozen unarmed Black men are killed by police in a given year, a majority of very liberal people believe that figure is 1,000 or more.

“The Russians built manipulative Black Lives Matter and Blue Lives Matter pages, created pro-Muslim and pro-Christian groups, and let them expand via growth from real users,” Samuel Woolley, the author of The Reality Game: How the Next Wave of Technology Will Break the Truth, told The Economist. “The goal was to divide and conquer as much as it was to dupe and convince.” Anyone engaged in a politics of identity-based solidarity, whether with “Black lives” or “Blue lives” or Christians or Muslims, was presumably likelier to be subject to that disinformation effort and to be vulnerable to it, as allies aren’t supposed to skeptically evaluate claims and demand evidence.

Americans should strive to treat everyone with dignity. To make the next generation more resilient to disinformation spread on social media and to short-circuit foreign and domestic attempts to leverage race and religion to divide us, we should also shift back toward prioritizing dispassionate analysis of statements, regardless of the speaker’s perceived identity, as a valuable habit of mind, not a microaggressive example of insensitivity.

I’ll conclude with two examples of public-policy remedies proposed at the Chicago conference. First, one that I’d oppose: In the University of Chicago law professor Geoffrey Stone’s telling, social media threatens democracy by feeding users whatever they want to see, reaffirming their views. He favors a law mandating that a site like Facebook or Twitter must serve randomly chosen or balanced content. “The fairness doctrine did that,” he recalled. “If the radio or TV station presents one side, it has to present the other … People moved relatively towards the middle because they heard both sides.” I mistrust any law that would require government or tech companies to categorize content by ideological viewpoint and decide what must be amplified and diminished.

The journalist Cecilia Kang favors a contrasting approach to regulation––an approach that I am inclined to prefer as well. “One of the most promising things that I’ve seen,” she said in Chicago, is the disinformation conversation “move away from ‘Let’s regulate types of speech that are on platforms’ towards ‘Let’s look at the system, at the design of the technologies, and think about if there are ways to regulate how things get amplified very quickly, whether companies should disclose when things go viral and how they go viral, and give consumers control of that.’”

For example, Frances Haugen, the data engineer and Facebook whistleblower, argued in Chicago that merely forcing people to click on an article and read it before sharing it cuts down on the spread of misinformation.

Haugen went on to observe that many people believe the solution to bad speech is good speech and raised a problem with the “good speech” remedy: under the status quo, that is sometimes impossible. The practice of narrowcasting information via advertising––so that some information is seen exclusively by a very narrow group, such as one that shares a particular occupation––has become common. “Part of why my team got formed was they caught Russians targeting information at police officers,” she explained. If Facebook offers the ability to narrowcast content through advertising, she said, “it should have to publish what the most popular thousand posts are every week in each of those 600 segments in the United States,” so outsiders can evaluate what others are hearing and have the opportunity to counter misinformation or disinformation. “I believe that if we wish to have a democracy, we have to see what those targeted information streams are,” she continued, “because we’re living in divergent realities and we can’t even go in there and pay for ads to counter that speech if we don’t know the speech is taking place.”

A law forcing transparency to enable meeting bad speech with good would address actual disinformation, in an ideologically neutral manner, inviting critical thinking to override groupthink.

More intriguing still is the prospect of new platforms where design transparency is built in from the start––and preventing or identifying and undermining disinformation is a priority. Are there potential platforms of that sort that people would want to use as much as Facebook or Twitter? The nonprofit sector offers some precedent for hope. “Fund a wave of experimentation in building social networks that we govern, that we control, that are noncommercial, that are non-surveillant, and actually work to benefit us as individuals and citizens,” the media scholar Ethan Zuckerman said in Chicago. Perhaps Big Disinformation should attempt to create rather than to regulate.