Trump Is a Problem That Twitter Cannot Fix

When a duly elected president is bent on spreading misinformation, tech companies can rein him in only so much.

Illustration of Trump surrounded by the Twitter logo (Getty / The Atlantic)

About the author: Evelyn Douek, a doctoral student at Harvard Law School, is an affiliate at Harvard’s Berkman Klein Center for Internet and Society.

Donald Trump’s tweets pose a special problem for Twitter. Absolutely no one can be surprised that the president is using the platform to tweet false and inflammatory claims in the middle of a global pandemic and the lead-up to an election: This is the president’s signature style. His recent tweets have promoted baseless conspiracy theories about the death of Lori Klausutis, a former staffer for Republican congressman turned MSNBC host Joe Scarborough, and falsely claimed that an expansion of mail-in voting would rig the 2020 election. When Twitter took the unprecedented step of adding a fact-check link to Trump’s tweets about voting yesterday, many critics of the decision thought that CEO Jack Dorsey still had not gone far enough—they maintained that the offending tweets should come down, or that the company should kick Trump off its platform altogether.

The problem is that Trump’s critics are looking to Dorsey to solve a problem that Twitter did not create. What the president says and does is inherently newsworthy. As The Atlantic’s Adam Serwer tweeted yesterday, “You can’t deplatform the president of the United States.” At the moment, the duly elected president is someone who deliberately puts out divisive misinformation on social media. Twitter can surely do a better job of enforcing its own rules and flagging Trump’s worst statements—this morning, for instance, he repeated his casual insinuation that Scarborough was involved in Klausutis’s death and his allegations that mail-in voting would lead to election cheating, and so far no warning labels or fact-checks have been attached. But a tech company can’t change who the president is.

For some people, the answer is simple: If a tweet violates Twitter’s official rules, it should come down regardless of who posted it. If anything, the more powerful the figure, the greater potential they have to cause harm. But in democratic societies, at least, this isn’t always obviously the right answer. Democracy is based on the idea that voters should have access to information about who their candidates really are and what they believe. This remains true even (or, perhaps, especially) when those beliefs are abhorrent. And in a world where Twitter is but one of many megaphones at public figures’ disposal, the supposed benefit or efficacy of removing such content is debatable.

Twitter insists that world leaders are not entirely above its policies, and it will take down tweets in certain categories that it considers particularly destructive, such as clear and direct threats of violence or tweets encouraging self-harm. But the company is otherwise reluctant to intervene. Nevertheless, it has been rolling out a set of progressively detailed policies suggesting that, going forward, it sometimes will. A recently announced policy outlined a complex matrix for how Twitter will deal with misleading information related to COVID-19, depending on the severity of the harm. It also has a broad “civic integrity policy” intended to protect elections. This is the background against which Trump has been tweeting false claims about the steps that many states are considering taking to expand mail-in voting during the pandemic. Two of these tweets finally prompted Twitter to intervene yesterday.

If any cases for intervention are easy, Trump’s tweets claiming that mail-in ballots were fraudulent are among them. Platforms talk tough on the need to remove misinformation about voting processes, and rightly so—it’s an area in which the reliance on democratic accountability rings hollow, because the misinformation itself interferes with those very accountability mechanisms. You won’t vote someone out if you’re scared or misled out of voting at all. Similarly, platforms have abandoned their “defensive crouch” over their power to censor in relation to misinformation about the pandemic, and have generally earned plaudits for doing so. The president’s tweets about voting in the context of the pandemic therefore sit at the nexus of two exceptional topics where platforms have felt more comfortable stepping in. And yet, it doesn’t seem that this case was so easy for Twitter: The move yesterday came a week after similar tweets from the president that so far remain unlabeled.

Still, for those of us who study content moderation on the internet, this feels like a watershed moment. In March, Twitter removed posts by Brazilian President Jair Bolsonaro and Venezuelan President Nicolás Maduro for violating its policies on tweeting false or misleading information about COVID-19 cures, but the company had long refused to take any action against Trump. While Tuesday’s move was too modest for some, let’s pause to appreciate the remarkable nature of this private company—a platform whose stated purpose is to “serve the public conversation”—asserting its right to rule even the president of the United States out of bounds.

In general, the debate about content moderation needs to move beyond taking things down versus leaving them up—the binary that dominates these discussions and suggests simplistic solutions to complex problems. Tech platforms such as Twitter, Facebook, and YouTube have far more nuanced tools at their disposal than historically have been available to deal with harmful speech. Platforms can put a warning label on egregiously false and misleading posts or limit their recirculation, instead of just removing them; even subtle steps, such as changing the visibility of “likes” or “shares,” or how easy it is for users to share things, can dampen the virality of divisive falsehoods. Companies should make use of all these tools.

This is not to buy into a form of absolutism that elevates free-speech interests above all else, but it is to acknowledge the messy reality of the competing interests at play. The threshold for completely removing a statement by a democratically elected leader should be extraordinarily high. Asking platforms to step in to adjudicate truth often seems attractive until you fear that the people running them might no longer be in your camp. More proportionate measures that do not try to turn the president on and off, as if by flipping a switch, will strike a better balance.

Nevertheless, it’s hard to get excited when, in response to the outcry about Trump’s promotion of the Klausutis conspiracy theories, Twitter says it is “working to expand existing product features and policies so we can more effectively address things like this going forward.” Beyond its strictures against misinformation about the pandemic, Twitter already has a broader “public-interest policy” that may come into play when a public figure’s tweets break the rules but the company decides to keep them up for the purpose of debate and discussion. The policy says Twitter would add notices to the tweets in question. But the policy basically lies fallow; the company has not specifically invoked it in its decision (so far) to leave the Klausutis tweets up.

The problem is not the policies—or at least not just the policies. The problem is that no one can force Twitter to stick to them or even explain what they actually mean in practice.

If Twitter’s policies are so flimsy, then why discuss them at all? Because in moments such as this, policies should not only be the basis for Twitter’s choices, but also a shield for them.

Within hours of Twitter’s move, Trump again surprised exactly no one by accusing the platform of interfering in the 2020 election and stifling “FREE SPEECH,” and declaring that he would not allow the company to do what it’s doing. These assertions have no legal basis. But they are not intended to be legal claims; instead, they help lay out a story that will keep playing until the election.

If Twitter enforces its policies in an ad hoc manner, without explaining why some tweets with voting misinformation earn a “get the facts” flag and others don’t, or why tweets promoting clinically unproved, and potentially dangerous, coronavirus treatments aren’t in violation of the platform’s “broadened” definition of harm during the pandemic, it opens itself up to charges of arbitrariness, questions about its motives, and tweet-by-tweet reevaluation of its role in the public discourse.

Platforms are getting better at explaining their decisions. They need to do better still. Was yesterday’s action a sign of a paradigm shift in content moderation, or an outlier based on exceptional circumstances and made by a company under fire? Nobody has any idea, and that’s a large part of the problem.

At the same time, content moderation has its limits. Whatever Twitter does with individual tweets—whether the company labels them, quarantines them, or removes them entirely—can achieve only so much when voters have chosen a leader who uses all the megaphones at his disposal to willfully mislead them, and who would use any attempts at “censorship” to further inflame people. Twitter cannot change the content of the tweets in the first place—or the nature of the president who tweets them.