A Prominent Vaccine Skeptic Returns to Twitter

A year after he was banned, Alex Berenson sued his way back. Are more lawsuits coming?


One year ago this month, Twitter permanently suspended a 340,000-follower account for “repeated violations of our COVID-19 misinformation rules.” The owner of that account, the former New York Times reporter and vaccine skeptic Alex Berenson, responded with a lawsuit demanding reinstatement. Suffice it to say that few observers thought he had any chance of coming out on top. One lawyer went through the complaint page by page on Twitter and concluded that Berenson had hired a “band of incompetent knock-off muppet lawyers” to present a doomed case.

Then, somehow, the muppet lawyers won. Earlier this summer, Twitter put Berenson’s account back online, noting that “the parties have come to a mutually acceptable resolution.” Berenson wasted little time in calling out mainstream media for failing to cover the “pathbreaking settlement” that led to his return. “I mean, imagine being @dkthomp right about now,” he wrote triumphantly, in reference to my colleague Derek Thompson, who last year dubbed Berenson “the pandemic’s wrongest man.” Now he’s bent on being acknowledged as the victim of the pandemic’s wrongest ban.

Whatever the merits of Berenson’s case, and of the specific tweet that led to his suspension, the outcome is significant. For years, people who have been booted off Twitter, Facebook, YouTube, and other platforms have tried to sue to get back on, and for years, most of their cases were dismissed. Eric Goldman, a law professor at Santa Clara University School of Law, analyzed 62 such decisions for an August 2021 paper and found that the internet companies had won “essentially all” of them. When he read about Berenson’s lawsuit, he told me, his first impression was that “it was doomed to fail just like the dozens of others that have also failed.”

Berenson’s victory was not based on his argument that his ban was a violation of the First Amendment; the judge rejected this claim. Instead, his success seems to have hinged on promises made to him by a high-level Twitter employee. “The points you’re raising should not be an issue at all,” the company’s then–vice president of global communications assured Berenson at one point, according to the complaint. The lawsuit says the same executive later told Berenson that his name had “never come up in the discussions” about Twitter’s COVID-19 misinformation policies. Goldman believes that the court’s decision to allow a claim based on that correspondence prompted Twitter to settle. Internet-service executives have always been instructed by lawyers not to talk with people about their individual accounts and not to make any promises about what might happen, Goldman said, “for reasons that should now be obvious.”

This was not the end of the drama, though. Last week, Berenson published a Substack post that included screenshots of a conversation on Twitter’s internal Slack messaging system from April 2021, obtained during the course of the lawsuit. The images show employees discussing a recent White House meeting at which members of the Biden administration were said to have posed a “really tough question about why Alex Berenson hasn’t been kicked off from the platform,” as one Slack message put it. Another alleges that Andy Slavitt, who was at the time a senior adviser to Joe Biden on the administration’s COVID-19 response, specifically mentioned a “data viz that had showed [Berenson] was the epicenter of disinfo.” Berenson has since declared that he will sue the Biden administration for infringing upon his free speech by compelling Twitter to take action against his account.

Once again, legal experts say that his case is unlikely to succeed. Berenson faces a “very high bar” in proving that a private company behaved as a state actor, Evelyn Douek, an Atlantic contributor and assistant professor at Stanford Law School, told me. According to both her and Goldman, the Slack messages that Berenson published don’t amount to proof that the government pressured Twitter to remove Berenson’s account. But Douek is generally perturbed by the evidence of informal pressure by government officials to constrain speech. “It does strike me as unusual,” she said. “It’s certainly unusual to get records of it.”

Andy Slavitt told me that he did participate in a meeting with Twitter but doesn’t recall bringing up Berenson by name. “Twitter sets its own policies, and I wanted to understand them, whether they’re good or bad,” he said. I asked him about an MIT data visualization, widely circulated around that time, that described an “anti-maskers network” with Berenson as an “anchor.” Had he brought up that data viz in the meeting? He said it was possible: “I don’t doubt it, because we tried to use examples.” But he denied having asked Twitter to get rid of Berenson, with whom he claimed to have only passing familiarity. “I think his name was in a magazine article,” he said. “I don’t remember anything else about him.”

I reached out to Berenson to request an interview, but he refused to answer questions about his legal fight with Twitter and the settlement that came out of it. “If you want to have a real conversation that ends in a piece that discusses Derek’s piece as well as my case, we can do so,” he responded, once again referring to my colleague, “but I expect that will be impossible for you.”

Content moderation is messy by its nature. Health- or science-content moderation can be even more chaotic. Like other social platforms, Twitter tried to implement new policies at the start of the pandemic that could be applied to conversations about a rapidly shifting set of best practices for public health. Twitter’s “COVID-19 misleading information policy” deems in violation any “claim of fact” that is “demonstrably false or misleading” and “likely to impact public safety or cause serious harm.” But those definitions have proved tricky.

Consider the final tweet from Berenson before he was kicked off Twitter last year, which made the following statements about COVID-19 vaccination: “It doesn’t stop infection. Or transmission. Don’t think of it as a vaccine. Think of it - at best - as a therapeutic with a limited window of efficacy and terrible side effect profile that must be dosed IN ADVANCE OF ILLNESS. And we want to mandate it? Insanity.” The first two statements in the tweet are factually accurate. The third wouldn’t seem to qualify as a “claim of fact.” The fourth, with its reference to a “terrible side effect profile,” is at least tendentious and arguably misleading, but the overall point of the tweet is to express disdain for vaccine mandates. How, exactly, did this tweet factor into Berenson’s removal from the site? A spokesperson for the company would provide me only with the same statement it had given out in July: “Upon further review,” the statement said, “Twitter acknowledges Mr. Berenson’s Tweets should not have led to his suspension at that time.”

Stephanie Alice Baker, a sociologist at City, University of London, has taken issue with the concept of “harm” as it’s used in health-misinformation policies on Twitter and Facebook. Scientific consensus and official recommendations have changed over the course of the pandemic, she argues, citing the changing early advice on face masks, as well as the retraction of prominent papers in The Lancet and The New England Journal of Medicine about the safety of various medications used by COVID-19 patients. “Part of the issue with predicating content moderation policies on the concept of harm at the start of the pandemic is that scientific understanding of harm was uncertain and evolving,” Baker told me recently via email. “Harm is not a neutral concept,” she added. “What is considered harmful is highly contingent on partisan issues and politics.”

In the meantime, the mere existence of these policies serves as fodder for a culture war over platforms’ efforts to mitigate harmful speech—and Berenson’s victory has been good for morale among those who believe that they’ve been censored. One of the lawyers who represented him, James R. Lawrence III, has been posting about his other clients, including the Rhode Island doctor Andrew Bostom and the former combat medic Daniel Kotzin, both of whom were kicked off Twitter for violating COVID-misinformation policies. “Science is not about the truth revealed by technocrats; it’s about discussion,” Adam Candeub, a Michigan lawyer who advised President Donald Trump on his efforts to counter alleged anti-Republican bias on social media, told me. Candeub has filed lawsuits on behalf of banned Twitter users but has never found success like Berenson and Lawrence’s. “It worked for them; thank God it did,” he said.

The next round of lawsuits may go nowhere, but they can still play a role in a growing ecosystem of “aggrieved influencers,” for whom claims of being censored by the platforms are themselves a form of clout. Goldman told me that this issue is only getting hotter. New efforts to regulate social media at the state level could enable far more legal action, with higher odds of success. If laws like those that have been passed in Florida and Texas were to stand up in court, everything would change, Goldman said. “We will see a massive tsunami of litigation that dwarfs what we’ve seen today.”