Updated at 12:25 p.m. ET on October 15, 2020.
Yesterday morning, the New York Post published a bombastic and dubious report—widely criticized by journalists at other outlets—that included screenshots of emails allegedly copied from a hard drive that may have belonged to Hunter Biden. There were numerous holes in the story’s reporting, and the outlet made no obvious attempt to confirm the veracity of the emails, which it said it learned about from the former Trump adviser Steve Bannon.
The Post article was a common-enough case of shoddy journalism, but it elicited an unusual response from two major social platforms. Within a few hours, Facebook announced that it would limit the story’s spread on its platform while its third-party fact-checkers investigated the information. Soon after, Twitter took an even more dramatic stance: Without immediate public explanation, it completely banned users from posting the link to the story. Republican lawmakers were quick to express their disapproval, and Senator Josh Hawley even sent a letter to the Federal Election Commission suggesting that the platforms’ actions might violate campaign-finance law. (Facebook did not immediately return a request for comment. A Twitter spokesperson directed me to a public statement posted last night.)
In an even more byzantine restriction this morning, Twitter temporarily blocked a link to a government website run by the Republicans of the House Judiciary Committee, where the story had been reposted. Donald Trump Jr. tweeted that the ban constituted “clear election interference.” The ban was quickly lifted, and a Twitter spokesperson said the decision had been made “in error.”
The very strange, very online events of the past 30 hours or so might not merit an FEC investigation, but Twitter’s URL ban in particular set off alarm bells for those who care about consistency and coherence in content moderation, and led to legitimate questions about how Twitter determined that this link was uniquely bad. Leaping—as some people did—to accusations of collusion between Twitter and the Biden campaign is unreasonable and dangerous. But Twitter’s decision opened the company up to both valid criticism and cheap shots at its previous moderation efforts, including the significant steps it has taken to limit the spread of misinformation.
Perhaps the most frustrating thing about Twitter’s move is that it lent a degree of legitimacy to an otherwise nonsensical—but pervasive—paranoia about anti-conservative bias on social platforms. Whenever the company limits the spread of content that right-wing users support, Republican lawmakers start playing referee. Hawley, for example, has proposed legislation that would put the Federal Trade Commission in charge of making sure that social platforms aren’t expressing political bias with their algorithms or moderation decisions. President Donald Trump signed a toothless executive order “on preventing online censorship” this spring after Twitter labeled some of his tweets with fact-checking warnings. Studies have found no evidence that such bias exists, but the belief that Big Tech has a censorial attitude is common not just among politicians, but among Americans more generally: Recent Pew research found that 90 percent of Republicans think it’s at least “likely” that social-media sites censor political viewpoints, as do 73 percent of American adults overall.
To be clear, Twitter is a company, not a government, and as such it has every right to “censor” whatever it wants. There is no debate that the platform is legally allowed to block any URL it likes, and it often does: It has an incentive and some obligation to protect its users from malware, spam, illegal content, and so on. All major social platforms do some amount of link-blocking. This summer, Twitter expanded its definition of “unsafe” links to include links to content that would violate its on-platform rules against hate speech and the promotion of violence. (This was about a week after it took action to block URLs associated with the QAnon conspiracy theory.) But the New York Post story is the first high-profile instance of the site blocking just one URL without a coherent explanation.
Last night, many hours after the ban, Twitter published some of its reasoning. It said the New York Post story fell under its “hacked materials” policy, created in 2018, which states: “We don’t permit the use of our services to directly distribute content obtained through hacking that contains private information, may put people in physical harm or danger, or contains trade secrets.” Twitter invoked the same policy in June to ban a group that leaked 270 gigabytes of police-department data. But it’s hard to see how linking to a news outlet would constitute “directly” distributing hacked content, or how Twitter would apply this interpretation of its own rules consistently, when plenty of legitimate journalism involves reporting on leaks and hacks of private information pertaining to public figures. It’s an arbitrary decision. (That said, some journalists have suggested that the hacked emails might have been planted by a foreign government, raising questions about whether Twitter and other platforms can or will differentiate between government leaks with legitimate journalistic value and documents of questionable provenance distributed solely to sow discord.)
Twitter also said last night that the New York Post story contained images showing personal information such as emails and phone numbers, which is an unusual journalistic practice, as well as a better reason to limit the story’s spread—consistent with the company’s policy on doxing. The company could, and easily should, have given such an explanation much sooner. And the temporarily blocked link on the House Judiciary Committee website does not contain the images in question, making that brief ban even more inscrutable.
Over the past year, as pandemic- and election-related misinformation has run rampant and violent subcultures have found mainstream support, major social platforms have felt public pressure to take responsibility for what spreads on their sites. That’s led Twitter to make rapid-fire decisions on issues it has hemmed and hawed about in the past. The company took a big step in May by fact-checking President Trump’s lies about mail-in voting, then went further by removing some of his more egregiously incorrect posts about COVID-19. Twitter has resisted calls throughout Trump’s presidency to penalize him for tweets threatening war or a renewed nuclear arms race, but recently added a warning label to a tweet in which he suggested that Black Lives Matter protesters should be met with state violence.
Twitter has made real strides to become a safer and more useful website, but the company’s choice to ban one link without a prompt, coherent explanation cheapens that progress. It sets a bizarre precedent, implying that the company might become an arbiter of journalistic rigor or public interest. It derails the conversation around platform accountability and offers free fodder to conspiracy theorists, many of whom were thrilled to have it. Limiting the spread of conspiracism has been a driving force behind many of Twitter’s moderation decisions this year. Letting unanswered questions swirl for hours around a politically charged controversy had only the opposite effect.