Last night, many hours after the ban, Twitter published some of its reasoning. It said the New York Post story fell under its “hacked materials” policy, created in 2018, which states: “We don’t permit the use of our services to directly distribute content obtained through hacking that contains private information, may put people in physical harm or danger, or contains trade secrets.” Twitter invoked the same policy in June to ban a group that leaked 270 gigabytes of police-department data. But it’s hard to see how linking to a news outlet would constitute “directly” distributing hacked content, or how Twitter would apply this interpretation of its own rules consistently, when plenty of legitimate journalism involves reporting on leaks and hacks of private information pertaining to public figures. It’s an arbitrary decision. (That said, some journalists have suggested that the hacked emails might have been planted by a foreign government, raising questions about whether Twitter and other platforms can or will differentiate between government leaks with legitimate journalistic value and documents of questionable provenance distributed solely to sow discord.)
Twitter also said last night that the New York Post story contained images showing personal information such as emails and phone numbers, which is an unusual journalistic practice, as well as a better reason to limit the story's spread, consistent with the company's policy on doxing. The company could easily have given such an explanation much sooner, and should have. But the temporarily blocked link on the House Judiciary Committee website does not contain the images in question, making that brief ban even more inscrutable.
Over the past year, as pandemic- and election-related misinformation has run rampant and violent subcultures have found mainstream support, major social platforms have felt public pressure to take responsibility for what spreads on their sites. That’s led Twitter to make rapid-fire decisions on issues it has hemmed and hawed about in the past. The company took a big step in May by fact-checking President Trump’s lies about mail-in voting, then went further by removing some of his more egregiously incorrect posts about COVID-19. Twitter has resisted calls throughout Trump’s presidency to penalize him for tweets threatening war or a renewed nuclear arms race, but recently added a warning label to a tweet in which he suggested that Black Lives Matter protesters should be met with state violence.
Read: Twitter’s least-bad option for dealing with Donald Trump
Twitter has made real strides to become a safer and more useful website, but the company's choice to ban one link without a prompt, coherent explanation cheapens that progress. It sets a bizarre precedent, implying that the company might become an arbiter of journalistic rigor or public interest. It derails the conversation around platform accountability and offers free fodder to conspiracy theorists, many of whom were thrilled to have it. Limiting the spread of conspiracism has been a driving force behind many of Twitter's moderation decisions this year. Letting unanswered questions swirl for hours around a politically charged controversy has only the opposite effect.