That enthusiasm didn’t last, but mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm. During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year. Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate. These actions had a domino effect, as podcast platforms, on-demand fitness companies, and other websites banned QAnon postings. Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
As if to make clear how far things had come since 2016, Facebook and Twitter both took unusually swift action to limit the spread of a New York Post article about Hunter Biden mere weeks before the election. By stepping in to limit the story’s spread before it had even been evaluated by any third-party fact-checker, these gatekeepers trumped the editorial judgment of a major media outlet with their own.
Gone is the naive optimism of social-media platforms’ early days, when—in keeping with an overly simplified and arguably self-serving understanding of the First Amendment tradition—executives routinely insisted that more speech was always the answer to troublesome speech. Our tech overlords have been doing some soul-searching. As Reddit CEO Steve Huffman said, when doing a PR tour about an overhaul of his platform’s policies in June, “I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency.”
Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University. The strong protection of even literal Nazism is the most famous emblem of America’s free-speech exceptionalism. But one year and one pandemic later, Zuckerberg’s thinking, and, with it, the policy of one of the biggest speech platforms in the world, had “evolved.”
The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines. This might seem an obvious move; the virus has killed more than 315,000 people in the U.S. alone, and widespread misinformation about vaccines could be one of the most harmful forms of online speech ever. But until now, Facebook, wary of political blowback, had refused to remove anti-vaccination content. The pandemic, however, has shown that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.