Second, enforcement during the state of emergency is swift and blunt. With most human content moderators at home and unable to work remotely for logistical reasons, the major platforms have to rely on their automated tools more heavily than usual. Facebook, Twitter, and YouTube all acknowledged that they would make more mistakes as a result. In other words, they would remove speech that should stay up. This speech becomes collateral damage in the mobilization around the pandemic, a concession to the exigencies of the moment. With misinformation a potential matter of life and death, and no way to have humans review every post, the choice between blunt tools and no moderation at all is an easy one.
Even the usually sacred principle that platforms will not interfere with the speech of political figures has been abandoned. After Twitter removed tweets by Brazilian President Jair Bolsonaro that spread false or misleading information about COVID-19 cures, in violation of its policies, Facebook and YouTube quickly followed suit. For a tech platform to suppress the statements of a democratically elected leader is a truly remarkable step, and potentially one that makes it harder for voters to hold their representatives accountable in the future.
Third, on top of these sweeping new rules and blunter enforcement, platforms have suspended their usual due-process protections. Being muted by an algorithm on Facebook or YouTube may carry no legal consequence, unlike, say, being silenced by police in the public square. Yet the former is a far greater hindrance to a person’s ability to reach an audience, especially at a moment of social distancing. Even so, with fewer human content moderators on deck, the major platforms have all scaled back their appeals processes for people who believe their posts were wrongly taken down.
The platforms are revealing their far-reaching power in other ways. For some time before the pandemic, members of Congress and regulators around the world had been attacking major internet companies over their data-collection and data-sharing practices. Yet in recent weeks, Facebook and Google have presented their troves of hyper-detailed data as a boon to disease researchers and have unveiled new products that employ user information to help document the pandemic’s spread and organize response efforts. As the tech journalist Casey Newton wrote recently, “Big tech companies, which have spent the past three years on the defensive over their data collection practices, are now promoting them.”
If ever an emergency justified a clampdown on misinformation and other extraordinary measures, the coronavirus pandemic is surely it. The tech companies’ swift action in the current crisis has been widely praised, and rightly so. But real questions remain. Unlike most countries’ emergency constitutions, those of the major platforms come with no checks or constraints. Are these emergency powers temporary? Will there be any oversight to ensure they are exercised proportionately and even-handedly? Are data being collected to assess the effectiveness of these measures and their cost to society, and will those data be available to independent researchers? Some are already asking whether things should ever go back to “normal,” or whether this more iron-fisted rule is what the internet needed all along. The favorable news coverage the platforms are receiving will no doubt make similar heavy-handedness more tempting in the future, and in circumstances far less dire than a global pandemic.