Two years ago, Matthew Prince, CEO of Cloudflare, saw this controversy coming and begged not to be put in this position. Then and now, his company—which helps provide some of the basic plumbing of the internet—found itself at the center of the battle over which speech should and should not be easily available online. The more fundamental question is who gets to make these decisions, and it’s being answered by default, in the absence of any legal norms. Who are the deciders? This week, at least, companies like Prince’s are.
Until recently, the infamous message board 8chan was one of Cloudflare’s clients. A breeding ground for violent extremists, 8chan has been the host of advance announcements of three mass shootings in less than six months, including the shooting in El Paso, Texas, on Saturday. Early yesterday morning, Cloudflare stopped serving 8chan—thereby disabling it, if only temporarily.
Though unknown to the average web user, Cloudflare makes possible much of the internet you see. Providing security, performance, and reliability services to its clients, the company sits in the middle of the “stack” of layers of infrastructure that bring web content to many users. Cloudflare is both fairly ubiquitous and largely imperceptible. A company serving such technical functions might seem an unlikely focal point for a deep philosophical debate over the meaning of free speech and the rule of law in the internet age. But that is what it became, because of 8chan.
As critics questioned how 8chan remained readily accessible even after being so directly linked to a string of horrific violence, their gaze settled on Cloudflare. One of the services Cloudflare provides is protection against cyberattacks; without such protection, vigilantes can target sites and effectively take them offline. Why was Cloudflare standing in their way? The negligible revenue from one of millions of customers seemed hardly worth the headache. But Prince said he felt a “moral obligation” to keep 8chan on Cloudflare’s network, in part to help law enforcement monitor the message board’s activity, and in part because he worried about the precedent it could set. As Cloudflare’s general counsel put it on Sunday, the company did not want to be in the business of evaluating content; it saw itself as “largely a neutral utility service.”
Then, later in the day, the company reversed course. In a blog post, Prince said, “The rationale is simple: [People at 8chan] have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths.” It’s true that 8chan is “designed to be lawless and unmoderated,” as Prince put it. Even on Sunday, as police worked to verify its connection to the El Paso shooting, it still defiantly welcomed readers to the “Darkest Reaches of the Internet.” But as Prince insisted 8chan was an easy case because it is “uniquely lawless,” he also wrestled with an uncomfortable truth: His decision was lawless too.
Traditionally, the rule of law is associated with constraining governments. It reflects the idea that the exercise of power over fundamental rights should not be arbitrary. The rule of law is achieved in part by having procedural protections that make decisions transparent, consistent, predictable, and fundamentally accountable.
Prince’s decision was anything but. After a weekend of publicly insisting he had a moral obligation to keep serving 8chan, he could suddenly, unilaterally, and arbitrarily pull Cloudflare’s services. Prince, of course, is not a government official. He did not order police into a public square to arrest a speaker with whom he disagreed. Yet as the internet becomes the primary forum for free expression, power over who gets a platform to speak is increasingly falling to private companies like his. And they exercise this power with none of the traditional protections that should accompany decisions of such public consequence. When Prince thoughtfully details what’s wrong with this situation, his legal training shows. So does the fact that he’s been in the same uncomfortable position before.
Two years ago, almost exactly to the day, Cloudflare terminated the account of the Daily Stormer, another online petri dish of violent extremism. Back then, the final straw was an article celebrating the murder of Heather Heyer at the white-nationalist rally in Charlottesville, Virginia, in August 2017. Far from trying to rationalize the decision, Prince wrote to employees, “Let me be clear: this was an arbitrary decision … Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” He was being deliberately provocative. Prince told Recode that he was frustrated at the paucity of public debate about who should be making editorial decisions on the internet and wanted to force more conversation about the issue. During moments of controversy, high-profile companies such as Facebook and YouTube typically defend their own decision making in blog posts that explain, with varying degrees of persuasiveness, how what they did conformed with their platform rules. Prince did nothing of the sort. Embracing the storm, he called his own decision “dangerous.”
Many commentators agreed. There are real risks in moving censorship tools into the infrastructure layers of the internet stack, where they are less visible to end users. Website owners have a limited choice of infrastructure intermediaries, and so cannot as easily just move to another company if Cloudflare refuses to provide services; the tools these intermediaries have available are blunter, and normally result in removing whole sites and domains rather than individual posts; and the decisions can be less transparent and accountable, because these layers of the internet are not visible to the average person. However, when platforms refuse to moderate themselves but remain technically within the law, as 8chan does, intervention by infrastructure companies may be the only tool available.
In the two years since Cloudflare stopped serving the Daily Stormer, no framework for how to deal with content regulation at the infrastructure layer has arisen. But the tone of the conversation has changed. The optimism of the early years of the internet, that the free flow of information would inevitably produce positive outcomes, is crumbling; the civil libertarian John Gilmore’s adage that the internet interprets censorship as damage and routes around it has proved false—and lost its intellectual appeal in any case. More and more, a lack of censorship of dangerous speech is coming to be seen as damage, and internet intermediaries are being called on to be better gatekeepers to prevent it from bypassing societal controls.
So two years later, Prince’s tone is also very different. Instead of talking of how arbitrary his decision is, he attempts to provide a limiting principle: Where platforms have been designed to be lawless and unmoderated, and where the platforms have demonstrated their ability to cause real harm, then it may be justified for companies like Cloudflare to try to limit the reach of harmful content online. He is committed to the hard task of defining a policy that Cloudflare can enforce transparently and consistently going forward. But he also doesn’t provide any details about how his company intends to monitor its clients for disqualifying behavior. Tech companies that operate around the world are often highly reactive to issues that get attention in a relatively narrow spectrum of Western media or high-profile Twitter users, but outsourcing decisions about free speech to the most prominent commentators in those forums does not constitute a sufficient action plan.
The global ramifications are enormous. In 2017, Prince worried that, by taking the first step along the road of moderating vile but legal content at all, it would be “harder for us to argue against a government somewhere pressuring us into taking down a site they don’t like.” This weekend, he cited regular requests from Middle Eastern governments to take down the sites of an LGBTQ support group because they are “corrupting the children.” Companies might feel justified in rejecting such requests on human-rights grounds. But especially in less clear-cut cases, who is Matthew Prince to determine from San Francisco the legitimacy of a government’s insistence that something is harmful a world away? How can he justify his decisions as more than arbitrary? This is no doubt why he wants someone else—legislators, legal scholars, an independent oversight board, somebody—to provide a framework.
Prince’s reluctance to take on this role is unsurprising. It is not going well for the social-media platforms for which content moderation is part of the business model. Nobody wants to be left holding the content-moderation hot potato. Determining where to draw the line on speech is fraught: In many cases, there are no “right” answers, and any decision will upset some portion of companies’ users.
This is why Prince’s focus on processes and the rule of law is important. People’s judgments of the legitimacy of a decision are far more influenced by processes than outcomes. These values are also more universal: As Prince told Stratechery, “If you go to Germany and say ‘the First Amendment,’ everyone rolls their eyes, but if you talk about the rule of law, everyone agrees with you.” And so while it might have been convenient for it to be true that “conduits, like Cloudflare, are not visible to users and therefore cannot be transparent and consistent about their policies,” it is his responsibility now to make sure that’s not the case.
The 8chan decision is only the start of this conversation. 8chan is not the only lawless platform. And Prince himself acknowledged, “Increasingly, especially as more and more of the connection to websites becomes encrypted, more and more [questions about preventing access to illegal content] are going to fall to us.” Content moderation is moving down the stack, away from the application layer that users see. It needs to happen in a way that is transparent, consistent, and accountable. Otherwise, decisions like the one to stop serving 8chan may look like efforts to tame the Wild West of the internet, but without any safeguards in place they remain lawless—and just prove that no one knows who the sheriff should be.