Internet Censorship Could Happen More Than One Way

A landmark ruling in a “right to be forgotten” case discourages censorship on a global scale. What happens in individual countries may be a different story.


Yesterday the European Court of Justice rejected France’s attempt to impose the European Union’s “right to be forgotten” on internet users everywhere around the world—a ruling that has been widely celebrated as a win for free speech. The court ruling means that privacy restrictions in Paris won’t dictate which links Google can or can’t include in search results in the United States. The decision also appears to set a precedent for restraint; if the union’s highest court won’t try to force the EU approach on the rest of the world, perhaps other jurisdictions—including more repressive ones such as Russia and Turkey—will refrain from trying to impose their approaches too.

But while First Amendment enthusiasts in the United States are heaving a sigh of relief, the victory is a precarious one. Nothing in the ruling keeps the European Union from revising its laws so that right-to-be-forgotten decisions can be applied globally; to the contrary, the court provides a blueprint for how the EU might do so. And as the court also makes clear, individual member nations can continue to pursue global takedowns and delistings, even in situations where EU law does not demand it.

Moreover, in the absence of global delinking and takedown orders, the ruling envisions and promotes more geo-blocking—that is, the suppression of certain internet content only in countries that forbid it. This kind of geographic segmentation is in many ways preferable to global censorship orders, but it has downsides of its own—potentially fostering increased censorship at the local level, in ways that exacerbate local repression.

Yesterday’s judgment resolves a long-standing dispute between France and Google over the right to be forgotten, which is intended as a means of promoting individual privacy rather than as a restriction on free expression. Now enshrined in EU law, the right enables citizens and residents of the bloc to demand that a search engine or website delete or unlink personal information they deem obsolete or excessively intrusive, even if true, and even in the absence of a finding of prejudice. Google and other search engines must delink offending webpages from a search of an individual’s name, even if the underlying article or webpage is lawful and remains online. In effect, the right is one of curation—enabling individuals to manage their own reputations online and to keep an embarrassing news article, or an arrest on a charge that was later dropped, from following them for the rest of their lives.

This right has been actively employed. In the five years since the European Court first announced the right, Google has received more than 846,000 requests to delist a total of 3.3 million URLs. Google has granted the requests approximately 55 percent of the time. (Notably, some 20 percent of the demands came from 1 percent of the requesters. According to Google, many of these repeat requesters are reputation-management services and law firms.)

France, which deems the right a fundamental aspect of the right to privacy, has long argued that Google must delink the information globally. No one anywhere, the country argues, should be directed to a webpage that an individual has successfully suppressed via the right to be forgotten. In France’s view, this is the only way to adequately protect individuals’ rights.

Google, by contrast, argues that the right to be forgotten butts heads with other fundamental rights, including the speech-related right to access information that is lawful and true. To balance these competing interests, Google delinks the relevant webpages when users search an affected person’s name from within Europe. But the links are still visible to those who search the same name from outside the EU. Using geo-blocking, Google asserts that it can make the geographic distinctions with approximately 99 percent accuracy.

Ruling in Google’s favor, the European Court concluded that EU law does not require delinking to be global. But the court left open the possibility that the EU could demand global delinkings in the future—so long as the law is rewritten to explicitly allow it to do so. And irrespective of EU law, individual nations can still demand worldwide delinking if their laws and their court systems permit it. In other words, global delinking and takedown orders are hardly off the table.

Notably, the court’s analysis all but dictates the result in another closely watched case—pending before the European high court—with even more significant implications for free speech, in which an Austrian court has demanded the worldwide removal of Facebook posts calling a former Austrian Green Party leader a “lousy traitor,” a “corrupt oaf,” and a member of a “fascist party.” According to the Austrian Supreme Court of Justice, this constitutes impermissible defamation and must be taken down.

The European Court is now being asked to decide whether the Austrian court can, consistent with EU law, require Facebook to take down identical or equivalent content as well—and whether any such orders could be applied globally, covering content posted and viewed by those outside of Austria. The right-to-be-forgotten ruling makes clear that nothing in EU law would prevent Austria from trying, if it wishes, to apply its takedown orders globally under Austrian rather than EU law. Of course, the smaller the jurisdiction, and the more draconian the measure, the more likely it is that internet companies will decide to forgo access to that market rather than comply. In most cases, though, companies are loath to walk away.

Even if yesterday’s opinion is read to discourage these kinds of global takedown or delisting orders, the alternative—the suppression of content in one country alone—carries its own risks. The more the market is geographically segmented to accommodate divergent speech and privacy norms, the easier it will be for any one country to get away with increased censorship within its own borders. Global tech companies that operate across borders are, after all, much more willing to resist idiosyncratic and repressive speech restrictions when they are required to do so across their entire platform. If, however, they can respond to takedown demands or delink content in a geographically segmented way, they need not worry about competing values and the broader implications of, say, globally prohibiting all critiques of the Thai monarchy (something prohibited under Thai law). As a result, they are often willing to comply with local restrictions, particularly when doing so is a condition of doing business in that state.

As I have argued elsewhere, this patchwork of speech regulation is perhaps the least bad means of dealing with conflicting speech and privacy norms across borders. But dangers exist. Whereas global mandates are to be resisted because they permit a censor-prone nation to set global speech rules, geographic segmentation risks facilitating local repression in nontransparent and insidious ways.