
Last month, Mark Zuckerberg, Facebook’s CEO and majority shareholder, published a memo on censorship. “What should be the limits to what people can express?” he asked. “What content should be distributed and what should be blocked? Who should decide these policies and make enforcement decisions?”

The company had previously posted its community standards and the internal guidelines that it uses when attempting to enforce those standards.

Now its CEO was looking to the future.

One idea he aired might be thought of as a Supreme Court of Facebook. “I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own,” Zuckerberg wrote. “In the next year, we’re planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding.”

A person who was kicked off the platform, or who is frustrated that a certain kind of post is consistently censored, might soon have a new venue to air his grievances.

As the attorney Evelyn Douek noted recently, a body of that sort could conceivably “transform understandings of online speech governance, international communication and even the very definition of ‘free speech.’” On its face, “Zuckerberg’s proposal looks like a renunciation of power by Facebook. If the Supreme Court of Facebook is a truly independent body that Facebook will accept as binding authority on its content moderation decisions, Facebook would be giving up power to unilaterally decide what should and shouldn’t be on its platform.”

But a less discussed part of the memo raises questions about how much control the company would actually cede to its “Supreme Court.”

“When left unchecked, people will engage disproportionately with more sensationalist and provocative content,” Zuckerberg argued. “This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.”

He called this “borderline content”––content that doesn’t break Facebook’s rules, but that walks right up to the line without crossing it. “No matter where we draw the lines for what is allowed,” he wrote, “as a piece of content gets close to that line, people will engage with it more on average––even when they tell us afterwards they don’t like the content.”

Borderline content getting more engagement “applies not only to news but to almost every category of content,” he added. “For example, photos close to the line of nudity, as with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don’t come within our definition of hate speech but are still offensive.”

The solution that he proposed:

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. [Facebook needs] to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join, [because] while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization.
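To make the mechanics concrete, here is a minimal sketch, in Python, of what such a penalty could look like inside a feed-ranking system. Facebook has not published its ranking code, so every name, formula, and number below (the `policy_proximity` score, the `borderline_penalty` function, the demotion strength) is an illustrative assumption rather than the company's actual method: a model estimates how close a post comes to a policy line, and the ranking score is discounted more heavily as that estimate approaches the line.

```python
# Hypothetical sketch of demoting "borderline" content in feed ranking.
# Facebook has not disclosed its ranking system; every name and number
# here is an illustrative assumption, not the company's actual code.

def borderline_penalty(policy_proximity: float, strength: float = 2.0) -> float:
    """Return a multiplier in (0, 1] that shrinks as content nears the policy line.

    policy_proximity: a model's estimate, from 0.0 (clearly fine) to 1.0
        (right at the line for nudity, hate speech, etc.), of how close a
        post comes to violating a rule without crossing it.
    strength: how aggressively near-the-line content is demoted.
    """
    return 1.0 / (1.0 + strength * policy_proximity)


def rank_score(predicted_engagement: float, policy_proximity: float) -> float:
    """Combine an engagement prediction with the borderline penalty.

    Ranking purely on predicted engagement rewards content that creeps
    toward the line, since such content tends to draw more clicks and
    comments. Multiplying by the penalty inverts that incentive.
    """
    return predicted_engagement * borderline_penalty(policy_proximity)


if __name__ == "__main__":
    posts = [
        {"id": "benign", "predicted_engagement": 0.40, "policy_proximity": 0.05},
        {"id": "edgy", "predicted_engagement": 0.55, "policy_proximity": 0.90},
    ]
    for post in posts:
        score = rank_score(post["predicted_engagement"], post["policy_proximity"])
        print(post["id"], round(score, 3))
    # The "edgy" post out-engages the benign one, yet it ranks lower once
    # the penalty is applied -- the incentive change Zuckerberg describes.
```

The design choice being illustrated is the one the memo argues for: rather than moving the line itself, invert the reward curve so that approaching the line costs distribution instead of earning it.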

Of course, while all borderline content may boost engagement, not all borderline content is the same substantively or morally. There’s a difference between a bigot who knows he can’t use racial slurs but gets perilously close … and an activist who objects to a war or a police killing or torture or animal cruelty in the strongest language permitted by the platform’s rules. ACT UP was far more sensationalist and polarizing than late-1980s contemporaries who downplayed the seriousness of AIDS in staid language. It would seem perverse, in a case like that, to penalize the provocative content in favor of the content that does not offend.

Regardless, Zuckerberg believes that the change is both necessary and desirable:

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it’s important to remember that won’t address the underlying incentive problem, which is often the bigger issue.

This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content … By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less polarized discourse where more people feel safe participating.

Perhaps. And a “penalty” that slightly decreases distribution is not censorship. But at some point, “penalizing” content becomes almost indistinguishable from censoring it. If I post something to Facebook but no one sees it, that’s functionally equivalent to outright removal. And a tweak to the algorithm that penalizes content that complies with the stated rules is almost the opposite of moving toward content-moderation transparency. At least outright removal can be verified and challenged.

Douek, speculating as to why a “Supreme Court of Facebook” might be appealing to the company, argues, “Content-moderation decisions on Facebook are hard, and any call is likely to upset a proportion of Facebook users. By outsourcing the decision and blame, Facebook can try to wash its hands of controversial decisions.” If that’s part of the motivation, it doesn’t make the underlying idea better or worse.

But consumers should be aware that Facebook may prefer to manipulate distribution rather than impose an outright ban. A Supreme Court of Facebook with no control over the algorithm, operating while Facebook remains opaque about what content it penalizes and why, wouldn’t necessarily strip Facebook of control over free expression or the most important censorship decisions after all.

This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for Freedom of the Press, and the Fetzer Institute.
