But a less-discussed part of the memo raises questions about how much control the company would actually cede to its “Supreme Court.”
“When left unchecked, people will engage disproportionately with more sensationalist and provocative content,” Zuckerberg argued. “This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.”
He called this “borderline content”––content that doesn’t break Facebook’s rules, but that walks right up to the line without crossing it. “No matter where we draw the lines for what is allowed,” he wrote, “as a piece of content gets close to that line, people will engage with it more on average––even when they tell us afterwards they don’t like the content.”
Borderline content getting more engagement “applies not only to news but to almost every category of content,” he added. “For example, photos close to the line of nudity, as with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don’t come within our definition of hate speech but are still offensive.”
The solution he proposed:
This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. [Facebook needs] to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join, [because] while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization.
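To make the memo’s proposal concrete, here is a minimal, hypothetical sketch of what “penalizing borderline content so it gets less distribution” could look like inside a ranking system. The function name, the idea of a “borderline score” produced by some classifier, and the penalty exponent are all assumptions for illustration; nothing here describes Facebook’s actual code.

```python
# A hypothetical sketch of the incentive change Zuckerberg describes: instead
# of letting distribution rise as content approaches the policy line, demote
# a post in proportion to how close it sits to that line.
# "borderline_score" is an assumed classifier output in [0, 1], where 1.0
# means the post barely stops short of a violation. These names and numbers
# are illustrative only, not Facebook's implementation.

def penalized_rank_score(base_engagement_score: float,
                         borderline_score: float,
                         penalty_strength: float = 2.0) -> float:
    """Down-rank content as it nears the policy line.

    base_engagement_score: the ranking score a post would otherwise receive.
    borderline_score: assumed estimate in [0, 1] of proximity to the line.
    penalty_strength: how sharply borderline content is demoted.
    """
    # The penalty rises steeply only near the line, leaving clearly benign
    # content (low borderline_score) essentially untouched.
    penalty = borderline_score ** penalty_strength
    return base_engagement_score * (1.0 - penalty)


if __name__ == "__main__":
    # A post nowhere near the line keeps most of its distribution...
    print(penalized_rank_score(100.0, 0.1))   # ~99.0
    # ...while a post just short of a violation is sharply demoted.
    print(penalized_rank_score(100.0, 0.95))  # ~9.75
```

The point of a scheme like this, in the memo’s telling, is to flatten the curve so that creeping up to the line no longer pays off in reach.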
Of course, while all borderline content may boost engagement, not all borderline content is the same substantively or morally. There’s a difference between a bigot who knows he can’t use racial slurs but gets perilously close … and an activist who objects to a war or a police killing or torture or animal cruelty in the strongest language permitted by the platform’s rules. ACT UP was far more sensationalist and polarizing than late-1980s contemporaries who downplayed the seriousness of AIDS in staid language. It would seem perverse, in a case like that, to penalize the provocative content in favor of the content that does not offend.
Regardless, Zuckerberg believes that the change is both necessary and desirable:
One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it’s important to remember that won’t address the underlying incentive problem, which is often the bigger issue.
This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content … By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less polarized discourse where more people feel safe participating.
Perhaps. And a “penalty” that slightly decreases distribution is not censorship. But at some point, “penalizing” content becomes almost indistinguishable from censoring it. If I post something to Facebook but no one sees it, that’s functionally equivalent to outright removal. And a tweak to the algorithm that penalizes content that complies with the stated rules is almost the opposite of moving toward content-moderation transparency. At least outright removal can be verified and challenged.