In May, Gizmodo reported that Facebook had assigned a team of human “editors” to curate its Trending section, the list of topics in the upper-right corner of the desktop site. The editors were charged with making sure the items that appeared were factual, and with linking each one to a news story from a reputable source.
But the human element got Facebook in trouble. In another Gizmodo story, a former Facebook editor said he was instructed to suppress news about conservative topics. At first, Facebook CEO Mark Zuckerberg denied the practice—“We have rigorous guidelines that do not permit the prioritization of one viewpoint over another or the suppression of political perspectives,” he wrote in a post in May—but within months, the humans were fired and algorithms took over.
That immediately went poorly. Just days after the switch, the Trending section prominently displayed a fake news story declaring that Megyn Kelly had been fired from her job as a Fox News host for being a “traitor” and supporting Hillary Clinton for president. By 9:30 a.m., it had been removed from the Trending widget, but not before it had spent hours there, likely seen by millions.
The fake article’s appearance in the Trending section was particularly problematic: Stories posted there carry an implied stamp of approval from Facebook, which might make users more likely to trust them. That Facebook took the story down makes clear the company didn’t want to validate the misinformation.
But what about stories in the news feed that are so clearly false that a quick Google search disproves them? Should Facebook be filtering those?
The company is already using machine learning—different algorithms from the ones that drive the Trending section—to try to catch misinformation on the platform, a Facebook spokesman told me. If a post containing a link to a news story gets a lot of pushback in the comments—links to debunkings on Snopes or PolitiFact, two popular fact-checking sites, for example—an algorithm will infer that the original news story is probably fake. Once the link is flagged internally, it’s less likely to crop up in users’ news feeds as they scroll, no matter who posts it: anybody else who shared the same link will have their posts suppressed, too. (The algorithm works only on links, the spokesman said, not on text-only posts.)
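To make that mechanism concrete, here is a minimal sketch, in Python, of how such a comment-based signal could work. Everything in it is an assumption for illustration: the domain list, the flagging threshold, and the downranking multiplier are invented, and Facebook’s actual system is a machine-learning model, not a fixed rule like this.

```python
# Hypothetical sketch of a comment-pushback signal, loosely modeled on the
# behavior described above. None of these names or numbers are Facebook's.

from dataclasses import dataclass, field
from typing import Optional

FACT_CHECK_DOMAINS = {"snopes.com", "politifact.com"}  # assumed signal list
FLAG_THRESHOLD = 0.3      # assumed: fraction of comments citing fact-checkers
DOWNRANK_FACTOR = 0.1     # assumed: multiplier applied to a post's feed score

@dataclass
class Post:
    link: Optional[str]                         # shared URL; None if text-only
    comments: list[str] = field(default_factory=list)
    feed_score: float = 1.0                     # baseline ranking score

def has_debunk_link(comment: str) -> bool:
    """True if the comment contains a link to a known fact-checking site."""
    return any(domain in comment for domain in FACT_CHECK_DOMAINS)

def flagged_links(posts: list[Post]) -> set[str]:
    """Collect links whose comment threads show heavy fact-check pushback."""
    flagged = set()
    for post in posts:
        if post.link is None or not post.comments:
            continue  # the signal only applies to link posts, per the article
        pushback = sum(has_debunk_link(c) for c in post.comments)
        if pushback / len(post.comments) >= FLAG_THRESHOLD:
            flagged.add(post.link)
    return flagged

def apply_suppression(posts: list[Post]) -> None:
    """Downrank every post sharing a flagged link, no matter who posted it."""
    flagged = flagged_links(posts)
    for post in posts:
        if post.link in flagged:
            post.feed_score *= DOWNRANK_FACTOR

# Two posts share the same dubious link; pushback on one suppresses both.
posts = [
    Post("fakenews.example/kelly-fired", [
        "This is false, see snopes.com/megyn-kelly",
        "politifact.com rates this Pants on Fire",
        "wow, can't believe this",
    ]),
    Post("fakenews.example/kelly-fired", ["sharing!"]),
]
apply_suppression(posts)
print([round(p.feed_score, 2) for p in posts])  # [0.1, 0.1]
```

Note that in this toy version, as in Facebook’s description, the flag attaches to the link itself rather than to any one post, which is why the second sharer is suppressed despite having no pushback in their own comments.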
And there might be more to come in the fact-checking field. Adam Mosseri, Facebook’s vice president in charge of the news feed, shared a statement with TechCrunch that hinted at future plans:
Despite these efforts we understand there’s so much more we need to do, and that is why it’s important that we keep improving our ability to detect misinformation. We’re committed to continuing to work on this issue and improve the experiences on our platform.
What Facebook chooses to do will ultimately be informed by how the company sees its role on the internet. If it considers itself a mirror that reflects the rest of the net, unfiltered and unvarnished, then it probably won’t step in to play a stronger moderating role. But if it fancies itself a safe space for sharing opinions and ideas, in addition to the humdrum of daily life, it might need to be more of an arbiter.