Facebook’s fact-checking efforts are on the rocks. Five months after the social-media giant debuted a third-party tool to stop the spread of dubious news stories on its platform, some of its fact-checker partners have begun expressing frustration that the company won’t share data on whether the program has been effective.

In the absence of that official data, a study by Yale researchers made waves last week by suggesting that flagging a post as “disputed” makes readers a slim 3.7 percent less likely to believe its claim. Among Trump supporters and young people, the fact-checking program could even backfire: Those respondents were more likely to believe unflagged posts after they saw flags on others.* That concern was echoed earlier this year by the actor James Woods, who tweeted that a disputed tag on Facebook was the “best endorsement a story could have.”

The study, along with ongoing revelations about how Russian troll farms might have used Facebook ads to meddle in the U.S. presidential election, has stirred up the debate about whether and how social-media companies ought to police misinformation and propaganda on their platforms. Facebook claims that its efforts are working and has criticized the Yale researchers’ methodology, but a growing body of scholarship shows just how difficult online fact-checking has become. With roots in old-fashioned cognitive biases that are amplified by social-media echo chambers, the problem is proving extraordinarily difficult to fight at an institutional level.

Take Walter Quattrociocchi, a computer scientist at the University of Venice who has published a torrent of research over the past few years examining how Facebook users consume information and self-segregate into online communities. In one recent paper, Quattrociocchi’s team looked at five years’ worth of Facebook posts, along with likes and comments, from a group of 413 public pages. These pages ranged from science-themed fare like “ScienceDaily” to ominously titled conspiracy pages like “I Don’t Trust The Government.”

What Quattrociocchi found may have deep implications for the future of online fact-checking. Facebook users who cluster around conspiracy-related content tend to interact only with material that affirms their preexisting worldview. In the rare cases when they do come into contact with dissenting information that attempts to debunk conspiracy theories, in the form of public posts by science-related pages, the conspiracy theorists become more, rather than less, likely to interact with conspiracy-related content in the future. In fact, conspiracy theorists who never interact with dissenting viewpoints are almost twice as likely as those who do to eventually drift away from conspiracy-themed content.

In other words, attempting to correct wrongheaded beliefs on Facebook appears to accomplish the precise opposite of what is intended. Instead of persuading readers that a post is factually inaccurate, the correction entrenches them further in their erroneous beliefs. That’s not the same as studying the effect of a “disputed” tag on an article’s virality (only Facebook has access to that information), but it appears to be a good proxy.

Quattrociocchi doesn’t equivocate about his own feelings. He calls any promise that fact-checking can stomp out the spread of misinformation on social media a “hoax” and “bullshit.”

Though the problem predates the 2016 presidential election, it came into sharp focus during that time. There were those Macedonian teens who discovered they could make a quick buck by publishing fictitious news reports designed to outrage conservative Americans. There was the rise of fringe media outlets like Infowars, whose figurehead Alex Jones has refused to retract conspiracy theories claiming the Sandy Hook shooting was staged by paid actors. And during the campaign, Donald Trump acknowledged what was already coming to be called “fake news” by appropriating the term as a diss he still often lobs at CNN and The New York Times in the wake of unfriendly reports.

Before the internet, Quattrociocchi says, information had to make it past various gatekeepers before it could be widely disseminated. Those sentinels were often flawed, but they tended to filter out the most outrageous misinformation. Now, he suspects, information propagates via the same mechanisms as selfies or memes, leading to a crisis of authority.

None of this leaves Facebook in an easy position. In the wake of the 2016 election, the company faced a wave of criticism for having allowed misinformation to go unchecked on its platform during the campaign. Mark Zuckerberg, Facebook’s CEO, initially went on the defensive, but eventually acquiesced. In response to criticism of the new fact-checking program’s implementation, the company argues that flagging posts is only one part of a larger effort to fight misinformation on the platform. (Facebook declined to provide any further information on the record.)

Regardless, the difficulty of online fact-checking presents a grave challenge to public discourse. On the open web, those same cognitive biases can lead to vast, ingrown communities that reinforce preposterous beliefs. Kate Starbird, a researcher at the University of Washington, set out to study how useful, accurate information about safety and breaking news spreads on Twitter during bombings and mass shootings. But she started to notice an unnerving trend: As each disaster unfolded, a group of fringe accounts would start to promulgate paranoid theories about how the tragedy had been a “false flag,” carried out by the government or some other shadowy cabal.

Earlier this year, she published a provocative paper arguing that these strange networks of conspiracy posters are deeply connected to white nationalism, the alt-right, and the associated media ecosystem of sites like Infowars.

“We have ideas about how we make sense of the world,” Starbird said. “Our current information environment makes us newly vulnerable to things like filter bubbles and purposeful manipulation. People who understand how we think will try to influence us with ads and propaganda.”

Starbird did point out that not all findings are as somber as Quattrociocchi’s. One bright spot, for example, is that users may be more receptive to fact-checking if it comes from a friend—though that sort of engagement can be exhausting in these fraught times.

Starbird’s work has led her to reevaluate her outlook on the role of social media in the world. “When I started doing research about social media, I was very optimistic about its role in crisis response,” Starbird said. “But after the last year and a half, I’ve become increasingly depressed. I have a much more negative outlook.”


* This article originally stated that young people and Trump supporters were more likely to believe flagged posts than unflagged posts. We regret the error.