Facebook’s fact-checking efforts are on the rocks. Five months after the social-media giant debuted a third-party tool to stop the spread of dubious news stories on its platform, some of its fact-checker partners have begun expressing frustration that the company won’t share data on whether the program has been effective.
In the absence of that official data, a study by Yale researchers made waves last week by suggesting that flagging a post as “disputed” makes readers just 3.7 percent less likely to believe its claim. Among Trump supporters and young people, the fact-checking program could even backfire: Those respondents were more likely to believe unflagged posts after they saw flags on others.* That concern was echoed earlier this year by the actor James Woods, who tweeted that a disputed tag on Facebook was the “best endorsement a story could have.”
The study, along with ongoing revelations about how Russian troll farms may have used Facebook ads to meddle in the U.S. presidential election, has stirred up the debate about whether and how social-media companies ought to police misinformation and propaganda on their platforms. Facebook claims that its efforts are working and has criticized the Yale researchers’ methodology, but a growing body of scholarship shows how difficult fact-checking has become online. Rooted in old-fashioned cognitive biases that social-media echo chambers amplify, the problem is proving extraordinarily difficult to fight at an institutional level.