In a 2013 report summarizing global challenges, the World Economic Forum singled out "massive digital misinformation" as "one of the main risks for the modern society." Social networks may be structurally optimized for sharing; their structures, however, don't tend to distinguish between good information and bad. Which means that sites like Facebook aren't just great repositories for updates from your friends and pictures of your dog; they can also be breeding grounds for rumors, lies, and conspiracy theories. "False information," write Walter Quattrociocchi and a group of colleagues at Northeastern University, "is particularly pervasive on social media, fostering sometimes a sort of collective credulity."
That's the assumption, anyway. And Quattrociocchi and his colleagues wanted to test it—with a focus on Facebook. On its platform, they wondered, are there meaningful differences in the ways we interact with information that is true ... and information that is false?
The most obvious benefit to research conducted on Facebook is that it gives you not just tons of data to work with, but also tons of structured data. Quattrociocchi and his team took advantage of that as they studied the patterns that emerged among more than 2.3 million Facebook users in Italy. To get a baseline measure of those users' gullibility, they examined how those people treated political information posted to the network during the country's 2013 elections. Based on Likes and comments, they analyzed how those users treated news from three main categories of sources: mainstream news organizations, alternative news organizations (sites that share "topics that are neglected by science and mainstream media"), and pages devoted to political commentary.
Once they had that, the researchers compared those results against the same users' reactions to false information—stuff that had been sent into their streams by satirical news sites or lie-spreading trolls. They then measured the timespan between a post's first comment and its last—a rough proxy for the collective attention paid to it. They wanted to see how long people engaged with information that was true ... and how long they engaged with information that was false.
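That attention proxy is simple enough to sketch in a few lines of code. Here's a minimal illustration of the idea—the post names and timestamps are invented for the example, not drawn from the study's data:

```python
from datetime import datetime

# Hypothetical comment timestamps for two posts (ISO 8601 strings) --
# invented for illustration, not the researchers' actual data.
comments_by_post = {
    "mainstream_post": ["2013-02-01T09:00", "2013-02-01T12:30", "2013-02-03T18:45"],
    "conspiracy_post": ["2013-02-01T10:15", "2013-02-02T08:00", "2013-02-03T20:05"],
}

def lifetime_hours(timestamps):
    """Span between a post's first and last comment, in hours:
    the rough proxy for collective attention described above."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    return (times[-1] - times[0]).total_seconds() / 3600

for post, stamps in comments_by_post.items():
    print(post, lifetime_hours(stamps))
```

Comparing the distribution of these lifetimes across true and false content is, in essence, what the study did at the scale of millions of users.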
Their findings will be, perhaps, unsurprising to anyone familiar with the discursive particularities of Facebook. Basically: the truth or falsity of the content was largely irrelevant to the length of the conversation about it. Veracity didn't seem to matter much when it came to how long people kept talking.
Here's what that looked like, charted:
The researchers found similar curves when they traced engagement patterns around the content—as manifested through Likes and comments:
Attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information.
What they didn't track was the substance of these conversations; there's a chance that the Facebook users in question were spending all that time debunking the conspiracy theories. (But, you know: a chance.) The more salient point, though, is that they were dedicating their attention to those theories at all. They were allowing their time and their minds to be directed by misinformation, rumors, and lies. Which may be good news for your Uncle Stan and his various thoughts about the Kennedy assassination ... but bad news for the rest of us. How, then, do you start a conspiracy theory on Facebook? Throw a crazy idea out there, and see what happens.