In a 2013 report summarizing global challenges, the World Economic Forum singled out "massive digital misinformation" as "one of the main risks for the modern society." Social networks may be structurally optimized for sharing; their structures, however, don't tend to distinguish between good information and bad. Which means that sites like Facebook aren't just great repositories for updates from your friends and pictures of your dog; they can also be breeding grounds for rumors, lies, and conspiracy theories. "False information," write Walter Quattrociocchi and a group of colleagues at Northeastern University, "is particularly pervasive on social media, fostering sometimes a sort of collective credulity."
That's the assumption, anyway. And Quattrociocchi and his colleagues wanted to test it—with a focus on Facebook. On its platform, they wondered, are there meaningful differences in the ways we interact with information that is true ... and information that is false?
The most obvious benefit of research conducted on Facebook is that it gives you not just tons of data to work with, but also tons of structured data. Quattrociocchi and his team took advantage of that as they studied the patterns that emerged among more than 2.3 million Facebook users in Italy. To get a baseline measure of those users' gullibility, they analyzed how those people treated political information posted to the network during the country's 2013 elections. Based on Likes and comments, they tracked how those users engaged with news from three main categories of sources: mainstream news organizations, alternative news organizations (sites that share "topics that are neglected by science and mainstream media"), and pages devoted to political commentary.