Nonetheless, roughly 38,000 Twitter users shared the story. And it put together a retweet chain three times as long as the sick-child story managed.
A false story alleging the boxer Floyd Mayweather had worn a Muslim head scarf to a Trump rally also reached an audience more than 10 times the size of the sick-child story.
Why does falsehood do so well? The MIT team settled on two hypotheses.
First, fake news seems to be more “novel” than real news. Falsehoods often differ markedly from all the tweets that appeared in a user’s timeline in the 60 days before the user retweeted them, the team found.
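To make that idea concrete, novelty can be treated as a distance between what a tweet says and what a user has recently seen. The study’s actual pipeline is more sophisticated (it compares topic distributions), but a minimal sketch of the same intuition, using simple bag-of-words vectors and cosine similarity, with hypothetical data, might look like this:

```python
# Illustrative sketch only: one simple way to score the "novelty" of a tweet
# against a user's prior 60-day timeline. The study used more sophisticated
# topic-based distance measures; the data and function names here are invented.
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    """Lowercased word counts for a piece of text."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if nor_a := norm_a * norm_b:
        return dot / norm_a / norm_b
    return 0.0

def novelty(tweet: str, timeline: list[str]) -> float:
    """Novelty = 1 minus the max similarity to anything recently seen.
    `timeline` stands in for the tweets from the prior 60 days."""
    if not timeline:
        return 1.0
    best = max(cosine_similarity(bag_of_words(tweet), bag_of_words(t))
               for t in timeline)
    return 1.0 - best

# A falsehood sharing little vocabulary with the recent timeline scores as novel:
recent = ["the senate passed the budget bill today",
          "rain expected across the northeast this weekend"]
print(novelty("boxer wears muslim head scarf to rally", recent))  # close to 1.0
```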
Second, fake news evokes much more emotion than the average tweet. The researchers compiled a database of the words Twitter users employed in replies to the 126,000 contested tweets, then analyzed it with a state-of-the-art sentiment-analysis tool. Fake tweets tended to elicit words associated with surprise and disgust, while accurate tweets summoned words associated with sadness and trust, they found.
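At its core, this comparison amounts to mapping reply words onto emotion categories and tallying them. A toy version of that tallying, using a tiny invented lexicon rather than the full word lists behind the researchers’ tool, might look like this:

```python
# Toy sketch of lexicon-based emotion tallying for reply text. The miniature
# EMOTION_LEXICON below is invented for illustration; a real sentiment tool
# draws on far larger word lists.
from collections import Counter

EMOTION_LEXICON = {
    "shocking": "surprise", "unbelievable": "surprise",
    "gross": "disgust", "vile": "disgust",
    "tragic": "sadness", "heartbreaking": "sadness",
    "reliable": "trust", "confirmed": "trust",
}

def emotion_profile(replies: list[str]) -> Counter:
    """Count how often each emotion category appears across reply words."""
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
            if emotion:
                counts[emotion] += 1
    return counts

replies_to_fake = ["Shocking! Unbelievable.", "This is vile."]
replies_to_true = ["Tragic news.", "Confirmed by a reliable source."]
print(emotion_profile(replies_to_fake))  # surprise and disgust dominate
print(emotion_profile(replies_to_true))  # sadness and trust dominate
```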
The team wanted to answer one more question: Were Twitter bots helping to spread misinformation?
After running two different bot-detection algorithms on their sample of 3 million Twitter users, they found that automated bots were indeed spreading false news, but they retweeted it at the same rate that they retweeted accurate information.
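In practice, that check amounts to scoring each account for bot likelihood, re-running the spread comparison with likely bots removed, and seeing whether the gap between false and true news survives. A schematic of such a robustness test, with a placeholder `bot_score` function and made-up data standing in for the real detection algorithms:

```python
# Schematic robustness check: does removing likely bots change how much more
# false news spreads than true news? `bot_score` is a placeholder for a real
# bot-detection algorithm; Tweet and its fields are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Tweet:
    author: str
    is_false: bool       # labeled via fact-checking organizations
    retweets: int

def bot_score(author: str) -> float:
    """Placeholder: a real detector returns a 0-1 bot likelihood."""
    return 0.9 if author.startswith("bot_") else 0.1

def spread_gap(tweets: list[Tweet], exclude_bots: bool,
               threshold: float = 0.5) -> float:
    """Mean retweets of false tweets minus true tweets,
    optionally dropping tweets from likely-bot authors."""
    kept = [t for t in tweets
            if not (exclude_bots and bot_score(t.author) > threshold)]
    false_mean = mean(t.retweets for t in kept if t.is_false)
    true_mean = mean(t.retweets for t in kept if not t.is_false)
    return false_mean - true_mean

sample = [Tweet("alice", True, 900), Tweet("bob", False, 80),
          Tweet("bot_x", True, 950), Tweet("bot_y", False, 90)]
# If the gap persists with bots excluded, bots alone can't explain it.
print(spread_gap(sample, exclude_bots=False))
print(spread_gap(sample, exclude_bots=True))
```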
“The massive differences in how true and false news spreads on Twitter cannot be explained by the presence of bots,” Aral told me.
But some political scientists cautioned that this finding should not be taken as disproving the role of Russian bots in recent disinformation campaigns. An “army” of Russian-associated bots helped amplify divisive rhetoric after the school shooting in Parkland, Florida, The New York Times has reported.
“It can both be the case that (1) over the whole 10-year data set, bots don’t favor false propaganda and (2) in a recent subset of cases, botnets have been strategically deployed to spread the reach of false propaganda claims,” said Dave Karpf, a political scientist at George Washington University, in an email.
“My guess is that the paper is going to get picked up as ‘scientific proof that bots don’t really matter!’ And this paper does indeed show that, if we’re looking at the full life span of Twitter. But the real bots debate assumes that their usage has recently escalated because strategic actors have poured resources into their use. This paper doesn’t refute that assumption,” he said.
Vosoughi agrees that his paper does not determine whether the use of botnets changed around the 2016 election. “We did not study the change in the role of bots across time,” he told me in an email. “This is an interesting question and one that we will probably look at in future work.”
Some political scientists also questioned the study’s definition of “news.” By turning to fact-checking sites, the study blurs together a wide range of false information: outright lies, urban legends, hoaxes, spoofs, falsehoods, and “fake news.” It does not examine fake news in the narrow sense alone: articles or videos that look like news content, and that appear to have gone through a journalistic process, but that are actually made up.