Facebook representatives have been hauled before Congress three times in the past year—including testimony this week from Sheryl Sandberg—to answer uncomfortable questions about technology’s role in the spread of misinformation and its threat to U.S. democracy. But those questions aren’t the extent of the company’s public-relations problems. Facebook has also been accused of playing a role in political strife and even violence around the world, from reportedly enabling arms dealing in Libya and the propagation of conspiracy theories in the Philippines to allegedly helping fuel anti-immigrant violence on the streets of Germany.
The case against the world’s biggest social media platform is rapidly gaining momentum. But just how much concern is warranted? A recent article in The New York Times seems to suggest that the evidence is already in, and the link between Facebook and communal violence is real. The report, concerning attacks on immigrants in Germany, began with an anecdote: “When you ask locals why Dirk Denkhaus, a young firefighter trainee who had been considered neither dangerous nor political, broke into the attic of a refugee group house and tried to set it on fire, they will list the familiar issues.” The report alluded to those issues: economic decline, disillusionment, boredom, and the rise of fringe politics. It then added:
But they’ll often mention another factor not typically associated with Germany’s spate of anti-refugee violence: Facebook. Everyone here has seen Facebook rumors portraying refugees as a threat. They’ve encountered racist vitriol on local pages, a jarring contrast with Altena’s [a town in western Germany] public spaces, where people wave warmly to refugee families.
The authors cite the suspicion among locals that Denkhaus had “isolated himself in an online world of fear and anger that helped lead him to violence.”
What is striking about the Times report, however, is that it goes beyond mere anecdote and suspicion and relays the findings of what it describes as “a landmark study that claims to prove” that Facebook “makes communities more prone to racial violence.” But does it actually prove it? And how conclusive is the link between online hate and violence, really?
Karsten Müller and Carlo Schwarz, the study’s authors, advance two big empirical claims. The first, which is moderate and perfectly agreeable, is that “social media echo chambers can reinforce anti-refugee sentiments.” The second, which is acutely controversial, is that social media echo chambers not only reinforce anti-refugee sentiments in Germany, but can actually drive anti-refugee crime. As Müller and Schwarz put it, the echo chambers “may push some potential perpetrators over the edge to carry out violent acts.” They conclude with the suggestion that “social media has not only become a fertile soil for the spread of hateful ideas but also motivates real-life action.” Their study, which draws on existing research on mass media and persuasion, is the largest empirical investigation to date of the link between Facebook usage and violence.
According to Müller and Schwarz, “right-wing anti-refugee sentiment on Facebook predicts violent crimes against refugees in municipalities with higher social media usage.” The main evidence for this is that in municipalities where internet users are active on the Facebook page of Alternative for Germany (AfD), the largest far-right party in Germany, hate crimes against immigrants are disproportionately high.
If you are inclined to think that online hate speech is so toxic that it drives hate-based violence in the real world, whether it wears a far-right mask or a jihadist one, then it would be tempting to see the study as knock-down evidence for the link. But as the authors themselves acknowledged to me, the evidence is inconclusive. The relationship between ideas and rhetoric on the one hand and actual behavior or deeds on the other remains dismayingly opaque, and doggedly resistant to empirical testing. Are people really animated to act by rhetoric, hateful or otherwise, or do they just invoke it after the act, for the purposes of rationalization? This is still very much contested among scholars.
As far as anti-immigrant violence in Germany is concerned, it is eminently possible that anti-immigrant crime itself may drive online hate speech, rather than hate speech driving anti-immigrant crime. If hate speech increases in places where anti-immigrant attacks are disproportionately high, it could just as well be because people are primed to talk about those attacks, particularly if they are anxious about immigration or are hateful toward immigrants. Terrorist atrocities are similarly propulsive, as we have seen with ISIS, where there is a huge upsurge in social media activity among its fanboys whenever what looks like an ISIS-inspired attack occurs, regardless of whether it is in fact ISIS-inspired or even a terrorist attack. This is part of a deeper social dynamic that the great French sociologist Émile Durkheim recognized long ago: that crime incites a passionate reaction in wider society, drawing people together so that they can “wax indignant in common,” either for or against the perpetrator.
In Durkheim’s day, in the late 19th and early 20th centuries, people did this in their homes and on the street. They still do, but now they also have Facebook and other social media platforms where, under the cover of anonymity, they can spew the most hateful bile imaginable.
Müller told me via email that “our study design does not allow us to say how much of the development in hate crimes can be accounted for by things happening on social media” and that the study’s numbers are “subject to many caveats.”
Müller emphasized that “any causal interpretation of our findings hinges on the results for internet and Facebook disruptions,” adding: “We find that, during such outages, the correlation between local hate crimes on one hand, and the interaction of the sentiment measure and local social media usage on the other, is essentially zero. This implies that at least some of the correlation we are capturing reflects a causal effect.” Or as the Times summarizes it: “Whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.” Which in turn might suggest a kind of drip-feed effect, whereby a cut in the supply of hateful online rhetoric could stop some real-world hate crime in its tracks.
This would be reductive, to say the least. That isn’t to deny that Facebook and other social media platforms can facilitate violence by allowing people to disseminate beliefs and rhetoric that legitimize it, nor is it to ignore the crucial role they can play in helping violent activists coordinate and mobilize to carry out attacks. But it is far from clear that Facebook itself can supply the motive for people to act violently. A similar debate can be found among terrorism-studies scholars over whether exposure to online terrorist propaganda can radicalize those exposed to it. The consensus is that while sustained exposure may reinforce beliefs that are already extreme, it is unlikely, by itself, to cause radicalization, let alone “push” people to act on their violent beliefs.
And notice the inherent determinism of the “push” metaphor. Are we really to believe that this happens, as if people have no say in the matter or any active desire to be pushed? And even if we allow that they are pushed, how does the pushing mechanism work? Is it a single quick push, or a cumulative series of mini-pushes? More specifically, just how psychologically invested in a violent ideology do you have to be before you are pushed or allow yourself to be pushed? A little, a lot, or not at all? And, finally, why do the majority of those who consume and circulate online hate speech refrain from implementing its hysterical demands and incitements? If online hate speech were so causally combustible, you would expect to see far more hate crimes than actually occur, given its massive and ugly prominence across social media. Which raises the possibility that some hateful people refrain from violently acting out their hatreds in the real world because the online, fantasy variant allows them to vent their indignation and cathartically release some of their hateful sentiments.
These questions should be at the heart of the debate about the role of social media in the contemporary world, yet they are too often crowded out by those who think that hateful words are not only deeds but also the root cause of deeds far graver.