The Evolution of Shaming
A new study suggests that we punish people who haven’t directly wronged us to signal our own trustworthiness.
A communications director sends an ill-advised tweet before boarding a plane. A dentist kills a lion. Donald Trump says pretty much anything. In all of these cases, the Internet whips itself into a frenzy of judgment, outrage, and opprobrium. To some, this repeated pattern of online shaming is a sign of mob behavior gone horribly wrong; to others, it’s a sign of social progress and underrepresented voices finally being heard. But to Jillian Jordan from Yale University, it’s a clue about a universal human behavior called third-party punishment.
Third-party punishment happens when we punish people who behave badly and violate social rules, even when their actions don’t directly affect us. All cultures show it to varying degrees (chimpanzees, by contrast, do not). It starts early: eight-month-old babies will gravitate towards a nasty moose puppet if it punishes an unhelpful elephant. And it usually comes with costs: whistleblowers risk their careers, protesters face arrest and beatings, people sending disapproving tweets can get doxxed and harassed, and more generally, punishers lose time, energy, and social relationships.
So why bother? Why censure someone who hasn’t harmed us directly?
Some scientists have suggested that it helps to cement human societies together by enforcing social norms and discouraging selfishness or bad behavior. If so, groups that practice third-party punishment should do better than those that do not. That may be true, but collective benefits don’t explain why individuals choose to incur the cost of punishment. Why doesn’t any one person just sit back and let others punish?
In online shaming, Jordan saw a clue. “I started thinking about friends I knew who were involved in social justice,” she says. “There was a lot of moralistic speech that seemed like it was focused on communicating one’s own position.” In other words, maybe third-party punishment is primarily a signal that tells onlookers that you are trustworthy, in the same way that a peacock’s tail or a stag’s antlers signal their owner’s genetic quality. It says: If I’m willing to punish selfishness, you know I’m not going to act selfishly to you.
This only works if punishing is an honest signal of trustworthiness, if those who do it are actually more trustworthy than those who don’t. Jordan argues that this is the case because the same factors that incentivize people to actually be trustworthy also incentivize them to punish others who behave badly. For example, you might be more likely to treat peers well if you interact with them repeatedly (contrast a permanent colleague with a summer intern) or if you belong to an institution that enforces codes of conduct (like the military or religious institutions). In these situations, you also gain more benefits from punishing (because you’re signaling your stance to a large group of long-term peers) and pay fewer costs (since more people have your back).
Together with David Rand, a psychologist who studies cooperation, Jordan tested these ideas by recruiting hundreds of volunteers through Amazon’s Mechanical Turk, and having them play a game of trust in two stages. In phase one, a Helper can decide whether to share money with a Recipient; if they’re selfish about it, a Punisher can decide to penalize them. A Chooser watches all of this. In phase two, the Chooser gets a pot of money and can invest part of it with the Punisher. That investment gets tripled, and the Punisher decides how much of it to return to the Chooser. So the Chooser must evaluate how much they trust the Punisher, based on what the Punisher did in the first game.
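The structure of that two-stage game can be sketched in code. This is only an illustration of the payoffs described above: the endowments, the cost of punishing, and the size of the penalty below are made-up numbers, not the study’s actual stakes.

```python
def phase_one(helper_shares: bool, punisher_punishes: bool):
    """Stage 1: the Helper chooses whether to share with the Recipient.
    If the Helper is selfish, the Punisher can pay a cost to penalize them.
    Returns (helper, recipient, punisher) payoffs. Amounts are hypothetical."""
    helper, recipient, punisher = 2.0, 0.0, 1.0  # assumed endowments
    if helper_shares:
        helper -= 1.0
        recipient += 1.0
    elif punisher_punishes:
        punisher -= 0.2   # punishing is costly to the Punisher
        helper -= 0.6     # penalty levied on the selfish Helper
    return helper, recipient, punisher


def phase_two(investment: float, return_fraction: float):
    """Stage 2: the Chooser invests with the Punisher; the investment is
    tripled, and the Punisher decides what fraction to send back.
    Returns (chooser, punisher) payoffs from this stage."""
    tripled = investment * 3
    returned = tripled * return_fraction
    chooser_payoff = -investment + returned
    punisher_payoff = tripled - returned
    return chooser_payoff, punisher_payoff


# A Chooser who invests $1 with a Punisher who returns half the tripled
# pot nets $0.50 -- trust pays off only if the Punisher reciprocates.
print(phase_two(1.0, 0.5))   # -> (0.5, 1.5)
```

The Chooser’s dilemma is visible in `phase_two`: investing more raises the potential gain but leaves the Chooser entirely at the Punisher’s mercy, which is why the Punisher’s behavior in phase one matters as a signal.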
Jordan found that the Choosers sent more money to the Punishers if they actually punished the selfish Helpers. “They treated punishment as a sign that you’re likely to be nice,” she says. And they were right to do so because the Punishers who punished ended up returning more money to the Choosers. They were, indeed, more trustworthy.
Jordan then replayed the experiments with a twist. This time, in phase two, the Choosers played with either the Helpers or the Punishers from phase one. In this set-up, punishing is no longer the only signal of trustworthiness; helping can convey the same information. “We predicted that people should be less inclined to punish if they have the opportunity to look good in another way,” says Rand. And they were right: This time, the Choosers were no longer swayed by punishment, and the Punishers were less likely to dole it out.
“This shows that people aren’t solely punishing because they want to see the selfish people harmed,” says Jordan. “They want to signal that they’re trustworthy. If there’s a better way of doing that, they won’t punish.” Likewise, from the Chooser’s point of view, “It’s not inherently about rewarding punishment, no matter what. It’s about predicting who will be trustworthy.”
“This clarifies why some studies show that punishers benefit and other studies do not: Helping others is a better signal of one’s cooperation, so people use that as a signal when possible,” says Pat Barclay at the University of Guelph.
Barclay and others have predicted that third-party punishment can be a reliable signal of cooperative intent, “but this paper provides empirical support for that prediction,” says Nichola Raihani from University College London. The team also enshrined their ideas in a mathematical model that simulates how punishment affects virtual people when playing games of trust. The results from that model closely match those from the actual experiment, which suggests that it’s a useful tool for exploring the evolution of punishment even further.
Jordan and Rand caution that this is an evolutionary perspective, about why third-party punishment arose in humans in the first place. It doesn’t suggest that people who show outrage, online or otherwise, are doing so because of cold calculations and self-interest. “We’re not calling people liars, like they say they care but really don’t,” says Jordan. “People genuinely feel outrage and moral anger. But at least part of why they care is that it gets them reputational benefits.”
“That also helps to explain why people get pissed off even when the wrong that was done was accidental,” she adds. “It’s hard to explain that if you think that the reason for punishing is for the good of the group.”
But punishment has reputational costs too. Although experiments have shown that people entrust more money to punishers, “most research says that people don’t seem to like punishers more than non-punishers,” says Barclay. “They trust punishers to do the right thing, but don’t particularly like them, possibly out of fear of being punished themselves.”
And Sarah Mathew from Arizona State University says the team’s conclusions may not apply broadly. They certainly don’t jibe with her experiences of working with Turkana pastoralists in East Africa. “Among the Turkana, those who don’t punish are talked about as free-riders,” she says. They’re billed as “useless” people who don’t contribute to the community. “People may care about their reputation as punishers, not because it is a form of signaling, but because punishment is one domain of cooperation, just like providing aid or participating in warfare.”
She adds that it’s misleading to study the origins of third-party punishment by looking at large states. In such societies, formal institutions like the legal system do the heavy lifting of maintaining social order, which changes the way we view individuals who dole out punishment. And for most of our evolutionary history, such institutions didn’t exist. “It’s a bit like drawing conclusions about human diet by studying people practicing agriculture,” Mathew says.