Now the test begins. Whenever the subject gives an incorrect answer, she is given a powerful jolt of electricity. The witnesses watching on television see her writhe in pain and hear her scream. They think she is being tortured.
One group of volunteers is now given a choice: they can transfer the shocked subject to a different learning paradigm, in which she receives positive reinforcement instead of painful punishment. Not surprisingly, the vast majority of people choose to end the torture. They quickly act to rectify the injustice. When asked about the "learner," they describe her as an innocent victim who didn't deserve to be shocked. That's why they saved her.
The other group of subjects, however, isn't allowed to rescue the volunteer undergoing the test. Instead, they are told a variety of different stories about the victim. Some are told that she will receive nothing in return for being tortured; others are told that she will be paid for her participation. And a final group is given the martyr scenario, in which the victim submits to a second round of torture so that the other volunteers might benefit from her pain. She is literally sacrificing herself for the group.
How did these different narratives affect the subjects' view of the victim? All of the volunteers watched the exact same video of torture. They saw the same poor woman subjected to the same painful shocks. And yet the stories powerfully influenced their conclusions about her character.
Here is the most disturbing data point: the less money the volunteer received in compensation for her suffering, the more the subjects disliked her. The people explained the woeful injustice by assuming that it was her own fault: she was shocked because she wasn't paying attention, or because she was incapable of learning, or because the pain would help her perform better. The martyrs fared even worse. Even though this victim was supposedly performing an act of altruism - she was suffering for the sake of others - the witnesses thought she was the most culpable of all. Her pain was proof of her guilt. Lerner's conclusion was unsettling: "The sight of an innocent person suffering without possibility of reward or compensation motivated people to devalue the attractiveness of the victim in order to bring about a more appropriate fit between her fate and her character."
The moral of the Just World Hypothesis is that people have a powerful intuition that the world is just and that people get what they deserve. While I’m sure this instinct makes all sorts of social contracts possible, it also leads to one very troubling tendency: we often rationalize injustices away so that we can maintain our naive belief in a just world. This, I believe, is what happens when we read about innocent people getting sent to Guantanamo, or the wrong immigrant getting waterboarded, and why it’s so easy to brush aside calls for prison reform. We might acknowledge the awfulness of the error, but then quip that he shouldn’t have been hanging around with the Taliban, or that the guy who got sent to prison for a crime he didn’t commit was still a creep, or that the Madoff victims should have known their money manager was a fraud. In other words, we act like the subjects in Lerner's experiment who blamed the innocent volunteer, searching for reasons why the wrongfully treated deserved what they got. Subsequent studies have found that people with "a strong tendency to believe in a just world" tend to exhibit certain characteristics: they're much more likely to admire political leaders and existing social institutions, and to have negative attitudes toward underprivileged groups. Furthermore, they "feel less of a need to engage in activities to change society or to alleviate the plight of social victims."
Is there any way out of this cognitive trap? The only remedy I can think of is education: people must be shocked out of their complacency. After all, if an honest man can get executed, then maybe the world isn’t so just.