Psychology’s Replication Crisis Can’t Be Wished Away

It has a real and heartbreaking cost.


Last August, I wrote about a large initiative called the Reproducibility Project, led by Brian Nosek from the University of Virginia. The project members collectively repeated 100 published psychological experiments and replicated the results of just a third of them. It was an alarming figure, which fed into what has become something of a civil war among psychologists. On one side are those who say that the field is experiencing a “replicability crisis,” where many of the most cherished results may not actually be true. On the other are those who argue that no such crisis exists, and that psychology is in rude health. (If you want to catch up, this paper by Bobbie Spellman is the single best summary of everything thus far.)

Four members of that second camp, including Harvard University’s Daniel Gilbert, hit back yesterday with a comment that challenged the methods and statistical analyses of the Reproducibility Project, and put forward a much more optimistic take on the state of psychology. Katie Palmer at Wired had the best take on the debate, capturing its technical details as well as its spirit. This has, after all, always been as much about personalities as it has been about statistics. Consider, for example:

“Emotions are running high. Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions. Science hates that. This is how beleaguered Gilbert feels: When I asked if he thought his defensiveness might have colored his interpretation of this data, he hung up on me.”

Technical discussion aside, I want to make two points here. First, the Reproducibility Project is far from the only line of evidence for psychology’s problems. There’s the growing list of failures to replicate textbook phenomena. There’s publication bias—the tendency to only publish studies with positive results, while dismissing those with negative ones. There’s evidence of questionable research practices that are widespread and condoned.

Second, it can be very easy to see this as an academic spat about turgid statistical matters like p-values, and degrees of freedom, and publication bias. It’s not. It’s about people’s lives. Their careers. Their passions. Their futures. Of all the things I’ve read (or written) about the (alleged) replicability crisis, few have driven this point home better than a post from Michael Inzlicht at the University of Toronto, published Monday. It is unguarded, humane, and heartbreaking.

“To be clear: I am in love with social psychology. I am writing here because I am still in love with social psychology. Yet, I am dismayed that so many of us are dismissing or justifying all those small (and not so small) signs that things are just not right, that things are not what they seem. “Carry-on, folks, nothing to see here,” is what some of us seem to be saying. Our problems are not small and they will not be remedied by small fixes. Our problems are systemic and they are at the core of how we conduct our science.”

He continues, with an astonishing level of frankness from someone who has everything to lose from acknowledging the existence of replicability problems and is doing so anyway:

“As someone who has been doing research for nearly twenty years, I now can’t help but wonder if the topics I chose to study are in fact real and robust. Have I been chasing puffs of smoke for all these years? I have spent nearly a decade working on the concept of ego depletion, including work that is critical of the model used to explain the phenomenon. I have been rewarded for this work, and I am convinced that the main reason I get any invitations to speak at colloquia and brown-bags these days is because of this work. The problem is that ego depletion might not even be a thing.”

In years of reporting on this unfolding story, I have spoken to many psychologists who feel the same: lecturers who don’t know what to tell their students, students who are unsure what research to build upon, and professors who are watching the academic ground give way beneath their feet. But I also see many psychologists who are trying to make things better. As Inzlicht says:

“What is not helping is a reluctance to dig into our past and ask what needs revisiting. Time is nigh to reckon with our past. Our future just might depend on it.”

Crisis or not, if we end up with a more rigorous approach to science, and more confidence in what it tells us, surely that is a good thing?