Last August, I wrote about a large initiative called the Reproducibility Project, led by Brian Nosek from the University of Virginia. The project members collectively repeated 100 published psychological experiments and replicated the results of just a third of them. It was an alarming figure, which fed into what has become something of a civil war among psychologists. On one side are those who say that the field is experiencing a “replicability crisis,” where many of the most cherished results may not actually be true. On the other are those who argue that no such crisis exists, and that psychology is in rude health. (If you want to catch up, this paper by Bobbie Spellman is the single best summary of everything thus far.)
Four members of that second camp, including Harvard University's Daniel Gilbert, hit back yesterday with a comment that challenged the methods and statistical analyses of the Reproducibility Project, and put forward a much more optimistic take on the state of psychology. Katie Palmer at Wired has the best account of the debate, capturing its technical details as well as its spirit. This has, after all, always been as much about personalities as it has been about statistics. Consider, for example:
“Emotions are running high. Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions. Science hates that. This is how beleaguered Gilbert feels: When I asked if he thought his defensiveness might have colored his interpretation of this data, he hung up on me.”
Technical discussion aside, I want to make two points here. First, the Reproducibility Project is far from the only line of evidence for psychology's problems. There's the growing list of failures to replicate textbook phenomena. There's publication bias: the tendency to publish only studies with positive results while dismissing those with negative ones. There's evidence of questionable research practices that are widespread and condoned.