Notes
First thoughts, running arguments, stories in progress

Recently, I reported that a large team of scientists had tried to replicate the results of 100 psychological experiments—and failed in most cases.

Five days later, Lisa Feldman Barrett, a professor of psychology at Northeastern University, took to the New York Times to defend her field. In an op-ed, she argued that “contrary to the implication of the Reproducibility Project, there is no replication crisis in psychology,” and that so-called ‘failures’ to replicate a study are “a normal part of how science works.”

Do they discredit the original studies?

Hardly. Instead, they tell us that the original phenomena only appear in certain contexts. Subtle, unaccounted-for factors—the characteristics of the volunteers or the skill of the researchers—might lead to very different results from two seemingly identical experiments.

I made similar points in my piece. But Barrett’s op-ed attributes unsuccessful replications almost entirely to these contextual differences, and there I take issue (as did some other psychologists).

For a start, as Dorothy Bishop from the University of Oxford noted on Twitter, it “raises [the question] of how seriously to take findings that depend so precisely on conditions.” In other words, if the results are delicate wilting flowers that only bloom under the care of certain experimenters, how relevant are they to the messy, noisy, chaotic world outside the lab?

Worse, Barrett’s piece ignores empirical evidence. In 2012, Leslie John from Harvard Business School surveyed more than 2,100 psychologists and found worrying levels of so-called “questionable research practices.” More than 40 percent selectively reported studies that “worked.” More than half admitted to checking the statistical significance of their results before deciding whether to collect more data. These practices pollute the scientific literature with false positives and, according to the freely volunteered information that John collected, they are not only common in psychology, but largely accepted—most of her respondents thought they were defensible.
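To see why that second practice inflates false positives, here is a minimal simulation of my own (not from John’s survey or anything in the article). Both groups are drawn from the same distribution, so any “significant” difference is pure noise—yet peeking at the p-value and topping up the sample whenever the result isn’t yet significant produces spurious findings far more often than the nominal 5 percent would suggest:

```python
# A sketch of "optional stopping": check significance, collect more
# data if the result isn't significant yet, and repeat. Both groups
# come from the SAME distribution, so every "hit" is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def peeking_experiment(start_n=20, step=10, max_n=100, alpha=0.05):
    """Return True if any peek along the way crosses p < alpha."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) <= max_n:
        _, p = ttest_ind(a, b)
        if p < alpha:
            return True                  # stop and report "significance"
        a.extend(rng.normal(size=step))  # otherwise, collect more data
        b.extend(rng.normal(size=step))
    return False

trials = 5000
hits = sum(peeking_experiment() for _ in range(trials))
print(f"False-positive rate with peeking: {hits / trials:.3f}")
# Typically lands well above the nominal 0.05 threshold.
```

The sample sizes and stopping rule here are arbitrary, but the pattern holds generally: the more times a researcher peeks, the further the false-positive rate drifts above the advertised 5 percent.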

That’s a problem! It’s not something to sweep lightly under the rug!

Yes, context matters, and sure, failures to replicate are a normal part of science. But that’s neither a universal absolution nor a reason for complacency, especially when there’s evidence of wider problems. We can debate whether to call it a replication crisis or something milder, but as Brian Nosek, the head of the Reproducibility Project, told me:

There may be other reasons why [studies] didn’t replicate, but this does mean that we don’t understand those reasons as well as we think we do. We can’t ignore that. We have data that says: We can do better.
