But Simine Vazire from the University of California, Davis says that there’s not much evidence that incompetent replicators are the source of psychology’s problems. For example, in the Many Labs project, where 36 groups repeated 13 earlier studies, there were only small differences in the results from different labs. “It’s a problem if science only replicates when the original authors get to hand-pick the replicators,” she says. “I don’t think it’s a good precedent to set.”
A fair point, Uhlmann concedes. Then again, he asked the replicators to pre-register their studies. They detailed every aspect of the experiments and uploaded their plans for anyone to see. With such transparency, “it is difficult to see how a replicator could artificially bias the result in favor of the original finding without committing outright fraud,” he writes.
Absent such biases, the most likely explanations for his four irreproducible experiments are simple. The tipping effect was unique to the U.S., and can’t be generalized to other countries. The other three effects were probably false positives—the kind that often show up in small, underpowered studies. “Even when everything’s public and you have expert replicators, you’ll have an imperfect replicability rate, because that’s just science,” says Uhlmann.
He hopes that these pre-publication initiatives will become more common, from simple study swaps to complex daisy chains of replications, as suggested by Nobel laureate Daniel Kahneman. He is also trying to couple pipeline projects to university courses by turning undergraduate and graduate students into a network of replicators. “Any researcher can email me their study and ask to have it replicated by students at a number of different universities,” he says.
That would only work for cheap, simple studies—arguably the biggest limitation to the pipeline approach. “It’s a great example of what can be done for research that is economical to replicate,” says Brian Nosek from the Center for Open Science. “It would be great to extend such a process to more specialized procedures, intensive data collections, and difficult-to-sample populations.”
“It might be possible to create an online market where people can post studies, indicate the expertise that’s required, and match up to replicator pools,” says Uhlmann. “The trick there would be to find ways of incentivizing authors to submit their studies. Journals could establish a premium where they’re more likely to accept something if it’s pre-replicated.”
“I hope more authors will subject their work to this kind of test,” adds Vazire. “That would be a huge advance for psychology.”