Peer Review Is No Panacea

Over the past few years, I've encountered what seems like a relatively new phenomenon in the blog world--though perhaps I've only recently begun to notice it.  Whenever an objection is raised to some paper, its supporters respond that it's peer-reviewed, as if peer review were some sort of magic incantation that prevented wrong papers from getting published.  (They cannot actually believe this, since they have plenty of quibbles with peer-reviewed papers that disagree with them--but this is how the term is deployed.)

Derek Lowe points out just how weak a tool peer review is in many cases:

So what to do with work that's mostly reference data for the future? It shouldn't have to appear in physical print, you'd think. How about the peer-reviewed journal part? Well, peer review is not magic. As it stands, that sort of information is the least-reviewed part of most papers. If someone tells you that they've made Compound X and Compound Y, and the synthesis isn't obviously crazy, you tend to take their word for it. It's a rare reviewer that gets all the way down to the NMR spectra in the supplementary material, that's for sure. And if one does, and the NMR spectra look reasonably believable, well, what else can you do? Even so, every working chemist has dealt with literature whose procedures Just Don't Work, and all those papers passed some sort of editorial review process at some point.

No, peer review is not going to do much to improve the quality of archival data. If someone really wants to fill up the low-level bins with junk, there's not much stopping them. You could sit down and draw out a bunch of stuff no one's ever made before, come up with plausible paper syntheses of all of it, use software to predict reasonable NMR spectra (which you might want to jitter around a bit to cover your tracks), and just flat-out fake the mass spec and elemental analyses. Presto, another paper that no one will ever read, until eventually someone has a reason to make similar compounds and curses your name in the distant future. The problem is, such papers will do you no real good, since they'll appear in the crappiest journals and pick up no citations from anyone.

Lowe is talking about chemistry, but the observation is widely applicable.  Especially for papers that rely on empirical work with painstakingly assembled datasets, the only way for peer reviewers to do the kind of thorough vetting that many commentators seem to imagine is implied by the words "peer review" would be to . . . well, go back and re-do the whole thing.  Obviously, this is not what happens.  Peer reviewers check for obvious anomalies, originality, and broad methodological weaknesses.  They don't replicate the work themselves.  Which means that there is immense space for things to go wrong--intentionally or not.

After all, Michael Bellesiles' central work--the probate-record research behind Arming America--was all peer reviewed, and it passed with flying colors even though some of the numbers in one of his most important tables did not add up correctly.  One could argue that Bellesiles' fatal mistake was to get greedy--to fake a controversial thesis capable of winning him awards.  Gun ownership in early America was an area that a lot of people were already working in, and one that many more got interested in once Bellesiles took such a bold stand.  Both groups were bound to attempt to replicate his work.  If he'd been satisfied to grind out a series of fake papers on less well-trafficked subjects, he might not have been found out for decades, if at all.

This is not to say that the peer review system is worthless.  But it's limited.  Peer review doesn't prove that a paper is right; it doesn't even prove that the paper is any good (and it may serve as a gatekeeper that shuts out good, correct papers that don't sit well with the field's current establishment for one reason or another).  All it proves is that the paper has cleared the most basic hurdles required to get published--that it be potentially interesting, and not obviously false.  This may commend it to our attention--but not to our instant belief.