Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed to "a variety of other questionable research practices," including "dropping data points based on a gut feeling" and "changing the design, methodology or results of a study in response to pressures from a funding source" [4].
As for why these practices are so prevalent, many scientists blame increased competition for academic jobs and research funding, combined with a “publish or perish” culture. Because journals are more likely to accept studies reporting “positive” results (those that support, rather than refute, a hypothesis), researchers may have an incentive to “cook” or “mine” their data to generate a positive finding. Such publication bias is not in itself news—back in 1987, a study found that, compared with research trials that went unpublished, those that were published were three times as likely to have positive results [5]. But the bias does seem to be getting stronger: a more recent study of 4,600 research papers found that from 1990 to 2007, the proportion of positive results grew by 22 percent [6].
Of course, rising retraction rates also reflect the fact that scientists, journalists, and amateur watchdogs have begun scrutinizing research more closely. New data-analysis tools play a part, as does plagiarism-detecting software [7]. So do a number of ambitious recent efforts to replicate findings—with dismaying results. In 2012, a researcher then at the biotechnology company Amgen wrote in Nature that when his team tried to reproduce 53 landmark cancer studies, they could replicate just six [8]. And according to a news report in Nature, a project aiming to reproduce the findings of 100 psychology papers has managed to replicate results for only 39 of them (the project’s findings are still under peer review) [9].
This heightened scrutiny (the very scrutiny that likely contributed to the surge in retractions in the first place) could help reverse the tide by providing a powerful disincentive to bad behavior. As more scientific misconduct is exposed and shamed, researchers who were previously tempted to play fast and loose with their data may now think twice.
The Studies:
[1] McNutt, "Retraction of LaCour and Green" (Science, June 2015)
[2] Grieneisen and Zhang, “A Comprehensive Survey of Retracted Articles From the Scholarly Literature” (PLOS One, Oct. 2012)
[3] Fang et al., “Misconduct Accounts for the Majority of Retracted Scientific Publications” (Proceedings of the National Academy of Sciences, Oct. 2012)
[4] Fanelli, "How Many Scientists Fabricate and Falsify Research?" (PLOS One, May 2009)
[5] Dickersin et al., “Publication Bias and Clinical Trials” (Controlled Clinical Trials, Dec. 1987)
[6] Fanelli, “Negative Results Are Disappearing From Most Disciplines and Countries” (Scientometrics, Sept. 2011)
[7] Giles, “Special Report: Taking on the Cheats” (Nature, May 2005)
[8] Begley and Ellis, “Drug Development: Raise Standards for Preclinical Cancer Research” (Nature, March 2012)
[9] Baker, “First Results From Psychology’s Largest Reproducibility Test” (Nature, April 2015)