Over the past decade, scientists have been wrestling with the possibility that many published findings may not actually be true. The worry is that poorly designed studies, intense pressures to publish eye-catching results, and—more rarely—misconduct have led to a “reproducibility crisis.” In a survey of 1,500 researchers, conducted last year by the leading journal Nature, 90 percent agreed that such a crisis exists, with 38 percent calling it “slight” and 52 percent calling it “significant.”

These introspective concerns have fueled a burgeoning “reproducibility movement,” where researchers in psychology, cancer research, neuroscience, genetics, and other disciplines are developing ways of making science more reliable. One solution is to encourage “open science,” where researchers share their data so that others can more easily verify their work, and where they publish in freely accessible journals so their results aren’t locked behind expensive paywalls.

Both the reproducibility and open-science movements have built up a lot of steam. But both have matured during an auspicious time for American science—a time that many sense has come to an end with the election of Donald Trump to the presidency.

President Trump and members of his administration have repeatedly denied the reality of human-made climate change and are attempting to roll back measures to curb it. They have questioned the repeatedly proven safety of vaccines, gagged federal researchers, and proposed huge funding cuts that would hamstring the nation’s scientific infrastructure. In this environment, many are concerned that attempts to improve science could be judo-flipped into ways of decrying or defunding it. “It’s been on our minds since the first week of November,” says Stuart Buck, Vice President of Research Integrity at the Laura and John Arnold Foundation, which funds attempts to improve reproducibility.

The worry is that policy-makers might ask why so much money should be poured into science if so many studies are weak or wrong, or why studies should be allowed into the policy-making process if they’re inaccessible to public scrutiny. At a recent conference on reproducibility run by the National Academies of Sciences, clinical epidemiologist Hilda Bastian says that she and other speakers were told to consider these dangers when preparing their talks.

“Openness and reproducibility may be core to how science works, but they can be misused or turned into ways of pursuing ideological attacks,” says Brian Nosek from the Center for Open Science, a leading figure in the reproducibility movement. They can be applied selectively, so that demands are made only of certain fields, like climate research. They can be applied asymmetrically, so that “it’s about this position, rather than that one where we need openness,” says Nosek. Or they can be applied inflexibly, so that studies that have good reasons for being less reproducible or open are excluded from policy-making.

These moves are evident in the HONEST Act—a bill that was passed by the House last week. As I wrote last month, the act would restrict the Environmental Protection Agency (EPA) to developing regulations based only on studies whose methods, materials, software code, and data were open and accessible. “There’s a lot of sloppy science that’s out there—irreproducible science,” a House Science Committee aide told me. “If the scientific data is public, and other scientists are able to look at it, we think that would make the underlying science of these rules less contentious.”

This rhetoric exactly matches what the reproducibility and open-science movements have been saying, and it describes genuine problems in science. “They’re right that government agencies should strive to use science that people have access to,” says geneticist Michael Eisen, an open-science firebrand who is running for Senate. “The EPA is problematic when it relies on hidden industry data that people can’t evaluate, and the public has every right to be skeptical of those decisions. The best way to protect against that is to have sunshine on the data.”

But he and others say that the HONEST Act is a disingenuous solution to that real problem. In practice, it would “gratuitously handcuff” the EPA and prevent it from considering studies that are necessarily less transparent, including those that use confidential medical records or proprietary information. The Act would also force the agency to do a lot of extra costly work—either redacting confidential information, or asking scientists to dredge up all the data and code from old studies. “It won’t produce regulations based on more open science,” says Eisen. “It’ll just produce fewer regulations.”

Calling for reproducibility “is a good thing if being done in an economic vacuum, but given their budget, that’s a crippling constraint,” adds Jeff Leek, a statistician at Johns Hopkins University Bloomberg School of Public Health. The Congressional Budget Office estimated that the HONEST Act would take $250 million a year to enforce, and Trump’s recent budget blueprint would slash ten times that from the agency’s pocket. “I’m very much in favor of reproducibility, but if we make those kinds of demands we need to fund them,” says Leek.

The reproducibility movement is already asking researchers to do more with less. At a time when federal science funding has hit a plateau, scientists are being asked to upload their data to online repositories and to spend more time replicating each other’s work. That takes time, money, and effort, and is less likely to secure the glamorous publications that are critical for grants, careers, and prestige.

On top of that, Trump is now proposing to cut $5.8 billion from the National Institutes of Health (NIH), $900 million from the Office of Science at the Department of Energy, and $250 million from the National Oceanic and Atmospheric Administration (NOAA). These cuts would worsen the very conditions that lead to sloppy science in the first place, by creating a hyper-competitive world in which researchers are incentivized to cut corners and get “exciting” but unreliable results.

Some scientists are also worried that the reproducibility movement could provide law-makers with justification for their cuts. “The way it could get weaponized is by saying: Just stop the false stuff, keep the true stuff, and we can cut half the budget,” says Nosek. “But that’s like saying the roads have a lot of potholes, so we should ban driving.”

These concerns are keenly felt in psychology—the field at the epicenter of the reproducibility shake-up. “For years, there have been attempts by Republicans to do away with social science funding at the National Science Foundation,” says Buck. “That’s led to sensitivity about ‘attacking’ the reproducibility of social science research, because it could play into those efforts. ‘Let’s not talk about the problems in research, because it’ll cause us too much trouble; let’s sweep them under the rug.’”

Everyone I spoke to felt that this is the wrong approach. “Do we say: Hey, let’s not self-scrutinize? That’s not even a consideration,” says Nosek. “I don’t really understand what the option is here,” echoes Bastian. “You can’t just ignore science’s problems if people take our criticisms in a way we don’t like. I think the answer is more openness, not less.”

“The right response from the scientific community isn’t to focus on an external threat from the government. We must recognize that part of the reason we’re in this situation is that we’ve been lax in thinking about our internal problems,” says Eisen. “We have to face up to them. We get a lot of public support and funding, and we owe it to the public to make science work in the best possible way.”

He notes that the open-science movement owes a significant victory to Republican congressman Ernest Istook, who repeatedly demanded that the NIH should make all its funded research freely available. “This isn’t a guy whose politics I would agree with, but that was an example where not hiding a problem had a good outcome,” he says. “Not every criticism of science is invalid just because it’s being made by Republicans.”

Leek agrees that science should be open about its problems, but he argues that the reproducibility movement suffers from the same overblown claims that it decries in other fields. A seminal 2005 paper titled “Why Most Published Research Findings Are False” was based on a theoretical argument, but is often taken for established fact. A paper by the pharmaceutical company Amgen claimed that the firm could only confirm the findings in 11 percent of landmark cancer papers, but “never produced a single shred of data to support that claim.” Another paper claimed that the U.S. wastes $28 billion every year on irreproducible research—a “huge claim without sufficient evidence.” “All of these headline-grabbing claims are in danger of being repurposed,” says Leek. “I think that it’s important to be careful about our rhetoric.”

Christie Aschwanden, a reporter at FiveThirtyEight who has won awards for her coverage of the reproducibility movement, adds that scientists and journalists have to be better about communicating uncertainty. “It feels like there are two opposite things that the public thinks about science: that it’s a magic wand that turns everything it touches to truth, or that it’s all bullshit because what we used to think has changed,” she says. “The truth is in between. Science is a process of uncertainty reduction. If you don’t show that uncertainty is part of the process, you allow doubt-makers to take genuine uncertainty and use it to undermine things.”

“And it’s absolutely crucial that we continue to call out bad science,” Aschwanden says. “If this environment forces scientists to be more rigorous, that’s not a bad thing.”

Bastian concurs, noting that the costs of downplaying the reproducibility movement are greater than the risks of the movement’s rhetoric being co-opted. “The possibility that there might not be really serious improvements in the way we deal with science is more of a concern to me than anything else,” she says.