
It took $1.1 billion and a 1,000-strong team to prove Einstein right about gravitational waves. In 2016, the scientists behind the Laser Interferometer Gravitational-Wave Observatory, or LIGO, announced that they had finally detected these ripples in the fabric of space and time, formed by colliding black holes. “LIGO was a masterpiece of 21st-century engineering and science,” says James Evans, a sociologist at the University of Chicago who studies the history of science. “But it was perhaps the most conservative experiment in history. It tested a 100-year-old hypothesis.”

“Big science,” of which LIGO is a prime example, is becoming more common. Funding agencies are channeling more money toward ever larger teams working on grand projects such as cataloging the diversity of our cells or sequencing the genomes of all species. There’s even a growing field of meta-research dedicated to studying how teams work—the science of team science.

Some projects require these large teams, and three members of the LIGO team eventually won a Nobel Prize. But the comparative neglect of small teams and solo researchers is a problem, Evans says, because they produce very different kinds of work. He collaborated with his colleague Lingfei Wu to look at more than 65 million scientific papers, patents, and software projects from the past six decades. In every recent decade and in almost every field, Wu’s analysis found, small teams are far more likely to introduce fresh, disruptive ideas that take science and technology in radically new directions.

“Big teams take the current frontier and exploit it,” Evans says. “They wring the towel. They get that last ounce of possibility out of yesterday’s ideas, faster than anyone else. But small teams fuel the future, generating ideas that, if they succeed, will be the source of big-team development.”

That “runs counter to the usual thinking that large teams, which are typically better funded and work on more visible topics, are the ones that push the frontiers of science,” says Staša Milojević, who studies information metrics in science at Indiana University Bloomington. She recently found a similar pattern by analyzing the titles of 20 million scientific papers and showing that bigger teams work on a relatively small slice of topics in a field. Other scientists have made similar points, but what Evans describes as a “Go teams!” attitude still persists. The results of the new analysis should “temper some of that enthusiasm for large teams and demonstrate that there may be a tipping point after which their benefits decline,” says Erin Leahey from the University of Arizona, who has previously written about the “overlooked costs of collaboration.”

The new analysis is based on the ways in which researchers cite past work. For example, when scientists cite Einstein’s groundbreaking 1915 papers on general relativity, they tend not to refer back to the papers that Einstein himself cited. “They see it as a conceptually new direction that’s distinct from the things on which it built,” Evans says. But if scientists “think that something is an incremental improvement, they’ll tell the whole story in the references.” For example, a 1995 paper describing a long-theorized state of matter called a Bose–Einstein condensate is almost always cited together with the papers in which the physicist Satyendra Nath Bose and Einstein predicted the stuff’s existence.

Wu quantified these differences using a “disruption score,” originally created by other researchers to measure the innovativeness of inventions. Wu showed that it works well for scientific research. When ranked by their scores, papers that describe Nobel Prize–winning work appeared in the top 2 percent, as did those chosen by scientists who were asked to name the most disruptive papers in their field. Reviews that summarize earlier work are in the bottom half of the rankings, while the original studies they’re based on appear in the top quarter. It’s a “simple yet brilliant” method, especially because it works across data sources as diverse as papers, patents, and software, says Satyam Mukherjee of the Indian Institute of Management.
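The intuition behind the score can be sketched in a few lines of code. In this simplified version (the function name, the input format, and the `FOCAL` sentinel are illustrative, not from the published metric), later papers that cite the focal work without citing its references push the score toward +1 (disruptive), papers that cite both push it toward −1 (consolidating), and papers that bypass the focal work entirely and cite only its references dilute the score:

```python
def disruption_score(focal_refs, later_papers, focal_id="FOCAL"):
    """Simplified disruption score for one focal paper.

    focal_refs: set of IDs the focal paper cites.
    later_papers: list of reference sets, one per subsequent paper
        that cites the focal paper and/or its references.
    Returns a value in [-1, 1]: positive = disruptive,
    negative = consolidating.
    """
    cites_focal_only = cites_both = cites_refs_only = 0
    for refs in later_papers:
        cites_focal = focal_id in refs
        cites_refs = bool(refs & focal_refs)
        if cites_focal and not cites_refs:
            cites_focal_only += 1      # treats focal work as a new direction
        elif cites_focal and cites_refs:
            cites_both += 1            # treats it as an incremental step
        elif cites_refs:
            cites_refs_only += 1       # ignores the focal work entirely
    total = cites_focal_only + cites_both + cites_refs_only
    if total == 0:
        return 0.0
    return (cites_focal_only - cites_both) / total


# A disruptive, Einstein-style paper: later work cites it alone.
print(disruption_score({"a", "b"}, [{"FOCAL"}, {"FOCAL"}, {"FOCAL"}]))  # 1.0

# An incremental paper: later work cites it alongside its sources.
print(disruption_score({"a", "b"}, [{"FOCAL", "a"}, {"FOCAL", "b"}]))  # -1.0
```

In the general-relativity example above, papers citing Einstein 1915 rarely cite the work it drew on, so the first case dominates; in the Bose–Einstein-condensate example, the focal paper travels with its predecessors, so the second does.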

Having tested this score in various ways to show that it’s valid, Wu used it to show that small teams produce markedly more disruptive work than large ones. That’s true even for patents, which are supposedly innovative by definition. It’s true for highly cited work and poorly cited work. It’s true in every decade from the 1950s to the 2010s. It’s true in fields ranging from chemistry to social sciences.

So why are small teams more disruptive? It’s possible that they do more theoretical work, while big teams (such as LIGO) are needed to test the resulting theories, but Evans and his colleagues couldn’t find any evidence for this in their data. Another possibility: The most groundbreaking scientists prefer working in small teams. But Evans doesn’t buy that, either. Even when the same people move from small teams to larger ones, he says, they end up doing less disruptive and more incremental science.

Instead, he and his colleagues found that large teams tend to build on recent, prominent work, while small teams delve more deeply into the past, drawing inspiration from older ideas that may have long been ignored. (Evans didn’t use a fixed definition of “small” or “large,” but most of his analyses compared teams ranging from one to 10 people; some scientists might consider a 10-person team to be on the small side.) At first, Evans was surprised by that difference; surely, large teams have more eyeballs and more collective memory? But he now suspects that scientists on large teams also argue and interfere with one another, and that they’re more likely to find common ground in yesterday’s hits. Large teams also require lots of funding, which makes them more pressured to pay the bills and drives them toward safer work. “What does a big movie-production studio bet on: Slumdog Millionaire or Transformers 9?” he asks.

But small teams also pay a heavy cost. Their disruptive work has no ready-made audience, and is less obviously relevant to their peers. As Evans and his colleagues found, such work takes much longer to be recognized and cited. Even if it eventually influences larger teams, as it often does, enough time passes that other researchers are less likely to cite the original, disruptive work.

You Na Lee, who studies scientific innovation at the National University of Singapore, says that research teams are now effectively behaving like firms, which also tend to be more disruptive at a small size. “This study is evidence that the ecology of science and the ecology of innovation are becoming very similar,” she says. The big difference is that the business world actively encourages entrepreneurship and small start-ups. That’s not true for science, but “unconditionally allocating pots of government grants for small wild spirits can be a bold policy move,” she says.

But Evans cautions that money won’t work in isolation. When he and his colleagues analyzed funding trends between 2004 and 2014, they found that when small teams were funded by top government agencies such as the National Science Foundation, they were no more likely to produce disruptive work than large teams. Something about the current funding environment seems to strip small teams of their natural advantages, forcing them to behave like big ones. “It’s not that we can just shove money in their direction,” Evans says.

Still, he argues that agencies must find better ways of encouraging small teams. Small teams don’t just do different kinds of science; they create the work that large teams then build upon. Disenfranchise them, and you destabilize the foundations upon which big science rests. “In 10 years, we’ll be wondering where all the big ideas are,” Evans says. “Some people will wonder if science is slowing down and we’ve eaten all the low-hanging fruit. And the answer will be yes, because we’ve only built engines that do that.”
