Is America Really Running Out of Original Ideas?

The nation’s crisis of originality isn’t in our minds, but in our markets.


About the author: Derek Thompson is a staff writer at The Atlantic and the author of the Work in Progress newsletter.


Recently, I wrote that America was running out of ideas.

As evidence, I pointed to the demise of original blockbuster films, the great stagnation in productivity, slowed progress in science, the lead-footed pharmaceutical industry, and the glacial pace of infrastructure development. As one paper on the decline of new ideas in science and technology summarized this sorry state of affairs: “New ideas no longer fuel economic growth the way they once did.”

Over the past few days, I’ve been collecting criticism of the piece, in the form of tweets, emails, and phone calls. I’ve decided that the article was wrong. Not catastrophically wrong, in the big picture. But importantly wrong in the details. America is running out of something. But it’s not ideas, exactly. So what is it?

One way to answer that question goes back to a famous observation made by the Austrian economist Joseph Schumpeter. He distinguished two stages of new technology. The first is invention: we discover something new. The second is innovation: the discovery is turned into a product or service that can be sold in the market.

“I reject the notion that we’re running out of ideas,” says David Krakauer, the president of the Santa Fe Institute. “You suggest that the problem is invention. But I see no evidence that people are less ingenious. I see the problem as moving their genius into the world. The problem is the second stage of Schumpeterian innovation.”

What exactly does that mean? It means that the fault is not in our minds, but in our markets.

Take movies, for example. In the essay, I observed that the share of Hollywood blockbusters that are sequels, adaptations, or reboots has increased steadily this century. But is this evidence that today’s screenwriters are “running out of ideas”? Not really, and suggesting that they are innately less capable of conceiving of non-sequels than they used to be is kind of absurd. (And I’m a little embarrassed that I implicitly made that suggestion!)

What’s changed isn’t minds but markets—namely, the international market for blockbusters. In the 1990s and early 2000s, the box office globalized around the same time that cable TV (and, eventually, streaming) came around to gobble up tens of billions of dollars’ worth of Hollywood stories. Studios responded by focusing on hits with worldwide appeal—CGI spectacle is simply a better cinematic export than talky dramas—even as the talky-drama writers got plenty of opportunities to write for TV. Simultaneously, the rising cost of producing and marketing blockbusters encouraged the major studios to place their bets on safe projects.

These market shifts have created a self-perpetuating cycle: American moviegoers (criminally!) ignore well-reviewed original movies without CGI, since many of them reserve their few annual movie tickets for stories they already know. Movie studios both respond to this audience behavior and drive it by investing more heavily in action-packed franchises, validating and deepening the audience’s desire for explosive sequels. Thus, market dynamics have transformed the box office from a showcase of original storytelling into a destination for new installments in familiar franchises.

That’s the art market. How about the science market?

Scientists often bemoan the state of originality in their field. New ideas are getting “harder to find.” Progress in large fields of science and technology is “slowing down.” Scientific knowledge has been in “clear secular decline.” (One wonders about the originality of their bemoaning.)

But today’s researchers aren’t getting worse at coming up with ingenious ideas—they’re getting better at mastering the market of scientific funding, which may not reward ingenuity.

Today’s scientists typically rely on grants from government agencies such as the National Institutes of Health and the National Science Foundation. This grant-writing process is so grueling that for many researchers it can account for up to 30 or 40 percent of their working hours. Although the NIH and the NSF are well-meaning organizations, they’ve created a very specific market for scientific research. Researchers are more likely to be funded if they can prove deep expertise, which has tipped the scales in favor of older scientists. Researchers are more likely to get funding if their proposals seem plausible to the members of a peer-review panel, which encourages scientists to show that the questions they’re asking have, in a sense, already been answered.

When you put these market choices together—a bias toward older investigators over younger researchers, a preference for deep expertise over cross-disciplinary exploration, and an emphasis on plausible projects rather than radical ones—you get exactly what you bargained for. Today’s scientists spend their time begging institutions for money to produce incremental science that clusters around a small set of seemingly safe ideas.

In the field of hardware technology, we find a similar story: America’s inability to build doesn’t come from a lack of original thinking but from a failure to create the right kind of market to sustain it.

In the 1950s, America invented the photovoltaic cell—the technology behind solar energy. For decades, we spent more on solar R&D than any other nation. But we lost our technological advantage anyway, as my colleague Robinson Meyer has written, because America’s innovation system has invested in pure science while tacitly encouraging the private sector to commercialize whatever it likes. The idea was that the NIH and the NSF would water the field of science, and then the private sector would come along for the harvest. But a step is missing here: identifying the right stuff for harvesting.

Other countries didn’t miss that step. In the ’80s and ’90s, Japanese industrial policy pushed companies to adopt solar panels in consumer electronics such as pocket calculators and wristwatches. The United States had no similar plan to translate scientific research into commercial technology. “We need the government to support a thriving industrial sector and incentivize companies to deploy new technology, as Japan’s government does,” Meyer wrote.

This sort of industrial policy thrills some people (the government promoting tech!) and makes others uncomfortable (the government picking winners?). We don’t need to resolve the long-running debate over the merits of industrial policy here. The deeper point is that Japan and the U.S. structured their solar-panel markets very differently and got very different results. The U.S. lost the technological frontier in solar not because we had worse ideas, but because we had a different theory of the market in which to grow them.

Two weeks ago, I wrote that the U.S. needed a revitalized culture of experimentation. Now I think that what we really need is more experimentation in markets, because our markets are failing to promote new ideas that drive progress and growth. In art, science, and technology, our outcomes are lagging indicators of our markets. For better results, build better markets.