The Disruption Myth

The idea that businesses are more vulnerable to upstarts than ever is out of date—and that’s a big problem.

In the late 1970s, Dick Foster, a fast-rising young management consultant at McKinsey, began to notice something at once unsettling and exciting. McKinsey and other consulting firms spent most of their time helping companies do what they already did, only more efficiently. Yet Foster, an engineering and applied science Ph.D. who was one of the firm’s first experts on the technology industry, only had to look around him to see leading firms that seemed to be efficiently managed getting blindsided by upstart competitors.

After several years of research, and a close reading of Thomas Kuhn’s The Structure of Scientific Revolutions (which introduced the concept of the paradigm shift), Foster came up with an explanation. What threatened these well-run market leaders were what he called “technological discontinuities”—moments when the dominant technology in a market abruptly shifted, and the expertise and scale that the companies had built up suddenly didn’t count for much. One example: when electronic cash registers went from 10 percent of the market in 1972 to 90 percent just four years later, NCR, long the leading maker of cash registers, was caught unprepared, resulting in big losses and mass layoffs.

Foster’s 1986 book, Innovation: The Attacker’s Advantage, described this phenomenon, offered tips for surviving it (just being aware of the possibility of a technological shift was the first step), and predicted that there was much more to come as giant waves of innovation in electronics, software, and biotechnology buffeted the economy. “The Age of Discontinuity,” Foster called it, borrowing the line from the management guru Peter Drucker.

The book did well, but the expression didn’t stick. “I will forever rue the day I didn’t call it ‘disruption,’ ” Foster now says. That was left instead to Clayton Christensen, a consultant and an entrepreneur who headed to Harvard Business School for a mid-career doctorate in 1989 and started teaching there three years later. For his dissertation, Christensen studied technological shifts in the computer-disk-drive industry and began to refine his observations—which were quite similar to what Foster had seen in other industries—into an academic theory of “disruptive innovation.” Starting with a 1995 article for the Harvard Business Review and then the 1997 book The Innovator’s Dilemma, Christensen began to hammer the phrase into the business world’s collective consciousness.

It was the rise of the Internet that really gave the concept wings—in part by so clearly illustrating not only the risks disruption poses but the opportunities it affords. And after a bit of backtracking at the bottom of the dot-com bust, the belief that digital disruption stalks the Earth, threatening all before it, has only gained adherents. Nowadays every corporate executive wants to disrupt; the word has become a mark of forward-thinking decisiveness—though it is sometimes attached to strategies that are more about cost-cutting than game-changing. And in Silicon Valley, belief in disruption has taken on a near-religious tinge. All that disrupts is good; all that stands in disruption’s way (such as, say, San Francisco taxi companies or metropolitan daily newspapers) deserves to perish.

It’s a lot to put on a word that once just meant “to break apart,” and not surprisingly, the disruption-promotion industry has been experiencing a backlash. An app is now available for the Firefox and Chrome browsers, for example, that replaces every mention of the word disrupt with bullshit. And in June, the Harvard historian Jill Lepore caused a mini-sensation with a long and uncharacteristically ill-tempered essay in The New Yorker that not only decried the overuse of the word but took Christensen to task for cherry-picking case studies to buttress a theory that she said really wasn’t all that good at predicting anything. There was little evidence, Lepore argued, that disruptive upstarts, or companies that disrupted themselves, consistently won out. Christensen responded, in a long and uncharacteristically ill-tempered interview with Bloomberg Businessweek, that Lepore had been doing the cherry-picking by ignoring most of his work since 1997.

I’m in no position to officiate this debate: my employer has published many of Christensen’s books and articles over the years, and I’m on friendly terms with him. (He declined to comment for this article.) It is fair to say, though, that Lepore’s assault has shown no sign of breaking the business world’s obsession with disruption. In fact, it may have accelerated the dissemination of Christensen’s ideas by inspiring so many responses explaining and defending them.

“I must tell you that in 1986 I never would have guessed the extent to which this has grabbed the imagination of the nation,” says Christensen’s predecessor, Dick Foster. “It’s really quite extraordinary.”

What may be even more extraordinary, however, is the growing disjuncture between all this talk of disruption and its actual practice—at least so far as we can measure it. Thanks to data that the Census Bureau began releasing a decade ago, economists can now track what they call “business dynamism” in ways they couldn’t before. As researchers have dug into these numbers, they’ve found that most metrics of dynamism and upheaval in American business have actually been declining for decades, with the downturn steepening after 2000. Fewer new businesses are being launched in the United States, the average age of businesses is increasing, job creation and job destruction are on the wane, industries are being consolidated, and fast-growth businesses are rarer.

Before 2000, the decline was most pronounced in the retail and service sectors, and it didn’t necessarily contradict the age-of-disruption theme. “We’ve moved away from mom-and-pop to Walmart,” says John Haltiwanger, an economist at the University of Maryland and a co-author of much of the recent business-dynamism research, “and the evidence suggests that this has largely been good for productivity.” New national chains armed with new technologies were the attackers, local retailers were the incumbents, and their collisions generally resulted in consumers getting a better deal. There was in fact plenty of upheaval in the top ranks of the business world in the 1980s and ’90s, as newcomers crashed the Fortune 500 list with increasing frequency. And the high-tech sector was, as widely perceived, a hotbed of entrepreneurship and growth.

All of that activity seems to have peaked, however, a year or two after the stock market did in 2000. Measures of big-business volatility began to drop. High-tech start-up activity and what economists call the skewness of growth—how quickly the fastest-growing companies in a sector are outpacing the median company—declined below the levels of the mid-’90s and stayed there. Most worrying of all, the burst of productivity growth that started in 1995 and is widely attributed to the use of new information technologies also seems to have ended in the early 2000s.
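To make the skewness-of-growth metric concrete, here is a toy calculation. The firm growth rates below are invented for illustration; dynamism researchers typically summarize skewness as the gap between the 90th-percentile firm’s growth rate and the median firm’s (the “90–50 differential”), which shrinks when the fastest-growing companies stop pulling away from the pack.

```python
# Toy illustration of growth-rate skewness across firms in one sector.
# All numbers are hypothetical; the metric sketched here is the
# 90th-percentile growth rate minus the median (the 90-50 differential).

def percentile(sorted_vals, p):
    """Linear-interpolation percentile of an already-sorted list (0 <= p <= 1)."""
    idx = (len(sorted_vals) - 1) * p
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

# Annual employment growth rates for ten hypothetical firms.
growth = sorted([-0.10, -0.05, 0.00, 0.01, 0.02, 0.03, 0.05, 0.08, 0.20, 0.60])

median = percentile(growth, 0.50)       # 0.025: the typical firm barely grows
p90 = percentile(growth, 0.90)          # 0.240: the near-top firm grows fast
skew_9050 = p90 - median                # 0.215: a few firms racing ahead

print(f"median growth: {median:.3f}")
print(f"90th-percentile growth: {p90:.3f}")
print(f"90-50 skewness: {skew_9050:.3f}")
```

A falling 90–50 differential over time, in this framing, would mean the fastest-growing firms are no longer outpacing the median firm by as much—the pattern the dynamism researchers report for the post-2000 period.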

Productivity growth is crucial to raising living standards, and economists have come to ascribe much of it to technological change. Research has shown that in the past, this change came in sudden bursts, with fast-growing new firms providing much of the impetus. The spectacular rise in living standards that began in Europe and North America just over 200 years ago is thus largely the work of disruptive innovations. Without a new burst of them, we may face “secular stagnation”—an extended period of slow growth.

It’s possible, of course, that the business-dynamism numbers fail to capture some of the economy’s actual dynamism. In the technology sector, many upstarts have in recent years opted to sell themselves to Google or Amazon and do their disrupting as part of an already large organization that has learned a thing or two from Foster, Christensen, and others about how to foment innovation. Furthermore, because several of the metrics are based on job counts, what we’re seeing may be less a decline in dynamism than the rise of new, technology-intensive companies that simply don’t need many workers. The messaging service WhatsApp, when Facebook bought it earlier this year for more than $16 billion, had just 55 employees.

But it’s also possible that a decades-long accretion of regulation has come to weigh on new-business formation and growth; that for all the tales of Silicon Valley swashbuckling, most Americans have become more cautious and less entrepreneurial; or that—and this argument springs straight from Christensen’s keyboard—the pressures of the financial market and a preoccupation with corporate financial metrics have left most businesses “afraid to pursue what they see as risky innovations” and focused instead on cutting costs.

Still, some companies are pursuing risky innovations and disrupting established industries. Business publications are full of stories about them: Google and Uber and Amazon and Salesforce and Workday and many more. They just haven’t had a measurable impact on the overall economy yet. One group of economists says to give it a few years—the adoption of new technologies has always affected productivity in fits and starts, and the rise of smartphones and cloud computing and Big Data will show up in the numbers eventually. The other view is that today’s technological innovations pale in significance beside electricity and the internal combustion engine—they’ll have some positive impact, but growth will be slower than it used to be.

What these arguments share is the conviction that, however sick many of us may be of hearing about it, disruptive innovation is something we need more of, not less. We, in this case, means some abstract collection of current and future humans—not people with jobs that are about to get disrupted out of existence. The uneven dispersal of rewards from technological change is always a problem, and may be especially fraught this time around. But uneven progress still seems better than no progress at all.