Ask Jack Dorsey, the co-founder of the social network Twitter and the mobile-payment start-up Square, what his two companies have in common, and he has a quick answer: “They’re both utilities.” Mark Zuckerberg might agree: he spent years trying to convince people that Facebook is not a social network but a “social utility.”
It’s an intriguing choice of words for such of-the-moment entrepreneurs. Utilities tend to be boring, slow-growing beasts. They also—and this is the more important point—tend to be monopolies that are either regulated heavily by governments or owned outright by them.
Indeed, once they get beyond a certain size, technology companies do become wary of the word. Google has been called a utility by lots of people, but you won’t hear the company’s executives using the term (at least, I couldn’t find any examples). And Zuckerberg, when asked in 2010 whether, as a utility, Facebook ought to be regulated, said he hadn’t meant the word that way at all: “Something that’s cool can fade. But something that’s useful won’t. That’s what I meant by utility.”
Yet there are lots of useful things in the world—clothing, breakfast, this issue of The Atlantic—that no one would ever think of calling a utility. Yes, there is an innocuous class of computer software known as utilities. But what companies like Twitter, Square, and Facebook—not to mention Google, Amazon, and Apple—aspire to, and in some cases have achieved, is a status similar to that of traditional utilities like Ma Bell. They attempt to position themselves such that customers can’t get around them, or can’t afford to leave them. And when they succeed, they start appearing to some customers, would-be competitors, and regulators like scary monopolies that somebody needs to do something about.
The connection between attractive business opportunity and monopoly is not new. Pursuing a “short run” monopoly, the economist Joseph Schumpeter wrote in 1942, is what profit-seeking enterprises do—in the process, driving significant innovation and economic growth. In the 1970s, the business-school discipline of strategy arose as the study of how to build and defend these short-run monopolies—a sort of mirror image of the antitrust classes long found in law schools. “Strategy is antitrust with a minus sign in front of it,” says the Columbia Law School professor Tim Wu, who has taught both subjects. That is, strategy tries to maximize what antitrust tries to minimize.
What is new is that the path from looking for an edge to being attacked as a monopoly has gotten a lot shorter—and that gaining a monopoly seems such a plausible goal within some of the fastest-growing parts of the economy. Standard Oil had been in business for 36 years when the Justice Department sued it for antitrust violations; AT&T for 97. By comparison, Microsoft was just 15 when federal regulators started looking into its business practices, 23 when Justice sued. Google, a mere 14 years old, is already under antitrust investigation.
Today’s technology entrepreneurs are well aware of the tight link between profit and monopoly. Few are as open about it as the PayPal co-founder and early Facebook investor Peter Thiel, who has described monopoly as the natural goal of any smart tech entrepreneur. But everybody gets the basic idea. “There’s a joke in Silicon Valley,” says the UC Berkeley economist Carl Shapiro: “ ‘You know you’ve really made it when you’ve got antitrust problems.’ That’s the sign of success.”
The modern theory of monopoly began its rise in the mid-1980s, when a handful of scholars—Shapiro among them—noted some salient characteristics of a fast-growing new industry. Many information-technology businesses, observed Stanford’s W. Brian Arthur, benefit from increasing returns: as they make more of something, the cost per piece keeps falling. This is especially true of software, for which the cost per piece moves quickly to zero. (Increasing returns had been deemed in the late 19th century to be the mark of a natural monopoly, an industry that would inevitably be dominated by one entity.)
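The arithmetic behind increasing returns is simple enough to sketch in a few lines (the dollar figures here are invented purely for illustration): with a large fixed cost to build the software and a near-zero cost per additional copy, the average cost per copy keeps falling as volume grows.

```python
# Increasing returns in software: development cost is fixed,
# the marginal cost of one more copy is tiny, so average cost
# per copy falls steadily as volume grows.
def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    return (fixed_cost + marginal_cost * units) / units

# Hypothetical numbers: $10 million to write the program, $1 per copy shipped.
for units in (10_000, 100_000, 1_000_000):
    print(units, round(average_cost(10_000_000, 1.0, units), 2))
# 10000 1001.0
# 100000 101.0
# 1000000 11.0
```

The biggest producer, in other words, has the lowest unit cost, which is exactly the dynamic 19th-century economists associated with natural monopoly.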
Another trait that characterized many technology businesses, these same scholars observed, was lock-in, or prohibitive switching costs. Companies that committed to getting their mainframe computers from, say, IBM would eventually find switching to another provider hugely expensive and disruptive. (Later, with the PC, Microsoft was able to shift the lock-in from hardware to software.)
But most intriguing of all was the enormous power of network effects. A telephone “without a connection at the other end of the line … is one of the most useless things in the world,” AT&T President Theodore N. Vail wrote in the company’s annual report in 1908. “Its value depends on the connection with the other telephone—and increases with the number of connections.” In 1980, Bob Metcalfe, an inventor trying to persuade people to buy his $5,000 Ethernet cards, which connected computers in a local area network, came up with a formula that expressed the value of a network as the number of connections squared. The specifics of “Metcalfe’s Law” have frequently been challenged, but the basic idea that a network’s value grows far faster than its membership has not.
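Metcalfe’s intuition can be checked with back-of-the-envelope arithmetic (a sketch of the idea, not Metcalfe’s own calculation): a network of n members has n(n−1)/2 possible pairwise connections, so the count of connections grows roughly with the square of the network’s size.

```python
# Metcalfe's Law, roughly: a network of n members has
# n * (n - 1) / 2 possible pairwise connections, so its
# potential value grows about as fast as n squared.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the network roughly quadruples the connections.
print(pairwise_connections(10))  # 45
print(pairwise_connections(20))  # 190
```

That quadrupling is why each new member makes the network more attractive to the next one, and why an early lead can snowball into dominance.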
For society as a whole, though, these phenomena can have a dark side. In a famous paper, the Stanford economic historian Paul David described in 1985 how the ubiquitous QWERTY keyboard layout had been devised mainly to prevent jamming of primitive typewriter mechanisms. Later, as typewriters improved, there were repeated attempts to supplant QWERTY with configurations that allowed for faster typing. But by then the layout’s high switching costs had made it an impregnable standard. Economic forces, wrote David, “drove the industry prematurely into standardization on the wrong system.”
Brian Arthur borrowed the term path dependence from physics to describe this predicament, and wove it into an alarming story about how technology standards develop and get locked in even when there may be better options. The saga of Sony’s Betamax, which purportedly delivered better picture quality but lost out to VHS as the dominant videocassette format, was told and retold. Research by other economists soon muddied the tales of both VHS and QWERTY—they may not have been such obviously inferior technologies after all. But the behavior of Microsoft, which by the early 1990s was using its market power to shove aside innovative competitors like Apple, WordPerfect, and Lotus, certainly seemed like an example of technology heading down the wrong path.