Looking at the Sun

A case of Japanese industrial success and American failure that can’t be explained by American economic rules. Could the rules be wrong?

THIS is not supposed to be an ideological age, but the ideas we use to explain events are very powerful. The idea that Communist control of Vietnam would lead to Communist control of all of Asia required the United States to fight there. Standing up to the Viet Cong and the North Vietnamese represented a chance to correct the mistake made when no one stood up to Hitler—or so it seemed while the domino theory prevailed. In economics the idea that there can’t ever be too much competition, or that the government will gum up whatever it touches, requires us to assume that whatever the market decides is for the best.

During last year’s presidential campaign the columnist George Will offhandedly reminded his readers that the “lesson of the late 20th century” was government’s inability to hit whatever economic target it set for itself. Therefore it was folly for the Democrats under Bill Clinton even to talk about devising a “national economic plan.” The significance was that Will could say this offhandedly, and he could safely assume that most educated Americans would agree.

Aspiring planners of national economies were thick on the ground, here and especially in Europe, as recently as the 1940s and 1950s. But recent history has been chastening, at least to people paying attention. . . . Planners say, with breezy confidence: Why wait for billions of private decisions in free markets to reveal possibilities and preferences? Government in the hands of clever people like us can know what is possible and preferable.

Suppose this paragraph, and the idea behind it, had been a little different. Suppose it had said,

Aspiring planners of national economies were thick on the ground in America as recently as the 1950s. Even now they dominate the governments of Europe, except for England, and those of East Asia, except for Hong Kong.

Perhaps it’s just a coincidence, but each of these “except for” countries is the industrial sick man of its region. The European planners have succeeded in some ways but failed in many others. In East Asia, government in the hands of clever people has generally achieved what it set out to do.

This history might be chastening to people paying attention to what it really means—but that’s hard for us, because we like people to talk to us in English, and the Anglophone world tells us not to question what we already think.

This, of course, would represent a different idea, which would fit reality into different patterns, leading to different results. With another mental map of the world, people might feel differently about the nature of economic change. The prevailing American idea requires us to view industrial rises and falls as if they were the weather. We can complain all we want, but in the long run there’s nothing much we can do, except put on a sweater when it’s cold. Or the American idea makes economic change seem like an earthquake: some people are better prepared for it than others, but no one can constrain the fundamental force. A different idea—that industrial decline is less like a drought than like a disease, which might be treated—would lead to different behavior.

In the early 1960s American strategists “knew” certain things about military power. They knew that the most dangerous potential conflict was between the two superpowers. If the United States and the Soviet Union were not fighting each other, then either of them would beat whichever lesser power it faced. It was hard to make those ideas fit the facts of Vietnam and Afghanistan. American economists “know” certain things about business competition now. It is very hard to make those ideas fit the facts of the semiconductor industry, America’s great success story of the early 1980s.

The story matters more than others, because of the importance of semiconductor chips to high-tech industrial growth. But even if the products themselves were insignificant, the gap between how Americans think about industry and what happened to this particular industry would be provocative. Americans have learned to explain away industrial failure with reasons that seem to make sense when applied to TV makers, or the Big Three of Detroit, or traditional heavy industry. These reasons—shortsighted management, obstinate unions, neglect of education and investment—do not fit the semiconductor case. A better understanding requires a look at ideas that are in very limited circulation among people educated and trained in English-speaking countries—ideas in direct conflict with all that we “know.”

THE AGE OF INNOCENCE

CALIFORNIA is blessed with a sense of newness, and in 1980 that sense was especially vivid. The Santa Clara Valley had just been baptized Silicon Valley and was the center of the semiconductor industry for the entire world. The new buildings to house that industry looked fresh and almost flimsy without seeming cheap. They were like mushrooms that spring up after a rain, fragile but in their own way perfect. In the downtown of Cupertino, one of the newest and freshest-looking of Silicon Valley’s cities, stood a public library with a large sculpture shaped like a conquistador’s helmet. In the sun it gave off a coppery glow. Such structures were this new economy’s answer to the concert halls and art museums that had gone up in Detroit and Chicago in the previous century: people were making money, and they were spreading it around.

Just a few blocks from the library was one of the main sources of the money, Apple Computer, which was then growing so rapidly that its offices were scattered among half a dozen rented sites. If Cupertino had had a skyscraper, from its top you would have been able to see the other great successes of the valley. They included Intel, with headquarters in Santa Clara; Hewlett-Packard, in Palo Alto; the upstart Zilog, in Cupertino, which had been founded by exiles from Intel; and a host of others.

Within a few years it would become difficult to remember the sense of amazement and all-embracing promise that had emanated from Silicon Valley. But in the years before 1980 the industry did things never done before, creating things no one had ever seen. As Detroit had at the turn of the century, when new car companies were springing up practically overnight, Silicon Valley conveyed the sense that its industries were remaking the world around them—and, as in Detroit earlier, some of the people who had made the original discoveries were still in charge.

The manufacturers of the valley could be divided into three main categories. At the top of the industrial food chain were the computer makers—companies like Apple and Hewlett-Packard, whose brand names the public knew and whose products sat on desks in offices and homes. Below the computer makers were the semiconductor makers—Intel, Motorola, Texas Instruments, Zilog, National Semiconductor, and many others. As the industry grew through the 1980s, the public came to know their names as well, but until very late in the decade, when computer users around the country began buying extra RAM or faster processors for their home computers, the semiconductor makers sold their products to the computer industry and other industrial consumers. At the base were the semiconductor-equipment companies—Varian, Eaton, and Perkin-Elmer, among others. They were virtually invisible to the public, but without them Intel and Motorola could not have made chips, and without chips Apple and HP could not have made computers.

The varied branches of this industry were working together in California just the way American economic theory said they should. Few companies in this industry were big and integrated. Most were small, specialized, and agile. The giants of those days, IBM and Xerox, were present in the valley but served largely as brooder houses from which ambitious engineers and breakthroughs in basic research would emerge. (AT&T’s Bell Labs had played the same role in the industry’s early days.) Every branch of the industry seemed unchallengeably strong—as U.S. car makers had seemed in the 1950s, as U.S. software houses believe themselves to be today.

Japanese manufacturers, who had already created problems for other, less advanced-seeming American industries, were barely on the map. At the urging of the Japanese government a number of big Japanese firms—Toshiba, Hitachi, Fujitsu, NEC—were by the mid-1970s building both chips and computers. But they still relied overwhelmingly on American suppliers for chip-making equipment; they had no way to match it on their own.

Because of everything that was going right in Silicon Valley, its workers—and most of the American public—saw a kind of commonsense economic and moral logic to its rise. These industries were forward-looking and flexible. They used advanced technology and they created good jobs. They invested for the long run; they thought of labor and management as a team; they weren’t crying for government protection, as so many of America’s “sunset” industries were. Opinion polls in the 1980s still showed great American confidence in the country’s technical prowess. Koreans might move from textiles to steel, Japanese from steel to cars, but Americans would move from cars to computer chips, and who could tell what would come next?

THE SUDDEN COLLAPSE

LESS than ten years later everything about this industry had changed. The very companies whose rise had seemed to demonstrate the moral superiority of the American way of doing business were, by the late 1980s, making the opposite point.

Through the late 1970s American companies dominated chip making and the manufacture of semiconductor equipment—the machines used to produce chips. Of all the machines in the world used to make semiconductors in the middle and late 1970s, about 85 percent came from America; about 80 percent of those used in Japan were American.

By 1982 Japanese semiconductor companies were making more “dynamic random access memory” chips, or DRAMS (pronounced “dee-rams”), than American firms. These DRAM products are what most computer users think of as “memory chips.” Three years later Japan’s production of semiconductor products as a whole exceeded America’s. As Japanese companies increased their output of chips, they also turned to Japanese suppliers for chip-making equipment. By 1985 American equipment makers had barely 25 percent of the market in Japan, already the world’s largest and fastest-growing market.

The best-publicized American success was in the area of microprocessors—the chips that interpret the instructions in computer programs and control the operations of the computer as a whole. The two American powerhouses in microprocessor making were Intel and Motorola. Intel had started out as a memory-chip company—indeed, it had invented the DRAM chip in 1969—but, like many others, had been forced out of that business as Japanese producers came in. Unlike the others, however, it had found a new and amazingly lucrative niche in producing microprocessors. Motorola, which had been actively fighting trade wars in Japan, and whose processor chips were used in the Macintosh and in a variety of industrial applications, also prospered.

Each of the two companies had worked to have its chips designed into major computer operating systems. Intel’s chips dominated the IBM-compatible world of computers running DOS, Microsoft Windows, and OS/2; Motorola’s ran the Macintosh and other Apple products. These relationships provided a steady cash flow that buoyed Intel and Motorola when the rest of the American semiconductor industry was foundering.

In 1978 the two largest merchant semiconductor firms in the world were Texas Instruments and Motorola. (IBM was ranked as the world’s largest producer, but it used chips for its own products, unlike the merchant firms that sold the chips they made.) By 1983 Japanese semiconductor companies were spending more on new facilities than American ones. By 1985 they were spending more on research and development. By 1986 the top three merchant firms were NEC, Hitachi, and Toshiba, and there were six Japanese firms among the ten world leaders. At that time, according to one report, “most non-Japanese producers of DRAMS had been eliminated from the world market or marginalized; Japanese firms controlled 90 percent of world production of 256K DRAMS.” Even AT&T was leaving the DRAM business by the end of the decade. Early in 1991 I met a minister of the Korean government who had come to Washington to try to put together alliances of Korean and American technology firms. If they tried to make it on their own, he said, they were both certain to be eclipsed by Japanese firms.

The chips were raw material for other advanced products: they went into computers, VCRs, and increasingly into cars. NEC, Hitachi, and Toshiba were best known not for the chips they sold but for their computers, VCRs, machine tools, and similar finished goods that incorporated chips. The United States has twice as many people as Japan, an economy that through the 1980s was more than twice as large as Japan’s, and a computer-and-semiconductor industry that was regarded as leading the entire world. Yet in 1988 Japan’s consumption of semiconductors—its use of them as components for cars, computers, and countless other high-value products—was greater than America’s.

In addition to providing the supplies for computer makers and other “downstream” industries, the chip makers were themselves major customers. They bought from the “upstream” equipment makers that made the complex and expensive capital goods necessary to produce chips. As the American semiconductor industry dwindled, so did the related American industries that bought from and sold to it. In theory this need not have happened—American etcher makers could sell equipment to Hitachi rather than to Advanced Micro Devices; American computer makers could get all the DRAM chips they needed from NEC rather than from sources nearby in Silicon Valley. But it didn’t work that way.

As the semiconductor industry grew in Japan, it evolved in such a way as to promote linkages with other Japanese firms—and prevent them with non-Japanese firms. In the mid-1970s Japan’s Ministry of International Trade and Industry launched its VLSI projects, the most visible of numerous efforts to coordinate the growth of Japan’s computer-related industries. The acronym stood for “very-large-scale integration,” and the projects involved preferential access to capital, government-sponsored research, strategies for licensing technology from foreign (mainly American) suppliers, and other means to help Japanese producers overcome the foreign lead in high-tech production.

By the logic of American-style economic theory, such government-sponsored efforts were both unnecessary and self-defeating. They were unnecessary because if customers in Japan wanted chips, chip-making machines, or computers, they could always just buy them from suppliers in other countries. There was no need to create new industries from scratch. And the efforts were self-defeating because government interference would raise the price that Japanese industrial customers had to pay, handicapping the firms that used the chips.

Japanese commentators, when writing in English, often claim that precisely this logic prevailed in Japan. “We Japanese, like people everywhere, import when other countries can provide competitively attractive goods and services,” one famous Japanese spokesman asserted this year. Yet this nationality-blind, purely price-minded mentality cannot explain the way the Japanese semiconductor industry grew. The strategy of developing high-tech manufacturing abilities within Japan rather than buying high-tech goods from outside Japan succeeded.

With each passing year Japanese firms had a larger share in every part of the industry: machinery, chips, and computers. By the end of the 1980s American equipment makers—the companies that produced the specialized, ultra-precision equipment for each of the twenty-five-plus steps necessary to make a semiconductor—still outsold Japanese suppliers in every part of the world except Japan. But the market inside Japan had become so large, and was so thoroughly dominated by Japanese suppliers, that Japanese equipment makers outsold American ones worldwide.

In Japan the companies that made chips were tightly connected to the companies that bought chips—and connected by something beyond the prospect of business advantage which momentarily binds buyers to sellers in the American marketplace. The closest analogy from American life is the military. Just as the U.S. Air Force, with its allies in industry and Congress, competes bitterly against the Army and the Navy (and their respective allies) for budget dollars and prestige, so do Toshiba, NEC, and Fujitsu, with their allies, compete bitterly against one another for primacy and market share. Yet each kind of competitor recognizes limits to its rivalry. Fundamentally it is on the same team as its rivals, and at certain points all must suppress their immediate interests for the common good. According to Western economic theory there is virtually no “shared interest” among business competitors. Members of the American military system, and of the Japanese business system, need no theory to articulate why they are on the same side.

THE UNEXPECTED CHIP SHORTAGE

IN 1985, just as the last non-Japanese producers of DRAM chips were about to drop out of the business because of sustained low-priced exports from Japan, a strange thing happened. For at least two decades the price of chips had steadily gone down. Factories became bigger and more efficient. Producers learned how to eliminate defects, which meant they had fewer flawed chips to throw away, which in turn reduced the overall cost per finished chip. Yet starting that year 256-kilobyte DRAM chips became expensive and scarce. Prices went up, and stayed up, as they never had before.

One of the industry’s chronic headaches had been the “learning curve,” a rapid fall in the price of a new chip six or eight months after it was introduced. Financial experts at Intel in 1980 explained to me that the retail price of a chip began to plummet once competitors figured out how to make it. Previously exotic products like 64K chips and 80286 processors became mere commodities, and forces of supply and demand drove the price through the floor. This was a blessing for consumers, but it meant that companies had to be quick to develop new products, so that they could earn their profits in those first six or eight months of premium pricing. In the past, high prices had always led to increased supply—the more money that came in, the more of it the companies reinvested in factory space. But as the price of 256K chips mysteriously rose, Japanese companies did not invest in any more 256K production space.

The exact causes of the DRAM shortage are still hotly debated in Japan and the United States. In brief, most discussion in America assumed that the United States itself had caused this price rise, through its complaints about Japanese “dumping” of chips in the previous three years. Reports published in Japan suggested that the Japanese industry itself engineered the rise, taking advantage of the OPEC-like control it had finally attained over chip supplies. Either way, the results were unmistakable. An American high-tech industry that had fed on chips, using more and more memory to produce bigger and faster machines, was suddenly cut off from its supply. Chip prices were going up, defying the industry’s collective experience—and no matter what the price, some chips seemed impossible to find.

Officials of the Ministry of International Trade and Industry (MITI) began regulating the flow of precious DRAM chips out of the country, much as oil ministers had regulated output during OPEC’s heyday. MITI had long maintained a list of goods subject to government approval before they could be sold to customers in China. An official of Hitachi, one of Japan’s major DRAM producers, said that during the shortage “it was easier to get approval from MITI to sell [goods subject to government control] into China than it was to get approval to sell DRAMS into the United States.” Japanese producers that were strong in DRAMS but relatively weak in other semiconductor products, such as “application-specific integrated circuits,” or ASICS, began offering package deals to American customers on a take-it-or-leave-it basis: We’ll let you have DRAMS if you’ll buy our ASICS. American computer makers bought expansion boards from Japanese suppliers simply to strip off and use the DRAM chips.

The shortage was intense through most of 1988. Its most enduring effect was to aggravate a division between two branches of the American high-tech industry, chip makers and computer makers. In Japan these industries generally saw themselves as allies—and in many cases were actually part of the same firm. In America the chip and computer makers were often at odds. For the chip makers, of course, higher chip prices meant higher revenues. For the computer makers, higher chip prices meant higher costs. So although the American computer makers might theoretically agree that it was wrong for Japanese chip makers to dump chips, at least in the short term the dumping was a boon. More important, although the evil of dumping was theoretical, the hardship caused by the chip shortage was immediate for the U.S. computer makers. Most of them retained a “never again” mentality after the 1988 shortage. They realized the dangers of exposing themselves to another DRAM shortage by antagonizing suppliers in Japan.

These disagreements came to a famous climax early in 1990. IBM had spent much of the previous year pushing for a new U.S. chip-making consortium. The idea behind the project was that America’s biggest chip-using and chip-making companies would band together to help strengthen the domestic semiconductor industry. Computer companies such as Digital Equipment and Hewlett-Packard—which, like IBM, used tremendous numbers of chips—would join semiconductor companies such as Intel and LSI Logic in building a large, advanced chip-making facility to be called U.S. Memories. The computer companies would then commit themselves to buying a certain share of the plant’s future output, even if in the short term they could obtain chips more cheaply from Japanese or Korean suppliers. The project’s backers argued that in the long term the computer companies and the whole domestic high-tech industry would be stronger if American-owned chip companies survived.

U.S. Memories had been proposed in 1988, during the great chip shortage. As it held its first organizational meetings, in 1989, chip prices began drifting down. By the summer of 1989 another chip glut seemed to be in the making. Japanese manufacturers had expanded their output, and U.S. computer makers could get all the chips they wanted at good prices. In September of 1989, amid falling chip prices, Apple Computer announced that it wouldn’t support U.S. Memories. Apple spokesmen said there was no need for the project, now that DRAM chips were cheap and plentiful again. By November, Compaq, Sun Microsystems, Unisys, and Tandy had all rejected U.S. Memories as well.

At a showdown meeting in Dallas on January 10, 1990, Sanford Kane, the president of U.S. Memories, asked the remaining partners to state explicitly how much money they would put up and how many U.S. Memories chips they would buy. IBM and Digital were the only two computer companies to make significant commitments. Hewlett-Packard, one of the original supporters of the plan, backed off. By the end of the meeting it was clear that U.S. Memories was dead.

“These guys have a tactical view of the world; they don’t think strategically,” Sanford Kane told Stephen Kreider Yoder, of The Wall Street Journal, shortly after the meeting. They were able “to so quickly forget that a year ago they were screaming for this”—that is, for some alternative to Japanese suppliers. “For them, it’s ‘Don’t worry, be happy.’ Just close your eyes and blindly go on.” Kenneth Flamm, then a semiconductor expert at the Brookings Institution and now a Pentagon official in the Clinton Administration, said in the same article that the U.S. Memories failure “goes to prove Akio Morita’s contention that U.S. business has a ten-minute time horizon.”

With a longer perspective, those who argued for a U.S. consortium said, computer companies would realize how deeply they were threatened by reliance on foreign suppliers. If, for instance, Compaq was selling laptop computers in competition with Toshiba, but both of them relied on Toshiba chips and screens from Sharp, then in the long run Compaq would lose.

At about the time U.S. Memories failed, Ingolf Ruge, a German technology expert, said,

The goal of the Japanese . . . is a world monopoly on chips. They have even announced this publicly, and they are acting with this in mind. About a year ago, all Japanese manufacturers suddenly cut back on production, shooting prices way up. This monopolistic policy is currently costing companies like Nixdorf [a major German computer maker] tens of millions of Deutschmarks.

DEBILITATING DEPENDENCE

THROUGH the early 1990s American politicians would often threaten to deny the American market to foreign manufacturers as a way of getting what the United States wanted. Since America was by far the world’s biggest importer, this was sometimes a plausible threat. In 1992 Japan, for instance, sent 28 percent of all its exports to the United States, its largest customer, and only six percent to its second largest customer, Hong Kong. But the threat was less plausible than many Americans believed, because in many areas Japanese companies were the only known suppliers. The American market for fax machines was large, but there were no domestic manufacturers. In 1991 the U.S. Office of Technology Assessment issued a report on international competition in high-tech manufacturing. It concluded that for a number of steps in the semiconductor-making process Japanese companies were the only suppliers. No American companies had the machinery or technical know-how to compete.

Also in 1991 one Japanese electronics maker irritated the U.S. government by selling equipment to Iran. Some federal agencies proposed punishing the company by refusing to buy its products. The main resistance came from the Pentagon: the American military realized that it could not develop several projects it had under way if the company were barred from selling in the United States. During the brief war against Iraq the U.S. military grumbled about the difficulty of getting supplies and crucial parts from Japanese industries; the Japanese government, after all, was much less enthusiastic about the war than the American government was. The most celebrated American weapons used during that war, including the Patriot missiles that intercepted some of the Iraqi SCUDS, all relied for their operation on a kind of ceramic chip packaging that no American company produced in significant quantities.

Several years earlier the National Security Agency, which is in charge of intercepting, processing, encoding, and decoding millions of messages a year, took the extraordinary step of setting out to build its own semiconductor-manufacturing facility. It went to this extreme to avoid having to buy foreign-manufactured semiconductors, which could conceivably be tainted with virtually undetectable yet potentially catastrophic viruslike hostile programming. Despite its determination to avoid relying on foreign sources the NSA found that it could not. For certain kinds of semiconductor-manufacturing equipment it had no alternative but to buy from suppliers in Japan.

The NSA was asked what it would do when its foreign-made machines needed servicing, as they inevitably would. Would it allow service representatives from the parent company to come to its supersecret premises and work on the machines? A government official involved in buying the machines smiled ruefully, according to another U.S. government official who witnessed the exchange, and replied, “Oh, we’d never risk having a foreign citizen enter the facility. If we can’t fix it ourselves, we’ll just throw that piece of equipment out and purchase a new one, even if it means several million dollars for replacement instead of several hundred dollars for repair.”

Just before the Gulf War the Defense Intelligence Agency circulated a highly classified and controversial draft report about the American military’s growing dependence on Japanese high-tech equipment. The report warned that the Japanese government was explicitly considering how it could exploit the military’s dependence to gain additional leverage over the United States.

In 1990 Andrew Grove, one of the founders of Intel, gave a speech contrasting the Japanese and American semiconductor industries. Among the most striking differences, he said, was how much denser and deeper the Japanese network of suppliers was. One Japanese company had decided to get into the DRAM business and was up and operating within about a year. For immediate service it could call on suppliers of machinery and components located in Japan. “At one point,” Grove said, “there were twelve hundred vendor employees”—that is, representatives of companies making the components and machinery—“swarming all over the location where this company’s first factory was being built.” He added, “We couldn’t do this in the United States for love or money. There are no twelve hundred vendor employees and there are no dozens of relevant suppliers.”

By the early 1990s American semiconductor and computer companies were routinely complaining to the government, but their complaints had taken on a strange new fatalistic tone. In many cases they were not even asking for a better opportunity to sell to Japanese customers, or to sell in competition with Japanese firms in other markets around the world. Rather, they were complaining because they were having trouble buying the best chip-making equipment from Japanese suppliers. In the fall of 1991 a report from the General Accounting Office said that a third of the American companies it surveyed had had problems buying up-to-date components from Japanese suppliers, even though the components were already for sale within Japan. Among the crucial components were flat display screens for laptop computers and the most modern versions of “steppers” for producing semiconductors. One company said that the delays had cost it $1.4 billion in sales, and another said the problems in getting display screens had put it “essentially out of business.” William Spencer, the president of an American chip-making consortium called Sematech, was asked early last year which kind of silicon wafers U.S. manufacturers would be using in the long run. “That’s easy to answer,” he said. “It depends on what Japan, and to a degree Germany, will sell us. We no longer have sufficient silicon sources of our own, so what we can make depends on what they will sell us.”

At about the same time, several American newspapers carried the news that Intel, clearly the giant of Silicon Valley, had nearly perfected a technology called “flash memory,” which could revolutionize the electronics business yet again. Because normal computer chips do not store information when the power is off, computers require bulky storage devices, especially hard-disk drives. Flash memory retains its contents when the machine is turned off, so it could eventually eliminate disk drives. Perhaps more important, it could add enormous amounts of memory to small devices not normally thought of as computers, from toys to compact-disc players to organizers like the popular Sharp Wizard series.

This looked like a comeback for the American industry—and it was so reported in a euphoric story in The Washington Post, in February of last year. But on the same day the Post story appeared, Jacob Schlesinger, of The Wall Street Journal, presented the same news in a very different light. Intel had come up with the technology—based, for once, on an invention that originated in Japan. But Intel could not make the chips. Even Intel—touted throughout the American press last year as the classic high-tech success—lacked both the money and the manufacturing capacity. It would rely on Sharp to do the actual production work. “The Intel-Sharp pact shows growing American dependence on Japan, even when the U.S. has the technological edge,” the Journal story concluded.

From Intel’s point of view the alliance with Sharp would help avoid another peril. The most valuable customers for flash-memory chips would be consumer-electronics companies—the ones that made TVs, VCRs, computer games. Those companies were mainly based in Japan. Everything in Intel’s experience indicated that if it did not form a partnership with a Japanese company it would eventually be frozen out of this new market.

Just before the war against Iraq began, in December of 1990, the Nomura Research Institute, which is connected to one of the world’s largest stock-brokerage houses, Nomura Securities, of Tokyo, released its survey of the semiconductor and computer industries. Anyone who has tried to talk delicately around an unpleasant truth (“Well, I’m sure Johnny is trying his best in math”) could recognize the tone of this document. For instance, “The widely held view is that the declining market share indicates loss of American competitiveness, but we believe this is not necessarily the case.”

The report sympathetically but firmly stated that American firms simply could not compete in the future.

As the memory capacity of DRAMS increased, management of the manufacturing process and the capability to make massive capital investments became the major determinants of competitiveness, and since Japanese companies were stronger than U.S. companies in both respects, their share of the world market expanded rapidly. ... A second trend is that the U.S. will fade as a growth market.

And so on. It is difficult for Americans to compete with Japan’s “oligopolistic dominance,” which came from “weeding out of weaker companies unable to keep up with heavy capital expenditure,” the report said. “In the technology-intensive computer and semiconductor industries, the basic trend since the start of the 1980s—market share gains by Japanese companies, market share losses by American companies—will likely continue in the 1990s.” Whenever American firms started to show weakness, the report said, they usually went on to collapse. “The U.S. market is hard on losers. Companies that have reached the limits of growth rarely have a chance to come back.” Large Japanese firms, in contrast, simply did not go bankrupt, and were able to buffer losses in one division with earnings from another.

The Nomura report—which was intended for domestic Japanese consumption rather than for inspection by Americans—included a nightmare version of a Horatio Alger story. It concerned a young, talented, and ambitious American software engineer known as Mr. A. He starts with a big high-tech company, as would his counterpart in Japan. Then this company is acquired in a hostile takeover and Mr. A is laid off—a fate that would not befall his counterpart in Japan. He moves to a fast-growing company but is laid off again when business slows down. He joins yet another company but, hardened by his experience, resolves to “keep the knowledge and skills he was acquiring to himself as a means of ensuring his job security.” After five years with this firm Mr. A considers jumping ship and starting his own firm, with the help of venture capitalists. The Nomura report drew these lessons:

Mr. A’s experiences in the status- and class-conscious culture of the American company suggest why the United States is so imbued with an entrepreneurial spirit, yet why American industry is so sloppy when it comes to factory productivity and quality control.

“CULTURAL” EXPLANATIONS

HOW can we explain what happened to the American semiconductor industry? The familiar reasons for industrial failure—greedy bosses, pigheaded unions, rampant short-termism, overregulation by the meddlesome state—don’t seem to apply.

The Japanese explanation is simpler. During the 1980s most Japanese high-tech industries thrived. Japanese commentators and politicians are quick to see the “unique” traits of the Japanese people as the explanation for almost any phenomenon in Japan. Therefore Japanese discussions of the semiconductor industry have stressed “harmonious” working patterns, attention to detail, and related Japanese characteristics that supposedly make it natural for Japanese companies to excel.

At a Hitachi semiconductor factory on the island of Kyushu, in 1986, I walked through fabrication areas that looked very much like their counterparts in Texas and California. The hardware and the procedures seemed so similar. Why were Hitachi and other Japanese manufacturers doing so much better?

“The starting level of production is not so different here from in America,” the manager who was escorting me said —in English, since he had studied for a while in the United States. He paused, and indicated thoughtfulness with the distinctive Japanese gesture of “teeth-sucking.” (This involves opening the mouth, putting the bottom teeth against the upper lip, and inhaling, with a sucking, slurping sound. Its body-language message is “Whew! I need to think about this for a minute.”) When he recovered, he said, “The starting level is similar—but ours improves much more rapidly than in America. There is a difference of culture. It is often said that we Japanese are united as a single people, or even race. In the U.S. there are so many people with different backgrounds and religions and races, it is harder to work together in harmony—unless you are all Swedes in Wisconsin.”

Americans themselves have, in their time, fallen back on ethnic theories to explain success or failure. But ethnicity and race are no longer part of any respectable discussion of business trends. Instead the emphasis is on the ultimate justice of the market. We all assume that the efficient companies will survive and the inefficient will fail. Governments can try to tamper with this law of nature, by offering subsidies or shielding producers from foreign competition. But in the short run these attempts will harm a nation’s consumers, by raising prices, and in the long run they will weaken the nation’s producers, by delaying the moment when old industries are cleared away so that new ones can rise.

In the late 1980s this view was summed up in a grim comment made by a Republican official during a meeting about American semiconductor makers. Sometimes the words have been attributed to Richard Darman, the director of the Office of Management and Budget, and sometimes to Michael Boskin, the chair of the Council of Economic Advisers. Sometimes they have been attributed to someone else. Each of the supposed speakers denies uttering the sentence, but the underlying argument, if not the specific wording, accurately reflects a position all of them have held for years: “If our guys can’t hack it, then let ’em go.”

This vision of “creative destruction,” in the famous words of Joseph Schumpeter, is indispensable as a guide to most economic activity. But it can’t quite explain the semiconductor case. All the familiar variants of the “If they can’t hack it” argument apply to certain parts of the American economy, but they don’t tell us what went on in Silicon Valley.

Consider several elements of the standard analysis. The first and most familiar part is that American management has simply forgotten why it is in business. Too many companies are run by financiers, not engineers or production experts. The executives have feathered their own nests, ignored and mistreated their workers, been too lazy to learn what it takes to please German or Japanese customers, and in general failed to do their best.

A second, related explanation is that American products have become shoddy and backward—and a nation that still excels in basic science has forgotten how to put its inventions to commercial use. The most popular Japanese products of the 1980s were typically hatched in American labs. The United States is following the unwholesome trail blazed by the English, who by the 1930s were famed for having brilliant tinkerer-inventors but an increasingly feeble manufacturing base. Before and during the Second World War, British scientists came up with most of the crucial breakthroughs in radar technology. Yet the high-volume production was done in America, where the huge wartime investment in radar provided a foundation for the postwar American electronics industry.

The third standard belief is that American managers have been ruinously focused on the short term. Given a choice between spending $10 million for new equipment that will pay off in five years and using the money to boost earnings next quarter, they have too often pumped up the earnings, because their own pay is tied to results right now.

All these failings are serious, and one or more of them may seem to explain what has gone wrong when we examine American businesses that are failing. But they weren’t failings of the semiconductor industry in 1980, when it was on the verge of its precipitous decline.

By the rules that American politicians and journalists use to explain success or failure, there was no obvious reason why the industry should have been eclipsed so fast—indeed, relatively much faster than any of the “bad” old industries like steel and cars. The semiconductor industry was in trouble less than two decades after its basic product was invented and barely one decade after its most rapid growth. It is as if other industries in which the United States now considers itself dominant—movies and music, pharmaceuticals, university education, software—were on the skids by the end of the nineties. A Saturn plant for GM, a mini-mill for steel makers, a radical new program of teaching Japanese or German in the schools—all these might be necessary for ginning up the economy. But on the evidence of what happened to Silicon Valley, perhaps those things would not be sufficient. Maybe our understanding of competition is not sufficient either.

DIFFERENT GOVERNMENTS, DIFFERENT SYSTEMS

PETER Drucker, the business strategist, in his 1989 book The New Realities gave one view of how Japan had changed the competitive landscape.

The emergence of new non-Western trading countries— foremost the Japanese—creates what I would call adversarial trade. . . . Competitive trade aims at creating a customer. Adversarial trade aims at dominating an industry. . . . Adversarial trade, however, is unlikely to be beneficial to both sides. . . . The aim in adversarial trade ... is to drive the competitor out of the market altogether rather than to let it survive.

Most American commentators have offered narrower explanations for competitive shifts. For example, Michael Porter, of the Harvard Business School, claims several times in his lengthy 1990 book The Competitive Advantage of Nations that Japanese chip makers succeeded mainly because they were quicker-moving than their American competitors. They made the crucial switch from one form of chip technology to another (from “bipolar” to “metal oxide”) faster than the Americans—and this switch, Porter says, “catapulted the Japanese to industry leadership.”

But this explanation only raises further questions. Why, exactly, would American businesses have been slower to switch from one kind of chip to another, especially when in the preceding decade they had seen more clearly than anyone else the importance of moving fast? Was it simply that they were complacent and satisfied, like the barons of Detroit or Pittsburgh in their dominant days? That doesn’t seem likely—the rhetoric that poured out of Silicon Valley in those days was full of exhortations to stay alert, to adapt or die, to keep moving ahead. Had they forgotten that there were competitors overseas? Did they dismiss them, the way auto makers had dismissed the tinny, laughable cars from Japan? Hardly. Even in 1980 Silicon Valley rang with analyses of the “Asian challenge.”

Individual firms in America may have made strategic errors, and individual firms in Japan may have been persistent and skillful and shrewd. But something else was going on that shaped the actions of these firms, and its effects sometimes dwarfed what any one firm could do. Something was going on that did not often show up in the speeches and the editorials about quality control and foreign-language training and good morale on the factory floor. That something involved factors left out of standard discussions of “improving competitiveness.”

One of these missing factors was government. On its way up and on its way down, the semiconductor industry was driven not just by private companies—although they made every crucial operating decision and came up with every new design—but by a network of government-business interactions. The role of the government is often considered an embarrassing afterthought in American or British discussions of how economies should work. Yet in America, as in every other country that spawned a semiconductor industry, government incentives and pressures shaped the way the industry grew. All around the world the design, production, and marketing of chips were carried out by private firms. It was these firms—America’s Intel and Japan’s Toshiba, Korea’s Samsung and France’s Thomson—that journalists and politicians mainly discussed. But each firm took the shape it did largely because of policies imposed by its government.

Last year the Semiconductor Industry Association published a history of the industry, which says that the industry’s growth since the 1950s can be divided into five stages. The first four are these:

1) The pioneering American efforts in the 1950s and 1960s to develop transistors and integrated circuits, and to learn to make them efficiently and in large quantities.

2) The Japanese entry into the field in the 1960s and subsequent rapid catch-up with American producers in the 1970s and 1980s.

3) The effort of European manufacturers, principally Philips, of Holland, and Siemens, of Germany, to survive in the business in the 1980s, despite the American technical lead and the Japanese competitive surge.

4) The emergence of other East Asian producers, especially those in Korea, to challenge the Japanese in the late 1980s.

Every one of these steps, the history says, depended fundamentally on government policies. Using the emphasis of italics the authors wrote,

Government policies have shaped the course of international competition in microelectronics virtually from the inception of the industry, producing outcomes completely different than would have occurred through the operation of the market alone.

The SIA authors are not, of course, purely detached historians; their organization has lobbied for the U.S. government to respond more aggressively to Japan’s industrial policy for semiconductors. And the SIA can offer evidence to substantiate its view of events. Let’s consider the various stages again.

U.S. genesis. American companies led the world in semiconductor technology thanks to their own efforts, expenditures, and ingenuity—but also thanks to U.S. government support. The government did not directly finance the crucial research that led to integrated circuits, but the companies making these investments understood that if the products were successful, the Department of Defense and NASA would be standing in line as customers.

For instance, in 1962 NASA announced that it would use integrated circuits—the first simple chips, produced by Texas Instruments, Fairchild Semiconductor, and other suppliers—in the computer systems that would guide Apollo spacecraft to the moon, and the Air Force decided to buy ICs to guide its Minuteman missiles. Every history of the semiconductor business regards these contracts as a turning point; they guaranteed a big and relatively long-term market, which no private purchaser could have offered at the time.

The early government purchases let manufacturers increase their volume of production. As volume went up, price went down, and commercial customers began buying more and more chips. This was not an explicit “industrial policy,” but it had the same effect: it gave the companies a reason to invest in new products and new factories.

The Defense Department played another crucial role. Government contracts had paid for some of the research that led to patents. In those cases the Defense Department required the companies, in effect, to share the patents with the American industry as a whole. Sharing with foreign producers was at this time moot, since there were virtually none; moreover, the Defense Department could use national security as a justification for excluding foreign-owned companies. By normal market logic, a company would have no incentive to share its discoveries for nothing. But the government insisted that such discoveries be treated as a public good. This naturally made it easier for small companies, which became the hallmark of Silicon Valley, to find niches and go to work.

The U.S. government did not run a single semiconductor factory, and no doubt would have failed if it had tried. But it provided conditions that could not otherwise have existed, especially the reliable markets in the crucial early years, and thereby got the industry off the ground. The government has never directly operated farms either—and if it had, it would have been less efficient than the sturdy, self-disciplined folk who settled the American heartland. Yet many of the farmers received their land through government land grants; the territory was originally mapped by government (often Army) cartographers; new seeds were continually developed by the government’s agricultural experiment stations. The government did not directly make airplanes, even during the Second World War, and if it had, they would not have been as good as those made by Boeing or Lockheed or what was then known as Douglas Aircraft (and is now McDonnell Douglas). But, even more than with semiconductors, the government provided the initial market—directly, through the War Department, and indirectly, through air-mail contracts that got fledgling airlines going. Americans still talk about “government interference” and “industrial policy” as if they faced a choice between grim, Soviet-style central planning and entrepreneurs completely on their own. In reality, even as lively and entrepreneurial an American industry as semiconductors reflects a mixture of visible and invisible hands.

Japanese ascendancy. The role of government in the rise of the Japanese industry is more familiar and is taken for granted. Without heavy government involvement, Japan’s semiconductor industry would not exist. In the beginning, during the decade or two or three after the Second World War, non-Japanese firms had difficulty exporting to Japan, were virtually forbidden to invest in Japan, and were forced to license their advanced technology to Japanese producers at fees set by the Japanese government if they hoped to do business in Japan at all.

These aspects of Japanese policy had long-lasting consequences—in the 1960s and 1970s foreign suppliers could not build up the relationships with customers in Japan which, they now know, are essential for doing business there—and the rules established by the Japanese government had nothing to do with enhancing free-market competition as an end in itself. Market forces were tools: Sony and Matsushita and NEC and Toshiba competed against one another as hard as they could. But the goal—building an industry in Japan run by Japanese people—was decided on and carried out by the government.

Why does this matter? Because it is so much at odds with the prevailing American idea about why industries rise and fall. Michael Porter says that Japanese semiconductor makers succeeded because of the fierce “internal competition” in Japan. This is a favorite theme in analyses of Japanese success: because Japanese companies had to try so hard to stay alive at home, they were naturally better prepared for competition overseas. Yes, competition inside Japan is intense. But is that what made IBM and Texas Instruments decide to license Japanese patents?

A leading trade economist, Gary Saxonhouse, says that the postwar Japanese emphasis on high-value manufacturing is a “natural” consequence of Japan’s high literacy rate and “factor endowment.” With a lot of people and with not much space, the Japanese naturally tend to produce sophisticated, valuable products. In isolation this may sound logical enough. But it doesn’t fully explain how Japan went from having virtually no semiconductor industry in 1970, when it was already crowded and the people were already well educated, to holding the lead in 1990—or why the foreign share of its market remained constant through the eighties, no matter what ups and downs were occurring in technology and market position and corporate strength elsewhere in the world.

“If our guys can’t hack it”—this is a natural reflection of everything we have been taught about industrial development. But a more realistic view is the conclusion of one authoritative history of the industry, Competing for Control, by Michael Borrus. “State policy would not have succeeded without the efforts, investments, and strategies of Japanese industry,” it says. “But the industry would almost certainly have failed without the state.”

Phases three and four of the industry’s growth, according to the SIA account, were European survival and the appearance of the new East Asian entrants. In each case, as with the U.S. and Japanese examples, government policy made a decisive difference. European governments unashamedly promoted national champions in technology. The French government, in an effort to promote a domestic high-tech industry, subsidized the installation of Minitels, computer terminals for looking up phone numbers and other simple functions, in most homes. The Korean case is less familiar to the Western public but even more overt. Chun Doo Hwan, who seized power in a coup after the previous strongman was assassinated, believed in state-run industrialization. In the mid-1980s his advisers persuaded him, in the words of a magazine report, that “the only way to crack the world semiconductor market would be to orchestrate a massive development project involving every important Korean company in the business.” By the end of the decade analysts from the World Bank—which is committed to the proposition that state guidance usually fails—concluded about Korea that “there is no doubt that the government has sought explicitly to encourage the development of high-tech industries like computers and semiconductors by designating them ‘strategic industries’ entitled to certain preferential treatment.”

That is, the growth and survival of semiconductor industries has not been just a matter of “hacking it,” or fostering competition, or having good schools—although each of these ingredients has played a part. In fact the evidence seems to run exactly the other way. Every country that has waited for the industry to develop “naturally,” through the flux and play of market forces, is waiting still. Canada is as populous and as committed to education as some countries that now make semiconductors. It had as much natural advantage for the semiconductor industry in 1980 as Korea did —indeed, much more, since it was richer, better educated, and closer to major markets and research centers in the United States. Hong Kong ten years ago had as much natural inclination for making semiconductors as Singapore—it was just as crowded, just as much influenced by Confucian culture. The governments of Korea and Singapore deliberately cultivated the industry; the governments of Canada and Hong Kong did not. Korea and Singapore now have a semiconductor industry; Canada and Hong Kong do not. Governments may not be able to pick winners, but they seem to be able to make winners.

WHEN GOVERNMENT INTERFERES

AND there was a fifth stage in the modern semiconductor industry. The authors of the industry study called it “U.S. revival.” “Recovery” or “respite” might be a more appropriate term. Starting in the late 1980s some parts of the U.S.-based industry flickered back to life. The graph lines showing worldwide market share for American-made chips dropped steadily through the late 1970s and the early 1980s— and then, around 1988, ticked slightly up again. Graphs for each specific kind of chip—processors, memory chips, ASICs—all showed similar changes at about the same time.

What happened? Was the main factor the fall in the dollar’s value, which made American chips cheaper for buyers around the world? Probably not—there had been very little correlation between currency changes and worldwide market share before 1988. (That is, when the dollar went up, American market share went down. When the dollar went down, American market share also went down.) Was it owing to a dramatic improvement in American education? Hah. Was it because the companies themselves tried harder in the late eighties than they had in the golden age of the early eighties? Perhaps. The kings of the American industry, Intel and Motorola, kept racing each other with new designs and improved manufacturing processes. The industry in general, like the American auto makers it had once scorned, put new emphasis on quality and service.

But something else happened at about this time: the U.S. government intervened—“interfered,” if you prefer—in the workings of the market to protect American manufacturers. In 1986 the United States and Japan signed the Semiconductor Trade Arrangement, which was renewed and modified in 1991. The results of these negotiations differed from most trade agreements in that they did not attempt to remove barriers or create a level playing field in the Japanese market. The agreement was, in effect, a quota bill, paying less attention to the rules of competition in Japan than to the result. In 1986 the two governments expressed their “expectation” that by the end of 1991 U.S. companies would supply 20 percent of the semiconductors bought in Japan. At the time this agreement was signed, U.S. companies accounted for less than 10 percent of the Japanese market—and about 65 percent of the market in the rest of the world. (The deadline for reaching 20 percent was later extended to 1992.)

The Japanese government hotly denied that this “expectation” was any kind of enforceable promise. When talking to American politicians and reporters Japanese government officials often claimed, doing their best to keep a straight face, that setting a market-share target would mean government interference with private enterprise. Surely the Americans didn’t want that! Yet Japanese press reports made clear that the government was behaving internally as if it believed it had to meet the target of 20 percent. MITI was twisting the arms of big Japanese industries, encouraging them to design in foreign chips when they planned new products.

In 1987 the U.S. government also decided to support Sematech, the chip-making consortium. Its purpose was to encourage American semiconductor companies to cooperate in research areas too risky or expensive for any of them to undertake independently. The government offered research contracts and subsidies to defray research costs. The cost to the federal government was $100 million a year, which was matched by contributions from the industry. And in the wake of all this government “interference” the fortunes of the American semiconductor industry finally improved. The American share of the Japanese market had remained flat, at about 10 percent, through the years when it was supposedly determined by pure market forces. It began rising, into the teens, in the late 1980s, and it neared 20 percent late in 1992, just as the Semiconductor Trade Arrangement had specified.

CAPITALISM AND PLAIN CAPITAL

WHAT happened in Silicon Valley says something about exactly where our standard economic theories may mislead us. Beneath all the ups and downs of high-tech competition, one difference between the Japanese and American industries matters more than anything else. The Japanese companies had more money. They could build newer and bigger factories, because they had more capital to invest. They could prevail in price wars, because they had bigger war chests with which to cover losses. They could retain their work forces, because in recessions they did not have to lay employees off. They could invest in ten potentially promising technologies at once (and fail at nine of them), because their R&D budgets allowed more room to experiment.

According to the deepest assumptions of capitalism, governments can never outguess the market about where money should go. All that governments can—and should—do is make sure the necessary signals flow. Signals come in the form of prices; therefore the paramount goal is to “get prices right.” After that everything should work on its own.

This vision of capitalism is like the American vision of democracy. The government has no right to define what a good society may be. All it can do is set the rules by which people express their views, exercise their rights, and cast their votes. According to American theory, if the system is fair, the results by definition must be good. So it is with the business system: if competition is on a level playing field, if new competitors can enter the game, if customers can choose freely and fairly among the offerings, then whatever happens will be for the best. The right amount of money will be available for investment and the right number of new ideas will pop up for putting the money to use.

By this logic the semiconductor industry in 1980 was structured about as well as any industry could be. It was fluidity and frictionlessness exemplified. The chip-making merchant firms of the valley acted as much like the “economic men” of academic theory as real human beings ever will. In the textbooks, economic man goes through life calculating the costs and benefits of different decisions—and is not swayed by irrational or sentimental concepts, including “Buy American” (or, for that matter, “Buy Japanese” or “Buy French”). The economic men who ran companies in Silicon Valley had every reason to sell equipment to whoever came up with money—even when they knew that many of the buyers might in the long run become their competitors.

In the early 1980s Silicon Valley companies raced one another to get more customers, to remove cost and waste from their operations, to increase their production runs. Every one of these steps drove prices down, down, down. The more intense the competition, the lower the prices—and the lower the prices, the thinner the profit margin. In the search for higher profits firms were forced to innovate once again.

If the American approach boiled down to getting prices right, the Japanese approach boiled down to getting enough money—not worrying about theoretical efficiency, not being concerned about the best rules for competition, but focusing only on getting the nation’s money into the hands of its big manufacturing firms. If companies could get more money than their competitors, they would eventually prevail, no matter how “fair” the competition might be. This was a view of capitalism that depended on capital. It did not concentrate on rules but focused instead on one goal: to build industry as quickly as possible. It was a means of catching up, and it worked.

What was most mysterious about this approach, from the Western perspective, was that it seemed to be divorced from normal calculations of profitability. In 1985 American semiconductor officials, preparing a dumping complaint against their Japanese rivals, found a portion of a Hitachi sales presentation that emphasized the “win at any cost” spirit; the rule it lays out is restated as a short sketch after the excerpt. It said,

Quote 10% Below Competition
If they requote . . .
Bid 10% under again
The bidding stops when Hitachi wins . . .
Win with the 10% rule . . .
Find AMD and Intel sockets . . .
Quote 10% below their price . . .
If they requote,
Go 10% again
Don’t quit until you win!
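
Only to make the procedure concrete, here is a minimal sketch of the memo’s rule, written in Python. Everything but the 10 percent undercut is invented for illustration: the opening quote, the competitor’s counter-quotes, and the function name are hypothetical, since the excerpt specifies nothing else.

    # Illustrative sketch of the "10% rule" in the Hitachi excerpt above.
    # The numbers and the competitor's behavior are hypothetical.
    def ten_percent_rule(opening_quote, competitor_requotes):
        """Undercut the latest competing quote by 10% until no requote comes back."""
        bid = opening_quote * 0.9            # "Quote 10% Below Competition"
        for requote in competitor_requotes:  # each counter-quote from the AMD/Intel side
            bid = requote * 0.9              # "If they requote ... Bid 10% under again"
        return bid                           # "The bidding stops when Hitachi wins"

    # Example: a competitor willing to requote twice before giving up.
    print(ten_percent_rule(100.0, [85.0, 70.0]))  # prints 63.0

The point of the sketch is how little is in it: no cost floor, no margin calculation, no stopping condition except the competitor’s surrender.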

ANOTHER WORLD

WHEN the American industry was doing everything right according to American economic theory, it began to collapse. When European, Japanese, and Korean producers broke all the rules of American “market rationality,” they started to pull ahead. The very existence of the American industry could not be explained by natural market forces—nor could its implosion, nor could its partial recovery in the late 1980s. Every one of these trends reflected not just market forces, not just the dead hand of the state, but some interaction between the two that is usually missing from the public debate.

You don’t have to care about semiconductors, or really even about economics, to be intrigued by this tension. A well-established set of theories, which undergirds our national policy and runs through nearly every speech and editorial about America’s economic health, cannot explain what has happened in a major industry in the real world. Academic economists have offered nuances and refinements that fit the facts of the semiconductor case more closely. For example, they emphasize that the “externalities” of a high-tech industry can make it sensible for governments to subsidize the industry’s growth. Strong semiconductor and aircraft industries generate high-wage jobs and make it easier to attract other high-value industries in the future; therefore governments may be sensible in offering subsidies, even though the simplest version of economic theory says that the choice should be left strictly to the invisible hand. But very few of these refinements make their way into the public debate, where we’re usually presented with the stark choice between free markets and state control.

We can resolve this tension by disregarding the evidence—as most of us do when concentrating on consumer products like cars and TVs rather than semiconductor chips as the symbol of trade problems. We can invent exceptions and special clauses to account for the variation—much as the Ptolemaic astronomers did when they tried to fit the motion of the planets into their theory that the sun revolved around the earth. Or—we can look again at our basic ideas.