The Roaring Nineties
As the chairman of Bill Clinton's Council of Economic Advisers, and subsequently as the chief economist of the World Bank during the East Asian financial crisis, Joseph Stiglitz was deeply involved in many of the economic-policy debates of the past ten years. What did this experience tell him? That much of what we think we know about the prosperity of the 1990s is wrong. Here is a revised history of the decade, by the winner of the 2001 Nobel Prize in Economics.
At the height of the 1990s economic boom—a period of unprecedented growth—capitalism American-style seemed triumphant. After sluggishness in the 1970s and 1980s, productivity in the United States had risen sharply, to levels that exceeded even those of the boom following World War II. Globalization was in full swing, and in ways that redounded distinctly to the good of this country. The North American Free Trade Agreement (NAFTA) and the so-called Uruguay Round of international trade negotiations promised to bring untold benefits to our economy. The flow of capital to emerging markets had multiplied sixfold in just over six years—a remarkable increase, driven by the search for ever higher returns. U.S. representatives at G-7 meetings and elsewhere boasted of our success, preaching to the sometimes envious economic leaders of other countries that if they would only imitate us, they, too, would enjoy such prosperity. Asians were told to abandon the model that had seemingly served them so well for two decades but was now seen to be faltering. Sweden and other adherents of the welfare state appeared to be abandoning their models as well. The U.S. model reigned supreme. There was even talk of a radical New Economy, in which incomes would soar and the very idea of a business cycle would be relegated to history.
There is no question that the nineties were good years. Jobs were created, technology prospered, inflation fell, poverty was reduced. I served in the Clinton Administration from 1993 to 1997, and all of us who were involved in U.S. economic policy during those years benefited from a happy confluence of events. We eagerly claimed what credit we could for the prosperity; the American people, wanting to believe that the economic good times were a matter not just of luck but, rather, of good management, willingly gave credit to those responsible for shaping economic policy, in the hope that under the continued stewardship of such policymakers this prosperity could be prolonged.
But the recession of 2001 showed that even the supposed best of economic management could not insulate the economy from downturns, and that the business cycle was not dead. The bursting of the stock-market bubble showed that New Economy rhetoric contained more than a little hype. And the Enron, Arthur Andersen, Merrill Lynch, and Adelphia scandals presented another side of American capitalism. Now the economy is setting new kinds of records: WorldCom's is the largest bankruptcy in history; the fall in the stock market is the largest in decades. As the market has plunged, those who confidently plowed their savings into stocks have found their retirement incomes in jeopardy.
It would be nice for us veterans of the Clinton Administration if we could simply blame mismanagement by President George W. Bush's economic team for this seemingly sudden turnaround in the economy, which coincided so closely with its taking charge. But although there has been mismanagement, and it has made matters worse, the economy was slipping into recession even before Bush took office, and the corporate scandals that are rocking America began much earlier.
The history of the 1990s needs to be rewritten. How are we to assess that decade in light of what we are seeing today?
For seven years, from 1993 to 2000, I was in a position to observe closely what was going on in Washington, first as a member of the Council of Economic Advisers, later as the chairman of that body (a Cabinet-level position), and then as the chief economist and senior vice-president of the World Bank, in the tumultuous years of the global financial crisis and the faltering transition of Russia and the other formerly Communist countries to a market economy.
I could see what effects individual players can and can't have on the behavior of the economy. We in the Clinton Administration took office at the right time. Some of what happened was the consequence of forces set in motion well before we arrived: Investments in high technology finally began to pay off, leading to increased productivity. Income inequality declined, owing in part to a change in the "education premium" (the difference in income between those with and those without a college education), which in turn was due to ordinary market forces of supply and demand. Some Clinton policies—including increased support for Head Start, an effort to compensate for the disadvantages of some of America's poorest children—will make a difference in the future, but that difference won't show up in the data for years. Future administrations will be the beneficiaries of the wise policies instituted by the Clinton Administration.
Of course, even if the productivity increases were partly attributable to long-term forces predating the Clinton Administration, its policies were pivotal in the recovery (though for reasons that were quite different from those often cited by Administration officials). The Fed deserves credit for not spoiling the boom—and it had the potential to do so. The inflation-fearing Fed could have slammed the brakes on the economy too hard, by raising interest rates too high, bringing the boom to a premature end—as it had done several times in earlier decades. Chairman Alan Greenspan's good sense prevailed over the fears of the inflation hawks on the Fed's board, and for this Greenspan should be praised.
But at the same time, the groundwork for some of the problems we are now experiencing was being laid. Accounting standards slipped; deregulation was taken further than it should have been; and corporate greed was pandered to—though not to the extremes taken by the Bush Administration. The U.S. economy will pay the price for years to come. Many of the mistakes were debated during my time at the White House; it sometimes seemed that we were arguing mainly over the soul of the Democratic Party. But those debates were also about the future of the U.S. economy.
Any story of the economic boom of the nineties has to begin with the recession that preceded it, in 1991—a recession brought on in part by the long-overdue bursting of the 1980s real-estate bubble. This bubble was caused primarily by the tax giveaway of 1981 and by the poorly designed financial-sector deregulation carried out under Ronald Reagan. The infamous savings-and-loan debacle—in which the U.S. government had to bail out banks devastated by nonperforming real-estate loans—cost not only the federal budget but also the economy dearly. In its aftermath new banking regulations were put in place, and the flow of capital dried up—as, in a slowly unfolding way, did economic activity. The Fed failed to recognize the underlying source of the problem. It lowered interest rates, but not quickly enough. The economy went into a recession that was described by many as short and shallow, but it didn't feel that way to those who lost their jobs. Indeed, a closer look at the data shows that the downturn was serious; as measured by the gap between the economy's potential and its actual performance, it was as bad as the average postwar downturn. Bill Clinton was to benefit from this and other economic miscalculations—in ways that go well beyond his election (which itself owed much to the faltering economy).
According to the conventional wisdom, when Clinton took office he became convinced that before committing substantial government spending to important social programs, he had to restart the economy—and to do that he had to reduce the federal budget deficit. As a result of Reagan's tax cuts and the increases in expenditures that both his Administration and Congress had pushed for, the deficit had soared to close to five percent of the gross domestic product. Though Clinton had to trim his own ambitions, he did the right thing and cut the deficit. Interest rates came down, and the recovery began.
But there's a basic problem with this story. It is inconsistent with what is taught in virtually every economics course in the country—namely, that deficits are good for employment, and that reducing the deficit during a downturn is a particularly bad idea. (Those of us advising Clinton were, of course, aware of this; that's why we tried, as far as possible, to "back load" the deficit reduction—that is, to concentrate the cuts in future years, by which time, we hoped, the economy would have recovered. Meanwhile the markets, anticipating the future reduction, would bring down interest rates right away.) But if deficit reduction should have slowed the recovery, to what can we attribute the recovery's vigor? To a series of lucky mistakes, I believe. By lowering the deficit we inadvertently ended up recapitalizing a number of American banks, and this, as much as anything else, refueled the economy.
Here is how it worked. In the aftermath of the savings-and-loan debacle, new regulations required banks to maintain adequate capital on which to draw if things went sour. The amount of capital banks need, of course, is related to how much risk they assume. Economists who were thinking about the problem, including Michael Boskin, the head of the first President Bush's Council of Economic Advisers, agreed that in this context risk included not only bankruptcy but also a decrease in the value of assets. From this perspective, long-term government bonds are risky, even with no chance of default, because they can decrease in value when interest rates rise. But during the early 1990s the Fed decided to allow banks to ignore this risk and treat long-term government bonds as safe. This made the banks happy by increasing their profitability, at least in the short term, because long-term bonds yielded high returns. By taking deposits and buying long-term bonds, they were able to make seemingly large profits. (In 1991, for instance, long-term government bonds were yielding 8.14 percent while Treasury bills averaged 5.4 percent and rates on certificates of deposit were typically far lower.) This was a dangerous strategy. If interest rates had risen, as they might well have if runaway deficits had continued, then bond prices would have plummeted, and the federal government would again have been left to pick up the pieces. (In other words, the strategy was "profitable" only because of inappropriate accounting and regulatory practices; the banks should probably have been forced to continue setting aside reserves to protect them against the risk of a drop in the price of long-term government bonds, but they were not, and they did not do so.)
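The arithmetic behind that gamble is easy to sketch. What follows is a purely illustrative calculation, not anything from the article: the 8.14 and 5.4 percent yields are the 1991 figures quoted above, while the 4.5 percent deposit rate and the thirty-year maturity are assumptions made for the example.

```python
# Illustrative only: the 1991 long-bond (8.14%) and T-bill (5.4%) yields are
# from the text; the 4.5% deposit rate and 30-year maturity are assumptions.

def bond_price(coupon_rate, yield_rate, years=30, face=100.0):
    """Present value of a fixed-coupon bond: discounted coupons plus principal."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

# The carry: take deposits at an assumed 4.5%, buy long bonds yielding 8.14%.
spread = 0.0814 - 0.045
print(f"seeming profit on $100 of deposits: ${100 * spread:.2f} a year")

# The risk: a one-point rise in rates knocks down the bond's market value.
p0 = bond_price(0.0814, 0.0814)   # bought at par, so p0 = 100
p1 = bond_price(0.0814, 0.0914)   # the same bond after rates rise one point
print(f"loss if rates rise one point: {100 * (p0 - p1) / p0:.1f}% of the bond")
```

On these assumptions the bank pockets about $3.64 a year per $100 of deposits, while a one-point rise in rates erases roughly ten percent of the bond's value, a loss equal to several years of carry. That asymmetry is why treating long bonds as riskless was, as noted above, a dangerous strategy.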
Fortunately, owing in part to Clinton's success in cutting the deficit, long-term interest rates did come down. The price of long-term bonds increased. The risky gamble had paid off, and as a result the banks' balance sheets were greatly improved. And because long-term interest rates were now low and long-term bonds were a less attractive investment, banks began to look elsewhere for profits. They went back to their real business, which is lending.
Here the second lucky mistake occurred. The Fed, like many other forecasters at the time, thought that inflation would pick up as soon as unemployment fell below about six percent. This critical rate, below which unemployment cannot go without stimulating inflation, is called the NAIRU—the "non-accelerating inflation rate of unemployment." If the Fed had been able to anticipate the extent to which lending, and hence economic activity, was subsequently to pick up, it would have tightened monetary policy (by raising interest rates) early on, in an effort to thwart inflation before it started. But the Fed didn't foresee the increased economic activity, and so it didn't raise interest rates—to the economy's good fortune. By September of 1994 unemployment had crashed through the NAIRU barrier, falling by December to 5.5 percent—but, contrary to the Fed's model, inflation didn't pick up. Eventually unemployment fell another two percentage points, still without stimulating inflation. This course of events powerfully benefited the poor: the reduction in unemployment reduced welfare rolls as much as any other measure we might have undertaken. It played a major role in other social changes, too, including a drop in the crime rate.
The Fed had not fully appreciated the consequences of rapid changes in the labor market: higher levels of education, weaker unions, a more competitive marketplace, increased productivity, and a slower influx of new workers meant that the economy was able to operate at much lower rates of unemployment without triggering inflation. As evidence mounted that lower unemployment need not mean inflation, Alan Greenspan, to his credit, grasped the new reality. While the inflation hawks at the Fed continued to fret (they said inflation had to be shot before one saw the whites of its eyes), Greenspan raised interest rates more slowly than they wanted. If the hawks had had their way, the period of growth would very probably have been cut short.
Thus the way was paved for the largest peacetime expansion in U.S. economic history by a combination of pragmatism, luck, and fortunate mistakes. But other mistakes turned out to be less salubrious.
America has always taken pride in its innovativeness. Not all sorts of innovation, however, lead to higher productivity. Sometimes companies can increase their profits more by figuring out how to avoid taxes than by producing better products; sometimes they can maximize their wealth by gulling unwary investors rather than by actually inventing goods that yield high returns.
It is understandable that business wants to boost profits and that Wall Street enjoys the rise in share prices that comes from these higher profits. But as we have seen with Enron, Xerox, and WorldCom, it is important for the credibility of our stock markets that the profits booked be real, not based on phony accounting. Alas, the U.S. government got involved in options accounting, to the detriment of our financial markets.
The Financial Accounting Standards Board, the supposedly independent body that, as its name suggests, sets accounting standards, recognized a problem in the fact that executive stock options were assumed for accounting purposes to have no value. The problem was that when executives are paid in stock options, what they receive does not come out of thin air: other stockholders' share values are diluted when the options are exercised.
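The dilution is simple arithmetic. As a sketch with wholly hypothetical numbers (none of them from the article), suppose a firm worth $1 billion, with 100 million shares outstanding, lets executives exercise 10 million options at a $5 strike while the stock trades at $10:

```python
# Hypothetical numbers, chosen only to illustrate the mechanism: options
# carried at zero expense still cost existing shareholders real value
# whenever they are exercised below the market price.

firm_value = 1_000_000_000   # assumed value of the firm's existing assets
shares_out = 100_000_000     # shares outstanding before the options
options = 10_000_000         # executive options exercised
strike = 5.0                 # assumed strike price, well below market

price_before = firm_value / shares_out                       # $10.00 a share
# Exercise brings in cash at the strike but adds new shares at full value.
price_after = (firm_value + options * strike) / (shares_out + options)
print(f"before: ${price_before:.2f}   after: ${price_after:.2f}")
# before: $10.00   after: $9.55 -- a 4.5 percent transfer from shareholders
# to executives that, under the old rules, never showed up as an expense.
```

Booking the options at a value of zero, as the old rules allowed, amounted to pretending this transfer never happened.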
In 1993 and 1994 the FASB proposed changing the treatment of executive stock options. But the companies that use options were happy with the status quo. They fully realized what it would mean if their shareholders better understood how such options decreased the value of their shares: stock prices would fall, and so would executive compensation. Executives didn't want this to happen—at least not until they had cashed in. Wall Street and Silicon Valley, among others, had a mutual interest in making the FASB back down. So they turned for help to both Congress and the Administration. They got support from the Treasury and Commerce Departments. But we at the Council of Economic Advisers thought it wrong that powerful political forces should interfere with the decisions of the FASB. The board was supposed to be independent precisely to avoid such meddling. We noted that although it was difficult to put a value on options (the reason companies gave for wanting not to give them a value at all), it was also difficult to measure many other items in the accounting framework. Providing an accurate estimate of depreciation, for instance, is far more difficult than accurately estimating the value of an option. Clearly, a value of zero was wrong, and we could and should do better. But as is often the case, politics won over principle: the Treasury and Commerce Departments sent a letter to the FASB, arguing against accounting for options. Other pressures were brought to bear, and the FASB finally gave in. As the events of the past few months illustrate, this was a mistake.
In truth, more than the principle of good government was at stake. Share prices provide signals for investment. When those prices don't reflect the best information available, the risk increases that resources will be poorly deployed.
Thus did one factor contributing to the stock-market bubble endure, and the insidious idea that fancy financial techniques could be used to mislead shareholders was reinforced. We were all told, of course, that what was really going on with options and executive compensation was explained in the footnotes of annual reports. But even sophisticated analysts sometimes couldn't interpret what was being said; and no one could figure out Enron's footnotes. The whole point of having accounting standards is to make it easier for investors to evaluate what is going on inside a firm.
Misleading accounting practices, along with tax incentives, encouraged companies to reward their executives with stock options rather than with incentives that might have improved the companies' true performance in the long run. This created a vicious circle, in which executives had even more reason to engage in misleading accounting practices. The executives were being rewarded not on the basis of their companies' performance but on the basis of stock prices.
The examples of Enron and Global Crossing prove that incentives matter, and that markets do not always provide the right incentives. That is why government has an important role. Every game has to have rules, and government sets the rules of the economic game. If the rules promote special interests, or the interests of corporate executives, then the outcomes are not likely to promote general interests, or the interests of small shareholders.
The accounting industry, meanwhile, had ample incentive to cooperate with acts that were thought of not as defrauding American investors—after all, investors, too, seemingly gained in the short run—so much as "realizing full market potential." To his credit, Arthur Levitt Jr., the former head of the Securities and Exchange Commission and one of the true and often unsung heroes of the Roaring Nineties, recognized the potential for conflict of interest when accounting firms were not only doing their routine business—that is, accounting—but also making millions by providing consulting services. He pushed for regulations to prevent accounting firms from doing lucrative consulting for companies that they also audited. Unfortunately, the accounting firms and their allies, who said "Trust us," prevailed. Now we recognize Levitt's wisdom: when incentives for misbehavior are strong, trust can go only so far.
Corporate CEOs had other accomplices, especially within the investment-banking community. Investment banks disseminated poor information that led to mispricing; to high volatility, as their misdeeds gradually became apparent; and, finally, to a lack of confidence in the markets.
It is now common knowledge that analysts at investment houses deliberately touted stocks their firms represented even when internal e-mails suggested that those stocks were not well thought of. Levitt pushed for fair disclosure, so that information provided to a few analysts would be publicly provided at the same time, but without much success.
While Levitt endeavored unsuccessfully to reduce conflicts of interest, legislative changes made matters even worse. Some twenty-five years earlier America had begun its love affair with deregulation. It was clear that something was wrong with the vast array of regulations put into place during the New Deal. The world had changed, and the regulations had not kept pace. Too often, however, the question was not What is the right regulatory structure? but How do we get rid of the regulations as fast as possible?
America should have learned a lesson from the savings-and-loan debacle, which cost U.S. taxpayers several hundred billion dollars and was one of the main factors in the 1991 recession. The collapse of the S & Ls resulted from excessive deregulation—and bad accounting practices—during the Reagan years, which is why in 1989 the Bush Administration imposed a more balanced regulatory regime.
Stock options and other badly designed compensation schemes proved as problematic in the financial sector as elsewhere, with even more serious consequences. When the market was booming, some investment banks were emboldened to violate regulations and ethical standards by demanding kickbacks or extra commissions from people awarded valuable initial-public-offering (IPO) allocations, a practice exposed by the now famous SEC investigation of Credit Suisse. The banks worked with Enron and others to create sham transactions—for example, disguising loans as prepayments on energy contracts. The banks made money from the deals, and the value of their stock went up—just as Enron's did when investors were unable to decipher the true magnitude of its outstanding liabilities.
If there was ever a time not to push deregulation further, the nineties was it. Even legitimate new financial-engineering techniques meant that investors and regulators alike were having an increasingly difficult time assessing companies' balance sheets. Such innovations had created new opportunities for those who wished to provide misleading information; deregulation would simply expand those opportunities. But the forces for deregulation were never greater than in the Roaring Nineties: the profits to be made were enormous, and with the abiding faith in the market economy seemingly confirmed by that economy's stupendous performance, banking interests saw an unprecedented opening. For more than half a century commercial banking had been separated from investment banking, with good reason. Investment banks push stocks, so if a company whose stock they have pushed needs cash, it becomes very tempting to make that company a loan.
The Glass-Steagall Act of 1933, which separated investment banking from commercial banking, recognized the conflicts of interest that can arise when the two are conflated. But concerns about keeping them separate were put aside after the arrival at the Treasury Department of Robert Rubin, in 1995. The big banks saw getting rid of Glass-Steagall as an opportunity to become even bigger. Treasury argued that scrapping the law was of no consequence, because banks had learned how to circumvent it anyway. (If this had been so, the appropriate response would, obviously, have been to try to limit the circumvention.) Treasury also argued that it could address the conflicts of interest (which it admitted) by constructing barriers between the banks' parts—"Chinese walls," they were called. Of course, if such measures had worked, that would have undermined the most cogent argument for eliminating the formal separation in the first place. One cannot simultaneously claim that it is important that banks be integrated, to take advantage of what economists call economies of scope (the benefits that businesses can reap by working in many different areas), and also that it is important for the parts of a bank to be compartmentalized, to avoid any conflicts of interest. In retrospect it is clear that Chinese walls did not work—or did not work well enough to prevent serious problems from arising. For example, banks continued to lend to Enron even as its problems began to surface; the profits the banks made (they got fees for Enron's deals) more than compensated them for the risk in lending.
It is no coincidence that three of the sectors involved in today's economic problems—finance, telecommunications, and electricity trading—were all subject to deregulation. Almost every major episode of deregulation gives rise to a bubble-and-burst cycle, and the progression of events in these three sectors proved to be no exception. In telecommunications a new regulatory regime was required; the previous one, sixty years old, was clearly unsuited for the New Economy. No matter what legislation was passed, problems would have arisen. But as market forces were unleashed, it became part of the conventional wisdom that whatever company established itself in the marketplace first would be able to make untold profits. The race began. Naturally, the race required money, and deregulation in the financial sector played a central role here.
The complexities of the new economic world—new technologies, new financial instruments, a more integrated global economy—were putting strains on the old regulatory system. Change was clearly needed. Accounting had to learn to deal with new financial instruments, such as derivatives, and the myriad of techniques by which liabilities could be moved off the balance sheet; regulatory bodies had to cope with globalization and new technologies. But special interests, their power augmented by an unwavering faith in markets, remained dominant in policymaking and continued to chant the mantra of deregulation.
Deregulation policies did help to fuel the economy in the short run. They created a stock-market bubble that made some investors into millionaires overnight. But they also fed an irrational exuberance (to use Alan Greenspan's famous phrase) that eventually led to a huge misallocation of resources. Money that could have gone into basic research, to improve the country's long-term prospects; money that could have been spent to improve the deteriorating infrastructure; money that could have been invested in improving both dilapidated inner-city schools and rich suburban ones, instead went into useless software, mindless dot-coms, and unused fiber-optic lines.
Those who were supposedly guiding the country's economy benefited from the euphoria brought on by false accounting no less than did the CEOs. Yes, the statistics looked good in the final years of the bubble, the final years of the Clinton Administration. But the bursting of the bubble has put America's economy in jeopardy.
Even as the belief in unfettered and self-regulating markets led to government's retreat from areas where it was needed, there was, ironically, somewhat less enthusiasm for eliminating government from areas where its role was far more questionable. Early in the Clinton Administration, Labor Secretary Robert Reich, along with the Council of Economic Advisers, pushed for reducing what is now commonly known as "corporate welfare"—direct subsidies, tax breaks, protection from foreign competition. However, little progress has been made in reducing corporate welfare; in fact, new forms have developed and old forms have been altered to keep them alive.
One example is the global aluminum cartel that was spearheaded by Paul O'Neill, who later became George W. Bush's Treasury Secretary. Shortly after taking office last year, O'Neill announced that the problem with the world was not too much capitalism but too little. Yet seven years earlier, as chairman of the board of Alcoa, he had asked for government help in stopping market forces from operating, because they were leading to a worldwide decline in aluminum prices. The aluminum industry concluded that the best way to restore "stability" to the market (meaning high prices and corporate profits) was an international cartel. This cartel put the ordinary workings of competitive markets aside: each member country was assigned a fixed output.
Another example is the multibillion-dollar tax break for U.S. export firms, which so infuriated our trading partners that in 1998 they brought action before the World Trade Organization and won. Rather than abandoning the subsidy, Congress attempted to make it compatible with WTO rules. Nevertheless, just nine months after the new legislation was passed, in November of 2000, the WTO ruled that it, too, was not compliant.
A third example is the argument of Clinton's Treasury Department (pushed by Wall Street lobbyists, whose firms stood to make millions) that the government agency charged with making enriched uranium, the key ingredient in nuclear weapons as well as in nuclear power, should be privatized—which, as one critic said, would put America's security up for sale, and which looks increasingly absurd as anxiety over nuclear proliferation has increased.
Finally, and perhaps most memorable of all, there is the publicly orchestrated but privately financed bailout of Long-Term Capital Management (LTCM), one of the world's largest hedge funds, in September of 1998. Hedge funds are typically aimed at the well-off, usually the very well-off; and although the term "hedge" suggests that investors hedge their bets, they often take large speculative positions.
Only a year earlier, in response to allegations from the Prime Minister of Malaysia and others that hedge funds were responsible for the financial meltdown in East Asia, Treasury and the International Monetary Fund had declared that such funds were simply too small to cause a problem, even in a small country like Malaysia. But now, with friends in dire need of help at LTCM, they argued that the demise of a single firm might wreak havoc in the global financial marketplace. Whether this fear was justified will never be known; but the sudden turnaround in attitude was jarring. Earlier, Treasury and IMF officials had talked about the importance of maintaining a strict separation between government (including financial regulators) and the private sector. Not doing so, they said, would inevitably lead to crony capitalism, as it allegedly had in East Asia. But here was a government regulator serving as the ringleader in a private bailout. During the East Asia crisis U.S. officials had condemned conflicts of interest in corporate governance—yet in the case of LTCM, CEOs were using their power to commit their publicly owned companies' resources to bail out a firm in which in some instances they had private equity interests. Around the world the LTCM bailout was seen as the emblem of our hypocritical attitude during the Roaring Nineties.
The Clinton Administration's foreign economic policy is generally regarded as another great 1990s triumph. In its early days the Administration took heroic political risks: pushing through NAFTA and the Uruguay Round, and circumventing Congress to finance a bailout of the Mexican economy. But from our current vantage point such acts appear to have set the stage for one of our greatest failures—the mishandling of the U.S. approach to globalization. That mishandling has left the world in general with a heightened sense of economic insecurity, and the developing world with a strong feeling of unfairness.
"The Social Contradictions of Japanese Capitalism" (June 1998)
The economic woes of Asia have been much written about—in purely economic terms. But behind many of those woes lies a social crisis in Japan. By Murray Sayle
Take the East Asia crisis, a disaster for which the U.S. Treasury Department is at least partly to blame. Treasury, in conjunction with the IMF, encouraged rapid capital-market "liberalization"—that is to say, the opening of underdeveloped markets to the onslaught of highly speculative investment, which can move in and out overnight and leave economic devastation in its wake. With the high savings rate in the East Asian countries there was no need for them to open up rapidly; those countries, in other words, had enough domestic capital for productive investment to make the need for an influx of foreign capital less urgent. But the fundamentalist market ideology demanded that the free flow of capital that had worked for the United States be allowed to benefit developing countries as well. That such free-flowing capital would benefit speculators was clear; but there was little evidence that it would promote economic growth. Indeed, the overwhelming evidence—shown in a number of studies by the World Bank—is that rapid liberalization is extremely risky for developing countries. Treasury ignored this evidence and pushed for faster liberalization. It won—and the world lost.
Having in large part created the East Asia crisis, Treasury and the IMF then designed a bailout. More than $100 billion was used to help shore up Asian countries' sinking exchange rates and to provide funds with which to repay Western banks. It was the banks as much as the countries that were being bailed out. In fact, East Asia didn't benefit much. High exchange rates (higher than they might otherwise have been) did allow a few rich people to spirit money out of their countries on favorable terms, and the bailout did enable some Western banks to recoup more than they otherwise might have. But in the countries the bailout was designed to help, unemployment soared, GDP and real wages plummeted, and governments were left billions of dollars in debt.
The countries that fared the best were precisely those that didn't heed the so-called Washington Consensus. Malaysia, which not only had no IMF program but also, despite sharp criticism from Treasury, had imposed controls on the outflow of capital, experienced the shortest and shallowest downturn. China avoided a downturn altogether by pursuing expansionary monetary and fiscal policies—the exact opposite of what Treasury and the IMF were recommending for other countries in the region. Meanwhile, Thailand, the country that followed U.S. advice most closely, did not return to the pre-crisis level of GDP for more than four years.
East Asia is no anomaly. Much the same pattern—a misguided market liberalization followed by a major economic crisis followed by a shortsighted bailout attempt—has occurred in Russia, for instance, and in Argentina. And although the economic scars of these crises are deep, the political scars may turn out to be even deeper. I do not believe that the United States designed these policies to benefit itself at the expense of other countries—but the fact is that in the short term some U.S. and European companies did benefit from these crises. Western banks made money when they went into East Asia, Russia, and Argentina, and they made money when they were brought in to help restructure economies in the aftermath of the crises. The IMF pushed policies—among them very high interest rates—that exacerbated the downturns and led to bargain-basement prices for exports. Treasury and the IMF then insisted that the countries sell their assets at these low prices. At the macro level the United States benefited both from lower prices for imports and by becoming a safe haven for capital fleeing crisis-ridden countries.
Today countries around the world view with cynicism the economic ideas we were trying to export. They have come to believe that our push for liberalization and privatization was guided in no small measure by our own corporate and financial interests. Our bailout plans, which provided billions of dollars to help repay banks but denied millions of dollars in food and fuel subsidies for the very poor, only confirmed this impression. So did, for example, Treasury's successful resistance, during the East Asia crisis, to Japan's proposal for an Asian Monetary Fund, which would have allowed countries in the region to help one another. Japan itself offered $50 billion—much as the United States had done for Mexico during its economic crisis not long before.
In the midst of the East Asia crisis many economists, along with some officials at Treasury and the IMF, blamed the affected countries for a lack of financial transparency and for the prevalence of crony capitalism there. Transparency is important, as the accounting scandals show so forcefully. The issue is one to which I was especially attuned, because the work for which I was to be awarded the Nobel Prize centered on the consequences of imperfect and asymmetrical information (that is, information some people have and others don't), including the problems presented by conflicts of interest. But the crisis was largely the result of the overzealous market liberalization promoted by Treasury and the IMF. The quick recovery of Malaysia demonstrates this clearly: had the crisis stemmed from deep-seated institutional failures, as alleged, the country could never have recovered as quickly as it did.
In fact, two standards were applied: Asian banks and companies were told to become more transparent even as the United States resisted regulations that would have required European and U.S. banks, offshore banking centers, and hedge funds to do the same. America's hypocrisy became even more evident in the aftermath of September 11, when the role of secret offshore bank accounts in financing terrorism became clear, and the U.S. position regarding such accounts suddenly changed.
Our emerging understanding of the 1990s requires that we admit, to ourselves and to the world, that we were engaged in a misguided attempt to achieve growth on the cheap. Instead of curbing consumption to finance our boom, we borrowed—heavily, year after year—from abroad. We did this to fill the widening gap between what we were saving and what we were investing—a gap that opened in earnest under Ronald Reagan but grew under George H. W. Bush and Bill Clinton, and has reached new dimensions under the new President Bush. (At least during the Clinton years borrowing went to finance investment, rather than—as in the Reagan and first Bush Administrations—a national consumption binge.) Borrowing cheaply for high-return investments makes sense, of course, if all goes well: returns are more than sufficient to pay what is owed, with interest. For years we were extraordinarily lucky.
However, in the 1990s we began to test our luck, not to mention that of the countries we told to follow our example, and we continue to test that luck. We have put ourselves deep in debt, not to finance productive investments but, rather, to finance wasteful projects: in the 1980s empty office buildings; in the 1990s fiber-optic systems that will not see light for years, and software that has interfered with business productivity rather than enhancing it; today a tax cut that disproportionately benefits the rich, fueling a consumption extravaganza that, though it may have prevented a greater slowdown, has not provided the foundations for future economic growth. It is still not clear how much of the private so-called investment of the 1990s was sheer waste; but even if we consider that only a fraction of the erosion in stock values is attributable to bad investments, the figure must be in the hundreds of billions of dollars. We are still so well off that we may not suffer immediately from this diminution in our wealth, but the consequences are already becoming clear: a loss of confidence not only in markets, and especially the stock market, but in government; a suspicion that the system is rigged to be an insider's game; a blow to America's moral leadership abroad. The attack on American-style globalization may be driven by Luddites and protectionists—but it is fed by a perception of American hypocrisy and the unfairness of the new global regime. The Uruguay Round—which forced developing countries to open up their markets to the products of the developed countries, while leaving in place protection and subsidies for many of the goods produced by the developed world—was so unbalanced that sub-Saharan Africa, the poorest region of the world, actually ended up worse off. The interests of drug companies were put ahead of those of the millions of people suffering from AIDS and other diseases, whose lives were jeopardized when the drug companies insisted that the production of low-cost generic drugs in developing countries be shut down.
If we don't learn from our mistakes, for which the private sector and the government both bear responsibility, we may not be so lucky next time. That said, we shouldn't disparage the successes of the 1990s, even if we can't be sure who is responsible for them. And some of those successes look particularly impressive in light of what has happened since. Putting America's fiscal house in order was a hard-won achievement, for which Robert Rubin deserves great credit. The sudden reversal, in just a year, as the result of a misguided tax cut, shows how frail such a victory can be—how success can be undone in short order by bad policy. The Clinton Administration managed to resist pressure from the steel industry for protection; the Bush Administration, however, caved in, reinforcing already strong perceptions of American hypocrisy. The SEC under Clinton put initiatives in place to reduce conflicts of interest, at least recognizing the problem. Under Bush seeming conflicts of interest have become the order of the day; only under the force of public outrage has anything been done.
In many ways the fundamentals of the U.S. economy are strong, and they were strengthened during the 1990s. The New Economy is real, even if its significance has been exaggerated. New technology has engendered increases in productivity that will continue to make an enormous difference in our living standards. Conditions that sustain low rates of unemployment have both fueled economic growth and given us an opportunity to address important social problems—particularly those involving the exclusion of less skilled laborers from the job market.
The fact that the New Economy is real, however, doesn't mean that we've understood it. In explaining our success in the nineties to ourselves and the world we have largely drawn on a set of myths that desperately need debunking: that deficit reduction by itself led to the economic recovery of the 1990s; that the brilliance of our economic leaders created our newfound prosperity; that deregulation and self-regulated markets are the key to sustaining that prosperity, and should thus be exported to the rest of the world; and that American-style globalization is based on high-minded principles of equality and social justice and will inevitably lead to global prosperity, benefiting not only financial markets in America but also the poor in the developing world.
These myths arguably served a purpose. The deficit-reduction myth, for instance, rallied the country behind the politically hard measures (which passed the House by a single vote) that were required to restore fiscal responsibility after twelve years of soaring deficits. The globalization myth helped us move toward overcoming protectionist sentiments. But no matter how useful these myths were in the short term, ultimately they are harmful. The deficit-reduction myth suggests that if, say, Argentina or Japan is in a recession and has large deficits, cutting those deficits will bring back prosperity. But almost all economists recommend instead an expansionary fiscal policy, fueled if necessary by larger deficits. The myth that prosperity was the work of our economic heroes is dangerous too: It shifts attention away from where it should be—on policies. And it increases the vulnerability of the economy: economic vicissitudes inevitably cast doubt on our heroes' ability to perform miracles, and a loss of confidence in these heroes will bring a corresponding loss of confidence in the economy.
Economies are like large ships: they cannot be turned around quickly. Moreover, they change so slowly that cause and effect are not always clear. As it happens, statisticians now tell us that growth in the 1980s was more robust than we thought, and that growth in the late nineties was less robust than we thought. We had invested heavily in computers and high technology for decades, but the investments mysteriously kept failing to be reflected in data on the nation's productivity. In the 1990s the payoff finally came—and the credit went to the short-term policies of Rubin and Greenspan. Then, assuming we had discovered the answer to the world's economic ills, we pushed those policies onto other countries. Our economic system has enormous merit, but it is not the only system that works; other systems may work better for others. The Swedes, for example, though they have modified their traditional welfare system, have not abandoned it; the security that it provides not only reduces extremes of poverty—still so prevalent in America—but also encourages the kind of risk-taking that is essential in the New Economy. Living standards have improved every bit as much, new technologies have spread every bit as fast, in Sweden as in the United States. And the Swedes have in fact weathered the latest global slowdown better than we have.
Because we are the strongest country in the world, others are looking for us to falter; our hubris, the overselling of capitalism American-style, fed their hostility. The cracks in our system that have now been exposed have provided ample opportunity for America's critics to say "I told you so." If the selling of U.S. capitalism and democracy was one of the primary objectives of American foreign policy, our conduct was self-defeating.
The fact is that the world has become economically interdependent, and only by creating equitable international arrangements can we bring stability to the global marketplace. This will require a spirit of cooperation that is not built by brute force, by dictating inappropriate conditions in the midst of a crisis, by bullying, by imposing unfair trade treaties, or by pursuing hypocritical trade policies—all of which were part of the hegemonic legacy the United States established in the 1990s, and all of which seem to have grown worse since the beginning of the new Administration.
The recent protests at meetings of global financial leaders, in Seattle, Prague, Washington, and Genoa, came as a rude shock to many Americans. It became clear that globalization as we are promoting it is intensely unpopular, as is the United States itself. To those of us who spend much of our time in developing countries, the protests weren't surprising, but to people who believe in the myths of American-style globalization, they were an absolute mystery. Why, people asked, should countries whose economies we were helping feel such antipathy toward us and our policies? The answer comes in large part from the simple fact that globalization has left many of the poorest in the developing world even poorer. Even when they are better off, they feel more vulnerable. Argentina was touted as the A+ student of reform; looking at Argentina today, people in the developing world ask, If this is the result, what is in store for us? And as unemployment and the sense of vulnerability increase, and the fruits of what limited growth occurs go disproportionately to the rich, the sense of social injustice increases too. We have focused so hard on our own economic mythology, and on managing globalization to our short-term benefit, that we have been blind to what we are doing to ourselves and the world.