It's (Not) the Economy, Stupid
Bill Clinton was elected on an untenable premise: that it is the job of the President to manage the economy. Yes, that is what we have come to expect of Presidents; but this expectation, the author argues, is punishingly at variance with anything any President can credibly deliver.
by Charles R. Morris
As all the world knows, "the economy" was the overriding issue in the 1992 presidential election. President George Bush "mishandled" the economy. He lost. Bill Clinton promised to handle it more skillfully. He won. I will argue that he won on a false issue, and that the main criterion on which our presidential elections have come to be decided--managing the economy--is a sham.
The assumption that the President manages the economy is the core of prevailing political wisdom, dinned into the public mind by a generation of pundits, a convention of discourse endlessly repeated but rarely examined. The fact is, presidential elections have become referenda on the business cycle, whose fortuitous turnings are personified in the President--thus the "Bush recession" yields to the "Clinton recovery." This is not economics, it is anthropology--an exercise in collective magic. Presidents are properly accountable for their executive and legislative performance, and there is no question that federal actions can affect the economy, sometimes profoundly. Eliminating the budget deficit in four years, for example, might create a nasty recession or a runaway boom--economists, as always, disagree. But the effects would almost certainly be substantial, however unpredictable. Modern political campaigns, however, are fought on the premise that Presidents can manage the economy, that they can take detailed actions that have a precise result--such as raising productivity, reducing unemployment, or increasing investment. In that sense, how much control do Presidents really have over the economy? The answer is, Very, very little.
Mind Over Matter
The "economy" itself is really just a metaphor for the enormously complex stew of daily personal and commercial transactions among some 250 million Americans. The deceptively precise numbers that purport to measure "savings" or "growth" or "income" are crude approximations compounded from a slag heap of samples, surveys, estimates, interpolations, seasonal adjustments, and plain guesses. It takes months, even years, for economists to sort through the numbers and figure out what really happened--if they ever do. There is still no consensus on what caused the Great Depression. The most recent returns on the U.S. economy show signs of respectable growth. But economists are arguing fiercely over whether it is too little or too much, the morning lark of a solid long
term recovery or just a dead-cat bounce. And, of course, they disagree even more fiercely on the policy prescriptions that flow from their prejudices: "Do something or things will get worse!" Or, "IF you do something, things will get worse!"
Clinton's ill-fated economic-stimulus program exemplifies the confusion. Certainly it was small: at $16 billion, hardly more than the rounding error in the national accounts. And no one could argue with a straight face that a porridge of new playgrounds, vaccination programs, and Head Start dollars would visibly improve the economy. But while the Nobel laureate economists James Tobin and Robert Solow warned that the stimulus was too small, Wall Street worried that the President would spook the bond market. If bondholders decided that the economy was growing too fast, or the government was borrowing too much, and inflation was heading up, interest rates would rise and undercut the recovery--the last thing Clinton wanted.
At one point Secretary of Labor Robert Reich, a key designer of the Clinton economic strategy, argued that the "psychological impact" of the stimulus program was what really counted. The emphasis on changing people's minds about the economy sounded a little like official explanations of American policy in Vietnam--bombing just enough to discourage the enemy. The resemblance, in fact, is no accident, for both the image of the President as button-pusher in the economic engine room and the heavily psychologized tactics in Vietnam are squarely in a peculiarly American tradition of thinking about society and the economy.
Pan the camera back almost a hundred years: there stands an aging Henry Adams, the historian and descendant of Presidents, agape before a giant electrical dynamo at the Great Paris Exposition of 1900, ready to fall on his knees, "bewildered and helpless, as in the fourth century, a priest of Isis before the Cross of Christ." Transported by this Damascene vision, Adams set out to construct a "dynamic theory of history," seeking the fundamental laws, like those of electricity or magnetism or the kinetic theory of gases, that govern the affairs of men.
Adams's search for a scientific history was tempered by self-mocking irony. But to minds less steeped in the past, America's leap into the Machine Age and the mighty transformations of American social and industrial relations opened entirely new intellectual vistas. The new "scientific" outlook was thoroughly positivist (nothing existed, or at least was worth talking about, if it could not be measured); it was atomistic (all things were condensed from identical particles); and it was statistical (the intricate but predictable dance of countless freely
colliding molecules of gas was choreographed by a few simple, immutable laws). In a series of easy stages the same logic was applied to virtually the entire field of human endeavor.
To begin with, the mapping from a mechanistic physics to the burgeoning science of economics seemed entirely natural, precise, and complete: correct prices, for instance, arose from the statistical interaction of countless atomized market participants obeying the simple canons of rational self-interest. Around the turn of the century American universities developed a defense of American liberal capitalism as the regime most consistent with a scientific outlook, and the economist's style of thinking rapidly colonized the rest of the social sciences. Franklin Giddings, at Columbia; Edward A. Ross, at the University of Wisconsin; and Charles Horton Cooley, at
the University of Michigan, pushed the still-nascent study of sociology toward statistics and measurement. (Ross and Cooley started their careers as economists.) The program of the American Sociological Society, organized in 1905, was entirely "scientific," seeking the basic forces, or "sympathy," that bound society together, trying to discover a praxis of "social control" for a liberal society, teasing out the rules of the "social equilibrating-apparatus." By the 1920s all of political science was being recast into a study of the market behavior of utility-maximizing individuals. The same behaviorist faith inspired John Dewey's confidence that schools could be organized like "great factories," to turn out self-reliant citizens who would people Dewey's vision of a liberal democracy.
After a period of eclipse during the Depression, the scientific pretensions of American economics and its sister social studies were powerfully reinforced by the sweeping triumph of Keynesianism. Ignoring Keynes's own warnings about the waywardness of real markets, American academics forged a rigidly mechanistic vision of the economic apparatus: pull this lever and investment rises, turn this flywheel and consumption goes up--all the pieces clicking smoothly into place like stainless-steel tumblers. Faith in a deus in machina prompted John Kennedy's wildly unprescient declaration in 1962 that there were no ideological issues left to solve; the country faced only "technical problems...administrative problems."
The economist's vision of the rational actor was formalized in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior (1944). Game theory supplied a metaphysics for the arms race, and led Robert McNamara in his will-o'-the-wisp pursuit of a precise equilibrium of "mutual assured destruction" with the Soviets, as if nuclear arms were a problem of tariffs and quotas, like trade. Since rational actors in rational games reach mutually beneficial accommodations through signaling, it was forever a puzzlement to the civilian theorists in the Pentagon that Ho Chi Minh obdurately persisted in misinterpreting their carefully calibrated bombing campaigns.
Even American ethics has become hardly more than a branch of economics. Dewey always struggled, if not very successfully, against the economist's equation of values with mere wants, although it seemed an obvious implication of his pragmatist teachings. There is no such struggle in John Rawls's A Theory of Justice, the most discussed ethical work of the past generation. Rawls proceeds by constructing a system of goods-maximizing choices by highly rational atomized individuals who lack any history or social ties. The same economistic bias also explains much of the American obsession with personal rights and legalistic procedure: if all wants are theoretically equal, there is nothing left for moralists to do but tinker with process. Economists, after all, care only that the corn auction works; whatever price emerges will ipso facto be the right one.
The interventionist bias of the early Keynesians has created a leftish aura around the more "scientific" social theories. But the same empiricist, atomistic view of society, the insistence on equating choices with values, is a major theme of American conservatism, from Milton Friedman's willingness to legalize drugs to the current "school choice" movement. Thomas Sowell and Robert Nozick take issue with Rawls using exactly the same microeconomic tools that Rawls uses--they just wind up with diametrically different results. The philosophic differences between the left and the right in America, that is, most often reduce to a barren instrumentalism: Should government pull the levers, or do the levers move themselves?
The reach and power of the economic paradigm in America is impressive enough; but what makes it all the more remarkable is that there is almost no compelling reason to believe that it works any better in economics than it did in guerrilla warfare, let alone in sociology, politics, or morals.
Economics in the Real World
Financial-market professionals often take for granted the practical uselessness of economists. Recently, for example, Robert Beckwitt, a bond manager at Fidelity Investments, the big mutual-fund company, tracked the success of Wall Street's economists in forecasting interest rates. Bond values depend critically on interest rates, and bond traders basically make bets on whether interest rates will rise or fall when they buy and sell bonds. One of a Wall Street economist's most important tasks, therefore, is to forecast rate trends, and many hours of analytic energy and vast amounts of computer power are lavished on the problem. According to Beckwitt's data, however, the best bond-investment strategy over the past decade would have been to do exactly the opposite of what the consensus recommended: when the economists said "Sell," the wise investor bought--and achieved, in fact, quite outstanding returns. Charles Wolf, at the Rand Corporation, once compiled a box score for the major economic forecasters; the fit between most forecasts and actual outcomes was approximately random. A running joke at The Wall Street Journal is a regular feature that pits top stock pickers against a portfolio compiled by throwing darts at a wall: year after year the dart board is neck and neck with the analysts.
Forecasting in economics is not the same as plotting the path of a planet. Fundamental axioms in economics have a disturbing tendency to flip upside down with little warning, as if gravity suddenly made objects float. For many years it was received wisdom in economics textbooks that if the Federal Reserve increased the money supply (by loosening the credit reins), interest rates would fall. In theory, money is a commodity like any other, and if there is more of it, its price, or the rate of interest, should fall. For years, in the main, the theory held true. Then, at some elusive moment in the 1970s, investors decided that loose credit caused inflation, which was also arguably true. The Ford and Carter Administrations had greatly expanded credit to help cushion the 1970s oil price shocks, and inflation was rising rapidly. If lenders expect inflation, they will insist on higher interest rates to protect the value of their money. Almost overnight the financial headlines executed an about-face: if the Federal Reserve loosened credit, it was thenceforth taken for granted that interest rates would rise, not fall. The earth, having been round, was now flat, and the economic astrolabes were adjusted accordingly.
Alan Greenspan, the chairman of the Federal Reserve, was recently lectured by a Senate committee for not pushing interest rates down faster to spur an economic expansion. (Lower interest rates presumably induce greater borrowing and therefore greater economic activity.) In fact Greenspan had been pushing interest rates down very aggressively for more than a year. But the Federal Reserve controls only short-term rates, primarily through its overnight lending to member banks. So while short-term rates fell very rapidly during 1992, longer-term rates--the ones investors mainly care about--hardly budged. Investors didn't know which law applied. Would Greenspan's aggressive loosening make money plentiful and lower rates? Or would it trigger inflation and raise rates? The result was a kind of paralysis: short-term rates went down and long-term rates stayed up. The beneficiaries were the banks, which could suddenly borrow very cheaply from the government and lend the money right back at much higher interest, by buying longer-term government bonds. The whole point of the exercise, of course, had been to increase bank lending to business; but since banks could make so much money playing the Treasury market, lending to business actually dropped.
There is no escaping the pervasive influence of the federal behemoth. About one out of every four dollars spent in the land is spent by, or put in the pocket of the spender by, the federal government. If the government lurches left or yaws right, a big chunk of the economy lurches or yaws with it. But it is hardly a surgical policy instrument; there are few obvious levers to pull. At bottom the government engages in four kinds of economic activity. Half a trillion dollars or so is passed out to citizens each year, either directly, as in Social Security payments, or in the form of medical services. Some $300 billion finances the global sprawl of the U.S. military. Another $300 billion pays for the three-million-strong army of federal civil servants and a ragout of federal programs, from Head Start ($2.7 billion) to dusty relics like the Rural Electrification Administration ($1.2 billion). Finally, the government borrows some $300 billion each year from banks, insurance companies, and pension funds and then cycles most of it back as interest on the debt--a majestically rotating wheel of money.
Any economic actor as big as the federal government can, like John Steinbeck's Lennie, clearly do some Very Important Bad Things, whether it intends to or not. In 1982, for example, Congress and the Reagan Administration chickened out on a $10 billion savings-and-loan crisis. Instead of paying the tab and closing down the industry, they covered it up with phony accounting and looser rules, and turned it into a $150 billion problem just a few years later--managing along the way to create a vast supply of unusable commercial real estate, and to roil all the world's financial markets by unleashing a great flood of federal borrowing for the sake of unlucky depositors. By the same token, a complicated but very different series of policy decisions in Japan during the 1980s has created an almost identical American-scale S&L-style crisis there. Once in a while, on the other hand, an opportunity arises to do a Very Important Good Thing. Actually fixing the U.S. health-care system, so that costs were brought under control, the uninsured received minimal benefits, and workers could change jobs without fear of losing coverage (if all that is indeed possible), would be just such an Important Good Thing.
But for the most part the federal apparatus trundles irresistibly ahead, a vast, splay-footed creature moving pretty much under its own power. A web of laws, regulations, long-term procurement contracts, treaties, jealously guarded congressional prerogatives, and deference to hallowed practice collectively overbear the occasional deflecting obstacle. Gentle nudges, pluckings at the coat sleeves are of no avail against such tremendous inertial force; a minor initiative like the Clinton stimulus program is simply lost underfoot. Only very occasionally can an utterly determined and utterly resourceful President, as Richard Nixon was, bend the machinery to his will. Fearing for his re-election in 1972, with the economy slowing and inflation rising, Nixon and his Treasury Secretary, John Connally, engaged in a bravura performance of economic browbeating. They slapped on wage and price controls, broke the link between the dollar and gold, gunned up the money supply, and managed to wring out a year of strong growth with low inflation just before the election--at the price of triggering a huge recession during Nixon's short-lived second term. The economy, that is, can be managed over the short term, but only by relentless and violent clubbing, not by pushing buttons, and not by Presidents who are queasy about their methods or care about the consequences.
Truly important economic decisions, as often as not, zip by unnoticed by the public, probably by the President, because their consequences can take such a long time to become clear. For example, housing prices rose strongly through most of the 1980s, partly in response to severe supply problems in the early part of the decade. The housing shortage is usually attributed to the Baby Boomers' entering adulthood. The population entering retirement, however, was almost as big, and their housing behavior was something new in history. In 1973 Nixon and the Democratic Congress had sharply increased Social Security payments and indexed them to rise faster than inflation. For the first time, a retiring population had the wherewithal to keep their homes, and by and large they did so, instead of moving in with their children as previous generations had done. A substantial share of the nation's housing stock was thus withdrawn from the normal inventory-refreshment cycle, precipitating a boom in housing construction a decade later.
The new independence of the elderly, in fact, has had a long list of profound consequences: the growth of retirement villages, the boom in states like Florida and Arizona, new nursing-care patterns for the failing aged (the proverbial youngest daughter now typically lives in another city), and much more. Many of these changes, of course, represent progress. The point is merely that putting a greater share of national income in the hands of oldsters has had powerful effects far beyond the simple shifting of relative poverty indices among age cohorts.
Such examples are numerous. The country's economic structure has been profoundly affected by three decades of federal, state, and insurance-company policies aimed at expanding third-party coverage of health services. Last year the number of U.S. hospital workers increased by about as many people as are employed by all American computer makers. The Pentagon's aggressive search for miniaturized weapons technologies in the 1950s and early 1960s funded a host of unconventional electronics geniuses, helped create California's Silicon Valley and, by extension, the American venture-capital industry, and is still the source of much of the U.S. edge in advanced computer technologies. Economic models, by and large, just skate by issues like these. They're messy and unpredictable, and they involve tremendous effects from seemingly small actions--wars won or lost because of a missing horseshoe. And they are, depressingly enough for the traditional "scientific" economists, the way the real world works.
The beauty of machines lies in regularity and order--the predictable harmonies of the Newtonian solar system, the eternal canons of a clockmaker God. Specify the initial conditions of any system--say, a table of billiard balls. Introduce a force--the moving cue ball--into the system and calculate the angles and forces of the subsequent collisions, and you will have specified the future position of the balls. That is the fundamental premise of the modern study of economics. But billiard balls don't work like that.
The celestial machine of Newtonian physics is a simple system, a small number of bodies revolving around a single massive body. But systems with many interacting elements, "complex" systems, do not behave like simple systems. You may specify the initial position of the billiard balls to any arbitrary level of precision--the sphericity of the balls, the smoothness of the surface, the trueness of the bumpers. But there will be some level of imprecision remaining, some element of the unknown, some fleeting shadow of randomness. If you strike successive "identical" tables of balls identically, their final positions will vary widely after only a very small number of collisions. Very tiny degrees of randomness, fed through a very small number of collisions, will lead to radically varying outcomes. If God removed an electron from the end of the universe, it would change the collision pattern of the air molecules in my room. A few days later there might be a storm in a neighboring state.
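The billiard-table argument can be made concrete in a few lines of code. Below is a minimal sketch using the logistic map, a standard textbook example of chaotic dynamics (the map, parameters, and starting values are illustrative choices, not anything drawn from economics): two runs that start a billionth apart soon bear no resemblance to each other.

```python
def trajectory(x, r=4.0, steps=50):
    """Iterate the logistic map x' = r * x * (1 - x), a standard
    toy chaotic system, and return the whole trajectory."""
    out = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

# Two runs whose starting points differ in the ninth decimal place.
a = trajectory(0.400000000)
b = trajectory(0.400000001)

# The gap roughly doubles each step until it is as large as the
# values themselves; after 50 steps the runs are unrelated.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap: {max(gap):.3f}")
```

The same behavior, a tiny perturbation producing wholesale divergence, is what makes long-range point forecasts of any sufficiently nonlinear system suspect.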
The real world is messily complex. Water flows in turbulent patterns; tiny variations in the flow quickly build up to big structures--eddies, riptides, vortices--that merge, expand, disappear. Molecules in a cell accrete tiny electrical variations and patterns of connections with other molecules. Suddenly a threshold is crossed and huge events are triggered--the cell divides, or pours forth viruses, or becomes cancerous, or dies. Mutations build up invisibly in the fossil record, and suddenly there is an enormous florescence of new species and life forms; the face of the earth changes. Real financial transactions are rarely structured like the rational two-person game so beloved of economic theorists: add just one more player and create a modest range of choices, and the computer simulations spin out a rich new world of shifting combinations and coalitions. Chess is a game between only two players, with only moderately complex rules, but it quickly spirals into an infinity of outcomes. Scientists are just now developing the supercomputers and the mathematical tools needed for problems like these. Even planetary orbits, it turns out, are much less stable and predictable than was once assumed.
Is the economy a complex system? Put even more radically, is the economy "computable"? That is, is it hopeless even to try to model it? The center of thinking about such issues is a small think tank in the foothills of the Rockies--the Santa Fe Institute, devoted to exploring the new understanding of complexity. Since the mid-1980s it has facilitated a remarkable dialogue between scholars such as the Nobel Prize-winning scientists Murray Gell-Mann and Philip Anderson and the economist Kenneth Arrow, a Nobel laureate and perhaps the country's finest mathematical economist. W. Brian Arthur, a professor of economics at Stanford, and John Holland, of the University of Michigan, who was trained as a computer scientist, both Santa Fe fellows, are intensely involved in re-evaluating conventional economics in the light of complexity theory.
Both Holland and Arthur are interested in the local microformations in the economy, the vortices in the turbulent stream. Arthur points to the "QWERTY" typewriter keyboard, originally designed to slow typists down so that mechanical keys wouldn't jam. There are many more efficient configurations, but QWERTY is now the unassailable standard, just as the Netherlands, a northern country with a short growing season, is the world's tulip center. Any economic system, that is, may have a very large number of possible equilibrium points, which flies in the face of traditional theory. Worse, raw chance may play a major role in the final equilibrium. "Economic data," Arthur says, "give an illusion of a stable, long-term trend line, but underneath that apparent stability is a constant turnover of structures. Some regions become rich, some stay poor, and often there's no obvious reason why. We need to understand those processes, how they work in the real world, from the ground up." (The idea of looking at smaller structures has immediate relevance. Recent national economic reports have been significantly influenced by recessionary conditions in a few states, such as New York and California. Politically distributed national stimulus programs are, at the very least, an inefficient response.)
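Arthur's lock-in story can be sketched as a Polya-urn simulation (a simplified illustration in the spirit of his increasing-returns models, not his actual model; the function name, adopter count, and seeds here are invented for the example): two equally good technologies compete, each new adopter favors whichever is already more popular, and luck alone picks the winner.

```python
import random

def final_share(adopters=10_000, seed=0):
    """Polya urn: start with one adopter of technology A and one of B;
    each newcomer picks A with probability equal to A's current share.
    Returns A's share of the market at the end."""
    rng = random.Random(seed)
    a, b = 1, 1
    for _ in range(adopters):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Identical rules, different luck: the final shares scatter widely,
# so the "equilibrium" the market reaches is an accident of history.
shares = [final_share(seed=s) for s in range(8)]
print([round(s, 2) for s in shares])
```

Every run settles down to a stable split, but which split is an accident of the earliest adoptions: there are many possible equilibria, and chance, not merit, selects among them.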
Holland points out that the economy is an adaptive system, which increases its unpredictability. Unlike the molecules in a gas, economic agents learn from experience. There is indeed a kind of Heisenberg principle of economic management. If the federal government starts focusing on, say, the money supply as a tool for controlling the economy, the relationship between the money supply and the rest of the economy will surely change, vitiating the original policy assumptions. Biological systems, Holland suggests, are better than Newtonian mechanics as analogues for an economy. "The amount of biomass and its complexity seem to have been increasing at a steady, slow rate, for an extremely long period of time, although there is enormous variation in the successful life forms in any particular period. Economies may work that way. But there may also be some stable solutions; fish, porpoises, and ichthyosaurs at different times developed the same solution--hydrodynamic bodies." Arthur cautions, however, that there are nine or ten solutions to the problem of evolving an eye--pinholes, lens arrays, and the like--and that decidedly unhydrodynamic mollusks and crabs have been very successful colonizers of the seas.
The notion that economies are complex systems--more biological than Newtonian--has powerful intuitive appeal. But it will be a long time before complexity theory will be of much help to policy makers. One frustrating problem is a lack of data. Physicists are used to working with thousands, even tens of thousands, of observations. Industrialized countries have been collecting reasonably consistent economic statistics for only about fifty years; their quality is often poor, and their content changes subtly over the years. Financial-market data are somewhat better, and there is some tantalizing but inconclusive evidence that market behavior may mirror that of complex natural systems. Drop grains of sand, one after another, on the same spot; they will form a pile with a regular shape. Every so often a single grain will start an avalanche; most of the avalanches will be small, but once in a while, and quite unpredictably, one will be catastrophically large, wiping out whole sections of the pile. The market movements of most interest to Wall Street may be just avalanches in a sand pile. For the moment, however, the value of complexity theory in economics and finance is primarily as a cautionary metaphor, a pinprick to the pretensions of pundits, a warning that, at least occasionally, well-intended policies could, as the physicist David Ruelle put it, lead to "wild...fluctuations" with "possibly quite disastrous effects."
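The sand-pile image comes from the Bak-Tang-Wiesenfeld model of self-organized criticality, and a toy version fits in a page (the grid size and grain count here are arbitrary choices for illustration): most drops cause nothing, but every so often a single grain sets off a pile-wide cascade.

```python
import random

def drop_grains(size=12, grains=3000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: grains land on random cells of a
    size x size grid; any cell holding 4 or more grains topples,
    shedding one grain to each neighbor (grains fall off the edges).
    Returns the number of topples ("avalanche size") per dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        topples = 0
        unstable = True
        while unstable:            # keep sweeping until no cell topples
            unstable = False
            for r in range(size):
                for c in range(size):
                    if grid[r][c] >= 4:
                        grid[r][c] -= 4
                        topples += 1
                        unstable = True
                        for nr, nc in ((r - 1, c), (r + 1, c),
                                       (r, c - 1), (r, c + 1)):
                            if 0 <= nr < size and 0 <= nc < size:
                                grid[nr][nc] += 1
        avalanches.append(topples)
    return avalanches

sizes = drop_grains()
median = sorted(sizes)[len(sizes) // 2]
print(f"median avalanche: {median} topples, largest: {max(sizes)}")
```

The typical avalanche is tiny, but the largest is vastly bigger than the median, and nothing distinguishes the grain that triggers it. That absence of a "typical" event size is the property the analogy to market crashes turns on.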
There is no escaping economic policy. The federal government is too large a presence in the American living room to be ignored, too insistent a claimant when the pie is divided. A President who feigned not having an economic policy would be engaging in as empty, and as damaging, a pretense as the most enthusiastic economic micromanager. The element of unpredictability in economic systems counsels caution, not nihilism, humility rather than despair.
The problem is one of domains of discourse. At least since John Kennedy's 1960 campaign to get "America moving again," Presidents have been obliged to adopt the pose of day-to-day managers. Day-to-day responsibilities imply day-to-day results; the press demands them, and the voters are trained to expect them. But let's face it, all the factory whistle-stops in a campaign notwithstanding, Presidents can't do much to create new jobs for machinists in Topeka. Rather than admit that, Presidents and press exchange cant, and voters sink into cynicism.
Discourse needs to shift toward "stewardship" and away from "management." We are a grown-up nation, with an educated, sophisticated press corps. It should be possible, although it will certainly not be easy, to set reasonable goals for political stewardship of the economy, and develop some reasonable scorecard on the important issues.
Guidelines for sound stewardship might include, for instance, a list like the following.
Skepticism about one's own cleverness is usually a good policy starting point. In America, at least, markets mostly work, after their fashion. That is, although they almost never produce optimum results, and often take an uncomfortably long time to work, market outcomes are usually more nuanced, more subtly adapted to underlying complexities, than a priori designs. When Presidents Carter and Reagan decontrolled oil prices at the outset of the 1980s, supply, demand, and prices all came into a fair degree of balance with surprising speed after almost a decade of administrative floundering. Therefore a good initial bias is a perpetually raised eyebrow toward complicated nonmarket solutions.
Pandering produces bad policy. There is an emerging consensus, even among quite conservative economists, that only the federal government can solve the health-care conundrum; indeed, it may be the Administration's single most important economic issue, affecting business and consumers alike. Solving the problem will be enormously difficult, requiring some measure of pain all around. The temptation to opt for politically expedient solutions with severely damaging long-term consequences may be overwhelming. The democratic principle presumes that the public will listen to arguments, consistently and persistently delivered, in favor of doing the right thing. Perhaps it is just a long time since an Administration has tried.
All important policies are long-term. The quandary facing elected officials is that anything important takes more than four years. Even successful health-care reform would not pay visible dividends sooner than the end of the decade. The benefits from careful deficit reduction will never be quantified. Even an Administration resolved to do the right thing over the long term, as the Clinton Administration seems to be, will be cruelly torn between its own decent instincts and the clamor of political advisers for short-term results.
And finally, Presidents should trust their instincts over models. Arguments against deficit spending are almost always cast in instrumentalist terms--its effects on interest rates, inflation, and investment. The truth is that none of these effects can be consistently demonstrated. Deficits, for example, have been rising steadily for ten years, and interest rates and inflation have been falling just as steadily: ten-year and thirty-year bonds have recently been at two-decade lows. A recent Goldman Sachs study found no relation between deficits and interest rates in most industrial countries, and only a weak, and fading, relation in England and America. There is no question that large deficits have economic consequences; it's just hard to prove what they are. It's an example of the complexity problem. The background noise--the effects of recessions, technology cycles, global capital flows, savings rates, monetary policy, commodity shortages, political disruptions--overwhelms our ability to trace the consequences of change in a single variable.
The clashes of economists on the meaning of deficits, unfortunately, obscure the truly important, if "unscientific," reason why they should be eliminated. Deficits are moral hazards. They are fundamentally antidemocratic; they allow government to increase spending without the implicit referendum of a tax increase. It has long been a settled principle of Western civil government that state borrowing leads to profligacy and irresponsibility. If revenues cannot constrain spending, government itself will be unrestrained. We need to eliminate deficit spending not because someone's computer says it will create more investment in 1996, or whenever, but to restore the integrity of our political system.
To President Clinton's great credit, his first economic speeches, stripped of the patina of economistic jargon, really do seem to have been appeals to citizens to do the right thing--to endure some pain in order to bring the appetites of government back into line with its resources, to forgo some personal benefits for a greater good. And judging from the first polls, people have reacted very positively, just as one might hope. But by so baldly appealing to principle, Clinton has assumed a heavy burden of faith-keeping. If taxes go up and the deficit does not go down, it will be a breach of trust not likely to be forgiven or forgotten, and the backwash of voter cynicism could poison the political system for years. Unfortunately, as the specifics of the Clinton program become clearer, the danger of precisely such a result seems very real.
The public and its elected politicians need to reinforce each other's best instincts, not their worst ones. Presidents will perform up to the standard the public sets for them. Dropping our insistence that our Presidents spout cant and pretend to be daily miracle workers, shifting the focus to the long term, and helping to search out the right principles of action are the least we owe our Presidents, and ourselves.
Charles R. Morris is the author of several books, including, with Charles H. Ferguson, Computer Wars: How the West Can Win in a Post-IBM World (1993).
Copyright © 1993 by Charles R. Morris. All rights reserved.