Let’s get this straight. Big banks that emphasize returns to shareholders above all else have been shown to be menaces to society. Yet one of the main responses to the problems banks got into has been to … reaffirm the primacy of shareholders.
Such is the power of the ideology known as shareholder value. This notion that shareholder interests should reign supreme did not always so deeply infuse American business. It became widely accepted only in the 1990s, and since 2000 it has come under increasing fire from business and legal scholars, and from a few others who ought to know (former General Electric CEO Jack Welch declared in 2009, “Shareholder value is the dumbest idea in the world”). But in practice—in the rhetoric of most executives, in how they are paid and evaluated, in the governance reforms that get proposed and occasionally enacted, and in almost every media depiction of corporate conflict—we seem utterly stuck on the idea that serving shareholders better will make companies work better. It’s so simple and intuitive. Simple, intuitive, and most probably wrong—not just for banks but for all corporations.
As Cornell University Law School’s Lynn Stout explains in her 2012 book, The Shareholder Value Myth, maximizing returns to shareholders is not something U.S. corporations are legally required to do. Yes, Congress and regulators have begun pushing the rules in that direction, and a few court rulings have favored shareholder primacy. But on the whole, Stout writes, the law spells out that boards of directors are beholden not to shareholders but to the corporation, meaning that they’re allowed to balance the interests of shareholders against those of stakeholders such as employees, customers, suppliers, debt holders, and society at large.
Proponents of shareholder value argue that, whatever the law says, corporations would be more successful—and do more good—if executives and boards spent less time balancing their various obligations and focused instead on making money for shareholders. This idea began percolating at the University of Chicago and on a few other campuses in the 1960s and ’70s, and it made some sense at that historical moment. American corporations were struggling in the face of global competition and technological change, yet most were complacent. If only executives were forced to pay more attention to their companies’ plummeting stock prices—by the threat of a hostile takeover, perhaps, or by a strong link between their pay and those prices—they might take the risks and make the changes that the times demanded. Or so the thinking went.
These arguments began to reshape corporate practice in the 1980s. By the mid-’90s, they had congealed into the simple doctrine that the job of a chief executive is to keep shareholders happy. Executive-pay packages were stuffed with stock options, and a newly restive breed of professional investors began goading boards into pushing out managers whenever a company’s stock price languished. Underlying both practices was the belief that stock prices were the best measure of corporate performance—which made it easy to judge whether a CEO was doing a good job. For a few wonderful years, all of this seemed to work. Corporate America, after lagging behind its Japanese and German competitors, made a spectacular comeback, and the U.S. economy boomed with it.