The Atlantic Monthly | January/February 2004
State of the Union
Putting a Value on Health
The way to arrest spiraling costs is to admit that we already do what we say we never will—ration health care—and then figure out how to do that better
by Don Peck

For all its flaws, medical care in the United States has improved enormously over the past several decades. Deaths from heart disease have fallen by 40 percent since 1970. In the mid-1980s HIV was an automatic death sentence; it's not anymore. Since 1990, thanks to better detection and treatment, cancer mortality rates have been falling. (Breast-cancer mortality is down by 20 percent since 1990.) Altogether, medical advances have helped to raise U.S. life expectancy from an average of sixty-eight years in 1950 to seventy-seven years today.
Not only have American lives grown longer, but their quality has improved. The proportion of people over sixty-five with one or more chronic disabilities—such as the inability to walk, or to get dressed, without aid—declined from greater than 25 percent in 1982 to less than 20 percent in 1999. And the development of Viagra and vision-correction surgery, among many other drugs and procedures, has allowed many Americans to prolong pleasures historically associated with youth.
Of course, not all the recent improvements in American health and longevity can be directly attributed to our health-care system; some are as much the result of adopting healthier habits (exercise, better diet) or of dropping unhealthy ones (smoking, excessive alcohol consumption). And even though life expectancy has been rising in America, it remains lower than in many other advanced nations—probably because those nations have lower rates of obesity, broader access to health care, and lesser degrees of wealth inequality. Still, better medical care is the principal cause of improvements in American health and life-span over the past fifty years.
The problem, of course, is that since 1960 health-care spending has grown significantly faster than the economy, meaning that we're spending an ever larger portion of our incomes on medical care. In 1960 health care constituted 5.1 percent of the U.S. economy; in 1980 it constituted 8.8 percent; today it constitutes 13.3 percent. The Centers for Medicare and Medicaid Services (CMS) projects that health-care spending will grow by an average of more than seven percent a year until 2012, even after adjusting for inflation. Meanwhile, private health-insurance premiums—which rose by 14 percent last year alone—are becoming unaffordable for ever more Americans.
It seems that cutting costs should be relatively easy. After all, health-care delivery in the United States is notoriously inefficient. Consumers lack sufficient information or expertise to make informed choices of physicians, hospitals, and treatments. Also, because most of their health care is paid for by insurance, they tend to overuse the system. Physicians, for their part, usually profit from the tests and procedures they order and perform—whether or not those tests and procedures are truly necessary. Shouldn't it be a simple matter to reduce waste and abuse?
Up to a point, yes. The frequency of a major surgical procedure such as coronary bypass surgery varies widely from physician to physician and region to region, with no discernible difference in health outcomes, on average, between patients who receive such treatments and those who don't. According to one study, 20 to 30 percent of health-care spending goes for tests, treatments, and visits that have no positive effect on either the quality or the length of our lives. If we could identify and prevent even half this spending, we would save some $25 billion to $35 billion each year on Medicare alone.
But this would do little to address the fundamental problem. That's because the largest driver of growth in health-care spending is not waste or price gouging or the slow aging of the population but, rather, the cost of technological innovation. Even when technological improvements make some treatments less expensive and more effective, overall spending often rises. Cataract surgery, for example, used to require up to a week in the hospital and offer only uncertain results. Now it's a quick, highly effective outpatient surgery. Per-procedure costs of this surgery have fallen, on average, by about one percent a year over the long term, after controlling for inflation. But because so many more people opt for cataract surgery today, real total spending on the procedure has risen by four percent a year over the same period. Given the overall growth in health-care spending currently projected by CMS, even an immediate drop, through waste reduction, of 20 percent in nationwide spending—which would be extremely difficult to achieve—would be undone by new technology-fueled spending in just four years.
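The four-year figure is a straightforward consequence of compound growth, and easy to verify. Here is a minimal sketch in Python; the roughly seven percent real annual growth rate comes from the projection cited earlier, and the rest is arithmetic rather than anything taken from the article itself:

```python
import math

# Assumed real annual growth in health-care spending (the projection
# cited above is "more than seven percent a year").
growth_rate = 0.07

# One-time reduction in nationwide spending through waste elimination.
one_time_cut = 0.20

# Years until spending returns to its pre-cut level:
# (1 - cut) * (1 + g)^t = 1  =>  t = ln(1 / (1 - cut)) / ln(1 + g)
years_to_recover = math.log(1 / (1 - one_time_cut)) / math.log(1 + growth_rate)

print(f"Spending regains its old level in about {years_to_recover:.1f} years")
# prints: Spending regains its old level in about 3.3 years
```

At seven percent compound growth, even a one-time 20 percent cut is erased in roughly three and a third years, consistent with the "just four years" cited above.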
Most of the growth in health-care spending has produced real improvements in the scope of medical services and the quality of care. But the number of things we can do to cure disease, eliminate discomfort, and stave off aging is expanding faster than the ability of many Americans to pay for them. Indeed, it appears very likely that growth in medical spending will continue to outpace growth in personal income or GDP over the next few decades—even if we introduce temporary cost-saving measures.
That we spend enormous sums of money for even tiny improvements in health-care quality reflects a social ethos to which most Americans implicitly subscribe: anything that might improve health or extend life, however marginally, should be made available to everyone, at whatever cost. That may seem morally proper. But because of the way that health care is bought and financed in this country, we tend to be blind to the costs, both economic and moral, of taking this ethos too far. Because neither patients nor physicians pay for them directly, expensive tests, treatments, and procedures of only marginal value are routinely ordered, and expensive new technologies that barely improve the ability to detect or treat a disease are widely and rapidly adopted. Of course, not every health plan covers every test or treatment, but most health-insurance plans have been rapidly expanding what they cover. The result is a system in which patients with insurance can order up an expensive test that is one percent more effective than a test costing one third as much—indirectly pushing health-care premiums beyond the reach of many others.
Is there anything we can do about this? Unfortunately, the most obvious way to significantly reduce health-care costs without substantially decreasing the quality of care is rationing—that is, limiting the range of treatments and tests that insurance will cover in certain circumstances, a practice that runs counter to the prevailing any-care-at-any-cost ethos. Hardly a politician dares even to mouth the word "rationing," save as an expression of opprobrium.
Yet the fact is that the system already rations; we just don't acknowledge it openly. Every day on the front lines and in the back offices of the health-care profession, ICU nurses, hospital executives, and Medicare and insurance-company administrators make difficult cost-versus-value decisions. How long should a man in a coma be allowed to linger in an expensive ICU bed while others who could benefit from the specialized care wait? Is it worth $7,000 to give Xigris—a drug to treat virulent infections that can develop in hospital settings—to an uninsured patient with less than three months to live? In a recent survey of 620 critical-care physicians, 68 percent said they had rationed medications or procedures in the preceding year. Such decisions are often morally complex, even agonizing—and often benefit patients with money: overall, people who have health insurance receive about twice as much medical care as those who lack it.
Without intervention this gap will most likely widen: a majority of Americans will continue to receive state-of-the-art care, whereas a growing minority will be shut out of the insurance system, finding themselves without access either to the cutting-edge treatments of 2004 or to proven forms of medical care that have been available for decades.
So the key question is not whether health care should be rationed in the United States; it already is. Rather, the question is how health care should be rationed. How should the potential benefits of reduced pain, improved quality of life, or extended life be weighed against the high costs of the medications or procedures involved? And who should weigh them? These are hard questions with high moral stakes. We do ourselves a disservice by dismissing them with a platitude like "You can't put a value on health." That may be true in the abstract, but one can put a value on different treatments and practices. When we decline to do so, we are automatically putting a lower value on other areas, such as education and security, in which increased spending might in fact add more to life expectancy and quality of life. By refusing even to countenance sensible limits on the health care citizens have a right to demand, we make universal health-care coverage—a worthy goal that we are long overdue in attaining—nearly impossible.
It would be un-American to suggest that those who can afford truly comprehensive insurance—call it "Cadillac insurance"—should be prevented from buying it. And no one is suggesting that. But if we will not consider that perhaps not everyone who pays premiums should be guaranteed Cadillac insurance, more Americans each year will be left unable to afford any coverage at all. At the very least we need to begin a national conversation about the meaning of "medical necessity"—for instance, does it include knee surgery for someone who is not in acute pain but wants to continue playing recreational tennis or touch football? What about bariatric surgery (stomach stapling) for those who are not morbidly obese?—and to launch an honest discussion about what kind of rationing would be fairest and most efficient.
To start the conversation, here's one scenario: Imagine a system in which everyone has insurance (including prescription-drug coverage) offering a basic standard of care almost equal to what the insured enjoy today, but people who want the very latest and most expensive treatments must either buy supplemental insurance or pay out of pocket. (For one vision of how coverage might be extended with little disruption, see Laurie Rubiner's sidebar.) As innovations prove to offer dramatically better care, or somewhat better care at roughly equal cost, basic coverage would be extended to include them; but the standard for what could be included would be set high (perhaps with the help of an institute like the one proposed by Shannon Brownlee on the facing page). With fewer patients opting for expensive new treatments that are only marginally more effective than older ones, research doctors, drug companies, and medical-hardware makers could devote more of their R&D resources to making existing treatments cheaper and more effective. Though health-care spending would not stop growing altogether, it would grow more slowly under this scenario. Similarly, although the rate of improvement in health-care quality might slow marginally, improvement would continue. America would still have care equal to the best in the world—and the system would cover more people. Would that sort of rationing really be so bad?
Don Peck is the director of The Atlantic's editorial-research staff.
Copyright © 2004 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; January/February 2004; Putting a Value on Health; Volume 293, No. 1; 142-144.