On Christmas Eve 1947, George Orwell was admitted to a Scottish hospital with a case of galloping consumption. Orwell had first been diagnosed with tuberculosis almost 10 years earlier, but nonetheless, in what a biographer called “one of the many ill-judged decisions in a life littered with misjudgements,” he had recently moved to a remote and primitive Scottish cottage, where he began work on Nineteen Eighty-Four. There, he developed the night sweats, fever, and weight loss that are hallmarks of active TB. By the time he was admitted to the hospital, Mycobacterium tuberculosis had stripped nearly 30 pounds from his already slender frame.
When I was younger and more romantic, I imagined that tuberculosis made you a good writer. After all, so many great ones, from Keats to Chekhov to all three Brontës, seemed to have died of it. Indeed, in 19th-century Europe, the “White Plague” may have caused as many as a quarter of all deaths. Though that proportion had fallen by Orwell’s time, writers from Camus to Bukowski were still contracting tuberculosis, as were millions of their less famous contemporaries. Only antibiotics finally conquered the disease.
Victory arrived just barely too late for Orwell. His friends actually managed to obtain a supply of streptomycin, the brand-new anti-TB drug, from America, but it caused such a violent reaction that every morning when he woke, blood from the ulcers in his mouth had glued his lips shut. It had to be soaked off before he could speak. After several weeks, his doctors had to give up. A less powerful new drug called PAS, which he tried in 1949, didn’t make him so sick, but apparently didn’t much bother the tuberculosis bacilli, either. In January of 1950, an artery burst in his lungs, and at the age of 46, George Orwell drowned in his own blood.
It seems a medieval end for a very modern man. But we are not as far from TB as we like to think. It remains endemic in the developing world and is coming back in richer countries, thanks to travel and immigration, but also to a phenomenon that Alexander Fleming, the discoverer of penicillin, warned of in the 1940s: antibiotic resistance.
“What people might not know about resistance,” says Eric Utt, a former antibiotic researcher now working in Pfizer’s science public-policy division, “is that the resistant organisms are already there. This is why we find bacteria that are resistant to new antibiotics, even before those drugs reach the market.” They’re often the loners in the corner with the mutation that just happens to confer immunity to some super-drug. When we bombard their competition with lethal weapons, they get the place to themselves, and eventually, they take over. After generations of this, the super-drug loses its effectiveness.
Worse, other drugs lose their effectiveness, because many bacteria that are resistant to one drug will also resist other drugs in the same class. We are now learning that bacteria trade genes with each other promiscuously, even between different species, so that resistance developed by one strain of bacteria can be acquired by another. The more we use these drugs, the faster they begin to fail.
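The selection dynamics Utt describes can be sketched in a toy simulation. The numbers below are purely illustrative, not epidemiological data: a vast sensitive population, a single pre-existing resistant mutant, and repeated rounds of treatment that kill most of the sensitive cells while sparing the resistant one.

```python
# Toy model of antibiotic selection: the resistant mutants are "already
# there" as a tiny fraction, and each round of treatment hands them the
# niche. All parameters are illustrative assumptions, not real data.

def treat(sensitive, resistant, kill_rate=0.99, growth=10.0, capacity=1e9):
    """One treatment round: the drug kills most sensitive cells, spares
    the resistant ones, then both populations regrow toward a cap."""
    sensitive *= (1 - kill_rate)         # the drug wipes out the competition
    total = sensitive + resistant
    if total > 0:
        scale = min(growth, capacity / total)
        sensitive *= scale               # survivors regrow...
        resistant *= scale               # ...but resistance keeps its edge
    return sensitive, resistant

def resistant_fraction(rounds, sensitive=1e9, resistant=1.0):
    """Fraction of the population that is resistant after N rounds."""
    for _ in range(rounds):
        sensitive, resistant = treat(sensitive, resistant)
    return resistant / (sensitive + resistant)

if __name__ == "__main__":
    for r in (0, 3, 5, 10):
        print(f"after {r:2d} rounds: {resistant_fraction(r):.6f}")
```

Starting from one resistant cell in a billion, the resistant fraction climbs past one-half within a handful of treatment rounds; the drug never created the resistance, it merely cleared the field for it.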
By 2004, more than 50 percent of staph infections were caused by methicillin-resistant Staphylococcus aureus (MRSA), up from 2 percent in 1987; some are also resistant to vancomycin, a common backup antibiotic. Other disease organisms show similar patterns: pneumococcus, E. coli, and, yes, M. tuberculosis now come in multidrug-resistant or extensively drug-resistant varieties. In 2001, the Food and Drug Administration warned:
Unless antibiotic resistance problems are detected as they emerge, and actions are taken to contain them, the world could be faced with previously treatable diseases that have again become untreatable, as in the days before antibiotics were developed.
We are not quite on the brink of some dystopian Victorian future. But every year, the prognosis for infectious-disease patients gets a bit grimmer. Ramanan Laxminarayan, an economist at the Center for Disease Dynamics, Economics & Policy, says that even extensively drug-resistant TB “can be treated with a couple of drugs. They’re just extremely toxic, and they’re not something you’d want to take”—think blood-sealed lips. And more-powerful drugs tend to cost more than the old drugs. “Right now the cost is in the hundreds of dollars, but the next step will be thousands of dollars,” Laxminarayan says. “In developed countries, it’s manifested in slightly higher average prices of antibiotics. In poorer countries, it manifests as more people sick and dying of resistant infections.”
Even in the rich world, death from infection still looms; MRSA alone kills thousands every year. And firms are not developing antibiotics as fast as they used to. According to the Infectious Diseases Society of America (IDSA), between 1983 and 1987, the FDA approved 16 new antibacterial drugs for use in humans; from 2003 to 2007, it approved six.
Whom to blame for all of this depends on whom you ask. Patients, physicians, hospitals, drug companies, and even regulators have all taken their turn in the dock. But to an economist, when it’s everyone’s fault, it’s really no one’s fault: what we’re witnessing is not a personal failure, but a market failure.
Almost no one develops something like MRSA in his or her own body. Resistance arises over generations of treatment, usually in hospitals with lots of patients. Though resistance is ultimately inevitable, we can slow its emergence considerably. However, doing so requires strict compliance with tedious and often expensive protocols. Each slip contributes only slightly to the problem, so there’s a high temptation to free-ride: Just this once, I’ll skip washing my hands between patients, or Just today, I’ll skip taking the last of the pills that upset my tummy. Anyone who lived in a group house in college knows how this story turns out.
Markets and property rights give people incentives to avert the tragedy of the commons, and have yielded a steady stream of life-saving drugs and medical innovations. But antibiotics are different from most of the other drugs we use. As Kevin Outterson, a professor of health law and bioethics at Boston University, points out, 100 million people could be taking Lipitor and it would remain just as effective as the day it was first invented. Unless we discover something even better, patients could still be taking Lipitor 1,000 years from now. But antibiotics like penicillin inevitably begin to lose effectiveness.
Antibiotics are an exhaustible resource. We should be treating them like an oil field, or an endangered species. Instead, we handle them like consumer electronics. The patent system is designed to promote human invention, not conserve what has already been discovered. Patents are limited to 20 years, so that other inventors can build on earlier innovations. Arthur Daemmrich, a professor at Harvard Business School who studies the pharmaceutical industry, points out that the pharmaceutical patent life is actually much shorter, because the patent clock begins running before the start of multi-year clinical trials necessary to get the drug approved. And when companies finally get to market, they face the risk that a competitor will be close behind with a related drug. As Laxminarayan says, “The pharma companies don’t have an incentive to conserve the effectiveness of their antibiotics. They have [intellectual-property] rights on a drug, but someone else could be developing a similar molecule that will create resistance to my molecule, so I need to sell it as fast as I can.”
Tighter controls on prescriptions, or a tax on antibiotics, might address the conservation problem. But because resistance is inevitable, we also need drug companies to develop new antibiotics. For them, the cost of making more batches of pills is typically trivial; all the cost is in the R&D and the corporate overhead. That means that a profitable drug—the sort of drug that pharmaceutical firms work hard to create—is one that sells as many units as possible.
The problem is, efforts at promoting conservation may discourage innovation—and vice versa. Some hospitals now require infectious-disease doctors to sign off on the use of newer and more powerful antibiotics. But this has a cost. “When a new antibiotic comes out,” Pfizer’s Utt says, “physicians don’t necessarily use it—they tend to hold it in reserve. So by the time it’s being used, it’s already used up part of its marketable patent life.” As a result, fewer large firms may want to spend the time and money to get these drugs approved—according to the IDSA, only two major drug companies (GlaxoSmithKline and AstraZeneca) still have strong active research programs, down from nearly 20 in 1990. Antibiotics are not big moneymakers: Every time a doctor writes a prescription for Lipitor, Pfizer may gain a customer for decades. But short-course drugs like antibiotics sell perhaps a dozen doses.
Of course, we could always jack up the price. This idea is remarkably popular (at least if paired with conservation); even some people who have a pronounced distaste for pharmaceutical firms seem to like it. But extremely drug-resistant bacteria are still relatively rare, which means that in almost all cases, a new antibiotic is competing with an older antibiotic that actually works pretty well at curing a particular disease. If a drug company sets the new product’s price too high, it won’t sell enough units to earn back its investment. Also, reflexive Big Pharma critics would fill every op-ed page in the country with laments about greedy drug makers.
Those same critics suggest that perhaps we should take this out of the invisible hands of the market. Historically, we’ve solved tragedy-of-the-commons problems either through privatization, as Britain did with its land, or through nationalization, as many nations have done with their military and police. If the market doesn’t work, why not try the government?
Even many libertarian types agree that the commons problem seems to call for stronger state controls over antibiotics. But how far should that go? Government and academia perform vital basic research, but they haven’t delivered a lot of working drugs. “What would be nice,” says Daemmrich, “would be to have free-market mechanisms reward new-drug discovery even as the use of antibiotics was limited to infections that don’t go away on their own.”
One possibility is to have the government buy all the antibiotics on a sliding scale: so many billion dollars for a first-in-class antibiotic, half that amount for a second-in-class, and so forth. The government could then restrict the antibiotic’s use. I’ve posed this possibility to people at pharmaceutical companies and gotten a surprisingly warm reception. Another idea, proposed by Outterson and a colleague, Harvard’s Aaron Kesselheim, is to change the reimbursement system so that companies get paid more when fewer of their drugs are prescribed, as part of a conservation plan. “Let’s say Bayer had a diagnostic test that could quickly tell whether you had a bacterial or viral infection. Right now, the only thing that this would do is knock down their unit sales [of antibiotics]. We should reward companies like Bayer if they bring out a diagnostic like this—their unit sales might decrease by half, but if so, we should quadruple their unit price.” Or we could have special rules for antibiotics patents: instead of a 20-year term, make them renewable annually for drug companies that promote conservation.
These ideas sound elegant and simple in a magazine article. In the real world, they’d be messy and controversial. The government would be getting into the business of fixing prices. Likely, it would overshoot, handing windfall profits to firms, or undershoot, leaving us without enough drugs to treat emerging resistant infections. But the potential for such mistakes shouldn’t stop us from trying to pursue creative public-private solutions. We just need to be prepared to face a lot of yelling.
Especially since the way to reward conservation is not entirely clear. Laxminarayan notes, “Whether resistance develops is not entirely a function of what the manufacturer does—it’s a function of what other manufacturers do as well.” Not to mention doctors, and patients, not all of whom are, ahem, entirely compliant.
Rich countries such as the United States can—and should—solve the problem of new antibiotics discovery on their own; as Laxminarayan says, “Show up at a table with enough money, and someone is going to show up with an antibiotic.” But they cannot practice conservation without involving the developing world, an involvement that will require almost unimaginable coordination and cooperation. Laxminarayan likens antibiotic resistance to global warming: every country needs to solve its own problems and cooperate—but if it doesn’t, we all suffer. Coordinating a global response will require years, even decades; any serious revision to the patent system might have to go through the World Trade Organization. Presumably, if the resistance gets bad enough, the world will come together on this—but maybe not before there is a real crisis. “The time when it happens,” Laxminarayan tells me, “is when a lot of antibiotics have failed.”
In the meantime, we here in the United States can make a start. “There are a lot of good things hospitals could be doing for infection control,” Outterson says, “but there’s no Medicare billing code for this. They won’t pay a nickel for a hospital that’s extra careful with hand-washing, or uses more-expensive equipment that resists infections.” While we wait for global action, we can develop better guidelines, change Medicare and Medicaid reimbursements, and start building stronger multilateral institutions. We can also start the policy debate over more-radical action, like changing the patent system and revisiting the role of government in the marketplace.
But the troubling possibility remains that this effort may not be fast enough. We won’t see life expectancies plummet—we have much better public health and nutrition than the Victorians did. But we could end up in a world where the risk of infection curtails life-enhancing surgeries such as hip and knee replacements; where organ transplants, which require suppressing the recipient’s immune system, become too risky to justify their cost; or even where pneumonia, which used to kill most of its victims over the age of 60, once again becomes “the old man’s friend.” The longer we ignore our problem, as Orwell did, the more likely we are to share his fate.