Probably the most controversial thing I've ever written is that the evidence for the effect of health insurance on mortality is not really that strong.  This is not to say that insurance has no effect--that is possible, but not, to my mind, particularly likely.  But studies purporting to show big impacts are vulnerable to what economists call "omitted variable bias": because we can't really run massive controlled social experiments on human beings, people who lack health insurance are not exactly like people who have health insurance, except for their insurance status; they have a bunch of other things going on in their lives that make them less likely to be insured, and which may also affect their health.  (Examples of things that are hard to control for well: poor-quality social and family networks, major impulse-control problems, a drinking or drug problem, poor parenting, or being born in another, poorer country with disease pathogens and poor nutrition that do not affect American children.)

To illustrate the problem, note that many of the studies that show big mortality impacts from being uninsured show even bigger mortality impacts from being on Medicare and Medicaid, even after controlling for age and income: you are more likely to die if you are on government insurance than if you have no insurance at all.  Is it probable that going on Medicare or Medicaid kills you?  Okay, again, it's possible--I can tell a story about how people in those programs are more likely to get marginally effective surgery or other treatments whose side effects kill them--but it's not really likely. The people who author the studies are quick to point out that people on Medicaid are probably more likely to be there because they started out in poor health (among other things, Medicaid helps provide care for people on disability).  They are sometimes less quick to point out the implications for their figures on the uninsured: it's probably pretty difficult to do a straight-up comparison between different groups.
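The selection problem is easy to see in a toy simulation (every number here is invented for illustration): give insurance no causal effect on death at all, but let sicker people be more likely to end up on public coverage, and a naive comparison will still make the insured look deadlier.

```python
import random

random.seed(0)

# Toy model: insurance has NO causal effect on death, but unobserved
# health status drives both coverage and mortality.  All parameters
# are made up for illustration.
n = 100_000
tally = {"public": [0, 0], "uninsured": [0, 0]}  # [deaths, people]
for _ in range(n):
    sick = random.random() < 0.3                       # unobserved health status
    insured = random.random() < (0.7 if sick else 0.3)  # sicker people likelier to enroll
    died = random.random() < (0.05 if sick else 0.01)   # death depends ONLY on health
    group = "public" if insured else "uninsured"
    tally[group][0] += died
    tally[group][1] += 1

for group, (deaths, people) in tally.items():
    print(group, round(deaths / people, 4))
```

With these made-up parameters the public-coverage group's crude death rate comes out roughly double the uninsured group's (about 3% versus 1.6%), purely because it contains more sick people--exactly the pattern the Medicare and Medicaid comparisons show, with zero causal effect anywhere in the model.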

There are other studies showing substantial effects, but I think even those who believe the bulk of the evidence supports a large mortality impact have to admit that the difficulty of measurement means the evidence is mixed--there's no clear and shining link between health insurance and better health.  For example, one study shows a big change in survival rates for trauma patients right before and after they turn 65 and go on Medicare--but the study's authors note that the effect is much larger than could be accounted for simply by putting previously uninsured patients on Medicare, and another study indicates that people with private insurance are less likely to die after a trauma admission.  How do we reconcile these?  Another study, which looked at overall mortality in the elderly population after Medicare was enacted, found no effect on mortality.  And a recent GAO study found that children on Medicaid actually have poorer access to physicians than children with no insurance at all.

The result is, I think, a sort of Rorschach blot where people's intuitions have a lot of free play.  Those who find it intuitively implausible that the effect could be anything but large can recruit enough facts to support their belief (and their righteous indignation at those who suggest otherwise); those who are predisposed to believe that the uninsured aren't a big problem likewise have some support, at least on the narrow question of whether or not people are dying from lack of insurance.

Now along comes the state of Oregon with a nice little controlled experiment.  In 2008, Oregon opened up its Medicaid program to a limited number of low-income adults.  Because the state didn't have enough money to cover everyone who applied, it used a lottery.  A large team of health care economists, including Amy Finkelstein and Jonathan Gruber (hereafter "Finkelstein et al." to save your reading, and my writing, the entire list) mined that data and found some substantial effects from the program.

And here we see the Rorschach effect. Matt Yglesias writes:

One of the most ridiculous aspects of the recent debate over the Affordable Care Act went as follows. Many people in the United States of America believe quite sincerely that over-taxation of rich people is among the most serious problems the country faces. And the Affordable Care Act does a great deal to increase taxes on rich people and use the funds thereby raised to provide Medicaid to currently uninsured people. Those who oppose such measures want to deny life-saving medical care to the poor and near-poor in order to maintain a low tax burden on the wealthy. In order to justify this proposition politically, it would be useful to pretend to believe that giving uninsured people access to Medicaid doesn't actually benefit them at all. Consequently, lots of diligent Googlers started turning up studies that purported to show this, ignoring contrary studies and all common sense.

At any rate, a new rigorous study from Oregon confirms that Medicaid does, indeed, save lives:

This is exactly what the study does not find. Indeed, it pretty much confirms what has come to be my view of the evidence on the impact of insurance: you see a very clear impact on utilization, including a handful of recommended preventative screenings, as well as hospitalizations and other treatments. You see a moderately strong effect on both patient and provider finances: fewer medical bills sent to collections, and lower self-reported financial strain from medical costs.  And people like being insured, so various self-reported measures rise.  The rest is more ambiguous.

For example, the strongest impact on health that they find is that self-reported health status rises by a modest-but-still-significant 0.2 standard deviations: reported depression goes down, while the people who won the lottery were more likely to say that they were in good, very good, or excellent health.  This rules out the theory that people who have more contact with the health system might feel less healthy because their doctor gives them more things to be paranoid about, but as Finkelstein et al. note, it doesn't quite show that they're actually healthier.  Indeed, about two-thirds of the improvement in self-reported physical health comes almost immediately, before people had a chance to consume much in the way of health services; this suggests that the effect may be psychological rather than the result of any improvement in their physical well-being. As the authors say, "Overall, the evidence suggests that people feel better off due to insurance, but with the current data it is difficult to determine the fundamental drivers of this improvement."

Meanwhile, while the measures of utilization are strong, the sort of "quality" measures that people often want Medicare to use for reimbursements are considerably less promising:

First, we examined hospital utilization for seven conditions of interest and of reasonably high prevalence in our population: heart disease, diabetes, skin infections, mental disorders, alcohol and substance abuse, back problems, and pneumonia. We found a statistically significant increase in utilization (both extensive and total) only for heart disease. We also explored the impact of health insurance on the quality of outpatient care (admissions for ambulatory care sensitive conditions) and three measures of quality of care for inpatient care (not having an adverse patient safety event, not being readmitted within 30 days of discharge, and quality of hospital). We were unable to reject the null of no effects on either outpatient or inpatient quality, although our confidence intervals are extremely wide and do not allow us to rule out quantitatively large effects. Finally, we examined whether insurance was associated with a change in the proportion of patients going to public vs. private hospitals and were unable to detect any substantive or statistically significant differences.

As for lower mortality--the evidence behind "confirms that Medicaid does, indeed, save lives"--the authors didn't find any such thing. I quote: "Panel A shows that we do not detect any statistically significant improvement in survival probability." Mortality, along with related metrics such as life expectancy, is the easiest thing to measure--we can all pretty much agree on who is dead, and that death is generally pretty bad.  It's also, for obvious reasons, the easiest emotional appeal for proponents of a program.

But death is not that common in the under-65 population, so unless Medicaid has a really large effect on mortality, it's not going to show up in a one-year study.  There will be later follow-ups which may give us more suggestive results, of course--and we're running a giant nationwide experiment starting in 2014 which ought to settle the question one way or another.  If people really die in the tens of thousands every year because they lack insurance, we should see a noticeable downward inflection in US mortality statistics starting late in this decade.
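To see why, consider a rough power calculation (all numbers hypothetical, not the study's): when death is rare, even a fairly large relative effect is likely to be statistically invisible in a one-year sample of this size.

```python
import math

def power_two_proportions(p1, p2, n, alpha_z=1.96):
    """Approximate power of a two-sided z-test comparing two
    independent proportions, with n subjects in each arm."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_effect = abs(p1 - p2) / se
    # Standard normal CDF via the error function
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi(z_effect - alpha_z) + phi(-z_effect - alpha_z)

# Hypothetical numbers: 0.8% annual mortality in the control arm, a
# 20% relative reduction from insurance, 10,000 people per arm.
print(round(power_two_proportions(0.008, 0.0064, 10_000), 2))  # → 0.27
```

Under these made-up assumptions, a study of 10,000 people per arm has only about a one-in-four chance of detecting even a 20% mortality reduction; by the same formula, 80% power would take roughly 44,000 people per arm.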

But it's hard to believe that the effect is really huge, precisely because the data tend to be so ambiguous.  As Michael Cannon points out, this is the most vulnerable population--adults, many of them near-elderly, below the poverty line.  This is where you'd expect to see the biggest effect of putting people on government insurance, because this group has very little recourse to alternative health care resources such as employer insurance, or paying out of pocket. (Though some of them seem to have found some anyway; as best I can make out from their somewhat confusing tables, 13% of the control group ended up on private insurance.)

I think this really points up the difficulty of finding good measures of "health".  We can come up with all sorts of objective measures, but we have to keep asking ourselves, relentlessly, whether what we're actually measuring is good or bad: is higher utilization of services really improving people's lives?  Is lowering easily measurable blood-cholesterol levels, at the risk of muscle atrophy, an improvement in health, and if so, by how much?  With some exceptions, the easiest things to measure are not necessarily the most important things to well-being.

This has implications for whether or not we should have a public health plan--is it worth the cost if what we're buying is making people feel happier about their insurance status and protecting them from medical bills sent to collections, rather than saving them from certain death?  But it also has implications for how we're going to structure the systems we have.  For example, a lot of people are really excited about comparative effectiveness research, which, in the public telling, has grown into a painless way to reduce health expenditures by eliminating treatments that don't "work".  But while there are some treatments that just shouldn't be done, there's also a large gray area of things that are hard to measure, or to value--extra days of life for a cancer patient, modest reductions in pain or slight increases in mobility, convenience and lifestyle considerations.

And then there's that happiness.  As this study shows, we like having access to medical treatment, even if there's no easy-to-measure improvement in our bodies.  Paying attention to people, or telling them that a drug will make them feel better, really can make them feel better.  What do we do about that?  How do we stick that into our equations?

Every time a new study comes out, people on both ends of the political spectrum are quick to seize on it as proof of their prior beliefs.  But in this area, the proof is usually messy.  It rarely tells us exactly what we wanted--and expected--to hear.