What I am saying is that we don't know how big the effect is. Refuting me involves not saying, well, here's another study showing some effect, but rather taking a stand and saying that we do know how big the effect is, or at minimum, that we can show it's probably at least 20,000 people a year, the figure I was discussing.
Because of course, size matters. If you want to argue in favor of a national health care system on the basis of improvements in mortality, then the number really has to be quite large. By 2019, the CBO expects the government to be spending just about $163 billion more on the exchanges and the Medicaid/S-CHIP expansions. (About $100 billion of that is to be offset by Medicare and other cuts--but I'm just trying to isolate how much we're going to spend to expand coverage, since we could do the coverage expansion without the Medicare cuts, or the Medicare cuts without the coverage expansion--and doing the latter would give us money for other things, so there is an opportunity cost to using them for this.)
If 1,000 people die a year from lack of insurance, that means we will be spending $163 million per life saved. If the number is 5,000, we would be spending $33 million. In fact, you need the number of people dying from lack of insurance each year to be quite large--more than 20,000--to get the dollars-per-life-saved within the ranges that, say, the EPA or the NHTSA use when doing cost-benefit analyses on regulations.
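The arithmetic here is simple division, but it's worth making explicit. A minimal sketch (the $163 billion is the CBO figure cited above; the death counts are the hypotheticals in the text):

```python
# Cost per life saved = annual coverage-expansion spending / annual deaths averted.
ANNUAL_SPENDING = 163e9  # CBO's ~$163 billion/year estimate for the coverage expansion

for deaths_averted in (1_000, 5_000, 20_000):
    cost_per_life = ANNUAL_SPENDING / deaths_averted
    print(f"{deaths_averted:>6,} deaths averted/year -> "
          f"${cost_per_life / 1e6:,.0f} million per life saved")
```

This reproduces the figures in the text: $163 million at 1,000 deaths, about $33 million at 5,000, and only above roughly 20,000 deaths a year does the cost per life fall toward the single-digit millions that regulatory cost-benefit analyses typically use.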
Now, obviously, as I've also said repeatedly, there are reasons beyond mortality that we might support this system. Mortality is only one element, albeit an important one. And of course, the CBO numbers are themselves very, very rough guesses.
But I think that journalistic hunger for "a number" has resulted in some very rough numbers with a lot of weaknesses being adopted as fact in the debates. They're not facts, they're very rough guesses, and they shouldn't have been used as a selling point for this plan without at least some investigation of how reliable they were. Moreover, I know that the people arguing with the study understand the problems, because they suddenly rediscovered them with regard to the data on deaths before and after age 65. A lot of people are arguing that we should ignore the aggregate data on Medicare mortality statistics in favor of Card's paper on the discontinuity in health outcomes between ER admits just under 65 and those just over, or some other more targeted work.
I see the argument for using easier-to-measure subgroups in an attempt to isolate causality. But here's the thing: you cannot say, well, aggregate data isn't very good for capturing causality, and also cite the figures from the Urban Institute, or Himmelstein et al., as if they had some meaning. Those data are far worse than the Medicare data, because the Medicare data at least gives you a natural experiment, and it doesn't try to identify "the uninsured" on the basis of their insurance status on a single day. Either it's reasonable to infer causality from large and noisy data sets, or it isn't.