Derek Lowe dives into a problem that is far too poorly understood by most of the public: the problem of false positives.

The news of a possible diagnostic test for Alzheimer’s disease is very interesting, although there’s always room to wonder about the utility of a diagnosis of a disease for which there is little effective therapy. The sample size for this study is smaller than I’d like to see, but the protein markers that they’re finding seem pretty plausible, and I’m sure that many of them will turn out to have some association with the disease.

But let’s run some numbers. The test was 91% accurate when run on stored blood samples of people who were later checked for development of Alzheimer’s, which compared to the existing techniques is pretty good. Is it good enough for a diagnostic test, though? We’ll concentrate on the younger elderly, who would be most in the market for this test. The NIH estimates that about 5% of people from 65 to 74 have AD. According to the Census Bureau (pdf), we had 17.3 million people between those ages in 2000, and that’s expected to grow to almost 38 million in 2030. Let’s call it 20 million as a nice round number.

What if all 20 million had been tested with this new method? We’ll break that down into the two groups – the 1 million who are really going to get the disease and the 19 million who aren’t. When that latter group gets their results back, 17,290,000 people are going to be told, correctly, that they don’t seem to be on track to get Alzheimer’s. Unfortunately, because of that 91% accuracy rate, the other 1,710,000 people are going to be told, incorrectly, that they are. You can guess what this will do for their peace of mind. Note, also, that almost twice as many people have just been wrongly told that they’re getting Alzheimer’s as the total number of people who really will.
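Lowe's arithmetic can be checked in a few lines. This sketch assumes, as the article implicitly does, that the single 91% accuracy figure applies to both groups (i.e., sensitivity and specificity are both 91%):

```python
# Reproducing the article's numbers for 20 million people tested.
population = 20_000_000
prevalence = 0.05   # NIH estimate: ~5% of people 65-74 have AD
accuracy = 0.91     # assumed to be both sensitivity and specificity

sick = round(population * prevalence)        # 1,000,000 will really get AD
healthy = population - sick                  # 19,000,000 will not

true_negatives = round(healthy * accuracy)   # correctly told "no"
false_positives = healthy - true_negatives   # wrongly told "yes"
true_positives = round(sick * accuracy)      # correctly told "yes"

print(true_negatives)    # 17,290,000
print(false_positives)   # 1,710,000
print(true_positives)    # 910,000
```

The false positives (1,710,000) outnumber the true positives (910,000) nearly two to one, which is the heart of the problem.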

People look at tests with small error rates--a false positive rate of, say, 0.5%--and conclude that if they test positive, it's overwhelmingly likely that they have the disease. But this is true only for conditions that are relatively common. Take a test with a false positive rate of 5% for a disease with a prevalence of 1 in 1,000--lupus, say. If you test positive on a random screen, what are the odds that you actually have the disease?

Most people--even, apparently, a shocking number of doctors--would say that the odds are 95%. But this is all wrong. If you test 1,000 people for lupus, 1 of them will correctly test positive--and about 50 of the other 999 will falsely test positive. The chances are only 1 in 51, less than 2%, that you actually have the disease.
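The same calculation for the lupus screen, assuming (for simplicity) that the test catches the one true case:

```python
# Positive predictive value of a screen with a 5% false positive rate
# applied to a disease with 1-in-1,000 prevalence.
tested = 1000
true_cases = 1                     # prevalence: 1 in 1,000
false_positive_rate = 0.05

false_positives = round((tested - true_cases) * false_positive_rate)  # ~50
positives = true_cases + false_positives                              # 51

ppv = true_cases / positives       # chance a positive result is real
print(f"{ppv:.1%}")                # about 2.0%
```

Note that the sensitivity of the test barely matters here; the answer is dominated by the ratio of false positives to true cases, which is why a low prevalence swamps even a fairly accurate test.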

These are in fact the actual numbers for anti-nuclear antibody tests and systemic lupus, at least as relayed to me by my immunologist after I got a borderline positive result on a screen. These suggest that no one should ever do a random ANA; the information it gives is garbage, particularly since they don't treat lupus until you manifest symptoms. Yet lots of doctors, including mine, do.
