Mark Twain famously decried three kinds of lies: lies, damned lies, and statistics. While Twain himself was no statistician, he did hit upon an important idea. Physicians, scientists, and the general public should be cautious about accepting many research reports at face value. The mere fact that biomedical researchers can find a statistically significant relationship between good health and a particular drug, nutritional supplement, dietary modification, or medical device does not establish that it is healthful. Depending on who is analyzing the statistics and how, numbers can lie, and in some cases, they can lure us to perdition.
Consider vitamin E, which is actually a group of fat-soluble compounds necessary for good health. Vitamin E is a type of antioxidant, which means that it interferes with the production of highly reactive oxygen species when fats are oxidized. On this basis, proponents once believed that vitamin E supplementation would produce a host of health benefits, including lowering rates of heart disease and cancer and increasing longevity. Early studies provided statistical support for this point of view. However, it now appears that vitamin E supplementation not only is not associated with decreased mortality in adults but may in fact slightly increase it.
When research finds a positive relationship between some intervention and good health despite the fact that no such positive relationship actually exists, we call it a false positive finding. There are many reasons that false positive findings frequently appear in both the popular press and the scientific literature. These reasons were beautifully summarized by Professor John Ioannidis of Tufts University in "Why Most Published Research Findings Are False." Simply put, some research models make it more likely for reported research results to be false than true, in part because a great deal of research merely amplifies preexisting biases.
Anyone making health and lifestyle decisions based on the scientific literature or reports of its findings in the popular press needs to understand these pitfalls. One of the most important concerns is the wide latitude researchers enjoy in defining outcomes and designing studies. In many cases, reported outcomes are very far removed from health. For example, a drug may reliably lower blood pressure or cholesterol levels but provide no benefit when it comes to reducing heart attacks and strokes or prolonging life. In some cases, such drugs produce a number of undesirable side effects, and in others they actually turn out to increase mortality rates.
Improving one isolated health parameter such as blood pressure does not necessarily make us healthier overall. To take an extreme case, we have long had at our disposal a substance that is extremely effective against high blood pressure. No one, no matter how high their blood pressure, will remain hypertensive after they take it. In fact, there is no substance known to medicine that can produce a greater reduction in blood pressure. On the downside, the substance in question is an extremely lethal poison. When physicians think about whether or not to prescribe a drug, we need to look at its effect on the whole patient, not just some particular laboratory value.
Another major pitfall concerns the powerful incentives for producing positive results. A great deal of research on drugs and medical devices is funded by profit-seeking corporations, which have a strong interest in seeing their investments bear fruit. The more money such a company invests in developing a new drug or device, the more urgent it becomes to see a substantial return on that investment. The same is true, though perhaps to a lesser degree, for publicly funded research. In both cases, people who cannot demonstrate that shareholders' or taxpayers' money has been well spent may suffer for it.
Egas Moniz received the Nobel Prize in Medicine for developing a form of frontal lobotomy, but the researchers who later showed its poor benefit/risk ratio were not similarly recognized.
One important example of this bias is the reporting of antidepressant efficacy. One analysis of articles in the scientific literature concluded that the effectiveness and benefit/risk ratio of the most popular class of antidepressants had been greatly exaggerated. For example, of 74 studies registered with the Food and Drug Administration, 37 that showed positive results were published in journals, while 22 that showed negative results were not. Moreover, 11 studies that showed negative results were published in a way that suggested a positive result. Overall, 94 percent of published studies indicated a positive result, when only 51 percent were actually positive.
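The figures above can be reconciled in a few lines of arithmetic. The sketch below uses only the counts quoted in the passage, plus two inferences needed to make the totals balance (neither is stated here): that one positive trial went unpublished, and that the remaining three negative trials were published accurately.

```python
# Reconciling the antidepressant publication-bias figures quoted above.
registered = 74                 # trials registered with the FDA
pub_positive = 37               # positive trials published in journals
neg_unpublished = 22            # negative trials never published
neg_spun_positive = 11          # negative trials published as if positive

# Inferred, not stated in the passage: the 51% figure implies 38 of the
# 74 registered trials were actually positive (one went unpublished).
positive = 38
negative = registered - positive                                   # 36
neg_pub_accurate = negative - neg_unpublished - neg_spun_positive  # 3

published = pub_positive + neg_spun_positive + neg_pub_accurate    # 51
apparently_positive = pub_positive + neg_spun_positive             # 48

pct_published_positive = round(100 * apparently_positive / published)
pct_actually_positive = round(100 * positive / registered)
print(pct_published_positive, pct_actually_positive)  # 94 51
```

Run this way, 48 of the 51 published reports appear positive (94 percent), even though only 38 of the 74 registered trials (51 percent) actually were.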