How Health Research Misdirects Us

Improving one isolated health parameter such as blood pressure does not necessarily make us healthier overall. Studies will not supplant the basic principles of living well.

Mark Twain famously decried three kinds of lies: lies, damned lies, and statistics. While Twain himself was no statistician, he did hit upon an important idea. Physicians, scientists, and the general public should be cautious about accepting many research reports at face value. The mere fact that biomedical researchers can find a statistically significant relationship between good health and a particular drug, nutritional supplement, dietary modification, or medical device does not establish that the intervention is actually healthful. Depending on who is analyzing the statistics and how, numbers can lie, and in some cases, they can lure us to perdition.

Consider vitamin E, which is actually a group of fat-soluble compounds necessary for good health. Vitamin E is a type of antioxidant, which means that it interferes with the production of highly reactive oxygen species when fats are oxidized. On this basis, proponents once believed that vitamin E supplementation would produce a host of health benefits, including lowering rates of heart disease and cancer and increasing longevity. Early studies provided statistical support for this point of view. However, it now appears that vitamin E supplementation not only is not associated with decreased mortality in adults but may in fact slightly increase it.

When research finds a positive relationship between some intervention and good health despite the fact that no such positive relationship actually exists, we call it a false positive finding. There are many reasons that false positive findings frequently appear in both the popular press and the scientific literature. These reasons were beautifully summarized by Professor John Ioannidis of Tufts University in his paper "Why Most Published Research Findings Are False." Simply put, some research models make it more likely for reported research results to be false than true, in part because a great deal of research merely amplifies preexisting biases.

Anyone making health and lifestyle decisions based on the scientific literature or reports of its findings in the popular press needs to understand these pitfalls. One of the most important concerns is the wide latitude researchers enjoy in defining outcomes and designing studies. In many cases, reported outcomes are very far removed from health. For example, a drug may reliably lower blood pressure or cholesterol levels but provide no benefit when it comes to reducing heart attacks and strokes or prolonging life. In some cases, such drugs produce a number of undesirable side effects, and in others they actually turn out to increase mortality rates.

Improving one isolated health parameter such as blood pressure does not necessarily make us healthier overall. To take an extreme case, we have long had at our disposal a substance that is extremely effective against high blood pressure. No one, no matter how high their blood pressure, will remain hypertensive after taking it. In fact, no substance known to medicine produces a greater reduction in blood pressure. The downside is that the substance in question is an extremely lethal poison. When we physicians think about whether or not to prescribe a drug, we need to look at its effect on the whole patient, not just on some particular laboratory value.

Another major pitfall concerns the powerful incentives for producing positive results. A great deal of research on drugs and medical devices is funded by profit-seeking corporations, which have a strong interest in seeing their investments bear fruit. The more money such a company invests in developing a new drug or device, the more urgent it becomes to see a substantial return on that investment. The same is true, though perhaps to a lesser degree, for publicly funded research. In both cases, people who cannot demonstrate that shareholders' or taxpayers' money has been well spent may suffer for it.

One important example of this bias is the reporting of antidepressant efficacy. One analysis of articles in the scientific literature concluded that the effectiveness and benefit/risk ratio of the most popular class of antidepressants had been greatly exaggerated. For example, of 74 studies registered with the Food and Drug Administration, 37 that showed positive results were published in journals, while 22 that showed negative results were not. Moreover, 11 studies that showed negative results were published in a way that suggested a positive result. Overall, 94 percent of published studies indicated a positive result, when only 51 percent were actually positive.

Simply put, positive results are good business. There is money to be made every time a new drug or device is brought to market. It draws public interest and makes for good news copy. And it is something that most consumers and patients are hungry for. Wouldn't it be great if we could lower our blood pressure, narrow our waistlines, increase our energy levels, elevate our moods, and prolong our lives simply by taking some new pill or making use of some new medical device?

Similar incentives apply to researchers. The careers of physicians and scientists depend in part on how much interest we generate in our discoveries and how much research funding we can attract. Generally speaking, positive results are far more likely than negative ones to result in a presentation at a scientific meeting or a publication in a scientific journal. Major awards are virtually never presented to researchers for negative results, including even results that contradict previous positive reports. Egas Moniz received the Nobel Prize in Medicine for developing a form of frontal lobotomy, but the researchers who later showed its poor benefit/risk ratio were not similarly recognized.

In some cases, teams of researchers vie with one another in pursuit of statistically significant results. When this happens, the probability that at least one team will come up with positive results is increased, largely because unlikely results become more likely as we increase the number of trials. If one person flips a coin four times, the probability of coming up with four heads in a row is low. On the other hand, if 100 people each flip a coin four times, there is a good chance that at least one person will get four heads in a row. If only the people who get positive results report their findings, it will appear as though the probability of getting four heads in a row is far higher than it really is.
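To put rough numbers on the coin analogy, here is a minimal sketch in Python. It is purely illustrative; the figures are a simple probability calculation, not data from any study discussed here.

```python
# Illustrative arithmetic for the coin-flipping analogy.
p_single = 0.5 ** 4                          # one person's chance of four heads in a row: 1/16
p_at_least_one = 1 - (1 - p_single) ** 100   # chance that at least one of 100 people succeeds
print(f"One flipper:         {p_single:.1%}")        # about 6.2%
print(f"At least one of 100: {p_at_least_one:.1%}")  # about 99.8%
```

A roughly 6 percent long shot becomes a near certainty once 100 people try, which is exactly why selective reporting of the "winners" is so misleading.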

Likewise, a drug may have little or no beneficial effect. But if a sufficient number of researchers conduct trials to assess its efficacy, and if these studies are designed in such a way that the probability of a positive result is high (for example, by looking for only a very small effect), then there is a good chance that some researchers will be able to report positive results. In such situations, which are relatively common, the probability that positive results will be found may be not only high but greater than that of negative results. As Ioannidis has argued, the probability that a reported finding is false can exceed the probability that it is true.
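A rough simulation conveys the scale of the problem. The numbers below are hypothetical assumptions chosen for illustration (40 independent trials of a drug with no real effect, each judged at the conventional 5 percent significance level), not figures from any study cited here.

```python
import random

random.seed(0)
n_trials, alpha, runs = 40, 0.05, 10_000   # hypothetical: 40 trials, 5% significance, 10,000 simulated worlds
runs_with_a_positive = 0
for _ in range(runs):
    # With no true effect, each trial still comes out "positive" with probability alpha.
    positives = sum(random.random() < alpha for _ in range(n_trials))
    if positives > 0:
        runs_with_a_positive += 1

print(f"Simulated worlds with at least one positive trial: "
      f"{runs_with_a_positive / runs:.0%}")   # close to 1 - 0.95**40, about 87%
```

If only those chance positives reach print, readers will see an effective drug where none exists.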

In most cases, there is probably little harm in eating more foods that are high in antioxidants, making use of nutritional supplements such as fish oil, or taking an extra vitamin tablet each day. We may be wasting money, but the sums involved are generally not high, and it is quite possible that we are achieving some benefit from the placebo effect associated with believing that we are taking good care of ourselves. But when such drugs and devices cost large sums of money and carry substantial risks, we should think twice and then think again before joining the parade of fads that often characterizes the health beat.

We all want a shortcut, some magic pill or device that will make up for our bad choices, promote our health, and prolong our lives. The market for such a drug would be worth a huge sum of money. For this reason, someone will always be on hand to provide it to us, for a handsome price. But in the final analysis, living wisely matters most. Where health is concerned, this means having meaningful work in life (whether paid or not), eating moderately from a wide variety of foods, exercising in moderation, getting plenty of sleep, and avoiding excess. The latest research findings offer no substitute for living well.

Richard Gunderman, MD, PhD, is a correspondent for The Atlantic. He is a professor of radiology, pediatrics, medical education, philosophy, liberal arts, and philanthropy, and vice-chair of the Radiology Department, at Indiana University. Gunderman's most recent book is X-Ray Vision.
