How to Evaluate Medical Research
We applaud David H. Freedman’s “Lies, Damned Lies, and Medical Science” (November Atlantic), having long been admirers of Professor John Ioannidis. We too evaluate medical evidence and train physicians and others in how to analyze studies for reliability and clinical usefulness. However, we believe the problem is larger, and the consequences of applying the results of misleading science more deleterious, than the article implies.
Low-quality science significantly contributes to lost care opportunities, illness burden, and mortality. For instance, in the 1980s, observational studies “reported” dramatic tumor shrinkage and reduced mortality in women with advanced breast cancer who were treated with high-dose chemotherapy and autologous bone-marrow transplant. But such studies are highly prone to bias; valid randomized controlled trials are required to prove the efficacy of therapies. More than 30,000 women underwent these procedures before randomized controlled trials showed greater adverse events and mortality. And we believe fewer than 10 percent of such trials are reliable.
Individual biases have been shown to greatly distort study results, frequently in favor of the new treatment being studied. Yet few health-care professionals know the importance of bias in studies, or the basics of identifying it, and so are at high risk of being misled. In an informal tally, roughly 70 percent of physicians fail our basic test for critical appraisal, which should be a foundational discipline for all health-care professionals.
Sheri Ann Strite
Michael E. Stuart, M.D.
David H. Freedman states, “Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” This inflammatory lead-in, and the article itself, are dangerously specious.
Freedman centers on John Ioannidis, whose principal research tool is meta-analysis. This method, Ioannidis claims, has uncovered widespread flaws in all types of clinical research. Freedman fails to mention, however, that meta-analysis itself has major problems, and is not accepted as a rigorous method of analysis by many leading statisticians. A major problem is the pooling of data from different sources, which involve different populations, different ways of conducting clinical trials, and different ways of appraising the results. This heterogeneous pool of information is blended into a statistical mayonnaise of strong studies and weak studies, yielding an analysis that is often impossible to evaluate with any degree of certainty. Ioannidis may claim to have mathematical methods that can account for these differences, but doubts will always remain about whether he can overcome the inherent problems of meta-analysis.
The devastating consequences of untreated hypertension, the connection between type 2 diabetes and obesity, the prognosis of certain death in childhood leukemia, and poor survival in HIV/AIDS are just a few problems we no longer face because of high-quality clinical research. None of these achievements, or any others, were mentioned by Freedman, who went on a cherry-picking expedition in a field he doesn’t seem to understand.