40 Years After Tuskegee: Reuniting Medical Research and Practice
Guidelines to protect human research subjects impede efficient generation and exchange of knowledge.

It is estimated that we lack sufficient scientific evidence about the effectiveness of over 50 percent of commonly used medical treatments, and that 100,000 patients die annually from healthcare-acquired infections. The pressing need for better evidence on how to deliver medical care effectively, safely, and efficiently, however, is butting up against another moral imperative -- protecting patients from unethical research.
For almost forty years, maintaining a sharp distinction between medical research and medical practice has been the cornerstone of biomedical ethics and the federal regulations that oversee how research with human subjects is conducted. The need to build a firewall between research and practice emerged during a period of intense societal focus on egregious violations of human rights that occurred in research, including most notably the Tuskegee Syphilis Study. The dominant concern then was to protect patients and other subjects from risky and exploitative research.
Today, however, the segregation model we put in place to protect patients is actually harming them.
As an ethics scholar, my work is rarely described as "radical." But this week Nancy Kass, Steve Goodman, Peter Pronovost, Sean Tunis, Tom Beauchamp, and I published what some are calling a radical proposal to transform how we think about the roles and responsibilities of pretty much everyone involved in health care -- health professionals, managers of hospitals and clinics, insurers, payers, and, yes, even patients.
As we see it, research could provide solutions to some of these pressing problems, but unfortunately, the current ethics model makes that difficult. Any activity designed, even in part, to generate new knowledge about which common treatments really work, or how to improve patient safety, has to be separated from practice. Once the activity is identified as "research," it is subject to special oversight regulations and requirements that medical practice is not. The segregation model thus impedes the efficient generation and exchange of knowledge -- knowledge that can be produced at little or no risk to patients and that is critically needed to improve the quality and safety of the care patients receive, and to increase the likelihood that the ethical goal of universal access to healthcare becomes a reality.
So my colleagues and I are calling for an end to this segregation model and its replacement with a new ethics framework designed specifically for the integration of research with practice. A fundamental premise of the new framework is that every medical decision we, as patients, and our clinicians make, and each episode of care we receive, should generate data and evidence that improve the care of patients who come after us; we, in turn, benefit from what is systematically learned from the care received by the patients who came before us. Through continuous, real-time learning, we can provide better care to more people, save lives, become smarter, and wring every dollar of value from the system. This is what the Institute of Medicine has dubbed the "learning healthcare system."
Our framework has two distinctive features. First, it challenges the view that our thinking about how to protect patients ethically in healthcare oversight should turn on whether an activity is called research or medical practice. Second, the framework sets a moral presumption in favor of learning in healthcare, under which health professionals and institutions have an obligation to conduct learning activities, and patients have an obligation to contribute to these activities.
And, yes, this involves patients too. Just as health professionals and organizations have an obligation to learn, patients have an obligation to contribute to, participate in, and otherwise facilitate learning that will improve the quality of the healthcare system. That obligation is not absolute, however; it does not hold regardless of risk. To be clear, the patient obligation we propose is limited to research that poses no additional risk beyond what patients already face in the course of their medical treatment. Research that poses additional risk -- for example, the testing of new drugs not yet approved by the Food and Drug Administration -- is not included under the obligation and should always proceed only with the express, voluntary informed consent of the patient. Also, under our new framework, policies would be in place to ensure that patients understand what learning activities are underway in their doctor's offices or hospitals, and what measures are being taken to protect their interests and rights.
When I was diagnosed as a teenager with a spinal disorder, my parents and I (really, my parents) were given two options. I could have major surgery or wear a restrictive body brace for several years. At the time, there was no scientific evidence about which option worked best. Parents today are effectively in the same position my parents were in many decades ago. We still do not have sufficient scientific evidence about which children do best with surgery and which with bracing.
Our framework would permit experts in spine disease to do some kinds of research without getting my parents' or my express permission. For example, after we made our decision, these experts would be able to follow what happened to me and to the thousands of other children whose parents chose either surgery or bracing, using information from our medical records to learn more about which option worked best for which kinds of patients.
It's ethically unthinkable to permit decades to go by without any improvement in the medical system that affects so many of us. In the absence of a system that makes respectful, no-risk research an integrated part of healthcare, that's exactly what is going to continue.
A version of this article also appears on Johns Hopkins' Berman Institute of Bioethics Bulletin.