Physicians won't become obsolete any time soon, but the comprehensive integration of everything we know about well-being could revolutionize medical care.
The progress of modern applied science has been defined by a series of outrageously ambitious projects, from the effort to build the first atomic bomb to the race to sequence the human genome.
For scientists and engineers today, perhaps the greatest challenge is the structure and assembly of a unified health database, a "big data" project that would collect in one searchable repository all of the parameters that measure or could conceivably reflect human well-being. This database would be "coherent," meaning that the association between individuals and their data is preserved and maintained. A recent Institute of Medicine (IOM) report described the goal as a "Knowledge Network of Disease," a "unifying framework within which basic biology, clinical research, and patient care could co-evolve."
The information contained in this database - expected to grow denser and richer over time - would encompass every conceivable domain, covering patients (DNA, microbiome, demographics, clinical history, treatments including therapies prescribed and estimated adherence, lab tests including molecular pathology and biomarkers, information from mobile devices, even app use), providers (prescribing patterns, treatment recommendations, referral patterns, influence maps, resource utilization), medical product companies (clinical trial data), payors (claims data), diagnostics companies, electronic medical record companies, academic researchers, citizen scientists, quantified selfers, and patient communities - and this just starts to scratch the surface.
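The "coherence" requirement - keeping every record tied to the same individual across all of these domains - can be illustrated with a toy sketch. Everything below (the patient IDs, the field names, the values) is hypothetical, invented purely to show the idea of linking multi-domain records through a stable identifier:

```python
from collections import defaultdict

# Toy records from two domains, keyed by a stable patient ID.
# All identifiers and values are hypothetical.
clinical = [
    {"patient_id": "p001", "diagnosis": "heart failure", "jvd_observed": True},
    {"patient_id": "p002", "diagnosis": "type 2 diabetes", "jvd_observed": False},
]
claims = [
    {"patient_id": "p001", "procedure_code": "93306", "paid_usd": 412.50},
    {"patient_id": "p002", "procedure_code": "83036", "paid_usd": 28.00},
]

def build_coherent_view(*sources):
    """Merge records from multiple domains into one per-patient view,
    preserving the association between individuals and their data."""
    view = defaultdict(dict)
    for source in sources:
        for record in source:
            pid = record["patient_id"]
            # Fold every non-ID field into that patient's unified record.
            view[pid].update(
                {k: v for k, v in record.items() if k != "patient_id"}
            )
    return dict(view)

merged = build_coherent_view(clinical, claims)
print(merged["p001"])  # clinical and claims data, linked to one individual
```

A real knowledge network would of course face record-linkage, privacy, and consent problems that this two-line join waves away; the sketch only captures the structural goal of keeping individuals and their data associated.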
The underlying assumption here is that this information, appropriately analyzed, should improve both our potential and attained health, pointing us towards future medical insights while enabling us to immediately improve care by optimizing the use of existing resources and technologies.
As the IOM report concluded, "realizing the full promise of precision medicine, whose goal is to provide the best available care for each individual, requires that researchers and health-care providers have access to very large sets of health and disease-related data linked to individual patients."
As daunting as this task obviously is, companies and academic researchers are bravely taking up the challenge, generally by focusing on some subset of the problem, typically at an intersection of two or more domains (clinical information plus biomarkers, say, or provider data plus claims). The selection may reflect what they bring to the table (clinical data in the case of medical centers, claims data in the case of payors) or where they think the greatest value can be found.
In addition, both established information companies (e.g., Google) and emerging companies (such as Palantir) are key players; their fundamental business is built around their ability to approach problems of this dimension and complexity. Google famously asserts that its mission is "to organize the world's information and make it universally accessible and useful."
One industry that seems underrepresented at the table is big pharma; evidently, many large drug companies have decided that big data informatics is not a core competency, and have elected to outsource it as a service. Perhaps this represents a savvy assessment of the current state of the art. However, it may also be a miscalculation on the order of IBM's failure to appreciate the value of the operating system it licensed from Bill Gates, or Xerox's failure to appreciate the value of the PC their PARC engineers had created, and which Steve Jobs immediately recognized and leveraged. Arguably, if you really want to build a company that is going to deliver the health solutions of the future, your first and most important investment might well be in recruiting a Palantir-level analytics group.
At the same time, you can understand big pharma's hesitation. Despite all the promise of big data in health, the results to date have been surprisingly skimpy; putting existing data in a vat and stirring has yielded a slew of academic publications and a number of pretty pictures, but few truly impactful changes in health, at least so far.
Critics contend it's faddish, way overhyped, and not ready for primetime. Consider a recent big data project that concluded that an easily observed clinical indicator, jugular venous distention (enlarged neck veins), is a poor prognostic sign for heart failure patients. That's something a third-year med student could just as easily have told you.
Advocates, meanwhile, point to early successes and plead for patience and resources; they point out that the scale of data required to build good predictive models has only recently become available, and has already led to promising advances.
For instance, systems biologists such as Mount Sinai School of Medicine's Eric Schadt (a friend and previous collaborator) and Stanford's Atul Butte have already used big data analytics to identify and prioritize drug targets - though it remains to be seen whether these yield clinically useful products, and whether the associated approaches are truly generalizable.
Meanwhile, the latest issue of Cell reports the first computational model of a whole cell, a model of a bacterium that "includes all its molecular components and their interactions" and, according to the authors, "provides insights into previously unobserved cellular behaviors" while leading to new predictions that were subsequently experimentally validated.
There are at least two profound challenges that big data advocates will need to overcome en route to analytical nirvana.