As growing technological prowess enables sophisticated discrimination capabilities, our reach for health and economic benefit stands to collide with the ethical core of medicine.
"It is truly the first civil rights act of the 21st century."
The words of Sen. Judd Gregg, Republican of New Hampshire, struck an optimistic note in 2008. After 12 years of struggle, congressional action to protect Americans from genetic discrimination was providing a rare respite in a period of growing partisan rancor.
I never imagined that enactment of the Genetic Information Nondiscrimination Act (GINA) would prove so difficult. When I began work as health advisor to Olympia Snowe, the Republican senator from Maine, five years earlier, the bill had unanimous Senate support. Protecting individuals from discrimination based on their genetic makeup appeared an unassailable proposition. The fact was, most existing civil rights statutes were based on such immutable characteristics.
Yet GINA became mired in contentious issues, and by the time it was signed into law I faced a multitude of concerns. It was clear that advancing technology was opening a virtual Pandora's box of new civil rights challenges. At the crux of these was the fact that scientific progress has been enabling increasingly sophisticated discrimination. That could take the form of customized cancer therapy that precisely targets a patient's particular tumor; yet it could also result in an insurer denying health coverage because one is genetically susceptible to developing that same cancer. Science could be set at cross-purposes.
Today's laboratories, data repositories and escalating analytical power make it possible to investigate almost any factor and its relationship to health -- including one's genetic makeup, medical history, diet, relationship status, behavior and more. Yet when discoveries are made, curse often precedes cure, and even after a preventive measure or treatment becomes available, economic considerations tend to encourage bias against those at risk. Genetic discrimination was just an early manifestation of such adverse effects. Though health technology-driven bias is less visible than being relegated to the "back of the bus," its impact may be no less significant, and the proliferation of such forms of discrimination could undermine over a half century of progress in civil rights.
Some claimed that GINA was enacted prematurely, as few instances of genetic discrimination had been documented. Yet such new bias could grow as rapidly as the technology on which it is based. A more proactive approach was required. While our growing knowledge of genetic predisposition to disease offered the prospect of better health, it also meant that those who were identified with genetic risk could be denied health coverage and face employment discrimination. A new age of genomic medicine was dawning, spurred forward by early completion of the Human Genome Project. Yet the promise of such medical technology wouldn't be fully realized if few Americans were inclined to participate in research or utilize new genetic tests due to the threat of discrimination.
Those concerns had driven Snowe and Louise Slaughter, the Democratic congresswoman from New York, to mount the first federal effort to prohibit genetic discrimination in 1996. Similar to other civil rights legislation, GINA was not targeted to address every possible manifestation of bias, but focused on the greatest public concerns -- discrimination in employment and health insurance.
Genetic discrimination comprised just one of a number of game-changing technological challenges to civil rights. Confronting these presents new obstacles, and points to the need for a paradigm shift in our approach to prevent such inappropriate bias.
Existing federal antidiscrimination statutes recognize a multitude of "protected classes," including those based upon race, color, national origin, gender, religion, age, marital and familial status, disability, veteran status, and sexual orientation -- and decades of effort have fostered greater equity and opportunity. But it was seldom necessary to control the information on which bias was based. Nor did one need to consider every possible harmful act. The fact is, an individual's membership in a protected class was typically discernible, and victims were often painfully cognizant of the bias they suffered. Both the basis for discrimination and its expression were usually apparent.
Such is certainly not the case for genetic discrimination. Bias may be based on a complex genetic analysis, compared with simpler classifications such as gender. With genetic status invisible to a casual observer, one is less likely to know others who are also at risk. Victims frequently may not know they have been targets of bias -- in many cases only the offender may be aware that discrimination has even taken place. If the action is relatively surreptitious, the hidden nature of the genetic trait means that the results of bias won't be readily visible to a casual observer. That is in stark contrast with other forms of discrimination, such as those based on race or color.
Another distinctive aspect of genetic discrimination is that any defense that a violation was committed at a subconscious level is moot. Offenders must actively seek out genetic data on which to act. So as we discussed the need to enact GINA, I often noted that such discrimination was particularly objectionable as it represented an indisputably deliberate act based on an individual's immutable characteristics.
We knew that if genetic discrimination grew in concert with our rapidly expanding knowledge of genomics, the potential for harm was immense. In a worst-case scenario one could envision the development of a genetic underclass -- a possibility provocatively and dramatically depicted in the 1997 film Gattaca, which portrays a future in which genetic manipulation and selection determine the course of an individual's life.
GINA was finally enacted following the 2006 elections -- but only after an additional 18 months during which it seemed every provision in the bill had been opened to re-negotiation. One of the most significant issues of contention involved the efforts of some to rigidly confine the bill's protection to the narrow constraints of employment and health insurance. While the legislation specifically banned discrimination in both spheres, GINA also protected individuals from being compelled to take a genetic test and treated such genetic information as medical data -- complete with corresponding restrictions on who could access it. It was argued that such strictures should be eased -- to allow employers in particular to utilize genetic tests and their results for other, unspecified purposes. If useful applications of genetic information were found that didn't directly involve employment or health insurance discrimination, some asked, why shouldn't those be permitted?
It was immediately clear that granting employers or insurance firms such open-ended access to genetic data would simply invite consultants and lobbyists to devise new strategies to undermine GINA's specific protections. No less worrisome was the fact that compromising the security of genetic data could help foster new injurious forms of discrimination. Genetic information could be exploited in countless ways outside the spheres of employment and insurance. One could envision a multitude of applications involving behavior alone. Perhaps some might wish to evaluate a potential tenant in part by checking their genetic profile. In the terminology of the film "Gattaca," it could be advantageous to screen out "invalids."
This illustrates two key principles in confronting such technology-based discrimination. The first is that one cannot rely solely on prohibiting harmful acts. For years we discussed, debated, explained, negotiated and cajoled -- trying to ensure that genetic information couldn't form the basis for employment discrimination or affect an individual's access to, or cost of, health coverage. Yet whenever Congress moves to prohibit harmful acts, that can trigger a seemingly endless game of "Whack-a-Mole" as one attempts to identify every conceivable loophole that might be used to undermine legislative intent.
One may thus conclude that reliance on explicit prohibitions alone can be insufficient, and that leads to a second principle that is critical to address discrimination involving health information: that access to health data must be restricted to those with a clear justification for its use. In this case, opening genetic data to employers and others could create a race between creative discrimination and congressional response, and given the current efficacy of Congress, the winner in such a contest appears obvious.
Our experience with GINA helped to reveal the tip of an emerging threat -- the use of modern data systems to create new forms of discrimination -- and our concern focused on the use of personal medical data. While genetic data expresses probabilities, other parts of one's medical record reflect established fact -- an individual's diagnoses, the medications one has used, and much more. As a clinical scientist I shared an appreciation for the value of individual medical data in research to improve health, yet recognized that its use was a double-edged sword, with tremendous potential for both benefit and harm. Many economic interests sought access to medical data for a wide array of uses -- in marketing, in lending, and a myriad of other applications -- and many such practices are not benign.
In discussions at an international meeting on privacy in August of 2008, I thus proposed a two-pronged strategy for addressing discrimination based upon health data: first, that certain harmful acts must be clearly prohibited, and second, that the possession and use of personal medical data should be restricted without an individual's consent. In terms of the latter, the governing principle ought to be "need to know," not "want to use." That would at least limit the manipulation of personal health data in creative new uses that could cause harm -- and those were a growing concern.
The practice of data mining certainly isn't new. For many years, consumers have been profiled using a variety of data sources, from credit card purchases to one's Internet activity. In 2009 there was increasing discussion on the issue -- particularly focused on data culled from social networking sites. The experiences of recent college graduates began to strike a chord in a generation that hadn't thought privacy was very important. In a worsening job market, the depiction of a wild Spring Break in Cancun might have detrimental impact on a young graduate's employment prospects, and many Americans began to speculate more broadly regarding what information was being aggregated about them, and just how it was being used. Data mining began to be appreciated for what it was -- an asset that could be used to harm as well as help. That was particularly true regarding medical information -- and often that data has been obtained without explicit consent. Pharmaceutical data provides an illustration of that.
Some of us have long been concerned regarding data mining in the promotion of prescription drugs. While most Americans have assumed medication records are secure, nothing could be further from the truth. Pharmacies have long sold physicians' prescribing data, after supposedly "de-identifying" records so that neither the physician's nor the patient's name is included. Yet pharmaceutical firms easily track what physicians prescribe, by simply purchasing the Drug Enforcement Administration (DEA) database of registrants -- which allows the matching of each prescription with the physician who wrote it. And that has enabled the pharmaceutical industry to focus marketing efforts and incentives to influence physicians' prescribing of particular drugs.
For patients, the use of insurance in purchasing a prescription generates an entry in a common industry database, but even those who pay completely out-of-pocket may not have confidentiality. One may use a credit or debit card, a customer discount card, and so on -- and it's simply a matter of linking up databases to determine that you have filled a birth control prescription, use an antidepressant, or perhaps are HIV-positive.
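The linkage described above is mechanically trivial. The sketch below is a hypothetical illustration -- the records, tokens, and names are all invented -- but it shows how "de-identified" pharmacy records can be re-joined to identities the moment any shared key, such as a payment card token, appears in a second database:

```python
# Hypothetical illustration of record linkage: all data is invented.
# "De-identified" pharmacy fills: no name, but a payment card token remains.
pharmacy_fills = [
    {"card_token": "tok_4821", "drug": "sertraline"},
    {"card_token": "tok_9177", "drug": "oral contraceptive"},
]

# A separate retail or loyalty-card database maps the same token to a person.
card_registry = {
    "tok_4821": "Alice Example",
    "tok_9177": "Bob Example",
}

def relink(fills, registry):
    """Join the two datasets on the shared token -- the 'de-identification'
    evaporates as soon as a linking key is available elsewhere."""
    return [
        {"patient": registry[f["card_token"]], "drug": f["drug"]}
        for f in fills
        if f["card_token"] in registry
    ]

for row in relink(pharmacy_fills, card_registry):
    print(row["patient"], "->", row["drug"])
```

No sophisticated analytics are needed; a single join suffices, which is why removing names alone provides so little protection.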
Such issues of health data privacy came to a head as Congress considered economic stimulus legislation in 2009. Part of the American Recovery and Reinvestment Act established both new health information technology (IT) standards as well as an incentive system to assist health care providers in adopting health IT systems. And as Congress considered the regulatory aspects of those provisions, it undertook just what many had long insisted could not be attempted -- a reconsideration of the privacy and security provisions of the Health Insurance Portability and Accountability Act of 1996 ("HIPAA").
It was remarkable that despite the innumerable HIPAA privacy notices that millions had received when obtaining care, most individuals had a very poor understanding of how their medical information was shared. Surveys demonstrated that the majority of Americans thought their medical records were only shared among those treating them and their health plan. The idea that marketers, drug firms, insurance bureaus, researchers, fundraisers and others might have access to their most intimate data could cause a firestorm. And under a modern health IT regime, while improved technology could promote better care at lower cost, without proper safeguards one could quickly access, copy, or lose the health records of millions with just a few keystrokes. The potential for abuse was high.
In speaking with many Democrats, I was disappointed that there was often either ignorance of data sharing practices, or a belief that somehow all this disclosure would produce better health outcomes. Part of that view stems from a confidence in technology and scientific research, but sometimes a paternalistic orientation towards health care can play a part -- the mindset of "we know what's best, we'll take care of you." Unfortunately, in the extreme such thinking can spawn legislation that fosters a "nanny state" approach to public health.
Republicans often voice a similar confidence in technology, with the frequent addition of an occasionally irrational reliance on the power of markets. The fact is, though, that the individual choice essential to consumer power has long been largely irrelevant. For example, the interests of the insurance and pharmaceutical sectors have long dictated how drug prescription data was managed. For both political parties, the influence of lobbyists representing pharmaceutical and insurance firms, and academic organizations seeking profitable research opportunities, had consistently trumped concerns about privacy and security.
The disturbing fact was that existing law and practice revealed a lack of understanding of some critical fundamentals of medical ethics. Beginning with the ancient Hippocratic Oath, the disclosure of personal medical information was considered unethical, and when the 1947 Nuremberg Code established limits on scientific study, that included recognition that every individual must have the right of informed consent. That code formed a basis for the prosecution of war crimes after the Second World War, and was followed by the Helsinki Declaration, a set of principles primarily directed towards physicians conducting research. That latter agreement included specific requirements to protect the "dignity, integrity, right to self-determination, privacy and confidentiality of personal information of research subjects." The Declaration recapitulated the requirement for consent -- even for the use of patient data alone.
This ethical framework recognizes that even research that is not interventional in nature can pose significant risks to patients. Beyond the Hippocratic Oath's assertion of personal privacy and autonomy, the subsequent codes acknowledge that both the findings of a study and the disclosure of personal patient data can have harmful consequences for an individual.
Unfortunately, a deficit of appreciation for that ethical framework contributed to changes in the HIPAA "Privacy Rule," which expanded access to patient records beyond that necessary to provide treatment and assure health plan payment of charges. Some practitioners also sought patient signatures on HIPAA privacy notices on which they had surreptitiously inserted consents in order to use a patient's data to further their own for-profit endeavors, such as pharmaceutical studies and third party marketing. Medical records became a profit center -- and were at best, a leaky repository of personal health information.
Indeed I wasn't too shocked when an executive of one of the world's largest technology firms mentioned he could purchase my medical records. He knew the system of data aggregators and brokers who tapped into what HIPAA refers to as my "protected health information." He assured me that he'd never actually do such a thing -- and that he was as disturbed as I that the current state of medical records security was likely to undermine public confidence in health IT systems and their potential to both improve care and reduce costs.
The issue at stake thus extended beyond the fact that sharing practices violated medical ethics; they also conflicted with the second principle I had learned -- that given the multitude of abuses possible, one minimizes abuse of medical information by limiting access in the absence of patient consent.
With Congress on a path to reform health insurance in 2009, many chose to minimize the danger. After all, in a year or so we would have guaranteed access to coverage, and health insurance discrimination would be a relic of the past. Yet personal health information can be used to discriminate in countless ways beyond simply denying one enrollment in a health plan. One could use it to evaluate potential customers, employees, or associates. And once an individual's medical record was compromised, it wasn't like a credit account -- such data never expires, and a replacement cannot be created. The damage is permanent and irreparable. In the language of the law, a victim cannot be made whole.
Leverage can be a good thing. The political calculus of the 2009 economic stimulus legislation provided an opportunity to ensure not only that critically-needed health IT adoption was promoted, but that Congress would also act to reverse a trend of inappropriate use of personal health information and make medical records more secure. Issues of information use and consent were addressed, and Snowe insisted upon mandatory disclosure of breaches of security to affected patients as a means to spur increased diligence. Patients would be notified if their medical records had been breached and hadn't been properly encrypted to prevent unauthorized individuals from reading them. If both failures occurred and one did not provide notice to patients, a modest fiscal penalty was imposed.
The breach notice requirement was vigorously opposed. Industry preferred that the public have as little knowledge as possible regarding both its business uses of their medical data and what had become an often wanton disregard for security. The latter remains too often the case: in the three years following enactment of the Recovery Act, over 14.8 million unencrypted patient records were compromised in just the ten largest data breaches. Yet even as progress was made to secure personal medical data, another threat emerged from those who sought to obtain individual health information.
The rise of predictive analytics has come in response to increasing demand to project future behaviors and outcomes. Its applications span a broad range -- from national security to selling soap. Most of us are familiar with a critical one -- the use of credit scores to estimate risk and increase profitability. Yet similar applications are all around us.
Some of these are remarkably useful, yet we have also seen the application of algorithms and lists that not only discriminate, but may be inaccurate and blindly imposed. In 2004 the late Sen. Edward Kennedy of Massachusetts repeatedly faced denial of boarding by US Airways because a "T. Kennedy" was on the "no fly" list. It even happened once as he tried to board a flight in his native Boston. If such ineptitude can impede an individual of his stature, one can imagine the impact on others when we allow technology to evolve from tool to judge.
It should come as no surprise that a number of both beneficial and harmful applications rely on protected medical information, yet sophisticated data analysis allows the construction and use of surrogate health measures as well. For example, instead of directly attempting to obtain a list of patients with a given condition, one may identify other characteristics and behaviors they have in common -- such as the products they purchase -- in order to build a database of those likely to share the same health condition. This new class serves as a proxy for the actual health information -- and provides a means to bypass informed consent as well. The application of predictive analytics in this way cannot be assumed to be either inoffensive or benign. Technology has simply facilitated an "end run" around existing statutes designed to protect individuals.
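The proxy-building approach described above can be sketched in a few lines. Everything in this example is hypothetical -- the marker products, customer baskets, and threshold are invented -- but it shows how a surrogate class can be assembled from purchase histories alone, without ever touching a medical record:

```python
# Hypothetical sketch of a health "proxy class" built from purchase data.
# Marker products and customers are invented for illustration only.

# Products observed (in some seed group) to correlate with a health condition.
MARKER_PRODUCTS = {"glucose test strips", "sugar-free candy", "compression socks"}

customers = {
    "cust_001": {"glucose test strips", "sugar-free candy", "milk"},
    "cust_002": {"bread", "milk", "eggs"},
    "cust_003": {"compression socks", "sugar-free candy"},
}

def proxy_class(purchases, markers, min_hits=2):
    """Flag customers whose baskets overlap the marker set -- a surrogate
    for the condition itself, assembled with no consent and no 'protected
    health information' in the HIPAA sense ever changing hands."""
    return {
        cust_id for cust_id, basket in purchases.items()
        if len(basket & markers) >= min_hits
    }

flagged = proxy_class(customers, MARKER_PRODUCTS)
```

The resulting list never states a diagnosis, yet it can be bought, sold, and acted upon exactly as if it did -- which is precisely the "end run" around existing protections.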
Thus as growing technological prowess enables sophisticated discrimination capabilities, those economic benefits can collide head on with civil rights protections. In one instance predictive analytics may enable more targeted marketing; while in another, it may threaten one with exclusion from opportunity. If such a construct simply serves as a proxy for a protected class, one may argue that existing civil rights statutes remain applicable. But in the absence of that, one could see their intent undermined.
While a proxy can be a nearly perfect substitute for medical status, in other cases analytics may produce a new hybrid group that is predominantly comprised of one or more protected classes, obscuring the nature of the discrimination. So while legislation such as HIPAA, GINA and the Affordable Care Act has advanced the protection of medical data and reduced discrimination, sophisticated applications of analytics can counter that progress.
Some who examine this landscape conclude that any concept of privacy is simply obsolete. They throw up their hands and say, "the horses have left the barn." That may be an acceptable answer in Silicon Valley or in academia, but it is not the sort of answer that elected representatives can offer their constituents, and it is certainly contrary to fundamental privacy rights protected by the Constitution. So how does one conclude that the use of such a construct is inappropriate and poses a threat to civil rights? In addition to the question of whether an application of technology undermines existing civil rights statutes, one may consider its potential to impose harm in terms of three tests.
The first of these is the immutability of a trait. Unchangeable characteristics over which we have no control -- such as race or gender -- have been the subject of antidiscrimination statutes, yet civil rights have also been extended regarding other permanent states over which we may have had some earlier control, including veteran status and disability. As one examines both health data mining and surrogate measures, a multitude of immutable factors are evident -- including genetics. Profiling based on an unchangeable characteristic should raise questions, since the individual has no ability to alter it.
A second test is that of relevance. Few Americans would support discrimination based on health information without some overriding justification. We accept requiring airline pilots to meet an exceptional health standard to which they voluntarily submit, because it is directly related to the task at hand -- not simply a statistical correlate. Consequently we would not permit such irrelevant traits as race or gender to be used to discriminate in the hiring of flight crews. Thus if potential employers, creditors, associates, or acquaintances begin to screen using health surrogacy measures, strong objections should be expected unless a high bar of justification can be offered. Those contemplating such applications should consider carefully the fact that Americans overwhelmingly expect their health data to be shared only as necessary between those providing their medical care and the health plan that provides benefits -- not to create new forms of profiling and discrimination.
The third test is the presumption of a zone of privacy. The extent of this zone may differ depending on the nature of the data, but neither personal medical information nor its correlates should be considered in the public domain. The potential of data mining and predictive analytics to become intrusive and offensive was underscored earlier this year by New York Times staff writer Charles Duhigg in his depiction of how Target Brands sought to identify pregnant women in order to promote new purchasing habits. While Target has a baby registry for customers wishing to disclose their pregnancy status, Duhigg reported that the firm also chose to build data models using the purchases made by pregnant women to project which other female customers were pregnant, and thus focus marketing efforts on them.
The lack of appreciation of the possible ramifications of this effort is staggering, as it appears there was a failure to fully comprehend the myriad of reasons why a woman might not wish to disclose her health status -- a matter of self-determination for her that carries a multitude of legal, financial and personal consequences. It may be that she has had previous miscarriages. It may be that a pregnant woman is seeking employment and is concerned about possible discrimination. Or perhaps she is in an abusive relationship with a partner who does not yet know of the pregnancy. The projection that she is pregnant might even be incorrect -- and that imposes a whole other set of possible consequences.
Duhigg found Target unwilling to even comment on its practice. That comes as no surprise, given that the firm was studying its customers' reproductive status and creating a proxy for pregnancy. That certainly appears to be a way to "work around" obtaining direct disclosure or consent, and to avoid the potential HIPAA violation that acquiring the data from medical sources would entail. The fact is, most of us simply aren't interested in the methodology. It's the objective that is unacceptable -- that a firm seeks to obtain intimate health information that an individual has chosen to withhold. If that goal must be concealed to successfully implement the program, it's a good indication to management that such a project should never have gone forward.
As Duhigg demonstrated, crossing into a zone where privacy is expected is a recipe for trouble. Target should have anticipated that projecting the reproductive status of a woman would be invasive. Yet privacy intrusions span the full range from highly offensive to inane. President Clinton was once asked the question, "Is it boxers or briefs?" regarding his choices in underwear. Today, one might simply purchase such information.
One might expect that Target and other firms will argue that by choosing to engage in commerce with them, one waives any expectation of privacy regarding either purchases or profiling. Yet such an argument implies that one must shop with cash while wearing a mask to preserve some semblance of privacy in essential transactions. Certainly not a satisfactory answer.
Another premise presented is that by yielding privacy, consumers gain some advantage. Yet as I recall the creation of many customer membership or loyalty card programs in the 1990s, the consumer benefits appear questionable. In shopping for groceries at Safeway, for example, one day a customer could purchase special promotional items; the next, one needed a "Safeway Club" card to make purchases at the sale price. As a mischievous graduate student I'd shop with friends and we would purchase beer using one card, fresh fruits and vegetables on a second, and staples on another -- then contemplate what analysts would make of that information. Yet such consumer evasion is hardly a practical strategy for maintaining privacy. Soon we may see facial recognition software used to identify customers -- obviating the need for even carrying such a card. I wish I had confidence that wasn't the next logical step. But I think we know better than that. This is certainly not the vision of an "empowered consumer."
Rapidly expanding and interconnected data thus raise the risk of adverse impacts arising from discrimination, posing the challenge of how to realize the benefits of technology while minimizing harm -- and applications involving personal health data present some of the greatest hazards. While considerable risk reduction can be achieved by simply securing health data in accordance with the ancient Hippocratic Oath, the creation of health data surrogates threatens to undermine that.
At the same time, it is important to note that even in the absence of the use of surrogate measures, the Recovery Act left significant gaps in the protection of health data. The Act did not address many internet health applications, nor is there even sufficient consumer warning regarding the risks associated with divulging health data in social networking. Individuals must recognize that what one shares regarding health online is often not protected.
Despite the hazards posed by both actual and surrogate health data, an examination of what has been said about not only health IT, but information sharing more generally, yields an overwhelmingly positive view. An on-line inquiry will generate results dominated by tech luminaries who lead firms that have created such products as search engines, social networks, and marketing and communications applications -- but relatively few from legal or ethical authorities. It is significant that those search results ranked highest reflect glowing depictions of a world with an unimpeded flow of data -- yet these typically emanate from those who profit by the sale of personal data, including many uses that may not be in an individual's best interest. Ironically, many such luminaries are themselves not so trusting in terms of sharing their own information. This brings to mind the answer of Mark Zuckerberg of Facebook, to the question of why users would entrust their private data to him. His reply characterized his view of those who would do so: "Dumb f***s."
That underscores the problem. As many seek to mine the health data of Americans, either directly or via surrogate measures -- it has become the Wild West out there. And millions are just beginning to recognize that they are not the customers in such endeavors, but the product. Should we fail to better address the use of medical information and its surrogates, millions may find themselves not only product, but victim as well.
It may be time for the second civil rights bill of the 21st century.