The Iron in Our Blood That Keeps and Kills Us

How the most common disease you've never heard of is unearthing our evolutionary roots

Red blood cell [Andrew Mason/Flickr]

An ambulance rushed Dr. Malcolm Casadaban to a Chicago emergency department with labored breathing and three days of fever, body aches, and cough. He died twelve hours later as heart, lungs, kidneys, and liver failed under the burden of overwhelming infection. Bacterial cultures of his blood eventually revealed the characteristic rods of Yersinia pestis. Somehow, the MIT- and Harvard-trained scientist died of septicemic plague -- the Black Death -- in Hyde Park, Chicago, in September of 2009.

Investigators soon learned that Casadaban studied this organism in his laboratory at the University of Chicago, but they could not explain how the bacterium bit the hand that cultured it. Remembered as "one of the most creative and influential geneticists of our time," he furthered our understanding of science and disease throughout his career. Unexpectedly, he continues to do so in death.

Dr. Malcolm Casadaban died at age 60 after being exposed to a weakened form of the bacterium that causes plague. [University of Chicago Medical Center/AP]

Autopsy revealed that Dr. Casadaban unknowingly suffered from hereditary hemochromatosis, a genetic disease leading to a toxic accumulation of iron in his organs. A modern manifestation of an ancient DNA mutation, this disorder can be traced to a single unknown ancestor who lived millennia ago. This mutation allowed her (or him) to more readily absorb iron from food, which may have unexpectedly aided survival in lean times -- possibly at the expense of iron-overload in later generations. We know little about the disease's founder, but we do know that she survived long enough to pass one copy of the gene to her children, and eventually, to nearly one in ten individuals of northern European ancestry.

The mutation's surprising frequency and peculiar fondness for those of Irish, British, and Scandinavian heritage offer a unique opportunity for scientists and historians to study how the world of our ancestors may have shaped the landscape of modern disease. Researchers look to DNA analysis to solve a lingering biohistorical puzzle: Is the hemochromatosis gene common because it is an unintended consequence of natural selection, or because it is a relatively fresh glitch in the human genome with little time to spread to other regions?

Out of this tradition of sickness rises a story of medical progress -- and uncertainty -- involving an atom, the wanderlust of ancient peoples, the appetite of a caveman, and a possible legacy of the Black Death. Casadaban's tragic death at the hands of an infamous microbe challenges a theory that the hemochromatosis gene became common because it protected its owners from the plague.

In this search for the origin of one of the world's most common genetic diseases, emerging research in evolutionary medicine raises new questions about our history, development, and future as a species.


As elements go, iron is a fickle and mischievous companion. Essential to life, yet impulsive, promiscuous, and destructive when allowed to roam unescorted, it poses a tremendous engineering challenge to human tissues.

Iron readily exchanges electrons with other elements. Indispensable to oxygen transport and metabolism, this property may also cause disease if iron participates in unsanctioned electron exchanges that produce free radicals -- an evanescent and particularly hot-blooded family of compounds that damage cells and DNA. As a result, all organisms dependent on iron -- from primitive bacteria to mammals -- go to great lengths to safely transport and store this potentially poisonous payload. Under normal conditions, this meticulously coordinated system functions beautifully. However, in those who absorb more iron than average, the extra influx eventually overwhelms this transport and storage system. Eventually, rogue iron escapes its minders and chemical mischief ensues.

In spite of its proclivity for drama, we owe our lives to this metal. In fact, its value and scarcity over the course of human evolution may be reflected in the body's selfish tendency to hold on to it. The kidneys and gut are much less fond of sodium, potassium, calcium, and magnesium, as we can easily excrete these metals when they accumulate in excess. We have no such way to rid ourselves of iron. Some believe this may be a thrifty adaptation to an ancient world where meat from the butcher or blood from the hospital weren't a short car ride away.


Dr. Armand Trousseau [Wikimedia Commons]

In 1865, Dr. Armand Trousseau described a previously unrecognized illness involving the peculiar triad of skin bronzing, cirrhosis, and diabetes. A French internist, Dr. Trousseau was a legendary diagnostician and educator. His name remains familiar to modern students of medicine, though not for this first case report of hemochromatosis. He is more famously associated with Trousseau's syndrome, a disorder of blood clotting found in patients with a lurking gastrointestinal cancer. Not long after giving the first lecture on hemochromatosis, Dr. Trousseau discovered a painful, pale, inflammatory rash on his own left leg, heralding an underlying blood clot -- the very phlegmasia alba dolens of his own eponymous syndrome. Shortly thereafter, he succumbed to gastric cancer, having named, diagnosed, and suffered from its syndrome of inappropriate clotting. An ironic end for the first physician to study iron overload.

Two decades later, the German pathologist Dr. Friedrich Daniel von Recklinghausen autopsied a series of patients dying of the mysterious "bronze diabetes." Mustachioed, bespectacled, and in dapper bow tie -- the very epitome of an academic pathologist, in the 19th or 21st century -- he appreciated that certain tissues were richly laden with iron deposits, prompting him to name the disorder hemochromatosis.

Dr. Friedrich Daniel von Recklinghausen [Wikimedia Commons]

We now understand that years of unchecked iron absorption poison nearly every organ system -- not just the liver, pancreas, and skin. The physician's challenge is to distinguish the initial symptoms of hemochromatosis -- which can include fatigue, arthritis, or erectile dysfunction in persons of middle age or older -- from common, unrelated mimics, before the irreversible consequences of organ failure occur: heart failure, massive hemorrhage, overwhelming infection, diabetic crises, liver failure, or cancer.

However, these outcomes are preventable. With prompt diagnosis and treatment, hemochromatosis patients can enjoy a normal lifespan. The remedy is simple, inexpensive, and a relic of the barber-surgeon: bloodletting.

The modern practice of therapeutic phlebotomy circumvents our inability to excrete excess iron. Every 500 milliliters of whole blood, roughly the volume of a 16-ounce bottle of soda, contains up to 250 milligrams of iron. Patients generally begin with one phlebotomy session every week or two; it can take well over a year to normalize their iron levels. They must then return a few times each year for maintenance therapy, as their small intestines stubbornly continue to absorb extra iron.
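The arithmetic behind that timeline is easy to check. Here is a back-of-the-envelope sketch in Python; the 250-milligram-per-session figure comes from the text above, while the 15-gram total iron burden and the strictly weekly schedule are illustrative assumptions, not clinical guidance:

```python
# Rough estimate of how long de-ironing takes with weekly phlebotomy.
# Assumptions (illustrative only):
#   - each 500 mL phlebotomy removes up to ~250 mg of iron (per the text)
#   - a heavily overloaded patient carries ~15 g of excess iron (assumed)
IRON_PER_SESSION_MG = 250
EXCESS_IRON_MG = 15_000     # hypothetical total burden
SESSIONS_PER_YEAR = 52      # one session per week

sessions_needed = EXCESS_IRON_MG / IRON_PER_SESSION_MG
years_needed = sessions_needed / SESSIONS_PER_YEAR

print(f"{sessions_needed:.0f} sessions, about {years_needed:.1f} years")
# prints: 60 sessions, about 1.2 years
```

Sixty weekly sessions works out to well over a year of treatment, consistent with the timeline described above; a patient carrying 20 grams or more of excess iron would need correspondingly longer.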

More than 130 years after Dr. Trousseau's initial report, researchers identified the genetic culprit: a mutated HFE gene encoding the amino acid tyrosine in place of the intended cysteine at position 282 of the protein chain (a mutation biochemists abbreviate as C282Y).

We are still exploring how this mutation ultimately causes hemochromatosis. A heterozygote, or carrier of a single copy of the C282Y mutation, will absorb slightly more iron than most. However, the patient will not absorb enough iron to develop the disease, as two defective copies of the gene are necessary. Afflicted individuals must inherit one mutated HFE gene from Mom and another mutated gene from Dad. But oddly enough, not all homozygotes, or those with two defective copies, will manifest the disease. For unknown reasons, only about 28 percent of male and 1 percent of female homozygotes will ever develop symptoms or organ damage. Iron loss through menstrual bleeding and childbirth may explain some of this gender discrepancy, but unfortunately at present, we have no way of predicting who will develop complications.

With 1 in 200 to 250 persons bearing the requisite double mutation, the C282Y form of hereditary hemochromatosis is among the most prevalent genetic diseases in the United States. Given its potentially fatal course, puzzling questions arise: why is a potentially dangerous mutation so common, and why does it favor people of Northern European descent?
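Those two figures -- roughly one in ten carriers in northern Europe and one affected homozygote per few hundred people -- hang together under textbook Hardy-Weinberg arithmetic. A minimal sketch, assuming an illustrative allele frequency of 6 percent (the true frequency varies by population; the round number here is an assumption):

```python
# Hardy-Weinberg sketch: a ~6% allele frequency yields roughly
# 1-in-10 carriers and a-few-per-thousand affected homozygotes.
# The 6% figure is an assumed, illustrative allele frequency.
q = 0.06                   # assumed C282Y allele frequency
carrier = 2 * q * (1 - q)  # heterozygote (carrier) frequency: 2pq
affected = q ** 2          # homozygote frequency: q squared

print(f"carriers:    ~1 in {1 / carrier:.0f}")   # ~1 in 9
print(f"homozygotes: ~1 in {1 / affected:.0f}")  # ~1 in 278
```

Nudging the assumed allele frequency toward 7 percent brings the homozygote rate close to the 1-in-200 end of the range above, which is why small regional differences in allele frequency translate into noticeable differences in disease prevalence.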


Genetic analyses are answering historical questions and helping to distinguish whether natural selection or population isolation, or a combination thereof, propagated the European legacy of hemochromatosis. DNA testing shows that all individuals with this hemochromatosis gene descend from the same ancestor, and this may explain the geography of the disease.

If you randomly plucked two humans off Earth's surface and compared their genes, you would find they share, on average, 99.5 percent of the exact same DNA sequence, whether you compared Lou from the Bronx with Lucia from Brazil. This similarity reflects our recent common origin in Africa from the same small populations of ancestral humans; as a species, we have not been around long enough to grow different. On the time scale of evolution, we are the new kids on the block.

The vast majority of human genetic variation is neutral; it has no effect on an individual's ability to survive and have children. Our alleles, or individual genetic variants, are passed down to subsequent generations as we reproduce. It generally takes about one million years for one of these neutral alleles to "drift" through the population and become commonplace throughout the world.

However, some alleles become common over time as they help more individuals survive and reproduce: This is known as positive selection. Given our relative youth as a species, it can be challenging to distinguish whether a common allele is genuinely advantageous or neutral and still drifting. In hemochromatosis, both may be true.
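The contrast between drift and positive selection can be made concrete with a toy model. Under selection, an allele with fitness advantage s climbs in frequency each generation according to the standard recurrence p' = p(1 + s)/(1 + ps), while a neutral allele (s = 0) does not move at all in expectation and spreads only through slow random drift. A deterministic sketch in Python; the 1 percent advantage, starting frequency, and generation count are illustrative assumptions:

```python
# Toy model: positive selection vs. neutrality.
# With fitness advantage s, allele frequency p updates each generation
# as p' = p(1 + s) / (1 + p*s); the odds p/(1-p) multiply by exactly
# (1 + s) per generation. A neutral allele (s = 0) stays put in
# expectation. All parameters below are illustrative assumptions.

def frequency_after(p0: float, s: float, generations: int) -> float:
    """Deterministic allele frequency after the given number of generations."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

p0 = 0.001  # one mutant copy among ~1,000 chromosomes
print(f"selected (s = 1%): {frequency_after(p0, 0.01, 1000):.3f}")  # rises above 0.9
print(f"neutral  (s = 0):  {frequency_after(p0, 0.00, 1000):.3f}")  # stays at 0.001
```

Even a 1 percent advantage carries the allele from one-in-a-thousand to near fixation within a thousand generations -- roughly 25,000 years at human generation times -- whereas a neutral allele must wait on chance, which is why drift alone operates on the million-year scale mentioned above.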

Hemochromatosis, unlike many other hereditary disorders, is monopolized by a single defect: The C282Y mutation accounts for 80 to 90 percent of cases. Therefore, we suspect that all patients with this gene inherited it from the same distant ancestor. Known as a founder effect, this phenomenon may explain why the mutation is so common in Europe and North America and virtually unseen elsewhere: All persons carrying it belong to one large and relatively sheltered extended family that slowly dispersed through Europe, and eventually America, over the ages.

The genes surrounding the C282Y mutation bear other imprints of a founder effect. The clues lie in the genetic neighborhood, or haplotype, that encompasses the HFE gene on a strand of DNA.

Only a small handful of haplotypes carry the C282Y mutation; this suggests that the disease-causing allele arose fairly recently, with insufficient time for the mutation to spread onto new haplotypes as reshuffled maternal and paternal genes recombine in each generation's offspring. However, natural selection can leave a similar footprint.


Haplotype analyses of hereditary hemochromatosis enable researchers to stalk the founder's identity and date of existence. The results are challenging popular conceptions of ancestry and ethnicity.

Given the surnames of the afflicted, physicians and patients long suspected that the "Celtic Curse" of hereditary hemochromatosis was an Irish export. However, in 1980, Marcel Simon proposed that the mutation arose in a central European population that carried it to settlements in Ireland, Britain, France, and Iberia as its members migrated west. This, too, would seem to explain the modern-day population distribution of hereditary iron overload. Yet its similar prevalence along the north Atlantic coast and Scandinavia more recently led another group to propose a different population of origin: Vikings.


Vikings emerged as wanderers -- and engines of allelic dissemination -- in the late 8th century. In 793 AD, Norse raiders sacked the monastery on the English tidal island of Lindisfarne, igniting the Viking Age and an admixing of Scandinavian, Anglo-Saxon, and Celtic cultures. Norsemen later established settlements throughout the British Isles, including a stronghold at the mouth of the River Liffey now known as the city of Dublin. Trade, occupation, slavery, and intermarriage etched a Viking thumbprint into the native community, now reflected in modern English place-names and surnames ending in -thorp(e) or -by, or in words like "law," "husband," and "skull."

Advocates of the Viking theory argue that a Scandinavian founder better explains today's distribution of the C282Y allele along North Atlantic coastal areas, as Celts, unlike Norsemen, were not known for their seafaring ways.

In the midst of this debate, an elegant hypothesis emerged: a truly Celtic mutation exported from Ireland to Scandinavia aboard a Viking longship would appear on a greater number of haplotypes in Celtic lands than in Norse ones, as more time has passed for the mutation to spread throughout the Irish land of origin.

In a study comparing Irish and Swedish populations, researchers observed that the haplotype diversity in the two groups was actually identical. Provided the study was large enough, this suggests the culprit allele emerged prior to Celtic or Viking civilizations; perhaps the founder was a central European hunter-gatherer who followed melting ice sheets west and north to the Atlantic. His or her progeny now populate Scandinavia, France, the British Isles, and, by immigration, the United States. If confirmed, this would further testify to our 99.5 percent genetic similarity and reinforce the lesson that modern perceptions of political, familial, even racial identities are constructs of culture more than biology.

But could factors other than geography and population migration explain the distribution of the disease? Many believe that the C282Y mutation and surrounding haplotype spread by natural selection: a single copy of the mutant allele may thwart iron deficiency or disease.

Iron scarcity in ancient Europe may have inadvertently enriched our population with the hemochromatosis gene.

The dawn of the "Neolithic revolution" around 8,000 BC marks man's transition from hunter-gatherer to food producer. Life was good. Populations exploded as the farming and herding lifestyle sustained higher concentrations of people per acre; wandering tribes coalesced into towns and villages as they planted crops and had babies. Division of labor, facilitated by the development of food storage, enabled some citizens to pursue other tasks essential to the common good of the community: defense, leadership, and technological innovation. And for herders, tasty domesticated animals yielded meat, eggs, and milk, while the less tasty animals pulled plows and moved soldiers.

Sounds like a great deal, right? Barley for nothing and chickens for free. However, the same agricultural revolution that shepherded us out of the cave and into the marketplace also birthed an era of malnutrition and disease whose consequences we still address daily. Farming might even be considered "the worst mistake in the history of the human race" according to evolutionary biologist Jared Diamond, who later popularized this viewpoint in his Pulitzer Prize-winning book, Guns, Germs, and Steel: The Fates of Human Societies. Maybe life wasn't so good after all?

[Philippe Wojazer/Reuters]

The present-day obesity pandemic has directed much attention toward the dinner plates of our hunter-gatherer ancestors. Proponents of the "caveman," or Paleolithic, diet -- an unlikely assortment of researchers, fad dieters, and hipsters proudly known as "paleos" -- assert that the average hunter-gatherer consumed a healthier and more balanced diet than his farming and herding successors, in spite of an unpredictable food supply. With diets rich in meat, fish, and a variety of foraged plant foods, these men and women of the forest lacked convenient access to the starches and grains of an energy-dense, button-busting, carbohydrate-laden modern diet.

So easy a caveman could do it? Maybe our ancestors were smarter, and perhaps healthier, than we thought. Bioarcheologists -- bone-chasers with medical minds and an interest in the most senior of citizens -- have combed burial sites and unearthed fossil evidence of agriculture's early dark side. Early European farmers stood roughly 6 inches shorter than their hunter-gatherer ancestors, a possible indication of malnutrition. Average height and life expectancy fell, as bone infections, dental cavities, and skeletal malformations associated with anemia rose. While the exact composition of the Paleolithic plate remains debated, most agree that European hunter-gatherers ate more meat than the farmers who succeeded them. And this animal protein was an excellent source of one familiar micronutrient: iron.

The World Health Organization estimates that 1.6 billion persons worldwide currently suffer from the shortage of red blood cells known as anemia -- half of which may be caused by iron deficiency. One's inner paleo might wonder whether this pandemic of iron deficiency began in the Neolithic era as diets bloated with carbohydrates replaced those rich in meat and fish. Anemia decreases the oxygen-carrying capacity of the blood; if marked, this will hinder an individual's ability to stay healthy, find food, and reproduce. The C282Y mutation increases iron absorption, and it may have inadvertently protected carriers against this threat.

A study of over one thousand C282Y heterozygotes demonstrated that these carriers indeed have modest protection from iron deficiency. Focusing the analysis on women, who for reasons of menstruation and childbirth are much more likely to develop iron deficiency, the authors observed that 21 percent of C282Y carriers were iron-deficient, compared to 32 percent of women lacking the gene. Although this finding did not hold true for men, carriers of both genders had more iron attached to certain transport proteins than the normal population.

But does it sound too convenient? A mutation arrives in the nick of evolutionary time to help more Neolithic men, women, and babies stave off the iron deficiency caused by their recent habit of producing, rather than chasing, their food. Have one copy of the mutation and you'll never manifest hereditary hemochromatosis. Have two copies and you may develop the disease, but typically not until you have already had children -- thus ensuring the gene will be passed on to future generations, where more heterozygotes will survive inevitable famines and postpartum hemorrhages.

It is indeed an alluring storyline, but shaky assumptions and unanswered questions linger backstage.

First, was the marginal benefit of the C282Y carrier state -- say, a roughly 10-percentage-point reduction in the prevalence of female iron deficiency -- sufficiently meaningful? Obviously not every anemic woman dies, but just how many mothers and babies can attribute their survival to the modest protective effect of this mutation? And to boot, a recent reappraisal of archeological records questions whether iron deficiency was as common during the agricultural transition as once believed.

Second, the founder may have lived before or after the European embrace of agriculture. In spite of our best efforts, we still lack a precise understanding of when the mutation first appeared. Haplotype analyses estimate a debut between 4,000 BC and 1,400 AD, but this is debatable. The Neolithic revolution started in the Fertile Crescent around 8,000 BC; spreading west, it reached central Europe between 5,000 and 4,000 BC and the British Isles by 3,800 BC. If the founder lived as late as 1,400 AD, the Neolithic diet theory can't hold, as too few generations have passed for positive selection to readily occur given the ostensibly modest benefit of the mutant gene.

In fairness, even if the diet hypothesis crumbles in light of future evidence, one can still wonder whether iron-deficiency anemia threatened human development, productivity, and survival at earlier periods in history enough that the C282Y mutation might have been useful even before the Neolithic. However, it may well be the case that this allele is evolutionarily useless -- a new and random mutation with insufficient time to spread to Asia and Africa. Or, maybe, its value lies elsewhere.


Perhaps C282Y protected against disease. The hypothesis is not outrageous; malaria was, and still is, such a decimating force in Africa and Asia that certain protective traits also responsible for sickle-cell disease, thalassemia, and glucose-6-phosphate dehydrogenase deficiency remain commonplace in these endemic regions. In another example, one mutation seen in cystic fibrosis may protect heterozygotes from cholera, typhoid fever, and other diarrheal diseases. These examples of the "heterozygote advantage" illustrate how certain mutations may be advantageous alone, but detrimental when inherited in duplicate. Recall that it takes roughly one million years for a neutral mutation to spread through our population. These disease-causing mutations are less than ten thousand years old, yet they appear across the world; this can be no accident. Infectious diseases must be "among the strongest selective pressures in human history."

As we domesticated cereal grains, so too did we domesticate animals; in the process, our population grew, and we altered our environment. We developed unsavory new habits: Standing water from crop irrigation bred mosquitoes, domesticated animals spewed new germs into the barns adjoining herders' homes, and trade routes promoted the exchange of new goods -- and new microbes. Amid this tinderbox emerged the familiar epidemic killers of the modern world: measles, smallpox, tuberculosis, influenza, pertussis, malaria, and bubonic plague. Like toddlers in one large, runny-nosed, ancient daycare, our ancestors inevitably fell ill.

Yersinia pestis, a stowaway bacterium in the gut of a stowaway flea feasting off a stowaway black rat in the cargo hold of an Egyptian trading ship, arrived in the port of Constantinople in 541 AD. Gaining a foothold in Emperor Justinian's capital city, the bubonic plague spread outward through a preoccupied and vulnerable Roman Empire. Drained by years of conflict with Goths, Huns, and Persians, the empire festered with opportunities for disease spread through trade and war. Bodies stacked up by the thousands.

From Constantinople and Alexandria, the firestorm reached Rome within the year. It subsequently spread west and north to modern-day Germany, France, and the United Kingdom before smoldering to embers. The plague returned to Ireland and Britain in the 660s AD, just as the mouldboard plow and three-field crop-rotation system were boosting production of wheat, rye, oats, and barley and stimulating trade with hungry, battered, and sporadically plague-infested Mediterranean port cities.

[Wikimedia Commons]

The Plague of Justinian, spanning 541 to 767 AD, was only the first plague pandemic of the recorded era. At least 25 million people died. The second outbreak, the infamous Black Death of the 14th century, killed up to one in three persons living in mainland Europe, Asia, and Africa, leaving behind a much lonelier world. It was, according to Sylvia Thrupp, "a golden age of bacteria."

Such a staggering body count underscores the potential selective power of the plague, and fuels another theory for the mutation's presence in northern Europe. Author and researcher Sharon Moalem postulates that the mutation improved plague survival, as it may theoretically prevent Yersinia pestis from reproducing inside human immune cells. During the Black Death, mortality may have been highest -- up to 50 to 66 percent -- in the British Isles, a future hotbed of hereditary hemochromatosis. Cold northern winters spent shivering and sneezing around the fireplace might favor person-to-person transmission of the deadlier pneumonic form of the infection, which attacks the lungs, instead of the flea-borne bubonic form commonly seen in Mediterranean outbreaks. In this most unsympathetic environment, minute DNA differences may decide survival or death. A genetic advantage would quickly spread through the island population -- it would have less value on the mainland, where plague mortality may have been lower.

However, while intriguing, this argument remains unsubstantiated and somewhat inconsistent with the plague's own taste for iron. In fact, this pathogen actually hijacks iron from its victim's stores to enhance its own ability to infect host cells. Interestingly, in 1956, Jackson and Burrows showed that a mutant strain of Yersinia pestis lacking a particular iron-harvesting mechanism was far less lethal to mice. However, the bacterium regained its bite when the infected mice were injected with supplemental iron. Along these lines, it might seem that patients with higher iron stores are actually more vulnerable to plague infection.

Enter Dr. Casadaban.

The University of Chicago tragedy made national headlines as over 60 close contacts of the professor scrambled to obtain precautionary antibiotics. Investigators from the U.S. Centers for Disease Control and Prevention, the Chicago and Illinois Departments of Public Health, and the University struggled to determine how the researcher fell ill. An accidental needle stick or splash of contaminated liquid is likely to blame, although the exact circumstances remain unknown.

Oddly, Casadaban was infected by a weakened strain of Yersinia used for vaccine research. This was no medieval fiend. At autopsy, an astute pathologist -- in the tradition of von Recklinghausen -- appreciated the distress of an iron-overloaded liver. Genetic testing revealed C282Y homozygosity, confirming the diagnosis of hemochromatosis. In an unfortunate echo of the Jackson and Burrows experiment, Casadaban's own tissues provided the supplemental iron needed to fuel the devastating infection.

In his last experiment, Dr. Casadaban demonstrated how even an attenuated strain of plague can overwhelm an iron-overloaded host; one wonders how easily its full-strength cousin might also infect a C282Y homozygote or even a heterozygote carrier with marginally elevated iron stores. Although a single example, this case raises doubts that the C282Y allele conferred some level of protection against the plague.

Maybe C282Y has nothing to do with our survival after all. There could be other reasons for its modern presence. Perhaps it is a coincidental neighbor of a different, more important allele. Events of positive selection are often accompanied by selective sweeps. As an advantageous allele spreads through a population, the surrounding neighborhood of nearby genes "hitchhike" along and loyally follow the selected variant from generation to generation, like an entourage around a celebrity. Maybe C282Y is simply one of the hangers-on, a groupie following a future guitar god of the human genome: an allele with undiscovered virtuosity, currently soloing in obscurity in Mom's garage.

But while the talent search continues, we are left wondering.


We cannot yet ascribe an identity to the founder or the world he or she lived in. Exploring the roots of this illness from proton to population, it seems that nearly every answered question spawns far more unanswered ones.

This quest exposes the frustrating underside of evolutionary medicine. In spite of much progress, many theories to explain the origins of various genetic diseases remain unsupported (those associated with malaria are the exception). But that shouldn't discourage us.

Scientific progress is rarely linear or easy. Over time, the method works -- the Internet and penicillin are proof. The collaboration between biology, medicine, and history only began in earnest in the 1980s and '90s, after crucial advances in technology and genetics. The field of evolutionary medicine is still maturing. Eventually, a reliable image of the founder will emerge, and with it, a more complete understanding of our roots.

So hemochromatosis kills people. It's also completely treatable, and bloodletting is cheap. Why should we waste time and ever-precious research dollars to understand why this disease exists if we already understand the cure?

This story of hereditary hemochromatosis remains relevant because it follows our development as a modern species and society. Awash in a confluence of genetic and environmental influences, the mutation debuted at the most biologically exciting and formative time in our history. We are reminded that as we influence our environment, our environment can influence us.

There must be an explanation for why hereditary hemochromatosis is 30 times less common in blacks and 11,000 times less common in Asians than in non-Hispanic whites. A founder effect, the relatively young age of the mutation, and other unidentified demographic factors may explain this discrepancy. And perhaps, the allele may have discouraged iron deficiency or disease at a particularly vulnerable time in northern Europe.

The why is so important, as it raises fundamental questions regarding our own future. With an eye toward climate change, explosive population growth, and the threat of emerging pandemic diseases, one can't help but wonder what current social, biological, and environmental forces may shape the legacy of subsequent generations. In this sense, understanding the past may help illuminate our future.

In medicine, as in Faulkner's South, "The past is never dead. It's not even past."