In September 1962, about a month before President John F. Kennedy signed the Vaccination Assistance Act into law, a white 11-year-old girl was admitted to a hospital in Sioux City, Iowa. Her face was swollen, the glands in her neck were inflamed, and her enlarged tonsils were coated in a “grayish membrane” that covered the upper part of her throat. The swelling obstructed her breathing, so doctors made an incision in her trachea to bring oxygen to her lungs. But the girl only worsened: The wound turned purple and red and then bled, her kidneys failed, and she seized repeatedly. She fell into a coma, and on her fifth day in the hospital she died. Cultures of bacteria from her throat confirmed the cause of death: diphtheria.

Alarmed by the severity of the case and the prospect of an outbreak, scientists from the state health department and the Centers for Disease Control descended on Sioux City. The outbreak turned out to be mild: It caused 17 cases, claimed no other lives, and was over in a couple of weeks. But investigators were stumped as to how it had begun in the first place. Ninety-one percent of children in the district were vaccinated, so it was surprising that an outbreak had taken hold. There was also no obvious source of infection. The bacterium wasn’t present in nearby schools where the affected children had siblings or friends. Nor could the outbreak be traced to raw milk, once a common source of diphtheria, because all of the city’s milk was pasteurized. Moreover, noted investigators, Sioux City had no “true skid-row section,” nor was there any “district catering to transients” nearby.

The start of the 1960s was characterized by an optimism about the conquest of infectious disease. New vaccines and new federal resources had led a growing number of experts to predict that vaccine-preventable infections would soon be wiped out for good. But in Sioux City and elsewhere, outbreaks of preventable disease persisted. Health experts who attempted to explain the trend often revived age-old assumptions about the ignorance and disease-breeding proclivities of the poor; these were the very ideas insinuated by the Sioux City investigators’ comments about “skid row” and “transients.” This tendency to hold the poor accountable for outbreaks also reflected a new pattern of disease that emerged over the course of the 60s. With record numbers of middle- and upper-class parents vaccinating their children, preventable infections began to concentrate in new populations. This was particularly true for polio and measles, both targets of federally sponsored vaccination programs that overshadowed diphtheria prevention. In the wake of national immunization efforts, polio, once a middle-class disease, became a disease of the “slums” and, in some areas, of minorities. Measles, which once struck all children, became a disease of the disadvantaged.

The 1960s campaigns against polio and measles took place in the context of a national war on poverty, widespread anxiety about the decline of American cities, and the civil-rights movement; worries about poverty, urban transformation, and race were thus subtly inscribed upon the nation’s efforts to immunize against these infections. The decade was also marked by growing scientific enthusiasm for disease eradication, which inspired a push not just to vaccinate against diseases, but to eliminate them entirely.

The decade’s most high-profile vaccination campaigns both shifted their target diseases’ epidemiology—the pattern of who got sick where and when—and provoked changes in the diseases’ popular reputations. Measles-eradication proponents, for instance, urged Americans to see measles not as a familiar part of childhood, but as a fate worse than polio, drawing upon middle-class anxieties about poverty and urban decay as they did so. As one health educator put it, measles-immunization programs needed to highlight the disease’s “dramatic aspects” in order to make Americans fear the disease, for only then would the country stand a chance at wiping out a disease still harbored in its “ghettoes” and “slums.” This approach reinscribed vaccination as a middle-class concern, even as the decade’s social-welfare programs aimed to ensure vaccination’s equitable distribution across class lines.

* * *

In 1959, Surgeon General Leroy Burney penned a letter to health departments across the country, encouraging them to redouble their efforts against polio. “We in the Public Health Service share with you a deep concern that there was more paralytic poliomyelitis in 1958 than in the previous year,” he wrote. When Salk’s polio vaccine was first introduced in 1955, demand for it was so overwhelming, and vaccination rates climbed so quickly, that cases of the disease plummeted. But demand soon slackened, noted Burney. And while rates of polio were still far lower than they had been a decade before, the sudden decline in cases showed troubling signs of reversal. There were roughly 5,500 cases in 1957 and close to 6,000 cases in 1958. If health departments didn’t act quickly to halt the trend, things would only get worse: More than half the population under 40 was either unvaccinated or incompletely vaccinated, and close to a third of children under five weren’t protected at all against the disease.

Burney’s letter also pointed out that polio wasn’t affecting the same segments of the population as it once had. Before 1955, children between the ages of five and nine were at greatest risk of the disease, but by the end of the decade, paralytic cases were concentrated in children under five, and attack rates were the highest among one-year-olds. Cases of polio were also clustering in urban areas, with attack rates concentrated in the poorest of urban districts. When polio struck Rhode Island in the summer of 1960—the state’s first epidemic in five years—CDC epidemiologists noted that the “pattern of polio” was “quite different from that generally seen in the past.” Prior to the introduction of the Salk vaccine, polio cases were scattered throughout the capital city of Providence, “without preference for any socioeconomic group.” But in 1960, the cases were almost entirely confined to the city’s lower socioeconomic census tracts, especially in areas with housing projects, through which the disease seemed to spread without resistance. By stark contrast, noted investigators, “The degree to which the upper-economic areas were spared is quite remarkable.”

These observations were of no minor consequence; the fact that polio was now concentrated in urban areas and among the poor meant that its “epidemic pattern” had changed in the wake of the vaccine, according to epidemiologists. When outbreaks hit rural areas, the cases were “scattered” and the disease struck the usual school-age children. But in cities, where most outbreaks now occurred, epidemics were concentrated in “lower socioeconomic” areas and younger children were affected more frequently than older schoolkids. “A definite trend seems to be developing whereby poliomyelitis is appearing more in lower socioeconomic groups and among preschool children,” noted a Public Health Service fact sheet that described polio’s new predilection for urban areas. There was no mistaking that the trend had been sparked by the advent of Salk’s polio vaccine. But the trend was particularly troubling against the backdrop of the country’s changing urban landscape.

In the 1950s, most of the country’s largest cities had begun losing population and wealth to new developments of detached homes on their edges and outskirts. “The city is not growing; she is disintegrating: into metropolitan complexes, conurbations, statistical areas, or whatever one chooses to call them,” said Burney’s successor, Luther Terry, at a national meeting on health in American cities in 1961.

In the decade and a half since World War II, millions of Americans with the means to do so had moved to new suburbs that had been built to address a national postwar housing shortage. Federal housing loans and investment in the interstate-highway system accelerated the migration. By the start of the 60s, central cities were markedly poorer than their suburbs and were growing poorer still. And in public-health literature and correspondence on the new patterns of polio distribution, “urban” and “poor” often came to be used synonymously. The conflation of the two categories was a testament to changing realities and presumptions about American cities and the people who called those cities home.

The nation’s cities in this period were poor; they were also, increasingly, black. From the end of World War II through the 1950s, vast numbers of African Americans moved out of the rural South and into cities in the North and West. In total, some 3.6 million whites moved to the suburbs, while 4.5 million non-whites moved to the nation’s largest cities over the course of the decade. By 1960, a growing share of Americans lived in the suburbs, but only 5 percent of African Americans did; the vast majority of them lived in these “disintegrating” areas marked by “raffishness,” violence, and poverty. Thus, when public-health officials began examining new urban outbreaks of polio, they also documented rates among “Negroes” and other non-whites that far exceeded those among whites living in suburban settings.

The history of American medicine is riddled with instances of the poor and non-white being held culpable for their own infirmities and the diseases of their communities. In 19th-century New York, cholera epidemics were blamed on the intemperance, filth, and godlessness of the city’s poor. In turn-of-the-century California, responsibility for outbreaks of plague was pinned on the habits and customs of Chinese and other Asian immigrants. Not long after, public-health workers blamed poor eastern- and southern-European immigrants for the spread of polio through American cities (its means of transmission still unknown). Through the middle of the 20th century, scientists held that “syphilis-soaked” blacks were uniquely susceptible to the disease and its effects because of racially determined inferiorities. In the late 1950s and early 1960s, however, a new, seemingly objective explanation was offered for the appearance and persistence of polio among urban non-white populations: The cities were populated by the poor, and the poor simply weren’t vaccinated, whether they were black or white.

Not surprisingly, the dramatic income gap between the vaccinated and the unvaccinated prompted many to speculate that “economics” was the root cause of the failure to vaccinate; those who could afford to vaccinate did, and that was that. But thanks to the Polio Vaccination Assistance Act and the March of Dimes, polio vaccines had been administered free of cost in many communities—so the explanation had to lie elsewhere. Some observers fell upon explanations that blamed poor and minority communities for their purported ignorance, apathy, and procrastination. Others pointed to their lack of motivation and imperviousness to the high-profile publicity of the polio-immunization campaigns of the previous decade. Often this attribution of blame was subtle. With such widespread publicity and access to free vaccines, the poor’s failure to vaccinate was perceived to have been, to some extent, deliberate. “We need a doorbell-ringing campaign in poorer sections where vaccination is ignored,” said Donald Henderson, head of surveillance at the CDC, in 1964.

A growing body of research on the sociology of poverty influenced investigations as to why the poor “ignored” pleas to immunize. The Welfare Administration’s 1966 report Low-Income Life Styles epitomized the tone of such research; the report summarized the poor’s “typical attitudes” about life, courtship, marital relations, education, money management, and health. “Literally hundreds” of recent research studies, said one review, were showing how “poverty-linked attitudes and behavior … set the poor apart from other groups.” At the Public Health Service, a team of behavioral scientists argued that educated mothers with “white-collar” spouses were more likely than mothers with less education and “blue-collar” spouses to vaccinate their children because it was more convenient for them and because peers and community groups encouraged them. Moreover, when lower-income people read the papers, they read only about “crime, disaster, and sports” and therefore missed the news on public and social affairs, science, and politics—and, it followed, free polio vaccines. The vaccination status of the poor was thus chalked up to a combination of social pressures, inconvenience, and a lack of education that made them different from the middle-class norm. In short, those in poverty—and their “life styles”—were responsible for their newfound vulnerability to polio.

In the early 60s, it was clear that polio-vaccination campaigns had thus far simply “‘skimmed off the cream’ of people who are well-motivated toward immunization programs,” as one health official put it. The families who lined up in droves to vaccinate their children early and completely were middle-class families who, from the start, felt most threatened by the disease and most invested in the search for a means of treatment or prevention. Future immunization campaigns would have to try harder to reach those who had been missed the first time around, advised health officials.

But even this advice had its limitations. An adherence to middle-class values and a sense of trepidation about the dangers of urban-poverty areas were evident in federal guidelines that instructed health departments on how to identify the unvaccinated in their communities. The guidelines advocated going from house to house to identify pockets of the unvaccinated, even though it might be difficult to find people at home in areas where “both parents are working during the day.” In such cases, it was best to leave a note at the door and call back in the evening—though it was expressly not recommended to return to “slum areas” after dark.

This article has been excerpted from Elena Conis’ Vaccine Nation: America’s Changing Relationship With Immunization.