The Draconian Dictionary Is Back
Since the 1960s, the reference book has cataloged how people actually use language, not how they should. That might be changing. An Object Lesson.
In 1961, what newly published book was denounced as “subversive and intolerably offensive”? Was it the new American edition of Tropic of Cancer, Henry Miller’s sexually explicit autobiographical novel? Nope. Although that book was called filthy, rotten, repulsive, and “an affront to human decency,” the correct answer is Webster’s Third New International Dictionary.
It might be hard to understand how a dictionary could have been deemed “subversive.” Indeed, the source of the outrage—the inclusion of slang and nonstandard terms such as the word ain’t—seems unobjectionable today. In 2011, the linguist Geoffrey Nunberg wrote of the kerfuffle in The New York Times, “It’s a safe bet that no new dictionary will ever incite a similar uproar, whatever it contains. The dictionary simply doesn’t have the symbolic importance it did a half-century ago.”
That symbolic importance is summed up in the phrase Nunberg uses: “the dictionary,” a singular reference book for language. The idea of such a thing is fiction: Ever since the early days of dictionary making in England 400 years ago, there have been competing dictionaries—never a sole, eternal authority. “The dictionary” isn’t a real thing so much as a symbol for the idea of proper English.
This symbol has been particularly powerful in the United States. That may be because print dictionaries have embodied certain ideas about democracy and capitalism that seem especially American—specifically, the notion that “good” English can be packaged and sold, becoming accessible to anyone willing to work hard enough to learn it.
Massive social changes in the 1960s accompanied the appearance of Webster’s Third, and a new era arose for dictionaries: one in which describing how people use language became more important than showing them how to do so properly. But that era might finally be coming to an end, thanks to the internet, the decline of print dictionaries, and the political consequences of an anything-goes approach to language.
The first English dictionary was Robert Cawdrey’s A Table Alphabeticall, published in 1604. In it, he announces that he is not writing for scholars or experts, but for “ladies, gentlewomen, or any other unskillful persons”—that is, those who had traditionally been deprived of educational opportunities. Cawdrey’s book was, in a way, a self-help book as much as a reference book. Because girls rarely had the opportunity to attend school and were not admitted to British universities, women would have relied on Cawdrey to help them to (in his words) “more easily and better understand many hard English wordes.” It’s unclear if Cawdrey was truly interested in female empowerment, or if he simply saw an opportunity for financial gain in women’s predicament. Still, the fact that his book was marketed to them is notable, because this was the era when Britain’s literary glass ceiling was broken: Elizabeth Cary, Margaret Cavendish, and Aphra Behn weren’t just writing private letters to friends or scribbling in secret; they were professional authors selling published works.
In 1755, Samuel Johnson, a bookseller’s son from Lichfield, published what is still arguably the greatest achievement by any English-language lexicographer, A Dictionary of the English Language. The two-volume book brought him success and widespread acclaim. In compiling his opus, Johnson didn’t just improve his own status; he helped others along, too. Among the beneficiaries of his accomplishment were Johnson’s fellow provincials. Although Johnson had needed to leave the University of Oxford after only one year due to a lack of financial means, throughout England there were other strivers with even less education. Many of them turned to dictionaries to help them overcome their limited vocabularies and poor spelling.
In America, dictionaries had a similarly leveling effect. Noah Webster was an eccentric and zealous patriot who saw a uniquely American English language as a way to turn former English subjects into American citizens. In the late 18th century, Webster began producing dictionaries and spellers. In 1828, he published his masterwork, An American Dictionary of the English Language. Webster’s dictionary didn’t just define English words for people who already had good facility with reading and writing, like writers and teachers and ministers. His dictionary also helped the sort of people who didn’t come from upper-crust neighborhoods in New York or Boston exchange their native dialects for standard American English. Some enslaved people (most notably Frederick Douglass) were able to get ahold of dictionaries and spellers, which they used, defiantly, to teach themselves to read and write. When immigration exploded later in the century, newcomers used dictionaries partly to better understand the nuances of their adopted language, partly to efface their national origins.
During the uproar over Webster’s Third, this history of dictionaries as a form of self-help literature collided head-on with the societal upheaval of the 1960s. In the quarter-century that had elapsed since the previous edition, new editors at the Merriam-Webster company had set to work assembling a dictionary informed by the study of linguistics, a discipline that took a neutral stance on grammar and usage. Unfortunately, they didn’t reckon with their customers’ emotional attachment to the older, more judgy style of dictionary making.
The standard way of describing these two approaches in lexicography is to call them “descriptivist” and “prescriptivist.” Descriptivist lexicographers, steeped in linguistic theory, eschew value judgments about so-called correct English and instead describe how people are using the language. Prescriptivists, by contrast, inform readers which usage is “right” and which is “wrong.”
At the time, the press responded with knee-jerk revulsion to descriptivism. The New York Times, for example, dubbed Webster’s Third “a disaster.” The New Yorker devoted 24 pages to Dwight Macdonald’s dyspeptic evaluation of the book, which seems excessively long even by then-editor William Shawn’s standards. The Atlantic critic Wilson Follett was also not a fan. His review in the January 1962 issue called the book “a very great calamity.” (The magazine ran a kinder evaluation by Bergen Evans four months later.)
These vitriolic responses came as a shock to the Merriam staff, who were accustomed to thinking of themselves as essentially harmless, as Johnson had been. Many American readers, though, didn’t want a nonhierarchical assessment of their language. They wanted to know which usages were “correct,” because being able to rely on a dictionary to tell you how to sound educated and upper class made becoming upper class seem as if it might be possible. That’s why the public responded badly to Webster’s latest: They craved guidance and rules.
Webster’s Third so unnerved critics and customers because the American idea of social mobility is limited, provisional, and full of paradoxes. There’s no such thing as social mobility if everyone can enjoy it. To be allowed to move around within a hierarchy implies that the hierarchy must be left largely intact. But in America, people have generally accepted the idea of inherited upper-class status, while seeing upward social mobility as something that must be earned.
Allowing vulgar words into the lexicon, as many accused Merriam-Webster of doing, signaled to some that the hierarchy was being dismantled. And it was, to an extent: This was the decade that saw the introduction of the birth-control pill, the end of Jim Crow, the publication of The Feminine Mystique, and the Stonewall riots. In a 2001 Harper’s essay about the Webster’s Third controversy, David Foster Wallace called the publication of the dictionary “the Fort Sumter of the contemporary usage wars.” I’d go a step further and say that it might be thought of as an early salvo in the contemporary culture wars writ large. If the book had been published 10 years earlier, it might not have caused a stir at all. People had to be pretty nervous about general societal “permissiveness” to be so rattled by the lack of grammatical fastidiousness in a dictionary.
It’s hard to remember now, but for decades after the publication of Webster’s Third, people still had intense opinions about dictionaries. In the 1990s, an elderly copy editor once told me, with considerable vehemence, that Merriam-Webster’s dictionaries were “garbage.” She would only use Houghton Mifflin’s American Heritage Dictionary, which boasted a usage panel of experts to advise readers about the finer points of English grammar. (David Foster Wallace was invited to join several years after his Harper’s essay; Geoffrey Nunberg, it should be noted, was the panel’s chairman.) I had worked for both dictionaries briefly, and I barely saw any difference between the two. American Heritage’s usage notes really did very little to alter the overall descriptivist approach of the Houghton Mifflin dictionaries. The difference is that American Heritage tried to acknowledge and assuage the public’s sensitivity about linguistic permissiveness.
As the Merriam-Webster editor Kory Stamper revealed in her 2017 tell-all, Word by Word: The Secret Lives of Dictionaries, dictionaries still play a role in the culture wars. Members of the conservative fringe, she says, are still willing to flood editors’ inboxes with angry letters demanding, for example, that the definition of marriage be revised to exclude same-sex unions. Still, there’s no way such demands can be met: Stamper and her descriptivist colleagues are obligated to write definitions that reflect how language is actually used, not how the members of the Westboro Baptist Church wish it were used. That’s what descriptivists do: They describe rather than judge. Nowadays, this approach to dictionary making is generally not contested or even really discussed.
That’s not just because the company’s descriptivist approach, now embraced by all mainstream American dictionaries, has triumphed. Dictionaries, as Nunberg pointed out in the Times, are no longer as central to culture. That’s partially because people don’t rely on them so heavily for self-improvement, now that the internet provides an unending banquet of educational opportunities (no matter the accuracy of that feast). Furthermore, the ease with which automated spelling and grammar checkers correct human errors has diminished dictionaries’ role as writing references.
But dictionaries haven’t lost status for that reason alone. Something about the end of the print format has weakened customers’ attachment to the dictionary, or, to put it more accurately, to their dictionary. During the later years of the print era, Merriam-Webster and American Heritage were offering what were really very similar books, but very different brands. When you bought your print dictionary, you were implicitly taking sides. Unless you were investing in multiple dictionaries from a variety of publishers—and really, who was?—you were aligning yourself with your chosen dictionary’s public image. When the American Heritage Dictionary of the English Language (then published by American Heritage magazine) was released in 1969, the company marketed it to opponents of the counterculture. One advertisement portrayed the typical Merriam-Webster dictionary customer as a hippie, the ultimate incarnation of 1960s permissiveness.
In his 2004 book, Going Nucular, Geoffrey Nunberg observes that we now live in a culture in which there are no clear distinctions between highbrow, middlebrow, and lowbrow culture. It stands to reason that in a society in which speaking in a recognizably “highbrow” way confers no benefits, dictionaries will likely matter less. But oddly enough, Merriam-Webster is doing a great deal to promote the idea that sounding educated and using standard—if not highbrow—English really does matter. If American Heritage was aggressively branding itself in the 1960s, Merriam-Webster is doing the same now.
The company has a feisty blog and Twitter feed that it uses to criticize linguistic and grammatical choices. Donald Trump and his administration are regular catalysts for social-media clarifications by Merriam-Webster. The company seems bothered when Trump and his associates change the meanings of words for their own convenience, or when they debase the language more generally.
Maybe it’s not the dictionary that has become outmoded today, but descriptivism itself. I’m not implying that Merriam-Webster has abandoned, or should abandon, the philosophy that guides its lexicography, but it seems that the way the company has regained its relevance in the post-print era is by having strong opinions about how people should use English. It may be that in spite of Webster’s Third’s noble intentions, language is just too human a thing to be treated in an entirely detached, scientific way. Indeed, I’m not sure I want to live in a society in which citizens can’t call out government leaders when they start subverting language in distressing ways. When Kellyanne Conway described the Trump administration’s demonstrably false statements as “alternative facts,” Merriam-Webster’s blogger tartly reminded her that a “fact is generally understood to refer to something with actual existence, or presented as having objective reality.” True descriptivism, I suppose, would have called Conway’s definition an interesting new variant usage. I’m glad they called it what it is—an incorrect and deceitful one.