Bigger, Better Google Ngrams: Brace Yourself for the Power of Grammar

Back in December 2010, Google unveiled an online tool for analyzing the history of language and culture as reflected in the gargantuan corpus of historical texts that have been scanned and digitized as part of the Google Books project. They called the interface the Ngram Viewer, and it was launched in conjunction with a blockbuster paper in the journal Science that baptized this Big Data approach to historical analysis with the label "culturomics."

The appeal of the Ngram Viewer was immediately obvious to scholars in the digital humanities, linguistics, and lexicography, but it wasn't just specialists who got pleasure out of generating graphs showing how key words and phrases have waxed and waned over the past few centuries. Here at The Atlantic, Alexis Madrigal collected a raft of great examples submitted by readers, some of whom pitted "vampire" against "zombie," "liberty" against "freedom," and "apocalypse" against "utopia." A Tumblr feed brought together dozens more telling graphs. If nothing else, playing with Ngrams became a time suck of epic proportions.

As of today, the Ngram Viewer just got a whole lot better. For starters, the text corpus, already mind-bogglingly big, has become much bigger: The new edition extracts data from more than eight million out of the 20 million books that Google has scanned. That represents about six percent of all books ever published, according to Google's estimate. The English portion alone contains about half a trillion words, and seven other languages are represented: Spanish, French, German, Russian, Italian, Chinese, and Hebrew.

The Google team, led by engineering manager Jon Orwant, has also fixed a great deal of the faulty metadata that marred the original release. For instance, searching for modern-day brand names -- like Microsoft or, well, Google -- previously revealed weird, spurious bumps of usage around the turn of the 20th century, but those bumps have now been smoothed over thanks to more reliable dating of books.

While these improvements in quantity and quality are welcome, the most exciting change for the linguistically inclined is that all the words in the Ngram Corpus have now been tagged according to their parts of speech, and those tags can be searched in the interface. This kind of grammatical annotation greatly enhances the utility of the corpus for language researchers. Part-of-speech tagging hundreds of billions of words across eight languages is an impressive achievement in natural language processing, and it's hard to imagine such a Herculean task being undertaken anywhere other than Google. Slav Petrov and Yuri Lin of Google's NLP group worked with a universal tagset of twelve parts of speech that applies across different languages, and then used it to annotate the entire corpus. (The nitty-gritty of the annotation project is described in this paper.)
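To make the idea of coarse part-of-speech tagging concrete, here is a minimal Python sketch using NLTK's mapping onto a twelve-tag universal tagset of the same kind. This is only an illustration of the general technique; Google's own tagger and pipeline are not public, and this code makes no claim to reproduce them.

```python
# A minimal sketch of coarse part-of-speech tagging, using NLTK's mapping
# to a twelve-tag "universal" tagset. Illustrative only; this is not
# Google's tagger or pipeline.
import nltk

# One-time downloads: tokenizer, tagger model, and the universal tagset mapping.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("universal_tagset")

sentence = "It was the best of times, it was the worst of times."
tokens = nltk.word_tokenize(sentence)

# tagset="universal" collapses fine-grained Penn Treebank tags into coarse
# categories such as NOUN, VERB, ADJ, ADV, PRON, DET, and ADP.
print(nltk.pos_tag(tokens, tagset="universal"))
# [('It', 'PRON'), ('was', 'VERB'), ('the', 'DET'), ('best', 'ADJ'), ...]
```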

A final enhancement of the Ngram Viewer is a set of mathematical operators that let you add, subtract, multiply, and divide the counts of Ngrams. (An "Ngram," by the way, more typically hyphenated as n-gram, is a sequence of n consecutive words appearing in a text. For Google's Ngram Corpus, n can range from 1 to 5, so the longest string that can be analyzed is five words. The 5-grams in A Tale of Two Cities would include "It was the best of," "was the best of times," and so forth. Capping n at five keeps the datasets from spinning out of control, and it also helps guarantee that the data extracted from the scanned books doesn't run afoul of copyright considerations, a continuing legal headache for Google.)
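For a concrete sense of what those n-grams look like, here is a small Python sketch that slides a window of n consecutive words across a text. It assumes simple whitespace tokenization, which is cruder than what Google's pipeline actually does, and is meant only to illustrate the idea.

```python
# Illustrative n-gram extraction: slide a window of n consecutive words
# across a text. Assumes naive whitespace tokenization.
def ngrams(text, n):
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

opening = "It was the best of times it was the worst of times"
for gram in ngrams(opening, 5)[:3]:
    print(gram)
# It was the best of
# was the best of times
# the best of times it
```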

Orwant, in introducing the new version on the Google blog, reckoned that these advanced new features would be of primary interest to lexicographers. "But then again," Orwant writes, "that's what we thought about Ngram Viewer 1.0," which he says has been used more than 45 million times since it launched nearly two years ago. I was given early access to the new version, and after playing with it for a few days I can see how the part-of-speech tags and mathematical operators could appeal to dabblers as well as hard-core researchers (who can download the raw data to pursue even more sophisticated analyses beyond the pretty graphs).

Ben Zimmer is executive producer of the Visual Thesaurus and Vocabulary.com. He writes the language column for The Wall Street Journal.
