Millions of Digitized Books, Hundreds of Fascinating Conclusions

Jean-Baptiste Michel et al.’s short and sweet article “Quantitative Analysis of Culture Using Millions of Digitized Books” raises a number of bold points that show just how valuable Google’s bold (and originally considered foolhardy) Google Books project has been to historians.  The project draws on nearly 5.2 million digitized books (over 4% of all books ever published, a very significant sample) containing over 500 billion words, and makes them searchable.  Let’s pause for a minute and think about what that means.  Twenty-five years ago, or even ten years ago, if you had said you wanted to search a sizable sample of every book ever written for certain words, you would have had your head examined.  The paper points out that it would take a human 80 years just to read the books published in a single year, 2000.  Here’s a device that can go through the entire corpus in literally less than the blink of an eye.  Roughly two-thirds of the 500 billion words are in English, and there is only a significant sample size for books from 1800 on (though there are a fair number from 1600–1800), but even with these limitations, the work allowed the researchers to come to some bold conclusions.
“What conclusions?” you ask?  Try this one on for size: they estimate that dictionaries may capture as little as 52% of the living lexicon at any given moment.  They put the total lexicon of 1-grams (single words, excluding symbols, numbers, typos, etc.) at 544,000 in 1900, 597,000 in 1950, and 1,022,000 in 2000 (counting only 1-grams used with a frequency above one per billion English words).  Some of these are missing from dictionaries because of lexicographers’ traditional dislike of compound words, but other omissions are inexcusable (they point to “deletable” as a particularly ironic example).  This lexical “dark matter,” in their charming expression, consists of words that are fresh for research.  No OED entry has ever examined every facet of these words, and no amount of looking them up will find them.  The n-gram has rescued these potentially valuable expressions from invisibility.
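The paper’s lexicon estimate amounts to a simple rule: count every 1-gram whose relative frequency clears a cutoff of one use per billion words. A minimal sketch of that rule, using an invented toy corpus in place of the real Google Books unigram tables (all names and counts here are made up for illustration):

```python
import re

# Toy stand-in for one year's slice of the unigram counts.
# Each entry is (1-gram, occurrence count); the numbers are invented.
unigram_counts = {
    "the": 41000, "deletable": 3, "of": 24000, "slacks": 12,
    "xqzt": 1, "glisters": 2, "and": 20000,
}

def lexicon_size(counts, threshold=1e-9):
    """Count 1-grams whose relative frequency exceeds the threshold,
    mirroring the paper's cutoff of one use per billion English words.
    Symbols, numbers, and typo-like tokens are filtered by a crude
    letters-only check."""
    total = sum(counts.values())
    return sum(1 for word, c in counts.items()
               if c / total > threshold and re.fullmatch(r"[a-z]+", word))

print(lexicon_size(unigram_counts))
```

With a corpus this tiny every word clears the one-per-billion bar, so the function just counts the alphabetic tokens; on the real 500-billion-word corpus the threshold is what separates the 1,022,000-word lexicon of 2000 from raw token noise.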


Another bold feature of the n-gram is the ability to trace the rise and fall of terms over time.  Much has been made of the example of the n-gram for “World War I” vs. “Great War,” where “Great War” holds strong until 1939, then falls off, while “World War I” rises to pick up the slack, but it’s hardly the only example.  You can run the n-gram test yourself and see the decline of a good many words and phrases, and the introduction of others.  Ever been curious to see if anyone said “Yadda-yadda-yadda” before Jerry Seinfeld?  Want to map “Reality Television” vs. “Situation Comedy” and see if you can identify the year Survivor was released?  Want to compare Jean-Baptiste Lamarck with Charles Darwin, or Karl Marx with Sigmund Freud?  The world is your oyster.
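Tracing a term over time boils down to dividing its count in each year by that year’s total word count, then watching where the curves cross. A sketch of the “Great War” vs. “World War I” comparison, with invented per-year numbers standing in for the real n-gram data:

```python
# Invented per-year counts echoing the "Great War" / "World War I" example;
# real values would come from the Google Books n-gram corpus.
yearly_totals = {1935: 1_000_000, 1940: 1_200_000, 1945: 1_500_000}
phrase_counts = {
    "Great War":   {1935: 420, 1940: 310, 1945: 90},
    "World War I": {1935: 15,  1940: 260, 1945: 700},
}

def frequency_series(phrase):
    """Relative frequency of a phrase in each year (count / total words)."""
    return {year: phrase_counts[phrase].get(year, 0) / yearly_totals[year]
            for year in sorted(yearly_totals)}

# First year in which the newer phrase overtakes the older one.
crossover = min(year for year in sorted(yearly_totals)
                if frequency_series("World War I")[year]
                   > frequency_series("Great War")[year])
print(crossover)
```

On these toy numbers the crossover lands in 1945; the real curves, per the article, show “Great War” holding on until 1939 before “World War I” picks up the slack.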


The n-gram can also detect the death of older, archaic forms of words.  “Spilled” is becoming the past tense of “to spill,” but there is no use crying over spilt milk about it; “spilt” had a long run.  Contemporary spouters of aphorisms think that all that glitters is not gold, but their fathers sagely opined that all that “glisters” is not gold.  Indeed, past-tense verbs that end in “t” are fighting a slow, steady, losing battle against “ed.”  Can they survive?  I feel I’ve spoilt the ending of this struggle, but I’ve been burnt on these predictions before.


The final section of the article struck (or will it become “striked”?) a more somber note: repression.  Examining the use of the word “Trotsky” in Russian-language sources through the 1920s tells a harrowing tale, but everyone expected as much.  (I wanted to run a similar test on “New Economic Policy” vs. “Five Year Plan,” but, alas, I speak no Russian, and the English results are pretty meaningless.)  What is more interesting is the revelation of people never before suspected of being repressed.  The Nazi regime’s list of degenerate artists was apparently far more extensive than generally known, as people never included in the traditional narrative saw their mentions in the German press fall off the face of the earth in the late 1930s.  Again, this was just a cursory exercise: the n-gram search opens up a new way of looking both at the more blatant Nazi and Soviet repression and at the more subtle blacklisting preferred in the West.  There are millions of possibilities that n-grams open up for these millions of books.

One Reply to “Millions of Digitized Books, Hundreds of Fascinating Conclusions”

  1. Fascinating subject for us word-junkies (assuming that's a word). Nicely written, too, although the repeated use of the mot du jour (bold) was distracting. If the article hadn't been so engaging, I might have been tempted to bolt (archaic past tense of bold — that is, if bold were a verb?).
