This week’s readings expressed a wide and deeply conflicted range of attitudes regarding the assorted uses of computers, computer modeling, and the data-ization of the humanities. The authors were all for it, but some of the counterarguments they discussed were interesting – and valid. This validity is incredibly important; having been discussing diversity and cultural inclusion in LBSC 631 this week, I found myself hyper-aware of the attitudes some of the authors were displaying toward their techno-tentative brethren. However, this is a blog and I am going to make some grand and sweeping statements – which I will then try to back up… hopefully using memes.
Grand Statement #1: Let’s not be that guy.
You know the guy I mean.
Grand Statement #2: “Computers allow you to go further.”
If there is to be a rallying cry for the Digital Humanities, this might well be it. Yesterday, I was whining to a mechanical engineer of my acquaintance about Underwood’s observations on the reluctance in the Academy to embrace digital technologies and how its members fear a total seismic shift* in their world.
I would like to assuage those fears. According to my mechanical engineer, “Computers allow you to go further. They don’t take away the work.” I was scrambling for a pen here so the next bit is a paraphrase: computers make more work and they make what you’ve got more accessible.
Take the work done with MALLET: Blevins describes how computer modeling validates itself. The Ballard diary, chosen in part, one assumes, for its completeness, shows how well the computer can model. Blevins even relates how surprised he was that it initially worked so well. But it worked. The tool did the thing it was designed to do. That’s great! And now there’s all this data to play with. If you wanted to focus only on the number of babies born when the crocuses were in bloom, it’s a simple matter of correlating your data. If you want to take up the argument discussed in Graphs, Maps, and Trees that there is no such thing as a “gothic novel,” you can dissolve that grouping from Moretti’s chart of genres and see what the effects are. Vistas abound; new peaks arise to be surmounted.
The problem, I think, is that humanists see things like “correlating data” and “data manipulation” and they freak out because these are STEM things. They are not scientists, but humanists. Theirs is a world of logic and rhetoric. Well, yes, fine, but notice how the scientists get all the grants?
We don’t need to spend years waiting on graduate students to count everything by hand. We can load that puppy, or important literary work, into a computer and run analysis, any analysis, all analyses. And then tomorrow, we can do it again, go further and deeper. Instead of relying on grad students, you can partner with other academics on the other side of the world as easily as in the next building over, à la Graham, Milligan, and Weingart. Where privacy and exclusivity are a concern, there is no need to make work public as they did, but opening a work up invites more input, catches mistakes earlier, and brings in multiple points of view; no one publishes the book to make money anyway. You publish the book to get tenure.
Grand Statement #3: This is not the Singularity.
Technology is moving at a brisk clip, but we’re not in any danger of being replaced by robots today or tomorrow or the next day. For whatever reason, and I’m going to guess it has a lot to do with not being good with computers 10+ years ago, some humanists aren’t on board with putting the digital into their work. This is a massive disappointment for the rest of us because the kind of work that they’re doing, work like breaking down the linguistic anachronisms in Downton Abbey (a point of much personal vindication for me) and examining the Ballard diary, is really interesting. And doing it with graphs means that people who don’t have PhDs can understand it too. Perhaps therein lies the fear: that if outsiders can see – and understand – what we’re doing, we’ll all revert to the seventh grade and get made fun of by the popular kids for liking to evaluate complexly and dig a little deeper. So how do we embrace our intelligence, how do we share the fruits of our enthusiasm in the best possible way? I would argue that charts and graphs – visualizations of complex data – are the way forward.
*To be fair, the idea of a seismic shift as representative of a complete overhaul of any working system was no doubt in circulation before Michael Watkins published his article, but it is from him that I got the idea, so I have linked to it in Google Scholar: Michael D. Watkins, “How Managers Become Leaders,” Harvard Business Review 90 (June 2012): 65–72.
In an analysis of the Corpus of Historical American English (COHA) and Google Books, Professor Mark Davies of Brigham Young University studies the effectiveness of both engines and their ability to properly read the English language. Both are corpora of American English, but Google Books has 500 billion words compared to COHA’s 400 million. Davies argues that although COHA has a significantly smaller database, its trending patterns still mirror those of Google Books. Because of this, Davies argues that COHA is actually the more effective corpus—a smaller database means far less data to sift through, which means quicker searches and faster results.
COHA’s “toys” are what make it a more useful database in Davies’s eyes. While Google provides the same basic function (showing the frequency of word usage throughout the decades), COHA is able to track concepts, related words, and changes in meaning. Whereas Google tends to have a one-track mind, just like its general search engine, COHA manages to “think” about relations for the words being placed into the search. Because it can look for things such as relation, form, root words, or even cultural shifts, its searches are much more comprehensive.
Design is a huge issue for some researchers, and with this in mind, Google definitely has the upper hand. True, as Davies puts it, COHA is able to effectively portray the same statistics, but I had a genuinely hard time navigating the site. Bar graphs are nice for portraying how many dark-haired, blue-eyed people are in a class, but for a corpus of American English, I found them rather ineffective. Google Books has a much more pleasing site: nicer to the eye and easier for following the patterns of the language. Yes, COHA’s tables are nice for their alternative searches, but as I said, traversing the site is actually rather difficult. Google provides a much more streamlined site, with actions that are easy to follow.
Unfortunately for Google, I think the sheer number of words in the database has made it impossible to create proper analyses for the grammar, word meanings, and word foundations that COHA is able to analyze successfully. If Google Books created an efficient way to sift through all of that information quickly enough, it would immediately become the preferred site. However, its inability to process such large amounts of data (largely its own fault) has rendered it ineffective as a “true” corpus of the American English language.
Jean-Baptiste Michel et al.’s short and sweet article Quantitative Analysis of Culture Using Millions of Digitized Books raises a number of bold points that show just how valuable Google’s bold (and originally considered foolhardy) Google Books project has been to historians. The project uses nearly 5.2 million books (over 4% of those ever written, a very significant sample) containing over 500 billion words, and makes them searchable. Let’s pause for a minute and think about what that means. 25 years ago, or even 10 years ago, if you said you wanted to search through a sizable sample of every book ever written for certain words, you would have had your head examined. The paper points out that it would take a human 80 years just to read all the books written in one year, 2000. Here’s this device that can go through the entire corpus in literally less than the blink of an eye. Roughly 2/3rds of the 500 billion words are in English, and there’s only a significant sample size for books from 1800 on (though there are a fair amount from 1600–1800), but even with these limitations, the work allowed the researchers to come to some bold conclusions.
“What conclusions?” you ask? Try this one on for size: they estimate that most dictionaries might only contain as little as 52% of the living lexicon at any given moment. They estimate the total lexicon of 1-grams (single words, excluding symbols, numbers, typos, etc.) at 544k in 1900, 597k in 1950, and 1,022,000 in 2000 (counting n-grams that account for more than 1/1,000,000,000 of all English words). Some of these are not in dictionaries due to dictionaries’ traditional dislike of compound words, but others are inexcusable (they point to “deletable” as a particularly ironic example). This lexical “dark matter,” in their charming expression, consists of words that are fresh for research. No OED entry has ever examined every facet of these words, and no amount of looking them up will find them. The n-gram has saved these potentially valuable expressions from the invisibility of their hidden nature.
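The 1-gram bookkeeping itself is simple enough to sketch. Here is a toy Python version that counts single words in a corpus and keeps only those above a relative-frequency threshold – the corpus and threshold here are invented for illustration (the real project applies its one-in-a-billion cutoff over 500 billion words):

```python
import re
from collections import Counter

def one_grams(text, min_rel_freq):
    """Count 1-grams (words only: no numbers or symbols) and keep
    those whose relative frequency clears the threshold."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c for w, c in counts.items() if c / total >= min_rel_freq}

corpus = "the cat sat on the mat and the dog sat too"
print(one_grams(corpus, min_rel_freq=2 / 11))  # → {'the': 3, 'sat': 2}
```

At real scale, the cutoff is what separates the living lexicon from one-off typos and OCR noise.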
Another bold feature the n-gram allows is to trace the rise and fall of terms over time. Much has been made of the example of the n-gram for “World War I” vs. “Great War,” where Great War holds strong until 1939, then falls off, while World War I rises to pick up the slack, but it’s hardly the only example. You can do the n-gram test yourself and see the decline of a good many words and phrases, and the introduction of others. Ever been curious to see if anyone said “Yadda-yadda-yadda” before Jerry Seinfeld? Want to map “Reality Television” vs. “Situational Comedy” and see if you can identify the year Survivor was released? Want to compare Jean-Baptiste Lamarck with Charles Darwin or Karl Marx with Sigmund Freud? The world is your oyster.
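The mechanics of such a query can be faked in a few lines. A Python sketch, with an invented two-year mini-corpus standing in for the real yearly slices (substring counting is crude, but it shows the idea of a phrase’s share of each year’s text):

```python
def phrase_freq_by_year(corpus_by_year, phrase):
    """Relative frequency of a phrase per year: occurrences of the
    phrase divided by the year's total word count (crude substring
    matching, for illustration only)."""
    freqs = {}
    for year, text in corpus_by_year.items():
        words = text.lower().split()
        hits = text.lower().count(phrase.lower())
        freqs[year] = hits / len(words) if words else 0.0
    return freqs

# Invented mini-corpus: "great war" yields to "world war" after 1939.
corpus = {
    1935: "the great war changed everything the great war lingered",
    1945: "the world war ended and the world war was remembered",
}
print(phrase_freq_by_year(corpus, "great war"))
```

Plot those per-year fractions and you have, in miniature, the familiar rise-and-fall curves of the Ngram Viewer.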
The n-gram can also detect the death of older, archaic forms of words. “Spilled” is becoming the past tense of “to spill,” but there is no use in crying over spilt milk about it; spilt had a long run. Contemporary spouters of aphorisms think that all that glitters is not gold, but their fathers sagely opined that all that “glisters” is not gold. Indeed, past tense verbs that end in “t” are fighting a slow, steady, losing battle against “ed.” Can they survive? I feel I’ve spoilt the ending of this struggle, but I’ve been burnt on these predictions before.
The final section of the article struck (or will it become “striked?”) a more somber note: repression. Examining the use of the word “Trotsky” in Russian language sources through the 1920s tells a harrowing tale, but everyone expected as much. (I wanted to run a similar test on “New Economic Policy” vs “Five Year Plan,” but, alas, I speak no Russian, and the English results are pretty meaningless). What is more interesting is the revelation of people never before suspected of repression. The Nazi regime’s list of degenerate artists was apparently far more extensive than generally known, as people never included in the traditional narrative saw their mentions in German press fall off the face of the earth in the late 1930s. Again, this was just a cursory exercise: this n-gram search opens up the possibility of a new way of looking both at the more blatant Nazi/Soviet repression, and the more subtle blacklisting preferred in the West. There are millions of possibilities that n-grams open up for these millions of books.
Lev Manovich is an accomplished thinker in the field of new media. In his short piece, “Database as a Genre of New Media,” he makes the case that databases represent a fundamental paradigm shift in the way that people think about the organization and presentation of information. Databases as a non-narrative, not necessarily linear way of organizing data did not originate with the digital age – they were found previously in, say, encyclopedias or photo archives – but they have experienced a renaissance in that time. Video games, your hard drive, and the Internet are all databases, and they all represent a way to present data free of the constraints of logic and coherence imposed by the narrative form.
As Manovich puts it, “As a cultural form, database represents the world as a list of items and it refuses to order this list. In contrast, a narrative creates a cause-and-effect trajectory of seemingly unordered items (events). Therefore, database and narrative are natural enemies.” He argues that the very term “narrative” is abused in the interactive databases of the Internet and video games, where users may respond to preprogrammed variables, whether they are hyperlinks or Koopa Troopas. A narrative is something carefully constructed by its author constituting “a series of connected events caused or experienced by actors.” It is careless to assume that a user will automatically derive this experience from a database without considered input from its author – narrative is “used as all-inclusive term, to cover up the fact that we have not yet developed a language to describe these new strange objects.”
Manovich argues that since databases are free of the “cause-and-effect trajectory” of the narrative form, they can, through ever more complex organizational forms, come to represent a more complete simulacrum of reality. The implication of his vision seems to be that databases will mimic real-life systems in incredible detail – a city, a historical figure, or even a whole historical society – and users will be able to interact with these simulacra in apparently natural, non-narrative ways.
Imagine – if, instead of writing an exhaustive three volume biography of Theodore Roosevelt, Edmund Morris had programmed the entirety of his research into an algorithm which imitated Teddy himself. Students of history wouldn’t need to read about Teddy – they could go bear hunting with a database that simulates his appearance, his behavior, his patterns of speech in virtual reality. In this way, they could experience the man as he was – Teddy 2.0 would not shoot that simulated bear cub either. Am I getting this right?
Each method – narrative and database – has its own merits to recommend it, but as the genre of the database evolves into ever more sophisticated forms, narrative as a construct is likely to fall more and more by the wayside in favor of organizational techniques better suited to its unique subject matter.
A little help – am I overstating his argument? Missing it completely?
Mark Davies’s TIME Magazine Corpus of American English is a search tool for the online archives of TIME Magazine from the 1920s through the 2000s. The tool is free and can be found here. Once you have played around on the site, it will ask you to create a free username so that BYU can keep track of how the site is being used.
On the front page of the website, Davies claims, “You can see how words, phrases and grammatical constructions have increased or decreased in frequency and see how words have changed meaning over time”. The website certainly meets the challenge of this mission statement; however, the site can be a little complicated to navigate. The examples on the first page are good for beginners to play around with. One of the examples given is –gate, and how its use changed in the 1990s (e.g., Monicagate). Click on –gate and the top box will show words that use –gate. Scroll down to Monicagate (number 5 on the right); this will pop up the year and the magazine articles, which you can click for further context.
Another useful feature is the option to compare multiple features in the search. For example, you can compare two words like ‘husband’ and ‘wife’ and then further limit the search by adding the collocate ‘divorce’; the search can be restricted even further by choosing a time range. Once you pick an actual article, the TIME Magazine Corpus directs you to the TIME Magazine website, where you can email the document to yourself, print it, or share it via blog, Twitter, Facebook, etc.
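If the word “collocate” sounds mysterious, the underlying idea is just proximity. A minimal Python sketch of a collocate search – the sentence, the window size, and the function name are all my own invention, not how Davies’s engine actually works:

```python
import re

def collocate_hits(text, node, collocate, window=4):
    """Return word positions where `node` occurs within `window`
    words of `collocate` (a crude version of a collocate search)."""
    words = re.findall(r"[a-z]+", text.lower())
    node_pos = [i for i, w in enumerate(words) if w == node]
    coll_pos = [i for i, w in enumerate(words) if w == collocate]
    return [i for i in node_pos
            if any(abs(i - j) <= window for j in coll_pos)]

text = "Her husband filed for divorce while the other husband stayed home"
# Only the first "husband" falls within 3 words of "divorce".
print(collocate_hits(text, "husband", "divorce", window=3))  # → [1]
```

Tightening or widening the window is exactly the kind of restriction the corpus interface exposes through its search boxes.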
You have to be familiar with the specific ways to search the site in order to really be able to use it. There are plenty of ways to find help on the site; take a look at the information that pops up when you click the question marks by the search boxes.
Even with this help, the site takes some getting used to and can be rather time-consuming to use. Still, it is certainly easier than trying to go through the texts yourself to see how words have changed over time.
In terms of complexity, the TIME Magazine Corpus is similar to Voyeur. It is also reminiscent of the Library of Congress’s Chronicling America website, though I find Chronicling America much easier to use. The example page is great, but perhaps a short instructional video to go along with the examples would be helpful; a written tutorial, at the least, would be great.
Though the site is limited to TIME Magazine, the amount of material is huge, ‘100 million words,’ and still growing as TIME keeps publishing. A researcher could use this site to study almost anything. I conducted random searches in gender studies, film media, parts of speech, phrases, etc., and very rarely did a search conclude with fewer than three examples to pick from. In fact, the amount of information that normally pops up can be overwhelming.
Please play around on the site and let me know if you think that it is a useful site. Do you find it a bit difficult to navigate?
Sometimes it is a frustrating experience to search for a topic on the internet, only to have the search engine turn up results that are not related to what you are looking for. This is the problem the Bing commercials sought to address with their talk of “search overload” during internet searches.
The Google Custom Search Engine provides its users with a search engine to put on their website; the main feature is that it is customizable to refine its search results based upon parameters set by the user.
This makes it easy to find information because the search engine will only look through the user-set websites and pages, and not through other places that are not topic-related.
Setting up a Google Custom Search Engine is an easy three-step process. The first step has the user set the parameters of the search engine, listing the websites it will search. The second step sets up how the engine will appear on the website, and the third step provides the code to paste into the user’s site.
There are tons of smaller options that allow the search engine to be customized even further, from choosing sites to emphasize during the search, to making money from Google’s AdSense program.
One problem I could see with the search engine is that its usefulness is only as good as the sites that the user lists for the engine to use; if the user does not know enough sites to put on the list, the search results may not be as complete.
One solution is that the search engine allows collaboration: invited users with limited access can add sites and labels to the list as needed. The engine can also be set to search all pages but emphasize the list of websites provided by the user.
The Google Custom Search Engine is basic in what it is used for, but can be further customized for advanced use in user interaction and how results are shown. Easy to set up, this search engine is one way for websites to ensure that their users are finding search results that are topic-related.
External Link to Example Search Engine
Smithsonian and DC Museums