Visualizing Data and the Future of Digital Scholarship

This week’s texts focus on digital analysis through visualization tools and text analysis. In his article, “Digital Visualization as a Scholarly Activity,” Martyn Jessop offers a working definition of “visualization” as “a group of techniques (methods) for creating images, diagrams, or animations to communicate both abstract and concrete ideas throughout human history” (Jessop, 282). These tools, used by scholars of the humanities, arts, and sciences alike, have offered alternative avenues for representing, illustrating, and understanding data. Jessop argues, however, that by applying these techniques digitally, scholars can go beyond face value to create new methodologies of “doing” history: communicating ideas in a way that is interactive and allows the audience to derive new understandings and meanings from what is displayed. At a time when scholars are increasingly supplementing the scholarship they are expected to produce with online exhibits and mapping projects, or taking an interdisciplinary approach and representing empirical data in 3D models, elevating digital visualizations to the level of “scholarly work” is imperative.

While Jessop’s text is now twelve years old and scholarship incorporating digital visualization has continued to grow, academic institutions still prize published texts as the primary path to tenure and professional achievement. Whether academic departments will change their tenure requirements remains to be seen, and many scholars worry that the proposed solution is simply to tack additional digital projects onto existing publishing expectations (see, for example, a letter to the editor of the American Historical Association’s Perspectives on History). However, there are hopeful indications that many organizations are moving in this direction (see the AHA’s “Guidelines for the Professional Evaluation of Digital Scholarship by Historians”).

Joanna Guldi’s “The History of Walking and the Digital Turn: Stride and Lounge in London, 1808–1851” and Ben Schmidt’s “Making Downton more Traditional” pair nicely with Jessop’s discussion of digital visualization and the utility of incorporating digital tools and visualizations into methodological frameworks. Both Guldi and Schmidt use Google’s NGram Viewer, a tool that allows users to visualize the history of a term’s usage in books, toward different ends and with varying results.

Guldi’s research on “walking” (a.k.a. “lounging, strolling, lurching, dodging, waddling, trudging,” etc.) in 19th-century London reveals complex and interconnected layers of rhetoric, class, social change, etiquette, the body, clothing, and the built environment. Her efforts to identify discussions of walking (or its various synonyms) through word-searching tools in databases elicited as many challenges as successes (faulty algorithms and false positives, to name two), but, by employing some of the skills in the historian’s toolbox, Guldi was able to derive interesting and significant findings from word-search data. As she states in her introduction, “the historian in the digital age must make do with findings that can be gathered and cross-checked by way of multiple databases” (Guldi, 116). However, the intensive search process Guldi undertakes to determine the validity of a database’s results by cross-referencing it against others seems incredibly time- and labor-intensive (though she states that this process can, in fact, “ease the scholar’s workload”) (Guldi, 118). Indeed, for a scholar tracing changes in verbiage and rhetoric over an extended period of time (especially if the subject has not been well researched before), the ability to use digital visualization tools to represent trends in usage seems useful, if not foolproof.

For Schmidt’s quest to poke holes in the scripts of Downton Abbey, Google NGram seemed to serve its purpose. The text analysis tool revealed a host of anachronistic phrases throughout the first few seasons of the popular show: phrases containing words that would be far more common in the mid-to-late 20th century than at the turn of it (some of my favorites were the use of “gonna” in multiple phrases and “exercise classes”). Schmidt also added another layer to his analysis by observing how often certain phrases were used and comparing them against phrases that would have been more common during the period in which the show is set. Schmidt’s conclusions are as interesting as his use of the NGram: he finds that anachronisms show up more in the behaviors and ideals of the show’s characters than in their manner of speaking (though quite often in that, as well). But isn’t this exactly what we watch these shows for? To see our modern sensibilities projected upon characters living in a not-too-distant past? No? Just me? In any case, while word choice can make a show, it can also break a solid historical argument, and for that reason, Schmidt’s analysis serves both as a word of caution and as evidence that there are tools available to, on some level, ease the load of identifying the accurate phrases of a given era.
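The intuition behind Schmidt’s method can be sketched in a few lines of code. To be clear, this is not Schmidt’s actual code, and the counts below are invented placeholders standing in for the kind of per-year data the NGram corpus supplies; the sketch just shows the core comparison, in which a phrase is suspect when its relative frequency in the show’s era (roughly 1912–1925) is dwarfed by its later-20th-century usage.

```python
# Toy sketch of an NGram-style anachronism test. A phrase is flagged
# as suspect when its frequency in the show's era is tiny compared to
# its mid/late-20th-century frequency. All numbers are illustrative.

def relative_frequency(phrase_counts, total_counts, years):
    """Occurrences of a phrase per million words across a span of years."""
    hits = sum(phrase_counts.get(y, 0) for y in years)
    total = sum(total_counts.get(y, 1) for y in years)
    return 1_000_000 * hits / total

def looks_anachronistic(phrase_counts, total_counts,
                        era=range(1912, 1926), later=range(1950, 1991),
                        ratio_threshold=10.0):
    """Flag a phrase whose later-period frequency dwarfs its era frequency."""
    era_freq = relative_frequency(phrase_counts, total_counts, era)
    later_freq = relative_frequency(phrase_counts, total_counts, later)
    if era_freq == 0:
        return later_freq > 0
    return later_freq / era_freq >= ratio_threshold

# Invented counts for illustration only: "gonna" barely appears in the
# 1910s-20s slice of the corpus but is common by the 1960s-80s.
total = {y: 50_000_000 for y in range(1900, 2001)}  # words per year
gonna = {1915: 2, 1920: 3, 1960: 4_000, 1970: 6_000, 1980: 9_000}

print(looks_anachronistic(gonna, total))  # → True
```

The real work, as both Guldi and Schmidt show, is everything around this comparison: choosing a corpus that actually reflects the speech in question and deciding what threshold counts as “anachronistic.”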

In addition to being time-consuming, these tools seem prone to issues across languages, limited by which books are included in the corpus, and largely reliant on the behind-the-scenes heavy lifting of metadata librarians. So are there better alternatives? Voyant, as we will see, provides another option for text analysis that, unlike Google NGram, was created specifically for the digital humanities.

Taken together, these texts raise the same question Jessop posed in “Digital Visualization”: that of “visual literacy.” Unlike written language, which Western educational practices emphasize, visual literacy is rarely taught in classrooms. Thus, even though digital visualization has the potential to serve as a robust tool for synthesis, modeling, AND analysis, does the broad public have the skills necessary for observing, analyzing, and drawing conclusions from visualizations? We are taught in history courses to cross-reference our research and be wary of anachronisms and ahistorical perspectives, and, as public historians in particular, we are encouraged to think creatively about how to represent our research findings in a way that is accessible to a broad public. But even given these skills, have we been prepared both to read a map and to draw new meanings and conclusions from a digital map that combines spatial, temporal, and empirical data? The question of digital visualization seems not to be “can it be created?” but rather “how can it be used and understood once it is?”

2 Replies to “Visualizing Data and the Future of Digital Scholarship”

  1. Thanks for your post, Carmen! It’s gotten me thinking about a kind of “chicken and the egg” question regarding how we can make digital work more accepted in scholarly/academic circles and more accessible to the wider public. Does it start at the bottom, with students and the general public learning visual literacy? Or does it start at the top, with educational institutions reworking their tenure and professional achievement standards? I’m sure the answer is a healthy mix of both… But essentially, how can we make sure universities are hiring digital scholars who will educate students in digital literacy, while also making sure digital literacy spreads beyond these institutions?

  2. I have to admit, I was a little disappointed by Schmidt’s analysis of the Downton Abbey scripts. His approach of identifying anachronistic phrases and their frequency is interesting, but it doesn’t consider the context in which those phrases are used. How can class differences in speech (a major division in the show) be accounted for using this method of analysis? Do the sources Schmidt is comparing to the scripts reflect the kind of speech that would actually be used on the show (i.e., do “all English-language books” from that period reflect British speech in particular? What about the differences between the written and spoken word?)
    To really get to the heart of these nuances, a close reading of sources is required – just the type that Jockers writes off in “Macroanalysis.” Isn’t history built on this kind of close consideration of a source’s unique origin, author, etc.?
    This points to a larger question about context in digital visualizations: how much historical context must an analysis consider to count as “doing” history?
    Ultimately, I think your point about Downton Abbey being entertainment first and foremost is perhaps the most important thing that Schmidt kind of writes off. Downton isn’t really claiming to be doing accurate-to-the-last-word history. Schmidt, however, is claiming to do so, and with several flaws in his study.
