In the HiPSTAS (High Performance Sound Technologies for Analysis and Scholarship) grant proposal, the authors express the hope that participants in their program “will understand better how to ‘imagine what they don’t know.’” The readings for this week make clear that the practice of oral history could be, and probably should be, so much more than it has heretofore been envisioned and practiced. In my conception of the subject, at least, a historian interviews a bunch of people about a particular topic, has the tapes transcribed, produces a book or a documentary using some of the material in the recordings, and then files the tapes away in a box (possibly in an archive, and maybe even with some cataloging) that is likely never to see the light of day again.
In terms of “doing” oral history, the two most conventional readings in this regard are Doug Boyd’s “Designing an Oral History Project” and Kara Van Malssen’s “Digital Video Preservation and Oral History.” Boyd points out that when designing an oral history project, there is a lot for the historian to think about beyond just the questions that will be asked of a subject, and both authors urge the practitioner to think holistically about the project ahead of time, considering not only pre-production and the point of capture but also the entire lifecycle of the project, including editing, archival storage, and future access. “Early choices you make in a project will affect later opportunities,” notes Boyd. “Decisions have consequences.”
While Van Malssen’s discussion of video formats looks forward and considers issues of preservation, Jonathan Sterne instead looks backward at the history of the now-ubiquitous MP3 audio format to examine how decisions going back at least 100 years have had implications for this particular format, whose specifications sometimes reference those of earlier formats for no better reason than “this is how it’s done now, so let’s stick with it.” Sterne argues that “encoded in every MP3 are whole worlds of possible and impossible sound and whole histories of sonic practices.” (2)
Particularly important in Sterne’s work is the notion of “format theory,” which I think boils down to this: the choice of format is not benign, because “Format denotes a whole range of decisions that affect the look, feel, experience, and workings of a medium. It also names a set of rules according to which a technology can operate.” (7) The assumptions and specifications embedded in each format affect the user’s/listener’s experience of and relationship with the media, and thus, in Sterne’s view, it is important to understand how the format mediates the material.
In “Oral History and the Digital Revolution,” Michael Frisch offers an example that I think illustrates the idea of format theory and provides a basis for redefining what we even think oral history is. Frisch shows that the audio- and videocassette formats of oral history recordings have had a profound effect on how these resources are accessed and how their content is understood. An assumption of oral history practice is that linear analog tapes are a pain to work with, and that therefore transcoding, if you will, the content of the recordings from audio or video to text by means of transcription is the best and fastest way for a researcher to access and engage with a recording, to the point that transcription is viewed as an essential procedure. Frisch argues, however, that a great deal of meaning is lost in the translation of sound into text: “Meaning inheres in context and setting, in gesture, in tone, in body language, in expression, in pauses, in performed skills and movements. To the extent we are restricted to text and transcription, we will never locate such moments and meaning, much less have the chance to study, reflect on, learn from, and share them.” (2)
Digital formats, however, offer new possibilities for oral historians. Using timecodes, annotation, and other metadata linked to content, a researcher can quickly dive into digitized materials at any point of particular interest in a recording. Thus the recording itself, rather than the inherently different experience of a transcript, becomes the object of study and, in Frisch’s words, “put[s] the oral back in oral history.” By studying the recording directly, the researcher can engage in what Nancy Davenport, cited in the HiPSTAS proposal, refers to as “deep listening,” or “listening for content, in note, performance, mood, texture, and technology.” This additional information, beyond the content of the recording in its strict, text-based sense, may allow the researcher to gain new insight into the meaning of what has been recorded.
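To make this concrete, here is a minimal sketch in Python of what a timecode-linked annotation index might look like; the data structure, field names, and example entries are my own invention, not anything drawn from the readings:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start_seconds: float   # timecode where the passage of interest begins
    end_seconds: float
    note: str              # the indexer's description of the passage
    tags: list[str]        # subject and keyword terms

# A hypothetical annotation index for one digitized interview.
annotations = [
    Annotation(754.0, 812.5, "Narrator describes the 1937 flood", ["floods", "Ohio River"]),
    Annotation(1420.0, 1475.0, "Work song learned from her father", ["music", "oral tradition"]),
]

def find(annotations: list[Annotation], keyword: str) -> list[Annotation]:
    """Return every annotated passage whose note or tags mention a keyword."""
    keyword = keyword.lower()
    return [a for a in annotations
            if keyword in a.note.lower() or any(keyword in t.lower() for t in a.tags)]

# Jump straight to the relevant timecode instead of replaying the whole tape.
for hit in find(annotations, "song"):
    print(f"Seek to {hit.start_seconds:.0f}s: {hit.note}")
```

The point of the sketch is simply that once annotations are tied to timecodes rather than to pages of a transcript, search results resolve to playback positions in the recording itself.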
Ethnographer Wendy Hsu seeks to move away from the digital-text-as-object-of-study paradigm, however, and to “shift the focus of the digital from a subject to a method of research” by combining quantitative, data-oriented computational analysis techniques with traditional qualitative ethnographic methods, including direct observation and interviews, to identify, document, and consider the meaning of patterns and processes related to her subject matter: musicians of the Asian diaspora. The patterns uncovered by quantitative means inform questions that can then be explored qualitatively. Some of the methods she has employed include mapping the geographic locations of bands’ fans by scraping location information from the bands’ MySpace friends’ pages, analyzing non-song sounds in song recordings to learn about the context in which a recording was created, and using spectrograms to visually analyze stylistic qualities of music.
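As an illustration of the kind of quantitative step Hsu describes, here is a rough sketch of producing a spectrogram from a recording for visual analysis; the filename is a placeholder, and this is my own example rather than Hsu’s actual workflow:

```python
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a mono WAV file (a stereo file would need one channel selected first).
sample_rate, samples = wavfile.read("interview.wav")

# Compute the spectrogram: how the frequency content of the sound changes over time.
freqs, times, power = spectrogram(samples, fs=sample_rate)

plt.pcolormesh(times, freqs, power, shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of a recording")
plt.show()
```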
So how might historians apply similar “doing digital” techniques to their own work with audio and video artifacts? That is very much an open question, and one that I’m not sure the readings answered very well. However, one of the stated aims of the HiPSTAS project is to bring together archivists/librarians, scholars, and computer scientists to create new tools that facilitate the study of sound recordings by means such as clustering, classification, and visualization. Archives already hold a great many oral history recordings that go unlistened to or unwatched, a valuable resource that Frisch notes goes “largely untapped.” The HiPSTAS team also makes a good point: if researchers don’t start using existing audio collections, repositories won’t have much incentive to keep storing the old recordings, let alone augment their collections with new materials. It really is imperative, then, for history scholars to find means to unlock the potential of these audio resources.
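To give a sense of what clustering recordings might involve, here is a speculative sketch that groups files by acoustic similarity using MFCC features and k-means. This is emphatically not HiPSTAS’s actual tooling; the file paths, the feature choice, and the cluster count are all assumptions of mine, and it presumes the librosa and scikit-learn libraries are installed:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

# Hypothetical list of digitized recordings to group by acoustic similarity.
paths = ["tape_001.wav", "tape_002.wav", "tape_003.wav", "tape_004.wav"]

features = []
for path in paths:
    y, sr = librosa.load(path, sr=None)      # load audio at its native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr)  # a compact timbral summary of the sound
    features.append(mfcc.mean(axis=1))       # collapse to one fixed-length vector per tape

# Group the recordings into two clusters of acoustically similar material.
labels = KMeans(n_clusters=2, random_state=0).fit_predict(np.array(features))
for path, label in zip(paths, labels):
    print(f"{path} -> cluster {label}")
```

Even something this crude suggests how a scholar might triage an unlistened-to collection, say by separating sung material from spoken interviews before listening to anything.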
There was a lot going on in these readings, and I feel like I have barely scratched the surface here of the many issues the various authors raised. Returning to where I began, though, the readings really did challenge my perception of what exactly oral history is. It isn’t just about interviews, or even necessarily the spoken word. A wide variety of preserved audio, such as musical performances, ambient sound, speeches, poetry readings, and the telling of stories passed through generations by way of oral tradition, can reveal valuable information about past (or present-day) life and culture. All sorts of sound-based documents could serve as primary source material, given useful means of incorporating the information they provide, or could reveal, into one’s historical analyses. This may well be a bit of a “duh” to everyone else, but it’s something I had just never really considered before. Now I’m trying to imagine what else I don’t know.
What other issues did this week’s readings raise for you regarding the possibilities and potentials brought about by digital means and methods as applied to oral or audio history?
So, don’t get me wrong, I love YouTube, but the way all the readings this week talked about the backlog of oral histories brought to mind, in equal parts, the sprawling leviathan of YouTube and the way Yahoo!, back in those dark days, hired a bunch of librarians to curate the internet.
Well, that didn’t work, and it’s statistics like “300 hours of video are uploaded to YouTube every minute” that make me wonder things like, “could MPLP actually be the answer?” “More Product, Less Process,” or “the Minnesota Method” as it was called when I first heard of it, suggests that we use more upper-level description and less item-level description to get materials out into the public’s hands. Detractors of MPLP, in my experience, consider this attitude someplace between laissez-faire and grossly negligent. But it might be the answer to getting things up and out, which, in turn, seems to me a perfect opportunity to crowd-source some cataloguing. Find a platform, upload the videos, and have interested citizens tag the videos and timestamp interesting bits. It would even be fascinating to have everybody who watches a video from the beginning mark what they found interesting, effectively upvoting sound bites.
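For what it’s worth, the tagging-and-upvoting mechanism could be as simple as counting independent contributions of the same tag at the same moment in a video; a toy sketch, with all identifiers and data made up:

```python
from collections import Counter

# Hypothetical crowd-sourced tags: (video_id, timestamp_seconds, tag, contributor)
tags = [
    ("oh-042", 318, "union organizing", "user17"),
    ("oh-042", 318, "union organizing", "user23"),
    ("oh-042", 910, "childhood home",   "user17"),
    ("oh-042", 318, "union organizing", "user80"),
]

# Each independent contribution of the same tag at the same moment counts as an upvote.
votes = Counter((video_id, ts, tag) for video_id, ts, tag, _ in tags)
for (video_id, ts, tag), count in votes.most_common():
    print(f"{video_id} @ {ts}s: '{tag}' ({count} votes)")
```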
But what does this do to context? How many people have argued that something they said was taken out of context and the meaning (perhaps intentionally) misconstrued? The contextualization that needs to be integrated, especially for video, is highly subjective. A tone of voice, the way someone uses their hands: these can be difficult to describe accurately, and perceptions change from individual to individual. And if we train every member of the participating community, what risk do we run of de-professionalizing ourselves? If anyone can do it, what makes us so special? So how do we do our due diligence to video archives while still preserving our jobs?
Commenting on the MPLP issue: when I first heard of MPLP, I was quite skeptical, but having finally read the article last semester, and having had to deal with huge processing backlogs myself, I find the argument for skipping item-level description very compelling, to a point.
But I don’t think the access issue Frisch is talking about is so much about whole collections of oral histories being hidden from researchers (which undoubtedly happens) as about getting to the really useful parts (whatever that may mean) of individual interviews without having to wade through the entire thing, or getting to the best interviews in a larger collection. Frisch isn’t really talking about more or less processing but about different processing. His discussion of indexing to surface the really interesting stuff in the audio, without having to listen to the whole darned thing, describes something that would be absolutely wonderful, no question. But just like transcription, it takes a lot of time and effort.
We do have another descriptive form that helps facilitate access to these interviews: the abstract. Instead of verbatim transcription, we summarize the lines of inquiry and discussion and highlight points of interest using time code references. We also add subject and keyword terms. It still takes some time, and as a textual record it has limitations similar to the full transcript’s. But I think it’s a useful document: a researcher can use it to decide whether an interview will meet their needs without having to read the whole transcript or listen to the entire interview, and it can be used to identify relevant portions of the material. In my mind at least, I can envision the abstract being translated into a digital form such as Frisch describes.
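As a thought experiment, such an abstract might translate into a machine-readable record along these lines; the structure and field names here are purely my own invention, not an existing standard:

```python
import json

# A hypothetical machine-readable interview abstract with timecoded highlights.
abstract = {
    "interview_id": "2014-03-llc-07",
    "summary": "Narrator discusses farm life in the 1950s and the family's move to the city.",
    "subjects": ["agriculture", "rural-urban migration", "family life"],
    "highlights": [
        {"timecode": "00:12:34", "note": "Description of harvest-time work routines"},
        {"timecode": "00:41:02", "note": "Decision to leave the farm"},
    ],
}

# Serialized this way, the same abstract could drive both search and playback software.
print(json.dumps(abstract, indent=2))
```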
But I guess I wonder if something like Pop Up Archive is sort of the equivalent of MPLP for oral history.
I like the pragmatism behind MPLP, but I also wonder how in the world to get people to actually listen to hours of oral history in order to crowdsource the cataloging. Perhaps gamification would work, but that might not be an option for smaller institutions with fewer resources than the big players who can fund massive crowdsourcing projects. It’s definitely important to think about how and why oral history matters, aside from the obvious factual or perspectival content one can derive from a personal interview; it’s also important to think about what makes archivists and historians valuable to the interpretive process for oral histories. Could crowdsourcing in practice create more work for professionals instead of less, if we needed to constantly contextualize oral histories for contributors and Internet-stranger volunteers? How could we build a process that would actually help institutions with fewer resources, who tend to need the most help in making their oral history collections available?