Upon reading the bold statement at the beginning of the Preservation Reformatting chapter of ARSC’s Guide to Audio Preservation, “A restored version of a sound recording cannot be considered a preservation copy,” I had a moment of kneejerk skepticism. While I think that anyone can appreciate the truthiness of that claim, it feels a little weird to authoritatively ignore an important concept: digitization always results in the creation of a new digital object.
A point made by Trevor Owens in a 2012 post on The Signal comes to mind:
“The idea of digitization obfuscates the fact that digitization is not a preservation act. Digitization is a creative act.”
Owens is arguing that a digital surrogate should not simply be considered a handy duplicate because digitization tools (and the preservationists who use them) will always be making decisions about what they are capturing. Some of the decisions may seem harmless or minuscule, but they are still judgments that prioritize certain significant properties. Furthermore, the decisions regarding what materials get digitized and which ones don’t also demonstrate these types of values. Still, I can understand where ARSC is coming from. “Warts and all” preservation copies allow scholars to scrutinize the artifactual qualities of items, and they also have a way of—to use a tired phrase—bringing history to life.
Still, too much faith in these preservation copies can lead to problems. Sarah Werner illustrates this point in her discussion of the limitations of the digitizations on Early English Books Online. The digitizations were largely drawn from microfilms of early books, and so certain unexpected details can be lost or misinterpreted. The title page of an elegy mourning the death of Prince Henry, for instance, was mistakenly printed as a reversed negative because the person who processed the microfilm didn’t believe that the text was white on a black background.
Interestingly, FADGI’s Technical Guidelines for Digitizing Cultural Heritage Materials does not subscribe to ARSC’s relatively dogmatic principles regarding verisimilitude. Take, for instance, this snippet that discusses FADGI’s view on the adjustment of master image files:
“There is a common misconception that image files saved directly from a scanner or digital camera are pristine or unmolested in terms of the image processing… Because of this misconception, many people argue that you should not perform any post-scan or post-capture adjustments on image files because the image quality might be degraded. We disagree. The only time we would recommend saving unadjusted files is if they meet the exact tone and color reproduction, sharpness, and other image quality parameters that you require.”
FADGI’s reasoning is a mix of ideological concerns and practical thinking. It recognizes that advocacy for adjusting master images may cost it some blue ribbons in the future (“First Place – Most Authentic in Show”); however, it also feels that “adjusting master files to a common rendition provides significant benefits in terms of being able to batch process and treat all images in the same manner.” Furthermore, multiple copies (master/raw, production, access) might create prohibitive storage costs.
In their eyes, post-capture adjustments will result in insignificant data loss, and raw files are often more trouble than they are worth. Might this be FADGI taking proactive steps to avoid creating more situations like the one Sarah Werner described: producing facsimiles of things that don’t actually exist?
This makes me wonder: what is the cause of these contrasting perspectives? Is it something to do with the different materials being preserved? ARSC’s definition of preservation reformatting might provide a clue: “the process of transferring the essence or intellectual content of an object to another medium.” I’m not 100% certain what they mean by “essence”; perhaps they are referring to the differences between artifactual and informational qualities. I also see this as a nod to a precept of psychoacoustics: perception and meaning are not bound to one another. Still, the very idea that someone is prioritizing either essence or intellectual content seems to undermine the authenticity of any preservation copy.
Also, ARSC does recognize that audio files aren’t necessarily “unmolested” during transfer from analog to digital. In my current field study at UMD Digital Conversion and Media Reformatting, I am following ARSC guidelines to digitize reel-to-reel recordings, and this requires me to adjust the azimuth of the reel-to-reel player before transferring each recording into Adobe Audition. During this process I am essentially adjusting the playback head based on what sounds right to my ear. The resulting preservation master can’t be called 100% raw. But I suppose that “do the best you can out there” doesn’t make a very strong opening statement for your preservation guidelines!
Ultimately, I wonder how different the philosophies of ARSC and FADGI are in practice. They look pretty different on the page in terms of tone, with ARSC being a bit dogmatic and FADGI a relatively cavalier pragmatist, but are they so divergent? Is FADGI really “throwing away” any more data by virtue of post-capture adjustments than ARSC is by prizing the preservation copy? I suppose not, if you believe that digitization is a creative act in the first place.
5 Replies to “The ARSC-Files: The Truth Is Out There”
Pedro, awesome post. I think you’re right about “essence” being something more than meaning. There has to be some essence-related reasoning to justify digitization to a certain degree of fidelity, in order to capture something above and beyond the “aboutness” metadata can represent.
Great point, Amy; it reminds me of Woody Guthrie’s daughter identifying a voice as “a young Pete Seeger.” She recognizes not just the voice, but the timbre of youth. Her ear and memory place the recording in her living room in the 50s, not metadata.
It’s a very odd mix of ‘you must do it this way’ and ‘this way still has issues.’ I guess they’re trying to make sure the issues are consistent? For instance, it’s acknowledged that RGB scanners and screens can’t fully replicate all the colors of the original, and so specific parameters for monitors, etc., are recommended. But are these recommendations passed on in any way to the end user, so they know what will give them the most accurate viewing experience? And, of course, I’m still waiting for someone to come up with a monitor that incorporates amber or 7-color LED frames, the way stage lighting has done, for better color parameters. How will that affect things?
Digitization philosophy has always been ‘digitize in as high a quality as you can, and even if we can’t display it at that high quality, maybe some day we can,’ but there are still plenty of affordances at the intake end to consider, which I’m glad to see these documents at least start to address.
I think it is healthy to keep the notion that “digitization always results in the creation of a new digital object” in mind, as you and Trevor Owens do! The other day, I was talking with Jason Camlot, the Principal Investigator of the scholarly audio archive SpokenWeb (http://spokenweb.ca/about/). He said that sound needs a constant migration from one medium to another: wax cylinder, vinyl, magnetic tape, CD, hard drive. Given that we’ve been discussing the differences between storage, migration, and emulation throughout the semester, that phrase struck me: “sound needs a constant migration.” I’ve been musing on the strange entity of sound since then. It seems like the FRBR notion of “the work” works well with the audio artwork. What do you all think?
I’m inclined to agree with FADGI’s flexibility on post-scan adjustments, since there’s nothing inherently more authentic about the raw scanned file than an adjusted one: both will have variations or distortions based on the limitations of the equipment and the person performing the scan or adjusting the file. It does seem like some mechanism for documenting the adjustments made would be important, though. Especially if further research is going to be done using the digital data (as with the IRENE project, for example), having information on the origin of the document will allow researchers to have more confidence in the accuracy of research built on the digital object.