When it comes to preserving born digital works, certain questions need to be raised. In fact, a lot of questions need to be raised, since there is no established consensus on which formal framework to use. There’s the question of “who,” involving the roles different people play in the lifetime of a work: the artist, the curator, the preservationist, and the consumer/audience. Next there’s the “why”: what makes this work worth saving, and why did we choose certain components of the work to save? Then comes the “what”: what exactly do these groups decide to save, and what is it that we are actually saving about this work? And finally there’s the “how”: putting a preservation plan into action.
The “who”: Creators, Curators, Conservators, and Consumers
First comes the artist, who creates the work. The artist makes the initial creative decisions that make his/her work unique, whether intentionally or incidentally. Next comes the curator, who decides that the work is worth collecting and exhibiting and defends the work’s significance. After that is the preservationist or conservator, who determines what to preserve and how. Finally there is the audience/consumer and their role in supporting the work.
What makes born digital works so complex is that the roles of these various groups often bleed into each other: the artist creates an interactive work that allows the consumer to feel a sense of authorship in making unique decisions that affect the work; conservators now ask artists for statements of intent to hear what they consider significant about the work; and fans of a work can prove crucial in providing the emulation software necessary for preserving it.
Furthermore, as Dappert and Farquhar insist, different stakeholders place their own constraints on a work. For instance, Chelcie Rowell discusses how Australian artist Norie Neumark used a specific piece of software called Macromedia Director for her 1997 work Shock in the Ear. The audience who experienced it originally had to load a CD-ROM into their computer, which could have been a Mac or a Windows machine. The preservationists chose emulation as the best method to save works like this one, and those emulators were created by nostalgic enthusiasts. So each of the people involved placed constraints on the original work, in terms of hardware, software, and usage, and these constraints changed from its creation to its preservation. Dianne Dietrich concludes with this in regard to digital preservation:
“As more people get involved in this space, there’s a greater awareness of not only the technical, but social and historical implications for this kind of work. Ultimately, there’s so much potential for synergy here. It’s a really great time to be working in this space.”
For this reason, it is becoming more important than ever to document who is doing what with the work, increasing accountability and responsibility. Which leads to…
The “why”: Preservation Intent Statements
As Webb, Pearson, and Koerbin express, before we make any attempt to preserve a work we need to answer the “why.” Their decision to write Preservation Intent Statements is a means of accomplishing this. For, as Webb et al. say, “[w]ithout it, we are left floundering between assumptions that every characteristic of every digital item has to be maintained forever.”
And nobody has the time or resources to save every characteristic of every digital item. At least I don’t. Trying to do so would be impossible, and even undesirable for certain works, where the original hardware and software become too costly to maintain.
This leads to a discussion of authenticity. As Espenschied points out in regard to preserving GeoCities, increased authenticity comes with a lower level of access, while a low barrier to access brings lower authenticity and greater lossiness. In the case of GeoCities, Espenschied says,
“While restoration work must be done on the right end of the scale to provide a very authentic re-creation of the web’s past, it is just as important to work on every point of the scale in between to allow the broadest possible audience to experience the most authentic re-enactment of Geocities that is comfortable for consumption on many levels of expertise and interest.”
And that gets at the heart of why we should bother to create Preservation Intent Statements before implementing any actual preservation actions. We need to establish the “bigger picture,” the long-term vision of a particular work’s value. Rowell also points out that there are different kinds of authenticity: forensic, archival, and cultural. Forensic and archival authenticity deal with ensuring the object preserved is what it claims to be (if you’ve read Matt Kirschenbaum’s book Mechanisms, you know that this can be harder to achieve than you might think). Cultural authenticity, however, is a much more complex issue: how do we respect the original context of the work while still ensuring a wide level of access?
And once we have decided on the best strategy, we then get into…
The “what” and the “how”: Significant Characteristics
Now that we’ve established the “bigger picture,” we get into the details of exactly how to capture the work for preservation. This is where Dappert and Farquhar come back in, getting technical about the difference between “significant properties” and “significant characteristics.” Their definition of significant characteristics goes like this:
“Requirements in a specific context, represented as constraints, expressing a combination of characteristics of preservation objects or environments that must be preserved or attained in order to ensure the continued accessibility, usability, and meaning of preservation objects, and their capacity to be accepted as evidence of what they purport to record.”
Sounds confusing, right? The way I understood it is that properties can be thought of like HTML properties in coding. In coding, properties are simply a means of using a logical system language to define certain attributes of the website/game/whatever we are coding. Similarly, for a digital work, the property itself is abstract, like “fileSize” or “isVirusScanned.” We aren’t trying to preserve those properties; rather, it is the pairing of a property with its value (like “fileSize=1MB”) that we want to capture, and this pairing is what a characteristic of the work is. You wouldn’t save a property without its value, nor would you save a value without attaching it to a property. Significant characteristics go beyond the basic forensic/archival description of the object by capturing the context surrounding it. Thus, significant characteristics can evolve and change beyond the original work as the preservation environment changes and as different courses of action are taken. All of these changes should be documented along the way through these significant characteristics, prioritized and listed in order of importance.
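To make the property/characteristic pairing concrete, here is a minimal sketch in Python. This is purely illustrative: the class and field names are my own, not a schema from Dappert and Farquhar.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignificantCharacteristic:
    # A characteristic is a property *paired with* its value; we never
    # record one without the other. The priority supports listing
    # characteristics in order of importance. (Hypothetical field names.)
    prop: str      # abstract property, e.g. "fileSize"
    value: str     # concrete value, e.g. "1MB"
    priority: int  # lower number = more important to preserve

# Example characteristics for an imagined work, sorted by importance:
characteristics = sorted(
    [
        SignificantCharacteristic("fileSize", "1MB", 2),
        SignificantCharacteristic("isVirusScanned", "true", 3),
        SignificantCharacteristic("renderingEnvironment", "Macromedia Director 5", 1),
    ],
    key=lambda c: c.priority,
)
```

As the preservation environment changes, entries in such a list could be revised and re-prioritized, documenting each decision along the way.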
The last question that remains is… is anyone else’s mind boggled by all this?
Practitioners and theorists are posing many fundamental questions about the archival profession. Where is it heading? What are its core principles? Is it in jeopardy of becoming obsolete or even ending altogether? The questions of what the archives profession is and what it means to be a member of it relate to how we define the archives itself. The articles for this week focus on this definition and the activities and functions entailed in using the word “archive” or “archives.” Archivists claim jurisdiction over what constitutes an archives and are fending off perceived misuse of the word by digital humanists, philosophers, businesses, and everyday people. This defense is part of archivists affirming their authority to decide what the word means and their unique fitness to perform this work. At the same time, the changes of the digital era are challenging the applicability of archival theory. In this atmosphere, one wonders about the importance of arguing for a single definition.
A Professional Defense of Archives
Professionalization of many occupations in the United States occurred during the Industrial Revolution, a period of uncertainty similar to the changing digital economy that we are experiencing today. As Burton J. Bledstein demonstrated, starting in the late nineteenth century, groups such as architects and accountants created professional standards, organizations, and schooling to establish themselves as professions and to gain authority within a specific field. They aimed to define a “coherent system of necessary knowledge within a precise territory, [and] to control the intrinsic relationships of their subject by making it a scholarly as well as an applied science.” Sounds familiar, right? It should, because this is almost exactly the same path that archivists followed. The Society of American Archivists (SAA) formed and now sets the standards for the profession, the MLS degree (and the legion of other acronym permutations) has become a standard job requirement, and archival science is both a scholarly and an applied science.
While the current state of archives is more complex than I make it out to be, the situation largely seems positive for the profession. However, as Trevor Owens demonstrated, other groups have used, and continue to use (increasingly so with the advent of the digital world), the word “archives” under their own definitions, undermining the archivist’s professional authority over the term.
It is here that many, such as Kate Theimer, reassert the definitions established by SAA based on traditional notions of an archives. These definitions focus on the ideas of controlling materials based on provenance, original order, and collective control. She asserted that “many other kinds of professionals (and non-professionals) select or collect materials, preserve them, and make them accessible,” but that the archivist’s value stems from doing these tasks based on the tenets referenced above. She fears that historical context will be lost by basing archival practice on other ideas. Theimer emphasizes the importance of the archivist’s role both in informing the public that this information needs protection and in demonstrating that it is the archivist who should be doing it. While it is reasonable to defend these tenets in a societal and professional sense, the historical context of the theories and the emergence of digital materials call them into question.
A New Digital Order
Jefferson Bailey wonders how much the archival profession should be relying on Respect des Fonds (made up of provenance and original order) in his essay “Disrespect des Fonds: Rethinking Arrangement and Description in Born-Digital Archives.” Bailey revealed the theory’s contested past, showing that Respect des Fonds was born in a specific historical moment in France and was merely a simplification of standards for new archivists, one that was never completely implemented there. He further demonstrated that multiple theorists have challenged these principles, complicating the idea that archival core values are static and unchangeable. Additionally, Respect des Fonds becomes increasingly problematic when applied to born digital material.
Bailey asserted that analog records have clouded the possibilities of describing records and that digital materials do not function in the same way. For instance, original order is unobtainable on magnetic disks that store information in multiple places with no inherent order. He did not dispute the utility of original order and provenance but instead believes “it is time to revoke their privileged place in archival discourse and revisit the true goals of arrangement and description in light of the capabilities of digital records.” Given all these problems with archival theory, why defend it so vigorously when defending the definition of archives?
You Say Archives, I Say Archives
It makes practical sense to defend the traditional idea of archives for professional reasons. Archivists have not been at the fore of handling digital material, and part of this defense is reaffirming the archivist’s place in roles that would traditionally fall within their purview. Digital humanists and IT departments have attempted to fill this void in recent years, handling the preservation of and access to digital materials in novel ways. Though these groups have different understandings of an archives than the traditional archivist, should the archives profession fight them if, as Bailey demonstrated, the archival ideas prove problematic? It is my belief that we should be learning from each other.
As Jaime demonstrated in her post, there are multiple ways to display and examine the context of a record, just as Bailey stated that “the multiplicity of meanings possible with digital records can be better realized through an ongoing interrogation of archival traditions of arrangement and description.” Similarly, I argue for a multiplicity of meanings for the term archives, depending on the context in which it is used. The term can mean something and be useful in one field just as it serves its purpose within the archival field itself. I agree with Bailey that the archival core notions need reexamination. Archivists should embrace this complexity and learn from other occupations to grapple with the digital material that their terms of art are failing to fit. While it may feel wrong to allow other fields leeway into the archivist’s professional territory, failing to do so and to learn from their innovations puts archivists down a path where they could have no profession at all, relegated to only a mention in an archives somewhere.
How does digitizing texts impact the way we conduct research? Michael Witmore and Jonathan Hope believe that a literary criticism revolution is at hand, one in which scholars will discover new patterns and arrive at new conclusions.
Their 2007 article “Shakespeare by the Numbers: On the Linguistic Features of the Late Plays” (from Early Modern Tragicomedy) first notes that genre is a nebulous concept, one that has changed over time. Qualitative observations alone cannot accurately determine texts’ themes, since commentators have different standards and will disagree among themselves. How, then, can we create a widely acceptable means of analysis?
Witmore and Hope propose that we rely on a “quantitative analysis of linguistic features” (136). Programs such as Docuscope take literature that has been digitized and allow scholars to search for key words and verb tenses. With this raw data, they can more clearly decipher diction and stylistic patterns.
The article examines Shakespeare’s last seven plays, which various commentators since the 1870s had described as “romances” or “tragicomedies” (133). Yet the First Folio, published in 1623, did not break them into a distinct group. What elements within these plays caused later critics to see patterns that Shakespeare’s first editors evidently did not?
Witmore and Hope broke the plays into 1,000-, 2,500-, and 7,500-word chunks (to allow for a larger sample size), ran them through Docuscope, and discovered that the later plays had unique linguistic characteristics. 1) Verb tense: these plays more often used the past tense and referenced the past. 2) Asides: they also had more instances of characters speaking to the audience or referencing outside events. 3) Use of “to be”: characters more often used both forms of the verb “to be” and verb tenses ending in “-ed.”
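For readers curious what the chunk-and-count approach looks like in practice, here is a toy Python sketch. It is emphatically not Docuscope, whose hand-curated categories are far richer; the regexes below are crude stand-ins of my own for two of the features discussed.

```python
import re

def chunk_words(text, size):
    """Split a text into consecutive chunks of `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Crude stand-ins for two features: forms of "to be" and "-ed" endings.
TO_BE = re.compile(r"\b(am|is|are|was|were|be|been|being)\b", re.IGNORECASE)
ED_ENDING = re.compile(r"\b\w+ed\b", re.IGNORECASE)  # also matches "bed", "red", etc.

def feature_counts(chunk):
    """Per-word rate of each feature within one chunk of text."""
    words = chunk.split()
    return {
        "to_be_per_word": len(TO_BE.findall(chunk)) / len(words),
        "ed_per_word": len(ED_ENDING.findall(chunk)) / len(words),
    }

sample = "The storm was over. We were tossed and wrecked, yet all is well."
for chunk in chunk_words(sample, 7):
    print(feature_counts(chunk))
```

A real study would run thousands of such chunks through the tagger and then compare feature rates between groups of plays statistically.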
What does this raw data suggest? The authors argue that the prevalence of the past tense reveals the past’s importance to the present, that the asides enhance the “dreamlike” ambiance of the plays, and that the “to be” usage shows a preference for telling, rather than showing, the audience about events and people. Thus, Shakespeare used these linguistic features to create “focalised retrospection” (153), and the quantitative analysis reveals specific reasons why the later plays comprise a distinct group.
However, Witmore and Hope are less aggressive with their general conclusion. They note that such analysis complements, but does not replace, traditional qualitative commentary. The door is wide open, though, for other scholars to use quantitative analysis on myriad other works.
How did you respond to their article? Do you think quantitative analysis of the type they used on Shakespeare’s plays can tell us more about texts and authors’ intentions than we already know? Or are they over-hyping its potential?