Before this week, digital preservation seemed like an insanely complicated and overly technical process that I never thought I could truly understand. My background is in history and fine arts; I never took an advanced math or science class in high school or college because I thought it just wasn’t for someone like me. These readings proved to me that anyone can grasp the concepts behind digital preservation, provided it is explained in accessible terms.
A couple of themes repeated throughout these readings: there is no one way to “do” digital preservation, and digital preservation isn’t an “all or nothing” process. You don’t have to go all out; it’s okay to start small and work your way up.
To start, Professor Owens’ chapter, The Craft of Digital Preservation, introduces the idea that digital preservation is a “craft, not a science” (Owens, 72). In other words, there is no one set way to “do” digital preservation, no single answer; instead, it is something that requires planning and thought, must adapt to specific situations, and changes over time. Owens suggests that “part of the idea of digital preservation as craft is that there isn’t a single system for doing digital preservation. It is a field that requires continued refinement of craft” (Owens, 79). There will always be a need to improve and adapt principles to meet a specific institution’s needs; no one framework or model can “solve” digital preservation for you. Owens goes on to warn against an uncritical reliance on frameworks or models, stating that “these frameworks are useful as tools only to the extent they help do the work. So don’t take any of them as commandments we are required to live by, and don’t get too locked into how any of them conceptualize and frame digital preservation problems” (Owens, 80). Frameworks are great for guidance, but each institution needs to develop the policies and practices that work best for it, its collections, and its users.
Okay, that’s great, but what do I actually need to do to start digital preservation? What should I be thinking about? Thankfully, Owens provides some guidelines and points us to the National Digital Stewardship Alliance’s Levels of Digital Preservation (NDSA LoDP), which breaks those processes down into digestible steps. Owens outlines four areas of digital preservation that institutions should consider when initiating preservation projects. First, preservation intent and collection development policy: as an institution, what do we want to save? What do we want to avoid? How does that reflect our mission? Second, managing copies and formats: we need systems to ensure bit preservation and the long-term usability of content. Third, arranging and describing: how are we organizing our content? What terms will we use to describe it? What kind of metadata do we want to record? Finally, multimodal use and access: what formats will we make our content available in? How will we ensure it is accessible to users? These questions help institutions conceptualize why and how they should approach digital preservation, and with that in mind they can use frameworks like the LoDP more effectively, creating policies and practices tailored to their specific needs and capabilities.
Okay, so digital preservation doesn’t need to be an “all or nothing” endeavor; not everyone can or should attempt “four star” digital preservation right off the bat. All digital preservation programs have to start somewhere, and the NDSA’s LoDP was specifically written to be “of maximum utility to those institutions unsure of how to begin a digital preservation program” (NDSA, 2). This framework explains digital preservation in non-technical terms and breaks it down into five content areas (or elements) that institutions should focus on: Storage and Geographic Location, File Fixity and Data Integrity, Information Security, Metadata, and File Formats. By naming and listing these elements, the LoDP helps institutions begin to conceptualize what resources and policies they need to develop to support a digital preservation program. Within each element there are also four progressive levels of quality, a structure that “is intended to allow for flexibility — users can achieve different levels in different content areas according to their unique needs and resources” (NDSA, 2). The LoDP helps institutions get started and continues to provide guidance as their digital preservation programs evolve, recognizing that each institution will develop differently. I think it’s super important to drive home the central idea behind this model and the other readings: digital preservation needs to be accessible, but it must also be flexible. There is no one way to do digital preservation, and not every institution can, will, or needs to preserve things according to best practices or at four-star quality.
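Of those five content areas, file fixity is concrete enough to sketch in a few lines. As a purely illustrative example (not drawn from the readings, and every name in it is made up), here is roughly what entry-level fixity practice amounts to: record a checksum for each file when it enters the collection, then re-check those checksums later to catch silent corruption.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(collection_dir: Path) -> dict:
    """Record a checksum for every file in the collection."""
    return {str(p.relative_to(collection_dir)): sha256_of(p)
            for p in sorted(collection_dir.rglob("*")) if p.is_file()}

def verify(collection_dir: Path, manifest: dict) -> list:
    """Return the names of files whose checksums no longer match."""
    return [name for name, digest in manifest.items()
            if sha256_of(collection_dir / name) != digest]
```

In practice you would save the manifest somewhere safe (ideally alongside a second copy of the files in another location) and rerun the verification on a schedule; a non-empty result means a file has changed or decayed since ingest.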
Chudnov’s The Emperor’s New Repository really hammers home the idea that digital preservation programs don’t need to be large or fancy right out of the gate to be effective. Chudnov advises us to “start with a small collection, minimal staff, and a short timetable, and see what you can learn by building something quickly” (Chudnov, 3). Really, Chudnov is all about just getting it done and making content accessible to users ASAP, because that’s why we’re preserving things in the first place. Adding a fancy new layer of software to manage your digital objects can actually make things more complicated; it’s okay to just post the content on your website and let users interact with it that way (Chudnov, 4). Again, digital preservation is an ongoing, iterative process, and “you’re going to learn so much along the way that the details of whether that tool’s the best long term fit or not are going to become obvious to you as you build up experience loading your content and making it available” (Chudnov, 3). Over time, you’ll learn what does and doesn’t work for you, and you can adjust policy, adopt new software, and make new decisions about how to store digital content.
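The “just post it on your website” advice can be taken quite literally. As a hypothetical sketch of my own (not from Chudnov’s article; the filenames and function names are invented for illustration), a small collection can be made browsable with nothing more than a generated index page, with no repository software in between:

```python
from html import escape
from pathlib import Path

def build_index(collection_dir: Path, title: str = "Small Collection") -> str:
    """Generate a bare-bones HTML page linking to every file in a folder."""
    items = "\n".join(
        f'  <li><a href="{escape(p.name)}">{escape(p.name)}</a></li>'
        for p in sorted(collection_dir.iterdir()) if p.is_file()
    )
    return (f"<!DOCTYPE html>\n<html><head><title>{escape(title)}</title></head>\n"
            f"<body><h1>{escape(title)}</h1>\n<ul>\n{items}\n</ul>\n</body></html>")
```

Writing the returned string to an index.html next to the files and copying the folder to any web server is, in Chudnov’s spirit, a complete first iteration; the lessons about what users actually need come from watching them use it.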
8 Replies to “Figuring out Digital Preservation”
I too reacted strongly to the simplicity of Chudnov’s mention of posting small collections to an ordinary web page. I was especially taken with his observation that the simplicity of that platform was a merit in itself since there was so little technology around it that might cause it to fail. For me the whole discussion hearkened back to many of the conversations we had in Professor Punzalan’s class about prioritizing the establishment of access above the perfection of the metadata. Because as you astutely point out, access is indeed why we’re here.
It does make me wonder to what extent we should restructure daily workflows with a priority on access in mind. Particularly when that restructuring comes at the apparent expense of efficiency. For example, from a workflow perspective I tend to prefer performing tasks in clearly defined stages separated by the action being performed on the collection and the tool(s) required to do it. However, I’ve found that the need to bring every item up to a minimum standard before proceeding to the next stage can grind this process to a halt when I do it this way, which of course delays access to the end user. Instead I’ve learned to work in a manner that is characterized by multiple passes over the same items. And on each pass I’ve learned to tolerate a degree of imperfection with the new assets I post for the sake of making them available to users more quickly. My resulting workflow now looks less like an assembly line and more like a swirling river where I may pass over the same materials multiple times, improving them each time.
I will admit that this method has felt slightly less comfortable. It goes against my analytical and deliberative mindset. But I have seen some evidence that the swifter establishment of access has been a benefit to my user community. I’ll be very curious to see this semester if I’m forced to adapt my approach again when focusing more on preservation as opposed to curation.
Great overview of this week’s readings. Your opening observations about feeling like digital preservation was outside your wheelhouse with your given background really resonated with me. I went back and forth even signing up for this class originally because I have very little experience with digitization and computers, not really my thing. But hey, if you can’t learn in the classroom why are we even here, am I right?
I like that you emphasize the action-oriented nature of digital preservation. By that I mean, you demonstrate how the frameworks we read about are useful tools for orienting yourself within the spectrum of digital preservation but that they can only take you so far. At some point you need to translate theory into practice; as you note, it’s okay to start small, it’s okay to make mistakes but you must start. This brings me full circle with my original apprehension about this class and actually gives some comfort. Whatever digital preservation efforts I make won’t be perfect but learning by doing might just be the most effective teacher in this field.
You did a great job digesting our readings this week. As I was reading your comments about the importance of access and Chudnov’s ‘start small and get something to the people’ idea, I was reminded of MPLP (More Product, Less Process) and an article I read for another class that pointed out reasons why it wasn’t all it was cracked up to be (the article was “A Defense of Preservation in the Age of MPLP” by Jessica Phillips). In her article, Phillips argues that while MPLP is great for access, the approach discredits and overlooks preservation work as central to archival practice. I found it a little ironic that Chudnov was pushing for, basically, MPLP on digital collections, not only so users can obtain access but also so digital archivists can learn what they need for digital preservation, when MPLP almost preaches that preservation at the lowest level (like removing metal fasteners, or describing at anything more granular than the folder level) is unnecessary. Phillips argues that the preservation issues ignored while practicing MPLP can ultimately come back to haunt a repository, because it isn’t actively preventing further decay of the physical materials. In my mind, the digital version of MPLP would be to almost skip Levels 2–4 of the LoDP, which is clearly the opposite of what repositories should be doing (although reaching Level 4 in every category is most likely only attainable in an ideal world). Chudnov’s advocacy for a quick turnaround for users without the fancy metadata or multiple copies makes total sense in the world of access, but if you set it next to these critiques of MPLP, is that really helping the users or the archives in the long run? Is skipping the “small preservation issues,” like backing up in multiple geographic locations or noting who changes what on a digital object, really saving you time, effort, and money?
(I also understand that the get-it-out-there-quick is a building block to a more sound digital preservation practice, and I ultimately agree that you should test small collections with users to see what they want and how they use your database etc, I just couldn’t help but think of the physical side of archives.)
I see your point about Chudnov’s article being the MPLP of digital preservation, and how it can be problematic to pat oneself on the back after doing essentially the bare minimum of digital preservation. I took this article more as encouragement for institutions that were afraid to initiate digital preservation projects, showing that it doesn’t have to be a crazy complicated process. But I think you’re right: it’s important to realize that you’re not done just because you’ve got basic access; you still need to back up that data and make sure you maintain context and sufficient descriptive metadata. As Owens’ book has described for us, digital preservation is an iterative process, and digital preservation programs should continue to grow over time.
Interesting connection with MPLP. I took a materials preservation class last spring (as did Maggie I think). My instructor is very passionate about preservation, and I’m sure she’s advocated exhaustively for funding. One recurring theme in the class though was that it has to be sustainable. I read this article for class on preservation in tropical climates, and there was one striking example that I still remember:
“At one large institution in a country with a new library building and very high ambient humidity, the cooling effects of air conditioning lasted only as long as electrical power was available resulting in the rapid development of mold as the humidity condensed into moisture on the cool books, causing a serious mold problem that was not known in the original traditional building.” (Dean, 2011)
Just like materials preservation, digital preservation comes down to what you can sustain. Maybe another topic for discussion is whether LoDP Level 1 is the bare minimum or sometimes just the appropriate choice.
MPLP might be a little bleak. I know it’s a topic that a lot of people are passionate about one way or another.
Dean, J. (2011). Preservation in tropical climates: An overview. International Preservation News, 54(54), 6-10.
Your post’s focus on how digital preservation isn’t an all or nothing process fits in well with the readings from last week. The articles worried about a complete digital dark age assumed that all material would be lost, but through even the smallest digital preservation efforts, material is being preserved. It is important to remember that some digital preservation is better than none at all. When creating, or even thinking about creating, a digital preservation plan, it is easy to be caught in the trap of trying to do everything, or to do everything perfectly, to the point where nothing gets done.
Taking a step back, we can acknowledge that while preserving everything might be what everyone thinks the goal of the digital archive is (and the analog one as well, let’s be honest), as archivists we know that preserving everything is not possible. Why should we take a different approach to digital preservation?
I had the very same thought regarding the expectation that we’re supposed to preserve ALL digital information, particularly while reading the “External Bits” article by Mackenzie Smith. The comparison made in that article between the relative sizes of ALL digital information ever created and the holdings of the Library of Congress is a false equivalence. One of those corpora employs a standard for entry; the other does not. If we were to hold the LOC to the same bar we use for the sum of all digital materials, shouldn’t we also judge the LOC harshly for not including incidental documents? What about Post-it notes, grocery lists, or graffiti on bathroom walls? I suspect an awful lot of the digital information we’re talking about in that five exabytes falls into the category of “somewhat questionable social value.”
Just like the LOC focuses on material of social value, should we not give ourselves a bit of a break from the idea that ALL digital material MUST be preserved simply because it can be? Can’t some of it safely be allowed to fade from existence?
All of you have made great observations, especially about how we should not forget the principles we’ve learned about physical archives when thinking about the digital. I’ve been spending time with a lot of people who work on digital humanities, and there have been times when I’ve been a bit taken aback by projects that don’t give much thought to “respect[ing] des fonds” or keeping digital items in the context of the collection as a whole – which goes against every archivist grain in my body. The principles that we’ve all learned about the physical realm are immensely relevant to the digital. A further question would be, in what ways do those core archival “rules” directly translate, or need to be adapted to the digital realm?