Hello all, and apologies for posting late. My interest in the course has a lot to do with the oblique applications of digital art curation methods to other domains, like institutional e-records or digital scholarship in a variety of disciplines. Ippolito and Rinehart frame new media art preservation as a laboratory for methods that “may inform the problem of preservation in other fields” (p. 20). That certainly seems to be true for capturing and preserving dynamic, web-based material. See, for example, the ongoing development of WebRecorder and other web and social media capture tools at Rhizome. Without really knowing how WebRecorder works, including whether it builds on the Wget-based workflow Fino-Radin describes or is a separate beast altogether, it seems incredibly useful for institutions whose missions and collection policies differ quite a bit from Rhizome’s.
It's not just that digital arts curation tools and methods could be repurposed in other contexts; borrowing tools also forces us to take a hard look at domain-based assumptions. What practices are analogous between software preservation and digital news archives, for example, and which are truly specific to a discipline? As tools for preserving art, games, and multimedia content filter into other areas, it will be interesting to see how much domain-specific language gets taken up along with them. Archivists and the general public already talk about technology in metaphors; what's a few more?
Another aspect of digital arts curation that grabs me is deliberately preserving for reuse. Although in theory reuse is the ultimate objective of all kinds of [digital] preservation, the users and uses of archival material are often assumed to be narrow: historians writing history is a big one. Even oral histories and community archiving can start to assume a scholarly, credentialed audience if not conscientiously managed otherwise. So it's interesting to think about how preserving specifically for reuse shapes the way digital preservation anticipates future audiences, even if the reuse is not remixing so much as re-staging. Ippolito and Rinehart write about preserving versions and variability. It sounds like we're actually in the business of preserving potential, and that preservation is a matter of understanding and prioritizing (because we have to prioritize) the different potentials of a digital artwork.
Preserving digital art, games, software, and so on also calls into question how we describe born-digital material in archives, special collections, libraries, and other repositories. One prevalent attitude is to “let the bits describe themselves,” which seems to emerge from the confluence of More Product, Less Process and the high profile that BitCurator and other digital forensics tools/methods have achieved among archivists of digital material. There's also a troubling hint of the ostensible neutrality of file and system metadata, which I don't think is Brian's intention but lurks there nonetheless.
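To make that last point concrete, here's a minimal Python sketch (filenames and values are hypothetical, not drawn from any reading) of why filesystem metadata isn't neutral or self-describing: copying a file leaves the bitstream identical but rewrites the "facts" the system records about it, so the metadata ends up describing the custodial event rather than the work.

```python
# Sketch: the same bits, copied to a new location, acquire new system
# metadata (timestamps) that record the act of copying, not the object.
import os
import shutil
import tempfile
import time

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "artwork.bin")   # hypothetical digital object
with open(original, "wb") as f:
    f.write(b"\x00\x01\x02")

time.sleep(1)                                     # ensure a visible timestamp gap
copy = os.path.join(workdir, "artwork_copy.bin")
shutil.copy(original, copy)                       # copies data, not timestamps

with open(original, "rb") as f1, open(copy, "rb") as f2:
    same_bits = f1.read() == f2.read()            # True: bitstream intact

orig_mtime = os.stat(original).st_mtime
copy_mtime = os.stat(copy).st_mtime
same_mtime = orig_mtime == copy_mtime             # False: metadata changed

shutil.rmtree(workdir)
```

In other words, even a "forensically" captured modification time can testify to the archivist's workflow as much as to the creator's, which is exactly why the bits can't simply be left to describe themselves.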
The first two weeks’ readings suggest that preservation isn’t just about reading the bits and metadata but is more of a conversation in which they’re just two inputs among many. I was impressed with the emphasis on documentation, consulting with creators, and striking a balance between saving the experience of a digital art object and making sure the conditions to recreate it are what last. It’s a useful reminder that sometimes digital preservation is about everything but the bits.