Next Steps Preservation Plan for the Archeology Program Office

OVERVIEW

The Archeology Program Office of the Prince George’s County Department of Parks and Recreation was established in 1988 to excavate, preserve, and protect archeological sites in county parks. It is part of the Maryland-National Capital Park and Planning Commission. As part of its mission, the program curates millions of artifacts, and over the years, related documentation has been created in various formats and on disparate media. Documentation consists primarily of digital and/or physical copies of reports, catalogs, slides, print photographs, negatives, drawings, maps, and videos. All artifacts and documentation are stored in the same facility. The program’s goal is to have all digital content centralized on one shared network drive, which is currently in development.

This report outlines recommended next steps to preserve the program’s digital content, following a framework designed by the National Digital Stewardship Alliance (NDSA). NDSA’s Levels of Digital Preservation presents a series of recommendations based on five areas of concern for digital preservation: storage and geographic location, file fixity and data integrity, information security, metadata, and file format. Level 1 recommendations cover the most urgent activities needed for preservation and serve as a prerequisite to the higher levels.

STORAGE AND GEOGRAPHIC LOCATION

Establish a central storage system for all digital content and maintain at least one complete copy stored in another location.

Digital content is currently stored locally on five desktop computers, two laptops, a back-up drive of files from former staff, approximately 300 compact discs and about 700 3.5” floppy disks. Staff have been working on a shared drive to organize content in one place. While staff are backing up their own drives locally, there is no complete copy of all digital content.

Following NDSA’s Level 1 recommendation, staff should move all files off disparate media and into a storage system and create a complete copy of files that should be stored in a different location. This serves a number of purposes:

– Transferring to a new storage system will guard against potential loss of data on old media.

– Minimizing the number of storage locations will facilitate data integrity monitoring.

– Creating a complete copy will provide a back-up in case files are corrupted or lost.

– Storing a copy in a different location will guard against loss specific to one location such as damage to equipment as a result of severe weather or a catastrophic event.

– This process is also a necessary first step in achieving a goal for the program, which is to have all files organized and accessible without extensive searching.

1. Create a complete copy of what is accessible right now.

The shared network being developed in coordination with the program’s IT department will likely serve as its established storage system. A portion of files are currently inaccessible on floppy disks; however, the bulk of digital content is stored locally on various hard drives. Any content that is currently accessible on hard drives should be copied and stored in another location to safeguard against loss. See “File Fixity and Data Integrity” below for checking data integrity and creating a file manifest. In the short term, the copy can be kept on an external hard drive stored securely in another location. Alternatively, staff could use a cloud storage service such as Dropbox, which would have the added benefit of storing files offsite. Once files are integrated into the shared drive, a copy can be created from this set of content. Staff can work with the IT department to see if they can use a back-up storage system that is already in place for other departments.

2. Establish a file structure in the shared directory

Staff can begin integrating current files that they work with into the networked drive in order to develop a workable structure that will be practical for finding files. Historical documents can then be organized into that structure. Once a structure and naming system has been established, staff should document and follow this structure. This can be done concurrently with the following step.

3. Copy files from compact discs and 3.5” floppy disks using an unnetworked workstation

It will likely take a while to sift through decades’ worth of files. Given the urgency of potential loss from old media and the risk of viruses on disks used by former staff, copy all the files from compact discs and floppy disks onto an unnetworked drive first and run a virus scan before combining them with current files.

a. Drive space needed: Assuming approximately 700 MB of data per CD and 1.44 MB per floppy disk, there is approximately 211 GB of data if all the disks are full.
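Since this estimate drives the choice of transfer hardware, the arithmetic is worth double-checking (disk counts are from the survey; the real volume will likely be much lower, since most disks will not be full):

```python
# Rough upper bound on drive space needed for the disk transfer.
CD_CAPACITY_MB = 700        # standard CD-R capacity
FLOPPY_CAPACITY_MB = 1.44   # 3.5" high-density floppy
num_cds = 300
num_floppies = 700

total_mb = num_cds * CD_CAPACITY_MB + num_floppies * FLOPPY_CAPACITY_MB
total_gb = total_mb / 1000  # treating 1 GB as 1000 MB

print(f"Worst-case volume: {total_gb:.0f} GB")  # about 211 GB
```

Note that the CDs dominate: the 700 floppy disks together add only about 1 GB.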

b. Install anti-virus software on unnetworked workstation: Speak with the IT department about installing antivirus software so that it can run in the background while copying over files. Establish a schedule to run full virus scans depending on how regularly files are being copied over.

c. Install drivers for the 3.5” external floppy drive on the unnetworked workstation: At the time of the survey, staff had an external drive to read 3.5” floppy disks but lacked the software to run it. This step is necessary in order to access any of this information.

d. Install fixity software on unnetworked and networked drives: See the section “File Fixity and Data Integrity” below.

e. Transferring files from CDs and floppy disks to the unnetworked hard drive: Currently staff use context such as file extensions, file location, file name, and author name to find content. Therefore, it is recommended that they preserve these elements, and any related descriptive information written on the disks, as much as possible until they can sort out all the content. Disks were labeled in a number of ways: the majority of floppy disks were organized in boxes of about 10 disks, with labels on the boxes suggesting groups of related disks, while some disks had vague or no labeling. Related files kept together on disks or boxes of disks should be copied into a directory folder along with the descriptive information from the label. This can either be transcribed into a text file or captured as a photograph of the label included in the directory.
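A minimal sketch of this step in Python (the directory naming and the `disk-label.txt` convention are illustrative assumptions, not a prescribed workflow):

```python
import shutil
from pathlib import Path

def copy_disk(disk_root: str, dest_root: str, disk_id: str, label_text: str) -> Path:
    """Copy the contents of one disk into its own folder, preserving
    the original file names and layout, and record the disk label."""
    dest = Path(dest_root) / disk_id
    shutil.copytree(disk_root, dest)                   # keeps relative paths intact
    (dest / "disk-label.txt").write_text(label_text)   # transcribed box/disk label
    return dest

# Hypothetical usage, one disk at a time:
# copy_disk("D:/", "staging", "box03_disk07", "Site forms, field season 1992")
```

Keeping one folder per disk, with the label transcription inside it, preserves the original groupings until staff can sort the content into the shared-drive structure.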

f. Timing: While each floppy disk may hold a relatively small amount of data, the process of copying over hundreds of disks can be labor intensive, and during this window the work is not being backed up. Therefore, consider either setting a short window of time to copy over all of these files, or setting a schedule where small batches of files are copied from disks and scanned for viruses before being copied to another location. This can either be the shared drive or another external drive that is stored in a different location. Once files have been safely transferred, they can be integrated into the file structure on the shared drive.

4. Set a schedule of back-ups to maintain a complete copy.

METADATA

The NDSA Level 1 recommendation is to create an inventory of digital content and storage locations. Like the digital content itself, the inventory should be backed up and stored in another location. As mentioned in the section “File Fixity and Data Integrity” below, AVP’s Fixity software includes a function to generate a manifest of file paths that will assist with the inventory. Staff can maintain this inventory as they continue to develop the file structure on the shared drive.

For file and directory naming, consider creating a controlled vocabulary and syntax to make it easier for staff to find files. This can include specific terms for archeological site names, document type (e.g., site form, report), and a version, year or other modifier (e.g., draft, final) when needed.
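As an illustration of how such a syntax could be enforced, a small helper might build names from controlled terms. Everything here, the terms, the ordering, and the underscore separator, is a hypothetical example for the office to replace with its own vocabulary:

```python
# Illustrative file-naming helper. The document types and name order are
# placeholder assumptions, not an established convention of the program.
DOC_TYPES = {"site form", "report", "catalog", "photo log"}

def make_filename(site: str, doc_type: str, year: int, modifier: str = "") -> str:
    """Build a name like '18PR94_report_1995_draft' from controlled terms."""
    if doc_type not in DOC_TYPES:
        raise ValueError(f"unknown document type: {doc_type}")
    parts = [site, doc_type.replace(" ", "-"), str(year)]
    if modifier:
        parts.append(modifier)
    return "_".join(parts)

print(make_filename("18PR94", "report", 1995, "draft"))  # 18PR94_report_1995_draft
```

Rejecting terms outside the controlled list is the point of the exercise: it keeps staff from drifting into ad hoc names as the shared drive grows.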

FILE FIXITY AND DATA INTEGRITY

File fixity is a way of ensuring that files have not changed. It is recommended to run fixity checks whenever files are transferred (Owens, p. 110). A fixity check generates an alphanumeric string called a checksum that can be compared before and after the transfer; changing the content of a file, including its format, will change the checksum value. If the IT department does not already use fixity software, Fixity is a free tool from AVP that can be used to generate and compare checksum values and ensure that all files have been transferred. The software also generates a manifest of file paths along with the checksums, which could prove useful in establishing an inventory of digital content.
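The core of what such a tool does can be sketched in a few lines of Python (SHA-256 is an assumption here; Fixity itself offers its own choice of algorithms):

```python
import hashlib
from pathlib import Path

def checksum(path: Path, algorithm: str = "sha256") -> str:
    """Compute a file's checksum, reading in 64 KB chunks to handle large files."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict[str, str]:
    """Map every file path under root to its checksum -- a simple manifest
    that doubles as an inventory of digital content."""
    return {str(p): checksum(p) for p in sorted(Path(root).rglob("*")) if p.is_file()}
```

Running this before and after a transfer and comparing the two manifests confirms both that no file changed and that no file was missed.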

Level 2 recommends virus-checking high-risk content, while Level 3 recommends virus-checking all content. Virus-checking high-risk content is addressed in step 3b of “Storage and Geographic Location.” Staff should have antivirus software installed at their workstations and run scheduled scans.

Level 3 also recommends checking fixity at fixed intervals to ensure data integrity over time. Consider establishing a yearly schedule for validating fixity. Any corrupt or missing files can be replaced with a copy that passes fixity validation.
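A yearly validation pass amounts to recomputing checksums and comparing them against a stored manifest. A sketch, assuming the manifest is a simple mapping of file paths to SHA-256 checksums:

```python
import hashlib
from pathlib import Path

def validate(manifest: dict[str, str]) -> list[str]:
    """Return the paths that are missing or whose current checksum
    no longer matches the stored value."""
    problems = []
    for path, expected in manifest.items():
        p = Path(path)
        if not p.is_file():
            problems.append(path)      # missing file
            continue
        actual = hashlib.sha256(p.read_bytes()).hexdigest()
        if actual != expected:
            problems.append(path)      # corrupted or altered file
    return problems
```

Any path the function returns should be restored from the backup copy, after that copy itself passes the same check.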

INFORMATION SECURITY

This step outlines who has access to the content and what they can do with it, preventing files from being deleted or changed by unauthorized staff. NDSA Level 1 recommends identifying who is authorized to read, write, move, or delete individual files. Relatedly, Level 4 in the section on file fixity recommends that no one be authorized to have write access to all copies. This reduces the likelihood of changing or deleting all copies of one or more files.

Staff have taken initial steps in the process by creating three different directories with different levels of access for their users: one directory for onsite archeology staff, one directory for the rest of the Prince George’s County Parks Department, and one directory for Dinosaur Park, another program of the Prince George’s County arm of the Maryland National Capital Park and Planning Commission that shares the same workspace.

However, more levels of access may be necessary if, for example, only one person in the office should have write permission for certain files. In addition, staff should clearly delineate working files from historical files that should not change. This will help prevent documents from being changed or deleted, and will also help with fixity validation, since working files will likely involve changes in content that alter the checksum value. This can be accomplished by setting permissions on specific subdirectories or on specific sets of files. Document the access restrictions and store the documentation in a location that all users can access.

FILE FORMAT

File formats can become obsolete. In some cases, once a format is obsolete, the file may no longer open in current software or may not render in exactly the same way. The purpose of this section is to minimize these problems by using formats that are less likely to become obsolete, or that can be effectively rendered in another format. Widely used formats are generally considered likely to remain accessible because there will be demand either to keep them accessible or to develop a means of migrating them (Owens, p. 121).

Since some media have not yet been accessed, a complete inventory of file formats in use is not available. Formats currently in use are JPEGs and files generated from different versions of Microsoft Word, Excel, and Access. Mapping files are created using GIS (geographic information systems) technologies. Older files were created using WordPerfect, CAD (computer-aided drafting) software, and the Paradox relational database. Staff are currently having trouble opening Paradox files, since the database is no longer supported and its files cannot be opened using current versions of Access or Excel.

Formats such as JPEG and Microsoft Word and Excel are commonly used, although the latter two undergo regular updates that may render slight changes when a file is opened in a new version. As the files are incorporated into the new directory structure on the shared drive, staff should develop an inventory of the formats they are using, work with the IT department to monitor them for obsolescence, and be prepared to migrate as needed.

FUTURE STEPS

NDSA Level 2 for storage recommends creating a third copy of the content and Level 4 recommends at least three copies stored in locations with different disaster threats. The Archeology Program Office could combine this recommendation with a means of sharing some of their content. This could be through a subject-specific repository for archeology or something more general like the Internet Archive.

Levels 2-4 address steps to maintain storage media so that files continue to be accessible in the long term. Staff should work with their IT department to document storage used for their shared drive and back-up copies, monitor for obsolescence, and have a plan in place for updating systems.

Staff also expressed an interest in resuming digitization of their physical documentation. Some reports and slides have already been digitized. As a starting point, staff can discuss their experiences with past efforts and lessons learned to establish goals for the program and how this will fit into the file structure that they are creating for current digital content. The Still Image and Audiovisual Working Groups of the Federal Agencies Digital Guidelines Initiative can be a good resource to establish best practices for digitization.

Digital Preservation Policy: Web Archiving for the Washingtoniana Collection

Introduction:

In my previous posts on this blog I surveyed the digital preservation state of the District of Columbia Public Library’s Washingtoniana collection. This survey was performed via an interview with Digital Curation Librarian Lauren Algee, using the NDSA Levels of Digital Preservation as a reference point.

In our survey we discovered that the DCPL Washingtoniana collection has very effective digital preservation: through a combination of knowledgeable practices and the Preservica service (an OAIS-compliant digital preservation service), it nearly reaches the fourth level in every category of the NDSA Levels of Digital Preservation. With this in mind, my next-step plan for the archive looks at a number of areas the archive has been interested in expanding into and presents some thoughts on where it could begin taking steps toward preserving those materials.

Of particular interest in this regard is the collecting of website materials. Being dynamic objects of a relatively new medium, these items can be fairly complex to collect, as it is hard to pin down precisely when a website has been sufficiently captured. Websites may appear differently in different browsers, they may contain many links to other websites, they change rapidly, and they often contain multimedia elements. Outlined below is a policy that discusses these issues and offers a digital preservation plan specifically for websites.

Website Digital Preservation Policy for the Washingtoniana collection

The Washingtoniana collection was founded in 1905 when library director Dr. George F. Bowerman began collecting materials on the local community. The collection stands as one of the foremost archives on the Washington, D.C. area, community, history, and culture. With the increasing movement of DC social life and culture to online or born-digital platforms, it is natural that the Washingtoniana collection would consider collecting websites.

Selection

The same criteria used to determine selection of other Washingtoniana materials should apply here. Websites should be considered if they pertain to Washington, D.C. or its surrounding areas; cover events that take place in or discuss that area; pertain to prominent Washington, D.C.-related persons or institutions; or otherwise pertain to Washington, D.C. community, arts, culture, or history.

Like any physical preservation decision, triage is an essential process. Websites that are likely to be at risk should be high priority, and in a sense all web content is at risk. Websites built for a specific purpose, or pertaining to a specific event, may have a limited operational window. Websites for defunct businesses, political election sites, and even an existing website as it appears on a specific day may be vulnerable, and thus candidates for capture. In addition, the materials in question should not be materials that are being collected elsewhere, and they should be considered in relation to the rest of the collection.

Although automation tools may be used for identification, discretion over selection rests with librarians. In addition, suggestions from patrons relevant to the collection should be considered, and a system for managing and encouraging such suggestions may be put in place.

Metadata

A metadata standard such as MODS (Metadata Object Description Schema) should be used to describe each website. MODS is a flexible schema expressed in XML, is fairly compatible with library records, and allows more complex metadata than Dublin Core, and thus may work well. Metadata should include, but not be limited to, website name, content producers, URL, access dates, and fixity information, as well as technical information that may be generated automatically by web crawlers, such as timestamps, URI, MIME type, and size in bytes. Extraction information, file format, and migration information should also be maintained.
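As a rough illustration, a minimal record for an archived website might be generated like this. The element and attribute choices are a sketch based on common MODS usage (including the `dateLastAccessed` attribute on `url`); the MODS documentation should be consulted before settling on an actual profile:

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"

def website_record(title: str, url: str, access_date: str) -> str:
    """Build a minimal, illustrative MODS record for an archived website."""
    ET.register_namespace("mods", MODS_NS)
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = title
    location = ET.SubElement(mods, f"{{{MODS_NS}}}location")
    ET.SubElement(location, f"{{{MODS_NS}}}url",
                  dateLastAccessed=access_date).text = url
    return ET.tostring(mods, encoding="unicode")
```

Technical fields harvested by the crawler (MIME type, size, timestamps) would be added alongside this descriptive core.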

Collection

A variety of collection tools exist for web archiving. The tool selected should be capable of the tasks below, as outlined by the Library of Congress web archiving page:

  • Retrieve all code, images, documents, media, and other files essential to reproducing the website as completely as possible.
  • Capture and preserve technical metadata from both web servers (e.g., HTTP headers) and the crawler (e.g., context of capture, date and time stamp, and crawl conditions). Date/time information is especially important for distinguishing among successive captures of the same resources.
  • Store the content in exactly the same form as it was delivered. HTML and other code are always left intact; dynamic modifications are made on-the-fly during web archive replay.
  • Maintain platform and file system independence. Technical metadata is not recorded via file system-specific mechanisms.

A web crawler such as Heritrix, an open-source archival web crawler, or a subscription solution such as Archive-It should be used. Both are from the Internet Archive; the first is an open-source tool, while the second is a subscription-based service that offers storage on Internet Archive servers.

Upon initial collection, fixity information should be recorded using checksums. This can be automated either with a staff-written script or with a tool like BagIt, which generates fixity information automatically. This information should be maintained with the rest of the metadata for the digital object.
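For illustration, the fixity portion of what BagIt produces is essentially a manifest file pairing checksums with relative paths. A simplified sketch of that format (not a replacement for the BagIt specification, which also defines the bag directory layout and tag files):

```python
import hashlib
from pathlib import Path

def write_md5_manifest(data_dir: str, manifest_path: str) -> None:
    """Write 'checksum  relative/path' lines, in the style of a
    BagIt manifest-md5.txt, for every file under data_dir."""
    root = Path(data_dir)
    lines = []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            digest = hashlib.md5(p.read_bytes()).hexdigest()
            lines.append(f"{digest}  {p.relative_to(root).as_posix()}")
    Path(manifest_path).write_text("\n".join(lines) + "\n")
```

A staff-written script like this is enough for a quick check; the BagIt tooling adds validation, tag files, and packaging on top of the same idea.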

Websites should be kept in the most stable web archival format available. At the time of this post’s writing, that format is the WARC (Web ARChive) file format. This format allows the combination of multiple digital resources into a single file, which is useful because many web resources are complex and contain many items. Other file formats may be accepted if archived webpages are received from donors.

Preservation

Upon initial ingestion, items may be kept on internal drives and copied to at least one other location. Before an item is moved into any further storage system, the file should be scanned for viruses, malware, or any other undesirable or damaging content, following safety standards agreed upon with the division of IT services. At this point fixity information should be taken as described above and entered into the metadata record.

Metadata should be created as soon as possible, at which point the object with its attached metadata should be uploaded into the Washingtoniana’s instance of Preservica.

Although Preservica automates much of the preservation process, a copy of the web archive should be kept on external hard drives. At a yearly interval, a selection of the items on the hard drives should be checked against the items in Preservica to ensure that Preservica’s fixity checks and obsolescence monitoring are working as desired.

References

Jack, P. (2014, February 27). Heritrix-Introduction. Retrieved November 14, 2016, from https://webarchive.jira.com/wiki/display/Heritrix/Heritrix#Heritrix-Introduction
Web Archiving-Collection development. (n.d.). Retrieved November 16, 2016, from https://library.stanford.edu/projects/web-archiving/collection-development
The Washingtoniana Collection. (n.d.). Retrieved November 16, 2016, from http://www.dclibrary.org/node/35928
Web Archiving at the Library of Congress. (n.d.). Retrieved November 16, 2016, from https://www.loc.gov/webarchiving/technical.html
Niu, J. (2012). An Overview of Web Archiving. Retrieved November 16, 2016, from http://www.dlib.org/dlib/march12/niu/03niu1.html
AVPreserve » Tools. (n.d.). Retrieved November 17, 2016, from https://www.avpreserve.com/avpsresources/tools/
Kunze, J., Boyko, A., Vargas, A., Littman, B., & Madden, L. (2012, April 2). Draft-kunze-bagit-07 – The BagIt File Packaging Format (V0.97). Retrieved November 17, 2016, from http://www.digitalpreservation.gov/documents/bagitspec.pdf
MODS: Uses and Features. (2016, February 1). Retrieved November 17, 2016, from http://loc.gov/standards/mods/mods-overview.html
About Us. (2014). Retrieved November 17, 2016, from https://archive-it.org/blog/learn-more/

 

Pottermore – the Archival Information Package

I was able to put my Preservation Plan into action by uploading a Pottermore Collection to the Internet Archive in addition to saving the collection on my laptop. Here’s a brief recap of my Preservation Plan:

  • Capture this YouTube video that announced the launch of Pottermore in 2011, saved by the youtube-dl downloader.
  • Archive the Pottermore Wikia, using their own archiving tools to download the xml files.
  • Download the images from the Pottermore Wikia separately, since the xml files don’t include them.  This was going to involve the command line method, or if that didn’t work, to curate a selection of images from the collection.
  • Save this Pottermore entry from the Harry Potter Wikia, which details the description and history of the site.
  • Save Let’s Play videos that can be found on YouTube to capture the interactivity of Pottermore, using the youtube-dl downloader.

I’ve officially uploaded what I’ve collected so far to the Internet Archive, check it out here: https://archive.org/details/Pottermore.

[Image: What my Internet Archive collection looks like!]

The first file I included was a PDF of the Pottermore entry from the Harry Potter Wikia. This entry gives a full description and history of Pottermore. I concluded that since it was only one entry, and the text matters more than anything else, a PDF would suffice. The next folder includes a selection of images from the Pottermore Wikia. This is what I was really happy about, since this is a feature that a lot of people enjoyed from the first Pottermore that isn’t as present in the newer version. Since I couldn’t figure out the command line method that I had written about in my Preservation Intent Statement, which was supposed to capture all of the images from a wiki, I had to go through the Pottermore Wikia image directory and download them one by one. Since there are 51 pages of images, with each page containing at least 40 images, I will be uploading one page’s worth of images at a time (as of this post, I have two pages’ worth of images uploaded to the Internet Archive). I saved all of the images in their original format, which is either .jpg or .png. The final folder contains the XML files of the Pottermore Wikia, which I had downloaded using the tools provided by the Wikia itself.

What I did not upload to the Internet Archive (due to copyright uncertainties) but have saved to my Pottermore folder on my computer are the videos.  I used the youtube-dl downloader to save the Pottermore launch video from 2011 as well as some Let’s Play videos to capture the experience of playing Pottermore.  All of the videos were saved in .mp4 format.

Below is a screenshot of the collection I have on my computer:

[Image: Screenshot of my Pottermore collection on my laptop.]

I arranged the folders according to the different aspects of Pottermore that were saved. The first folder contains the history of Pottermore, which includes the Harry Potter Wikia entry. The second folder holds the Let’s Play videos, which capture the experience of playing Pottermore. The next folder contains the Pottermore images, which are either in .jpg or .png format. Some of the images are labeled with descriptions, usually the names of the characters in the images (for example, “Hokey” or “Hooch”). However, most of the images are named after their location within Pottermore. For example, B1C11M1 = Book 1 (Harry Potter and the Sorcerer’s/Philosopher’s Stone), Chapter 11 (“Quidditch”), Moment 1 (“Charms Homework”). This will help orient the viewer as to the order of images within Pottermore.
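Because the location-based names follow a regular pattern, they can even be parsed mechanically. A small illustration of the convention described above:

```python
import re

def parse_moment_name(name: str) -> dict[str, int]:
    """Split a name like 'B1C11M1' into book, chapter, and moment numbers."""
    m = re.fullmatch(r"B(\d+)C(\d+)M(\d+)", name)
    if not m:
        raise ValueError(f"not a Pottermore moment name: {name}")
    book, chapter, moment = (int(g) for g in m.groups())
    return {"book": book, "chapter": chapter, "moment": moment}

print(parse_moment_name("B1C11M1"))  # {'book': 1, 'chapter': 11, 'moment': 1}
```

A script like this could sort the image files into book/chapter order automatically, rather than relying on the viewer to decode the names.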

What this collection really comes down to is trying to capture the essential elements of a website that, for our present purposes, no longer exists. I am hoping that with the XML files of the wiki, the images that provided the interactive layers, and the Let’s Play videos that show how the game was played, this goal has been accomplished.

Gone in 6 Seconds: “Transforming” Preservation Intent Statement

The preservation plan for Rob and Nick Carter’s “Transforming,” a series of four digital paintings created as an homage to centuries-old artworks (detailed here), is, much like the works themselves, more complicated than it appears at first glance.

While I had some concern about legal protections written into these works, according to an email from Rob Carter there is no digital rights management; mainly the artists rely on certificates of authenticity that go along with the 12 editions and 5 artist’s “proofs” (which can be given to museums for display to the public). The certificate also entitles the owner to another copy should anything happen to the original.

The cost of obtaining this certificate and the original work is huge, with one “Transforming Still Life” selling for $105,000 and one “Transforming Nude Painting” selling for £100,000. Even though there are artist’s proofs, I believe they are only for display in exhibitions. Acquiring all of these items and their constituent parts for a permanent collection would be prohibitively expensive for most institutions, even before considering preservation costs or the inability to make the items widely accessible due to copyright.

Beyond the cost, Rob and Nick Carter, and whoever else may be involved, are fairly attentive to their pieces. They keep backups in three separate locations and also in a “data safe.” Additionally, they actively upgrade the technology and software to improve display quality and to keep the works running into the future; the artists recently had to upgrade their “Transforming Diptych” so that it would run on a new OS.

With the works under fairly good control and somewhat unattainable, it seems both less pressing and less realistic to focus on preserving the works themselves (at least the finished video product and the application code). There are, however, shortened example versions of these videos available online, which I plan to preserve as representative stand-ins. What is more important, then, is to document the process of creating this new genre of art and the conversation and reaction around these pieces.

At the end of my statement of significance I said that:

“Therefore, documenting “Transforming” means documenting the cultural conversation around media consumption in the early 21st century.”

This was a summation of the Carters’ goal to create works that reward viewers for engagement beyond the six seconds an average museum patron spends looking at an artwork. To reach the goal of documenting this cultural conversation, I plan to preserve the various video interviews in which art historian Kate Bryan comments on the themes in the paintings. These videos provide rich context and a present-day scholarly perspective on the works, which will be valuable to art scholars in the future.

Additionally, there are several video interviews with the artists themselves about what inspired them to create these works. All of these will be essential in preserving the scholarly communication about these pieces and the originals that inspired them. I have contacted Rob and Nick once, and they are too busy to do an extensive interview, so these will have to suffice. The information is the key part of these videos, rather than their look and feel; therefore, it is not that important to ensure their visual quality or that they remain in the same format.

Furthermore, I plan to preserve all of the videos from Motion Picture Company explaining their role in creating the artworks. Their videos provide insight into the technological processes and difficulties behind the scenes that will be valuable to scholars of new media, animation, and design in the future. Because the visual nature is more important in these videos, I will preserve and maintain these in the highest quality formats available.

I plan to reach out to some of the people who worked on the team to see if they will do further interviews about the challenges and their experiences creating these pieces. I will compile these together into a complete document (most likely a PDF) that scholars and future artists will be interested in for years to come. This will further the goal of documenting the creation of this new genre of art without the concerns of providing access to the original constituent digital objects.


Most of the videos mentioned above are either embedded directly on a webpage or hosted in video players like Vimeo or YouTube. Thus, I will download them either by using a simple “save as” or by using youtube-dl. In addition, I will use tools like MediaInfo to extract technical metadata (most likely in PBCore 2.0) to accompany these videos. Furthermore, I will use FFmpeg (and the handy ffmprovisr to decrease the learning curve) to generate MD5 hashes for future fixity checks and also to transcode any videos that may be in an at-risk format.

Much of the online press coverage of these works, whether videos or web articles, has not been saved in the Internet Archive. I will ensure that these pages are preserved as part of the record of reaction and scholarly communication, mainly using the “Save Page Now” functionality of the Wayback Machine. Additionally, I will save a copy of the HTML as it is delivered to my screen and create a collection that can be accessed in a central location.

Actual viewer reaction is much more difficult to ascertain, with only a few mentions in news articles or generic postings from gallery attendees saying something like “this is cool” along with a picture on social media. To preserve the audience’s experience, I will go beyond these limited reactions and try to interview some of the people who posted on social media to see if they will expand further (hopefully they remember!). These interviews, as above, will most likely be saved as a compiled PDF document.

In the end, I plan on uploading all of these objects as a collection in the Internet Archive ensuring long term access.