MAIP for Twitch, an Invitation to Edit

[Image: MAIP-Twitch]

NOTES ON THE REPOSITORY
The Model Archival Information Package (MAIP) that I composed for Twitch, a series of minimal one-button games, is currently hosted on a GitHub repository. I chose GitHub in accordance with the Open Source spirit of Processing, the programming language and surrounding community that bring digital artworks such as Twitch to life.

It is my hope that a MAIP like this one could help enrich a collection such as the “The Art of Programming” exhibition at the Computer History Museum. The exhibition currently features only two programmers: Don Knuth and Jamie Zawinski. While I trust that it will eventually be more populated, when I compare it to other exhibitions such as “Memory and Storage,” with 27 items and granular contextual information, I cannot help but wonder whether artistic endeavors with computers are prone to being neglected. Since the subtitle of “The Art of Programming” exhibition is “A Programming Language for Everyone,” showcasing the evolution of Processing may be a perfect fit. Moreover, Processing could add breadth to the museum’s collection by showing how some aspects of computing have little to do with military and financial interests.

WHY PRESERVE TWITCH
Twitch epitomizes the evolution of Processing. Preserving Twitch therefore allows the game to serve as a gateway to the history of Processing. Moreover, its preservation can serve as a pilot project for the growing number of software artworks created with Processing.

Diverse User Base
The latter rationale is especially important given the immediate stakeholders of Processing. According to Casey Reas, co-creator of Processing and the creator of Twitch, the creative community is the primary audience of Processing. Reas describes the motivation behind the invention as follows:

It’s not very common for artists and designers to be the primary authors of programming environments, but this is changing. I hope Processing has helped to demonstrate that we don’t need to rely only on what software companies market to us and what engineers think we need. As a creative community, we can create our own tools for our specific needs and desires.

In fact, within seven years of Reas and Ben Fry releasing Processing as Open Source, the developer community had contributed more than 70 libraries. Processing users work in fields including K-12 and higher education, the music industry, journal publishing, and the design and art industries. As a result, a language initially developed to teach computational graphic design literacy can now handle audio, electronics, and animation.
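To give a sense of how these community libraries stretch the core language, here is a minimal sketch of my own. It assumes the widely used Minim audio library is installed and that a hypothetical file named “groove.mp3” sits in the sketch’s data folder; it is an illustration only, not anything drawn from Twitch.

  // A minimal sketch, assuming the Minim audio library is installed and that a
  // hypothetical file named "groove.mp3" sits in the sketch's data folder.
  import ddf.minim.*;

  Minim minim;
  AudioPlayer player;

  void setup() {
    size(512, 200);
    minim = new Minim(this);
    player = minim.loadFile("groove.mp3");  // load the hypothetical audio file
    player.play();
  }

  void draw() {
    background(0);
    stroke(255);
    // trace the left channel's waveform across the canvas
    for (int i = 0; i < player.bufferSize() - 1; i++) {
      float x1 = map(i, 0, player.bufferSize(), 0, width);
      float x2 = map(i + 1, 0, player.bufferSize(), 0, width);
      line(x1, height/2 + player.left.get(i) * 50,
           x2, height/2 + player.left.get(i + 1) * 50);
    }
  }

Even a dozen lines like these move Processing from static graphic design into sound, which is exactly the kind of growth a MAIP for Twitch should document.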

History of Programming in the Field of Art & Design
Concerning the former rationale, Twitch embodies principles and an evolution of ideas that originate at the Massachusetts Institute of Technology (MIT). As Muriel Cooper, founder of MIT’s Visible Language Workshop (later folded into the Media Lab), wrote in a 1980 letter, the principle of Processing, too, revolves around how “the content, quality and technology of communication” inform “each other in education, professional and research programs.” John Maeda, who was drawn to Cooper’s vision and pursued an art degree after his MIT engineering training, later returned to work at the Media Lab with graduate students including Casey Reas and Ben Fry. In “When a Good Idea Works,” published in the MIT Technology Review in 2009, Maeda connects the dots of how Reas and Fry’s Processing came to be:

The starting point for their project was something that I can take credit for: the Design by Numbers (DBN) framework for teaching programming to artists and designers. I originally wrote DBN in the 1990s, but I couldn’t get it to yield production-quality work. My graduate student Tom White made it into something that was much more functional. And then Fry and Reas took a crack at it. DBN limited users to drawing in a 100-by-100-pixel space, and only in grayscale, faithful to my Bauhaus-style approach to computational expression. But Fry and Reas figured that people needed color. They needed canvases larger than 100 by 100. They realized that this wasn’t in line with my interests, so they went off and made their own system that gave users no restrictions at all.

[Image: twitch-ecosystem]

In one sense, Processing arose as a response to practical problems. When Java first came out, it offered minimal support for sophisticated graphics processing. Drawing a line and stuff like that was possible, of course. But it couldn’t do transparency or 3-D, and you were almost guaranteed to see something different on a Windows computer and a Mac; it was incredibly cumbersome to do anything that was both sophisticated and cross-platform. So Fry, who grew up hacking low-level graphics code as a kind of hobby, built from scratch a rendering engine that could make a graphically rendered scene appear the same in a Windows or a Mac environment. It was not just any renderer; it borrowed the best elements of Postscript, OpenGL, and ideas cultivated at the MIT Media Lab in the late Muriel Cooper’s Visible Language Workshop.
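For readers who have never seen a Processing sketch, a few lines of my own (an illustration only, not code from Twitch or from DBN) show what that freedom looks like in practice: an arbitrary canvas size, full color, and, as the quote above describes, the same rendered scene whether the sketch runs on a Windows machine or a Mac.

  // A minimal sketch of my own, for illustration only: unlike DBN's
  // 100-by-100 grayscale canvas, the size and the palette are unrestricted.
  void setup() {
    size(640, 480);   // any canvas size, not just 100 by 100
    background(255);
    noStroke();
  }

  void draw() {
    // translucent colored circles trail the mouse: full color, not grayscale
    fill(random(255), random(255), random(255), 60);
    ellipse(mouseX, mouseY, 40, 40);
  }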

What started at MIT did not stop there. It is fair to say that the Open Source spirit of Processing helped it gain popularity in the developer community, and that support has proved crucial as the programming environment continues to change. John Resig, the original author of jQuery, for instance, developed Processing.js to bring Processing’s visualization and animation to the web. Written in JavaScript, Processing.js parses Processing code, which is written in Java syntax, and uses HTML5’s <canvas> element to render the images. As an antidote to the ailments of the now-close-to-obsolete Java applet, this adaptation strategy let Processing work evolve along with the web, and Resig’s implementation was highly praised in the developer community. As one comment on reddit has it: “This is ridiculously well done. The simplicity of some of the example is fairly stunning.”
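To make that mechanism concrete, here is a hypothetical stand-in sketch, not Twitch’s source. As I understand the Processing.js documentation, a page includes the processing.js script and points a <canvas> element at the .pde file through a data-processing-sources attribute; Processing.js then parses the Java-syntax source and draws each frame into the canvas with JavaScript.

  // sketch.pde, a hypothetical stand-in, not Twitch's source code.
  // A page using Processing.js would reference this file from a <canvas> element
  // (via the data-processing-sources attribute, as I understand the documentation),
  // and the library would render the animation below frame by frame in the browser.
  float angle = 0;

  void setup() {
    size(200, 200);
  }

  void draw() {
    background(30);
    translate(width/2, height/2);
    rotate(angle);
    fill(255, 180, 0);
    rect(-40, -40, 80, 80);  // a spinning square, redrawn every frame
    angle += 0.02;
  }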

Twitch is currently showcased as one of 1,174 Google Chrome Experiments, which utilize the functionality of modern web browsers. But Twitch is much more than a series of minimal one-button games we can play; its rich historical context needs to be documented as well.
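To make “minimal one-button game” concrete, here is a hypothetical sketch of my own in that spirit. It is not Reas’s code and is not part of the MAIP; it only illustrates the genre, in which a single input drives the entire interaction.

  // A hypothetical one-button game in the spirit of Twitch (not Reas's code):
  // press any key to nudge the falling square upward; touching the floor resets it.
  float y = 0;
  float velocity = 0;

  void setup() {
    size(400, 300);
    noStroke();
  }

  void draw() {
    background(240);
    velocity += 0.3;            // gravity pulls the square down each frame
    y += velocity;
    if (y > height - 20) {      // hitting the floor restarts the round
      y = 0;
      velocity = 0;
    }
    fill(200, 40, 40);
    rect(width/2 - 10, y, 20, 20);
  }

  void keyPressed() {
    velocity = -6;              // the one button: a press pushes the square back up
  }

Even a toy like this shows how little code a Processing game needs, and why documenting the real work’s source, libraries, and hosting matters.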

WHAT TO COLLECT OF TWITCH
I therefore propose to preserve the ecosystem of Twitch in order to document how an Open Source project thrives. My MAIP is composed of the following digital assets, in alphabetical order: codes, demo, hosting, and people.

[Image: Twitch GitHub repository]

  1. codes: This folder contains instructions for reverse-engineering Twitch along with its source code. There is also a credits document that details the licensing and due acknowledgement for each source code file.
  2. demo: This folder provides a document with a link to a YouTube video demonstrating how Twitch works. I also have the screen recording file on my local drive, but it was too big to be uploaded to the GitHub repository.
  3. hosting: This folder has a document with hosting instructions for reverse-engineering Twitch.
  4. people: This folder contains five documents describing the figures who played direct roles in Twitch’s fruition, namely, again in alphabetical order: Ben Fry, Casey Reas, John Maeda, John Resig, and Muriel Cooper.

* Here is the zip file of what is hosted on the GitHub repository: MAIP-Twitch. Editorial suggestions would be most welcome at my GitHub repository.

Snow Fall: Archival Information Package

Snow Fall: The Avalanche at Tunnel Creek, a multimedia story by The New York Times, made a significant impact on a variety of stakeholders when it was released in December 2012. It was a prime example of long-form journalism told through an immersive multimedia experience. The story was a first of its kind and set a precedent for what online journalism could and should be.

For its innovation alone, Snow Fall merits preservation. However, it also exemplifies the issues the journalism community faces today in archiving the work it produces on a daily basis. A newspaper no longer generates only physical materials like photographs and paper; it also produces videos, motion graphics, visualizations, and websites. In the case of Snow Fall, which multimedia items are available for viewing, and how they are rendered, also differs depending on the device a user has.

In an attempt to capture the various aspects of Snow Fall, as well as the impact it made on different communities, the archival information package (AIP) brings together a variety of files. It contains five folders and a Readme PDF file, which explains the contents of each folder. The folders are organized and named according to the subject and/or the type of files they contain.

Comments

The first folder, labeled “Comments,” contains two PDF files of every comment and reply left on the Snow Fall website from the day it was released, December 20, 2012, until the comments were closed eight days later. There are a total of 1,155 comments, arranged by date from newest to oldest. There is also an edited version of the comments, in which The New York Times selected specific comments to reply to. These form a smaller subset of the overall comments, which John Branch, the author of the story, and Elyse Saugstad, one of the skiers, responded to. A comment left by the wife of one of the skiers is also included in this group.

Maintaining the comments was important in order to preserve the variety of reactions from readers. They also demonstrate the impact the story made around the world by indicating where each reader was located. The handpicked selections that John Branch and Elyse Saugstad responded to provide a bit more context to the story, and the separate file containing these replies allows for quicker access. Turning the website comments into PDFs maintains their format and keeps them searchable in the future.

Documentation

Outside resources that documented how the website was created, along with the media coverage surrounding the release of Snow Fall, became extremely important to preserve, since a graphic designer working on the project indicated that there was no “Snowfall maker document.” Anything that provides contextual information about the story gives an overview of where online journalism stood at this point in time, as well as of the technology used to build it. The information in this folder supplies more of a historical view.

The files in this folder consist of two MP4 videos that show the layering of data used to create the avalanche simulation. There is also a PDF file containing links to URLs on the Internet Archive’s Wayback Machine. The URLs point to the story itself, media coverage, interviews with New York Times staff members involved with creating Snow Fall, and even a blog post that explains the code behind the website. The last document included in the folder is a PDF file of The New York Times Innovation Report from May 24, 2014, an in-house report on the state of online journalism and audience engagement at the newspaper.

Videos

Since newspapers have a much more established process for archiving digital photographs, this aspect of the online story was a lower priority. The process for preserving videos and motion graphics is less clear, however, so saving at least the final products appearing in Snow Fall was critical.

There is one standalone MP4 file in this folder and three subfolders containing videos of the graphics, interviews, and trip footage. The standalone file, called “AvalancheAtTunnelCreek_Documentary,” is an almost 11-minute film that had been included at the very end of the story. A still image from the documentary still appears at the end of the story, but the video file itself is no longer there. The video was located on YouTube and exported from there.

The graphics folder contains videos of the motion graphics found in the story, the interviews folder contains footage shot by New York Times staff of the skiers and their family members, and the trip footage folder contains video shot by two of the skiers while on the trip. These are all MP4 videos saved from the website. Maintaining the unedited footage from the in-house staff is obviously important, but only the final edited versions were available for this project.

Web Recording and iPhone4_Screenshots

The multimedia elements and the overall design of the Snow Fall website were a significant aspect of the story. The programmers and designers made clear that the design for modern browsers on a desktop or laptop computer was the starting point, and that it was then altered to improve the experience on various mobile devices. Therefore, demonstrating how the full website appeared on a laptop compared to, say, an iPhone or iPad was valuable to preserve.

QuickTime provides a way to screen record your Mac, as well as to record the screen of an iPhone or iPad back to the Mac. Using this software allows someone to view how the website would render on any of these devices. Using QuickTime, I recorded how the website appeared on my MacBook by creating individual videos for each chapter of the story. This captured all of the multimedia and how each item was integrated into the story. The iPad videos were captured in the same way, divided by chapter. It is evident, however, that there are no photo slide shows on the iPad, and the motion graphics, which automatically play on the full website, need to be started manually through video playback controls. Many of the leading images that begin each chapter are looping videos on the full website, but these become still images on the iPad version.

Additionally, there were some technical difficulties with recording the videos from the iPad to the MacBook. A slight delay in mirroring the iPad to the MacBook causes the audio to drift out of sync with the video. I need to explore this error a bit more or look into alternative software where this would not be a problem.

Creating a screen recording of my iPhone 4 on my MacBook was problematic as well, because the phone can no longer be upgraded to more current software. It is currently running iOS 6.1.3, which was too old for any of the software applications I looked at. As a workaround, I created screenshots of various elements from different chapters of the story. In the iPhone version, the photographs and slide shows integrated into the text are not included. Opening images for some of the chapters, which are looping videos on a desktop computer, appear as still images on the iPhone. Motion graphics that automatically play on a desktop computer are embedded as videos with playback controls.

Conclusion

Taking the time to evaluate which aspects of one specific example of online journalism are most important to preserve has been a valuable experience. Constructing the AIP for Snow Fall allowed me to consider what was important to the story beyond just the standard text and photographs. The amount and variety of work being produced at news organizations is only growing, and much of it has already been lost. It is critical that these organizations take on some initial responsibility for saving this multimedia work, or it will disappear entirely.

 

…And Technology Saves the Day!!


All types of digital art have a heritage in an analog medium. The original format can hold hidden treasures, just as the digital version can, that have yet to be discovered. While digital art is becoming the norm, analog media have their own affordances that can be seen as superior. Analog art includes the traditional practices of art, film, writing, and other cultural documentation. Technology inevitably brings problems, but when used in the right way it can open up a whole new world of creation. Reformatting analog materials into digital form will help preserve them and keep their information viable in the long run. By combining the old and the new, technological advances and traditional art, previously unknown information can be found and a whole new style of art can be made.

Continue reading “…And Technology Saves the Day!!”

Exploration at the intersection of material and digital

A researcher, looking to discover more information about an antique object, does a close analysis of the object’s original medium, ultimately finding enlightening evidence about the object’s creation in the traces of an erased text. This summary could apply equally well to the Library of Congress’s work discovering clues to the editing process of the Declaration of Independence and to Matthew Kirschenbaum’s investigation into a diskette containing a 1980s computer program, as described in his book Mechanisms. One case involves a physical object viewed through a digital representation, while the other centers on a born-digital work studied through the lens of its physical storage medium. As this comparison demonstrates, many of the issues involved with examining (and preserving) born-digital works carry over to the process of digitizing physical objects. In both cases, the intersection of the physical and the digital leads to similar challenges, as well as offering interesting possibilities. Continue reading “Exploration at the intersection of material and digital”

The ARSC-Files: The Truth Is Out There

Upon reading the bold statement at the beginning of the Preservation Reformatting chapter of ARSC’s Guide to Audio Preservation, “A restored version of a sound recording cannot be considered a preservation copy,” I had a moment of kneejerk skepticism. While I think that anyone can appreciate the truthiness of that claim, it feels a little weird to authoritatively ignore an important concept: digitization always results in the creation of a new digital object.

Dramatic reenactment of my moment of skepticism.

A point made by Trevor Owens in a 2012 post on The Signal comes to mind:

“The idea of digitization obfuscates the fact that digitization is not a preservation act. Digitization is a creative act.”

Owens is arguing that a digital surrogate should not simply be considered a handy duplicate because digitization tools (and the preservationists who use them) will always be making decisions about what they are capturing. Some of the decisions may seem harmless or minuscule, but they are still judgments that prioritize certain significant properties. Furthermore, the decisions regarding what materials get digitized and which ones don’t also demonstrate these types of values. Still, I can understand where ARSC is coming from. “Warts and all” preservation copies allow scholars to scrutinize the artifactual qualities of items, and they also have a way of—to use a tired phrase—bringing history to life.

Still, too much faith in these preservation copies can lead to problems. Sarah Werner illustrates this point in her discussion of the limitations of the digitizations in Early English Books Online. The digitizations were largely drawn from microfilms of early books, so certain unexpected details can be lost or misinterpreted. The title page of an elegy mourning the death of Prince Henry, for instance, appears as a mistakenly reversed negative because the person who processed the microfilm didn’t believe that the text was white on a black background.

[Image: werner3, the mistakenly reversed title page from Early English Books Online]

Interestingly, FADGI’s Technical Guidelines for Digitizing Cultural Heritage Materials does not subscribe to ARSC’s relatively dogmatic principles regarding verisimilitude. Take, for instance, this snippet that discusses FADGI’s view on the adjustment of master image files:

“There is a common misconception that image files saved directly from a scanner or digital camera are pristine or unmolested in terms of the image processing… Because of this misconception, many people argue that you should not perform any post-scan or post-capture adjustments on image files because the image quality might be degraded. We disagree. The only time we would recommend saving unadjusted files is if they meet the exact tone and color reproduction, sharpness, and other image quality parameters that you require.”

FADGI’s reasoning is a mix of ideological concerns and practical thinking. It recognizes that advocacy for adjusting master images may cost it some blue ribbons in the future (“First Place – Most Authentic in Show”); however, it also feels that “adjusting master files to a common rendition provides significant benefits in terms of being able to batch process and treat all images in the same manner.” Furthermore, multiple copies (master/raw, production, access) might create prohibitive storage costs.

In their eyes, post-capture adjustments will result in insignificant data loss, and raw files are often more trouble than they are worth. Might this be FADGI taking proactive steps to avoid creating more situations like the one Sarah Werner described: producing facsimiles of things that don’t actually exist?

This makes me wonder: what is the cause for these contrasting perspectives? Is it something to do with the different materials being preserved? ARSC’s definition of preservation reformatting might provide a clue: “the process of transferring the essence or intellectual content of an object to another medium.” I’m not 100% certain what they mean by “essence”; perhaps they are referring to the differences between artifactual and informational qualities. I also see this as a nod to a precept of psychoacoustics: perception and meaning are not bound to one another. Still, the very idea that someone is prioritizing either essence or intellectual content seems to undermine the authenticity of any preservation copy.

Also, ARSC does recognize the fact that audio files aren’t necessarily “unmolested” during transfer from analog to digital. In my current field study at UMD Digital Conversion and Media Reformatting I am following ARSC guidelines to digitize reel-to-reel recordings and this requires me to adjust the azimuth of the reel-to-reel player before transferring each recording into Adobe Audition. During this process I am essentially adjusting the playback head based on what sounds right to my ear. The resulting preservation master can’t be called 100% raw. But I suppose that “do the best you can out there” doesn’t make a very strong opening statement to your preservation guidelines!

The invisible hand of authenticity adjusts azimuth for the benefit of all.

Ultimately, I wonder how different the philosophies of ARSC and FADGI are in practice. They look pretty different on the page in terms of tone, with ARSC being a bit dogmatic and FADGI a relatively cavalier pragmatist, but are they so divergent? Is FADGI really “throwing away” any more data by virtue of post-capture adjustments than ARSC is by prizing the preservation copy? I suppose not, if you believe that digitization is a creative act in the first place.