Racing the Beam: The Atari Video Computer System by Ian Bogost and Nick Montfort offers a detailed look at the Atari VCS, better known to many as the Atari 2600. The book focuses on the years when the system dominated the market, from 1979 to 1983, and discusses its eventual role in the video game crash of 1983. In particular, the authors examine a number of game cartridges to reveal the affordances and resistance created by the video game platform and its limiting of “computational expression” in the creation of these games. At first glance, this book may seem largely tangential to preserving digital art, but I think there are many commonalities that Bogost and Montfort illustrate quite well and which we can learn from.
The Resistance in the Materials
As William Morris once stated, “you can’t have art without resistance in the materials.” The Atari VCS could be defined by its resistance. Coders had to deal with the physical and technical limitations of the platform such as the speed of the beam writing across the screen one line at a time, the limited ROM, and the limited graphical elements, to produce games people actually wanted to play. For what they had to work with, the coders’ results were often innovative and astounding.
All the while, the concurrent political, social, and economic challenges such as creating games for the home (rather than arcade) environment, deadlines for game release based on market forces, and questions over ownership also affected game creation. In these ways, Montfort and Bogost are connecting what is on the screen and the cultural context with the forensic materiality of the hardware that Kirschenbaum describes in Mechanisms. Understanding the entire process of game creation and the limitations of the platform gives a better understanding of all the factors that need to be preserved.
Porting and Licensed Adaptations
The theme of resistance continued in the challenge of taking other arcade video games and licensed works from their original medium and adapting or “porting” them to the Atari 2600. To me, this process raised some interesting connections with ideas of social memory, digital preservation, and significant properties. Specifically, the quote below really got me thinking about these issues:
Along with the lack of true originality in most VCS games—that is, the basis of many VCS games in arcade games or licensed properties—another closely related theme runs throughout the history of the Atari VCS, that of the transformative port or adaptation. When an earlier game is the basis for a VCS game, it can almost never be reproduced on the VCS platform with perfect fidelity. – page 23, Racing the Beam
We see that these games lacked true originality in the sense that they were attempting to copy other works but were original in their transformative adaptation to a system loath to provide the elements needed to reproduce their inspiration exactly. Montfort and Bogost go on to say that, technical limitations notwithstanding, it is still impossible to replicate the physical environment, interface, or economic context to create a true copy of the game experience for the player, but it can be transformed to get something close enough to make the memory live on.
Porting and adapting have many parallels to the production of informal social memory through recreation and variation. Exactness is not key in this approach, instead adapting or porting seems more like a performance based on a “score” or the instructions of the original artwork, similar to the process that Rinehart and Ippolito discuss in Re-Collection.
Or seen in a different way, the retelling of a game on a different platform can perhaps be compared to the retelling and memory sharing process of oral history. In these ways, the idiosyncrasies of each repetition can be forgiven, assuming the significant features remain, allowing it to be the same “work” on a more conceptual level. With the Atari VCS, the resistance in the materials forced game creators to focus on the most significant elements in order to create something that resembled the look and feel of the original, while still acknowledging the variation of its underlying medium and its context.
As both professionals and amateurs try to preserve these games through emulation, or even by rewriting them to work on new devices that do not have the same resistances, they encounter new and unique forms of resistance. The problem remains of being unable to reproduce these items with the exact fidelity of the original. Newer technology can overcompensate for the original quirks: newer processors can make a game run faster and smoother than it ever could, LCDs and HDTVs do not blur pixels together the way CRT televisions did, and the interface is often not the original joystick. Compensating to make these items run like the originals, and to feel less advanced, is its own form of resistance.
But as seen above, these characteristics can be forgiven if we view emulation as an attempt to implement the “score” of the original game and its most significant elements. These good-faith efforts can still ensure that the social memory of these cultural artifacts survives. And to what extent is perfection necessary? Perfection, in fact, can be problematic in communicating authenticity. Sound recordings do not perfectly reproduce the natural world; the sounds of particular elements are amplified to be heard better. Furthermore, some consider vinyl recordings to have a warmer, more authentic sound than their high-quality digital counterparts. In these examples, the consumer expects imperfection. Additionally, glitches in computers (often the result of a system running incorrectly) can be hints of the materiality of the system and can also provide a certain authenticity.
If there are always errors or differences, how do we determine an acceptable level of tolerance for them in the preservation of objects and their social memory? I think there can be multiple tolerances, as Rinehart and Ippolito describe with the “both/and” approach to preservation that allows for multiple representations of the same work. So if a range is okay, where do we stop? Overall, the platform-study process Montfort and Bogost undertake seems like an essential framework for understanding sufficient context to decide what level of preservation is good enough. But how do we make it scalable?
11 Replies to “Emulation as Resistance and Social Memory”
In case anyone was interested in further details of the legendary ET landfill dump, the dump site was discovered in 2014, and has been excavated as a sort of modern-day archaeological site.
Ian Bogost has a fair number of quotes about ET in this story about digging up the ET cartridges. http://arstechnica.com/gaming/2014/04/digging-up-meaning-from-the-rubble-of-an-excavated-atari-landfill/
Just came across this article published today about the designer of ET. It gets a little more into the work environment of the designer and the pressures put on him. http://www.bbc.com/news/magazine-35560458
Preserving Computer-Aided Design (CAD) by Alex Ball, as well as Digital Formats: Factors for Sustainability, Functionality, and Quality by Caroline Arms and Carl Fleischhauer, get to your point about tolerances for preservation. Arms and Fleischhauer bring up the idea of significant characteristics and balancing the factors for sustainability, functionality, and quality. They compare the original artwork of a cartoonist, a digital photo submitted by a community member for an oral history project, and a documentary nature photograph. Different factors are important for each of these, and the chosen format for preservation would vary because of this. You might give up tonal variety in the oral history photo because it’s less important and save some space in the process, but that would be unthinkable for the original artwork and the nature photo. Arms and Fleischhauer provide a practical look at balancing the options for preservation.
The article on CAD addresses the same idea of preserving the models in multiple formats in order to preserve the various significant features that might be necessary to future users. These formats will be different depending on whether you might need to reconstruct a product or if you only need to have the visualization.
I think in both of these situations they have come to terms with determining acceptable levels of tolerance. You make choices in attempting to preserve the most important aspects of an object and I guess, to answer your question about where do we stop – I suppose we don’t. Formats and platforms will continue to change, so that range will continue to expand. Or do we stop trying to emulate Atari games on new machines? Maybe just the story of Atari (and how awesome it was!) lives on, much like the great story behind Agrippa in Kirschenbaum’s book Mechanisms.
Great points on the already established acceptance of variance in the field. I agree there has been a lot of good work done on practical approaches to preservation and its varying levels. Documenting approaches and standards makes the preservationist’s job easier when making these decisions in the future. The question becomes, who decides what level is enough? And what should be saved? With video games, a large amount of the public has a desire to preserve these objects due to nostalgia. How much say do you give to the community? Most of these people act on their own. I think there is a certain level of intersection with archival appraisal theory in preservation that is often overlooked. Sometimes we should take a step back to understand our reasoning behind preservation rather than taking preservation (no matter the level or variance) as a given. And at the same time, we should be interrogating whether the professional archivist, curator, etc. should be making these decisions.
On the other hand, the professional can take advantage of the public fanbase community, and interact with them to ensure that effort is not duplicated, leaving the professional community to build on the base started by fans, and occasionally advising them on the best way to preserve or sustain something.
I’m sure this has come up in actual discussions of ports and emulations of games, maybe someone can point some out, but a good example of this kind of decision-making involves coding bugs and glitches — players would take advantage of certain glitches to skip to secret areas, get bonus power-ups, etc. But the game makers didn’t intend to allow gameplay to work that way, they just didn’t catch the bug before the console shipped. Does the archivist respect the original intent of the creators and remove the glitch, or respect the original code and the nostalgia factor and leave it in? What happens if you try both?
(As an aside, this video showed up on my tumblr roll the other day. Vaporwave is the name of the larger culture that covers the types of remix art we’ve been discussing this week as well as related synth media.)
Yes, last week we talked a bit about the larger community being involved with preservation, rather than professional archivists or curators. In the case of Agrippa, many of the forms that still exist are due to the community or fans. This was also the case with the digital Warhol files that were saved due to some help from the Carnegie Mellon University computer club. Hopefully this work can be done in tandem with professionals, who also realize the importance of input from the community.
I think Alex Ball’s “Preserving Computer-Aided Design” speaks to how we might proceed with collecting other software-dependent art production. I am particularly interested in version control and the distributed work model. (The former is almost always recorded automatically, and the latter can be seen in the existence of linked sub-files.)
With how documentation helps the acquisition process in mind, I wonder how we should take into consideration the influence of “assumed disclosure” on the production mentality. I think the question I am entertaining is not irrelevant to the shift in Atari policy depicted in Montfort and Bogost’s book: “In early days, Bushnell [Nolan Bushnell, the founder of Atari] maintained a policy that no one would be fired (although they might be denied a raise) and ensured that everyone, from executives to assembly line workers, had the same health care plan. But with VCS development organized along a model of the lone programmer who was almost completely, individually responsible for a sometimes very lucrative game, it became less tenable to claim that the programmer was not more important than any other human resource” (100).
It seems to me that knowing about this shift helps archivists determine what to collect. With early-days Atari, the emphasis seems likely to be on the product and not the individuals; it follows that the functionality of the product is the main preservation concern. With later-days Atari, the emphasis is on the individuals; it follows, therefore, that there will also be an emphasis on how they realized the product. While different production models call for different additional information, I hope there will be a way to look into different individuals’ contributions within the distributed work model.
But here is the rub. Would “assumed disclosure” of a work process recorded by software destroy the notion of a safe environment where workers and creators may explore and venture without the fear of missteps hindering their careers? Setting aside the trade-secret aspect, I am concerned about how a good preservation practice may infringe on the creative process. What do you think? Is the question even relevant to archivists?
Great post Joe. I like your scale question. My sense is that there are a few different ways of answering this.
1) We generalize through theory: That is, we take a book like Racing the Beam as a framework for us to use to understand other kinds of platforms, and also as a roadmap for thinking through the kinds of documentation they needed to have to be able to do this kind of scholarship and research.
2) We encourage this kind of work for other key platforms: For example, there are books in the Platform Studies series on various video game systems. Beyond that, there is also a book in the series on Flash. Given how widely used Flash is, that book is likely of great use to anyone who wants to preserve a set of Flash works.
3) We accept that this kind of work is potentially one of the best ways to preserve software based work: That is, it may well be that doing this kind of research is itself one of the best ways to help preserve and contextualize works. In this vein, the book itself could be a model for what critical documentation of software based works might look like.
“Or seen in a different way, the retelling of a game on a different platform can perhaps be compared to the retelling and memory sharing process of oral history.”
I really liked this comparison to oral histories. It connects to the 8-bit PBS video when they talk about how years from now people won’t even know what Ataris are, but without knowing it they’ll be able to connect to its history through 8-bit music.
It also brings up this idea of the mythos surrounding the machine, and how it can compare to the mythos surrounding the artist for analog works. Ian Bogost and Nick Montfort contribute to the preservation of the Atari by creating this book to analyze the cultural, economic, and technological context surrounding its production. In a very familiar way, many art historians have written on artists and their works to place them in their own context, thus helping preserve the work through their own contribution. As with the myth of the ET game landfill, there are many myths about artists and the production of their works; one that comes to mind is Vincent van Gogh cutting off his ear.
For your question about when is enough enough when it comes to preservation, I think for initial preservation saving the elemental characteristics of a work with the time and resources available is the best we can do. We can then rely on secondary sources like Bogost and Montfort’s Racing the Beam to do the research and keep providing fresh perspectives on the context surrounding that work.