Innovation can mean many different things depending on the topic being discussed. In Racing the Beam, Nick Montfort and Ian Bogost write that “Technical innovations are often understood as the creation of new technology—new materials, new chip designs, new algorithms. But technical innovation can also mean using existing technical constraints in new ways, something that produces interesting results when combined with creative goals” (p. 53). The idea that things already in existence can be used in new or different ways is one factor that led computers and games to evolve into what we know today.
In Racing the Beam, we learn about the history of the Atari VCS, later renamed the Atari 2600, and its impact on the development of gaming. In spite of what we now consider limited technology, it changed how people relaxed and paved the way for future computer and gaming systems. Programmers of the Atari era developed computer opponents, so a game no longer required two people to play, and created interchangeable cartridges on which to store the games, so new hardware no longer had to be sold for each game, lowering the price for consumers.
What interested me most was that Atari decided to use less than the full computing power available at the time in order to make the system affordable and attractive to the public. The programmers worked within those self-imposed limitations to create what they needed and wanted the games to do. One of the cost-saving measures was to include very little RAM in the system, forcing the programmers to find creative solutions to complex problems while ‘racing the beam’: because there was no frame buffer, a program had only the time it took the television’s electron beam to draw one line of the screen in which to compute the next one to be displayed.
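To make that constraint concrete, here is a toy model in Python (the real games were hand-tuned 6502 assembly; every name and number below is invented for illustration): with no frame buffer, the program has to produce each scanline’s graphics, often by fetching sprite bytes straight from ROM, within that line’s tiny cycle budget.

```python
# A toy model of "racing the beam": the Atari VCS had no frame buffer,
# so a game computed each scanline's graphics just before the TV's
# electron beam drew it. All names and numbers here are illustrative;
# real VCS code was hand-tuned 6502 assembly.

VISIBLE_SCANLINES = 192       # NTSC visible picture area
CYCLES_PER_SCANLINE = 76      # rough CPU cycle budget per line

# Sprite graphics lived in ROM as raw bytes, one byte (8 pixels) per row.
PLAYER_SPRITE_ROM = [
    0b00111100,
    0b01111110,
    0b11011011,
    0b11111111,
    0b01100110,
    0b00111100,
]

def draw_frame(player_y):
    """Simulate one frame: choose each line's graphics inside its budget."""
    frame = []
    for line in range(VISIBLE_SCANLINES):
        # On real hardware, all of this must fit in ~76 CPU cycles.
        sprite_row = line - player_y
        if 0 <= sprite_row < len(PLAYER_SPRITE_ROM):
            graphics = PLAYER_SPRITE_ROM[sprite_row]  # fetch from "ROM"
        else:
            graphics = 0                              # blank line
        frame.append(graphics)                        # the "beam" draws it
    return frame

if __name__ == "__main__":
    for byte in draw_frame(player_y=90)[88:98]:
        print(f"{byte:08b}".replace("0", ".").replace("1", "#"))
```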
While the system was affordable for the average person, it placed a lot of strain on the programmers developing the games. They needed to be creative so that their games would be playable on the limited hardware. This creativity led to more of a game’s graphics being stored in ROM, a hardware component, rather than computed in software, as is done now. Preserving these old games and programs without the original hardware seems difficult, even if we know how the games were created, short of completely re-creating them within new technology. Everyone has played Pong or Space Invaders at some point in their lives, but few have used an Atari. Can we say that it is really the original game being played and not just something that was created to look and feel like it?
Currently, innovation in games and many other digital media has very little to do with hardware, as software can accomplish the same tasks and is easier to program. However, the problem becomes one of file formats. As new formats are created and others become obsolete, preserving the original file becomes challenging. For instance, a digital audio file could exist in any of numerous formats, but only a few are of preservation quality. Should all digital audio in a library or archive be converted to the same format for preservation? Or should the original file be preserved for as long as possible? Text, on the other hand, is usually preserved as a PDF file, but there are many different subtypes that all share the .pdf extension, and only the PDF/A subtype is meant for preservation. How do you know what the subtype is? Or even whether the format has subtypes?
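For what it’s worth, conforming PDF/A files are supposed to declare themselves in their embedded XMP metadata, so a crude self-identification check is possible. Here is a rough Python sketch (the filename is hypothetical, and a real workflow would use a proper validator such as veraPDF rather than this byte-level heuristic):

```python
# A rough heuristic for spotting the PDF/A subtype: conforming PDF/A
# files declare a "pdfaid" identifier in embedded XMP metadata. This is
# only a sketch; it trusts the file's claim and is no substitute for a
# real validator such as veraPDF.
import re

def guess_pdfa_part(path):
    """Return the claimed PDF/A part (e.g. '1', '2') or None."""
    with open(path, "rb") as f:
        data = f.read()
    # Matches either <pdfaid:part>1</pdfaid:part> or pdfaid:part="1"
    match = re.search(rb"pdfaid:part(?:>|=\")\s*(\d+)", data)
    return match.group(1).decode() if match else None

if __name__ == "__main__":
    part = guess_pdfa_part("example.pdf")  # hypothetical file
    print(f"Claims PDF/A-{part}" if part else "No PDF/A marker found")
```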
In their article “Digital Formats: Factors for Sustainability, Functionality, and Quality,” Caroline Arms and Carl Fleischhauer point out that “Preservation of content in a given format is not feasible without an understanding of how the information is encoded as bits and bytes in digital files” (p. 3). However, I would argue that expecting every information professional to understand how all digital files work is more than is really needed to preserve the information. A basic knowledge that some formats are better than others can be all that is needed in a professional environment, as the details of file formats can be found online if they are required.
Megan, your post got me thinking about originals and versions and the difference between the original game and the game in play – or as played, specifically, the Let’s Play phenomenon (where gamers record play-throughs and then post them, generally to YouTube). If we agree that a game is more than just its components, that it’s also the experience, then would the varying ways of playing the game, the different experiences, also count as valuable parts of a particular game’s record? For an adventure game where time counts, you’d want to have both the absolute fastest run and the fastest run with the best possible score, right? One could certainly get this information from the code – in sector x there is the option for bonus y if combination of elements z is met – but since this is a multi-sensory experience, shouldn’t a preservation package make every effort to include the player’s perspective of the experience in its optimal form, in the same way that traditional special collections would preserve the rare and valuable? Would not the proof of an exquisite execution of the game be a worthy addition to the AIP?
Good point, Catherine. I do think that the players should not be forgotten when it comes to preserving a game, because the experience of playing can be considered an integral part of the game. The same game on different systems can offer vastly different experiences; the Atari version of Pac-Man, for instance, was poorly received because it was so different from the original arcade game.
But is finding the ‘best’ paths through the game cheating? Examining the code to get the best score or to figure out the fastest completion time seems to go against the intention of the game, which I’m assuming is to play the game and have fun. In addition, to complete a game as quickly as possible, the gamer will usually modify the game in some way, normally to make it run faster or to make the player harder to kill. Is that still part of the ‘gaming experience’, or has it been changed enough that it is no longer the same experience as someone playing the game without those advantages?
You both bring up some interesting points here about how the game is “more than its components,” it is the user experience (including novel ways of playing the game). I think it is worth remembering that users do not always play a game to get the best score or to work towards any type of goal that is implemented by the designer. Video games–especially the ones that show up on YouTube–frequently facilitate emergent gameplay, situations in which game mechanics allow the user to do things that the designers did not necessarily intend. A great example is the “rocket jump” strategy from the game Quake: users can quickly soar across the virtual battlefield by firing a rocket at their feet and, well, jumping. This strategy sounds pretty ridiculous and is something the designers did not consider when they programmed their physics engine; however, it became an integral part of any serious player’s battle strategy. Aspects of emergent gameplay are essentially invisible to an archivist because they are not part of the code, just the social context. I think users “cheating” is perhaps some of the most interesting stuff to consider when preserving these works (and I think glitch artists and chip tunes performers would agree). As Catherine suggests, I definitely think that these types of things are valuable parts of a game’s record.
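The rocket jump is a nice case precisely because the behavior falls out of a general physics rule rather than a designed move. Here is a toy sketch in Python (all constants invented; this is not Quake’s actual engine): the engine applies explosion knockback to any nearby body, including the shooter, so the impulse simply stacks on top of the designed jump.

```python
# Toy physics sketch of how a "rocket jump" emerges: knockback is a
# general rule (explosions push nearby bodies), not a designed move.
# All constants are invented; this is not Quake's actual engine.

GRAVITY = -9.8          # m/s^2
JUMP_VELOCITY = 4.0     # m/s, the designed jump
ROCKET_IMPULSE = 9.0    # m/s of knockback at point-blank range

def peak_height(v_up):
    """Max height for an initial upward velocity under constant gravity."""
    return v_up ** 2 / (2 * abs(GRAVITY))

print(f"plain jump:  {peak_height(JUMP_VELOCITY):.2f} m")
# Fire a rocket at your feet mid-jump: the impulses simply add, because
# the engine applies knockback to the shooter like any other body.
print(f"rocket jump: {peak_height(JUMP_VELOCITY + ROCKET_IMPULSE):.2f} m")
```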
Interesting discussion about the larger context around digital materials. I would agree that these aspects may be important to preserve. In a comment on his post Joe mentions bringing in archival appraisal theory into the discussion, and I was thinking about how documentation strategy or other techniques could serve as a model for collecting some of this surrounding material. Especially given that many more recent games are designed primarily or exclusively for collaborative online play, in some cases gameplay videos preserve a better sense of what it’s like to play the game (and why people play it) than simply preserving the code minus the broader context.
(As an aside, it was interesting to read Montfort and Bogost’s description of how video games moved from the social setting of the arcade into the home setting, with the subsequent development of one-player games and competent computer AI. Now as games have focused increasingly on the online experience, the pendulum seems to have swung back to some extent–the one-player version is often non-existent or an afterthought, and the arcade community has been replaced by the online community).
Even if the computer opponent is less prevalent in gaming, computer AI is getting better. First, an AI learned tic-tac-toe, then checkers, then chess; now they are learning Go. While Go has very few rules, it is considered the world’s most complex game due to the number of possible moves. Next month, Google’s AI, AlphaGo, is going to play Lee Sedol, one of the world’s top Go players. They plan to stream the games on YouTube.
https://googleblog.blogspot.com/2016/01/alphago-machine-learning-game-go.html
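For a rough sense of why Go resisted brute force long after chess yielded, here is a back-of-the-envelope calculation using commonly cited approximate branching factors and game lengths (exact figures vary by source; the point is the scale):

```python
# Back-of-the-envelope comparison of game-tree sizes, using commonly
# cited rough figures: ~35 legal moves per chess position over ~80
# moves, vs. ~250 legal moves per Go position over ~150 moves.
CHESS_BRANCHING, CHESS_DEPTH = 35, 80
GO_BRANCHING, GO_DEPTH = 250, 150

chess_tree = CHESS_BRANCHING ** CHESS_DEPTH
go_tree = GO_BRANCHING ** GO_DEPTH

print(f"chess: ~10^{len(str(chess_tree)) - 1} positions in the game tree")
print(f"go:    ~10^{len(str(go_tree)) - 1} positions in the game tree")
```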
Megan, great point regarding the difference between the context programmers were working in on the Atari 2600 and the context they work in now. This raises a question for us to think through: given how much more complex current computing systems and environments are, to what extent do the lessons from Racing the Beam apply to contemporary creative computing work?
I think one of the places where we see the general value of the book relates to your other note about innovation in games. On page 97, related to Yars’ Revenge, they say “When the work being developed is innovative, it is often enabled by new exploration of a platform’s capabilities, by reconceptualizing the platform’s limitations, and by attending in new ways to how and why people use it.” In this vein, I think we can start to see all the various platforms (the parts that a developer or programmer takes for granted in making something) as providing a basis for dialog through their affordances in the creative process.
I thought you brought up some really great points in this post. It really made me think about how, as professionals, we think about the long-term effects of these older technologies, and even current ones. I ended up having more questions than answers. Which is more important, the hardware or the software? It’s almost like “which came first, the chicken or the egg?” I don’t really have an answer for this, except that with the advantage of emulation one can “feel” like they are utilizing both. For the long term, I can see how hardware is becoming more and more obsolete. Gone are the days when we need a CD player, or an iPod, or a specific gaming system for video games. More and more, these are just file types made completely of 1s and 0s. The next question I thought of was: does there need to be standardization of file formats for preservation? For this I feel like the answer is a big YES. Having taken Digital Preservation last semester, I personally know that the more types of files there are, the more complicated things can get.
I, admittedly, do not know a lot about or even care much for video games, but I found it really interesting how the programmers of the early days of Atari had to find such creative ways to make the games. It put into perspective for me how the tech of the ’80s really wasn’t as bad as I had thought. The statement I liked best from you, and the one that really stuck with me, was, “The idea that things already in existence can be used in new or different ways is one factor that led computers and games to evolve into what we know today.”
“However, I would argue that expecting every information professional to understand how all digital files work is more than is really needed to preserve the information. A basic knowledge that some formats are better than others can be all that is needed in a professional environment, as the details of file formats can be found online if they are required.”
This is a great point. If nothing else, it’s entirely practical. We can’t expect that the entire field will become technology wizards. This is especially true for the many archivists with humanities backgrounds who are unfamiliar or even uncomfortable with technology. It’s prohibitive and potentially dangerous to the field if we insist that all archivists and information professionals must also maintain knowledge comparable to that of a computer scientist or programmer. It’s almost certain that our field will continue to progress further into the realm of technology (after all, that’s why digital archivists are around!), but as you said, a working knowledge of file formats is what’s necessary (for now) to make educated decisions about basic preservation.
This also points to the interdisciplinary nature of our field as it evolves. Collaboration between the technology and archival worlds is an essential component if we hope to adequately preserve digital materials. Rather than trying to transform archivists into also being technological masterminds, our field should seek to build solid working relationships with individuals across the relevant disciplines. We are stronger and more effective when we work together.
I agree with you completely; expecting one individual to learn and master the entire scope of skills required to preserve digital records is ridiculous. The nature of the field really calls for a team of specialists, not someone who kinda knows everything. The problem is convincing the people with money and authority that this is the case, because for some reason the majority seem to think IT people are all that are necessary to preserve their records.