Mystery_House.dsk: What It Is and Why It Matters

What is it?

Mystery House is an early work of interactive fiction created by Ken and Roberta Williams for the Apple II, one of the first highly successful mass-produced home computers, introduced in 1977. In interactive fiction, the player can influence the outcome of the story, a quality that particularly appealed to the program’s creators. They first conceived the idea for Mystery House after playing a text adventure called Colossal Cave Adventure. Roberta liked the concept of a text-based interactive story but thought players would enjoy seeing images to go along with the text. She designed Mystery House, basing it on Agatha Christie’s novel And Then There Were None, while her husband Ken developed the software.

When Mystery House was released in 1980, the game was extremely popular. It was the first adventure game to use computer graphics rather than just text. On-Line Systems (later Sierra On-Line) sold more than 10,000 copies in a niche but burgeoning market for home computer software. In 1987, Mystery House was released into the public domain, and modifiable versions of the game, known as Mystery House Taken Over, have been placed in the public domain as well.

A Few Brief Comments on Downloading and Playing

Downloading and playing Mystery House is challenging. At first, I attempted to play online by clicking the link on the Mystery House Taken Over webpage.

The directions indicate that you will be able to play the game online with a Java update. When that didn’t work, I attempted to download it. To run the downloaded game, you first need a Glulx interpreter, which allows the game to be played on any device, Mac or Windows, without altering the game’s original source code. Unfortunately, my attempts to download the interpreter failed. Instead, I was able to demo the game on the Internet Archive.

After a brief set of instructions, the game begins outside a Victorian mansion.

As Laura has walked us through the purpose of the game and the various walkthroughs and maps that exist to guide players, I will discuss a few challenges I faced when playing. I began without a walkthrough but soon realized how difficult it is to navigate using only two-word commands containing a single verb and noun. It took me an embarrassingly long time just to get up the stairs and into the hallway, because the verb and noun you type must match one of the roughly 70 preprogrammed commands. For example, in many cases you can’t move forward unless you first type “open door” or indicate which direction you wish to travel. You also have to give very specific commands to interact with items. If you want to pick an item up, you must use the word “take,” and you must recognize and name that item correctly. For example, there is a knife in the sink, but you must type “take butterknife” rather than the simpler “take knife.”
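To get a feel for just how constraining this kind of parser is, here is a minimal sketch, in Python rather than anything the Williamses actually wrote, of a two-word verb-and-noun parser in the spirit of the game’s roughly 70 preprogrammed commands; the command table below is invented for illustration.

```python
# A minimal sketch of a two-word verb-noun parser, in the spirit of
# Mystery House's roughly 70 preprogrammed commands. This vocabulary is
# invented for illustration; the real game's word list differs.
KNOWN_COMMANDS = {
    ("open", "door"): "The door creaks open.",
    ("go", "north"): "You walk north into the hallway.",
    ("take", "butterknife"): "You pick up the butterknife.",
}

def parse(line: str) -> str:
    words = line.lower().split()
    if len(words) != 2:
        return "I only understand two-word commands."
    verb, noun = words
    # Both the exact verb AND the exact noun must match a stored pair,
    # which is why "take knife" fails when the game expects "butterknife".
    return KNOWN_COMMANDS.get((verb, noun), "I don't understand that.")

print(parse("take knife"))        # -> I don't understand that.
print(parse("take butterknife"))  # -> You pick up the butterknife.
```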

Compared to video games today, Mystery House leaves a lot to be desired. But at the time, the game was an innovation. While the game itself fascinated novice home computer owners, the underlying programming attracted the attention of both programmers and hackers. Kirschenbaum, as we will see, finds similar interest in the underlying components of the Mystery House disk.

Kirschenbaum’s Forensic Walkthrough

The author’s walkthrough of Mystery House is “a media-specific analysis,” or “a close reading of the text that is also sensitive to the minute particulars of its medium and the idiosyncratic production and reception histories of the work” (129). In simpler terms, he examines the Mystery House disk the way a bibliographer or a paleographer might examine a fifteenth-century manuscript. Kirschenbaum uses a hex editor to examine the underlying binary data that makes up Mystery_House.dsk and discovers that there is more to the disk than just the game itself. For example, he finds that evidence of two other games, written to the disk before the game was copied onto it, can still be recovered beneath it. From this information he can extrapolate, in the same way that a historian can extrapolate from handwriting or ink type on a material object.
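For readers who want to try a small-scale version of this themselves, here is a minimal sketch, mine rather than Kirschenbaum’s, of dumping the raw bytes of a disk image in Python, roughly what a hex editor shows you; the filename is just an assumption about what a local copy might be called.

```python
# A homemade approximation of peeking at a disk image in a hex editor.
def hexdump(path, offset=0, length=256):
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    for i in range(0, len(data), 16):
        chunk = data[i:i + 16]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Show printable ASCII alongside the hex so stray text (leftovers
        # from other programs once on the disk) stands out.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{offset + i:08x}  {hex_part:<47}  {text}")

# The filename below is an assumption about your local copy.
hexdump("Mystery_House.dsk", offset=0, length=256)
```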

He concludes that while computers give the “illusion of immateriality” through their ability to correct a data discrepancy within milliseconds of its discovery, the mathematical precision of their measurements, and the impression of infinite space (unlike material media such as newspapers, TV, and records), the digital environment is one of formal materiality. This has important implications for the way scholars conduct research on digital material. Kirschenbaum describes it as difference: using a hex editor gives the researcher a view of an object that a simple textual analysis would not reveal. His use of the hex editor on the Mystery House disk is meant to “serve as a primer for bibliographical or forensic practice in future studies” (158). Not only will understanding the foundational materiality of a disk assist in preserving and storing digital information; it will also be just as important as more traditional methods like paleography in analyzing materials.

The Complexities of Paper

What do Kindles, telegrams, and restaurant menus all have in common?

They are all documents.

In Paper Knowledge: Toward a Media History of Documents, Lisa Gitelman explores significant historical events in which the use of a document, set of documents, or genre of documents was as key to shaping those events as the people who used them. In doing so, she creates a brief history of the “scriptural economy” through anecdotes from its most crucial moments in the nineteenth and twentieth centuries.

So I will begin by asking a simple question: what is a document?

You probably all just rolled your eyes at that question, right? Well, this seemingly childish question has been analyzed, examined, and reconsidered for the past century.

Gitelman states that the word “document” comes from the Latin root docere, to teach or show, which suggests that the “document exists in order to document.”

Additionally, Gitelman argues that the “scriptural economy” is an ever-expanding realm of human expression. The document can be manipulated, reproduced, counterfeited, saved, formatted, and so on. Thus, communication has grown and crossed structural borders, from paper through photocopies and into digital documents. Imagine all the documents in your possession right now: you probably have your driver’s license in your wallet, a PDF file saved on your laptop, and an electronic bank statement on your phone. Think about all the different forms of documentation and means of communication you carry, mostly without noticing, at all times.

Paper Knowledge also focuses largely on printing. Gitelman argues that nineteenth-century job printing is crucial to the history of media. Even today, the impact printers had on the subjects, authors, and editors whose work they printed has never been fully defined. Many questions are therefore still left unanswered in media history: Who was reading these prints? How were they being preserved?

Moreover, printing history can be traced through its transformation during industrialization and its competition with smaller, amateur printers. Gitelman states that during the managerial revolution, secretaries in offices “produced and reproduced documents as means of both internal and external forms of communication.” Consequently, the 1930s are recognized as an era of “new media for the reproduction of documents,” seen in the use of mimeographs, hectographs, and microfilm.

Now let’s fast-forward to the twenty-first century, when “printable documents on the web” became widely popular. Gitelman argues that the PDF file is interesting because it is so sutured to the genre of the document: “all PDFs are documents, even if all digital documents are not PDFs.” She goes on to ask, “how is the history of PDFs a history of documents, of paper and paperwork? And what are the assumptions about documents that have been built into PDF technology, and how does using that technology reinforce or reimagine the document?”

Before reading this book, I was completely oblivious to the complexity of paper. Paper can be paradoxical, ephemeral, literal, figurative, theoretical, and more. So I ask you: what do you think the difference between paper and a document is? Where is the document going in the future, and what new forms will it take? Lastly, will the tactile quality of the document be erased entirely? Gitelman’s description of a death certificate details its physical characteristics, such as its raised intaglio printing, elaborate watermark, and thermochromic ink. She argues that you do not just read such a document; you “perform calisthenics with one.” Will this phenomenon be lost as society moves toward a more digital, online presence?

TAGOKOR: migration, data fragments, and archiving for the future

On June 30, 1950, Kenneth Shadrick became the first casualty of the Korean War, and thus the first record in a sprawling, 109,975-item archive recording casualties, both deaths and injuries, in the Korean War.

TAGOKOR, standing for The Adjutant General’s Office, KORea, began as a simple punch card archive, organized initially by casualty date. From there, it would embark on a complicated route toward archiving and dissemination.

The punch cards, produced between June 30, 1950, and July 28, 1953, were designed to be as detailed as possible: name, rank, service, hometown, and so forth. These pieces of information provided the basis on which the Adjutant General’s Office catalogued the dead and injured and notified families. At the time, the data was crafted to be readable by the contemporary AG’s office, including office-specific codes.

As time went on, the number of punch cards, one for each casualty, became a burden. As a result, the AG ordered the cards transferred to 556 BPI (bits per inch) magnetic tape and the cards themselves destroyed: encoded in Extended Binary Coded Decimal Interchange Code (EBCDIC), the data began a new life as a digital record. One more transfer, to denser 1600 BPI magnetic tape, followed on January 29, 1970.

In 1989, the National Archives and Records Administration (NARA) acquired a copy of TAGOKOR, and there the problems began.

First, the aforementioned EBCDIC was non-standard, diverging sharply from the industry-standard ASCII format. Second, NARA did not have the right equipment to read the data, and had to borrow time on systems like those at the National Institutes of Health to copy the data over to 27871 BPI magnetic tape and verify the information. Third, in the verification process, bits were dropped because there was no way to check the original record. Finally, NARA itself had been an agency in flux, moving multiple times between 1968 and 1988.
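To make the format problem concrete, here is a minimal sketch of transcoding EBCDIC bytes in Python; the sample bytes are invented, and cp037 is only one common EBCDIC code page, not necessarily the exact variant used for TAGOKOR.

```python
# Minimal sketch: decoding EBCDIC with Python's built-in cp037 codec.
# The record bytes are invented for illustration; cp037 is one common EBCDIC
# code page and may not match the exact variant used for TAGOKOR.
ebcdic_record = bytes([0xE2, 0xC8, 0xC1, 0xC4, 0xD9, 0xC9, 0xC3, 0xD2])

print(ebcdic_record.decode("cp037"))   # -> SHADRICK

# The same bytes are opaque to ASCII-only tools, which is part of the problem:
print(ebcdic_record.decode("ascii", errors="replace"))  # replacement characters
```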

Ultimately, TAGOKOR was brought into a somewhat readable format, albeit with abundant errors. In 1999, the Archival Electronic Records Inspection and Control system confirmed these errors, notating the record accordingly, and in 2012, the Electronic Records Archives finally posted TAGOKOR for public consumption, errors and all.

This route from punch card to internet and the data artifacts and errors within point to important questions in archival work.

How archives handle incomplete records

When modernizing records, archives are often faced with an incomplete, error-riddled data set. Sometimes the incompleteness stems from incomplete data entry, but in some cases, like TAGOKOR, there is a confluence of problems archivists must tackle. The considerations an archive must weigh come down to how capable it is of preserving the record and how much money it has to do so. For TAGOKOR, NARA faced not only missing bits, which could be anything from codes to data flags, but also a missing source document: the punch cards had been destroyed, and many of the codebooks were deteriorating or unreadable.

When creating a digital record from a pre-digital computer record, archivists should be aware of these circumstances first and foremost, something NARA was quickly immersed in. Because you cannot simply look at the “12-zone” in the indexing region of a punch card to verify codes, you end up without vital information, which is why TAGOKOR is riddled with ampersands and empty fields. Something as simple as the physical header of a punch card provides a wealth of information that was not considered at disposition, after the records were put on magnetic tape.
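As a thought experiment, here is a minimal sketch of how an archivist might flag that kind of placeholder junk in fixed-width records; the field layout is entirely invented and does not reflect TAGOKOR’s actual record structure.

```python
# Hypothetical sketch: scanning fixed-width records for the ampersands and
# empty fields that signal dropped bits or unverifiable codes. The field
# layout is invented and does not reflect TAGOKOR's actual record layout.
FIELDS = {"name": (0, 20), "service_number": (20, 28), "casualty_code": (28, 30)}

def flag_problems(record):
    problems = {}
    for field, (start, end) in FIELDS.items():
        value = record[start:end]
        if "&" in value or not value.strip():
            problems[field] = repr(value)
    return problems

sample = "SHADRICK KENNETH R  1234&678  "
print(flag_problems(sample))
# -> {'service_number': "'1234&678'", 'casualty_code': "'  '"}
```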

If anything, an archivist should strive to maintain physical copies as often as possible, especially associated ephemera like entry guides and memos. Without these little pieces of information, an archive can become a big problem.

Planning for the future

The most vital job of an archive is saving data, objects, and other relevant items for the future. It is therefore important that archivists are aware of the state of technology, future changes, and the likelihood of obsolescence in their methods. Original entrants are bound not by these considerations but by their own entry guidelines; yet the eventual need to save records means that even a data entry clerk should be aware of how the records will be viewed in the future.

As a best practice, an archive should always err on the side of providing data in its simplest form from the word “go.” Digitally, there exist international data standards that will be readable for decades to come. Furthermore, providing stable physical copies of digital archives when possible creates a secondary preservation tool, one that can potentially be used for migration in the case of an archival failure.

Understanding that some formats, like EBCDIC, are non-standard and more or less inadequate for archiving should be the first hint of the direction an archival project is headed. If an archive chooses proprietary formats over access, that does not mean more data safety; it simply means less access in the future. Anyone over the age of 30 knows this when they come across references to RealMedia or RealPlayer.

Furthermore, beginning your archive in a more universal format also provides an opportunity to tailor the data for a multitude of internal uses. With a wide variety of data tools, archivists can create a basis for future data verification, manipulation, and other acts of finesse upon the data, all by simply using a format that is standardized and widely used.

What does this mean for archivists?

For archivists, having data easily accessed, especially born-digital or first-generation digital data, opens up the archive to outside interpretation and interaction. For TAGOKOR, this included the efforts of Whitey Reese, whose attempt at decoding the records and creating a database for veterans and family members to look up casualties began in 2000 and persisted for a few years thereafter. Even TAGOKOR, as difficult as its encoding was, remained somewhat readable to Reese, who benefited from the clear and accessible format provided to him.

Archivists should look at examples like TAGOKOR and ask themselves what good their work is if it is going to be unreadable.

Archivists should also be ready to admit that sometimes a large physical record is vital, especially when the records are heavily technological in nature. Punch cards, diskettes, magnetic tapes, and so on should all be preserved until the absolute last bit is verified.

Another consideration for archivists is providing ample explanation of the data. For TAGOKOR, the codes used in some fields were lost to time because they came from unpreserved memos and the like. Standardizing is one thing, but ensuring that the standard is interpretable by future readers is another.

Finally, archivists should be fully cognizant that although not every archive they receive for digitizing will be as incomplete as TAGOKOR, they should treat each one as though it were. Instead of assuming that due diligence was the originator’s duty, archivists should trust their instincts when a seemingly complete dataset turns out to have gaps. The “why” of those gaps is as much a part of the history of the record as the data itself.

Conclusion

Archiving a new digital collection that may require interpretation is a difficult task. Archivists are not code breakers by trade, but by necessity they often become such. Looking to an archive like TAGOKOR as an example of both best practices and the resilience of data shows how a record can be an unreadable salad of bits and nonetheless be of extreme importance. Archivists should be prepared not only to work with this data in an intense and complete way but also to let gaps remain until they can be filled. As long as an archivist follows good preservation practices, the data will remain intact until someone else can come along and make the fix.

Glitching can give us deeper understandings of digital objects

Back in 2012, our fearless leader Professor Owens wrote this cool blog post explaining why glitching digital objects can give us a deeper understanding of their value and how to break them down.

As he observed, digital objects are encoded bits of information on some sort of medium, designed to be read by particular software. But if we play with those bits of information and break the digital objects down a little, we can get a better grasp of an object’s internal structure, how the computer understands it, and what the original object was meant for.

There are three ways to break down and alter digital files that give us a more multidimensional, ‘non-essentialist’ reading of digital objects.

First, you can take an .mp3 or .wav file that you already had on your computer or downloaded online and change its file extension to .txt. I did this with an .mp3 of an oral history interview I recorded a few years back, and then with the track “Who Lives, Who Dies, Who Tells Your Story” from Hamilton the musical.

This is from the oral history interview I conducted with a woman involved in a student-led campaign at Duquesne University to save the school from financial collapse in the 1970s. The text is pretty unintelligible. This audio was created on a simple recorder and brought into Audacity, so its format might not be super advanced.
This one is the track from Hamilton, still pretty unintelligible, but if we zoom in we can see text that reads (Original Broadway Cast Record):soar2data0Oringinal Broadway Cast of Hamilton and other tidbits of information. This is a professional recording, so it makes sense that some of the metadata is recognizable here.
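If you would rather not rename files by hand, here is a minimal sketch that does the same experiment programmatically, scanning an audio file’s bytes for runs of readable text, which is where metadata like the cast-recording credit above tends to surface; the filename is just a placeholder for whatever .mp3 you have on hand.

```python
# Minimal sketch of the "open an .mp3 as text" experiment: keep only runs of
# printable characters, where tags and other metadata tend to show up.
# The filename is a placeholder for whatever audio file you have on hand.
import re

def strings(path, min_len=6):
    with open(path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

for text in strings("hamilton_track.mp3"):
    print(text)
```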

You can then try altering the same file from .txt to a .raw file, which should give you a pixelated image of what the audio looks like. If you do this for .wav and .mp3 files you should see a noticeable difference and get a feel for patterns in the data. However, my MacBook, for some inexplicable reason, cannot read these files once I change them to the .raw format. No matter which files I converted, I got the same error notice.

Boo you Macbook
Sad face 🙁

I used both the Preview and iPhoto apps on my Mac and got the same notice both times. I’m not sure if it’s just an issue with my Mac or with Macs in general, but I’m hoping one of my co-practicum bloggers can give you a better idea of what this looks like. If not, you can get a glimpse of it in Prof. Owens’ blog here.
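In the meantime, here is a minimal sketch of approximating the .raw view in Python using the Pillow imaging library instead of an image viewer; the filename, the image width, and the choice to trim the ragged last row are all my own assumptions.

```python
# Minimal sketch: render an audio file's raw bytes as a grayscale image,
# roughly what opening the file as .raw in an image viewer would show.
# Requires Pillow (pip install Pillow); filename and width are assumptions.
from PIL import Image

def bytes_to_image(path, width=512):
    with open(path, "rb") as f:
        data = f.read()
    height = len(data) // width
    data = data[: width * height]  # drop the ragged last row
    # "L" mode means 8-bit grayscale, one pixel per byte.
    return Image.frombytes("L", (width, height), data)

bytes_to_image("oral_history.mp3").save("oral_history_glitchview.png")
```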

My Mac did, however, allow me to try the third glitching technique, which is to take a digital image in .jpg format, change its file extension to .txt, remove some of the data, revert it back to .jpg, and open it back up to see how your changes altered the image.
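If hand-editing the .txt version feels fiddly, here is a minimal sketch of the same glitch done programmatically; the filenames, the number of edits, and the decision to skip the first few kilobytes (a crude way to leave the JPEG header intact so the file still opens) are all arbitrary choices of mine.

```python
# Minimal sketch of glitching a JPEG by overwriting bytes in the middle of
# the file. Filenames and offsets are arbitrary; skipping the first few
# kilobytes crudely protects the header so the image still opens.
import random

def glitch_jpeg(src, dst, n_edits=20, skip=4096):
    with open(src, "rb") as f:
        data = bytearray(f.read())
    random.seed(1980)  # fixed seed so the glitches are reproducible
    for _ in range(n_edits):
        pos = random.randrange(skip, len(data))
        data[pos] = random.randrange(256)
    with open(dst, "wb") as f:
        f.write(bytes(data))

glitch_jpeg("narrator.jpg", "narrator_glitched.jpg")
```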

The images below are the original and two glitched versions of a photo of the narrator I interviewed a few years ago.

the original

The first level of damage, which has made the image darker and seems to have duplicated it.
In this third level of damage the image now has a magenta tone, and we can see how it has been duplicated and zoomed out. Cool stuff, honestly.

By looking at these glitched files, we can see how the original file was damaged by removing or altering its data. We can also see how the image was intended to be viewed, how the data works to produce it, and what happens when some of that data is taken away.

In conclusion, glitching is really cool because it helps us read objects “against the grain,” if you will: it lets us see the digital object from multiple dimensions and perspectives to better understand it. It’s also just kind of fun and has led to some new pathways in the creation of digital art.

But to all my digital humanitarians out there, what do you think we can stand to learn from glitching digital artifacts and how might we use this technique in our work?

Mystery House: The First Graphics Adventure Game Ever

Mystery House is an adventure game released in 1980 by Roberta and Ken Williams for the Apple II. The game is remembered as one of the first adventure games to feature computer graphics and the first game produced by On-Line Systems, the company that would evolve into Sierra On-Line. Though the game is often considered the first to use graphics, role-playing video games had already been using graphics for several years at the time of release. Applying graphics to an adventure game, however, was unprecedented, as previous story-based adventure games were entirely text-based.

Development and Release

Roberta Williams designed Mystery House, the first graphical adventure game, as a detective story inspired by Agatha Christie’s And Then There Were None. Her husband Ken spent a few nights developing the game on his Apple II using 70 simple two-dimensional drawings done by Roberta. The software was packaged in Ziploc bags containing a 5¼-inch disk and a photocopied paper describing the game, and was sold in local software shops in Los Angeles County. To their great surprise, Mystery House was an enormous success, quickly becoming a best-seller. In 1980, the Williamses founded On-Line Systems, which would become Sierra On-Line in 1982.

The Game

The game starts near an abandoned Victorian mansion. The player is soon locked inside the house with no option other than to explore. The mansion contains many interesting rooms and seven other people: Tom, a plumber; Sam, a mechanic; Sally, a seamstress; Dr. Green, a surgeon; Joe, a gravedigger; Bill, a butcher; and Daisy, a cook. Initially, the player has to search the house to find a hidden cache of jewels. However, terrible events start happening and dead bodies (of the other people) begin appearing. It becomes obvious that there is a murderer on the loose in the house, and the player must discover who it is or become the next victim. The parser understands two-word commands, the monochrome graphics are extremely basic, and there is no sound to speak of.

How to Play

Map


Utilities


If you would like another walk-through, here is an example.

Want to see someone play this? Go here. Maybe put on some music; the sound of the keyboard is depressing.

Kirschenbaum Using Mystery House

In chapter 3 of Mechanisms: New Media and the Forensic Imagination (2008), Matthew Kirschenbaum uses a disk image of the vintage interactive fiction game Mystery House to conduct a forensic walkthrough, or multivalent reading, of an electronic object: a bitstream image of an original instance of 5 1/4-inch disk storage media. The exercise allows the reader to explore critical reading strategies that are tightly coupled to technical praxis, including the use of a hex editor to inspect heterogeneous information once deposited on the original storage media. It brings the distinction between forensic and formal materiality more sharply into focus, using the overtly forensically charged spaces of the original game to peek and poke at the content of the disk image. Chapter 3 locates the “factive synecdoches” of bibliographical knowledge within new media, while exposing a new kind of media-specific reading, new tools for critical practice, and relevant contexts surrounding personal computing in the 1980s. Forensics is ultimately presented as a mode of difference or defamiliarization rather than an attempt to get closer to the soul of the machine (20). By walking through Mystery_House.dsk, by reading the disk image forensically, he conducts a media-specific analysis: a close reading of the text that is also sensitive to the minute particulars of its medium and the idiosyncratic production and reception histories of the work (129). Kirschenbaum successfully argues that formal materiality is the normative condition of working in a digital environment. Mystery House emulators are textbook examples of formal materiality, relying on cascades of virtual machinery to reproduce the functionality of long-gone systems and hardware, the physical limitations of mothballed chips re-instantiated in formally construed mechanisms of control and constraints (155).