Preserving Undertale: Do you want to have a bad time?

This is the trailer for Undertale, a video game for PC and Mac released in September 2015. To those uninitiated in the world of indie games, it might seem like something out of the 1990s: the graphics and soundtrack are reminiscent of older games made for the NES or Game Boy. The graphics and sounds are nothing new, and even playing such games on a modern system is rather old hat, as emulators have grown in popularity.

However, Undertale has made quite the splash in the gaming world. On the online distributor Steam, more than one million people own Undertale. The game was named the game of 2015 by IGN and the best game ever by the GameFAQs community. It is significant simply because of its popularity, but such accolades after only a few months on the market, for an indie game made by essentially two people, raise a few questions: why is Undertale so exceptional, and how did it elicit such an intense fan response? And, as archivists, how and what should we try to capture from Undertale’s moment in video game history?

[The following contains some mild spoilers for the plot of Undertale. Be forewarned.]

The Power to “Save.”

Undertale follows some standard features of a role-playing game. The player controls a single character, a child who has fallen into the Underground, a large cave-like area below a mountain where monsters have been exiled after a war with humans. The player encounters monsters, some friendly and some not, and talks, battles, and trades with them, passing through different towns and regions on the journey back to the surface.

The game acts as a commentary on the RPG genre as a whole in a number of ways. First, you can play the game without killing anyone. Toby Fox, the creator of Undertale, built in a system where the player can “Act” in a number of ways to dissuade creatures from attacking; completing the game this way is known as a “True Pacifist” run. Alternatively, the player can kill everything and everyone they encounter, a method many gamers follow in other RPGs; this is known as a “Genocide” run.

Another interesting feature is that the game remembers what players do, even if they do not save. For instance, if the player accidentally kills the first main “boss,” goes back to a previous save file, and spares the boss instead, the game acknowledges the erased timeline in the dialogue that follows.

Such features have not been common in earlier games, even those made by major gaming companies, as improved graphics and battle mechanics have been valued over more introspective design. When commentary on the genre has appeared, it has typically been incorporated into dialogue, with characters breaking the fourth wall to discuss RPG tropes. Toby Fox, instead, has chosen to comment on the genre through the very mechanics of the game itself.

This is what articles and reviews most often praise, and for that reason those reviews and articles are significant to Undertale: they capture the moment in gaming when the game was released.

Underminers and Source Code

Downloading a copy of Undertale is not terribly challenging, so presumably an archive could obtain a copy and maintain the hardware and software needed to run it. However, as Matthew Kirschenbaum notes in his book Mechanisms, there are things we do not see that are important to how the game functions, and these matter not only to scholars of gaming but to the community itself. Because the game remembers what players do, an entire section of the Undertale wiki is devoted to “Consequence Avoidance,” so users can TRULY erase saved data, which requires digging into the temporary files and scripts the game leaves behind.
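As a rough illustration of what “truly erasing” entails, here is a minimal Python sketch. It assumes the Windows save location and file names as documented by the community; both are assumptions on my part and may vary by platform and game version.

```python
import os
from pathlib import Path

# Undertale's save directory on Windows (the Mac path differs); the file
# names below follow community documentation and are assumptions here.
SAVE_DIR = Path(os.environ.get("LOCALAPPDATA", "")) / "UNDERTALE"

# Files the "Consequence Avoidance" guides identify as persistent: the
# ordinary in-game reset leaves some of these markers behind.
PERSISTENT_FILES = [
    "file0",                   # the regular save slot
    "undertale.ini",           # settings plus flags checked on launch
    "system_information_962",  # hidden markers left by certain endings
    "system_information_963",
]

def files_to_purge(save_dir: Path) -> list:
    """Return the persistent save files that actually exist on disk."""
    return [save_dir / name for name in PERSISTENT_FILES
            if (save_dir / name).exists()]

for target in files_to_purge(SAVE_DIR):
    print(f"would remove: {target}")  # swap in target.unlink() to delete
```

The point of the sketch is not the deletion itself but what it implies for archivists: the game’s state lives partly outside the visible save slot, exactly the kind of invisible mechanism Kirschenbaum describes.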

Toby Fox said on Twitter that he originally did not want people datamining the game, at least for its first year, although his stance has softened since January. Because Fox made the game in an application called GameMaker, a lot of the data can be extracted without the source files themselves. A Reddit community called the Underminers has gone through many of the game’s files and found new and interesting secrets. Because Fox created a game with such depth, where certain actions are remembered and trigger different future dialogues and interactions between characters, such information is invaluable to those trying to understand Undertale without playing through it multiple times.

The Fan Community: Memes, Art, and Games

The fans of this game have been incredibly active and incredibly creative in their own right. Many fan games have come about, particularly with data from the Underminers. These all build on the plot and lore of the original Undertale, often letting the player battle existing characters who were not fought in the original, or introducing new characters who discuss their interactions with established ones. Most of these fan games take the form of a battle in which the player can choose whether to spare or kill their opponent, highlighting that this mechanic, along with the game’s detailed plot, is what gamers thought most important.

Another major addition from the fan community has been art and memes. Undertale has a number of memorable, oft-repeated lines, which people across the gaming community have used to create memes and art pieces. These are significant and useful because, as a casual gamer myself, I have noticed that Undertale references have become abundant; people will say things like “[Insert anything here] fills you with determination,” “You’re gonna have a bad time,” or “Get dunked on!” The memes help document where these phrases appear in the game, how they are used, and why they matter.

The fan response to this game has been immense and is spread all over the internet. Documenting and archiving it in some way would allow users to see similar works and to see when certain derivative games, stories, and images became part of the meta created by Undertale fans. This could also point to a growing trend for indie games. While major gaming companies produce games with expansive stories, requiring 40+ hours to complete and packed with images, characters, and side stories beyond the “main plotline,” Undertale is confined to one plotline with a few sidequests and can be completed in under 10 hours. The fans have filled in everything beyond that, and this could become a pattern for future indie games.

Which Path to Take?

All of these aspects of Undertale are valuable, and could be documented/archived in some way. Their significance goes beyond Undertale itself, and the information they provide helps users understand Undertale, the gaming industry, gaming communities, and ultimately how the internet has affected so many aspects of how gamers and game creators interact.

Right now, I think that perhaps the most interesting route to take is in data mining and through the Underminers. This path, in my mind, highlights a number of the significant features of Undertale. By looking at certain aspects of Fox’s code available through data mining, and attempting to archive that content in a useful way, we can highlight the mechanics used by Fox that were so crucial to the game’s success. This type of data would presumably be useful for future game creators, something Fox seemed to be interested in fostering.

This type of project also highlights a moment in gaming history in which indie games are springing up because programs like GameMaker and RPG Maker let designers work efficiently, while also letting gamers uncover a game’s secrets before playing it. Because of this, working with datamining and Undertale can include some social history as well, perhaps with a section on Fox’s opinions about datamining and the Underminers’ reaction to those opinions.

Unlike the game Undertale, I think all routes here could lead to happy, fruitful conclusions.

The significance of Two Headlines, the Twitter bot

What is a bot?

The modern world is driven by the internet, especially social media. The popular microblogging site Twitter claims to be “your window to the world,” with several hundred million active users posting millions of tweets per day. Bots, little bits of code that each do one thing, are everywhere, and many of them post on Twitter. There is even a “botifesto” extolling the virtues and possibilities of bots, not just those on Twitter, and the myriad actions they are designed to perform; it tries to capture the full breadth of what bots are and what they could be.

Twitter, with its fixed 140-character limit and its conventions about what posts should look like, is where artists and programmers have turned these bits of code into a new form of internet-based art; it is easier to create a bot that does something interesting or different there than anywhere else on the web.

But, what is a Twitter bot? The most apt definition I could find was from The New Yorker:

“Twitter bots represent an open-access laboratory for creative programming, where good techniques can be adapted and bad ones can form the compost for newer, better ideas. At a time when even our most glancing online activities are processed into marketing by for-profit bots in the shadows, Twitter bots foreground the influence of automation on modern life, and they demystify it somewhat in the process.”

The definition suggests several reasons someone might want to preserve a bot: to study or learn from its code, or to understand what it says about modern culture and modern life. I would add another: simply because they find it funny. Researchers are already studying what bots say about modern culture, both through their posts and through the people who interact with them.

 

Why this bot?

Screenshot of @TwoHeadlines

Two Headlines takes two news headlines from Google News, swaps the subject of one into the other, and posts the combined result. The posts give a humorous, if slightly jumbled, look at the current events happening around the world, at least according to Google. In under three years, the bot has managed to post more than 20,000 times and gain over 5,000 followers. While not record-setting for Twitter, that is a respectable following for something that is not advertised and relies solely on word of mouth.
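The real bot is open source and uses proper entity recognition to decide what to swap; purely as a toy illustration, here is a much cruder Python sketch that treats the first word of each headline as its topic. The headlines and the first-word heuristic are my own assumptions, not how Kazemi’s code actually works.

```python
import random

def mash_headlines(h1: str, h2: str) -> str:
    """Crude stand-in for Two Headlines' topic swap: graft the first
    word of headline 2 (our fake 'topic') onto the rest of headline 1.
    The real bot uses actual entity recognition, not word position."""
    topic = h2.split()[0]
    predicate = h1.split()[1:]
    return " ".join([topic] + predicate)

# Pretend these two came from a news feed.
headlines = [
    "Apple Unveils a Thinner Laptop",
    "Congress Debates the New Budget",
]
h1, h2 = random.sample(headlines, 2)
print(mash_headlines(h1, h2))
```

Even this toy version shows why the joke lands so reliably: headline grammar is so uniform that almost any subject can be dropped into almost any predicate and still parse.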

The creator of the bot once described Two Headlines by saying,

“Part of the reason it’s funny is it’s timely — it’s always talking about what’s in the news right now because it’s pulling from Google News. The other advantage is that, much like Twitter, news headlines have a very specific way they’re written, both within publications and across publications. … It plays with the convention of headline-writing itself and subverts those expectations. Its hit rate is very high. Probably four or five tweets a day are very funny, which is a pretty high hit rate for a bot.”

Programs and their code are always studied by other programmers and by those wanting to become programmers; people will always want to know how things work. Two Headlines’ code is freely available online, and it is already commented to explain the parts of the program. The comments were written so that others could modify the program for different results, which also makes it easier to understand for those with little to no programming experience. It is already being used as a teaching tool for people who want to learn about bot creation.

Preserving the code would be valuable to people interested in studying programming and/or Twitter bots, and to those studying online culture. However, there is no mention of whether the code has been revised, so there would be no way to preserve older versions, if they exist, without the help of the bot’s creator.

 

Who said what?

There is also the context and commentary surrounding Twitter bots, which would be useful to anyone studying bots, especially those looking at them as more than fancy bits of programming. Many articles have been written about bots, their creators, and their cultural effects, not counting the articles trying to find the most interesting bots to follow. Two Headlines may not have gotten much press specifically dedicated to it, or inspired any bots dedicated to mocking or extending it as other popular bots have (at least none that I have found), but it is still mentioned in the media, just not as often as its creator is.

Speaking of the creator, let us not forget that Two Headlines is a program and was therefore created by a person, in this case Darius Kazemi, a prolific bot creator. It says so on his Twitter account and his website, complete with links to the other projects he is working on or has created. The website also includes a bio and links to several news stories.

In addition to his bot creations, Kazemi has done a lot of work to help others make their own bots and is responsible for Bot Summit, a conference “where botmakers from around the world get together, both in person and online, to discuss the art and craft of making software bots”. In doing all this bot-related work, he has developed a following of fans from many different fields, such as other programmers, game developers, comedians, philosophers and even an English literature professor, at least according to one article. Ian Bogost, whom you might remember being mentioned a few weeks ago on this blog, was quoted as saying, “You have a favorite comedian or favorite artist and you look forward to what they say, because you want to see the world through their eyes. The same kind of thing is happening with Darius.”

Preserving and clarifying collaborative contexts

The launch of YouTube in 2005 was quickly recognized as a watershed moment in the growth of social media and user-contributed content online. The ease of uploading and embedding video provided by YouTube made it accessible to a much wider non-specialist audience. Kutiman’s 2009 music and video project ThruYou builds on the subsequent explosion of homemade video content, using YouTube as its source material. Kutiman (aka Ophir Kutiel), an Israeli musician and producer, combed through YouTube, tracking down dozens of clips, musical and non-musical—homemade guitar lessons, piano recitals, amateur freestyle raps, random people screwing around with Theremins or synthesizers. He then used this raw material to create a set of seven original songs, looping and layering audio and video clips from a dozen or more sources to create each song and its accompanying video.


The heart and soul of an archive


As we’ve already discovered this semester, the performing arts have a long history of documentation, so in this sense my project will be nothing new. But the readings we’ve had thus far have mostly covered how the performing arts deal with archiving works anchored in the temporal, not how they deal with the digital aspects of those temporal works.

My project this semester will focus on exploring avenues for archiving all the different production and design elements, the paperwork and properties, that go into creating and running a theatre show. I am going to use a specific musical I worked on a few years ago as a case study. I picked this show because I was more involved in the design process than I usually am as a master electrician: the load-in was especially complicated, and I also ended up assisting the lighting designer by programming the show. I also recently discovered that the theatre company in question lost a good amount of its archival material on the musical while in the process of archiving its own copies, so the show serves as a good object lesson in what can be lost.

The production in question is a bit of an adaptation of an adaptation: the 1988 movie Big was adapted into a musical for Broadway in 1996, and this is the Theatre for Young Audiences (TYA) version. Yeah, this wouldn’t be my first choice for a TYA production either, but there’s also a TYA version of Avenue Q, so here we are. And the libretto isn’t really why we’re here, though we’ll archive that too. I’m interested in the more technical aspects.

Big was a bit of a game-changer for Adventure Theatre, since the company had recently acquired a new lighting product to be implemented on this show and used in subsequent ones: flexible LED tape with red, green, and blue LEDs on it, allowing for near-infinite color mixing. This low-profile ‘tape’ could be attached directly to set pieces, so there was a high degree of coordination between the scenic designer and the lighting designer; in fact, reviewers often attributed the LED tape to the scenic designer rather than the lighting designer. It also had the unintended consequence of making the lighting programming so complicated that we actually ran out of internal memory on the lighting console before we could finish building the show. The console was several generations out of date, ran on DOS, and only took floppy disks as external memory.

This was compounded (compounded!) by timeline issues: we had to find a board that would read the existing show file and execute it the same way, as we didn’t have time to rewrite the whole thing, and the show was so fast-moving that there was no pause in the cue sequence long enough to swap disks during the run (the load process was estimated at two minutes, and there wasn’t a single page of the script without cues). The LED tape was controlled by programming boxes built from scratch by the (amazing) technical director, so documentation was minimal and fixes could only be accomplished by that one individual, which I believe is still the case to this day (especially in terms of documentation). Other digital elements include the projections, the basic CAD files for the set and the ‘regular’ part of the lighting, and the sound cues, which were run entirely through a digital program. The sound designer and the lighting designer often worked together to time lighting cues or adjust the length of sound effects so they would complete together.

These are essential elements that were born digital and must stay digital in order to maintain their essential qualities. Focusing on the preservation of these elements and exploring what resources are out there to support them that are aimed at or affordable for the non-profit community would allow not only for better archiving of cultural history, but for sharing innovation as well — the digital equivalent of reaching over someone’s shoulder and typing in code from memory.

The stakeholders obviously include the theatre company, the designers and actors, but also potentially those interested in studying theatre on a variety of levels: the work, the design, or the designers. It also includes the general public.

The theatre company: Theatre companies will use items from past productions for many reasons: moving or still images can be used in advertisements for the theatre as a whole, or in promotional or fund-seeking material for the company; the company may need the design elements if it wants to stage a revival; certain set pieces or props may need to be reworked for another show; or a tricky effect or certain board presets may be reused by a designer who worked on an earlier show. Good records of a show and how it works are also important during the run, for example if an actor is injured or the stage manager needs to be replaced (an actual emergency that happened mid-tech on this show).

Designers and actors: Portfolios are an integral part of a designer’s self-promotional arsenal; they act as visual supplements to a resume or CV. Photography is generally discouraged during live theatre, both to keep the actors from being distracted and to protect the integrity of the design. Promotional photography is usually taken during one of the last few dress rehearsals, with specific moments staged afterwards if called for. This guarantees that production stills will be of the best quality, and designers and actors alike can get professional images of their craft to show to other talent-seekers. Designers will have their copy of the paperwork submitted to the company, but may also receive (if they desire) the plot work for the finished pieces, which accounts for any differences or adjustments made between the first draft and the finished product.

Researchers: Theatre research tends to be either script-based (studying a playwright’s oeuvre), or methodology-based (Stanislavski method, Alexander technique), but the history of the physical craft of theatre has its investigators as well. Available materials, techniques, and design influences can all be read longitudinally through a theatre company’s collective archive.

General Public: Some theatre archives, like the TOFT archive at the NYPL, require users to prove that they are in the industry, but not all film and tape archives have that requirement, and even then, if you are in the performance industry, or a student of it, you can still watch something just for entertainment. Also, having these archives available for designers to work from helps build a better production for audiences in the future to enjoy.

Brendan DeBonis as Billy and Greg Maheu as Josh in Big, The Musical TYA. Photos by Bruce Douglas.

The ‘magic of theatre’ is, most of the time, just endless hours of manual labor and seat-of-your-pants improvisation to get the show up and running, and to keep it that way, especially among smaller theatres that don’t have the budgets of Broadway, the Kennedy Center, or Disney World. But they still want to put on a good show. Big is about finding out you’ve bitten off more than you can chew, and discovering what’s great about what you are. Discovering things you didn’t know you had the capacity to do is exactly the kind of goal theatre archives are here to serve.

Moving Still Art: Rob and Nick Carter’s “Transforming”

A traditional painting is static to the human eye, despite the imperceptible movements of its atoms, or the refresh rate of the screen if it is displayed or created digitally. The husband-and-wife duo Rob and Nick Carter, artist collaborators, set out to challenge the notion of how static such pieces need to be in a series called “Transforming.” Between 2009 and 2013, they worked with the English visual effects firm The Moving Picture Company (MPC) to create a series of computer-based digital paintings reimagining still paintings from the Dutch Golden Age, the Renaissance, and 18th-century Germany.

Four of these works are presented as films on Mac screens or iPads set in traditional portrait frames, each approximately two to three hours long and looping continuously. Each piece changes slowly, often imperceptibly, over the course of the playback, employing databases of insect movements and plant life cycles, algorithms, and traditional computer animation. The intention is to promote sustained engagement with the paintings, in contrast to the six seconds the average museum-goer spends looking at an artwork.

Transforming Vanitas Painting

Transforming Still Life Painting

Transforming Diptych

Transforming Nude Painting

Significance and Communities

The groups interested in the survival of these works include art scholars across various concentrations. To those studying the original works of inspiration, these new pieces serve as a vital link for understanding their impact and tracing their influence over time. Rob and Nick Carter’s work is also an important example of remixing or reuse, documenting the influence of the original artworks alongside the new work itself. Ultimately, preserving the digital paintings also means allowing for further transformation, as digital files and code are much easier to transform than their analog counterparts. Thus, these works are part of the social memory forming around both the original works and the genres they represent.

Another group that would want these art pieces preserved is those studying new media art and its history. Kate Bryant of the Fine Art Society of London claims that these are the world’s first digitally rendered paintings (old paintings entirely recreated with a computer), making them important to preserve as documentation of the establishment of a new genre or technique. While the approach of a modern-day homage to earlier forms of art was innovative, I believe the work of Rob and Nick Carter is conservative compared to some new media art, which can be quite a jarring departure from traditional painting.

The conventional elements may have made the work palatable to more traditional galleries such as The Frick Collection and the Mauritshuis, which exhibited some of these works alongside centuries-old still life paintings (in fact, it is apparently the first digital work exhibited at The Frick). The works of “Transforming” are therefore important for understanding how the genre of still life is being adapted to contemporary society through changes in technology, and how new media is making its way into older traditions. I think this intersection of old and new is important to document and will be interesting to users in the future.


At the same time, their work uses cutting-edge technology in animation, coding, and display, which will interest computer art and design historians. Additionally, since Rob and Nick Carter worked with a visual effects firm, the works will also interest those who want to understand how corporate entities are involved in art, especially in facilitating digital art for artists who may not have the technical skills to realize their vision.

Finally, these pieces are part of contemporary attempts by creators and producers to foster user engagement with media content. With ever-growing daily exposure to media, the public often devotes only a small amount of time to the images that pass before its eyes. These artworks are a response to that moment: a clear commentary on the need to focus, and on how undivided attention can be rewarded. Documenting “Transforming” therefore means documenting the cultural conversation around media consumption in the early 21st century.