When I embarked on this
project, I underestimated how difficult it would be. Learning the program and
putting the documents into the necessary format took way more time than I
originally expected. As a result, I had to limit the size of my corpus. Instead
of looking at an entire decade of foreign policy documents, I focused on the
years from 1951 to 1954. One of the benefits of having such a small corpus was
that I was intimately familiar with the historical context of the corpus and
many of the documents themselves. After generating the results, I was pleased
to discover that the model generated many of the topics I expected to see.
Rather than being disappointed that the model didn’t reveal any new avenues for
research, I took the results as confirmation of the efficacy of topic modeling
in general. Now that I know the model works, I have a greater desire to apply
it to a much larger corpus.
Despite the small size of my corpus, there were
occasions when the results prompted me to investigate something further. A
combination of topic modeling and close reading allowed me to explore new
avenues. For example, the presence of “tudeh” and the absence of religion
prompted me to ask different questions and to interrogate current scholarship. While
these new directions confirmed, more than challenged, a view I already held, they
still demonstrate how a topic model can get you to think about a subject in a
new way.
This project also gave me the
opportunity to make some mistakes and learn from them for future research. I
didn’t realize the importance of labeling my documents with the date and
document number so that I could map my findings over time. This would have been
a really interesting addition to my research. Other scholars who have done such
projects, such as Cameron Blevins, David Allen, and Matthew Connelly, discovered
some fascinating trends that would not have been obvious through a close reading alone.
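Had I labeled each document with its date, mapping a topic over time would have been straightforward. Here is a minimal Python sketch of that idea, assuming a MALLET-style doc-topics output in which each filename begins with a date (the filenames and weights below are hypothetical):

```python
import re
from collections import defaultdict

# Hypothetical input: MALLET-style doc-topics lines, where each document
# name encodes its date, e.g. "1952-03-14_doc0123.txt" followed by weights.
doc_topics = [
    "1951-06-01_doc0001.txt 0.60 0.40",
    "1952-03-14_doc0123.txt 0.20 0.80",
    "1952-11-30_doc0456.txt 0.30 0.70",
]

def topic_weights_by_year(lines, topic_index):
    """Average the weight of one topic per year."""
    totals, counts = defaultdict(float), defaultdict(int)
    for line in lines:
        name, *weights = line.split()
        year = re.match(r"(\d{4})", name).group(1)
        totals[year] += float(weights[topic_index])
        counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

print(topic_weights_by_year(doc_topics, 0))
```

With real dated filenames, the resulting per-year averages could be plotted to show a topic rising or falling across 1951–1954.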
Overall, I would highly recommend
people give this program a try. It’s a lot of work, but it really does allow
you to see patterns that would otherwise have remained hidden.
ARIS, which stands for Augmented Reality Interactive Storytelling, is an engine for location-based games and stories. The app allows you to create games, tours, and interactive stories.
ARIS Games was created by Field Day Lab,
based at the Wisconsin Center for Educational Research at the University of
Wisconsin. Their approach centers on “the
intersection of theories of interactive media and situated and sociocultural
learning.” The ARIS app allows researchers and educators, even children, to create
their own interactive media. The program is free to use, if not easy to learn.
First, you need to register an account.
Once you choose a user name and password, you are given the option of making a
new game, editing an existing game, or importing an existing file. I chose
making a new game.
Without the manual, the website is really hard to navigate. Fortunately, the manual provides step-by-step directions and video tutorials, which I used to get started.
I decided to create a tour of the American University campus. The first thing I did was create plaques for all the hot spots on my tour: CAS, SIS, and the Spiritual Center! A plaque is an object that provides your audience with information, either in the form of text or media.
Then I made sure the locations of those plaques were correct. The default will always be Wisconsin (where the program was created) so I had to manually move the markers on the map to the correct locations.
Next, I added media to my plaques by selecting the plaque and editing. I gathered some photos from the AU website, though I could have easily used my own photos or videos.
Once I had included the media, I added triggers. A trigger tells the app when to allow the viewer access to a plaque. I set all my triggers to geographic location: when the viewer is within the radius of the pin, they can access the information. I could also have used a QR code.
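The trigger logic itself is simple to picture: the app just checks whether the viewer's coordinates fall within a pin's radius. Here is a rough Python sketch of that kind of geofence check; the coordinates and radius below are illustrative, not ARIS's actual values or implementation:

```python
import math

def within_radius(viewer, pin, radius_m):
    """Rough geofence check: is the viewer within radius_m metres of the pin?
    Coordinates are (latitude, longitude) pairs in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*viewer, *pin))
    # Haversine formula for great-circle distance on a spherical Earth
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 6371000 * 2 * math.asin(math.sqrt(a))
    return distance_m <= radius_m

# Hypothetical pin near campus; a viewer ~15 m away unlocks the plaque
pin = (38.9375, -77.0886)
print(within_radius((38.9376, -77.0885), pin, 100))  # True
```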
Lastly, I created a conversation, which provided an interactive way of communicating with the audience (see example below).
Nearby Games, Tours, and Quests
You can use the app to locate nearby games, tours, and quests. I will highlight two that were particularly interesting.
(Infra)Structural Ghosts – Demo Version
I’m not sure what is happening at Hurst Hall, but I’m a little scared to find out.
This quest is intended to promote recycling. It indicates where garbage has been found across campus and sort of guilts you into doing something about it.
Overall, this is a really cool app. It provides a platform for scholars and educators to create a game or tour that lets the audience interact with the space around them. By using the map of the American University campus, I was able to consider my space through a multitude of different lenses: conservation, history, fiction, and more.
Here are some questions to think about:
How do you see an app like this contributing to your own research interests?
Based on this short overview, what are the benefits of using map-based interactive apps? What are the drawbacks?
A bot is an automated application, usually on social media, that performs a
specific, repetitive task. A cursory search on the Internet provided me
with some important context for Mark Sample and Steven Lubar’s blog posts. There
are a lot of different bots out there, many of which give bots a bad reputation.
They can be used for spamming purposes, in automated network attacks, political
campaigns, or to post comments designed to deliberately inflame users. But they
can also be used for more benign purposes: to order pizza from Dominos, to
answer questions on commercial websites, and to find discounts on eBay.
Sample and Lubar’s posts explore two types of benign Twitter bots. For those of you who are Twitter-illiterate (like me), Twitter bots are automated Twitter accounts that can perform simple actions like tweeting, retweeting, and messaging.
In Museumbots: An Appreciation, Lubar shares his thoughts on the value of museumbots. These bots randomly select and post objects from a museum’s collection several times a day. While seeing cool historical objects on your news feed is interesting, that is not what makes these bots so illuminating. It is what these objects say about choice. The seeming randomness of the objects shown by the bots is actually more
representative of the museum’s collection than the objects displayed for public
consumption. Museums make choices that
influence which objects in the collection the public sees. They have to
consider what is appropriate to the public or dealers; what the curator and
conservator expect; what is in the budget; what fits in the space. The public
also engages in choice. They decide which exhibits to visit based on the map of the
museum, on what is advertised, and on what catches their attention.
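The mechanics Lubar describes are easy to sketch. Assuming the catalogue is held as a simple list (a real museumbot would query the museum's collection API and post through Twitter's API, both omitted here), the bot reduces to "pick a random record and format it":

```python
import random

# Hypothetical catalogue records; a real bot would query the museum's
# collection database instead of a hard-coded list.
catalogue = [
    {"title": "Brass sextant", "id": "1879.14.2"},
    {"title": "Quilted bed cover", "id": "1902.77.1"},
    {"title": "Tin toy locomotive", "id": "1955.03.9"},
]

def compose_post(rng=random):
    """Pick a random object and format it as a short post."""
    obj = rng.choice(catalogue)
    return f"From the collection: {obj['title']} (object {obj['id']})"

print(compose_post())
```

Run on a timer several times a day, a loop this simple surfaces the collection far more evenly than any curated exhibit can.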
“[I]t does an excellent job of making clear the differences between what’s at the museum, and what I see on display.”
It is the randomness of the bots that reveals
choice, something that we take for granted. These museumbots disclose some of those
choices. But wouldn’t it be cool if a bot existed that could reveal the choices
curators make when they purchase new collections? Or reveal items that have
been removed from the museum?
Mark Sample explores another type of bot: the protest bot or “a computer program that reveals the injustice and inequality of the world and imagines alternatives.”
What makes a twitterbot a “bot of conviction” with a message “so specific
you can’t mistake it for bullshit”? Sample lists the five characteristics
shared by all bots of conviction.
Topical – they speak to recent news stories and current events.
Data-based – they rely on reliable research, statistics, etc.
Cumulative – their true message is revealed in the aggregate.
Oppositional – they take a stand.
Uncanny – they reveal the hidden or hide the obvious.
@TwoHeadlines is topical and data-based,
but not cumulative or oppositional. There is no theme inherent in the accumulated
tweets, and they do not reflect or take a stance on the news itself.
@TheHigherDead is oppositional, uncanny,
and topical, but it is not data-based, as it doesn’t draw on actual ed-tech news.
What does a bot of conviction look like in practice?
@ClearCongress is a great example of a
protest bot. It retweets members of Congress and redacts parts of their tweet
based on their current congressional approval rating. It’s topical, gathers
information from polling data and congressional accounts, and hides what should
be visible. When seen together, the tweets reveal the disconnect between
Congress and their constituents and the uselessness of Congress in general.
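I don't know @ClearCongress's exact redaction rule, but the idea can be sketched in a few lines of Python: hide a share of the tweet proportional to the disapproval rating. This is a guess at the mechanism, not the bot's actual algorithm:

```python
def redact(tweet, approval):
    """Black out a share of the tweet proportional to disapproval.
    approval is a fraction in [0, 1]: a 20% approval rating hides
    80% of the characters. A guess at the rule, not @ClearCongress's
    actual algorithm."""
    hide = round(len(tweet) * (1 - approval))
    return "\u2588" * hide + tweet[hide:]

print(redact("We passed landmark legislation today.", 0.20))
```

The lower the approval rating, the less of the message survives, which is exactly the uncanny effect the bot is after.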
@congressedits, which has recently been suspended, and @NSA_PRISMbot are two other examples of protest bots.
Sample himself created “a bot of consolation and conviction” called @NRA_Tally in response to a 2014 shooting near the UC-Santa Barbara campus. It creates headlines for imagined mass shootings, each followed by a fictional NRA response.
Each tweet contains a number between 4 and 35, victims drawn from the
historical record, a location based on historical sites of mass shootings, a
type of firearm that has been used in mass shootings in the US, and a response
from the NRA that mimics rhetorical statements made after mass shootings.
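The recipe Sample describes is essentially a fill-in-the-blank template. A minimal Python sketch of that kind of generator, with placeholder word lists standing in for Sample's curated data:

```python
import random

# Placeholder word lists; the real bot draws on records of
# actual mass shootings and actual NRA statements.
victims = ["students", "churchgoers", "moviegoers"]
places = ["a suburban mall", "a college campus", "a city park"]
firearms = ["a semiautomatic rifle", "a handgun"]
responses = [
    "Now is not the time to politicize a tragedy.",
    "This proves we need more armed guards.",
]

def imagined_headline(rng=random):
    """Assemble one templated headline-plus-response tweet."""
    return (f"{rng.randint(4, 35)} {rng.choice(victims)} shot at "
            f"{rng.choice(places)} with {rng.choice(firearms)}. "
            f"NRA: '{rng.choice(responses)}'")

print(imagined_headline())
```

The power Sample identifies lies not in any single output but in the bot's relentless accumulation, which a scheduler running this function every few hours would produce.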
@NRA_Tally is an example of what Sample
calls tactical media “that
engages in a ‘micropolitics of disruption, intervention, and education.’” It
strips the NRA of its main tactic for shutting down the gun control debate,
namely accusing those who talk about it of politicizing the victims’ deaths. In
this case, there are no real victims. It momentarily unsettles the gun control
debate and instead focuses on the weapon itself.
For @NRA_Tally and protest bots in general, it is the persistence of the bot that makes it powerful. “[T]his is a bot that doesn’t back down and cannot cower and will tweet for as long as I let it.”
Both Lubar and Sample demonstrate ways that bots can be used to draw attention to certain issues: the way we collect and exhibit historical objects, and the way we can use bots in social protest. What do you think are the benefits of turning over our interpretations and exhibitions to machines? What are the downsides?
Mystery House is a primitive interactive fiction created by Ken and Roberta Williams for
the Apple II, one of the first highly successful mass-produced home computers
introduced in 1977. In interactive fiction, the player has the ability to
influence the outcome of the story. This type of fiction was particularly
appealing to the program’s creators. They first conceived the idea for the
Mystery House after playing a text-adventure game called Colossal Cave
Adventure. Roberta liked the concept of a text-based interactive story, but
thought that players would enjoy seeing images to go along with the text. She designed
Mystery House, basing it on Agatha Christie’s novel And Then There Were
None, while her husband Ken developed the software.
Mystery House was released in 1980, and the game was extremely popular. It was the
first adventure game to use computer graphics, rather than just text. On-Line
Systems (later Sierra On-Line) sold more than 10,000 copies in a niche and
burgeoning market for home computer software. In 1987, Mystery House entered the
public domain, and modifiable versions of the game, known as
Mystery House Taken Over, have been released into the public domain as well.
A Few Brief Comments on Downloading and Playing
Downloading and playing Mystery House is challenging. At first, I attempted to play online by clicking the link on the Mystery House Taken Over webpage.
The directions indicate that you will be able to play the game online after a Java update. When that didn’t work, I attempted to download it. To run the game locally, you first need a Glulx interpreter, which allows the game to be played on any device, Mac or Windows, without altering the original source code. Unfortunately, my attempts to download the interpreter failed. Instead, I was able to demo the game on the Internet Archive.
After a brief set of instructions, the game begins outside a Victorian mansion.
As Laura has walked us through the purpose of the game and the various walkthroughs and maps that exist to guide players, I will discuss a few challenges I faced while playing. I began without a walkthrough but soon realized how difficult it was to navigate using only two-word commands consisting of a single verb and noun. It took me an embarrassingly long time just to get up the stairs and into the hallway, because the verb and noun used must match one of the 70 preprogrammed commands. In many cases, you can’t move forward unless you first type the command “open door” or indicate which direction you wish to travel. You also have to give very specific commands to interact with items. If you want to pick an item up, you must use the word “take,” and you must recognize and name the item correctly. For example, there is a knife in the sink, but you must say “take butterknife” rather than the simpler “take knife.”
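The frustration makes more sense once you see how a two-word parser works: the verb-noun pair must exactly match an entry in a fixed table. A toy Python version of the idea (the vocabulary here is illustrative, not the game's actual 70-command list):

```python
# A toy two-word parser in the style of Mystery House: the verb-noun
# pair must match an entry in a small fixed table, so "take knife"
# fails where "take butterknife" succeeds.
COMMANDS = {
    ("open", "door"): "The door creaks open.",
    ("take", "butterknife"): "You pick up the butterknife.",
    ("go", "north"): "You walk north into the hallway.",
}

def parse(line):
    words = line.lower().split()
    if len(words) != 2:
        return "I only understand two-word commands."
    return COMMANDS.get(tuple(words), "I don't understand that.")

print(parse("take knife"))        # I don't understand that.
print(parse("take butterknife"))  # You pick up the butterknife.
```

There is no synonym handling and no fuzzy matching, which is exactly why early adventure games felt so unforgiving.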
Compared to video games today, Mystery House leaves a lot to be desired. But at the time, the game was an innovation. While the game itself fascinated novice home computer owners, the underlying programming attracted the attention of both programmers and hackers. Kirschenbaum, as we will see, takes a similar interest in the underlying components of the Mystery House disk.
Kirschenbaum’s Forensic Walkthrough
The author’s walkthrough of Mystery House is “a media-specific analysis,” or “a close reading of the text that is also sensitive to the minute particulars of its medium and the idiosyncratic production and reception histories of the work” (129). In simpler terms, he examines the Mystery House disk the way a bibliographer or a paleographer might examine a fifteenth-century manuscript. Kirschenbaum uses a hex editor to examine the underlying binary data that makes up Mystery_House.dsk and discovers that there is more on the disk than just the game itself. For example, he ascertains that before the game was copied onto the disk, the disk had held two other games, traces of which can still be found beneath the game’s data. From this information, he can extrapolate in the same way that a historian can extrapolate from handwriting or ink type on a material object.
He concludes that while computers give the “illusion of immateriality” (through their ability to correct a discrepancy in data within a millisecond, the mathematical precision of their measurements, and the impression of infinite space, unlike material media such as newspapers, TV, and records), the digital environment is one of formal materiality. This has important implications for the way scholars conduct research on digital material. Kirschenbaum describes it in terms of difference: using a hex editor gives the researcher a view of an object that a simple textual analysis would not reveal. His use of the hex editor on the Mystery House disk was meant to “serve as a primer for bibliographical or forensic practice in future studies” (158). Understanding the underlying materiality of a disk will not only assist in preserving and storing digital information; it will be just as important as more traditional methods like paleography in analyzing digital materials.
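Kirschenbaum's hex-editor view is easy to approximate in a few lines: read the raw bytes of a disk image and render them as offset, hex, and ASCII columns. A minimal Python sketch (the byte string below is a stand-in, since Mystery_House.dsk itself isn't reproduced here):

```python
def hex_dump(data, width=16):
    """Render raw bytes the way a hex editor does: offset, hex, ASCII."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexes = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII is shown as-is; everything else becomes "."
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexes:<{width * 3}} {text}")
    return "\n".join(lines)

# A stand-in buffer in place of the real disk image; the ASCII column
# is where residue of earlier files becomes legible.
sample = b"MYSTERY HOUSE\x00\x01\x02overwritten data beneath"
print(hex_dump(sample))
```

Pointed at a real .dsk file (`open("Mystery_House.dsk", "rb").read()`), a dump like this is exactly where leftover fragments of previously stored games would surface in the ASCII column.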