Interactivity in Practice: Practicum 4/7

In our lifetimes, video games and play have become a fundamental piece of popular culture and, as a result, a powerful tool for learning. For this week’s practicum, I will be looking at examples of video games, game editors, and interactive applications, including the ARIS Editor, Smithsonian’s Will To Adorn, and the game Do I Have the Right?

ARIS Editor:

ARIS Editor is a game creation site that, rather than requiring a software download, runs in any browser with Flash. The Editor is part of the larger ARIS project, an open-source software project whose source code is fully available to the public.

There are three sections to the ARIS project: the Editor, the Server, and the Client. These are, respectively, the place you create your game, the place your game “lives,” and the app through which users play your game. Login and initial game creation (which requires only a name) are easy. However, for users unfamiliar with the interface, the setup might be a little confusing.

The interface opens on a blank screen with a set of tabs at the top: Scenes, Locations, Quests, Conversations, Media, AR Targets, Notebook, and Game Settings. Each tab has a sidebar of “Game Objects.”

https://manual.arisgames.org/tutorials/getting-started

To start, you have to create a scene. While you can create multiple scenes, it is easiest to learn ARIS within a single scene. From here, the creator can add objects and triggers.

Objects are the items you want players to see and interact with inside the game, while Triggers are the avenue through which users access an object (i.e. most objects you create will require a trigger).

There are three trigger types in the ARIS editor: Location, QR Code, and Sequence. The first two are fairly self-explanatory: a Location trigger fires when a player is physically at a specified place, while a QR Code trigger requires players to scan a code to access the object (useful for indoor spaces or something more intricate, like a museum exhibit). A Sequence trigger makes an object “appear” once the player has taken another specific action within the game, which you choose as the catalyst for your object to appear. This type of trigger requires Locks (another kind of game object).

https://manual.arisgames.org/tutorials/getting-started

Other objects include:

  • Plaques: virtual plaques that offer information to the user
  • Locks: “the logical glue you can use to give your games structure.” They allow any trigger or other item to be locked, giving your game narrative structure and progression.
  • Conversations: scripted exchanges between your user and characters or places in the game
  • Items/Attributes: objects your users can collect, or that you can give them after certain triggers
  • Webpages: exactly what you would expect, webpages embedded into the game experience that open with a trigger

These objects can incorporate media, and a series of objects and triggers can be incorporated into different “quests” created in the editor.
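To make the trigger/lock relationship concrete, here is a hypothetical sketch (not actual ARIS code, and the class and event names are my own invention) of how a sequence trigger might stay locked until a prerequisite action has been completed:

```python
# A hypothetical sketch of ARIS-style triggers and locks: an object is
# reachable only through a trigger, and a sequence trigger stays locked
# until a prerequisite event has happened.

class Lock:
    """Locks a trigger behind a prerequisite event the player must complete."""
    def __init__(self, required_event):
        self.required_event = required_event

    def is_open(self, completed_events):
        return self.required_event in completed_events

class Trigger:
    """Connects a player action (location, QR scan, or sequence) to an object."""
    def __init__(self, kind, target_object, lock=None):
        self.kind = kind              # "location", "qr_code", or "sequence"
        self.target_object = target_object
        self.lock = lock

    def fires(self, completed_events):
        # A locked trigger never fires until its lock is open.
        if self.lock and not self.lock.is_open(completed_events):
            return False
        return True

# A plaque that only appears after the player finishes a conversation.
plaque_trigger = Trigger("sequence", "welcome_plaque",
                         lock=Lock("talked_to_curator"))
print(plaque_trigger.fires(set()))                  # False
print(plaque_trigger.fires({"talked_to_curator"}))  # True
```

The point of the sketch is simply that locks are conditions attached to triggers, which is what gives an ARIS game its sense of progression.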

While this editor certainly comes with a learning curve, it is a good entry point for people who have no experience creating a game or who have no coding skills. I do think the website would pull in more users if the interface were a little more tangible and user-friendly. There is a lot of terminology that the interface does not explain at all, and it was difficult to visualize what something might look like in gameplay based on the interface alone.


The Will to Adorn:

Will to Adorn is a research and public presentation project created by the Smithsonian’s Center for Folklife and Cultural Heritage. The project represents the work of scholars and cultural practitioners to explore the aesthetics of African American identities as represented through artistic expression of the body, dress, and adornment. While there have been various programs and papers associated with this project, the project primarily lives on the Will to Adorn website and app.

Unfortunately, the website itself has been stripped down and no longer includes everything it previously held, including some of the Research Tools, Field Notes, Events, etc. However, it still offers some resources, including its research guide and contact information for people users can reach out to with questions.

https://willtoadorn.si.edu/

The app is also still running, and functions in two ways:

  1. Users can tell their story. You offer the app basic information about your age, gender, etc. and then choose a question to answer. Once you have chosen a question, you create an audio recording of your answer that is submitted to the app.
  2. Users can listen to stories from other users and from fieldwork about dress and adornment.

The website and app are great examples of how digital tools can expand the reach and impact of a project, but they also offer a lesson about project longevity: continued upkeep of the site, for example, would allow users to keep contributing to the project.

Do I Have A Right?

iCivics, a nonprofit organization founded by Sandra Day O’Connor, works to promote civics education and encourage youth involvement in active citizenship. To do so, it creates lesson plans and educational video games like “Do I Have A Right?”

The game lets students run their own law firm focused specifically on constitutional law; the more cases the firm wins for its clients, the more it grows.

In the game, users have the option to play either the “Full Edition” or the “Bill of Rights Edition,” in which the cases you receive involve only the original ten amendments. For this practicum, I decided to play the Bill of Rights Edition.

The game then takes you to create your avatar; I would say iCivics was generally successful in using inclusive practices, offering avatars of multiple races and genders as well as add-on options like glasses or a wheelchair.

You then pick a partner and open your firm, with the aim of matching cases with lawyers who specialize in the relevant amendment. Each potential client who walks in must be evaluated by the user, who decides whether or not the case represents an infringement on the person’s rights. The game should ultimately help students gain a better understanding of their rights and how they are protected by the judicial system. Overall, I found this game very engaging, while also remaining simple enough in design to allow for learning. However, some of the time constraints that make the game feel “high stakes” do prevent users from reading each amendment closely.

Or How I Learned to Stop Worrying and Love the Glitch

This week I attempted to recreate the results of glitching files as demonstrated in this blog post by Trevor Owens. As we shall see, I ran into a few difficulties in reproducing this experiment exactly. But first, what is a glitch? According to Wikipedia, “A computer glitch is the failure of a system, usually containing a computing device, to complete its functions or to perform them properly.” In this post, I chronicle my attempts to create glitches by using files in ways other than their intended purpose, to reveal what we can learn about the formats themselves.

A Textual Audio Experience

I started by trying to view an .mp3 as a .txt file. I could not use the same audio file as the original blog post because the Library of Congress no longer provides direct downloads, having switched to streaming-only access. Instead, I randomly selected an .mp3 of the Rush classic “Tom Sawyer.” From there I changed the file extension to .txt and opened it in my computer’s text editor. Here is the result:

rush
A real toe tapper

Just as with the audio file Owens used, much of the information in the .mp3 is a confused mess, the result of the text editor’s attempt to interpret the bits as alphanumeric sequences. However, along the top there is some embedded metadata, such as information on the writers of the song: Alex Lifeson, Geddy Lee, Neil Peart, and Pye Dubois. These bits are meant to be read as text and therefore can be rendered by the program.
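What the text editor is doing can be sketched in a few lines: interpret the raw bytes as characters and keep the runs that happen to be printable, much like the Unix `strings` utility. MP3 metadata (artist, writer credits, etc.) is stored as actual text in an ID3 tag at the head of the file, which is why it survives. The byte string below is a synthetic stand-in for a real file, with an invented frame layout, purely for illustration:

```python
# A minimal sketch of why metadata shows up when an .mp3 is opened as .txt:
# text-frame bytes are printable, compressed audio bytes mostly are not.

def printable_runs(data: bytes, min_len: int = 4):
    """Extract runs of printable ASCII, like the Unix `strings` utility."""
    runs, current = [], []
    for b in data:
        if 32 <= b < 127:            # printable ASCII range
            current.append(chr(b))
        else:
            if len(current) >= min_len:
                runs.append("".join(current))
            current = []
    if len(current) >= min_len:
        runs.append("".join(current))
    return runs

# Synthetic stand-in for the head of an MP3: an ID3-style text frame,
# followed by "noise" bytes standing in for compressed audio.
fake_mp3 = (b"ID3\x04\x00\x00"
            + b"TCOM\x00\x10Geddy Lee"
            + bytes([0x92, 0x07, 0xFF, 0x01] * 8))
print(printable_runs(fake_mp3))  # → ['TCOM', 'Geddy Lee']
```

The frame name and audio bytes survive or vanish purely on whether each byte lands in the printable range, which matches what the editor showed me.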

Where the Trouble Began

In the next step, I tried to view an .mp3 and a .wav file as .raw images. Because I did not use the same audio file as the original blog post, I did not have a .wav file to accompany my .mp3 for this part. Rather than simply changing the extension on my “Tom Sawyer” .mp3, I used a media encoder to convert the file to .wav. From there, I changed the extension on each to .raw and attempted to view them in an image editor. Unfortunately, these files would not open in any of my image editing software. Borrowing a computer that had Photoshop, I was able to view the results seen below:

01 Tom Sawyer wav and mp3 raw photoshop
On the left: .mp3 as .raw, on the right: .wav as .raw

Just as above, an image editor can do no better than a text editor when attempting to read the audio files in a visual manner. Unlike Owens’s results, my two images look largely the same. The .wav as .raw did produce a large black bar at the top of the image, which I assume is due to the difference in original format. I thought the similarity might be because I had converted my .mp3 into a .wav, so I downloaded a different .wav audio file directly from the web and repeated the steps, yet it still yielded the same results.
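Conceptually, "opening audio as a .raw image" is nothing more than reshaping the byte stream into rows of grayscale pixels with no decoding at all — which is why .raw viewers ask you to supply the dimensions yourself. A rough sketch, using made-up bytes rather than a real file (a real WAV begins with a 44-byte RIFF header whose mostly low byte values could plausibly render as a dark bar):

```python
# Sketch of what a .raw viewer does: chop a flat byte stream into rows
# of 0-255 grayscale values. Width is an arbitrary guess you supply.

def bytes_to_pixel_rows(data: bytes, width: int):
    """Reshape a flat byte stream into full rows of grayscale values."""
    rows = []
    for i in range(0, len(data) - width + 1, width):
        rows.append(list(data[i:i + width]))
    return rows

# Stand-in for a WAV file: a RIFF header region of low bytes, then data.
wav_like = b"RIFF" + bytes(40) + bytes(range(256))
rows = bytes_to_pixel_rows(wav_like, width=50)
print(len(rows), rows[0][:8])  # → 6 [82, 73, 70, 70, 0, 0, 0, 0]
```

Since neither format's bytes are meant to be pixels, an MP3 and a WAV of the same song can easily produce similar-looking noise, with only header regions standing out.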

Complete Failure

While I was able to replicate most of the outcomes in the preceding section, I failed at the next step: editing an image with a text editor. The link Owens listed for the image in his post was broken, but luckily the original image was also available in the post. I downloaded this image and changed the extension from .jpg to .txt. I opened the file in the text editor, deleted some of the text, and changed it back into a .jpg. Unfortunately, the file would not open in any of the image software I tried, including Photoshop. I kept receiving error messages that the file was unsupported, corrupted, etc. I tried these steps again, this time copying and pasting parts of the text back into the file, or even deleting only a single character. I even attempted the same steps with a different image entirely. Alas, all my attempts failed to produce a glitched image that would open.
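One plausible explanation (an assumption on my part, not something from Owens's post) is that saving from a text editor re-encodes the bytes and can mangle the JPEG header, so the file no longer parses at all. A byte-level edit that leaves the header region alone is more likely to yield a viewable glitch. A sketch of that idea, using a synthetic byte string rather than a real image, with an arbitrary guess of 512 bytes as the "don't touch" header region:

```python
# Glitch a file at the byte level while protecting the header region,
# so format markers (like JPEG's start-of-image) survive the edit.

def glitch(data: bytes, header_size: int = 512, step: int = 97) -> bytes:
    """Invert occasional bytes past the header, guaranteeing visible change."""
    out = bytearray(data)
    for pos in range(header_size, len(out), step):
        out[pos] ^= 0xFF          # flip every bit in this byte
    return bytes(out)

# Stand-in for a JPEG: SOI marker, padding "image data", EOI marker.
fake_jpeg = b"\xff\xd8\xff\xe0" + bytes(1000) + b"\xff\xd9"
glitched = glitch(fake_jpeg)
# The start-of-image marker survives, so a viewer could still parse the file.
print(glitched[:2] == b"\xff\xd8", glitched != fake_jpeg)  # True True
```

A text editor offers no such precision, which may be why my deletions kept producing files no viewer would accept.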

Tom Hanks typing

Conclusions

While I was not able to reproduce all the tasks Owens accomplished in his blog post, I was still able to see his main point: screen essentialism masks the fact that digital objects are more than what they appear to be on the screen. The different ways the files can be read demonstrate the different structures of the formats, even if they look the same on screen. My failures in this process made me realize how much the public is held to a limited understanding, shielded by the programs that are meant to read certain files in certain ways. Perhaps my failures are just the result of well-functioning software that allows you to produce only the intended output from these files. I encourage everyone to try glitching some files. Can you do it?

UPDATE: I was able to fix the problems I mentioned in this post. Here are the results:

Desert (2)
Glitched Image

doh
Correct comparison, .mp3 as .raw on left, .wav as .raw on right

To see how my issues were fixed, see the comments section below. Many thanks to Nick Krabbenhoeft for helping me fix the problems.

Some Thoughts on Visualization in the Humanities or the worst blog post title ever (sorry)

This week’s readings expressed a wide and deeply conflicted range of attitudes toward the assorted uses of computers, computer modeling, and the data-ization of the humanities. The authors were all for it, but some of the counterarguments they discussed were interesting, and valid. This validity is incredibly important; having been discussing diversity and cultural inclusion in LBSC 631 this week, I found myself hyper-aware of the attitudes some of the authors displayed toward their techno-tentative brethren. However, this is a blog and I am going to make some grand and sweeping statements, which I will then try to back up… hopefully using memes.

Grand Statement #1: Let’s not be that guy.

You know the guy I mean.

Grand Statement #2: “Computers allow you to go further.”

If there is to be a rallying cry for the Digital Humanities, this might well be it. Yesterday, I was whining to a mechanical engineer of my acquaintance about Underwood’s observations of the reluctance in the Academy to embrace digital technologies, how they fear a total seismic shift* in their world.

I would like to assuage those fears. According to my mechanical engineer, “Computers allow you to go further. They don’t take away the work.” I was scrambling for a pen here so the next bit is a paraphrase: computers make more work and they make what you’ve got more accessible.

Take the work done with MALLET: Blevins describes how computer modeling validates itself. The Ballard diary, chosen in part (one assumes) for its completeness, shows how well the computer can model. Blevins even relates how surprised he was that it worked so well at first. But it worked. The tool did the thing it was designed to do. That’s great! And now there’s all this data to play with. If you wanted to focus only on the number of babies born when the crocuses were in bloom, it’s a simple matter of correlating your data. If you want to take up the argument discussed in Graphs, Maps, and Trees, that there is no such thing as a “gothic novel,” and dissolve that grouping from the chart of genres to see what the effects are, you can do so. Vistas abound; new peaks arise to be surmounted.
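The crocus example above really is a few lines of code once the diary is structured data. A sketch with invented numbers (the counts below are hypothetical, not from the Ballard diary), computing Pearson's correlation coefficient in pure Python:

```python
# Pearson's r from the standard formula: covariance over the product of
# standard deviations. Data are invented, purely for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical monthly counts: crocus sightings vs. deliveries attended.
crocus_blooms = [0, 0, 5, 12, 8, 1, 0, 0, 0, 0, 0, 0]
births        = [2, 3, 4,  6, 5, 3, 2, 4, 3, 2, 3, 2]
print(round(pearson_r(crocus_blooms, births), 2))
```

The point is not the number that comes out, but that asking a new question of already-modeled data costs minutes, not a graduate student's semester.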

Tumblr

The problem, I think, is that humanists see things like “correlating data” and “data manipulation” and freak out, because these are STEM things. They are not scientists, but humanists. Theirs is a world of logic and rhetoric. Well, yes, fine, but notice how the scientists get all the grants?

Science vs. Humanities… and my family thought I was crazy for being a Humanities major.

We don’t need to spend years waiting on graduate students to count everything by hand. We can load that puppy, or important literary work, into a computer and run analysis, any analysis, all analyses. And then tomorrow we can do it again, go further and deeper. Instead of relying on grad students, you can partner with other academics on the other side of the world as easily as in the next building, à la Graham, Milligan, and Weingart. Where privacy and exclusivity are a concern, there is no need to make work public as they did; but doing so opens a work up to more input, catches mistakes earlier, and brings in multiple points of view, since no one publishes the book to make money anyway. You publish the book to get tenure.

Grand Statement #3: This is not the Singularity.

Technology is moving at a brisk clip, but we’re not in any danger of being replaced by robots today, tomorrow, or the next day. For whatever reason, and I’m going to guess it has a lot to do with not being good with computers 10+ years ago, some humanists aren’t on board with putting the digital into their work. This is a massive disappointment for the rest of us, because the kind of work they’re doing, work like breaking down the linguistic anachronisms in Downton Abbey (a point of much personal vindication for me) and examining the Ballard diary, is really interesting. And doing it with graphs means that people who don’t have PhDs can understand it too. Perhaps therein lies the fear: that if outsiders can see, and understand, what we’re doing, we’ll all revert to the seventh grade and get made fun of by the popular kids for liking to evaluate complexly and dig a little deeper. So how do we embrace our intelligence? How do we share the fruits of our enthusiasm in the best possible way? I would argue that charts and graphs, visualizations of complex data, are the way forward.


*To be fair, the idea of a seismic shift as representative of a complete overhaul of any working system was no doubt in circulation before Mr. Watkins published his article, but it is from him that I got the idea, so I have linked to it in Google Scholar: Michael D. Watkins, “How Managers Become Leaders,” Harvard Business Review 90 (June 2012), 65-72.


Digital Visualization as a Scholarly Activity

Everyone knows that age-old expression, “A picture’s worth a thousand words,” right?  Martyn Jessop, in “Digital Visualization as a Scholarly Activity,” expands this expression and argues that digital visualization is a way not only to transmit and teach those “thousand words,” but also to discover new knowledge from the underlying messages.  Jessop sets out to explain that digital visualization is a scholarly methodology, to demonstrate that the use of visual methods and sources is not a new phenomenon in the humanities, and to lay out the steps needed to further the acceptance of digital visualization by the academic world.

Digital visualization is more than mere illustration; it is a scholarly methodology.  Jessop states, “An illustration is intended merely to support a rhetorical device (usually textual) whereas a visualization is intended either to be the primary rhetorical device or serve as an alternative but parallel (rather than subordinate) rhetorical device.”[1]  Digital visualization is another medium through which scholars can teach others as well as research and discover new knowledge.  He also explains that digital visualization is interactive, allowing scholars to manipulate the visual and its data.

While the field of digital visualization is new, the use of visual tools in the humanities is not. Historians have used image galleries, museums and collections, film and television, reenactments, maps, and graphs to further their research and teach others.  Jessop explains that visualization, including the new field of digital visualization, has been used to portray many types of data, including spatial dimensions, quantitative data, text, time, and 3D models, and he provides a plethora of examples where scholars have used digital visualization tools to portray these data types.  The Valley of the Shadow and Salem Witch Trials are visualization projects that showcase time, space, quantitative data, and qualitative data.  The British Library’s Turning the Pages displays text in its original form.  In addition, the Theatre of Pompey project is a neat 3D visualization.  I found other examples of digital visualization as well.  Flickr can be considered a digital visualization tool where institutions, such as the Smithsonian, display a collection of images that tell a story.  Amateurs and history buffs also dabble in digital visualization; Maps of War is a great example.

Jessop finishes with a discussion on how humanists can set guidelines to ensure the value of digital visualization as a scholarly methodology.  He uses the London Charter, which lays out basic principles for the use of 3D visualization in scholarly research, as a jumping off point for objectives for the broader field of digital visualization.  The principles he highlights are aims and methods, sources, transparency requirements, and documentation.  Scholars should address why a certain method of digital visualization was used and which sources were considered.  There should be an explanation of the creator’s aims and use of methods so readers can discern for themselves if this creator’s approach was the best approach.  Finally, there should be documentation of the process of creating the visual.  Since digital visualization, according to Jessop, “[satisfies] the roles of the discovery, exchange, interpretation, and presentation of knowledge,” it is a scholarly methodology and as such should have rigorous scholarly guidelines.[2]

Is a picture worth a thousand words? Will digital visualization ever reach a place of equality to that of the written word in historical practice? Does digital visualization achieve the same outcomes as the written word?  A visual is a great teaching tool for people not immersed in the field of history, but can digital visualizations, in comparison to full explanations through written work, further learning for academics?

 

[1] Martyn Jessop, “Digital Visualization as a Scholarly Activity,” Literary and Linguistic Computing 23, no. 3 (2008): 283.

[2] Ibid., 289.

On the Potential Benefits of “Many Eyes”

In 2007 IBM launched the site Many Eyes, which allows users to upload data sets, try out various ways of visualizing them, and most importantly, discuss those visualizations with anyone who sets up a (free) account on Many Eyes.  As professor Ben Shneiderman says, paraphrased in the New York Times review of Many Eyes, “sites like Many Eyes are helping to democratize the tools of visualization.”  Instead of leaving visualizations to highly trained academics, anyone can make them and discuss them on Many Eyes, which is a pretty neat idea.

Many Eyes offers users the ability to visualize their data in 17 different ways, ranging from the Wordle-style word cloud to maps, pie charts, bubble graphs, and network diagrams, just to name a few.  Other sites and programs will let users create charts in some of these ways, Microsoft Excel for example, but Many Eyes offers the advantage of multiple types of visualization all in one place.

Additionally,  people in disparate locations can talk about the data sets and visualizations through comments.  The comment feature even allows for the “highlighting” of the specific portion of a visualization you might be referencing. The coolest feature of Many Eyes is that anyone can access and play with data uploaded by anyone else, in the hopes that “new eyes” will lead to surprising and unexpected conclusions regarding that data.

If you create an account on Many Eyes, you can access their list of “Topic Centers,” where people who are interested in data sets and visualizations relating to specific topics can interact and comment with one another, as well as link related data sets and visualizations.  However, a quick perusal of the topic centers shows that the vast majority of topics are being followed by only one user.  The few topics that have more than one user seem to be pre-established groups with specific projects in mind.

Unfortunately, it appears that a crowdsourcing mentality, where people who don’t know each other collaborate to understand and interpret data, hasn’t really materialized.  In this IBM research article, the authors even concede that Many Eyes “is not so much an online community as a ‘community component’ which users insert into pre-existing online social systems.”  Part of the difficulty in realizing the democratizing potential of Many Eyes might be a simple design problem: the data sets, visualizations, and topic centers display based on what was most recently created, rather than on what is most frequently tagged or discussed.  This clutters the results with posts in other languages or tests that aren’t interesting to a broader audience.  Many Eyes developers might adopt a more curatorial method, linking to their top picks for the day on the front page in order to spark interest in topics of universal appeal.  But maybe the problem is more profound; what do you think?

Ultimately, I’m not sure how relevant Many Eyes is to historians.  Asking a democratized collection of strangers to collaborate on visualizing your data seems unlikely to pay off, based on the usage history of the site.  However, groups of researchers who already have a data set to visualize and discuss might be able to make use of the site for cliometrics-style research.  Classrooms and course projects in particular can benefit, since the site is relatively easy for people with little technical skill to use.  What do you think?  What other applications do you see Many Eyes having?  How relevant will it be for your work in the digital humanities?