There’s an App for That, But Why?

Stories from Main Street and The Will to Adorn are Smithsonian Institution projects that differ greatly in subject matter and execution but share one key element: both encourage members of particular marginalized groups to contribute their own stories. Both projects have websites and accompanying mobile apps, and it is in these apps that the two projects are most similar.

Stories from Main Street Website

The Stories from Main Street website and app are offshoots of the Smithsonian Institution Traveling Exhibition Service’s Museum on Main Street (MoMS) program. MoMS works to bring the Smithsonian’s traveling exhibitions to cultural institutions serving the small towns (defined as having an average population of 8,000 people) of rural America. The Smithsonian staff envisions that these programs bring the residents of such towns together to share their stories with one another, fostering community pride. The MoMS website allows people from anywhere in the country to contribute photos, videos, audio recordings, and written stories about their experiences in rural America and to explore the content contributed by other participants.

The Will to Adorn project, begun in 2010 by the Smithsonian Center for Folklife and Cultural Heritage, “explores the diversity of African American identities as expressed through the cultural aesthetics and traditional arts of the body, dress and adornment.” The project appears to have culminated with an exhibit, demonstrations, workshops, performances, hands-on visitor participation activities, and daily fashion shows at the 2013 Smithsonian Folklife Festival on the Mall in Washington, D.C. The website explains the questions and goals the project addresses and provides some sample photo and video content, but it does not offer a means of exploring the full content of the project.

Will to Adorn Website

While both websites are celebratory in the sense of bringing to prominence topics that have generally been excluded from mainstream historical and cultural practice, the projects and websites are very different in tone. Unlike Stories from Main Street, Will to Adorn presents itself as a scholarly endeavor, with researchers actively seeking to distill meaning from the evidence they gather. I did not find any participatory element on the Will to Adorn website, whereas collecting user content and allowing site visitors to explore it is the raison d’être of Stories from Main Street, which to me has a very haphazard feel. Specific geographic location, down to the level of the town, is also an important aspect of the Stories from Main Street content, whereas local geography does not appear to figure significantly into the Will to Adorn website.

Main Screens of Both Apps Compared

Despite the stark differences between the two websites, the mobile apps for these projects are actually quite similar. Both allow the user to record their own stories related to the project’s topic and to listen to stories that other people have contributed. Aside from the imagery, the apps’ presentation is nearly identical. The Stories from Main Street app was built using Roundware, which bills itself as “an open-source, participatory, location-aware audio platform” and does essentially what both of these apps do: record audio, attach some metadata, upload content, and let the listener select, to a certain extent, the content that will be streamed. Will to Adorn was almost certainly also built with Roundware, though I did not see a credit for it in the app.

Recording content to contribute is (theoretically) easy with these apps. Start by pressing the “Add a Story” button on the main page. On Stories from Main Street, you then choose from six general topics: Life in Your Community, Sports – Hometown Teams, Music – New Harmonies, Food – Key Ingredients, Work – The Way We Worked, and Travel – Journey Stories. You then identify yourself as a man, woman, boy, or girl, and finally you choose one specific question (from a provided list of four to six) about the subject you selected. Doing so brings you to the recording page, where your question is displayed at the top. When you’re ready, press the record button (I recommend the large button at the bottom; I had trouble with the smaller buttons in the middle of the page), and after a three-second countdown you will have a minute and a half to discuss your chosen question. When you’re done, press stop; you then have the option of listening to what you recorded, rerecording it, or uploading it (or you can exit without posting by hitting the cancel button at the top of the screen, which takes you back to the main menu).

Stories from Main Street – Screens to Add Story

I chickened out at the point of actually uploading content. I’m not from a small town, and although I did record an answer to one of the Travel questions, I was afraid of sounding like an Easterner mocking something from Midwestern culture that I don’t understand. I gather that the app uses your phone’s GPS to attach location information to your recording when you upload it. This is curious, because geography is such an important part of the Stories from Main Street website, yet a person may be inspired to record something about their town while away from home, or conversely may wish to talk about a small town they once visited from the comfort of their own home. The content may therefore carry an inaccurate geolocation if it is based solely on the phone’s position at the moment of recording. On the website, by contrast, users can type in the appropriate location for their content.

Will to Adorn – Some Metadata Choices

Will to Adorn works similarly to Stories from Main Street, although the metadata it collects is a bit more nuanced. After pressing “Add a Story,” the app asks for your age (15–19, each decade in between, and 60+). It asks for gender, but in addition to the expected male and female, there are also options for “trans” (with an asterisk that goes unexplained) and “other” (which could mean all sorts of things). You then select one of six broad geographic areas (Alaska and Hawaii, I guess, have to content themselves with being part of the West). Will to Adorn gives you a choice of only five questions to answer. However, and this is kind of key, once I made all of these selections, the screen looked like it was going to send me to a recording screen similar to Stories from Main Street. Nope.

Will to Adorn Recording Screen – Um… Do not get your eyes checked. The screen is indeed all black.

Black empty screen of doom. I have to presume that the app was tested before release, so maybe it’s just not compatible with my iPhone 6, but not being able to record on an app whose whole purpose is recording is rather a problem. And I was more willing to answer and submit to this project (“What are you wearing?” seems like a mostly harmless question). At any rate, images on the Will to Adorn website show recording pages nearly identical to those in the Stories from Main Street app, although you may get up to two minutes to discuss your clothing choices. Website text also indicates that you can attach photos to your story submission, but the app does not show user images anywhere, and I did not find on the website either the archive of user submissions or a way to record and upload stories, so I cannot verify this aspect of the app’s functionality.

As for the listening aspect of these apps: after you press the “Listen” button on the main page and wait what seemed, in both apps, like a rather long time for content to load, the app starts playing recordings from the collection. Stories from Main Street defaults to the recordings in the “Life in Your Community” section. Users can flag the content or like it, and if you’re inspired to record your own story, there’s a record button there too.

Listening Screens – Both Apps

The user does have the option to choose, to a certain extent, which stories they will hear. On Stories from Main Street, the “Modify” button at the top of the screen lets you select one of the six content areas and further narrow down by the specific question(s) you want to hear about. The “Refine” button in the same spot on Will to Adorn lets the user narrow by age, gender, region, and specific question. No audio played for the first two questions I selected on Stories from Main Street, so perhaps no one has actually contributed stories on those particular topics, but I did have success on my third try. Interestingly enough, in the sports section there were more question options to listen to than there were to record on your own. And in the Travel section, the “favorite journey” answers were mostly about going to a large city rather than a small town.

I’m not sure that anyone is actively curating the user responses. In one recording I heard on the Stories from Main Street app, some kids were messing around and one of them used a slur. In another, a young man discussed how he and his friends as teenagers would go to the river, drink moonshine, get high, and watch alligators. One snippet was simply “[town name] sucks.” And a recording I heard on Will to Adorn started out as a heartfelt commentary about a certain style of dress but then suddenly turned into a profanity-laden tirade on the subject. I’m not sure whether this is a matter of not wanting to censor what people say or whether the Smithsonian is simply relying on the community to use that flag button to police the content. There also doesn’t appear to be very much content to curate on either site. According to the Stories from Main Street website, there are 519 contributions in the archive. Will to Adorn appears to have far fewer stories than that, as I heard much of the content at least twice while listening.

While some of the stories in the Stories from Main Street and Will to Adorn archives are genuinely interesting, honestly, I don’t really get the point of either of these apps. The stories are snippets of two minutes or less that are for the most part divorced from context. Neither app displays any metadata about the audio that’s playing, so even if particular facts are known about the contributor of a recording, the listener won’t have that information. And the contributors don’t always give you much information in their recordings. For example, if a person opens their recording in Stories from Main Street with “In my town…,” well, which town? How would I know if the speaker doesn’t actually say it? Even assuming the geolocation attached to the recording is correct (an issue with Stories from Main Street that I discussed earlier), the listener never sees it and has no good way of determining whether the speaker is talking about life in Boise, Birmingham, or Burlington (and Wikipedia tells me that there is a Burlington in 24 U.S. states!). Maybe I’m missing the forest for the trees, but I’m a details kind of person.

Many of the recordings on Will to Adorn sound like they were made at the Folklife Festival, and the participants there were generally asked by volunteers about their name, age, and location and were sometimes asked to elaborate on their responses. But the following is the extent of one non-Folklife Festival story on Will to Adorn: “How I feel when I have it on—it makes me feel beautiful.” Have WHAT on? Disembodied from all context, this particular snippet doesn’t seem to me to add much to the conversation about creating meaning and forging identity through one’s attire.

Another interesting context issue with Will to Adorn concerns race. The project, as explained on the Will to Adorn website, specifically concerns how African Americans express themselves through dress and other adornment. The app invites anyone to contribute their story, which is perfectly fine. But the app provides no way to self-identify by race or ethnic/cultural background unless you choose to speak to that issue in your recording. So I don’t understand how user contributions added to the project’s database from the app could be marshaled as evidence for the original conception of the project.

Context for these stories aside, what I don’t understand is not why “there’s an app for that” but rather why the public would download either of these apps and use them over and over again. Sure, one’s smartphone provides a really convenient way to record very short stories, but I don’t see much reason for an individual to do this more than once or twice. Neither app has an essential tie to a physical place that would prompt a user to open it and learn something about that location through the project’s content. There could have been one in Stories from Main Street, but the app offers no way to search for a particular location to find content related to a place where you happen to be or want to know more about. Stories from Main Street does provide a link on its main page to the project’s website (Will to Adorn does not), where visitors can search for audio on a map. Similarly, given the limited amount of content in these collections, I’m not sure why anyone would use the listen function on either app more than a couple of times, particularly on Will to Adorn. I’m not saying that the effort to collect and share people’s thoughts through these apps is uninteresting or completely devoid of value; I’m just struggling to see why someone might keep these apps on their phone and use them more than a very few times.

What do you think? How might these apps be improved to increase their current interest and/or enduring value? Without a great deal of context, what can we learn about the subject matter of the projects by listening to these recording snippets?


Dude, Where’s My History?: A Look at Historical Mapping Interfaces

The advent of digital technology brought the exchange of knowledge and ideas into homes at an astonishing new level, delivering information and services straight to users who previously might have had to leave home to seek them out. The advancement of mobile computing furthered this trend, freeing that information from any single physical place. Many cultural heritage institutions have noticed these changes and adapted, becoming not only places that house information but resources that increasingly push it directly to their patrons wherever they may be. The affordances of this new media also allow institutions to bring their materials into geographic space, adding another layer of interpretation and context while bringing to the public’s attention that history is all around us.

Histories of the National Mall

One site that takes advantage of mobile applications and a spatial understanding of history is Histories of the National Mall, created by the Roy Rosenzweig Center for History and New Media and run using our old pal, Omeka. Taking their own advice from their report Mobile for Museums, the site is device independent: it runs in a web browser, allowing for use across desktop, laptop, and mobile, rather than as a native downloadable app that needs tailoring for each device. As the title indicates, the site is an interface for learning about the histories of the National Mall through maps, explorations (short examinations based on questions people might have about the Mall), people, and past events. Most of these sections can be filtered into different historical periods. Some of my favorite sections, much to my chagrin, are the great explorations of unbuilt designs for the national monuments. There are also a number of scavenger hunts that send you to a specific part of the Mall with images of places for you to find. Once you find the places pictured, you tap or click the images and can read or listen to more about them.

Histories of the National Mall Map

The key feature of the site is the map, which has over 300 points containing historical information, audio, video, images, and documents. The user can filter by each of those categories as well as by place and event. As stated above, the site is browser based and looks largely the same on a desktop/laptop or a mobile device. Using GPS, Histories of the National Mall centers the map on the user’s coordinates and locates them within historical context. What is good about the map is that there is no set way to explore the points; you can wander around and discover new facts and events that shaped the environment all around you. This allows users to set their own narrative through a serendipitous combination of explorations.


Aris Games

While Histories of the National Mall is a ready-made site, Aris Games is both an open-source application for creating geographically based games and a mobile app for playing them. The back end is not the scary coding or programming that some in the cultural heritage sector may fear, but a simple interface, so even those without technical skills can make games while the infrastructure remains invisible to them. One downside to the Aris-created games not encountered with the Mall histories site is that the mobile app is only available on Apple products, giving it a much more limited audience.

Creating

The Aris editor interface is simple, but it is by no means easy to understand without first reading the manual or viewing the helpful video tutorials on certain topics. It is important to understand the different elements (especially non-obvious ones such as scenes, plaques, and locks) and how they function so you can create a working game. The games are largely tours or explorations of certain areas. Building a game is based on creating “scenes,” or different scenarios the user can encounter as they travel around. You can write conversations for the user to have at each location that can lead them further into the game. All of the features you create can be mapped to a specific location to create an exploratory geographic environment. This feature is unfortunately cumbersome to use: the only ways to place your points are entering precise GPS coordinates or dragging the point to where you want it, with no way to search for your general location to get there quicker. Also, there is no way to see how your game will look in the app without having and opening the app itself. Since I have an Android device, I needed to borrow an iPhone to do this. Despite these drawbacks, the Aris editor is a good way to make games without requiring programming experience.

Aris Editor
Playing

Playing the games is fairly simple but, as mentioned above, does require downloading their Apple-only app. Inside the app you can play any number of games created with the editor. You can find games based on your geographic location, sort by popularity, or search for a specific title. Aris provides a demo that will give you a good overview of what it is like to play these games (avert your eyes if you dislike semi-obsolete media):

Overall, Histories of the National Mall and Aris Games are good examples of the creative ways spatial history and mobile technology can work together to engage the public. By embracing this trend and the ubiquity of mobile phones, institutions can add layers of meaning, attract a wider audience than before, and bring content out from behind closed doors.


Or How I Learned to Stop Worrying and Love the Glitch

This week I attempted to recreate the results of glitching files as demonstrated in this blog post by Trevor Owens. As we shall see, I ran into a few difficulties in reproducing the experiment exactly. But first, what is a glitch? According to Wikipedia, “A computer glitch is the failure of a system, usually containing a computing device, to complete its functions or to perform them properly.” In this post, I chronicle my attempts to create glitches by using files in ways other than their intended purpose, to reveal what we can learn about the formats themselves.

A Textual Audio Experience

I started by trying to view an .mp3 as a .txt file. I could not use the same audio file as in the original blog post because the Library of Congress no longer provides direct downloads, having switched to streaming-only access. Instead, I randomly selected an .mp3 of the Rush classic Tom Sawyer. From there I changed the file extension to .txt and opened it with my computer’s text editor. Here is the result:

A real toe tapper

Just as with the audio file Owens used, much of the information in the .mp3 is a confused mess, the result of the text editor’s attempt at interpreting the bits as alphanumeric sequences. However, along the top there is some embedded metadata, such as the names of the song’s writers: Alex Lifeson, Geddy Lee, Neil Peart, and Pye Dubois. These bits are meant to be read as text and therefore can be read by the program.
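You don’t even need a text editor to pull this readable metadata out. Here is a minimal sketch of the same idea in Python (the filename tom_sawyer.mp3 is my stand-in; substitute any .mp3 you have): it reads the first couple of kilobytes of the file, where ID3-style metadata typically lives, and prints any runs of printable characters it finds.

```python
# A sketch, not a proper ID3 parser: scan the start of an .mp3 for
# runs of printable ASCII, which is where tags like the song's
# writers show up amid the otherwise unreadable audio bits.
import re

with open("tom_sawyer.mp3", "rb") as f:  # hypothetical filename
    head = f.read(2048)

# Find any run of six or more printable ASCII characters.
for run in re.findall(rb"[ -~]{6,}", head):
    print(run.decode("ascii"))
```

Real ID3 tags have a defined binary structure, but for eyeballing what the text editor showed us, a printable-run scan is enough.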

Where the Trouble Began

In the next step, I tried to view an .mp3 and a .wav file as .raw images. Because I did not use the same audio file as the original blog post, I did not have a .wav file to accompany my .mp3 for this part. Rather than simply changing the extension on my Tom Sawyer .mp3, I used a media encoder to convert the file to a .wav. From there, I changed the extension on each to .raw and attempted to view them in an image editor. Unfortunately, these files would not open in any of my image-editing software. Borrowing a computer that had Photoshop, I was able to view the results seen below:

On the left: .mp3 as .raw, on the right: .wav as .raw

Just as above, an image editor can do no better than a text editor when attempting to read audio files in a visual manner. Unlike Owens’s results, my two images look largely the same. The .wav as .raw did produce a large black bar at the top of the image, which I assume is due to the difference in original format. I thought the similarity might be because I converted my .mp3 into a .wav, so I downloaded a different .wav file directly from the web and repeated the steps, yet it still yielded the same results.
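If you don’t have Photoshop handy, you can approximate what it does with .raw files in a few lines of Python. This is a sketch assuming Pillow and NumPy are installed, again using my hypothetical tom_sawyer.mp3: treat every byte of the audio file as one grayscale pixel and wrap the bytes into rows of a fixed width.

```python
# Render an audio file's raw bytes as a grayscale image, mimicking
# what opening the file as .raw data in an image editor does.
import numpy as np
from PIL import Image

WIDTH = 512  # arbitrary; .raw data carries no image dimensions

data = np.fromfile("tom_sawyer.mp3", dtype=np.uint8)  # hypothetical file
height = len(data) // WIDTH
pixels = data[: WIDTH * height].reshape(height, WIDTH)
Image.fromarray(pixels, mode="L").save("tom_sawyer_as_image.png")
```

Because the width is arbitrary, the stripes shift as you change WIDTH, but the overall texture of the format comes through either way.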

Complete Failure

While I was able to replicate most of the outcomes in the preceding section, I failed at the next step: editing an image with a text editor. The link Owens listed for the image in his post was broken, but luckily the original image was also available in the post. I downloaded this image and changed the extension from .jpg to .txt. I opened the file in the text editor, deleted some of the text, and changed it back into a .jpg. Unfortunately, the file would not open in any of the image software I tried, including Photoshop. I kept receiving error messages that the file was unsupported, corrupted, etc. I tried these steps again, this time copying and pasting parts of the text back into itself, or even deleting only a single character. I even attempted the same steps with a different image entirely. Alas, all my attempts failed to produce a glitched image that would open.
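One plausible culprit, though I can’t be certain it’s what happened here: a text editor may silently re-encode bytes that aren’t valid text when it saves, corrupting far more of the file than intended, including the header an image viewer needs. A safer way to get the same effect is to edit the bytes directly; here is a minimal sketch in Python (desert.jpg is a stand-in filename):

```python
# Glitch a JPEG without a text editor: leave the header region alone
# and nudge occasional bytes deeper in the file, then save a copy.
with open("desert.jpg", "rb") as f:        # hypothetical filename
    data = bytearray(f.read())

for i in range(1000, len(data), 5000):     # skip the first ~1 KB (header)
    data[i] = (data[i] + 7) % 256          # alter every 5000th byte

with open("desert_glitched.jpg", "wb") as f:
    f.write(data)
```

No guarantees: land on the wrong byte (say, a JPEG marker) and the file may still refuse to open, which could be exactly what happened in my text-editor attempts.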

Tom Hanks typing

Conclusions

While I was not able to reproduce all the tasks that Owens accomplished in his blog post, I was still able to see his main point: screen essentialism masks the fact that digital objects are more than what they appear to be on the screen. The different ways the files can be read demonstrate the different structures of the formats, even when the files look the same on screen. My failure in this process made me realize how much the public is shielded, and pushed toward a limited understanding, by programs that are meant to read certain files only in certain ways. Perhaps my failures are just the result of well-working software that only lets you produce the intended outcome from these files. I encourage everyone to try glitching some files. Can you do it?

UPDATE: I was able to fix the problems I mentioned in this post. Here are the results:

Glitched Image
Correct comparison: .mp3 as .raw on left, .wav as .raw on right

To see how my issues were fixed, see the comments section below. Many thanks to Nick Krabbenhoeft for helping me fix the problems.

Crowdsourcing History: Take a Look at What’s on the Menu?

This week’s practicum websites all employ some form of crowdsourcing to create content and/or augment existing content. Anyone with a computer and an internet connection can contribute to these projects. On Wikipedia, users actually write and edit the content of encyclopedia articles. Flickr is a great site for storing, organizing, and sharing your digital images and video and for immersing yourself in photography; in the Commons section of the site, users are encouraged to add tags and comments to photographs uploaded by participating cultural heritage institutions to provide additional information and help make the images more accessible. Finally, What’s on the Menu? encourages its users to transcribe the records of its extensive collection of digitized menus to facilitate access. As people are less likely to have heard of this project than the other two, I will offer an overview of the What’s on the Menu? site.

What’s on the Menu? (menus.nypl.org) is an effort by the New York Public Library (NYPL) to create a “database of dishes” from its collection of over 45,000 restaurant menus dating from the 1840s to the present, so as to “learn about the foods of the last century to see what these historic menus can teach us about the culinary landscape today.” A library employee began the collection in 1900, amassing more than half of it herself in the first 25 years of the century. Around a quarter of the menus have been digitized, but only basic information such as the restaurant’s name and location and the date of the menu was cataloged. The library would like to make the actual culinary content of the menus available to make it easier for people interested in the history of food and culture to find and study this information. Thus, in April 2011, it launched What’s on the Menu?, inviting members of the public to help transcribe the food and price information on the individual menus.

Figure 1. The Main Page of What’s on the Menu?

Anyone is free to participate in the transcription, and volunteers need not sign up for any account to do so. Just click on the large green “Help Transcribe” button in the middle of the home page (Figure 1) to get started. This brings you to a screen where you can select a menu to transcribe. At the time of this writing in February 2015, there are four menus available for transcription. Clicking on the thumbnail for a menu brings you to the main page for that menu (Figure 2).

Figure 2. The Individual Menu Page

A box to the left of the screen displays the basic cataloging info about that menu. Thumbnails of the individual pages of the menu are presented in the middle of the page as well as in a horizontal row at the top of the screen. To the right, another box displays the master dish list for the menu, which includes the dish, menu page number, and price information for any items that have already been entered for that menu.

Figure 3. An individual page from a menu. The green check marks indicate a menu item that has already been transcribed.

To begin transcribing the menu, click on a thumbnail for a page that appears to have food information on it. A larger version of the menu page appears on the screen (Figure 3), possibly already with some green check marks on it, along with the master dish list on the right. To actually transcribe, click on the first letter of any menu item that doesn’t already have a green check mark next to it. You are then brought to a page with a closer view of the part of the image you just clicked (which you can make even bigger by clicking on the largest “A” button underneath the image on the left; see Figure 4 below). You then enter the dish exactly as it appears on the menu (with a few caveats) in the text box below and the price of the item in the price box. Then click the “Enter Dish” button, and your work will be recorded. The screen reverts to the menu page, and you will see a green check mark next to the item you just entered. Easy!

Figure 4. Transcribing a menu item.

You can transcribe as much or as little of the menu as you’d like. If you feel that all of the menu items on all pages have been transcribed, you can click the “Submit for Review” link, which is hard to see without already knowing where it is (on the left side above the horizontal row of menu thumbnails; see Figure 2). Doing so places the menu in the “Under Review” queue, which offers another opportunity for volunteers to assist with the project.

Reviewing entails checking over the transcribed menus for accuracy. You can find menus to review at the bottom of the site’s main page—either click on one of three menu thumbnails presented there, or if those don’t appeal, click on the words “Help Review” to see all of the menus that await their quality assurance check. Click on any menu, and then look for typos, price errors, and any missing items. To edit an item, click either the green check mark next to the item on the menu or the pencil icon next to the menu item in the master dish list. Either action brings you to the same screen you used to add new information, only it’s already filled in. Make your changes and click the “Enter Dish” button. There is also an option to delete the dish if it’s an entry that shouldn’t be there. Any missing items can be added in the same manner as before. When you have reviewed the entire menu and believe that it is accurate, click on the “Mark as Done” link, which is again on the left side above the horizontal row of menu images. The status of the menu is now “Done” and the menu items are searchable.

There is one more way volunteers can add information to the menus: What’s on the Menu? is now adding geotagging. This feature can be accessed from the site’s main page by clicking on the “Map our Menus” image (Figure 1). There is also a link at the bottom of the basic cataloging info box on the menu page, just above the social media icons (Figure 2, not visible in screenshot). This brings you to NYPL’s geotagging application (Figure 5, below). You are presented with a large, scrollable image of a random menu. If there is a street address or a general city location for the establishment somewhere in the menu, enter it into the “Address or City” box and click the blue “Find on Map” button. Or, if you determine that the menu is from a ship, train, or airplane, click the corresponding button below the map. Then hit submit, and the next menu in the queue will pop up. Again, you can stop at any time, and there is also a button to skip a particular menu if you’re not sure about the geographic data or just don’t wish to work on that menu.

Figure 5. Geotagging page.

There are multiple ways to access the information in the What’s on the Menu? database. Visitors can search by keyword in the search box in the upper right corner of the site’s pages; place multiple terms in quotation marks for an “and” search, otherwise results will be returned for term A “or” term B. The menu bar in the page headers also includes tabs for Menus and Dishes. Both of these result sets can be limited to a particular decade. The Menus page can be further limited by place in the processing queue (new, under review, or done) and sorted by date, name, or dish count, while the Dishes page can be sorted by date, name, popularity, or obscurity. Clicking the link for the Explore section at the bottom of the main page will also bring you to the Menus page.

Also at the bottom of the main page is a section called “Today’s Specials.” While this section does not link to another page (logically I would think it would link to the Dishes page), it offers a small sampling of some of the dishes from the menus. Clicking on any of these dishes leads you to what is perhaps the most interesting part of What’s on the Menu?, a page that provides information about that dish, most of which is gleaned from the menus as transcribed by the public (Figure 6).

Figure 6. Individual dish page.

The left side of the page shows the lowest and highest prices for that item and the earliest and latest date that the dish appears on one of the menus. There is also a placement map illustrating where on the menu the dish appears, which may illustrate the relative importance of the item. At the center is another very cool feature, a graph illustrating the frequency with which the item appeared on the menus by year, which can illuminate culinary trends over time. Beneath this are the thumbnails of the menus on which the item appears, which of course link to the full menu. The right side of the page has a list of related dishes to account for slight stylistic differences in naming, word order, and punctuation on the various menus. Finally, at the bottom left is a “more information” section which offers a series of links outside of What’s on the Menu for a variety of additional information related to your dish of choice, including images, recipes, books, restaurants currently offering that dish on their menu, and “general information” with links to Google, Wikipedia, and Twitter.

Overall, the What’s on the Menu? project is pretty interesting. Looking through the old menus provides a fascinating glimpse at history, and not just in terms of menu offerings and their prices at various points in time. Many of the menus are themselves beautiful examples of artwork. They also represent not only traditional restaurant menus but also transportation menus (ship, train, and plane) and banquet menus for special events. In looking through the website, one imagines that there is quite a bit of cultural history to be learned from these artifacts.

There are of course limitations to the database as well. More than half of NYPL’s menu collection remains to be digitized, cataloged, and transcribed, and the sampling of menus is by no means scientific. The collection is primarily, although not exclusively, focused on New York City (there are menus from around the world in the collection). While food trends certainly vary over time, they vary from region to region as well. But even for New York City, one wonders how representative the menu collection as a whole is. More than half of the collection was amassed by a single NYPL employee in the period from 1900 through 1924, so coverage of the years before and after that period cannot be nearly as comprehensive. One wonders, too, whether upper-class, high-end establishments and events are wildly overrepresented in the database. In addition, the library currently is not capturing section headings, which would be useful in classifying dishes (appetizer, main course, salad, dessert, etc.). Non-food information, such as descriptions of artwork, marginalia, and other menu text that does not represent a food, beverage, or smoking item, is also not being captured in the record, which makes it easier to concentrate on developing the food database but potentially limits other cultural information that could be gleaned from the collection. Given these limitations, historians should exercise extreme caution and avoid overgeneralizations when drawing conclusions from this dataset.

I feel that a few website improvements might enhance the user experience of What’s on the Menu? While the color scheme, such as it is (the page is actually mostly white space and black text), is attractive enough, I don’t believe the light olive-green color used to denote links stands out enough, particularly on the section headings for “Help Review” and “Explore” on the main page. I also do not understand why the “Map Our Menus!” and “Today’s Specials” section titles are not hyperlinks. On the menu pages, the “Submit for Review” and “Mark as Done” links were not obvious to me at all; it might be better to make these into colored buttons, similar to the “Help Transcribe” button on the main page. I’m also not sure how a user would know from looking at the menu page whether or not that menu had been geotagged.

The Help section I thought was clear and well written, and on the menu pages for transcription and review, there is a small, red button-like area that the user can point at for brief instructions on completing the task at hand, which is also good. However, you must keep your cursor pointed at this red area in order to read the instructions, and I found no way either to keep that box displayed or to scroll down to see the bottom part of the box if your computer does not display the whole thing, as mine did not. I found that aspect quite frustrating, although fortunately the Help tab in the page header is directly above the red help area, and the task is easy enough to learn and remember that the need for the quick help should be minimal. Finally, they do have a blog with a few interesting discussions about some of the materials in the collection, but sadly it has not been updated since 2013.

In terms of the crowdsourcing aspect of the project, I think What’s on the Menu? illustrates both the benefits and, to a certain extent, the pitfalls of relying upon anonymous volunteers to create metadata for a database like this. NYPL has actually had a great response to this project, with 22,000 menu items transcribed in the first three days and more than 800,000 dishes in the first year. Crowdsourcing the transcription has enabled NYPL to move forward with the database while reserving paid staff labor for presumably more complex archival tasks. As tasks go, the transcription, where visitors are asked to enter each item (essentially) exactly as it appears on the menu, is pretty easy. I saw very few typographical errors in the menu information. There will, however, be a good deal of data cleanup required to standardize some of the dishes for spelling and punctuation differences between menus, although this would be an issue whether the labor was crowdsourced or not.

Given that there are currently only four menus available for transcription, it would seem that the digitization of the menus may not be keeping up with the volunteers’ appetite for completing transcriptions. Another issue with the crowdsourcing is the misinterpretation of prices on the older menus. The four menus currently open for transcription are all from Adam’s Restaurant in 1913. Most of the prices, even for the steaks, are expressed in cents, not dollars, which may not be obvious to a casual user who is not thinking historically. On at least two of the four menus, many of the prices were transcribed as dollars rather than cents. Such errors can easily be fixed by another person at both the transcription and review stages, but if they reach the done stage in this state, then someone must first notice the error and then email a staff member to correct it. Another drawback, at least with these four menus, is their high number of menu items. With so many entries, the check marks get cluttered on the page and it becomes easy to miss items. Transcribers also skip around the menu, preferring to start a fresh menu section rather than finish an incomplete one, making omissions more likely if the person who finally clicks the “Submit for Review” link does not look over the menu carefully.

Overall, though, I don’t think there’s much that a volunteer transcriber could do to really mess up this database. There should be at least one other pair of eyes looking over all of the transcribed data for errors and omissions, so I think the likelihood of any major errors making it into the completed data is quite low. All three tasks that users are invited to participate in—transcription, review, and geotagging—are easy, and I actually had fun and felt like I was doing something useful when I engaged in them. To me, What’s on the Menu? appears to be a historical project ideally suited to the crowdsourcing concept. Do you agree? Tell us your thoughts about What’s on the Menu? and the idea of crowdsourcing labor on digital history projects in general in the comments.


PhilaPlace.org

Created by the Historical Society of Pennsylvania, PhilaPlace.org is an interactive Web site that allows visitors to explore—and actively participate in—the history of Philadelphia through multimedia formats including Google maps, historical essays, audio and video files, and photographs. Although PhilaPlace.org is focused on telling the story of two specific areas—Old Southwark and the Greater Northern Liberties, both historically immigrant and working-class neighborhoods—the site contains information on multiple neighborhoods and streets. It aims to promote the rich cultural history of the city’s spaces and sites, and it provides users a glimpse into how their neighborhoods evolved over time.

The site allows users to investigate this history in many different ways. Visitors can search by collection, neighborhood, topic (including cemeteries, immigration and migration, education, and over forty-nine oral interviews), type of media (image, audio, video), and contributor (Historical Society of Pennsylvania, partners, and even community members), all of which provide interactive information on the historical events, buildings, and people associated with each location or topic. A blog, with essays written by the PhilaPlace team, partners, and community scholars, provides further information and in-depth stories about local history.

The most exciting and interactive feature of PhilaPlace is the Google map interface. Geographical representations of the site’s historical information allow users to take a virtual historic tour of contemporary Philadelphia. The home page contains a modern map of the city in which users can discover historical information about particular pinned locations. Prominent historic sites, buildings, etc. are pinpointed and linked with stories and media files that unveil that location’s history, both past and present. Visitors can also explore maps by topic, focus in on specific neighborhoods or individual streets, and view city maps from 1875, 1895, 1934, and 1962. Additionally, interactive map tours are featured that take visitors through two neighborhoods and over three centuries with photographs, oral interviews, videos, and stories.

A key mission of PhilaPlace is to “encourage investigations of place as a lens to understanding history and culture,” and the creators have done much through their site to promote this through community education and involvement. Not only does the site contain interactive media, it also features a section for educators. Lesson plans and school projects, aligned with Pennsylvania state standards, are provided for grades 6-12. Teachers can use the interactive exhibits to engage students—both virtually and physically—in local history through GIS mapping projects, public art and cultural expressions, and treasure hunts through particular streets.

All visitors are also encouraged to “map their own stories in place and time.” The site contains oral interviews with local historians, immigrants, and other community members whose knowledge and experience informs and enriches the history of these Philadelphia neighborhoods. Any user can also “add a story,” meaning they can upload images, videos, audio files, and other information regarding specific places or street addresses. One hope of the creators is for the site to eventually encompass information on the entire city, in large part from contributions made by local community members. This “add a story” feature, however, is not prominent on the site.

Searching through the site, I found myself captivated by it. It brought the neighborhoods and history to life and provided me a rare glimpse into the Philadelphia streets that my great-grandparents traversed. Its interactive content was easy to navigate, and the variety of media, topics, and stories invites users of all kinds—not just academic historians. In this way, the site impressively accomplishes its mission of producing history from a community perspective and for a diverse audience. The inclusion of modern community voices and experiences also enriches the site. This, however, could be done better. The “add a story” feature could be more visible on the home page and throughout the site. Users could also be invited to expand the site in other ways—such as asking volunteers to transcribe archival documents or oral histories, encouraging students to comment on their experiences using the site for school projects, or including a genealogy feature that allows users to share information about specific individuals or families and connect with other researchers.

These ideas and additions, however, raise other challenges and questions about how to maintain and update sites like this in the most effective ways. The Historical Society of Pennsylvania also hosts many other websites, projects, etc., so how can staff best reach users and continuously connect with them and encourage participation, both virtually and physically? What are the advantages and disadvantages to active community member participation on Web sites like PhilaPlace? In what other ways could staff members improve this kind of project?

Ultimately, though, PhilaPlace presents an innovative digital archive of Philadelphia’s rich history—juxtaposing images and stories of the past and present for users to interact with and explore.

Born-Digital: The September 11 Digital Archive

A collaboration between the American Social History Project at the City University of New York Graduate Center and the Roy Rosenzweig Center for History and New Media at George Mason University, with funding from the Alfred P. Sloan Foundation, the September 11 Digital Archive represents a significant turning point in the realm of the online archive. While previous digital humanities efforts had focused on digitizing (essentially duplicating) materials from existing physical archives with the goal of promoting broader access, the events of September 11, 2001 occurred at a moment when born-digital materials were increasingly the primary mode of cultural production. With relevant artifacts simultaneously easier to collect and more ephemeral, a different approach to online archiving was required. As such, the September 11 Digital Archive represents a number of interesting steps forward in the conceptualization of the online archive generally. The following are some of the characteristics and issues that struck me when exploring the site.

1. What Just Happened?

The September 11 Digital Archive set a new standard for immediacy in archival practice. With the plethora of born-digital content and the speedy launch of a simple user interface, materials and personal testimonials were collected in a temporal proximity to actual events that was previously unimaginable. While the campaign to record the experiences of Holocaust survivors often captured recollections more than fifty years after the fact, this archive features emails sent within days or weeks of the attacks. In my estimation, this has the potential to capture a different kind of cultural memory than recollections shaded by the passing of time and subsequent events.

2. Abundance: Drowning in Primary Resources

As discussed by Rosenzweig, digital archives often engender heated debates between archivists and historians over what to save and how much. In the case of the September 11 Digital Archive, it is obvious that the creators erred on the side of abundance, with over 150,000 born-digital artifacts collected in the form of photos, video, audio, and personal recollections and correspondence.

3. The Archive is Dead.

The September 11 Digital Archive introduces some interesting questions for digital archivists about whether or when archiving should end. According to the website, the project responsible for creating this resource ended as of June 2004. While user submissions are still possible, they state that the website is no longer being updated. How do we decide that a digital archiving project is over? Is it a practical decision related to funding windows? Is it a scholarly decision that the period for producing valuable contributions to an online archive has closed? Can we ever consider the archive closed if materials can still be submitted? What happens when an inactive digital archive becomes outdated in terms of format or user interface? Does it affect the power of the resource if no one is adapting the vast quantities of materials collected to new types of search algorithms or other user interfaces that would enhance interaction with those collections?

4. Collaboration is king.

Beyond the original partnership between the two universities and their private funding source, the September 11 Digital Archive illustrates the essential role collaboration can play in determining the success of digital archiving initiatives. While the original project is technically over, the material was added to the permanent collection of the Library of Congress in September 2003. The archive’s website lauds this partnership as a means of ensuring the “long-term preservation” of the collection and as a public acknowledgement by the Library of Congress of the importance of born-digital content. Of course, as we’ve been reading this semester, there is still some uncertainty about the true meaning of “long-term preservation” in regard to digital materials.

The September 11 Digital Archive also forged other collaborative relationships that enriched the resource. Under Special Collections, the site describes a collaboration with NPR that produced The Sonic Memorial Project, an aggregation of sounds related specifically to the 9/11 attacks on the World Trade Center. There is also discussion of how the September 11 Digital Archive served as the “Smithsonian Institution’s designated repository for digital materials related to 9/11,” linking yet another legacy cultural institution with the project. This kind of centralization of materials into a single resource seems like an excellent model for future digital archiving projects and a useful means of overcoming the fragmentation of information across disparate sites that is so typical of the Internet.

5. Resources NOT Narratives.

It is also interesting how clearly the September 11 Digital Archive delineates itself as a collection of resources rather than a curated narrative of events like one would expect to find in a museum or a history textbook. In its FAQ section, the site directs visitors with specific questions about the timeline of events, the origins and identities of the terrorists, the activity of first responders, and the rebuilding of the World Trade Center site to resources created by others, particularly the websites of the New York Times, CNN, and the Washington Post.

6. Content Does Not Always Equal Context.

The final point I want to make about the September 11 Digital Archive is that there are still a lot of unanswered questions about how best to standardize vast quantities of born-digital materials for ease of search and uniformity of display on the web while still retaining important contextual information. In the example of the “Satan in the Smoke” collection of emails, you can see how the visual layout of the document below is unlike anything we associate with reading email, whether in a web interface, a desktop client, or on a mobile device.

Furthermore, embedded content has been pulled out and placed elsewhere on the archive site, and all identifying information about the sender and recipient(s) has been removed. I’m not arguing that any of these practices are right or wrong, just attempting to draw attention to the importance of context when dealing with born-digital archives, as with any other category of artifacts, and to the unique problems that the ability to strip and reconfigure digital text and data can raise.

This is just a sampling of the issues that exploring the September 11 Digital Archive triggered for me. If you noticed anything that I did not touch on, please feel free to contribute to the discussion in the comments below.

Voyeur/Voyant

Have you ever found yourself wishing you could find a web-based text analysis program that was created to theorize text analysis tools and text analysis rhetoric?  If such a specific desire has ever burdened you, fret no more!!  Your wish has been answered by the collaborators of hermeneuti.ca with their creation of Voyeur!

How does Voyeur work?  Users paste one or more URLs or a text into the “add text” box and click “reveal” for the program to calculate the frequency of words in the text.  The results are shown in two ways: one is visual (like Wordle), with the most frequent words appearing largest in a word cloud; the other appears in the “summary” or “words in the entire corpus” box.  Both list the most common words in descending order.
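At its core, what Voyeur computes is simple enough to sketch in a few lines of Python (corpus.txt is a stand-in for whatever document you feed it): tokenize the text, count the tokens, and list them in descending order of frequency.

```python
# Count word frequencies in a text, the core operation behind
# Voyeur's word cloud and "words in the entire corpus" summary.
import re
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower()  # stand-in file
words = re.findall(r"[a-z']+", text)
for word, count in Counter(words).most_common(20):
    print(f"{word}\t{count}")
```

Run on most English prose, the top of this list is dominated by words like “the” and “of,” which bears on a complaint about Voyeur below.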

Once the data has been analyzed, users have several options for what to do with it.  One of them is exporting it: there are several options for how and where to export the data.  For a historian doing research on multiple documents, this tool is very valuable.  If a user is looking for the frequency of a particular word, they can type it into the “search” box under “words in the entire corpus.”  Double-clicking on a word brings up three more boxes of information: “word trends,” “keywords in context,” and “words in documents.”  If there is a favorite word users want to store, they can click on the heart with a plus sign in the “words in the entire corpus” box to save it.  These features work for foreign languages as well (they must be text; symbols are not recognized).

While Voyeur has many positive attributes, it also has its negatives.  The most frustrating is the limited range of data it can analyze.  Hermeneuti.ca acknowledges the flaws of this website-in-progress, but it claims the ability to break down a variety of web-based texts.  When I entered the URL for a JSTOR article, an error message appeared.  I also tried entering the URLs for blogs, and it would not analyze those either.  I was not able to test an e-book with Voyeur, but I would be interested to see if it could break one down.  Another downside is that the program counts common words like “the,” “and,” “of,” and “in.”  Wordle does not show these common words in the word clouds it creates.  This is not a terrible flaw, but if such words could be eliminated so the focus fell on more meaningful words, that would improve it.

How useful can this program be for historians when it lacks the ability to analyze a variety of documents?  It would not be my first choice for text analysis if there are more versatile programs available.  However, for the documents it can break down, it is useful in comparing multiple texts at one time, finding the most frequent words from the documents combined.  The ability to export the data and store favorite words makes it convenient for some types of historical research.

What do fellow historians think of this?  Can programs like Voyeur be useful even if they have a limited capability for analyzing documents?  What should we be looking for in text analysis programs?


On the Potential Benefits of “Many Eyes”

In 2007 IBM launched the site Many Eyes, which allows users to upload data sets, try out various ways of visualizing them, and, most importantly, discuss those visualizations with anyone who sets up a (free) account on Many Eyes. As Professor Ben Shneiderman says, paraphrased in the New York Times review of Many Eyes, “sites like Many Eyes are helping to democratize the tools of visualization.” Instead of leaving visualization to highly trained academics, anyone can make visualizations and discuss them on Many Eyes, which is a pretty neat idea.

Many Eyes offers users the ability to visualize their uploaded data in 17 different ways, ranging from the Wordle type of word cloud to maps, pie charts, bubble graphs, and network diagrams, just to name a few. Other sites or programs, Microsoft Excel for example, allow users to create some of these chart types, but Many Eyes offers the advantage of multiple types of visualizations all in one place.
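That “many views of one data set” idea is easy to demonstrate. Many Eyes is a web application, but here is a small Python sketch with made-up numbers, using matplotlib to render the same data as both a bar chart and a pie chart:

```python
import matplotlib.pyplot as plt

# One toy data set rendered two ways, in the spirit of Many Eyes
# offering multiple visualization types for a single upload.
labels = ["Maps", "Charts", "Diagrams", "Word clouds"]
counts = [12, 30, 8, 25]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(labels, counts)          # bar chart view
ax1.set_title("Bar chart")
ax2.pie(counts, labels=labels)   # pie chart view of the same numbers
ax2.set_title("Pie chart")
plt.tight_layout()
plt.show()
```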

Additionally, people in disparate locations can discuss the data sets and visualizations through comments. The comment feature even allows for “highlighting” the specific portion of a visualization you are referencing. The coolest feature of Many Eyes is that anyone can access and play with data uploaded by anyone else, in the hope that “new eyes” will lead to surprising and unexpected conclusions about that data.

If you create an account on Many Eyes, you can access its list of “Topic Centers,” where people interested in data sets and visualizations relating to specific topics can interact and comment with one another, as well as link related data sets and visualizations. However, a quick perusal of the topic centers shows that the vast majority of topics are followed by only one user. The few topics with more than one user seem to be pre-established groups with specific projects in mind.

Unfortunately, it appears that a crowdsourcing mentality, in which people who don’t know each other collaborate to understand and interpret data, hasn’t really materialized. In this IBM research article, the authors even hint at how Many Eyes “is not so much an online community as a ‘community component’ which users insert into pre-existing online social systems.” Part of the difficulty in realizing the democratizing aspect of Many Eyes might be a simple design problem: the data sets, visualizations, and topic centers are displayed by creation date rather than by how often they are tagged or discussed. This clutters the results with posts in other languages or tests that aren’t interesting to a broader audience. Many Eyes developers might adopt a more curatorial method, linking to their top picks for the day on the front page in order to spark interest in certain universal topics. But the problem might be more profound; what do you think?
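The design fix suggested above is essentially a change of sort key. A toy sketch with invented records (the field names are hypothetical, not Many Eyes’ actual data model):

```python
# Hypothetical records for uploaded data sets; fields invented for illustration.
datasets = [
    {"title": "test upload",             "created": "2011-04-02", "comments": 0,  "tags": 1},
    {"title": "US census by state",      "created": "2011-03-28", "comments": 14, "tags": 6},
    {"title": "Shakespeare word counts", "created": "2011-04-01", "comments": 5,  "tags": 3},
]

# Recency ordering (what the site does) buries the most-discussed material.
by_recency = sorted(datasets, key=lambda d: d["created"], reverse=True)

# Activity ordering surfaces what people actually engage with.
by_activity = sorted(datasets, key=lambda d: d["comments"] + d["tags"], reverse=True)

print([d["title"] for d in by_activity])
```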

Ultimately, I’m not sure how relevant Many Eyes is to historians. Based on the site’s usage history, asking a democratized collection of strangers to collaborate on visualizing your data seems unlikely to pay off. However, groups of researchers who already have a data set to visualize and discuss might be able to make use of the site for cliometrics-style research. Classrooms and course projects in particular could benefit, since the site is relatively easy for people with little technical skill to use. What do you think? What other applications do you see Many Eyes having? How relevant will it be for your work in the digital humanities?

Flickr

Flickr is a free photo-sharing site. It allows you to create a profile and upload photos in a format that makes them easy to share with friends, family, and the general public. Flickr makes it easy to get started: in addition to step-by-step instructions when creating a profile, it provides a tour of the site that explains all of its features. Aside from uploading photos, you can comment on other users’ uploads or mark especially interesting images as favorites, allowing you to return to them easily. Flickr also lets you add people to photos, which easily alerts other users who may like that image. One feature I found interesting was the guest list, which grants access to images you choose to people who do not have a Flickr account. On that note, the site also offers privacy settings that limit who can see photos on an individual basis.

Two features that I thought were especially useful were the map and linking. Flickr allows you to embed collections of photos from your account on a separate website. This feature is helpful for institutional accounts because they can connect the photos on Flickr to their main webpage; it could also be used by bloggers to share Flickr collections through that medium. The map feature allows you to attach photos to a specific location. Again, this type of technology could be utilized by historical institutions to teach about events or themes through photos.

The search feature is a great way to explore the Flickr world. A search brings up photographs tagged with that term, as well as associated groups, individual photographers, and places. One interesting piece of the commenting feature is that you can comment directly on a photo itself.
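Search and geodata are also exposed programmatically through Flickr’s public REST API, which is one way institutions and bloggers hook Flickr content into their own sites. A minimal sketch in Python using the requests library; the API key is a placeholder you would have to obtain from Flickr:

```python
import requests

API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder; register with Flickr to get one

# Search for photos tagged "civilwar" and ask for geodata where available.
params = {
    "method": "flickr.photos.search",
    "api_key": API_KEY,
    "tags": "civilwar",
    "extras": "geo",       # include latitude/longitude for mapped photos
    "format": "json",
    "nojsoncallback": 1,
    "per_page": 5,
}
resp = requests.get("https://api.flickr.com/services/rest/", params=params)
for photo in resp.json()["photos"]["photo"]:
    print(photo["title"], photo.get("latitude"), photo.get("longitude"))
```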

The Flickr Commons is the most obviously historical aspect of the site. The Commons gives users the opportunity to help describe photo collections from institutions across the globe, such as NASA, the National Archives, the New York Public Library, and the Smithsonian. Users can add tags and comments to any of the photos available in The Commons.

Flickr also allows you to organize photos into sets and collections, as well as create groups to aggregate photos with a common theme. One example of a historically minded collection is the set “African American laborers at Alexandria, near coal wharf”: http://www.flickr.com/photos/nersess/sets/72157603339444029/with/2066890192/

Earning Your Badges: A review of Gowalla

In her article, Julie Meloni reviews the Gowalla site and discusses how its features can be applied as a supplement to education and the visitor experience at museums.

At first glance, Gowalla is a location-based social network, quite similar to the Foursquare application. Users on their mobile devices “check in” at spots near notable locations, such as landmarks, statues, or building sites, receiving a badge or item to add to their account’s collection (these may be redeemed for real-life prizes). Gowalla also offers challenges for earning special badges, and users can create customized trips that give other users tours of specific sites.
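Checking in “near” a location reduces to a distance test against the spot’s coordinates. Here is a minimal sketch using the haversine formula, with hypothetical coordinates and a made-up 100-meter threshold (Gowalla’s real check-in rules aren’t described in the article):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points (haversine formula)."""
    r = 6371000  # Earth's mean radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical spot and user positions for illustration only.
spot = (38.8893, -77.0502)           # roughly the Lincoln Memorial
user = (38.8895, -77.0510)
if distance_m(*user, *spot) < 100:   # allow check-in within 100 meters
    print("Checked in! Badge earned.")
```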

The ability to create these custom trips makes Gowalla a useful tool for education. Because any location can be added to a trip, stops can range from favorite stores to little-known historical markers and sites, allowing users to reconnect others with the history of special places. Since each location has a short informational paragraph along with photos taken by other users, Gowalla can help bring these places to the attention of people who may not know about them.

Meloni suggests several ways that Gowalla could be used by museums to enhance the visitor experience. These include linking objects in an exhibit to their places of origin (and vice versa, where visiting a location may point the visitor to related examples at nearby museums), creating specialized exhibits that collaborate with Gowalla trips, and creating bonus badges earned in addition to the initial badges from an exhibit.

Gowalla can become a great tool for uncovering historical sites and locations for both students and visitors, providing a nice interactive approach that combines sightseeing and learning into a single tour or “trip.” Beyond the ways Meloni suggests in her article, can you think of other ways Gowalla could be applied to learning about locations?