Time Travel and Word Clouds: A Short Guide to Three Fun and Informative Digital History Websites

Practicums Review Post for January 21:

a) PhilaPlace is an interactive website about space, time, and Philly. Created by the Historical Society of Pennsylvania in 2005, it connects stories to places across time in Philadelphia’s neighborhoods. It presents its content in several formats: text, pictures, audio and video clips, and podcasts. It also supports community programs and publications, from workshops for teachers to trolley tours and exhibits.

PhilaPlace focuses on two areas, Old Southwark and the Greater Northern Liberties, neighborhoods that have long been home to immigrants and the working class. Philadelphia was known as a multi-ethnic “workshop of the world.” By using the landscape as a lens, PhilaPlace reveals how each population that arrives in a neighborhood creates new histories, traditions, and memories tied to place. Residents of Philly are encouraged to interact with and contribute to the project. Studies showed that younger users of the website wanted to experience the neighborhoods on their own, while older audiences preferred a guided experience.

Perhaps the most fun aspect of the website is its map. Clicking on any pin brings up well-written, easily digestible information about that place. It feels like walking through the city with a very knowledgeable friend who tells you about Philadelphia’s past and present. While using it, you become ever more aware of the concept of space in an exciting way.

b) Historypin is a website that collects, curates, and structures stories to bring people together, one story at a time. It hosts 365,951 stories pinned across 27,844 projects and tours in some 2,600 cities, built by a community of more than 80,000 storytellers, archivists, and citizen historians. Historypin is a not-for-profit organization. It no longer has a community forum, due to technical issues and probably also online harassment. To sign up, go to the top right corner; the easiest way is to do so through Facebook. Everyone with a profile can create a collection and upload images and stories to the website. To add a pin, go to your profile page; “Add a pin” and “Create a tour” are on the right side. One of the most popular collections is the San Francisco MTA’s archival collection. By navigating the arrow on the map, you can view pins, which appear as old archival photos. It feels like traveling into the past, but with useful context provided in text. The website is useful for small organizations that want a platform, or even just an easily accessible tour.

c) Wordle helps you generate “word clouds” from text that you provide. The clouds give greater prominence to words that appear more frequently in the source text, and you can tweak them with different fonts, layouts, and color schemes. Because the Wordle web toy no longer works, you should install the desktop version on your laptop. Do not bother with the web version; even after downloading Firefox Extended Support Release, it does not work. Instead, download and install Java if you do not already have it, then download Wordle for Mac or Windows from the link on the main page of the Wordle website. It’s pretty straightforward after that: you just copy and paste your text.
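Under the hood, a word cloud is really just a word-frequency count with font sizes scaled to the counts. Here is a minimal Python sketch of that idea; the stop-word list and the linear scaling are my own illustrative choices, not Wordle’s actual algorithm:

```python
from collections import Counter
import re

# Illustrative stop-word list -- Wordle's real list is much longer.
STOP_WORDS = {"the", "and", "of", "to", "a", "in", "its"}

def word_frequencies(text):
    """Count how often each word appears, ignoring case and stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

def font_sizes(freqs, min_pt=10, max_pt=72):
    """Scale each word's font size linearly with its frequency,
    so the most frequent word gets the largest type."""
    top = max(freqs.values())
    return {w: min_pt + (max_pt - min_pt) * n / top for w, n in freqs.items()}

freqs = word_frequencies("History of the city, the city and its people")
sizes = font_sizes(freqs)
```

In this toy example “city” appears twice and so is rendered at the maximum size, which is exactly the effect you see when frequent words dominate a Wordle cloud.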

Featured Image: René Magritte, Golconda (Golconde), 1953. Oil on canvas, 31 1/2 × 39 1/2 in. (80 × 100.3 cm). The Menil Collection, Houston. © 2017 C. Herscovici / Artists Rights Society (ARS), New York

There’s an App for That, But Why?

Stories from Main Street and The Will to Adorn are Smithsonian Institution projects that differ greatly in subject matter and execution but share one element: both encourage members of particular marginalized groups to contribute their own stories. Both projects have websites and accompanying apps for mobile devices, and it is in these mobile apps that the two projects are most similar.

Stories from Main Street Website

The Stories from Main Street website and app are offshoots of the Smithsonian Institution Traveling Exhibition Service’s Museum on Main Street (MoMS) program. MoMS works to bring the Smithsonian’s traveling exhibitions to cultural institutions serving the small towns (defined as having an average population of 8,000 people) of rural America. The Smithsonian staff envisions that their programs help bring together the residents of such towns to share their stories with each other, fostering community pride. The MoMS website allows people from anywhere in the country to contribute photos, videos, audio recordings, and written stories pertaining to their experiences in rural America and to experience the content contributed by other participants.

The Will to Adorn project, begun in 2010 by the Smithsonian Center for Folklife and Cultural Heritage, “explores the diversity of African American identities as expressed through the cultural aesthetics and traditional arts of the body, dress and adornment.” The project appears to have culminated with an exhibit, demonstrations, workshops, performances, hands-on visitor participation activities, and daily fashion shows at the 2013 Smithsonian Folklife Festival held on the Mall in Washington, D.C. The website seeks to provide an explanation of the questions and goals addressed by the project and provides some sample photo and video content, but it does not offer a means of exploring the full content of the project.

Will to Adorn Website

While both websites are rather celebratory in the sense of bringing prominence to topics that have generally been excluded from mainstream historical and cultural practice, the projects and websites are very different in tone. Unlike Stories from Main Street, Will to Adorn presents itself as a scholarly endeavor, with researchers actively seeking to distill meaning from the evidence they gather through the project. I did not find any user participatory element on the Will to Adorn website, whereas collecting user content and allowing site visitors to explore it is the raison d’être of Stories from Main Street, which to me has a rather haphazard feel. Specific geographic location, down to the level of the town, is also an important aspect of the Stories from Main Street content, whereas local geography does not appear to figure significantly into the Will to Adorn website.

Main Screens of Both Apps Compared

Despite the stark differences between the two websites, the mobile apps for these projects are actually quite similar. Both apps allow users to record their own stories related to the topic of the project and to listen to stories that other people have contributed. Aside from imagery, the apps’ presentations are pretty much identical. The Stories from Main Street app was built using Roundware, which bills itself as “an open-source, participatory, location-aware audio platform” and does pretty much exactly what both of these apps do in terms of recording audio, adding some metadata, uploading content, and letting the listener select, to a certain extent, the content that will be streamed. Will to Adorn was almost certainly also built using Roundware, but I did not see a credit for it in the app.

Recording content to contribute is (theoretically) easy with these apps. Start by pressing the “Add a Story” button on the main page. On Stories from Main Street, you then choose from six general topics: Life in Your Community, Sports – Hometown Teams, Music – New Harmonies, Food – Key Ingredients, Work – The Way We Worked, and Travel – Journey Stories. You then identify yourself as a man, woman, boy, or girl, and finally you are asked to choose one specific question (from a provided list of four to six) about the subject you selected. Doing so brings you to the recording page, where your question is displayed at the top. When you’re ready, press the record button (I recommend the large button at the bottom; I had trouble with the smaller buttons in the middle of the page), and after a three-second countdown you will have a minute and a half to discuss your chosen question. When you’re done, press stop, and you will have the option of listening to what you recorded, rerecording it, or uploading it (or you can exit the recording section without posting by hitting the cancel button at the top of the screen, which takes you back to the main menu).

Stories from Main Street – Screens to Add Story

I chickened out at the point of actually uploading content. I’m not from a small town, and although I did record an answer to one of the Travel section questions, I was afraid of sounding like an Easterner mocking something from Midwestern culture that I don’t understand. I gather that the app uses your phone’s GPS to attach location information to your recording when you upload it. That is curious, because geography is such an important part of the Stories from Main Street website, yet a person may be inspired to record something about their town while away from home, or conversely may wish to talk about a small town they’ve visited from the comfort of their own home. In either case, the content may carry an inaccurate geolocation if it is based solely on the phone’s location at the moment of recording. On the website, by contrast, users are able to type in the appropriate location for their content.

Will to Adorn – Some Metadata Choices

Will to Adorn works similarly to Stories from Main Street, although the metadata it collects is a bit more nuanced. After you press “Add a Story,” the app asks for your age (15-19, 60+, and each decade in between). It asks for gender, but in addition to the expected male and female, there are also options for “trans” (with an asterisk that goes unexplained) and “other” (which could mean all sorts of things). You then select one of six broad geographic areas (Alaska and Hawaii, I guess, have to content themselves with being part of the West). Will to Adorn only gives you a choice of five questions to answer in total. However, and this is kind of key: once I made all of these selections, the screen looked like it was going to send me to a recording screen similar to Stories from Main Street. Nope.

Will to Adorn Recording Screen – Um… Do not get your eyes checked. The screen is indeed all black.

Black empty screen of doom. I have to presume that the app was tested before release, so maybe it’s just not compatible with my iPhone 6, but not being able to record on an app whose whole purpose is recording is rather a problem. And I was more willing to answer and submit to this project (“What are you wearing?” seems like a mostly harmless question). At any rate, images on the Will to Adorn website show recording pages nearly identical to those in the Stories from Main Street app, although you may get up to two minutes to discuss your clothing choices. Website text also indicates that you can attach photos to your story submission, but the app does not show user images anywhere, and I did not find on the website either an archive of user submissions or a way to record and upload stories, so I cannot verify this aspect of the app’s functionality.

As for the listening aspect of these apps: after you press the “Listen” button on the main page and wait what seemed like a rather long time in both apps for content to load, the app starts playing recordings from the collection. Stories from Main Street defaults to the recordings in the “Life In Your Community” section. Users can flag content, like it, or, if inspired to record their own story, use the record button there too.

Listening Screens – Both Apps

The user does have the option to choose, to a certain extent, which stories they will hear in the app. On Stories from Main Street, the “Modify” button at the top of the screen allows you to select one of the six content areas and to narrow further by the specific question(s) you want to hear about. The “Refine” button in the same spot on Will to Adorn lets you narrow by age, gender, region, and specific question. No audio played for the first two questions I selected on Stories from Main Street, so perhaps no one has contributed stories on those particular topics, but I did have success on my third try. Interestingly enough, in the sports section there were more question options to listen to than there were to record on your own. And in the Travel section, the “favorite journey” answers were mostly about going to a large city rather than a small town.

I’m not sure that anyone is actively curating the user responses. In one recording I heard on the Stories from Main Street app, some kids were messing around and one of them used a slur. In another, a young man discussed how he and his friends as teenagers would go to the river, drink moonshine, get high, and watch alligators. One snippet was simply “[town name] sucks.” And a recording I heard on Will to Adorn started out as a heartfelt commentary about a certain style of dress but suddenly turned into a profanity-laden tirade on the subject. I’m not sure whether it’s a matter of not wanting to censor what people say or the Smithsonian simply relying on the community to use that flag button to police the content. There also doesn’t appear to be very much content to curate on either site. According to the Stories from Main Street website, there are 519 contributions in the archive. Will to Adorn appears to have far fewer stories than that, as I heard much of the content at least twice while listening.

While some of the stories in the Stories from Main Street and Will to Adorn archives are genuinely interesting, honestly, I don’t really get the point of either of these apps. The stories are snippets of two minutes or less that are for the most part divorced from context. Neither app displays any metadata about the audio that’s playing, so even if particular facts are known about the contributor of a recording, the listener won’t have that information. And the contributors don’t always give you much information in their recordings. For example, if a person opens their recording in Stories from Main Street with “In my town…,” well, which town? How would I know if the speaker doesn’t actually say it? Even assuming the geolocation attached to the recording is correct (an issue with Stories from Main Street that I discussed earlier), the listener can’t see it and has no great way of determining whether the speaker is talking about life in Boise, Birmingham, or Burlington (and Wikipedia tells me that there is a Burlington in 24 U.S. states!). Maybe I’m missing the forest for the trees, but I’m a details kind of person.

Many of the recordings on Will to Adorn sound like they were made at the Folklife Festival, and the participants there were generally asked by volunteers about their name, age, and location and were sometimes asked to elaborate on their responses. But the following is the extent of one non-Folklife Festival story on Will to Adorn: “How I feel when I have it on—it makes me feel beautiful.” Have WHAT on? Disembodied from all context, this particular snippet doesn’t seem to me to add much to the conversation about creating meaning and forging identity through one’s attire.

Another interesting context issue with Will to Adorn concerns race. The project as explained on the Will to Adorn website specifically concerns how African Americans express themselves through dress and other adornment. The app invites anyone to contribute their story, which is perfectly fine. But the app does not provide a way to self-identify by race or ethnic/cultural background unless you choose to speak to that issue in your recording. So I guess I don’t understand how any user contributions added to the project’s database from the app could be marshaled as evidence for the original conception of the project.

Context for these stories aside, the question I keep coming back to is not why “there’s an app for that” but why the public would download either of these apps and use them over and over again. Sure, one’s smartphone provides a really convenient way to record very short stories, but I don’t see much reason for an individual to do this more than once or twice. There is no essential tie to a physical place in either app that would prompt a user to open it and learn something about that location through the project’s content. There could have been on Stories from Main Street, but there is no way in the app to search for a particular location to find content related to a place where you happen to be or might want to know more about. Stories from Main Street does provide a link to the project’s website on the main page (Will to Adorn does not), where visitors can search for audio on a map. Similarly, given the limited amount of content in these collections, I’m not sure why anyone would use the listen function on either app more than a couple of times, particularly on Will to Adorn. I’m not saying the effort to collect and share people’s thoughts through these apps is uninteresting or devoid of value; I’m just struggling to see why someone would keep these apps on their phone and use them more than a very few times.

What do you think? How might these apps be improved to increase their current interest and/or enduring value? Without a great deal of context, what can we learn about the subject matter of the projects by listening to these recording snippets?



Dude, Where’s My History?: A Look at Historical Mapping Interfaces

The advent of digital technology brought an exchange of knowledge and ideas into homes at an astonishing new level, delivering information and services straight to users that previously might have required actually leaving the house to seek out. The advancement of mobile computing furthered the trend of information coming directly to people, without restricting access to any one physical place. Many cultural heritage institutions have noticed these changes and adapted, becoming not only places that house information but resources that increasingly push it directly to their patrons wherever they may be. The affordances of this new media also allow institutions to bring their materials into geographic space, adding another layer of interpretation and context while reminding the public that history is all around us.

Histories of the National Mall

One site that takes advantage of mobile applications and a spatial understanding of history is Histories of the National Mall, created by the Roy Rosenzweig Center for History and New Media and run using our old pal, Omeka. Taking their own advice from their report Mobile for Museums, the site is device independent: made to run in a web browser across desktop, laptop, and mobile rather than as a native downloadable app that needs tailoring for each device. As the title indicates, the site is an interface for learning about the histories of the National Mall through maps, explorations (short examinations based on questions people might have about the Mall), people, and past events. Most of these sections can be filtered by historical period. Some of my favorite sections are the great explorations of unmade designs for the national monuments (unmade, much to my chagrin). There are also a number of scavenger hunts that send you to a specific part of the Mall with images of places for you to find. Once you find a pictured place, you tap or click its image to read or listen to more about it.

Histories of the National Mall Map

The key feature of the site is the map, which has over 300 points containing historical information, audio, video, images, and documents. The user can filter by each of those categories as well as by place and event. As stated above, the site is browser based and looks largely the same on a desktop, laptop, or mobile device. Using GPS, Histories of the National Mall centers the map on the user’s coordinates, locating them within historical context. What is good about the map is that there is no set way to explore the points; you can wander around and discover new facts and events that shaped the environment all around you. This allows users to set their own narrative through a serendipitous combination of explorations.


Aris Games

While Histories of the National Mall is a ready-made site, Aris Games is both an open-source application for creating geographically based games and a mobile app for playing them. The back end is not the scary coding or programming that some in the cultural heritage sector may fear, but a simple interface, so even those without technical skills can make games with the infrastructure invisible to them. One downside of Aris-created games, not encountered with the Mall histories site, is that the mobile app is only available on Apple devices, which limits its audience considerably.


The Aris editor interface is simple but by no means easy to understand without first reading the manual or viewing the helpful video tutorials on certain topics. It is important to understand the different elements (especially non-obvious ones such as scenes, plaques, and locks) and how they function so you can create a working game. The games are largely tours or explorations of certain areas. Building a game is based on creating “scenes,” different scenarios the user can encounter as they travel around. You can create conversations for the user to have at each location that lead them further into the game. All of the features you create can be mapped to a specific location to build an exploratory geographic environment. This feature is unfortunately cumbersome: the only ways to place your points are entering precise GPS coordinates or dragging the point to where you want it, with no way to search for your general location to get there quicker. There is also no way to see how your game will look in the app without having and opening the app; since I have an Android device, I needed to borrow an iPhone to do this. Despite these drawbacks, the Aris editor is a good way to make games without requiring programming experience.

Aris Editor


Playing the games is fairly simple but, as mentioned above, requires downloading the Apple-only app. Inside the app you can play any number of games created with the editor. You can find games based on your geographic location, sort by popularity, or search for a specific title. Aris provides a demo that gives a good overview of what it is like to play these games (avert your eyes if you dislike semi-obsolete media):

Overall, Histories of the National Mall and Aris Games are good examples of the creative ways spatial history and mobile technology can work together to engage the public. By embracing this trend and the ubiquity of mobile phones, institutions can add layers of meaning, attract a wider audience than before, and bring content out from behind closed doors.


Or How I Learned to Stop Worrying and Love the Glitch

This week I attempted to recreate the results of glitching files as demonstrated in this blog post by Trevor Owens. As we shall see, I ran into a few difficulties in reproducing the experiment exactly. But first, what is a glitch? According to Wikipedia, “A computer glitch is the failure of a system, usually containing a computing device, to complete its functions or to perform them properly.” In this post, I chronicle my attempts to create glitches by using files in ways other than their intended purpose, to reveal what we can learn about the formats themselves.

A Textual Audio Experience

I started by trying to view an .mp3 as a .txt file. I could not use the same audio file as the original blog post because the Library of Congress no longer provides direct downloads, having switched to streaming-only access. Instead, I randomly selected an .mp3 of the Rush classic “Tom Sawyer.” From there I changed the file extension to .txt and opened the file with my computer’s text editor. Here is the result:

A real toe tapper

Just as with the audio file Owens used, much of the information in the .mp3 is a confused mess, the result of the text editor’s attempt to interpret the bits as alphanumeric sequences. However, along the top there is some embedded metadata, such as the writers of the song: Alex Lifeson, Geddy Lee, Neil Peart, and Pye Dubois. These bits are meant to be read as text and therefore can be rendered by the program.
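You can reproduce this effect in a few lines of Python without renaming any files. The “file” below is a fake stand-in I constructed for the example, not real MP3 data, but the principle is the same: the text editor decodes every byte as if it were text, so metadata strings survive while the compressed audio turns to gibberish.

```python
# Fake stand-in for an .mp3: an ID3-style metadata tag with a readable
# artist string, followed by bytes that mimic compressed audio data.
fake_mp3 = b"ID3\x04\x00" + b"TPE1 Rush" + bytes(range(128, 256))

def view_as_text(data: bytes) -> str:
    """Decode raw bytes leniently, the way a forgiving text editor does.
    latin-1 maps every byte to *some* character, so nothing errors out:
    metadata stays legible while audio bytes come out as noise."""
    return data.decode("latin-1")

dump = view_as_text(fake_mp3)
```

Searching the resulting string for “Rush” or “TPE1” finds the embedded text, just as the artist names were visible at the top of my text-editor window.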

Where the Trouble Began

In the next step, I tried to view an .mp3 and a .wav file as .raw images. Because I did not use the same audio file as the original blog post, I did not have a .wav file to accompany my .mp3 when trying to replicate this part. Rather than simply changing the extension on my Tom Sawyer .mp3, I used a media encoder to convert the file to .wav. From there, I changed the extension on each to .raw and attempted to view them in an image editor. Unfortunately, these files would not open in any of my image editing software. Borrowing a computer that had Photoshop, I was able to view the results below:

On the left: .mp3 as .raw, on the right: .wav as .raw

Just as above, an image editor can do no better than a text editor when attempting to read audio files visually. Unlike Owens’s results, my two images look largely the same. The .wav as .raw did produce a large black bar at the top of the image, which I assume is due to the difference in original format. I thought the similarity might be because I converted my .mp3 into a .wav, so I downloaded a different .wav file directly from the web and repeated the steps, yet it still yielded the same results.
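What a .raw viewer does can also be sketched in code: it simply lays a flat byte stream out as pixel rows at whatever width you specify. This Python sketch shows the byte-to-pixel mapping; the fake WAV header is my own illustrative stand-in, and the header-band explanation for the black bar is a guess on my part, not something I verified:

```python
def bytes_to_grayscale(data: bytes, width: int):
    """Interpret a flat byte stream as rows of 8-bit grayscale pixels,
    the way a raw image viewer does. Leftover bytes that don't fill a
    complete row are dropped."""
    rows = len(data) // width
    return [list(data[r * width:(r + 1) * width]) for r in range(rows)]

# Fake stand-in for a WAV file: the RIFF header bytes, then "silence."
# Rendered as pixels, header bytes form a band visually distinct from
# the audio data -- one possible explanation for the bar I saw.
fake_wav = b"RIFF\x00\x00\x00\x00WAVEfmt " + bytes(100)
pixels = bytes_to_grayscale(fake_wav, width=16)
```

Because the mapping ignores the format entirely, two files with similar byte statistics (like an .mp3 and a .wav made from it) can plausibly produce similar-looking “images,” which may be why my two results resembled each other.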

Complete Failure

While I was able to replicate most of the outcomes in the preceding section, I failed at the next step of editing an image with a text editor. The link Owens listed for the image in his post was broken, but luckily the original image was also available in the post. I downloaded this image and changed the extension from .jpg to .txt. I opened the file in the text editor, deleted some of the text, and changed it back into a .jpg. Unfortunately, the file would not open in any of the image software I tried, including Photoshop; I kept receiving error messages that the file was unsupported, corrupted, and so on. I tried the steps again, copying and pasting parts of the text back into itself, or deleting only a single character. I even attempted a different image entirely, repeating all the same steps. Alas, all my attempts failed to produce a glitched image that could open.
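In hindsight, one plausible reason for the failures is that editing the bytes as text can mangle the JPEG’s header and marker structure, which decoders then refuse to parse at all. Glitch artists often corrupt only bytes deep inside the file, leaving the header and the end-of-image marker intact. Here is a hedged Python sketch of that approach; the offset and step values are arbitrary illustrative choices, not a guaranteed recipe, and the “JPEG” here is fake stand-in data:

```python
def glitch(data: bytes, offset: int = 500, step: int = 997) -> bytes:
    """Invert an occasional byte past `offset`, leaving the header at the
    start and the end-of-image marker at the end untouched."""
    out = bytearray(data)
    for i in range(offset, len(out) - 2, step):
        out[i] ^= 0xFF  # flip all bits of this one byte
    return bytes(out)

# Fake stand-in for JPEG data: SOI marker, filler bytes, EOI marker.
fake_jpg = b"\xff\xd8" + bytes(2000) + b"\xff\xd9"
glitched = glitch(fake_jpg)
```

Working on bytes directly also avoids another hazard of the text-editor route: saving through a text editor can silently re-encode or normalize bytes, corrupting the file in ways you never typed.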

Tom Hanks typing


While I was not able to reproduce all the tasks that Owens accomplished in his blog post, I was still able to see his main point: screen essentialism masks the fact that digital objects are more than what they appear to be on the screen. The different ways the files can be read demonstrate the different structures of the formats, even if they look the same on screen. My failure in this process made me realize how much the public is pushed toward a limited understanding, shielded by the programs that are meant to read certain files in certain ways. Perhaps my failures are just the result of well-working software that allows you to produce only the intended output of these files. I encourage everyone to try glitching some files. Can you do it?

UPDATE: I was able to fix the problems I mentioned in this post. Here are the results:

Glitched Image

Correct comparison, .mp3 as .raw on left, .wav as .raw on right

To see how my issues were fixed, see the comments section below. Many thanks to Nick Krabbenhoeft for helping me fix the problems.

Crowdsourcing History: Take a Look at What’s on the Menu?

This week’s practicum websites all employ some form of crowdsourcing to create and/or augment content, and anyone with a computer and an internet connection can contribute to these projects. On Wikipedia, users actually write and edit the content of encyclopedia articles. Flickr is a great site for storing, organizing, and sharing your digital images and videos and for immersing yourself in photography; in the Commons section of the site, users are encouraged to add tags and comments to photographs uploaded by participating cultural heritage institutions, providing additional information and helping make the images more accessible. Finally, What’s on the Menu? encourages its users to transcribe the records of its extensive collection of digitized menus to facilitate access. As people are less likely to have heard of this project than the other two, I will offer an overview of the What’s on the Menu? site.

What’s on the Menu? (menus.nypl.org) is an effort by the New York Public Library (NYPL) to create a “database of dishes” from their collection of over 45,000 restaurant menus dating from the 1840s to the present so as to “learn about the foods of the last century to see what these historic menus can teach us about the culinary landscape today.” A library employee began the collection for the library in 1900, amassing more than half of the collection herself in the first 25 years of the century. Around a quarter of the menus have been digitized, but only basic information such as the restaurant name and location and the date of the menu were cataloged. The library would like to make information about the actual culinary content of the menus available to make it easier for people with interests in the history of food and culture to find and study this information. Thus in April of 2011, they launched What’s on the Menu?, inviting members of the public to help them transcribe the food and price information on the individual menus.
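To picture what that finished “database of dishes” might look like, here is a toy sketch. The field names, restaurant, and records are entirely my own invention for illustration, not NYPL’s actual schema:

```python
# Toy model of a "database of dishes": each transcription ties a dish name
# and price to a menu and page, so researchers can search by dish.
menus = {1: {"restaurant": "Hypothetical Chop House", "year": 1899}}
dishes = [
    {"menu_id": 1, "page": 2, "name": "Oyster stew", "price": 0.40},
    {"menu_id": 1, "page": 3, "name": "Roast beef", "price": 0.75},
]

def find_dish(fragment: str):
    """Return every transcribed dish whose name contains the fragment."""
    frag = fragment.lower()
    return [d for d in dishes if frag in d["name"].lower()]

hits = find_dish("oyster")
```

Only once the dish-level records exist, through the volunteer transcription described below, does a search like this become possible; the basic catalog data alone could never answer “who served oyster stew, and for how much?”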

Figure 1. The Main Page of What’s on the Menu?

Anyone is free to participate in the transcription, and volunteers need not sign up for any account to do so. Just click on the large green “Help Transcribe” button in the middle of the home page (Figure 1) to get started. This brings you to a screen where you can select a menu to transcribe. At the time of this writing in February 2015, there are four menus available for transcription. Clicking on the thumbnail for a menu brings you to the main page for that menu (Figure 2).

Figure 2. The Individual Menu Page

A box to the left of the screen displays the basic cataloging info about that menu. Thumbnails of the individual pages of the menu are presented in the middle of the page as well as in a horizontal row at the top of the screen. To the right, another box displays the master dish list for the menu, which includes the dish, menu page number, and price information for any items that have already been entered for that menu.

Figure 3. An individual page from a menu. The green check marks indicate a menu item that has already been transcribed.

To begin transcribing the menu, click on a thumbnail for a page that appears to have food information on it. A larger version of the menu page appears on the screen (Figure 3), possibly already with some green check marks on it, along with the master dish list on the right. To actually transcribe, click on the first letter of any menu item that doesn’t already have a green check mark next to it. You are then brought to a page with a closer view of the part of the image you just clicked (which you can make even bigger by clicking the largest A button underneath the image on the left; see Figure 4 below). Enter the dish exactly as it appears on the menu (with a few caveats) in the text box below and the price of the item in the price box. Then click the “Enter Dish” button, and your work will be recorded. The screen reverts to the menu page, and you will see a green check mark next to the item you just entered. Easy!

Figure 4. Transcribing a menu item.

You can transcribe as much or as little of the menu as you’d like. If you feel that all of the menu items on all pages of the menu have been transcribed, you can click the “Submit for Review” link, which is easy to miss unless you already know where it is (on the left side above the horizontal row of menu thumbnails; see Figure 2). Doing so places the menu in the “Under Review” queue, which offers another opportunity for volunteers to assist with the project.

Reviewing entails checking over the transcribed menus for accuracy. You can find menus to review at the bottom of the site’s main page—either click on one of three menu thumbnails presented there, or if those don’t appeal, click on the words “Help Review” to see all of the menus that await their quality assurance check. Click on any menu, and then look for typos, price errors, and any missing items. To edit an item, click either the green check mark next to the item on the menu or the pencil icon next to the menu item in the master dish list. Either action brings you to the same screen you used to add new information, only it’s already filled in. Make your changes and click the “Enter Dish” button. There is also an option to delete the dish if it’s an entry that shouldn’t be there. Any missing items can be added in the same manner as before. When you have reviewed the entire menu and believe that it is accurate, click on the “Mark as Done” link, which is again on the left side above the horizontal row of menu images. The status of the menu is now “Done” and the menu items are searchable.

There is one more way that volunteers can add information to the menus. What’s on the Menu? is now adding geotagging to its menus. This feature can be accessed from the site’s main page by clicking on the “Map our Menus” image (Figure 1). There is also a link at the bottom of the basic cataloging info box on the menu page, just above the social media icons (Figure 2, not visible in screenshot). This brings you to NYPL’s geotagging application (Figure 5, below). You are presented with a large, scrollable image of a random menu. If there is a street address or a general city location for the establishment somewhere in the menu, enter it into the “Address or City” box and click the blue “Find on Map” button. Or if you determine that the menu is from a ship, train, or airplane, click the corresponding button below the map. Then hit submit, and the next menu in the queue will pop up. Again, you can stop at any time, and there is also a button to skip a particular menu if you’re not sure about the geographic data or just don’t wish to work on that menu.

Figure 5. Geotagging page.

There are multiple ways to access the information in the What’s on the Menu? database. Visitors can search by keyword in the search box in the upper right corner of the site’s pages. Place multiple terms in quotation marks for an “and” search; otherwise, results will be returned for term A “or” term B. The menu bar in the page headers also includes tabs for Menus and Dishes. Both of these results can be limited to a particular decade. The Menu page can be further limited by place in the processing queue (new, under review, or done) and sorted by date, name, or dish count, while the Dishes page can be sorted by date, name, popularity, or obscurity. Clicking the link for the Explore section at the bottom of the main page will also bring you to the Menu page.
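
The quoted-phrase “and” versus unquoted “or” behavior described above can be illustrated with a toy matcher. This is only a sketch of the described semantics, not the site’s actual search implementation:

```python
def matches(query: str, text: str) -> bool:
    """Toy illustration of the described search semantics:
    a quoted query requires every term ("and"); an unquoted
    query matches if any single term appears ("or")."""
    text = text.lower()
    if query.startswith('"') and query.endswith('"'):
        terms = query.strip('"').lower().split()
        return all(t in text for t in terms)   # "and" search
    terms = query.lower().split()
    return any(t in text for t in terms)       # "or" search

dish = "Broiled Sirloin Steak with Mushrooms"
print(matches('"sirloin mushrooms"', dish))  # both terms present -> True
print(matches('sirloin lobster', dish))      # one term present -> True
print(matches('"sirloin lobster"', dish))    # not all present -> False
```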

Also at the bottom of the main page is a section called “Today’s Specials.” While this section does not link to another page (logically I would think it would link to the Dishes page), it offers a small sampling of some of the dishes from the menus. Clicking on any of these dishes leads you to what is perhaps the most interesting part of What’s on the Menu?, a page that provides information about that dish, most of which is gleaned from the menus as transcribed by the public (Figure 6).

Figure 6. Individual dish page.

The left side of the page shows the lowest and highest prices for that item and the earliest and latest date that the dish appears on one of the menus. There is also a placement map illustrating where on the menu the dish appears, which may illustrate the relative importance of the item. At the center is another very cool feature, a graph illustrating the frequency with which the item appeared on the menus by year, which can illuminate culinary trends over time. Beneath this are the thumbnails of the menus on which the item appears, which of course link to the full menu. The right side of the page has a list of related dishes to account for slight stylistic differences in naming, word order, and punctuation on the various menus. Finally, at the bottom left is a “more information” section which offers a series of links outside of What’s on the Menu? for a variety of additional information related to your dish of choice, including images, recipes, books, restaurants currently offering that dish on their menu, and “general information” with links to Google, Wikipedia, and Twitter.
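
The per-dish statistics described above (price range, date range, and the data behind the frequency graph) are easy to derive once the menus are transcribed. A minimal sketch, assuming each transcription yields a hypothetical (menu year, price in cents) record for a given dish:

```python
from collections import Counter

# Hypothetical transcribed appearances of one dish: (menu year, price in cents)
appearances = [(1900, 25), (1913, 40), (1913, 35), (1924, 60), (1901, 25)]

years = [y for y, _ in appearances]
prices = [p for _, p in appearances]

stats = {
    "lowest_price": min(prices),          # lowest price shown on the dish page
    "highest_price": max(prices),         # highest price
    "first_appearance": min(years),       # earliest menu date
    "latest_appearance": max(years),      # latest menu date
    "frequency_by_year": Counter(years),  # data behind the frequency graph
}
print(stats)
```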

Overall, the What’s on the Menu? project is pretty interesting. Looking through the old menus provides a fascinating glimpse at history, and not just in terms of menu offerings and their prices at various points in time. Many of the menus are themselves beautiful examples of artwork. They also represent not only traditional restaurant menus, but also transportation menus (ship, rail, and air) and banquet menus for special events. In looking through the website, one imagines that there is quite a bit of cultural history to be learned from these artifacts.

There are of course limitations to the database as well. More than half of NYPL’s existing menu collection remains to be digitized, cataloged, and transcribed, and the sampling of menus is by no means scientific. The collection is primarily although not exclusively focused on New York City; indeed, there are menus from around the world. While food trends certainly vary over time, they vary from region to region as well. But even for New York City, one wonders how representative the menu collection as a whole is. More than half of the collection was amassed by a single NYPL employee in the period from 1900 through 1924. Coverage of the years before and after that time period cannot be nearly as comprehensive as it is for those 25 years. One wonders too if upper class, high-end establishments and events are wildly overrepresented in the database. In addition, the library currently is not capturing section headings, which would be useful in classifying dishes (appetizer, main course, salad, dessert, etc.). Non-food information such as descriptions of artwork, marginalia, and other menu text that does not represent a food, beverage, or smoking item is also currently not being captured in the record, making it easier to concentrate on developing the food database but potentially limiting other cultural information that could be gleaned from the collection. Given these limitations, historians should exercise extreme caution and avoid overgeneralizations when drawing conclusions from this dataset.

I feel that a few website improvements might enhance the user experience of What’s on the Menu? While the color scheme, such as it is (the page is actually mostly white space and black text), is attractive enough, I don’t believe that the light olive green color used to denote links stands out enough, particularly on the section headings for “Help Review” and “Explore” on the main page. I also do not understand why the “Map Our Menus!” and “Today’s Specials” section titles are not hyperlinks. On the menu pages, the “Submit for Review” and “Mark as Done” links were not very obvious to me at all; it might be better to make these into colored buttons, similar to the “Help Transcribe” button on the main page. I’m also not sure how the user would know from looking at the menu page whether or not that menu had been geotagged.

I thought the Help section was clear and well-written, and on the menu pages for transcription and review, there is a small, red button-like area that the user can point at for brief instructions on how to complete the task at hand, which was also good. However, you must keep your cursor pointed at this red area in order to read the instructions, and I found no way either to keep that box displayed or to scroll down to see the bottom part of the box if your computer does not display the whole thing, as mine did not. I found that aspect to be quite frustrating, although fortunately the Help tab on the page header is directly above the red help area and the task is quite easy to learn and remember, so the need for that quick help section should be minimal. Finally, they do have a blog with a few interesting discussions about some of the materials in the collection, but sadly it has not been updated since 2013.

In terms of the crowdsourcing aspect of the project, I think What’s on the Menu? illustrates both the benefits and to a certain extent the pitfalls of relying upon anonymous volunteers to create metadata for a database like this. NYPL has actually had a great response to this project, with 22,000 menu items transcribed in the first three days of the project and more than 800,000 dishes in the first year. Crowdsourcing the transcription of menu items has enabled NYPL to move forward with the database while reserving paid staff labor for presumably more complex archival tasks. As tasks go, the transcription, where visitors are asked to enter each item (essentially) exactly as it appears on the menu, is pretty easy. I saw very few typographical errors of menu information. There will, however, be a good deal of data cleanup required to standardize some of the dishes for spelling and punctuation differences between menus, although this would be an issue whether the labor was crowdsourced or not.
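
The cleanup step mentioned above, standardizing dishes across spelling and punctuation differences between menus, might begin with something like the following. This is only a sketch; NYPL’s actual normalization rules are not documented here:

```python
import re

def normalize_dish(name: str) -> str:
    """Collapse superficial differences so variant spellings of the
    same dish can be grouped: lowercase, strip punctuation, squeeze
    whitespace. (A sketch only; fuller cleanup would also handle
    word order, diacritics, and spelling variants.)"""
    name = name.lower()
    name = re.sub(r"[^\w\s]", "", name)   # drop punctuation
    name = re.sub(r"\s+", " ", name)      # collapse runs of whitespace
    return name.strip()

variants = ["Chicken a la King", "Chicken a la King,", "  chicken   a la king "]
print({normalize_dish(v) for v in variants})  # all collapse to one entry
```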

Given that there are currently only four menus available for transcription, it would seem that the digitization of the menus may not be keeping up with the volunteers’ appetite to complete the transcriptions. Another issue with the crowdsourcing is the misinterpretation of the prices on the older menus. The four menus currently open for transcription are all from Adam’s Restaurant in 1913. Most of the prices, even for the steaks, are expressed in cents, not dollars, which may not be obvious to a casual user who may not be thinking historically. On at least two of the four menus, many of the prices were transcribed as dollars rather than cents. Such errors can be easily fixed by another person both at the transcription and review stages, but if they reach the done stage in this state, then someone must first notice the error and then email a staff member to correct it. Another drawback, at least with these four menus, is that they have a high number of menu items. With so many entries, the check marks get cluttered on the page and it becomes easy to miss menu items. Also, transcribers skip around the menu, preferring to start with a fresh menu section rather than finish an incomplete one, making omissions more likely if the person who finally clicks on the “Submit for Review” link does not look over the menu carefully.
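
The dollars-versus-cents confusion described above could in principle be flagged automatically rather than waiting for a reviewer to notice it. A heuristic sketch (the $10 cutoff for pre-1930 menus is my own assumption for illustration, not an NYPL rule): a 1913 menu item entered at $40.00 almost certainly meant 40 cents.

```python
def flag_suspect_price(price_dollars: float, menu_year: int) -> bool:
    """Flag prices that were probably transcribed in dollars when the
    menu actually lists cents. Heuristic only: on pre-1930 menus,
    a price of $10 or more is implausible for a single dish.
    (The cutoffs are illustrative assumptions, not NYPL's rules.)"""
    return menu_year < 1930 and price_dollars >= 10.0

# A 1913 steak listed at "40" (i.e., 40 cents) mistakenly entered as $40.00:
print(flag_suspect_price(40.00, 1913))  # flagged as suspect -> True
print(flag_suspect_price(0.40, 1913))   # plausible 40 cents -> False
```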

Overall though, I don’t think that there’s much that the volunteer transcriber could do to really mess up this database. There should be at least one other pair of eyes looking over all of the transcribed data for errors and omissions, so I think the likelihood of any major errors making it into the completed data would be quite low. All three tasks that users are invited to participate in—transcription, review, and geotagging—are easy, and I actually had fun and felt like I was doing something useful when I engaged in these activities. To me, What’s on the Menu? appears to be a historical project ideally suited to the crowdsourcing concept. Do you agree? Tell us your thoughts about What’s on the Menu? and the idea of crowdsourcing labor on digital history projects in general in the comments.