Digital Project Reflection: Washington on the Frontier

Here it is!

Washington on the Frontier now takes users from Washington’s first foray into the wilderness of the Ohio country in 1753 to his retirement from leadership of the Virginia Regiment in 1758. Each stop on the journey is marked by a map point. Each map point, when selected by clicking through, displays an image relevant to the events that occurred there, a title including the year, and between 100 and 120 words of text describing those events. There are seventeen map points and one introductory page, for a total of eighteen “slides.” Individual reading speeds will affect the time it takes to read through the entire StoryMap, but the feedback I have received indicates that it does not take an onerous amount of time to read through everything.

The one issue I was unable to resolve to my satisfaction was the text background. The limits of the StoryMapJS program mean that my options for the background on which the text is displayed are 1) nothing, which sometimes makes portions of the text difficult to read against the map; 2) an image, which makes the text harder to read and sometimes cuts off too much of the map; or 3) a solid color, which proved difficult to adjust to a hue that allows the text to be read clearly without interfering with the layout of the rest of the page.

The majority of my difficulties came from locating good images and from writing the text: I tried to avoid writing text that would require significant scrolling, as that would break the alignment of the image and text with the map point. This meant I had to work to condense large amounts of information into a very small number of words. The result is that some of the entries omit details I would have preferred to include had the space been available. However, the core historical facts are all present and the narrative still holds together.

The StoryMapJS program is relatively intuitive and easy to use, but I have encountered some issues. For one, the program works best when the points on the map form a relatively linear path, rather than bouncing around from place to place. This meant that I had to cut some elements of Washington’s story from the presentation because their inclusion would disrupt the flow of the StoryMap. For instance, Washington traveled to Boston in 1757 to meet with Lord Loudoun, the British commander-in-chief in North America. Such a trip would have been a tangent away from the line between Lake Erie and the Virginia coast formed by the majority of the map points. In the end I decided to err on the side of technical streamlining and cut the Boston journey from the map. The oftentimes competing imperatives created by the technical limitations of a digital tool and the full richness of the historical account being presented through it were on full display during the creation of Washington on the Frontier.

The StoryMapJS program offers the chance to do interesting digital history projects in a format that is easy for users to navigate. The spatial component allows for presentations that help users understand history as not just happening over time, but also over space.

Digital History Presentation: Mapping the Tracks of Serial Killers, Reflection

            When it comes to projects on forensics and crime, there is always the same concern: how do you examine a criminal without focusing too much on the individual? Including too much information about a criminal, especially a serial killer, can lead to the glorification of the killer. Still, the perpetrator of the crime can inform readers about the nature of, or the reasons leading up to, the incident.

            In my project, I repeatedly ran into this concern. Even when I attempted to limit how often I named each killer, the killer still had to be included, since the project involved examining their killing sprees.

            That’s why I decided to start the project by making the website’s landing page about defining geographic profiling. In doing so, I hope the page gives a purpose to the map, which otherwise would be just a series of pins. Furthermore, although I include a short biography of each killer, I tried to limit it to information that would inform the map, such as which incidents were included and why, as well as trends in the murders and how those trends disprove the theories that geographic profiling is based upon. In the map itself, I attempted to focus on the victims, both by providing information about what happened to them and, when possible, a profile photo.

            If I were to continue with this project, I’d want to include more killers to further examine trends. Additionally, I’d like to expand the site to include more information about the forensics behind geographic profiling. Given the timing of this project, I was only able to seek out and include information on four killers, who were chosen due to their prominence in American culture.

            Given the randomness of the killers I chose, I found it interesting that they disproved geographic profiling theories to such an extent. Being able to visualize their crimes on a map both pulled the project together and confirmed my theories about the forensic technique.

            As a result, making the map was probably my favorite part of this project. It remains the main focal point, and the website simply provides additional information to support it.

            The main technical issue that I had with the map, however, was that I was unable to properly embed each individual map. While the main map could be embedded, the individual pins related to a single killer could not (or if they could be, I could not figure out how).

            I hope that the full map can be a resource for historians, forensic scientists and those interested in serial killers in general. In promotion, I hope to distribute the map through social media sites, like Twitter and Facebook. If I were to expand the site, I may also create social media handles related to the project. For example, I think that this would be a great opportunity to create a Twitter bot that would share each victim and drive attention to the website.

Anyways, here’s my project:

And here’s my poster presentation (this was also my first time making a poster presentation):

Final Project Poster and Some Reflections: Social Media and Its Role in Holocaust Remembrance

Social media could become an effective tool for ensuring that people remember and learn about the Holocaust in a meaningful way that transmits its importance, tragedy, and complexity. While Holocaust memorial sites and museums have been figuring out the most effective methods of teaching about it and maintaining the physical spaces, the digital space, with its numerous users constantly interacting with one another, is much harder to follow, let alone regulate. As a result, it is crucial to understand how and why visitors use their social media to share their experiences. Many of them came as travelers; others came to learn and then spread what they learned to others. And since this study used only location-based social media, it is clear that the users had to have been at least aware of the name of the memorial, because they used “the Memorial to the Murdered Jews of Europe” to tag the location. This project shows that if one were to browse Instagram, Flickr, Yelp, TripAdvisor, and WordPress posts from 2017 to 2019, they would find numerous visitors who posted photos of themselves without acknowledging the meaning behind the place in their captions or comments. There are also users who expressed their anger with this situation and laid out explicit advice on how to be a respectful visitor to the site.

There is a clear disconnect between the intended message of commemorating Jewish victims and the numerous visitors posting pictures of themselves with little awareness of what the monument represents. There are also numerous users who viewed the images, posts, and reviews but did not leave a comment, rating, or like; as a result, it becomes even more challenging to try to measure the impact of social media. Did some users who left no visible digital feedback perhaps go on to find out more about the site they saw tagged? It is possible. Think about the layers one needs to consider when looking at a social media post of someone at the memorial: Who took that picture? Why was it taken and posted? Was the image cropped? Was it edited? Were some comments deleted? Was the image taken long before it was posted? The answers to all of these questions are not easily found, and thus this mini-study only shows a very individual glimpse of what is happening. The question of remembrance then involves a variety of factors: if users who saw pictures from the memorial then looked it up on Google, did they go to a more reliable official website or to a website with dubious information? It is hard to tell.

Location-based social media platforms could help people bridge that gap between the digital and the physical. It is time for interdisciplinary collaboration where historians and educators improve the way they teach about the past and Holocaust remembrance by working with experts on social media, digital technology, and psychology in order to understand what drives people to use social media and commemoration sites in the way that they do.


Howdy all, I’ve been working on mockups and a flowchart for access. The mockups suck, but I made sure the flowchart was functional and at least in a strong beta form.

Attached are the files:



PressForward and Ethical Content Scraping

What is PressForward?

Roughly speaking, PressForward is a back-end WordPress plugin developed by the Center for History and New Media that allows users to aggregate, curate, and redistribute web content pulled from RSS or Atom feeds, or through the use of PressForward’s Bookmarklet tool. Once a site-runner has added their desired feeds to the plugin, or has marked content for rehosting through the Bookmarklet tool, they can review specific pieces, add metadata, format them for WordPress, add any categories or tags they wish, and finally publish the content on their blog.

Screenshot of the PressForward Dashboard, taken from the PressForward User Manual.


There are a few ways to start collecting content for rehosting through PressForward, but let’s start with web feeds. RSS (Really Simple Syndication, or alternatively Rich Site Summary) works through machine-readable text files that websites publish; when these files are added to a feed reader program like Feedly or The Old Reader, users can “subscribe” to the sites and build their own feeds of automatically aggregated content. So instead of visiting a bunch of blogs individually, you could just have posts from all of them pulled into the feed reader program to create your own newsfeed. Atom, on the other hand, is a more recently created alternative format to RSS. Linking these feeds to PressForward creates a feed of content within your WordPress site (visible only to you), from which you can begin to select specific content for rehosting.
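To make the mechanics a little more concrete, here is a minimal sketch of what a feed reader (and, under the hood, PressForward) does with one of those RSS files: parse the XML and pull out each post’s title and link. This is an illustrative example using only Python’s standard library, not PressForward’s actual code; the sample feed and its URLs are made up.

```python
# Minimal sketch of RSS parsing: extract each <item>'s title and link
# from an RSS 2.0 document. Illustrative only, not PressForward's code.
import xml.etree.ElementTree as ET

def read_rss_items(rss_xml: str):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# A tiny hand-made RSS 2.0 document standing in for a real blog's feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example History Blog</title>
    <item>
      <title>New Post on Washington</title>
      <link>https://example.org/washington</link>
    </item>
  </channel>
</rss>"""

for title, link in read_rss_items(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

A real feed reader does this on a schedule for every subscribed feed and merges the results into one reverse-chronological stream, which is essentially the “All Content” feed PressForward builds inside WordPress.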

The second key way to collect content is through PressForward’s “Nominate This” bookmarklet. While RSS feeds pull content from designated sites as it is published, “Nominate This” allows for a more intentional selection of specific content from specific sites. Say you found a cool blog post on a site you have not incorporated into your RSS feed, or for which RSS is not available. In this case you can just click the “Nominate This” button on your browser’s toolbar and send the selected content to your WordPress Drafts section manually. If the site does have an RSS feed you are not yet subscribed to, the tool also offers you the option to subscribe.

Nominate This bookmarklet in action. In this instance the old homepage of the Center for History and New Media website has been pulled for republication. Image taken from the “Installing and Using the Nominate This Bookmarklet” section of the PressForward User Manual.


Once you’ve got your feeds set up and articles from other sources nominated, it’s time to curate. At this stage you can start picking out content from your feeds for republication on your blog. There are two key panels to use here: the “All Content” panel and the “Nominated” panel. The former contains all of the content pulled from your RSS feeds that is pending review and nomination; the latter contains content that you have marked for republication. At either stage you can use the Reader View option to open the content and check for readability and any errors in the text or formatting before sending it over to either the Nominated panel or to a WordPress draft.


Now that you’ve sent content over to the drafts section, all that remains is formatting and editing the post and publishing it to your blog like any other post. Which brings us to the overarching goal of this plugin: to disseminate scholarship, blogs, digital projects, etc. to a wider audience by allowing bloggers and site runners to curate their own informal journals, so to speak. Unlike content-scrapers, which have a less than stellar reputation among digital content creators, PressForward is not intended to be a platform by which people can collect and republish content to their sites in an unethical drive to increase their own site traffic (and ad revenue) by rehosting others’ unattributed work. Yet when you get down to brass tacks, I don’t really think it’s all that far off from such tools.

PressForward does a few things to encourage responsible aggregation and republication: it “offers the option to auto-redirect back to the original source,” it “retains detailed metadata about each aggregated post,” and “the original author’s name will appear with a republished post if you use WordPress default themes such as Twenty Fourteen.” The FAQs also emphasize that author consent should be sought before republishing. Reading through the plug-in’s Manual and FAQs I noticed that there are a lot of “ifs” involved when it comes to the display of metadata. If users want to display more metadata, they have to use Custom Fields. If users have the overwrite author option enabled (it is by default but can be shut off), the author of the original post will be displayed on your rehosted site. Links to the original post are contained in the new Draft post, but can be deleted if the user chooses to do so. None of these options seem to impose a strict requirement that users include metadata in their final posts. If a user does not “use default themes,” will the metadata still appear?

I don’t mean to be overly critical of PressForward in this respect, especially as there are far easier ways to go about plagiarism, and chances are digital humanities scholars aren’t the same level of target for content-scrapers as, say, artists or tech reviewers. But I do think the conversation surrounding the ethics of content-scraping and rehosting is an interesting one to have, especially if we are talking about shifts in the landscape of scholarly publication. While scholars may not be producing their content for ad revenue as other types of digital producers may be, is it ethical for a “big” blog like Digital Humanities Now (which does actually publish a full list of the feeds it subscribes to) to pull content (and views) away from the original authors’ pages? Is rehosting really all that different from linking to a blog post as a form of citation (I think it is)? While it could certainly be argued that there are philosophical differences in the motivations behind publishing a scholarly article and a swing cover of Nirvana’s “Smells Like Teen Spirit,” shouldn’t scholars still have a right to their labors?