Digital Project Summary and Reflection: A Digital Database for Past Exhibits and Exhibitions

Here is the link to my final project: A digital database of Titanic popular and museum displays, exhibits, exhibitions, etc.

This database, while still very much in a beta format, serves as a research tool for those who wish to easily research displays, exhibits, and exhibitions about the Titanic from 1912 to 2018. This WordPress website is a small-scale example of my larger project idea, which was a digital database of exhibit displays in the United States over time. As stated in previous posts, no comprehensive database exists that features this kind of content. The Smithsonian offers a database of its own internal displays, but while that is useful, it is nowhere near exhaustive. While my physical project is much smaller than my theoretical concept, a larger project would take a lot more time, funding, and buy-in from various institutions around the country, as they would need to be willing to make this information accessible through an open-access database.

The site offers five different pages: home, about, database, contact, and works consulted. The home page serves as a basic landing page for the website, providing some brief background about the project and myself. The about page features my project proposal and links to other databases I took inspiration from. The database is really the meat and potatoes of the website: it features blog posts about each display, which act as the database entries. I used the five displays I found in my own research to fill the database, though going forward the goal would of course be to grow the entries in this section of the website. The contact page is a run-of-the-mill WordPress contact page, where users of the database can contact me. And finally, the works consulted page features the bibliography from my research project. I opted to use the full bibliography rather than just the citations for the images, so as to cover my bases in terms of any interpretation I may have consciously or subconsciously put into my database entries.

Interpretation was something that I struggled with in this project. Because the information for the database entries comes from my own research, my reading of it was inevitably shaped by the line of argumentation in my paper. While I tried to merely describe the displays in simple language, some of the older ones had far fewer sources to go off of, so some interpretation was necessary, and I often had difficulty striking a balance. I was constantly worried that this website would become a database of my senior thesis, rather than a research database that could be useful to others.

Another difficulty was creating an effective user interface. Given that I helped lead discussion for our designing digital projects class, and in particular read Dan Brown’s Communicating Design, I was very conscious of my user interface. That being said, I quickly realized how hard good design would be, given that I was using the free version of WordPress and I don’t have a lot of experience in web design. All of a sudden, it became clear why Dan Brown’s steps for formulating a project and communicating that project are so detailed and comprehensive. This is not something that is easily done, especially if you don’t have a lot of experience. One problem I ran into was trying to create a drop-down menu for my database page so that the entries would be easy to see and navigate to. I was able to make a menu, but the theme I chose for my site only let me put it at the bottom by the search bar, and it wasn’t in a drop-down format, making it hardly intuitive for users.

That being said, I think using WordPress for this project was overall effective. Given that I have never built something like this before, it was fairly easy to use. I do wish there were more customization and guidance offered for the free version, but the lowest paid tier is still somewhat affordable. If I were to continue this project in hopes of growing the database, it would certainly be cost effective to get the basic paid version of WordPress. Beyond that, I think a lot of the tools we learned about in class that help historians engage with technology, as well as the various blog posts and books, would be important points of reference for building this website out to be more extensive and more effective. Overall, this project has shown both how important a tool like WordPress can be to a historian and how easy it is to put historical content on the internet, something that I have always been a proponent of but never had the know-how to carry out.

Also here’s my poster!


MLA Core

What is MLA Core?

The landing page for MLA Core gives this description: CORE is a full-text, interdisciplinary, non-profit social repository designed to increase the impact of work in the Humanities.

So what does that mean?

Core stands for Commons Open Repository Exchange. Funded by the National Endowment for the Humanities, MLA Core is a collaboration between the Modern Language Association and the Center for Digital Research and Scholarship at Columbia University. Core, which is currently a beta release, is basically a repository of open-access scholarship housed by MLA Commons, the scholarly network for MLA members.

Through this initiative, members can:

  • Upload a variety of objects and formats
  • Insert metadata for objects
  • Add additional authors
  • Assert CC copyright
  • Get a DOI or insert publisher’s DOI if published
  • Associate object with MLA Group
  • Comment on and discuss others’ uploads

Visitors to the site (aka people who aren’t members of MLA Commons) can:

  • Browse deposited material
  • Perform full search & faceted browse of deposits
  • View author’s Commons profile
  • Download deposited material

What is special about Core?

Here’s what they have to say:

Not just articles and monographs: Core includes course materials, white papers, conference papers, code, and digital projects

Community notifications: Send instant notifications about the work you’ve shared to members of your Humanities Commons groups.

Citation and attribution: All items uploaded to CORE get a DOI, or digital object identifier, that serves as a permalink, citation source, and assertion of authorship all in one.

Licensing: Select the Creative Commons license that best meets your needs.

Archiving for the future: Files deposited in CORE are stored in the Columbia University Libraries long-term digital preservation storage system.

Open-access, open-source, open to all: Anyone can read and download your work for free (no registration required)

The great thing about the concept of CORE is that you can also use it to upload peer-reviewed journal articles, dissertations and theses, works in progress, conference papers, syllabi, abstracts, data sets, presentations, translations, book reviews, maps, charts, and more, and you remain the owner of any work you deposit. This allows for a database of diverse scholarship that is all open access. The collaborative aspect, which allows users to upload, comment on, and discuss one another’s work, also helps to bridge gaps in scholarly communication.

So how does it work in practice?

I decided to give the database a try. To upload scholarship, you must become a member of MLA Commons, either by being an MLA member or by joining the open Humanities Commons network. Membership in the MLA is priced based on your salary (anywhere from $26 to $359, or $26 if you’re a graduate student). If you create a free account through Humanities Commons, you have access to CORE, though not to as much as full MLA members do. I created an account through the open network:

At first glance, the form to upload things seems pretty simple!

I decided to try and upload my research paper from my Civil War and Reconstruction class.

It took a total of 5 minutes to upload my paper–super easy! It looks like there is a review process as well.

Now that I’ve uploaded my paper, I can find it in my deposits:

Overall, the process of uploading scholarship seems super easy. I wonder how visible this will be to other people? A database is only as good as its search function, so I am going to test that out next.

Searching for scholarship on MLA Core

When you click on “find open access materials” you are brought to this page:

It automatically sorts deposits starting with the newest ones at the top. As you can see, the top three most recent are already fairly different topic-wise, which is a testament to all the different academic fields that are using the Core.

Keeping with the theme, I typed “civil war” into the search bar. It came up with 459 results, all of which (besides mine) seemed only tangentially related to civil wars.

I couldn’t seem to find an option to do an advanced search, other than the sidebar, which allows you to narrow results by date, item type, or subject. There was also no option to sort the results by relevance, only by most recent and alphabetically. I tried to search again using Boolean phrases, hoping to narrow my results. I typed “civil war” AND “united states” into the search bar. It returned no results, suggesting the search may not be able to process Boolean queries (or no one else has uploaded papers about the American Civil War, which I doubt).

So, it seems as though the search function for Core is a little lackluster. Nonetheless, there are some other cool features. You can join different groups based on your areas of interest.


Whenever you upload something to Core, groups you are a member of will be notified by email (a setting you can turn off) and through the group’s activity feed. I joined the Digital Humanists group:

You can also search for Core members’ personal websites, as well as create your own using WordPress:

Overall, MLA/Humanities Core works as a sort of social network for scholars of nearly any discipline. It offers an easy way to communicate with people in your field as well as people outside it, working to open the lines of scholarly communication. While the Core repository’s search function doesn’t seem great, the platform is still in beta. The website even offers a roadmap of what’s to come. So, despite this minor flaw, this type of transparency combined with the overall concept of an academic social network results in what could become a highly effective platform for scholarly communication.

Digital Project Draft: Exhibit/Exhibition Database using the Titanic Disaster as a Model

My “digital database” has been launched, though there is still some work to be done. The basic skeletal structure is live on WordPress. Here you can see my proposal, information about the research, a works consulted page, as well as my first display: a 1912 model of the disaster using apples.

You can find my updated proposal on the “about” page of the site here. I am currently trying to figure out just how much information I want to include with my various images, as well as the level of interpretation I want to provide. For most displays, I don’t have detailed information about the interpretation within the exhibit itself, so I worry that adding too much of my own analysis will lead to a biased presentation of the overall display. I also need to decide how I am going to curate my other displays, as they have multiple images and are reconstructed through various media forms. The one I have up now is just one photo, so it was easy. But the next four I plan to include have anywhere from 3 to 10 primary sources associated with them. I don’t know whether a gallery of images or spreading the images out would be more useful and aesthetically appealing. Therefore, the overall format of the database page remains to be finished.

What remains to be done:

  • Add blog posts about the other displays that I plan to include: they date from 1914, 1968-1976, 2000, and 2018. This is going to take the longest amount of time, and I plan to block out all of next Saturday to create and edit the posts, so hopefully I’ll have an almost complete draft by April 22nd.
  • Decide on the level of interpretation I provide in each post
  • Decide on an overall, streamlined format for each post (a rough sketch of a possible format follows this list)
  • Perhaps try to mess with the design a little more to make it more exciting
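
On the format question, here is a purely hypothetical sketch of the fields a streamlined entry might cover. The field names below are placeholders I’m toying with, not a settled template for the site.

```python
# A hypothetical template for a single database entry. The field names are
# placeholders under consideration, not a finalized schema.
entry_template = {
    "title": "",        # name of the display, exhibit, or exhibition
    "date": "",         # e.g. "1912" or "1968-1976"
    "location": "",     # venue or institution, if known
    "description": "",  # plain, minimally interpretive summary of the display
    "images": [],       # the 1-10 primary sources associated with the display
    "sources": [],      # citations for where the information comes from
    "notes": "",        # any interpretation of my own, clearly flagged as such
}
```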

How to Deep Fry a Meme (aka glitch a file)

Okay, so glitching a file is a little more complicated and involved than deep frying a meme, which can be done simply by going to https://deepfriedmemes.com/ and applying various filters to make an image look distorted. But glitching a file is a lot like creating a homegrown deep fried meme, the old-fashioned way.

There is a lot of merit to knowing how to glitch a file, as Trevor Owens points out in his article “Glitching Files for Understanding: Avoiding Screen Essentialism in Three Easy Steps.” Owens quotes Scott Fitzgerald, who states that it’s important to know how to glitch a file because it helps you to understand the underlying structures that make up the file itself. In other words, knowing how to glitch a file is a lot like knowing how to fix a running toilet. Toilets are something we all use every day, and also something we often take for granted (or at least I did until I moved into my current apartment, which has a chronically problematic toilet). Knowing how a toilet actually works is important in diagnosing the cause of the running, and ultimately fixing it. Files are somewhat the same. We often take the various extensions, .pdf, .mp3, .docx, etc., for granted. As Owens points out, this can lead to “Screen Essentialism”:

“The heart of the critique is that digital objects aren’t just what they appear to be when they are rendered by a particular piece of software in a particular configuration. They are, at their core, bits of encoded information on media. While that encoded information may have one particular intended kind of software to read or present the information we can learn about the encoded information in the object by ignoring how we are supposed to read it. We can change a file extension and read against the intended way of viewing the object.”

Trevor Owens, “Glitching Files for Understanding: Avoiding Screen Essentialism in Three Easy Steps.”

So, in this post I will take you through the steps of glitching an image file, because as public historians it’s important to have a deeper understanding of what makes up a file, rather than just taking it for what it is on the surface. To demonstrate this process I am going to stick with my meme theme by glitching *cough* deep frying *cough* a photo from the “they did surgery on a grape” meme, because it’s one of my favorites, and also super obscure, so maybe in 50 years someone will stumble across this post and write a paper on the cultural importance of the “they did surgery on a grape” meme in 2019.

Anyways, back to glitching the file. To begin, here is my original image:

This is my initial image, which is currently in .jpeg format.

Now, to glitch the image file, change the extension to .txt. I’m using Preview on a Mac, but the steps should be roughly the same no matter what kind of setup you are using. Find the .jpg file in Finder, right-click, and select “Get Info.”

Once in this information pane, change the file extension from .jpg to .txt:

The file will then convert into a text file:

Delete some of the code in here, and follow the same steps to turn the file back into a .jpg. This is where I ran into some issues. I had trouble finding a happy balance between recognizably glitching the image and destroying it so much that my computer was unable to open it. Eventually I got the file to open with the Photos app, but it was just white:

I got this alert a bunch when trying to convert the image back to a .jpg and reopen it on my computer
I was finally able to open the image in the photos application, but it was just white…
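
For anyone who would rather script this than rename files by hand, here is a rough Python sketch of the same byte-mangling idea. The filenames are just examples, and skipping the first few kilobytes is simply my guess at one way to leave the JPEG header intact so the image still opens, which is exactly the balance I struggled with above.

```python
import random

SRC = "grape.jpg"            # hypothetical input image
DST = "grape_glitched.jpg"   # where the glitched copy is written

# Read the raw bytes of the image.
with open(SRC, "rb") as f:
    data = bytearray(f.read())

# Skip the first few kilobytes so the JPEG header (hopefully) survives,
# then overwrite a couple hundred random bytes further into the file.
header_guard = min(4096, len(data) // 2)
for _ in range(200):
    i = random.randrange(header_guard, len(data))
    data[i] = random.randrange(256)

# Write the mangled bytes back out as a new .jpg.
with open(DST, "wb") as f:
    f.write(data)
```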

Alas, now that I was familiar with the process, I wanted to keep trying. Perhaps because I used a meme I got off the internet, the process didn’t go as smoothly as it normally does. So I tried again with a more normal .jpg image of my cat, Omen. Here is the original image:

Here’s the image after following the steps above, deleting a small chunk of the .txt file:

And finally, here’s the photo again, now substantially degraded:

Honestly, I kind of like it more than the original

Overall, this has been a fun and interesting process. I can see how glitching an image file can very much be an art form, and now that I understand how it works (kind of), I feel like I have a deeper understanding of born-digital objects. They are far more than what you see on the surface.

I wonder what this means for the future of things like digital libraries. It’s so easy to corrupt a file, so how can we know the image in the library is the original? What does it even mean to be an original file? If someone such as myself can learn how to distort an image in one afternoon, who’s to say someone couldn’t hack the server of a digital library and corrupt historical images that no longer exist physically? How would we get those images back? I did some quick searches and there are ways to undo the damage, but I now feel as though digital files are just as vulnerable as their physical counterparts, whereas before this process I did not. Which really proves Owens’s and Fitzgerald’s point that it’s important to know how these files work on the back end in order to better preserve and study them on the front end.
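
For what it’s worth, the standard answer digital libraries give to the “is this still the original file?” question is fixity checking: record a checksum when a file is ingested, then recompute it later to see whether anything has changed. Here is a minimal sketch of that idea (the filename is just an example):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value when the file enters the collection; if a later check
# produces a different value, the file has been altered or corrupted
# (glitched, in the vocabulary of this post).
print(sha256_of("grape.jpg"))
```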

I started this process excited to “deep fry” an image, but now I see there is a lot more at play here than making a funny meme. Perhaps the whole trend of deep frying an image for comedic value says something about our increasingly technology-driven society, or maybe I’m being a little too philosophical. Regardless, I’m glad I got to learn more about this process; now I know how to fix a running toilet AND glitch a file.

Chronicling Americ……Exhibits

As public historians, one area of study we often look at is museum displays and their effects. As I begin to enter this world, one thing I have noticed is that there is no central database that stores information about exhibits which existed in the past. This has left me tracking down various tidbits about exhibits I knew existed, often ending up with a more incomplete picture than I would have had if I had been able to visit the exhibit myself, or if it had been on view more recently.

While this information certainly does exist, it is usually kept by large institutions that have the means to maintain extensive institutional history records, such as this collection from the Smithsonian. Larger, more famous exhibits such as King Tut or The Family of Man also have a fairly extensive footprint. Smaller, less well-known exhibits, and exhibits put on by smaller institutions, are at a disadvantage: those institutions often don’t keep extensive documentation of past exhibits, either because they don’t have the resources or because they don’t think to do so.

I have run into this issue with my own research, which deals with visual displays, and specifically museum exhibits about the Titanic over time. It has been incredibly hard to track down exhibits that have existed, and when I have found advertisements in newspapers for said exhibits, there is little to no institutional record of them. And while I have put in a lot of time and effort contacting various museums for information, much of my older evidence can’t be supplemented by anecdotal evidence from a curator, as the curators who worked on those exhibits are no longer alive or have long since retired.

It has not been impossible to track down exhibit information about the Titanic. Chasing leads, talking to curators, and putting in extensive footwork is certainly part of the research process. But wouldn’t it be much more efficient if this information was compiled into a digital database which researchers could just search based on keywords, date, and location such as the Library of Congress’s Chronicling America?

Audience: The main audience of this database would be other academics wishing to use exhibits as a primary source in their research. That being said, this database could also prove a useful tool in expanding museums’ reach to those who don’t have the means to attend exhibits themselves due to distance or financial circumstances. Therefore, the audience of this database could go far beyond academics. It could even potentially be used in classrooms to teach museum history and, at lower levels, how our perception of the past has changed over time, or as a window into the culture of a certain period. In this way, the database would have the potential to reach many people (in theory, of course, as I don’t have the skill or the resources to build it on such a large scale).

Existing Projects: Large institutions like the Smithsonian have institutional collections which catalog this type of information, including which pieces from the collection were used, photographs, and why the curators made the choices they did, such as this online record. But again, most of these records aren’t digitized, which means you would likely have to go in person to view the materials, limiting their accessibility. Online exhibits exist, as we saw with Omeka, but that is a little different from what I am trying to do. Surely physical exhibits have been recreated on Omeka, but my concept is of one central database which is easily searchable.

What I Plan to Create: As mentioned above, I obviously won’t be able to create such a database in full, as I have neither the skill nor the funding, and certainly not the time. Therefore, for this project I will take my grand idea and scale it down, using the information I have from my Titanic research and applying it to the conceptual framework I detailed above. I will use something like Omeka to recreate the various Titanic exhibits I have found over time, spanning from 1912 to 2018.

Plan for Outreach and Publicity: I think a project like this would be best created via some kind of crowdsourcing. While it would be possible to compile the various institutional records that exist in one place, as I mentioned before, sometimes these records don’t exist. But what does exist are people’s photos from their trips to exhibits, like the family blog I’m using in my own research, and newspaper articles about various exhibits. Therefore, things like social media could be used to engage people with the database, spread awareness, and ultimately help build it.

Evaluation Plan: In order to evaluate the success of this project, I would need to test its usefulness to researchers, who would be the primary audience of the database. Theoretically, this could be done by monitoring how many people interact with the database, institutional subscriptions to it, and how often it is cited in academic work. Since I am not actually creating the database at this scale, perhaps a more reasonable evaluation plan would be whether someone could grasp what an exhibit was and what it looked like based on the information I have provided, and perhaps even make an argument about it.