To Begin…
I’m going to try not to repeat myself too much, as I’ve already written a lot about Two Headlines here and here. So, before I get into the Archival Information Package (AIP): Two Headlines is a small bit of programming that combines two headlines from the Google News API and posts the result to Twitter through the Twitter API, with the help of some bits of code that are freely accessible to programmers through Node.js. Two Headlines has been used to teach programmers about creating Twitter bots, and it is a form of social commentary. Its tweets are also funny and entertaining.
As the package involves software that needs to be installed, readmes with instructions on how to install and operate the programs should be created. It does no one any good to include software without instructions, especially as the software is not designed to be used by people with little to no programming experience.
Since this is just a model AIP, only a few files are represented. The AIP will consist of three main folders: one for the bot’s source code along with the software and documentation needed to edit that code, one for any interviews or comments about the bot, and a final one for the tweets themselves along with the software needed to read them and its documentation. While the file types for things like the bot’s source code and the installer files are already dictated by their creators, any new files created will conform to current preservation best practices: PDF/A for text files and TIFF for images.
- Code

This folder contains the source code of the bot, downloaded from GitHub, along with the software used to create and edit the code. The documentation for Two Headlines’ code and for the software that created it will also be included, as will additional documentation for the Google News API and the Twitter API, since both are used in running the code. A readme file with instructions for installing and using the various software, compiled mostly from the instructions and readme files associated with the programs, is also in the folder.
- Interviews
Anyone who responds to questions about their interactions with the bot and its significance and influence will have their responses preserved in this folder. News articles and blog posts will also be included here.
- Tweets
The tweets occupy the third and final folder. As archiving the tweets will require special software to collect them and different software to read them, both programs and their documentation will also be included. Another readme file will be added so that users know how to install and use the included software to view the tweets. The folder will also include metadata about the collection of the tweets, including the time of collection, the code that collected them, and a record of any modifications made post-collection. A few screenshots will also be provided to show the original Twitter interface, which will not be archived with the tweets themselves.
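Putting the three folders together, the model AIP might be laid out something like this. The file and folder names below are illustrative assumptions, not a fixed specification:

```
two-headlines-aip/
├── code/
│   ├── twoheadlines-source/        (source code downloaded from GitHub)
│   ├── editing-software/           (software used to create and edit the code)
│   ├── documentation/              (bot, editor, Google News API, and Twitter API docs)
│   └── README.pdf                  (install and usage instructions, PDF/A)
├── interviews/
│   ├── responses/                  (preserved interview responses, PDF/A)
│   └── articles-and-posts/         (news articles and blog posts)
└── tweets/
    ├── collected-tweets/           (the archived tweets)
    ├── collection-software/        (collection and reading software, with docs)
    ├── screenshots/                (original Twitter interface, TIFF)
    ├── collection-metadata.pdf     (time, code used, post-collection modifications)
    └── README.pdf                  (install and usage instructions for viewing)
```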
Moving Forward
While this is a good start to preserving an entertaining bot, there is more work that could be done. The next steps for this project would be to actually conduct the interviews, acquire permissions for the news articles and blog posts, and submit the AIP to the Internet Archive. There would also need to be a mechanism in place to collect new tweets from the bot, as it posts every few hours, and add them to the preserved files.
This looks like an awesome project!
What part did/would you find the most challenging? For me, it’d definitely be recording the code and planning a way that it could be rendered in the future.
I noticed in your preservation intent statement that you didn’t think surveys would be effective for capturing the opinions of the user base. However, you included a section in your AIP for community and creator interviews. Did you develop any new ideas on how to record user opinion?
The code was actually fairly easy to preserve, as it was on GitHub with instructions on what dependencies it needs to run. However, I’m not a programmer, so very little of the documentation makes any sense to me. Understanding it well enough to know which bits pointed to other things I needed to preserve, and which were just instructions for using the thing, was the most challenging part.
I haven’t really changed my opinion on the survey, but I do think that any response would be valuable and provide other perspectives on why a bit of software and its outputs are significant enough to preserve.
You’ve talked a lot about the bot’s lasting relevance, since it is built to identify and create with current news topics. Under what circumstances do you think the bot could cease to be relevant?
I’ve never thought about it. News and history are subjects that always seem to be studied. I’d assume that if the bot were to cease to be relevant, it would relate more to how it was programmed than to its subject matter.