What does ‘Digital Art’ even mean?

What is art?

Throughout the course, I was surprised by the number of different things that we were studying. ‘Art’ can be so many different things – drawings/paintings, comics, programs, among others. The digital versions of traditional arts offer more freedom and allow for greater creativity than those bound to physical space and tangible media. However, the development of digital arts is intertwined with the development of computer technology. It is nearly impossible to discuss one without also discussing the other.

MS Paint had a large role in the rise of digital images. It started as a gimmick to sell operating systems at a time when computers did not come with Windows pre-installed. However, the ability to create and edit images soon became an important one, leading to better programs, like Photoshop, and eventually to memes and webcomics. (It’s been a goal of mine to be able to use a Rage Comic in these posts for a while now, and I’m very happy to finally have a good excuse to use one.)

The digital version of comics allowed artists to push the boundaries of what it means to be a comic by giving them greater freedom and creativity. Comics were no longer bound to three panels and few to no colors. They could be video, full-color, whatever size was required, or whatever else the artist desired.


As we learned from reading Racing the Beam earlier in the semester, computer programs can be another form of digital art. Programming requires creativity to work within the confines of the system’s hardware restrictions while also bringing the concepts to life, whether it is for a game, a Twitter bot, or some other form of program.

Glitch art, on the other hand, locates the creativity in destroying bits of the data that make up the original piece of art, an act that can only be performed on a digital image. The internet and digital works allowed everyone to create and distribute their works, like fan fiction, through the various art-sharing sites and communities, without having to receive approval from someone else, such as a publisher or art dealer.

life is a glitch

Preserving the digital

In addition to the born-digital, items can become digital through a process of taking a series of digital images of the item. No longer are books the only things that can become digital; photogrammetry allows sculptures and even walls to become digital objects that can be studied and manipulated by scholars. For those preserving these digital objects, only the images, and not the 3D models, need preservation, as the 3D representation can be recreated as software and technology change.

Digitization is all well and good for ‘saving’ a thing that is deteriorating but the new digital thing now has its own needs. How long can we count on .pdf, .tiff, and other ‘recommended formats’ to remain usable and readable? Are we creating too many digital objects to ever hope to migrate as technology changes?

In addition, at the rate that the internet and the number of digital things worth preserving are growing, how do we meet current needs while also planning for the future? There is the argument that we should only preserve as many things as we can properly care for. However, if we do that, numerous worthwhile things that should be preserved will be lost. Should we then preserve everything and hope that the time and money to properly care for the stored items will magically appear one day? (Yes, this argument was discussed on this blog several weeks ago, but it is still a question without an answer.)

Bot Preservation: Two Headlines AIP

To Begin…

I’m going to try not to repeat myself too much, as I’ve already written a lot about Two Headlines here and here. So, before I get into the archival information package: Two Headlines is a small bit of programming that combines two headlines from the Google News API and posts them to Twitter through the Twitter API, with the help of some bits of code that are freely accessible to programmers through Node.js. Two Headlines has been used to teach programmers about creating Twitter bots, and it is a form of social commentary. Its tweets are also funny and entertaining.
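Stripped of its API calls, the bot’s central trick is swapping the subject of one headline into another. The sketch below is a hypothetical simplification in JavaScript (the bot’s own language), not Kazemi’s actual code: the real bot pulls live headlines and detects the subjects itself, while here they are marked by hand so the swap is easy to see.

```javascript
// Hypothetical sketch of the headline-swapping idea behind Two Headlines.
// The real bot fetches live headlines from Google News and finds the
// named entities on its own; here each headline's "subject" is supplied
// by the caller so that the swap itself is the only moving part.

// Replace the subject of the second headline with the subject of the first.
function mashHeadlines(first, second) {
  return second.text.replace(second.subject, first.subject);
}

const a = { text: 'Taylor Swift Announces World Tour', subject: 'Taylor Swift' };
const b = { text: 'Supreme Court Rules on Patent Case', subject: 'Supreme Court' };

console.log(mashHeadlines(a, b)); // "Taylor Swift Rules on Patent Case"
```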

As the package includes software that needs to be installed, readmes with instructions on how to install and operate the programs should be created. It does no one any good to include software without instructions, especially as the software is not designed to be used by people with little to no programming experience.

Since this is just a model AIP, there are only a few files represented. The AIP will consist of three main folders: one for the bot’s source code and the software and documentation to edit that code, one for any interviews or comments about the bot, and a final one for the tweets themselves, the software needed to read them, and its documentation. While the file types for things like the bot’s source code and the installer files are already dictated by their creators, any new files created will follow current preservation best practices: PDF/A for text files and .tiff for images.
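Laid out as a plain-text tree, the three folders described above would look something like this (the individual file and folder names are illustrative placeholders, not the actual contents of the package):

```
TwoHeadlines_AIP/
├── 1_Code/
│   ├── twoheadlines-source/   (source code downloaded from GitHub)
│   ├── software/              (Node.js installer and editing tools)
│   ├── documentation/         (code, Google News API, Twitter API docs)
│   └── readme.pdf             (install and usage instructions, PDF/A)
├── 2_Interviews/
│   ├── responses/             (creator and bot-maker responses, PDF/A)
│   └── articles/              (news articles and blog posts)
└── 3_Tweets/
    ├── tweets/                (collected tweets and their metadata)
    ├── software/              (collection and reading programs, docs)
    ├── screenshots/           (Twitter interface captures, .tiff)
    └── readme.pdf             (viewing instructions, PDF/A)
```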

  1. Code
Folder structure of the AIP, highlighting the source code for Two Headlines

This folder contains the source code of the bot, downloaded from GitHub, along with the software used to create and edit the code. The documentation for Two Headlines’ code and for the software that created it will also be included, as will additional documentation for the Google News API and the Twitter API, since both are used in running the code. A readme file with instructions for installing and using the various software, compiled mostly from the instructions and other readme files associated with the programs, is also in the folder.

Folder structure of the AIP, highlighting the software


  2. Interviews

Anyone who responds to questions about their interactions with the bot, or about its significance and influence, will have their responses preserved in this folder. News articles and blog posts will also be included here.

  3. Tweets

The tweets occupy the third and final folder. As archiving the tweets will require special software to collect them and different software to read them, both programs and their documentation will also be included. Another readme file will be added so that users know how to install and use the included software to view the tweets. It will also include metadata about the collection of the tweets, including when and by what code they were collected, and a record of any modifications made to them post-collection. A few screenshots will also be provided to show the original Twitter interface, which will not be archived with the tweets themselves.

Folder structure of the AIP, highlighting the tweets


Moving Forward

While this is a good start to preserving an entertaining bot, there is more work that could be done. The next steps for this project would be to actually conduct the interviews, acquire permissions for the news articles and blog posts, and submit the AIP to the Internet Archive. There would also need to be a mechanism in place to collect new tweets from the bot, as it posts every few hours, and add them to the preserved files.

Preserving Two Headlines

Two Headlines is important to various user groups, as pointed out in my Statement of Significance. Due to its nature as a Twitter bot, it is made up of its code and its tweets. The commentary and news articles written about it would also be important to preserve for context.

From code to tweet

The bot’s code is freely available online on GitHub under the MIT license, which has no restrictions on future use of the software, therefore allowing it to be preserved with or without the creator’s permission. However, there is only one version of the code available. If there have been revisions to the code, there would be no way to preserve older versions of it.

A bit of the code that makes up Two Headlines

As my programming skills are limited, all I could tell without assistance is that the bot was written in JavaScript. Luckily for me, there is a readme file that provides more information about how the bot works and what to install to use it as your own Twitter bot. The main code has comments that explain how it functions.


To provide future users with a working version of the bot and not just the code, Node.js and npm will also be preserved. According to its website, Node.js is an “event driven framework [that] is designed to build scalable network applications”. It also says that it relies on npm, which allows developers to share and reuse “packages of code”. For preservation, only one installer will need to be acquired to have access to both programs.

The ‘packages of code’ used by Two Headlines are called ‘cheerio’, ‘request’, ‘twit’, and ‘underscore.deferred’. Cheerio is described as a “tiny, fast, and elegant implementation of core jQuery designed specifically for the server.” Request is a “Simplified HTTP request client”. Twit is a “Twitter API client for node (REST & Streaming).” Underscore.deferred provides “jQuery style Deferreds”. I have no idea what any of that means, but the webpages full of documentation for the individual code bits can be preserved fairly easily. The packages can be installed to a computer through npm.
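For anyone rebuilding the bot from this AIP, the whole setup boils down to a few commands. This is a sketch of the standard npm workflow under stated assumptions (the GitHub repository name and the entry-point file name are my guesses, not taken from the bot’s readme):

```shell
# Fetch the source code from GitHub (repository name assumed).
git clone https://github.com/dariusk/twoheadlines.git
cd twoheadlines

# Install the four dependencies listed above; npm downloads each
# package and its own dependencies into the node_modules/ folder.
npm install cheerio request twit underscore.deferred

# Run the bot (entry-point file name assumed; API keys for Twitter
# and Google News must be configured first or the calls will fail).
node twoheadlines.js
```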

After all the code is run, the bot can finally post to Twitter. These tweets will need to be acquired for preservation. The good news is that collecting tweets poses few legal or ethical challenges: they are posted in a publicly accessible medium, negating any expectation of privacy, and they are probably not copyrightable, being too short to say anything original. Twitter’s Terms of Service allows collections of tweets to be preserved as long as they aren’t made available online. In addition, Twitter’s Copyright Policy mentions some types of media that might be covered under copyright, such as photos, videos, and links to allegedly infringing materials, but not tweets.

There are many programs that can collect tweets, and they will need to be investigated to determine which performs the needed tasks best. The tweet preservation program will need to collect the tweets and their timestamps. As all the posts will be by the bot, there is no need to record the poster’s username, unless it is a comment on a post. Also useful to have would be the bot’s followers, who it is following, and who likes which posts. The number of times individual tweets are retweeted would be nice to have, if that information can be collected. All of it would need to be in a format that is easy to access, search, and manipulate.
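Whichever collection program wins out, each tweet could be stored as a small self-describing record, for example in JSON. The field names below are assumptions for illustration, not the schema of any particular tool:

```javascript
// Hypothetical record format for one collected tweet. JSON keeps the
// data easy to access, search, and manipulate, as the plan requires.
const record = {
  id: '728354558651123712',             // Twitter's tweet ID (example value)
  text: 'Supreme Court Announces World Tour',
  posted_at: '2016-05-05T19:00:00Z',    // timestamp from Twitter
  collected_at: '2016-05-06T02:13:44Z', // when the archive captured it
  retweet_count: 42,                    // nice to have, if collectable
  modifications: []                     // record of any post-collection edits
};

// One record per line ("JSON Lines") keeps large collections easy to
// stream and search without loading everything into memory at once.
const line = JSON.stringify(record);
console.log(JSON.parse(line).text);
```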

While the bot is creating a tweet, it checks that the tweet follows certain guidelines, including gender agreement between subjects and whether a second headline matching the first can be found. Any tweet that does not meet the guidelines is rejected and some other combination of headlines is put together. Unfortunately, there is no record of these rejected tweets, so it would be impossible to collect them, even though they would shed some light on the bot’s tweet-creation process.
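The reject-and-retry loop described above can be sketched abstractly. In this hypothetical JavaScript sketch, meetsGuidelines() is only a placeholder for the real checks (which involve gender agreement and headline matching); what mirrors the bot is the control flow, including the way rejected combinations vanish without a trace.

```javascript
// Hypothetical sketch of the bot's reject-and-retry control flow.
// meetsGuidelines() stands in for the real checks; this placeholder
// rule just rejects empty tweets and tweets over 140 characters.
function meetsGuidelines(tweet) {
  return tweet.length > 0 && tweet.length <= 140;
}

function composeTweet(candidates) {
  for (const tweet of candidates) {
    if (meetsGuidelines(tweet)) return tweet; // first acceptable combination
    // Rejected combinations are simply discarded -- which is why, as
    // noted above, no record of them survives to be archived.
  }
  return null; // nothing acceptable this round; try again later
}

console.log(composeTweet(['', 'x'.repeat(200), 'Taylor Swift Rules on Patent Case']));
```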

How important is Twitter?

Screenshot of Two Headlines on Twitter

The posts on Twitter are how the majority of the world sees the bot, but the program used to save the tweets will not also save the experience of using Twitter. Even though the interface is probably already in several preservation collections, preserving the original experience through a few screenshots would not be difficult. Preserving the Twitter experience is not vital to preserving Two Headlines, but it will provide users with the context in which the bot was originally encountered. Creating a few screenshots will probably fall under the fair use exemption in copyright.

As the bot reads Google News to get the headlines that it mashes up, preserving them would be an interesting addition to the collection. However, it would be outside the scope of the current project and require too much additional work. The day’s news will be archived to some degree by other institutions and the internet, so the events that created a tweet can still be determined and understood.

Any thoughts on the matter?

Two Headlines has had an impact on the bot creator community, in part because it is used as a teaching tool. While it would be nice to preserve the many news articles and blog posts that have been written about the bot, along with comments from its creator, Darius Kazemi, and other bot makers, to provide context around the bot’s importance, the effort required may make collecting more than a handful unfeasible. Bot makers who have interacted with Two Headlines would need to be contacted and asked about their views on and connection to the bot. Surveys typically have poor response rates. Any responses received could be added to the collection, but there is no way to estimate how many people will be contacted or how many responses will be received.

The news articles would be trickier to preserve, as there are copyright permissions that would need to be acquired. It would also be impossible to say that all information available about the bot was found, as there are many websites on the internet and only so much time to find them.

The Plan

  1. Preserve as close to a working version of the code as possible, using the code available on GitHub and the installers for Node.js, cheerio, request, twit, and underscore.deferred, in a location that users can access and use.
  2. Create screenshots of Twitter.
  3. Contact Darius Kazemi and any other bot creator for statements or interviews about Two Headlines and its importance.
  4. Contact the copyright holders of any discovered news articles and web posts for permission to preserve their articles.


First Sketch of Digital Images

The invention of digital images

Computers started as a text-based medium. The ability to render and display graphics needed to be invented; it was not a native feature of the hardware. Even after the computer became graphical, the internet and web browsers needed to learn to display graphics of their own. Lisa Nakamura was quoted as saying:

“In 1995 Netscape Navigator, the first widely popular graphical Web browser … initiated popular use of the Internet and, most importantly, heralded its transformation from a primarily textual one to an increasingly and irreversibly graphical one”.

Traditional images were constrained by the size of the page and the colors available for printing. Those boundaries also limited the preservation and storage issues that come with maintaining items for future users. Digital images have fewer restrictions. Nothing illustrates this point better than webcomics, which have evolved beyond the three-panel comic in the newspaper to become tall, wide, many-paneled, full-colored, or even animated (The Rise of Webcomics).


You can Photoshop that, right?

Everyone knows the old saying, ‘a picture is worth a thousand words’. Images spread more easily and faster than a blog post and are, therefore, more useful as a tool for social commentary (Is Photoshop Remixing the World?). Those digital images helped create something that is shaping the modern world: the internet meme.

However, there cannot exist an internet meme without the software to create said meme. One specific paint program, Microsoft Paint, was once described in one article as, “The graphics program that was most available during more than a decade of intensifying internet usage and meme production, the period from 1995–2007, was one inherited directly from the painting methods and tools of the 1980s”.

MS Paint was originally marketed solely as a way to sell more operating systems at a time when Microsoft Windows did not come standard on a computer. It was designed to get people interested in buying Windows to do more with their computer. Nowadays, MS Paint has been overshadowed by newer, more specialized image manipulation programs, such as Photoshop, Illustrator, and many others.

“The convergence of MS Paint’s ubiquity, with the rise of Nakamura’s ‘increasingly and irreversibly graphical’ internet, produced the circumstances under which MS Paint helped produce a visual, participatory, and online culture. This software was the graphics program most readily available and easy to use at the moment the internet took its graphical turn.”

But why call the program ‘Paint’? The word means many different things depending on the context. In home improvement, it means the stuff you put on walls, or other things, to change their color and make them look better. To an artist, it means using that same material to create something wonderful and expressive. To a visual effects artist, it means removing something from a video. To a computer, it is how the image is created.

Illustration of how vector (top) and bitmap (bottom) images are created.

There are two ways to create an image on a computer – through vectors or bitmaps. A vector image is math-based compared to a bitmap image, which is pixel-based. Vector images are a series of instructions on how to re-create, or draw, the image through creating lines or arcs between set points. Bitmaps are a pixel-by-pixel record of what the individual points of an image are. Vector-based programs, usually with ‘Draw’ in the name, were marketed towards businesses due to the precise way they created the images. Bitmaps, with their free range of expression, were sold to the general public as ‘Paint’ programs.
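A toy example makes the difference concrete. Below, the same short line is stored both ways in JavaScript; this sketch is purely illustrative and does not reflect how any real image format lays out its data:

```javascript
// Vector: instructions for re-drawing the image. Scaling just means
// multiplying the coordinates, so nothing is lost.
const vectorLine = [{ op: 'line', from: [0, 0], to: [3, 0] }];

// Bitmap: an explicit record of every pixel (1 = black, 0 = white).
// Scaling up means inventing pixels that were never recorded.
const bitmapLine = [
  [1, 1, 1, 1],
  [0, 0, 0, 0],
];

// Scaling the vector version by 10x is exact:
const scaled = vectorLine.map(cmd => ({
  op: cmd.op,
  from: cmd.from.map(n => n * 10),
  to: cmd.to.map(n => n * 10),
}));
console.log(scaled[0].to); // [ 30, 0 ]
```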


Copy of a copy of a copy…

In addition to what is created digitally, people have enjoyed taking pictures since the first camera was invented. This article points out that digital cameras have allowed people to take more pictures in two minutes than were taken in the entire 1800s. Before the internet, the vast majority of the pictures taken were never seen by anyone other than the photographer and their friends and family.

Now, the internet allows one to share their images more easily, directly to people they know or through social media to the world, and photo-editing software, like Photoshop, is ubiquitous. But, with all those copies in different locations, which is the copy that should be preserved? What is the original, or final, version? Or should everything be kept? What about derivative works?

The argument between those who say ‘keep everything since storage is cheap’ and those who favor curated collections will probably never be settled. However, it has become easier to keep everything than to cull it, as there is too much stuff to go through in any amount of time (Digital Copies and a Distributed Notion of Reference in Personal Archives).

The significance of Two Headlines, the Twitter bot

What is a bot?

The modern world is driven by the internet, especially social media. The popular microblogging site, Twitter, claims to be “your window to the world”, with several hundred million active users posting millions of tweets per day. Bots, little bits of code that do a thing, are everywhere, especially posting on Twitter. There is even a ‘botifesto’ extolling the virtues and possibilities of bots, not just those on Twitter, and the myriad actions that they are designed to do. It tries to capture the full width and breadth of what bots are and what they could be.

On Twitter, with its set 140-character limit on posts and its expectations about what those posts should look like, it is easier to create a bot that does something interesting or different than it would be anywhere else on the web. Artists and programmers have turned those bits of code into a new form of internet-based art.

But, what is a Twitter bot? The most apt definition I could find was from The New Yorker:

“Twitter bots represent an open-access laboratory for creative programming, where good techniques can be adapted and bad ones can form the compost for newer, better ideas. At a time when even our most glancing online activities are processed into marketing by for-profit bots in the shadows, Twitter bots foreground the influence of automation on modern life, and they demystify it somewhat in the process.”

It mentions several reasons why someone would want to preserve a bot: to study or learn from its code, or to understand what it says about modern culture and modern life. But I would add another reason: simply because they find it funny. There are already researchers studying what bots say about modern culture, either through their posts or through those that interact with them.


Why this bot?

Screenshot of @TwoHeadlines

Two Headlines takes two news headlines from Google News, combines them, and posts the result. The posts give a humorous, if slightly jumbled, look at the current, important events happening around the world, at least according to Google. In under three years, the bot has managed to post more than 20,000 times and gain over 5,000 followers. While not an internet high, it is a respectable following for something that is not advertised, relying instead solely on word of mouth.

The creator of the bot once described Two Headlines by saying,

“Part of the reason it’s funny is it’s timely — it’s always talking about what’s in the news right now because it’s pulling from Google News. The other advantage is that, much like Twitter, news headlines have a very specific way they’re written, both within publications and across publications. … It plays with the convention of headline-writing itself and subverts those expectations. Its hit rate is very high. Probably four or five tweets a day are very funny, which is a pretty high hit rate for a bot.”

Programs and their code are always studied by other programmers and those wanting to become programmers. People will always want to know how things work. Two Headlines’ code is freely available online, and it has already been commented to help explain the parts of the program. The comments were designed to allow others to modify the program for different results, which makes understanding the program easier for those with little to no programming experience. It is already being used as a teaching tool for those who want to learn about bot creation.

Preserving the code would be valuable to people interested in studying programming and/or Twitter bots, or to those studying online culture. However, there is no mention of whether there have been revisions to the code, so there would be no way to preserve older versions, if they exist, without the help of the bot’s creator.


Who said what?

There is also the context and commentary surrounding Twitter bots, which would be useful to anyone studying bots, especially those looking at them as more than fancy bits of programming. There have been many articles written about bots, their creators, and their cultural effects, not counting the articles trying to find the most interesting bots to follow. While Two Headlines may not have gotten much press specifically dedicated to it, or inspired any bots dedicated to mocking or adding to it as other popular bots have (at least none that I’ve found), it is still mentioned in the media, just not as often as its creator.

Speaking of the creator, let us not forget that Two Headlines is a program and was, therefore, created by a person, in this case Darius Kazemi, who is a prolific bot creator. It says so on his Twitter and website, complete with links to the other projects he is working on or has created. There is also a bio and links to several news stories on the website.

In addition to his bot creations, Kazemi has done a lot of work to help others make their own bots and is responsible for Bot Summit, a conference “where botmakers from around the world get together, both in person and online, to discuss the art and craft of making software bots”. In doing all this bot-related work, he has developed a following of fans from many different fields, such as other programmers, game developers, comedians, philosophers, and even an English literature professor, at least according to one article. Ian Bogost, whom you might remember being mentioned a few weeks ago on this blog, was quoted as saying, “You have a favorite comedian or favorite artist and you look forward to what they say, because you want to see the world through their eyes. The same kind of thing is happening with Darius.”