Shaping Our World Through Digital Photos

I took my five-year-old twins to the park the other day, and we had a blast.  As Lennon climbed atop the playset (cast and all), she shouted down to me, “Take my picture, mommy!”  Each time I rounded a corner, camera in hand, my other twin, Carys, would stop and strike a dramatic pose.  My children have been trained by the digital camera.



Then again, I suppose I have been too.  Within moments of snapping my girls’ photos, I slap a few Instagram filters on them and upload them to Facebook.  Accordingly, the photos receive another 35 ‘likes’ in a matter of minutes.  It’s my way (and much of society’s way) of saying ‘look how adorable/great/smart/special my kids are!’  Facebook and other forms of social media have become the new wallet from which we pull our kids’ photos to brag.  As Elizabeth Losh explains in “Feminism Reads Big Data: ‘Social Physics,’ Atomism, and Selfiecity,” digital photos and social media have allowed us to continue the analog scrapbooking tradition into a new digital era.


Digital Photos in Society

The ways in which society interacts with and participates in digital photo culture extend beyond mere scrapbooking, however.  Jonathan Good’s article “How many photos have ever been taken?” indicates that by 2011, humans had taken over 3.5 trillion photos.  That number has risen exponentially since the advent of inexpensive, easily accessible digital cameras.  Digital photos have become an integral part of our social experience in the 21st century.


Silly Girls


Losh examined the phenomenon of selfies through the Selfiecity project, which aggregated approximately 3,200 images and compared them for similarities and emotional connections.  While Losh had some criticisms of the project, especially when it came to gender considerations and the time limitations of the study, Selfiecity allows for a provocative conversation on the culture of selfies and its ubiquity in society.  Can you estimate how many selfies you’ve taken or been a part of?  I know I can’t.  Just as taking pictures of my little ones on my cellphone is a natural part of my routine, I don’t often think about snapping a selfie (or more likely an “us-ie”).  It’s part of our common cultural expression.  And it’s not just Millennials creating and propagating this culture.  Losh also points to the use of selfie expressions across popular culture, even permeating into the political sphere.  Not only do these examples grant some sort of legitimacy to the practice of taking selfies, they cement the notion that this form of self-expression isn’t going anywhere.


So why is the digital self-portrait so irresistible?  Just as my five-year-olds love cheesing it up for the camera, we humans love to have our stories captured.  We also love to be able to portray our story in a manner we control and shape.  Selfies grant us the ability to literally frame our stories just as we want them.  We choose what we want the world to see.  We choose what the message is when we snap the selfie and say “I was here!”


Digital Photos as Artwork

Perhaps the most important part of our obsession with the digital image is the idea that we can manipulate the image to make it uniquely ours.  Digital photography, editing, and sharing are participatory acts that make the image malleable.  Analog photography always allowed for manipulation of a photo in the darkroom.  However, editing tools such as Photoshop and MS Paint have permanently changed the way we interact with the image.  Lev Manovich in “Inside Photoshop” and the artists in Is Photoshop Remixing the World? argue that Photoshop is the evolution of the paintbrush.  An image can be transformed and recreated again and again, giving birth to new worlds out of previously static images rooted in reality.


The artists and designers in The Rise of Webcomics harness both digital image creation and interactivity, pulling in users for their contributions as well.  One of the most profound points in the video is the idea that these webcomics are given life and released in a digital ecosystem that has no gatekeeper.  Because of this, the webcomics and artwork are intensely unique and personal, just as the flood of selfies we take is.  This intimacy between the artists, their work, and the users allows for the formation of communities around oddball, cult favorites that might not have found an outlet in traditional print (POLANDBALL!).




Digital Photos, Authenticity, and Copies…so many copies…

Until I read Catherine C. Marshall’s “Digital Copies and a Distributed Notion of Reference in Personal Archives,” I don’t believe that I gave the replication of my selfies and children’s photos much thought.  I didn’t question which copy was the authentic version because the main purpose of the exercise is to create and share the photos as quickly as possible.  What do I personally rely on to back up my reference copy?  I back everything up in the cloud, of course.  Yet Marshall reminds us that we shouldn’t be so confident in the integrity and infallibility of the cloud.  What happens when these files are corrupted or lost altogether?  Which version of the photo do we then consider to be the authentic or even the most valuable?  Thanks to digital photo editing, our images can be tweaked and transformed each time they’re replicated.  As alluded to in Is Photoshop Remixing the World?, there is no true authenticity when it comes to digital art and photos (wouldn’t Lowood be delighted?).  Marshall concurs: we should focus on which version of the file we’d like to reference, and perhaps save in perpetuity, rather than on the idea that the original file carries some sort of mystical quality.  That aura doesn’t necessarily survive in the brand-new digital world.  Instead, we continue to mold and shape our vision of the world from the moment we click the camera through the editing decisions made in every iteration of the file.


So, here are some questions as we move forward:

Losh discussed women in relation to selfie culture.  Is it empowering for women, or is it another culture based around exploitation?  

Based on Marshall’s article, what can/should digital preservation professionals do to guide users on making sound decisions when it comes to their personal archives?  Is this a responsibility that we owe as a field to the greater community?

Just Photoshop It

According to Lev Manovich, “to understand media today we need to understand media software – its genealogy (where it comes from), its anatomy (interfaces and operations), and its practical and theoretical effects.” While Manovich dives into the details of Photoshop to explain his theory, Patrick Davison uses the same approach to examine MS Paint.


Manovich and Davison explain the background of both Photoshop and MS Paint, though from slightly different angles. The authors describe how both applications have an attachment to the traditional, analog form of making art based on the tools they provide. Photoshop and MS Paint contain pencil and paintbrush tools – items that anybody would inherently understand.

The authors diverge a bit in how they describe the history and development of both applications. Manovich looks more at the history of traditional media and computer programming in general. Using the example of layers in Photoshop, Manovich explains how the idea of multiple tracks, channels and layers always existed in traditional forms of film, audio recording, and animation. He explains further how two computer scientists working on special effects for Star Trek II wrote a paper comparing the layering technique used for a digital composite in the movie to putting together separate code modules in a computer program. Davison examines the economic and political factors at Microsoft, which affected the development of MS Paint. He also considers how the growth of the Internet affected the use of the software during this time period.
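The layering idea Manovich describes has a concrete algorithmic core: each layer is composited over the one below it with alpha blending, much as digital compositing worked in film effects. Below is a minimal sketch in Python; the layer values and structure are invented for illustration, not taken from Photoshop’s internals.

```python
# A minimal sketch of compositing a stack of layers, bottom to top,
# using standard "over" alpha blending on a single RGBA pixel.

def over(top, bottom):
    """Composite one RGBA pixel over another (channels in 0.0-1.0)."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    out_a = ta + ba * (1 - ta)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / out_a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), out_a)

# Stack of layers, bottom first: an opaque red base, then half-transparent blue.
layers = [(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 0.5)]
pixel = layers[0]
for layer in layers[1:]:
    pixel = over(layer, pixel)

print(pixel)  # (0.5, 0.0, 0.5, 1.0) -- a purple mix, fully opaque
```

The point of the analogy survives even in this toy version: each layer is an independent module, and the final image is the result of combining them in order.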

Anatomy (Interfaces and Operations)

Davison goes into detail describing the difference between a raster bitmap image of an MS Paint file and a vector-based file for a program like Illustrator. The bitmapped MS Paint file contained jagged edges because it was drawn with a mouse and anti-aliasing was not available to smooth out the edges. The rough artwork of MS Paint reinforced the idea that painting programs were for the general public, while drawing programs that produced more accurate images were meant for professionals.

The Taj Mahal, winded by Photoshop

Throughout Inside Photoshop Manovich describes the menus and tools available to a user – there are in fact thousands of commands available. One area of commands he explores in depth, however, is the filters. Many of these are based on traditional forms of producing art, but others are based on ideas from the physical world, like wind and waves. One conclusion he reaches is that the filters based on traditional art allow a user much more control, whereas those based on the physical world are more automated and generated through algorithms.
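The point about algorithmically generated filters can be made concrete with a toy example. This is not Adobe’s Wind algorithm – the function and its parameters are invented – but it shows the idea: the streaks are computed from the pixel data rather than drawn by hand.

```python
# A toy "wind" effect on one grayscale scanline: each bright pixel is
# smeared to the right with exponentially decaying strength. The result
# is generated entirely by the algorithm, not by the user's hand.

def wind(row, length=3, decay=0.5):
    """Smear each pixel rightward; strength halves (by default) per step."""
    out = list(row)
    for x, value in enumerate(row):
        strength = value
        for d in range(1, length + 1):
            strength *= decay
            if x + d < len(out):
                out[x + d] = max(out[x + d], strength)
    return out

row = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(wind(row))  # [0.0, 1.0, 0.5, 0.25, 0.125, 0.0]
```

Notice the user’s only control is a couple of parameters; the rest is automated, which is exactly the contrast Manovich draws with the hand-driven, traditional-art filters.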

Practical and Theoretical Effects

Both Manovich and Davison express the idea that Photoshop and MS Paint certainly draw from traditional media, but that they are also completely different. Manovich writes “. . . all media techniques and tools available in software applications are ‘new media’ – regardless of whether a particular technique or program refers to previous media, physical phenomena, or a common task that existed before it was turned into software, or not.” Despite Microsoft generally ignoring MS Paint, the software still holds importance, and Davison theorizes that the timing of the Internet with the marketing of MS Paint to the general public led to its popularity online. Davison explains how “analyzing MS Paint’s ‘authentic digital aesthetics’ is valuable because it enables a consideration of digital media as an autonomous sphere of production and value.” The art director and illustrator John Huang seems to be in agreement with Davison’s assessment of digital media when he argues that using the software is not a cop-out and is just as valid as traditional means of producing art.

Personally, I am struck by how prevalent software like Photoshop and other media applications are, but then you tack on all the social media sites like Facebook and Instagram, and you start to realize the sheer number of images out there in the world. Good’s blog post and Marshall’s article explore these topics, which seem to be the more practical effects of media software that Manovich and Davison introduce.

Take photography as an example: an analog-based art that has only been around since the 1820s (the Harry Ransom Center actually holds what is considered to be the first photograph, by Joseph Nicéphore Niépce). When you consider how the digitization of the medium has impacted culture today, it is a pretty impressive amount of change in what I would consider a short period of time.

This was originally an unfiltered cell phone photo I took. Then it was passed through an Instagram filter and possibly edited some, before making the rounds on Twitter, Facebook and Instagram. Now it reappears on this blog with new caption information.

Everyone has a digital camera today, and everybody’s images are passing through applications like Photoshop, or the filters on Hipstamatic (do people still use this?) and Instagram, in order to post them to a whole host of social media sites or personal websites. Marshall examines the idea of versions – or rather, versions, variations, and derived forms. Considering all these factors as an archivist makes your head hurt, right? Good points out how more images are on Facebook, Flickr and Instagram than are in the Library of Congress, so what do we do about all these images being produced? Can they be preserved? How? Or would we even want to preserve them all, and how would you go about selecting what you wanted? And how do you even begin to approach copyright and privacy concerns?




Photos and Media: The Influence of Visuals and the rise of Photoshop


A great deal has changed in the last two decades, especially in the fields of art and culture.  The information revolution brought on by the advent of powerful but affordable computers has had a huge effect on media culture as a whole.  More specifically, though, the role of photos and photo editing tools, particularly Photoshop, has dramatically changed and grown.

The Role of Photos in Media: Traditional and Current

…the 20th century was the golden age of analog photography peaking at an amazing 85 billion physical photos in 2000 — an incredible 2,500 photos per second. (Good)

Traditionally photos have had an important but limited role in media.  They were primarily relegated to publications such as magazines and newspapers, other mass-produced materials, and photography as art.  While photos were used by individuals as a form of communication and expression, this was highly limited by technological constraints: photos were time-consuming to make, copy, and share because the technology to do so rapidly did not exist.  Add in the fact that photos, while not expensive, were not cheap, and the role of photos in media was limited.

– it is estimated that 2.5 billion people in the world today have a digital camera[6]. If the average person snaps 150 photos this year that would be a staggering 375 billion photos. (Good)

Information technology changed this significantly by removing the technological limitations on photo use.  Cameras are practically everywhere now, affordable to practically everyone, and easily accessed.  Additionally, computers and digital technology make copying and sharing photos almost effortless, literally taking only the press of a button.  The result of all this is that the use of photos in communication and expression has practically exploded.  According to Jonathan Good, roughly 85 billion analog photos were taken in 2000 alone, the peak of analog photography since the invention of the Brownie camera in 1901 – around 2,500 photos a second.  In comparison, the estimated number of photos that will be taken this year is 375 billion, more than four times the number taken at that analog peak, and we have now taken 3.5 trillion photos in total.  These numbers reflect the increasing use of photos in our lives as a means of communication and expression.  Photos and images have a much higher information density than text or even audio recordings; people get more out of seeing an image for a few seconds than from reading for the same amount of time.  This makes photos and images an incredibly powerful method of communication and expression, since so much can be done with them.
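Good’s figures are easy to sanity-check with a few lines of arithmetic. The raw numbers below are his; the per-second conversion is mine, and it lands in the same ballpark as his rounded 2,500-per-second figure.

```python
# Figures from Good's post; the conversions are mine.
analog_photos_2000 = 85e9                # physical photos taken in 2000
seconds_per_year = 365 * 24 * 3600
photos_per_second = analog_photos_2000 / seconds_per_year
print(round(photos_per_second))          # 2695 -- near Good's rounded 2,500

camera_owners = 2.5e9                    # people with a digital camera
photos_per_person = 150                  # estimated snaps per person this year
digital_photos = camera_owners * photos_per_person
print(f"{digital_photos:,.0f}")          # 375,000,000,000
print(digital_photos / analog_photos_2000 > 4)  # True: over 4x the analog peak
```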

Photoshop and Media: The Role of Editing Tools and Software


In addition to the increasing role photos have in communication, expression, and general media, the role of photo editing tools and techniques has grown as well.  Manipulating and editing photos has been a long-standing practice in media since photos started being used.  Originally this was done using a photo’s negatives, painting or coloring them to the desired effect.  This was done for largely the same reasons as it is today: improving and optimizing the final photo.  Magazines and other visual media products often used, and still use, photo editing and manipulation to create the desired end product.  However, due to the explosion of information technology, the use of photo manipulation and editing in art, communication, and expression has grown tremendously.  In particular, Photoshop, the premier photo editing software, has grown into a cultural and media phenomenon.

Adobe Photoshop, created in 1988, is a photo editing and manipulation tool.  Due to its versatility, quality, regular support, and ease of use, it has become the de facto program for photo editing and manipulation.  With the increasing availability of cameras and the ever-increasing use of photos in media, communication, and expression, the role of photo manipulation has expanded.  Because photo manipulation allows people to repurpose existing photos and images, add new meaning to them, and even change their meaning entirely, it massively increases what can be done with photos and images.  In effect, image manipulation tools like Photoshop remove most of the remaining limitations on photos and images as a medium.  The ability to make such manipulations allows people a level of freedom never seen before, and its effect can be seen in social media.  The ability to create customized images allows for extremely fast and highly informative communication that spreads quickly.  Memes in particular are an excellent example of the influence of Photoshop and other photo manipulation tools.  They are extremely expressive and spread far faster than most other forms of communication.


In conclusion, the future of photos, images, and photo manipulation tools such as Photoshop is bright.  In our ever more technological world, where everything is connected and the creation of images and photos is cheap and accessible, their role in media will only increase.  We have gone from tens of billions of photos a year at the analog peak to over three hundred billion in a single year.  The advent of cheap, available imaging technology has spurred the adoption and expansion of image manipulation tools such as Photoshop.  This in turn has increased the role photos and images have in media even further.  Taking into consideration the increasing importance of information technology, the importance of photos, images, and photo editing tools will only grow going forward.

First Sketch of Digital Images

The invention of digital images

Computers started as a text-based medium. The ability to render and display graphics had to be invented; it was not a native feature of the hardware. Even after the computer became graphical, the internet and web browsers needed their own ability to display graphics. As Lisa Nakamura writes:

“In 1995 Netscape Navigator, the first widely popular graphical Web browser … initiated popular use of the Internet and, most importantly, heralded its transformation from a primarily textual one to an increasingly and irreversibly graphical one”.

Traditional images were constrained by the size of the page and the colors available for printing. Those boundaries limited the preservation and storage issues that come from maintaining items for future users. Digital images have fewer restrictions. Nothing illustrates this point better than webcomics, which have evolved beyond the 3-panel comic in the newspaper to become tall, wide, many-paneled, full-colored, or even animated (The Rise of Webcomics).


You can Photoshop that, right?

Everyone knows the old saying, ‘a picture is worth a thousand words’. Images spread more easily and faster than a blog post and are, therefore, more useful as a tool for social commentary (Is Photoshop Remixing the World?). Those digital images helped create something that is shaping the modern world: the internet meme.

However, there cannot exist an internet meme without the software to create said meme. One specific paint program, Microsoft Paint, was once described in one article as, “The graphics program that was most available during more than a decade of intensifying internet usage and meme production, the period from 1995–2007, was one inherited directly from the painting methods and tools of the 1980s”.

MS Paint was originally marketed solely as a way to sell more operating systems at a time when Microsoft Windows did not come standard on a computer. It was designed to get people interested in buying Windows to do more with their computer. Nowadays, MS Paint has been overshadowed by newer, more specialized image manipulation programs, such as Photoshop, Illustrator, and many others.

“The convergence of MS Paint’s ubiquity, with the rise of Nakamura’s ‘increasingly and irreversibly graphical’ internet, produced the circumstances under which MS Paint helped produce a visual, participatory, and online culture. This software was the graphics program most readily available and easy to use at the moment the internet took its graphical turn.”

But why call the program ‘Paint’? The word means many different things depending on the context. In home improvement, it means the stuff you put on walls, or other things, to change their color and make them look better. To an artist, it means using that same material to create something wonderful and expressive. To a visual effects artist, it means to remove something from a video. To a computer, it is how the image is created.

Illustration of how vector (top) and bitmap (bottom) images are created.

There are two ways to create an image on a computer – through vectors or bitmaps. A vector image is math-based compared to a bitmap image, which is pixel-based. Vector images are a series of instructions on how to re-create, or draw, the image through creating lines or arcs between set points. Bitmaps are a pixel-by-pixel record of what the individual points of an image are. Vector-based programs, usually with ‘Draw’ in the name, were marketed towards businesses due to the precise way they created the images. Bitmaps, with their free range of expression, were sold to the general public as ‘Paint’ programs.
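A tiny sketch makes the distinction concrete. The “formats” below are invented for illustration (real vector and bitmap formats like SVG and BMP are far richer), but they show why rasterizing a diagonal vector line without anti-aliasing produces the stair-step edges described earlier.

```python
# A vector image: instructions for how to draw, resolution-independent.
vector_line = {"op": "line", "from": (0, 0), "to": (4, 2)}

# A bitmap image: an explicit grid of pixels. Rasterizing the vector line
# at integer coordinates, with no anti-aliasing, yields jagged steps.
def rasterize(line, width=5, height=3):
    grid = [[0] * width for _ in range(height)]
    (x0, y0), (x1, y1) = line["from"], line["to"]
    for x in range(x0, x1 + 1):
        # nearest-row approximation of the line's true y position
        y = int((x - x0) * (y1 - y0) / (x1 - x0) + 0.5)
        grid[y][x] = 1
    return grid

for row in rasterize(vector_line):
    print(row)
# [1, 0, 0, 0, 0]
# [0, 1, 1, 0, 0]
# [0, 0, 0, 1, 1]
```

The vector version can be re-drawn at any resolution from the same instructions; the bitmap version is locked to its pixel grid, staircase and all.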


Copy of a copy of a copy…

In addition to what is created digitally, people have enjoyed taking pictures since the first camera was invented. This article points out that digital cameras have allowed people to take more pictures in two minutes than were taken in the 1800s. Before the internet, the vast majority of the pictures taken were never seen by anyone other than the photographer and their friends and family.

Now, the internet allows one to share their images more easily, directly to people they know or through social media to the world, and photo-editing software, like Photoshop, is ubiquitous. But, with all those copies in different locations, which is the copy that should be preserved? What is the original, or final, version? Or should everything be kept? What about derivative works?

The argument between those who say ‘keep everything since storage is cheap’ and those who favor curated collections will probably never be settled. However, it has become easier to keep everything than to cull it, as there is too much stuff to go through in any amount of time (Digital Copies and a Distributed Notion of Reference in Personal Archives).
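One small, practical handle on the version problem: byte-identical copies can at least be collapsed automatically by hashing file contents. Edited derivatives hash differently, which is exactly where Marshall’s harder questions begin. A sketch, with invented filenames and bytes:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash: identical bytes always produce an identical digest."""
    return hashlib.sha256(data).hexdigest()

# Illustrative files: an original, a byte-identical copy, and an edited derivative.
files = {
    "IMG_0042.jpg": b"\xff\xd8 original bytes",
    "IMG_0042 (1).jpg": b"\xff\xd8 original bytes",
    "IMG_0042_insta.jpg": b"\xff\xd8 filtered bytes",
}

by_hash = {}
for name, data in files.items():
    by_hash.setdefault(fingerprint(data), []).append(name)

for digest, names in by_hash.items():
    print(digest[:8], names)
# two groups: the identical copies collapse together, the derivative stands alone
```

Deduplication handles the easy copies; deciding which *version* (original, filtered, recaptioned) counts as the reference copy is still a human judgment.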

Turning what’s old into new: Pride and Prejudice for the next generation

The Lizzie Bennet Diaries

“It’s a truth universally acknowledged that…” Pride and Prejudice by Jane Austen is a timeless classic. Internet giant Hank Green came up with the idea to turn classic novels into a medium that today’s generation would understand and enjoy: a video blog. Green, along with a trans-media team led by Bernie Su, now called Pemberley Digital, created The Lizzie Bennet Diaries. What makes this adaptation unique is that it crossed all realms of social media and interaction. To create a full story they utilized YouTube, Twitter, Facebook, Tumblr, Pinterest, and more. The success of this adaptation has even spawned a book version – the irony is not lost that it was originally based on a book. It has even won an Emmy Award!

Preserving Undertale: Do you want to have a bad time?

This is the trailer for Undertale, a video game playable on PC and Mac that was released in September 2015. To the uninitiated in the world of indie games, this might seem like something out of the 1990s: the graphics and soundtrack are reminiscent of older games made for the NES or Gameboy systems. The graphics and sounds are nothing new, and even playing such games on a modern system is rather old hat as emulators have grown in popularity.

However, Undertale has made quite the splash in the gaming world. On the online distributor Steam, Undertale is owned by more than one million people. The game has been reviewed as being the game of 2015 by IGN and best game ever by the GameFAQs community. This game is significant simply because of its popularity, but such accolades after a few months on the market, and as an indie game made by essentially two people, raise a few questions: why is Undertale so exceptional, and how did it elicit such an intense fan response? Also, as an archivist, how and what should we try to capture from Undertale’s moment in video game history?

[The following contains some mild spoilers to the plot of Undertale. Be forewarned].

The Power to “Save.”

Undertale follows some standard features of a role-playing game. The player controls one character, a child who has fallen into the underground, a large cave-like area below a mountain where monsters have been exiled after a war with humans. They encounter monsters, some of whom are friendly and some of whom are not, and they talk, battle, and trade with them, going through different towns and regions on their journey to return to the surface.

This game acts as a commentary on the RPG genre as a whole in a number of ways. First, you can play the game without killing anyone. Toby Fox, the creator of Undertale, has built in a system where the player can “Act” in a number of ways to dissuade creatures from attacking them. This is known as a “True Pacifist” run of the game. Alternatively, the player can kill everything and everyone they encounter, a method many gamers follow in other RPGs– this is known as a “Genocide” run.

Another interesting feature of the game is that it remembers what players do, even if they do not save. For instance, if the player accidentally kills the first main “boss,” goes back to their previous save file, and then spares the boss, special dialogue directly afterwards acknowledges what they did.
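Mechanically, the trick is that the game writes certain flags outside the reloadable save slot. The sketch below is my own invented model of that behavior, not Undertale’s actual GameMaker implementation:

```python
# Two stores: a save slot the player can reload, and a persistent flag set
# that reloading never touches -- so the game "remembers" anyway.

save_slot = {"room": "Ruins", "boss_alive": True}
persistent_flags = set()

def kill_boss(save, flags):
    save["boss_alive"] = False
    flags.add("killed_boss_once")    # recorded outside the save slot

def reload(saved_copy):
    return dict(saved_copy)          # restores the save, not the flags

backup = dict(save_slot)             # the player's earlier save file
kill_boss(save_slot, persistent_flags)
save_slot = reload(backup)           # player reloads to undo the kill

print(save_slot["boss_alive"])                  # True: the save rolled back
print("killed_boss_once" in persistent_flags)   # True: the game still knows
```

Archiving only the save file would miss this second store entirely, which is one concrete reason the hidden data matters.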

Such features have not been common in previous games, even those made by major gaming companies, as things such as improved graphics and battle mechanics have been valued over more introspective games. When commentary on the genre has been used, it typically is incorporated in dialogue, with characters breaking the fourth wall and discussing RPG tropes. Toby Fox, instead, has chosen to comment on the genre in the very mechanics of the game itself.

This is what is often praised in articles and reviews, and so for that reason, these reviews and articles are significant to Undertale; they highlight the moment in gaming when Undertale was released.

Underminers and Source Code

Downloading a copy of Undertale is not terribly challenging, and so presumably an archive could obtain a copy of it and maintain hardware to open it with. However, as Matthew Kirschenbaum has noted in his book Mechanisms, there are things we do not see that are important to how the game functions, and these things matter not only to scholars of gaming, but to the community itself. Because the game remembers what players do, there is an entire section of the Undertale wiki devoted to “Consequence Avoidance,” so users can TRULY erase saved data, which requires digging into temporary files and scripts.

Toby Fox said on Twitter that he did not want people data mining, at least for the first year of the game, although he has taken a less strong stance on this since January. Because Fox made the game in an application called GameMaker, a lot of the data can be extracted without having the source files themselves. There is a Reddit community called Underminers who have gone through many of the game’s files and found new and interesting secrets about the game. Because Fox created a game with such depth, where certain actions can be remembered and trigger future, different dialogues and interactions between characters, such information is invaluable to those trying to understand Undertale without playing through it multiple times.

The Fan Community: Memes, Art, and Games

The fans of this game have been incredibly active and incredibly creative in their own right. Particularly with data from Underminers, many fan games have come about. These all build on the plot and lore present in the original Undertale, often allowing the player to battle existing characters who were not fought in the original, or introducing new characters who talk about their interactions with other characters. Most of these fan games take the format of a battle, where the player is able to choose whether to spare their opponent or kill them, highlighting that this is one of the mechanics gamers thought was most important, along with the detailed plot of the game.

Another major addition made by the fan community has been in the form of art and memes. Undertale has a number of memorable and repeated lines, and these have been used by a number of people in the gaming community to create memes and art pieces. These are significant and useful because, as a casual gamer myself, I have noticed that Undertale references have become abundant; people will say things like “[Insert anything here] fills you with determination,” “You’re gonna have a bad time,” or “Get dunked on!” These memes help highlight the phrases’ location/associations in the game, their usage, and their significance.

The fan response to this game has been immense, and it is spread all over the internet. To document and archive it in some way would allow users to see similar works, and to see when certain derivative games, stories, and images became part of the meta made by Undertale fans. This could also become a growing trend for indie games. While major gaming companies provide games with expansive stories, requiring 40+ hours to complete, with lots of images, characters, and extra stories beyond the “main plotline,” Undertale is rather confined to one plotline with a few sidequests, and the game can be completed in under 10 hours. The fans of this game have filled in the world beyond this, and that could be a trend for future indie games.

Which Path to Take?

All of these aspects of Undertale are valuable, and could be documented/archived in some way. Their significance goes beyond Undertale itself, and the information they provide helps users understand Undertale, the gaming industry, gaming communities, and ultimately how the internet has affected so many aspects of how gamers and game creators interact.

Right now, I think that perhaps the most interesting route to take is in data mining and through the Underminers. This path, in my mind, highlights a number of the significant features of Undertale. By looking at certain aspects of Fox’s code available through data mining, and attempting to archive that content in a useful way, we can highlight the mechanics used by Fox that were so crucial to the game’s success. This type of data would presumably be useful for future game creators, something Fox seemed to be interested in fostering.

Also, this type of project highlights a moment in gaming history, where indie games are springing up because of programs like GameMaker and RPG Maker that allow game designers to work efficiently, but also allow gamers to find out secrets about their games before playing them. Because of this, in many ways working with data mining and Undertale can include some social history elements as well, including perhaps a section on Fox’s opinions on data mining and the Underminers’ reaction to those opinions.

Unlike the game Undertale, I think all routes here could lead to happy, fruitful conclusions.

The significance of Two Headlines, the Twitter bot

What is a bot?

The modern world is driven by the internet, especially social media. The popular microblogging site Twitter claims to be “your window to the world,” with several hundred million active users posting millions of tweets per day. Bots, little bits of code that do a thing, are everywhere, especially posting on Twitter. There is even a ‘botifesto’ extolling the virtues and possibilities of bots, not just those on Twitter, and the myriad actions they are designed to do. It tries to capture the full width and breadth of what bots are and what they could be.

On Twitter, with its set 140-character limit on posts and its expectations of what those posts should look like, artists and programmers have turned those bits of code into a new form of internet-based art; it is easier to create a bot that does something interesting or different there than anywhere else on the web.

But, what is a Twitter bot? The most apt definition I could find was from The New Yorker:

“Twitter bots represent an open-access laboratory for creative programming, where good techniques can be adapted and bad ones can form the compost for newer, better ideas. At a time when even our most glancing online activities are processed into marketing by for-profit bots in the shadows, Twitter bots foreground the influence of automation on modern life, and they demystify it somewhat in the process.”

It mentions several reasons why someone would want to preserve a bot: to study or learn from its code, or to understand what it says about modern culture and modern life. But I would add another reason: simply because they find it funny. There are already researchers studying what bots say about modern culture, either through their posts or through those who interact with them.


Why this bot?

Screenshot of @TwoHeadlines

Two Headlines takes two news headlines from Google News and posts the combined result. The posts give a humorous, if slightly jumbled, look at the current, important events happening around the world, at least according to Google. In under three years, the bot has managed to post more than 20,000 times and gain over 5,000 followers. While not an internet record, it is a respectable following for something that is not advertised, relying instead solely on word of mouth.
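Kazemi’s real source code is freely available online, but the basic mashup move can be sketched in a few lines of Python. This is an illustrative toy only, not the bot’s actual algorithm: the function names and the crude capitalized-word heuristic below are my own invention, and the real bot pulls its headlines live from Google News rather than taking them as arguments.

```python
# Illustrative toy only: Darius Kazemi's real Two Headlines code works
# differently. The names and the capitalized-word heuristic are my own.

def split_subject(headline: str):
    """Treat the run of capitalized words at the start as the 'subject'."""
    words = headline.split()
    subject = []
    for word in words:
        if word[:1].isupper():
            subject.append(word)
        else:
            break
    return subject, words[len(subject):]

def mash_headlines(headline_a: str, headline_b: str) -> str:
    """Swap the leading subject of headline B into headline A."""
    subj_a, rest_a = split_subject(headline_a)
    subj_b, _ = split_subject(headline_b)
    if not subj_a or not subj_b:
        return headline_a  # fall back if no capitalized subject is found
    return " ".join(subj_b + rest_a)

print(mash_headlines(
    "Congress passes sweeping budget deal",
    "Taylor Swift announces world tour",
))
# → "Taylor Swift passes sweeping budget deal"
```

Even this toy shows why the formula works: headlines share such a rigid grammar that swapping subjects almost always yields a sentence that still *reads* like news.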

The creator of the bot once described Two Headlines by saying,

“Part of the reason it’s funny is it’s timely — it’s always talking about what’s in the news right now because it’s pulling from Google News. The other advantage is that, much like Twitter, news headlines have a very specific way they’re written, both within publications and across publications. … It plays with the convention of headline-writing itself and subverts those expectations. Its hit rate is very high. Probably four or five tweets a day are very funny, which is a pretty high hit rate for a bot.”

Programs and their code are always studied by other programmers and by those wanting to become programmers. People will always want to know how things work. Two Headlines’ code is freely available online, and it has already been commented to explain the parts of the program. The comments were designed to allow others to modify the program for different results, which makes understanding the program easier for those with little to no programming experience. It is already being used as a teaching tool for those who want to learn about bot creation.

Preserving the code would be valuable to people interested in studying programming and/or Twitter bots, and to those studying online culture. However, there is no mention of whether there have been revisions to the code, so there would be no way to preserve older versions, if they exist, without the help of the bot’s creator.


Who said what?

There is also the context and commentary surrounding Twitter bots, which would be useful to anyone studying bots, especially those looking at them as more than fancy bits of programming. There have been many articles written about bots, their creators, and their cultural effects, not counting the articles trying to find the most interesting bots to follow. While Two Headlines may not have gotten much press specifically dedicated to it, or inspired any bots dedicated to mocking or adding to it as other popular bots have (at least none that I’ve found), it is still mentioned in the media, just not as often as its creator.

Speaking of the creator, let us not forget that Two Headlines is a program and was therefore created by a person, in this case Darius Kazemi, a prolific bot creator. It says so on his Twitter and website, complete with links to the other projects he is working on or has created. There is also a bio and links to several news stories on the website.

In addition to his bot creations, Kazemi has done a lot of work to help others make their own bots and is responsible for Bot Summit, a conference “where botmakers from around the world get together, both in person and online, to discuss the art and craft of making software bots”. In doing all this bot-related work, he has developed a following of fans from many different fields, such as other programmers, game developers, comedians, philosophers and even an English literature professor, at least according to one article. Ian Bogost, whom you might remember being mentioned a few weeks ago on this blog, was quoted as saying, “You have a favorite comedian or favorite artist and you look forward to what they say, because you want to see the world through their eyes. The same kind of thing is happening with Darius.”

Preserving and clarifying collaborative contexts

The launch of YouTube in 2005 was quickly recognized as a watershed moment in the growth of social media and user-contributed content online. The ease of uploading and embedding video provided by YouTube made it accessible to a much wider non-specialist audience. Kutiman’s 2009 music and video project ThruYou builds on the subsequent explosion of homemade video content, using YouTube as its source material. Kutiman (aka Ophir Kutiel), an Israeli musician and producer, combed through YouTube, tracking down dozens of clips, musical and non-musical—homemade guitar lessons, piano recitals, amateur freestyle raps, random people screwing around with Theremins or synthesizers. He then used this raw material to create a set of seven original songs, looping and layering audio and video clips from a dozen or more sources to create each song and its accompanying video.


The heart and soul of an archive


As we’ve already discovered this semester, the performing arts have a long history of documentation, so in this sense my project will be nothing new. But the readings we’ve had thus far have mostly covered how the performing arts deals with archiving works anchored in the temporal, not how it deals with the digital aspects of those temporal works.

My project this semester is going to focus on exploring avenues for archiving all the different production and design elements, the paperwork and properties that go into creating and running a theatre show. I am going to use a specific musical I worked on a few years ago as a case study. I picked this show because I was more involved in the design process than I usually am as a master electrician: the load-in was especially complicated, and I also ended up assisting the lighting designer by programming the show. But I also recently discovered that the theatre company in question lost a good amount of their archival material on the musical while they were in the process of making their own archival copies, so it also serves as a good object lesson in what can be lost.

The production in question is a bit of an adaptation of an adaptation: the 1988 movie Big was adapted into a musical for Broadway in 1996, and this is the Theatre for Young Audiences (TYA) version. Yeah, this wouldn’t be my first choice for a TYA production either, but there’s also a TYA version of Avenue Q, so here we are. And the libretto isn’t really why we’re here, though we’ll archive that too. I’m interested in the more technical aspects.

Big was a bit of a game-changer for Adventure Theatre, since they had recently acquired a new lighting product to be implemented on this show and used in subsequent shows: flexible LED tape with red, green, and blue LEDs, allowing for near-infinite color mixing. This low-profile ‘tape’ could be attached directly to set pieces, so there was a high degree of coordination between the scenic designer and the lighting designer; in fact, reviewers often attributed the LED tape to the scenic designer rather than the lighting designer. It also had the unintended consequence of making the lighting programming so complicated that we actually ran out of internal memory on the lighting console before we could finish building the show. The console was several generations out of date, ran on DOS, and only took floppy disks as external memory.

This was compounded (compounded!) by timeline issues: we had to find a board that would read the existing show file and execute it in the same manner, as we didn’t have time to rewrite the whole thing, and the show was so fast-moving that there was no pause in the cue sequence long enough to swap disks during the run (the load process was estimated at two minutes, and there wasn’t a single page of the script without cues). The LED tape was controlled by programming boxes built from scratch by the (amazing) technical director, so documentation was minimal and fixes could only be accomplished by that one individual, which I believe is still the case to this day (especially in terms of documentation). Other digital elements include the projections, the basic CAD files for the set and the ‘regular’ part of the lighting, and the sound cues, which were run entirely through a digital program. The sound designer and the lighting designer often worked together to time lighting cues or adjust the length of sound effects so they would complete together.

These are essential elements that were born digital and must stay digital in order to maintain their essential qualities. Focusing on the preservation of these elements and exploring what resources are out there to support them that are aimed at or affordable for the non-profit community would allow not only for better archiving of cultural history, but for sharing innovation as well — the digital equivalent of reaching over someone’s shoulder and typing in code from memory.

The stakeholders obviously include the theatre company, the designers and actors, but also potentially those interested in studying theatre on a variety of levels: the work, the design, or the designers. It also includes the general public.

The theatre company: Theatre companies will use items from past productions for many reasons: moving or still images can be used in advertisements for the theatre as a whole or in promotional or fund-seeking material for the company; the company may need the design elements if they want to stage a revival; certain set or props pieces may need to be re-worked for another show, or a tricky effect or certain board pre-sets may be re-used by a designer from an earlier show they worked on. Good records of a show and how it works are also important during the run — for example, if an actor is injured or the stage manager needs to be replaced (an actual emergency that happened mid-tech on this show).

Designers and actors: Portfolios are an integral part of a designer’s self-promotional arsenal; they act as visual supplements to a resume or CV. Photography is generally discouraged during live theatre, both to prevent the actors from being distracted and to protect the design’s integrity. Promotional photography will usually be taken during one of the last few dress rehearsals, with specific moments set up afterwards if called for. This guarantees that production stills will be of the best quality, and designers and actors alike can get professional images of their craft to promote it to other talent-seekers. Designers will have their copy of the paperwork submitted to the company, but may also receive (if they desire) the plot work for the finished pieces, which accounts for any differences or adjustments made between the first draft and the finished product.

Researchers: Theatre research tends to be either script-based (studying a playwright’s oeuvre), or methodology-based (Stanislavski method, Alexander technique), but the history of the physical craft of theatre has its investigators as well. Available materials, techniques, and design influences can all be read longitudinally through a theatre company’s collective archive.

General Public: Some theatre archives, like the TOFT archive at the NYPL, require users to prove that they are in the industry, but not all film and tape archives have that requirement, and even then, if you are in the performance industry, or a student of it, you can still watch something just for entertainment. Also, having these archives available for designers to work from helps build a better production for audiences in the future to enjoy.

Brendan DeBonis as Billy and Greg Maheu as Josh in Big, The Musical TYA. Photos by Bruce Douglas.

The ‘magic of theatre’ is, most of the time, just endless hours of manual labor and seat-of-your-pants improvisation to get the show up and running, and to keep it that way, especially amongst smaller theatres that don’t have the same budget as Broadway or the Kennedy Center or Disney World. But they still want to put on a good show. Big is about finding out you’ve bitten off more than you can chew, and discovering what’s great about what you are. Discovering things you didn’t know you had the capacity to do is exactly the kind of goal theatre archives are here to serve.

Moving Still Art: Rob and Nick Carter’s “Transforming”

A traditional painting is static to the human eye, despite the imperceptible movements of its atoms, or the refresh rate of the screen if it is displayed or created digitally. The husband-and-wife artist duo Rob and Nick Carter set out to challenge the notion of how static these pieces need to be in their series called “Transforming.” Delving into a new venture between 2009 and 2013, they worked with the English visual effects firm the Moving Picture Company (MPC) to create a series of computer-based digital paintings reimagining still paintings from the Dutch Golden Age, the Renaissance, and 18th-century Germany.

Four of these works are presented as films on Mac screens or iPads set in traditional portrait frames, each approximately two to three hours in length, looping continuously. Each piece changes slowly and often imperceptibly over the course of the playback, employing databases of insect movements and plant life cycles, algorithms, and traditional computer animation. The intention is to promote sustained engagement with the paintings, in contrast to the six seconds the average museum-goer spends looking at an artwork.
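A back-of-the-envelope sketch shows why the change reads as imperceptible. This is my own illustration, not the Carters’ or MPC’s actual method: assume, purely for the sake of arithmetic, a simple linear crossfade stretched over a three-hour loop at 30 frames per second.

```python
# Illustrative sketch only: a linear crossfade over a three-hour loop.
# The Carters' actual animations (built with MPC) are far more complex;
# the fps and duration here are assumptions for the arithmetic.

def blend_factor(frame: int, fps: int = 30, hours: float = 3.0) -> float:
    """Fraction of the way from the start image to the end image."""
    total_frames = int(fps * hours * 3600)
    return (frame % total_frames) / total_frames

# Each frame advances the blend by 1/324,000, about 0.0003 percent:
# far too small a change to detect between consecutive frames.
step = blend_factor(1) - blend_factor(0)
print(f"per-frame change: {step:.10f}")
```

Spread over hours, the accumulated change is dramatic, but no single moment of viewing reveals it, which is exactly the effect the works exploit.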

Transforming Vanitas Painting

Transforming Still Life Painting

Transforming Diptych

Transforming Nude Painting

Significance and Communities

Groups interested in the survival of these works include art scholars across various concentrations. To those studying the original works that inspired them, these new pieces serve as a vital link for understanding their impact and tracing their influence over time. Rob and Nick Carter’s work is also an important example of remixing or reuse, and documenting it captures the influence of the original artworks along with the new work itself. Ultimately, preserving the digital paintings also means allowing for further transformation, as digital files and code are much easier to transform than their analog counterparts. Thus, these works are part of the social memory creation surrounding both the original works and the genres they represent.

Another group that would want these art pieces preserved is those studying new media art and its history. Kate Bryant of the Fine Art Society of London claims that these are the world’s first digitally rendered paintings (old paintings entirely recreated with a computer), making them important to preserve as documentation of the establishment of a new genre or technique. While the approach of a modern-day homage to earlier forms of art was innovative, I believe the work of Rob and Nick Carter is conservative compared to some new media art, which can depart quite jarringly from traditional painting.

These conventional elements may have made the work palatable to more traditional galleries such as The Frick Collection and the Mauritshuis, which exhibited some of these works alongside centuries-old still life paintings (in fact, it is apparently the first digital work exhibited at The Frick). The works of “Transforming” are therefore important for understanding how the genre of still life is being adapted to contemporary society through changes in technology, and how new media is making its way into older traditions. I think this intersection of old and new is important to document and will be interesting to users in the future.


At the same time, their work uses cutting-edge technology in animation, coding, and display, which will interest computer art and design historians. Additionally, since Rob and Nick Carter worked with a visual effects firm, the works will also interest those who want to understand how corporate entities are involved with art, especially in facilitating digital art for artists who may not have the technical skills to realize their vision.

Finally, these pieces are part of contemporary attempts by creators and producers to foster user engagement with media content. With the ever-growing amount of media exposure in daily life, the public often devotes only a small amount of time to the images that pass before their eyes. These artworks are a response to this moment, a clear commentary on the need to focus and on how undivided attention can be rewarded. Therefore, documenting “Transforming” means documenting the cultural conversation around media consumption in the early 21st century.