The invention of digital images
Computers started as a text-based medium. The ability to render and display graphics had to be invented; it was not a native feature of the hardware. Even after the computer became graphical, the internet and web browsers still needed to learn to display graphics of their own. As Lisa Nakamura put it:
“In 1995 Netscape Navigator, the first widely popular graphical Web browser … initiated popular use of the Internet and, most importantly, heralded its transformation from a primarily textual one to an increasingly and irreversibly graphical one”.
Traditional images were constrained by the size of the page and the colors available for printing. Those boundaries also limited the preservation and storage problems that come with maintaining items for future users. Digital images have fewer restrictions. Nothing illustrates this point better than webcomics, which have evolved beyond the three-panel newspaper strip to become tall, wide, many-paneled, full-color, or even animated (The Rise of Webcomics).
You can Photoshop that, right?
Everyone knows the old saying, ‘a picture is worth a thousand words’. Images spread more easily and faster than a blog post and are, therefore, more useful as a tool for social commentary (Is Photoshop Remixing the World?). Those digital images helped create something that is shaping the modern world: the internet meme.
However, an internet meme cannot exist without the software to create it. One paint program in particular, Microsoft Paint, was described in one article this way: “The graphics program that was most available during more than a decade of intensifying internet usage and meme production, the period from 1995–2007, was one inherited directly from the painting methods and tools of the 1980s”.
MS Paint was originally marketed solely as a way to sell more operating systems at a time when Microsoft Windows did not come standard on a computer. It was designed to get people interested in buying Windows to do more with their computer. Nowadays, MS Paint has been overshadowed by newer, more specialized image manipulation programs, such as Photoshop, Illustrator, and many others.
“The convergence of MS Paint’s ubiquity, with the rise of Nakamura’s ‘increasingly and irreversibly graphical’ internet, produced the circumstances under which MS Paint helped produce a visual, participatory, and online culture. This software was the graphics program most readily available and easy to use at the moment the internet took its graphical turn.”
But why call the program ‘Paint’? The word means different things depending on the context. In home improvement, it means the stuff you put on walls, or other surfaces, to change their color and make them look better. To an artist, it means using that same material to create something wonderful and expressive. To a visual effects artist, it means removing something from a video. To a computer, it describes how an image is laid down, point by point.
There are two ways to create an image on a computer – through vectors or bitmaps. A vector image is math-based, while a bitmap image is pixel-based. Vector images are a series of instructions on how to re-create, or draw, the image by constructing lines or arcs between set points. Bitmaps are a pixel-by-pixel record of the individual points of an image. Vector-based programs, usually with ‘Draw’ in the name, were marketed toward businesses because of the precision with which they created images. Bitmaps, with their free range of expression, were sold to the general public as ‘Paint’ programs.
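The difference between the two models can be sketched in a few lines of Python. Everything here is illustrative – the instruction names (`move_to`, `line_to`) echo common vector formats such as SVG path commands, but this is not any real graphics API:

```python
# Vector model: the image is a short list of drawing instructions.
vector_image = [
    ("move_to", 0, 0),
    ("line_to", 4, 4),   # one diagonal stroke, described by its endpoints
]

def rasterize(instructions, width=5, height=5):
    """Replay vector instructions into a bitmap: a pixel-by-pixel grid."""
    bitmap = [[0] * width for _ in range(height)]
    x, y = 0, 0
    for op, *args in instructions:
        if op == "move_to":
            x, y = args
        elif op == "line_to":
            x2, y2 = args
            steps = max(abs(x2 - x), abs(y2 - y), 1)
            for i in range(steps + 1):   # step along the line, lighting pixels
                px = round(x + (x2 - x) * i / steps)
                py = round(y + (y2 - y) * i / steps)
                bitmap[py][px] = 1
            x, y = x2, y2
    return bitmap

bitmap_image = rasterize(vector_image)
# The vector form stays two instructions at any resolution; the bitmap
# form must record all 25 pixels (and more, if the grid grows).
for row in bitmap_image:
    print(row)
```

This is also why vector images scale cleanly while bitmaps get blocky: re-running the same two instructions on a larger grid redraws the line precisely, whereas enlarging the stored pixels can only stretch them.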
Copy of a copy of a copy…
In addition to what is created digitally, people have enjoyed taking pictures since the first camera was invented. The article points out that digital cameras have allowed people to take more pictures in two minutes than were taken in the 1800s. Before the internet, the vast majority of pictures taken were never seen by anyone other than the photographer and their friends and family.
Now the internet allows people to share their images more easily, directly with people they know or with the world through social media, and photo-editing software like Photoshop is ubiquitous. But with all those copies in different locations, which copy should be preserved? What is the original, or final, version? Or should everything be kept? What about derivative works?
The argument between those who say ‘keep everything, since storage is cheap’ and those who favor curated collections will probably never be settled. However, it has become easier to keep everything than to cull it, as there is too much material to go through in any reasonable amount of time (Digital Copies and a Distributed Notion of Reference in Personal Archives).
8 Replies to “First Sketch of Digital Images”
I was interested in Marshall’s discussion, early in her article, of the benign neglect that leads people to put copies of their pictures on multiple sharing platforms, and the wish that there were a way for the casual or even interested amateur to sync their edits across platforms, so that the work would be uniform, the metadata coordinated, and the authoritative version of the image recognized by the creator. If, at some point, every social media platform is owned by Facebook or Google, the problem will more or less solve itself.
However, I don’t think “just save everything because it’s cheaper to save it” is the right answer. Call it professional bias, but I would argue that with digital material curation is even more important, because there is so much greater scope for corruption of the content. Even if the data never becomes recoverable, curation would at least ensure that you know what you’ve lost.
Also, Going Postal covers the potential negative ramifications of this work scenario.
Even if all social media is owned by one company, there will still be specialized platforms, one for life updates, one for photo sharing, etc., and even if the same picture is posted on all of them, they will still acquire different comments and likes because not everyone will be using all the platforms.
While I do agree that curation is better than no curation, if only items that can be curated are accepted, then many things will be lost because they were never preserved at all. If uncurated items are accepted, there is at least hope that someone might come along and curate them before they become unrecoverable.
Going Postal is available from the UMD Performing Arts Library.
But what about the EU-CERN example? CERN has so much digital storage space that they are playing host to all of the EU’s digital collections, items being dumped, we’re told, with zero curation.
The “long-term retrieval strategy,” as Marshall would call it, is that CERN is incredibly safe and well funded, and its data-protection algorithms for its terribly important scientific materials are more than enough to take care of the drop in the bucket that is the entire history of the European Union. But as we learned in Dr. Punzalan’s class, this data isn’t being cared for. To take an intentionally polemical stand: aren’t we just shoving materials into the back of a hypothetical fridge and hoping that instead of mold it grows metadata?
Yes, exactly. But, to continue the metaphor, the alternative is to leave everything on the counter while we carry on with life and hope it doesn’t spoil before we have a chance to create the metadata and put everything in the fridge where it belongs. There is more time to create metadata if the item still exists.
I think there’s also an MPLP aspect to this kind of situation: the idea that, for the moment, it’s most important to preserve the materials, and later we can go through and edit/arrange as (and if) we have time.
Another aspect might be a Google-style faith in the ability of search-engine functionality to ultimately sort through and find whatever we need. Gmail is a great example: Google de-emphasizes deleting and organizing your email, instead encouraging you simply to archive everything and use the search function to find whatever you need. From personal experience this works most of the time, but it definitely has shortcomings in some cases, and it certainly conflicts with archival concepts of appraisal, selection, and arrangement.
The approach also seems to rely on creating a big enough problem that a significant stakeholder community will be interested in a solution for parsing large amounts of data. If we save all of this material instead of curating now, maybe one day market forces will coalesce to produce a tool that lets us sort through it all, because the situation has become so untenable. It reminds me of the idea that widely adopted file formats will be safe in the future because stakeholder interest is so high that someone will eventually figure out a way to forward-migrate or open them. These ideas seem logical, but they rely as much on future uncertainty as trying to select and appraise what users will be interested in accessing in the future.
I might be thinking about the wrong sort of thing here, but I think we’re already moving toward this, although not for preservation but for businesses to market things to us. Facebook has access to all of the photos we put on its website and, apparently, can automatically identify us even without seeing our faces. Such technologies, while kind of creepy, might help us sort through massive amounts of data down the line. If we can do such things with faces, it might be possible to do them for other information we need, both in the photo and stored outside it as well (e.g., finding closely related files, even if they are not exact replicas, and figuring out the differences). The two tasks are not entirely parallel, I suppose, but I think there could be potential for adaptation there, rather than a need to develop a whole new system.
(So, to continue the analogy, we put things in the back of the fridge until we have a robot smart enough to label them quickly while we get on with other things.)
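The kind of “closely related but not an exact replica” matching mentioned in this thread is often done with perceptual hashing. Here is a minimal sketch, assuming a toy setup: the 4×4 grayscale grids are made-up stand-ins for real photos, and real systems compute this kind of “average hash” over downscaled thumbnails of actual images rather than hand-written grids.

```python
def average_hash(pixels):
    """Hash an image by marking which pixels are brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same picture."""
    return sum(a != b for a, b in zip(h1, h2))

# A made-up 4x4 grayscale "photo" (0 = black, 255 = white).
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]

# The "same" photo after recompression: every value dimmed slightly.
recompressed = [[p - 5 for p in row] for row in original]

# An unrelated picture (a checkerboard pattern).
other = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(other))
print(d_same, d_diff)  # the near-copy scores far closer than the stranger
```

Because the hash only records which regions are brighter than average, small global shifts from recompression or resizing leave it unchanged, while a genuinely different image flips many bits – which is roughly how a future tool could cluster the scattered copies of one photo across platforms without human labeling.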