Folklore and the Fear Factor: The Evolution of Legends in the Era of Reddit

In an era of technology, modern medicine, and science, the idea that people still believe in, share, and adhere to folklore might sound absurd. Take, for instance, the story of the Pied Piper of Hamelin: a colorfully dressed rat catcher, hired by the town of Hamelin, plays his flute, entrancing the pests and leading them out of town. When the town refuses to pay for his services, however, the Piper uses his flute to lure a new set of victims: the town’s children. Drawn by his tune, the children leave town and vanish, never to be seen again. By today’s standards, the story sounds more than a little odd, the type of tale unlikely to pass the test of time the way it once did. Yet if you dig more deeply into it, a truth unfolds.

A rendition of the Pied Piper of Hamelin, copied from the stained-glass window of the Market Church in Hamelin.

While the rats were a later addition to the story, one common truth remained: a stranger came to town and left with the children. In 1227, approximately 50 years prior to the events in Hamelin, the Holy Roman Empire and Denmark fought a battle that pushed back Danish borders. Colorfully dressed recruiters, often called “locators,” travelled the land on the Empire’s behalf to find skilled men and women to move north and protect its new borders. For obvious reasons, this was a hard sell: for towns like Hamelin, losing skilled laborers could put the whole community at risk. As a result, it became common practice to sell or give away children to the cause when locators came to town. For Hamelin, the tracing of local surnames to those new settlements supports the less savory version of this folktale: the town made the collective decision to sell its children to locators to ship off to new towns. From there, a collective story was constructed as a way to cope with that decision for years to come, and the Pied Piper was born.

Much like those who came before us, we still tell stories to make sense of the world. In particular, we continue to be drawn in by stories of tragedy, of what hides in the dark, of what steals our children. Our modern legends can be traced through figures such as Slender Man: an unnaturally thin and tall humanoid creature said to stalk, abduct, and traumatize its victims, usually children or young adults. His story began on the Something Awful forums with a couple of doctored photos, but users there (and on other sites, such as Reddit and 4chan) began adding narrative and visual art, building a mythos around Slender Man.

The legend grew in popularity, showing up first in video games and then in film, blending into mainstream popular culture along the way. Unfortunately, much of this limelight was the result of a 2014 tragedy, when two 12-year-old girls lured their friend into the woods and stabbed her as an “offering” to Slender Man. Their actions, awful as they were, show the pervasive power of folklore in the modern era.

Film poster for Slender Man, released in 2018.

While the original Slender Man story proliferated on a pre-Reddit site, there is little doubt that Reddit has become a breeding ground for modern-day folklore. Subreddits such as r/creepypasta, r/nosleep, r/letsnotmeet, and others host entire communities built around creating, sharing, and commenting on scary stories.

For now, my primary question remains: when we compare these stories against more traditional folklore, what role does a medium such as Reddit or TikTok play in the creation and proliferation of folklore? And in the era of science and technology, are we somehow more beholden to these stories than ever before?

In my project, I hope to explore some of the most popular subreddits and examples of modern folklore, examining how the medium of social media shapes the creation and proliferation of folklore. Without our realizing it, have these stories become even more important to our societies than the folktales we believe we have left behind?

For now, I will look at examples such as Slender Man (and other creepypasta figures) and trends such as Randonautica to track how they show up in social media, most likely using tools such as Voyant, the Google Ngram Viewer, and topic modeling programs where possible. From there, I will attempt to assess the role these platforms play in the potency of the stories told, and to gauge the staying power of these legends against “virality” and the fleeting nature of online trends.
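To make the topic-modeling step concrete, here is a minimal sketch of the kind of analysis I have in mind, using scikit-learn’s LDA implementation as one common stand-in for MALLET-style topic modeling. The posts below are invented placeholders for text gathered from the subreddits above, and the topic count is an arbitrary tuning choice:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder posts; in practice these would be gathered from r/nosleep, etc.
posts = [
    "I saw a tall thin figure at the edge of the woods last night",
    "My daughter keeps drawing a faceless man in a black suit",
    "We followed the Randonautica coordinates to an abandoned house",
]

# Build a document-term matrix, dropping common English stop words
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

# Fit a two-topic model to the matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the words that most strongly characterize each discovered topic
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")

If the model works, legend-specific vocabulary (thin, faceless, woods) should cluster together; tracking how each topic’s top words shift over time would be one way to measure a legend’s staying power against the churn of online trends.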

Citations:

Blank, Trevor J., and Lynne S. McNeill. “Introduction: Fear Has No Face: Creepypasta as Digital Legendry.” In Slender Man Is Coming: Creepypasta and Contemporary Legends on the Internet, edited by Trevor J. Blank and Lynne S. McNeill, 3-24. Logan: University Press of Colorado, 2018. Accessed February 24, 2021. http://www.jstor.org/stable/j.ctv5jxq0m.4.

Mahnke, Aaron, host. “A Stranger Among Us.” Lore (podcast), December 28, 2015. Accessed February 24, 2021. https://www.lorepodcast.com/episodes/24.

Photos:

https://www.cinematerial.com/movies/slender-man-i5690360/p/fwdpcmpf

https://en.wikipedia.org/wiki/Pied_Piper_of_Hamelin

My Life as a Warning to Others

Hello, everyone. My name is Andy Cleavenger, and I am beginning my fourth year of this two-year program.

My life up to this point has been spent as a photographer and multimedia specialist at a government contractor. I work in their Communications department. My interest in this class stems from my role as the sole caretaker of our department’s image collection. For over 17 years I have been the only one capable of performing image searches, and the only one concerned with the preservation of those images. I’m in the Digital Curation track to learn how to effectively turn my collection into a self-service resource available to all employees. And I’m in this class specifically to make sure I’m doing everything possible to ensure the long-term preservation of our image collection.

I must admit that the first axiom listed in Owens – “A repository is not a piece of software” – just about made me stand up from my chair and shout “see, I told you!” at my former boss. We have always treated the image collection as a problem that can be solved with a magic-bullet purchase of DAM software.

“We bought it… we’re done!”

This is, of course, extremely common. Like most offices, they forget about the systems that will come after the present one, and about the unceasing march of technological progress that dictates both the increasing complexity of the images and the expanding diversification of their use. This was nicely summed up in Owens’ last axiom: “Doing digital preservation requires thinking like a futurist.” I fear that they may regret some of the decisions they’ve made, such as stripping all filenames from their videos, throwing everything into a single directory, and then depending on an external proprietary catalog file to hold all the related metadata.

We are now married to that system… and it’s failing us.

The remaining articles, on either side of the digital dark age debate, made some equally compelling points. Ultimately, I felt that Lyons and Tansey came closest to hitting the mark on what form a digital dark age would take, as well as the forces that would drive it. Lyons frames the problem as one of cultural blindness: institutions that exist within and serve a particular society tend to have difficulty recognizing the value in – or even being aware of – the records of other communities. As such, the digital dark age will manifest itself as the silence of socio-politically disadvantaged communities within the archival record.

This is not an unfamiliar argument, but I tend to think the omissions are driven less by conspiracy than by a sad pragmatism born of extremely finite resources. Tansey makes this point well: the long trend of cuts to budgets and staff forces institutions to set priorities that inevitably leave gaps in the archival record. In other words, even if an institution is aware of fringe communities, and perhaps even has a sympathetic collections policy for including their records, the pragmatism of limited resources may still dictate their omission as the institution focuses on its highest priorities.

I have certainly seen this in my position in the Communications department. Have others in the class seen examples like this in their own workplaces?

Some Thoughts on Visualization in the Humanities, or the worst blog post title ever (sorry)

This week’s readings expressed a wide and deeply conflicted range of attitudes regarding the assorted uses of computers, computer modeling, and the data-ization of the humanities. The authors were all for it, but some of the counterarguments they discussed were interesting – and valid. This validity is incredibly important; having been discussing diversity and cultural inclusion in LBSC 631 this week, I found myself hyper-aware of the attitudes some of the authors displayed toward their techno-tentative brethren. However, this is a blog, and I am going to make some grand and sweeping statements – which I will then try to back up… hopefully using memes.

Grand Statement #1: Let’s not be that guy.

You know the guy I mean.

Grand Statement #2: “Computers allow you to go further.”

If there is to be a rallying cry for the Digital Humanities, this might well be it. Yesterday, I was whining to a mechanical engineer of my acquaintance about Underwood’s observations on the Academy’s reluctance to embrace digital technologies and its fear of a total seismic shift* in its world.

I would like to assuage those fears. According to my mechanical engineer, “Computers allow you to go further. They don’t take away the work.” I was scrambling for a pen at this point, so the next bit is a paraphrase: computers create more work, and they make what you’ve got more accessible.

Take the work done with MALLET: Blevins describes how computer modeling validates itself. The Ballard diary, chosen in part (one assumes) for its completeness, shows how well the computer can model. Blevins even relates how surprised he was that it initially worked so well. But it worked. The tool did the thing it was designed to do. That’s great! And now there’s all this data to play with. If you wanted to focus only on the number of babies born when the crocuses were in bloom, it’s a simple matter of correlating your data, as in the sketch below. If you wanted to take up the argument discussed in Graphs, Maps, and Trees – that there is no such thing as a “gothic novel” – and dissolve that grouping from the chart of genres to see the effects, you could do so. Vistas abound; new peaks arise to be surmounted.
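As a toy illustration of that “simple matter of correlating your data” – the entries below are invented stand-ins, not Ballard’s actual diary – once each dated entry carries a topic label, the crocus-season question reduces to a short groupby:

import pandas as pd

# Invented stand-ins for topic-labeled diary entries
entries = pd.DataFrame({
    "date": pd.to_datetime(["1796-03-14", "1796-04-02", "1796-11-20"]),
    "topic": ["midwifery", "midwifery", "housework"],
})

# Count birth-related entries per calendar month
births = entries[entries["topic"] == "midwifery"]
print(births.groupby(births["date"].dt.month).size())

Restrict the months to crocus season and you have your count; swap in a different topic and the same three lines answer a different question.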


The problem, I think, is that humanists see things like “correlating data” and “data manipulation” and freak out, because these are STEM things. They are not scientists, they’re humanists; theirs is a world of logic and rhetoric. Well, yes, fine – but notice how the scientists get all the grants?

Science vs. Humanities meme… See, and my family thought I was crazy for being a Humanities major.

We don’t need to spend years waiting on graduate students to count everything by hand. We can load that puppy – or important literary work – into a computer and run analysis: any analysis, all analyses. And then tomorrow we can do it again, and go further and deeper. Instead of relying on grad students, you can partner with other academics on the other side of the world as easily as in the next building over, à la Graham, Milligan, and Weingart. Where privacy and exclusivity are a concern, there is no need to make work public as they did; but openness opens a work up to more input, catches mistakes earlier, and brings in multiple points of view. No one publishes the book to make money anyway; you publish the book to get tenure.

Grand Statement #3: This is not the Singularity.

Technology is moving at a brisk clip, but we’re not in any danger of being replaced by robots today, or tomorrow, or the next day. For whatever reason – and I’m going to guess it has a lot to do with not being good with computers 10+ years ago – some humanists aren’t on board with putting the digital into their work. This is a massive disappointment for the rest of us, because the kind of work they’re doing – work like breaking down the linguistic anachronisms in Downton Abbey (a point of much personal vindication for me) and examining the Ballard diary – is really interesting. And doing it with graphs means that people who don’t have PhDs can understand it too. Perhaps therein lies the fear: that if outsiders can see – and understand – what we’re doing, we’ll all revert to the seventh grade and get made fun of by the popular kids for liking to evaluate complexly and dig a little deeper. So how do we embrace our intelligence, how do we share the fruits of our enthusiasm in the best possible way? I would argue that charts and graphs – visualizations of complex data – are the way forward.

*To be fair, the idea of a seismic shift as representative of a complete overhaul of any working system was no doubt in circulation before Michael Watkins published his article, but it is from him that I got the idea, so I have linked to it in Google Scholar: Michael D. Watkins, “How Managers Become Leaders,” Harvard Business Review 90 (June 2012): 65-72.


The “True” Corpus of American English

In an analysis of the Corpus of Historical American English (COHA) and Google Books, Professor Mark Davies of Brigham Young University compares the effectiveness of the two corpora and their ability to properly read the English language. Both are corpora of American English, but Google Books holds 500 billion words to COHA’s 400 million. Davies argues that although COHA has a significantly smaller database, its trending patterns still mirror those of Google Books. Because of this, Davies argues that COHA is actually the more effective corpus – a smaller database means far less data to sift through, which means quicker searches and faster results.
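To see what “mirroring” means in practice, here is a minimal sketch with invented counts – not real COHA or Google Books figures: normalize raw hits to per-million-word frequencies for each decade, then check how closely the two curves track each other.

import pandas as pd

# Invented per-decade hit counts and corpus sizes for one search term
data = pd.DataFrame({
    "decade":       [1900, 1910, 1920, 1930],
    "coha_hits":    [120, 180, 260, 240],
    "coha_words":   [22e6, 23e6, 26e6, 25e6],
    "google_hits":  [51e3, 80e3, 115e3, 108e3],
    "google_words": [9e9, 11e9, 14e9, 13e9],
})

# Normalize to frequency per million words so corpus size drops out
data["coha_pm"] = data["coha_hits"] / data["coha_words"] * 1e6
data["google_pm"] = data["google_hits"] / data["google_words"] * 1e6

# A correlation near 1.0 means the small corpus tracks the big one
print(data["coha_pm"].corr(data["google_pm"]))

This is Davies’s core claim in miniature: once frequencies are normalized, a well-balanced 400-million-word corpus can reproduce the trend lines of a 500-billion-word one.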

COHA’s “toys” are what make it the more useful database in Davies’s eyes. While Google provides the same basic function – showing the frequency of word usage across the decades – COHA is also able to track concepts, related words, and changes in meaning. Whereas Google tends to have a one-track mind, just like its general search engine, COHA manages to “think” about the relations of the words placed into the search. Because it can look for things such as relation, form, root words, or even cultural shifts, its searches are much more comprehensive.

Design is a huge issue for some researchers, and with this in mind, Google definitely has the upper hand. True, as Davies puts it, COHA is able to portray the same statistics effectively, but I had a genuinely hard time navigating the site. Bar graphs are nice for portraying how many dark-haired, blue-eyed people are in a class, but for a corpus of American English I found them rather ineffective. Google Books has a much more pleasing site: nicer to the eye and easier for following the patterns of the language. Yes, COHA’s tables are nice for their alternative searches, but as I said, traversing the site is actually rather difficult. Google provides a much more streamlined experience, with actions that are easy to follow.

Unfortunately for Google, I think the sheer number of words in its database has made it impossible to create the proper analyses of grammar, word meanings, and word roots that COHA successfully performs. If Google Books created an efficient way to sift through all of that information quickly enough, it would immediately become the preferred site. However, its inability to process such large amounts of data (largely its own fault) has rendered it ineffective as a “true” corpus of the American English language.