The web is inherently made up of networks and interactions among its users. But what is the nature of these interactions: participatory? collaborative? exploitative? These questions play out when cultural heritage institutions take to the web and attempt to engage the vast public audience that is now accessible to them. Crowdsourcing is a means of allowing everyday citizens to participate and become more involved with historic materials than ever before. At the same time, these volunteer projects can overcome institutional constraints of money and time to create products not otherwise possible. What most interested me in the readings was the motivation of those involved in these projects. Why do citizens choose to participate? Why are institutions putting these projects out there? How do they play on the motivations of their users? These questions link back to the overarching ideas about the nature of interactions on the web.
Why Wasn’t I Consulted?
Paul Ford describes the fundamental nature of the web with the phrase “Why wasn’t I consulted,” or WWIC for short. Ford claims that the web runs on feedback: on giving users a voice about content. By giving people that voice, even through the basest forms of expression (likes, favorites, +1’s, or “the digital equivalent of a grunt”), users are satisfied that they were consulted and that they can register their approval or disapproval.
User experience, in Ford’s mind, is centered on users’ emotional need to be consulted. These expressions of approval, in turn, are what feed other users to create content, as creators receive a positive emotional response from those who consume their work. Organizations create spaces that shrink the vast web down into communities where the WWIC problem can be solved. Essentially, these structures create a glass case of emotion.
Libraries, archives, and museums have to deal with users’ emotions when creating their crowdsourcing ventures. How do we create places where users will feel consulted and want to participate? Like Ford, both Causer & Wallace, describing the Transcribe Bentham project of University College London, and Frankle, describing the Children of Lodz Ghetto project at the United States Holocaust Memorial Museum, emphasize that understanding users and volunteers, as well as finding the appropriate medium, is important in these undertakings.
Causer & Wallace identify a much more detailed set of motivations among their user groups than Ford’s WWIC idea. Many of their participants claimed an interest in some aspect of the project, such as history, philosophy, Bentham, or crowdsourcing in general. After these categories, the next biggest reason for joining the project was a desire to be part of something collaborative. The creators of Transcribe Bentham failed to create an atmosphere where users felt comfortable collaborating, which may be why the project decreased in popularity over time. The Children of Lodz Ghetto project, on the other hand, is much more collaborative, with administrators guiding researchers through each step of the process. Eventually they hope to have advanced users take over the role of teaching newcomers. The Holocaust Museum’s project is a much more sustainable model and could lead to lasting success.
Crowdsourcing (For Members Only)
While collaboration and an interesting topic are key factors in motivating participation, how do online history sites get the public’s attention in the first place? The push for openness in both the internet and cultural institutions is something I greatly support, but I think motivating the populace to get involved in these projects requires a return to exclusivity. There is still a prevailing notion that archives and other cultural organizations are closed spaces that only certain people can access. In many European institutions this is still the case. Why don’t we use these popular notions of exclusivity to our own benefit?
Hear me out. What these articles lacked was the idea that many people desire what they cannot get, or what only a few can have. I’m not advocating putting collections behind a paywall or keeping collections from being freely available online. Instead, I think participation in crowdsourcing projects should be competitive or exclusive in order to generate the initial excitement needed to gain a following and spur desire for inclusion.
Social media platforms such as early Facebook and, more recently, Ello, or new devices such as Google Glass, have made membership or ownership limited, creating enormous desire for each. In these examples, the majority of the populace is asking “why wasn’t I consulted?” and therefore wants to be included. Thus, limiting the initial rounds of participation to a first-come, first-served, invite-only platform would spark desire for the prestige of being among the few with access to the project.
In his article, Edson wrote about the vast stretches of the internet that cultural institutions do not engage, what he called “dark matter.” While there are huge numbers of people out there who are “starving for authenticity, ideas, and meaning,” I think the first step should be creating a desire to participate and then growing the project. Without something to catch the public’s attention, create a community, and build an emotional desire to participate, another crowdsourcing website would simply be white noise to the large number of internet users in the world. Meanwhile, users who visit the website looking for a way into the project but are turned away could discover the free and open collections that are already there. After this first limited period, once the attention is there, I think scaling up would be easier. Of course, these ideas will only work if the institution has created a place that understands the emotional needs of its users and provides a collaborative and social environment where users are comfortable participating.
9 Replies to “A Glass Case of Emotion: User Motivation in Crowdsourcing”
I think your point about creating exclusivity in crowdsourcing is a good one, but instead of having participation exclusive from the start, there needs to be some kind of hook to get people to want to do it in the first place, and I don’t think that being able to look at collections will cut it. I think of it like the strategy of free samples in supermarkets: give them a taste for free, and then maybe they’ll commit to buying. Crowdsourcing sites could have a kind of leveling design, like a video game, where the first or second contribution is free to all, and then further contributions require vetting or an invite of some kind. Or it could be that the easiest actions are free to all, but if you want to contribute something more sophisticated, it requires vetting or invitation. This could help site administrators control the quality of the data, reward users who contribute a lot, and create a desire among outsiders to migrate to the elite circle.
I noticed they tried something like this in Transcribe Bentham by showing the top transcription stats publicly, and some users commented that it successfully kept them competing. But I can imagine that it also discouraged others, and an approach with more gradation and a stronger sense of community could definitely work better.
Perhaps it is because my first two experiences with crowdsourcing were, respectively, a plot point of The Last Starfighter and the DARPA red balloon challenge that I think it’s wise to take any crowd-sourced project with a grain of salt.
In that regard, how can we regulate who is sourcing the crowd? I’m basically behind DARPA using games to test defense theories because it’s harmless to the participants and saves lives later; but the ethical dilemma posed by Starfighter is just a small hop over a very fine line.
For those not familiar with this cinematic gem: a kid from the boonies happens to be a whiz at an arcade game called Starfighter. In fact, he’s so good that he gets kidnapped, IRL, by the aliens whose plight is the plot of the game, to fight their evil alien adversaries. The game was designed and distributed specifically to find qualified pilots on Earth. Spoiler: the kid defeats the baddies. Great, but he was still press-ganged into service via this game because the protagonists were crowdsourcing talent.
Recognizing that Starfighter was not a documentary, there is nonetheless a valid point here. The kid in the film had no idea who he was actually supplying data to, for what purpose, or what the consequences would be; isn’t that wrong? So much of the internet works on trust, especially crowd-sourced projects; where do we, as the professionals dedicated to this field, draw the line? Outside of professional responsibilities, is it even our right to interfere?
Well, this isn’t about history, but this story illustrates part of the problem with crowdsourcing: in this case, vandalism on the musician Beck’s Wikipedia page. Beck, as you may or may not know, won the Album of the Year Grammy the other night, which many people, including Kanye West, expected Beyonce to win. Yet by the same token, the vandalism was spotted quickly and corrected. Wikipedia has now set both Beck’s and Beyonce’s pages to “semi-protected” status, which means there are restrictions on who can edit them. Interestingly, though, Beck’s page says it will have that status until Feb. 16th due to vandalism, while Beyonce’s says it is semi-protected to “promote compliance with the policy on biographies of living people,” which makes me think (without having time to investigate right this moment) that her page has had ongoing editing issues.
I am intrigued by the suggestion that we somehow limit admittance into crowdsourcing projects in order to lend an air of exclusivity and therefore draw more interest. I can’t tell you how many times I’ve thought, “if only there was a way for project managers of crowdsourcing projects to somehow vet their volunteers…” However, after reading “Building a Volunteer Community: Results and Findings from Transcribe Bentham,” I worry that making these projects more exclusive will keep volunteers from participating. According to Causer and Wallace, of Transcribe Bentham’s 1,207 registered volunteers, 7 “Super Transcribers” completed 70% of all manuscripts transcribed for the project, and only one of those was with the project from its beginning. Doing work that will be viewed by others can be very intimidating. You don’t even want to know how long it has taken me to compose my thoughts into this blog comment. What if making our public projects more exclusive prevents potentially long-term volunteers from registering?
@Jaime, I agree that simply giving people the ability to look at collections will not spur many of them to participate in these projects. However, I do believe that making the projects exclusive at first would motivate some who like the prestige (something like beta testers). Following this idea, I like your suggestion of a video-game-style interface, as many of these platforms have begun to “gamify” their tasks in order to make them more fun for the user (see NYPL’s Building Inspector or Metadata Games). Having top users progress to the next level of difficulty (leveling up) would certainly mirror this game approach and motivate usage.
@Marian I definitely agree that exclusivity would drive some users away. In fact, many in the Bentham project seemed turned off by competition and public stats. That is why I think the kind of exclusivity I propose in the post should be limited to the earlier stages. Perhaps an additional, friendlier environment would appeal to the users you mention who may have misgivings about participating publicly.
It seems like what we are all talking about is the user experience. The users who desire exclusivity and those turned off by the issues you both raised could, I think, coexist. Crowdsourcing projects have the difficult task, and the opportunity, of providing multiple concurrent spaces for the diverse range of motivations of their users.
A couple of things. First, in the Transcribe Bentham article, the authors point out that part of their problem in recruiting a motivated set of volunteers with sustained participation was the perceived difficulty of the task. Volunteers were not only asked to decipher and transcribe Bentham’s terrible handwriting; they were also supposed to encode the document for machine readability. While the handwriting was challenge enough on its own, many potential volunteers who had never done anything similar likely looked at the encoding part and said “no way,” particularly when there’s no immediate feedback telling you whether you’re doing it right. The authors recommended keeping crowdsourced tasks as simple as possible, not only to encourage people to join the project and stick with it, but also to avoid giving people the perception that their limited time is being wasted.
I think that the New York Public Library’s approach to crowdsourcing What’s on the Menu?, by keeping tasks simple and separate, provides a good example of how recruiting volunteer labor for digital history projects can work (see my discussion of WOTM? here if you’re so inclined). They allow users to complete three simple tasks: transcribe menu items, review transcribed menus for accuracy, and geotag completed menus. Volunteers can do one, two, or all of these things depending on the availability of work, their own interests, and how much time they have (not that any of these things really takes much time). I think the potential for variety in tasking keeps it more interesting. The Bentham transcription might benefit from separating the transcription part from the encoding part, so that when the document transcription is complete, it gets sent to a queue for encoding, and people who are more comfortable with that aspect of the work can tackle it.
I really like the idea of crowdsourcing, but like so many other aspects of engaging visitors online, it can be a tough nut to crack. Ultimately, I think it has to be a case-by-case decision as to whether it’s a good idea at all; if an organization does decide to crowdsource projects, it has to tailor that participation to its own needs and style of reaching out to the public. For example, I really liked the example that Stephanie blogged about this week. The “What’s on the Menu?” project looks very well thought out, as it fulfills the NY Public Library’s need to digitize its menu collection, and it fulfills the need the public feels to be involved in the library’s collections.
I think there’s something very important about institutions whose very point is to engage the public, like museums and public libraries, extending that engagement to their Internet presence. As I’ve said before, I think of the web as a public space where people gather, a digital extension of civilization. To that effect, Paul Ford is right when he says the web is a customer service platform (at least in part), so institutional web presence needs to keep the visitor in mind. I love the idea of open crowdsourcing for certain projects, because it allows collections to do something they can’t do in real life. We can’t let just anybody traipse through a collections room in a museum, library, or archive, but we can let them digitally “touch” objects and feel like they’re contributing to the institution’s mission of preserving history in the public trust.
However, I absolutely get Joe’s point and agree that people often don’t value something they can do for free at any time. In fact, many museums add a nominal fee to their programs ($5 or so) in order to add value to the event, because they’ve found that people are more likely to sign up and show up if they pay even a little bit to be there. I’m all for finding a way to increase the draw or prestige of digital crowdsourcing projects to attract users that will find them fulfilling, and who truly want to connect to the institution.
Another issue I wanted to raise, which I don’t recall seeing discussed, is whether anybody has actually asked potential volunteers what kinds of projects or tasks they might be interested in working on before coming up with a project and hoping that people decide to help out. Easier said than done, I know. But in the context of a cultural heritage institution such as a museum, you could, for example, survey people who already have some engagement or personal investment with the organization, such as paying members or registered users on its website or blog, and ask them about ways in which they might be willing to deepen their involvement. Maybe provide some examples of the work that needs to be done, in terms of specific tasks, types of records or artifacts, or projects already in the works. Then use that information to design projects that harness the power of the already-interested volunteer crowd to accomplish the work and the mission of the organization. It just seems that asking users “How can we help you to help us?” sooner in the process of project creation could prove beneficial to all parties.
@sharry, I really agree with/like your idea about asking users first about what tasks they’d be interested in before beginning crowdsourcing projects. Obviously in any institution, it can be extremely challenging to accurately collect user surveys–and even harder at times to put those survey answers into useful practice. For crowdsourcing, and interactive digital history tools, however, I think it could be particularly beneficial, not only for figuring out what kind of technical level users are comfortable with, but also what materials/topics they’re most interested in getting their hands on. Those 7 “Super Transcribers” who completed 70% of all manuscripts transcribed for the Transcribe Bentham project obviously had some vested interest in that topic or those manuscripts, and individuals like that should be consulted to see how they can be further engaged—especially if we know users feel this need to be consulted and approved of.
For smaller institutions, I think it might also be useful to incorporate some sort of physical training/interaction regarding these projects to go along with the digital work. For instance, if small historical societies often have genealogical users, could projects be designed that consult these family historians on what kinds of material they would be most interested in working with, and then offer hands-on training or sessions to learn more about digital projects and/or crowdsourcing? I know I’m getting off track a little here, and obviously one of the major points of crowdsourcing is to save time and accomplish more tasks, but if garnering more users (and working to make them feel more valued) is just as important, should we not also work to engage users physically to encourage more digital participation? Does that make sense?