Executive Summary of the Crowd-Sourcing Scoping Study
This project sought to establish a credible definition of crowd-sourcing in the humanities and to assess the current state of the art. The questions included what the humanities have learned from other research domains, where crowd-sourcing is being exploited, what the results are, why academics are motivated to undertake such activities, and why members of the public are willing to give up their time, effort and knowledge for free. We conducted a survey of contributors’ motivations, supplemented by a set of follow-up interviews, which received 59 detailed responses with qualitative and quantitative information about why people contribute to humanities projects (see Appendix A). The project identified and reviewed 54 academic publications of direct relevance to the field, and a further 51 individual projects, activities and websites which document or present some application of humanities scholarship making use of crowd-sourcing (see Appendix B). Two workshops were held: one for academics making use of crowd-sourcing, and one for contributors to those projects.
Academics in the humanities undertake crowd-sourcing projects for a variety of reasons: to digitize content, to create or process content, to provide editorial or processing interventions, and so on. Judging the current value of crowd-sourcing in the humanities is therefore extremely difficult, even before issues of trust, reliability and academic rigour are accounted for. However, one common factor is that humanities crowd-sourcing succeeds where vibrant and interacting communities of contributors are created. Whilst the motivations of crowd-sourcing contributors are every bit as diverse as those of academics, passion for the subject (a characteristic shared with academics) is the dominant factor which draws them together into communities. These communities develop and perpetuate internal dynamics, self-correct, provide mutual support, and form their own relationships with the academic world. Despite the great diversity of humanities crowd-sourcing, it is possible to observe patterns in which such communities thrive: these patterns depend on the correct combination of asset type (the content or data forming the subject of the activity), process type (what is done with that content), task type (how it is done), and output type (the thing produced). In this report, we propose a high-level typology which describes different instances of each of these, and identifies the combinations that are, on present evidence, most successful in achieving projects’ aims.
The final report is available here. Please send us any comments.
The report of the May workshop is now available. Please click here to download it as a PDF. Comments are welcome, and can be made to Stuart Dunn.
The project’s first workshop was held on May 28th and 29th at KCL.
List of Participants
The following key questions were identified:
- How do we address the supposed dichotomy of professionalism “versus” amateurism? How should the two spheres interact?
- How do we cross the ‘digital divide’? We must avoid assuming that everyone who may wish to contribute to a crowd-sourcing project has unlimited internet access.
- What types of question are particularly amenable to crowd-sourcing approaches?
- Does crowd-sourcing best address closed or open-ended questions?
- What motivates people to contribute?
- How do motivations vary with different types of activity?
- How do we (as researchers) capture and document motivations? This has been tried before in several projects, but approaches are tailored to particular types of contributor community.
- How can funders collaborate with researchers in getting the most out of academic crowd-sourcing?
- Issues of data quality are extremely important – how can we ensure quality, and what does quality mean?
- How can crowd-sourcing projects, and the data they create, be sustained? How do we preserve the effort people have put in?
Position papers and slides of the presenters:
Nick Stanhope, HistoryPin: position paper; presentation
Kimberly Kowal, British Library
British Library Georeferencer: Crowdsourcing Map Data
position paper; presentation
Tim Causer, UCL
Transcribe Bentham: A Participatory Initiative
position paper; presentation
Philip Brohan, Met Office
New Uses for Old Weather
Stella Wisdom and Andrew Gray, British Library
Crowd-Sourcing Activities at the British Library
Erin Sullivan, Shakespeare Institute
Shakespeare’s Global Communities
Anthony Masinton, Archaeology Data Service
Human Guinea Pigs and Casual Collaborators: Crowd Sourcing Data for Archaeology
position paper; presentation
We are gathering links relevant to humanities crowd-sourcing on a Delicious stack, and would be happy to hear of any URLs not listed that should be added.
Also, along with colleagues from the British Library, we have recently started a crowd-sourcing discussion group; we will see where it goes from there.
At our first networking meeting, to be held in May, the following questions concerning humanities projects involved with crowd-sourcing will be addressed:
* What are the objectives and/or research area of your project(s)?
* How many contributors are engaged in your project? Has this number changed over the course of the project?
* What does ‘engagement’ mean in that context?
* Do you offer incentives (e.g. a ranking system, prestige, recognition, material rewards, etc.) to your contributors? If not, what interests motivate them?
* What value has crowd-sourcing brought to your content/project? Is this value measurable?
* What do you consider to be the main research outputs that crowd-sourcing has enabled (or will enable)?