3 May, 09:00 - 17:00

Workshop on Citizen Science and Crowdsourcing

Crowdsourcing, a term coined by Jeff Howe, is the practice of taking a large task, breaking it down into smaller pieces - microtasks - and sending these out to a large group of people, who each perform a microtask in parallel, usually for a monetary reward. Microtask crowdsourcing covers a distinctive set of scenarios: tasks rely primarily on basic human abilities, including visual and audio cognition and natural language understanding and communication (sometimes in different languages), and less on acquired skills (such as subject-matter knowledge).

As such, a great share of the tasks addressed via microtask platforms like MTurk or CrowdFlower could be referred to as 'routine' tasks: recognizing objects in images, transcribing audio and video material, and editing text. To be more efficient than traditional outsourcing (or even in-house resources), the tasks need to be highly parallelized. This means that the actual work is executed by a large number of contributors in a decentralized fashion. This not only leads to significant improvements in time of delivery, but also offers a means to cross-check the accuracy of the answers (as each task is typically assigned to more than one person) and to reward the workers according to their performance and productivity.
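The redundancy-based cross-checking described above is often implemented as majority voting: each task is assigned to several workers, and the most frequent answer is taken as the result, with the level of agreement serving as a confidence score. The sketch below is a minimal illustration of this idea (not the aggregation method of any particular platform); the task IDs and labels are hypothetical.

```python
from collections import Counter

def aggregate(judgments):
    """Majority-vote aggregation over redundant worker answers.

    judgments maps a task ID to the list of answers collected from
    the workers assigned to that task. Returns, per task, the winning
    answer and the agreement ratio (fraction of workers who gave it).
    """
    results = {}
    for task_id, answers in judgments.items():
        # Count how often each answer was given and pick the most frequent.
        label, votes = Counter(answers).most_common(1)[0]
        # The agreement ratio can be used as a confidence score and to
        # reward workers whose answers match the consensus.
        results[task_id] = (label, votes / len(answers))
    return results

# Example: three workers label each of two (hypothetical) images.
judgments = {
    "img-01": ["cat", "cat", "dog"],
    "img-02": ["dog", "dog", "dog"],
}
print(aggregate(judgments))
```

In practice, platforms refine this scheme, for example by weighting votes by each worker's accuracy on test questions with known answers.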

In this workshop, you will

  • Learn about the fundamentals of citizen science and crowdsourcing
  • Perform hands-on experiments on the microtask crowdsourcing platform Figure Eight
  • BE the crowd! Take part in a citizen science campaign
  • Learn about successful applications of crowdsourcing in different domains

This workshop is funded by Universiteitsfonds Limburg: https://ufl-swol.nl/en/request/scientific-projects/.


9:00-9:15 Welcome with coffee & tea; short introduction of the workshop and the organizers
9:15-10:00 Introduction to Human Computation: Citizen Science and Crowdsourcing
10:00-11:00 Fundamentals of microtask crowdsourcing 
11:00-11:15 Coffee & Tea break
11:15-12:30 Hands-on I: designing a microtask on Figure Eight
12:30-13:30 Lunch
13:30-14:30 Hands-on II: MIA: Crowdsourcing medical image annotation task
14:30-15:30 Hands-on III: EMO Annotation
15:30-16:00 Coffee & Tea break
16:00-16:45 Applications of crowdsourcing, summary and conclusion
16:45-17:00 Q&A, wrap-up and feedback



Amrapali Zaveri is a postdoctoral researcher at the Institute of Data Science, Maastricht University. Her research interests include data integration, quality, and analysis. She evaluated the feasibility of using crowdsourcing as a means of quality assessment for DBpedia (the structured version of Wikipedia). She is now designing microtasks for engaging workers in the quality assessment of biomedical metadata. The challenge is to design tasks such that non-experts, those without a biomedical background, can contribute to the quality assessment.

Deniz Iren is an Assistant Professor at the Center for Actionable Research of the Open University (CAROU). His work at CAROU covers creating business solutions using machine learning and artificial intelligence. He uses crowdsourcing as a means to acquire high-quality data for training machine learning models and to develop human-in-the-loop systems.