Shepherding the Crowd Yields Better Work

Steven P. Dow, HCI Institute at Carnegie Mellon University
Bjoern Hartmann, CS Division at UC Berkeley
Anand Kulkarni, Industrial Engineering and Operations Research at UC Berkeley
Scott R. Klemmer, Department of CS at Stanford University

One of the biggest challenges of micro-task crowdsourcing is quality control. Many recent algorithmic approaches address this problem by identifying and filtering out low-quality work (and workers). We decided to take a human-centered approach: teach workers how to do better work.

[Figure: Study conditions for the experiment on crowd feedback]

Apprenticeship, critique, and other training methods are common in traditional organizations, but few of today’s crowdsourcing platforms offer such opportunities. Our research for CSCW 2012 examines how offering concrete feedback affects crowd workers.

We ran a controlled experiment in which workers from Mechanical Turk wrote multiple reviews for products they own (such as their smartphone). There were three experimental conditions. Workers in the “No Assessment” condition received no feedback (in line with current practice on Mechanical Turk). Workers in the “External Assessment” condition received feedback on each review they wrote; to offer this feedback, we built Shepherd, a system that notifies experts via a visual dashboard and routes the experts’ assessments back to the workers. Workers in the “Self-Assessment” condition rated their own work using the same feedback form. We added this third condition to see whether crowd work improves without the significant overhead of employing additional experts.
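To make the routing idea concrete, here is a minimal sketch, in Python, of how a system in the spirit of Shepherd might assign workers to conditions and route assessments back to them. The condition names come from the study; the FeedbackRouter class, its queue, and the function names are hypothetical illustrations, not Shepherd's actual implementation.

```python
"""Hypothetical sketch of routing crowd work to expert or self assessment."""
import random
from queue import Queue

# Condition names from the study; all other identifiers are illustrative.
CONDITIONS = ("no_assessment", "external_assessment", "self_assessment")


def assign_condition(worker_id: str) -> str:
    """Assign a worker to one condition, deterministically per worker."""
    return random.Random(worker_id).choice(CONDITIONS)


class FeedbackRouter:
    def __init__(self):
        self.expert_queue = Queue()   # submissions awaiting expert review
        self.feedback_inbox = {}      # worker_id -> list of assessments

    def submit(self, worker_id: str, condition: str, review_text: str):
        """Route a finished review according to the worker's condition."""
        if condition == "external_assessment":
            # An expert sees this on a dashboard and fills in the feedback form.
            self.expert_queue.put((worker_id, review_text))
        elif condition == "self_assessment":
            # The worker rates their own work with the same form; nothing is
            # sent to an expert.
            self.feedback_inbox.setdefault(worker_id, []).append(
                {"type": "self", "review": review_text})
        # "no_assessment": nothing is routed back to the worker.

    def deliver_expert_assessment(self, worker_id: str, assessment: str):
        """Called when an expert completes the feedback form for a review."""
        self.feedback_inbox.setdefault(worker_id, []).append(
            {"type": "external", "assessment": assessment})


if __name__ == "__main__":
    router = FeedbackRouter()
    condition = assign_condition("worker-42")
    router.submit("worker-42", condition, "My phone's battery lasts all day...")
    print(condition, router.feedback_inbox)
```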

Our weeklong experiment on MTurk generated about 540 useful product reviews for about $200. We found that both external assessment and self-assessment led to higher-rated product reviews than no feedback at all. Overall ratings were similar for participants who self-assessed and those who received external assessment. Both types of assessment also led to a greater increase in expert ratings over the course of several pieces of work, indicating that workers learned. Moreover, many workers took time to edit their prior work even though they were offered no additional payment.

In summary, we found that shepherding the crowd through feedback and self-assessment leads to better overall work, improved skills, and more perseverance on tasks. Read more about the details of this work in our CSCW paper.