Strategies for Crowdsourcing Social Data Analysis

Wesley Willett, University of California, Berkeley
Jeffrey Heer, Stanford University
Maneesh Agrawala, University of California, Berkeley

Social data analysis – in which large numbers of collaborators work together to make sense of data – is increasingly essential in a data-driven world. However, existing social data analysis tools don’t consistently produce good discussion or useful analysis. Our recent work shows how analysts can enlist paid crowd workers to systematically generate explanations for trends and outliers at scale and demonstrates techniques for encouraging high-quality contributions.

Analysis Workflow

In our workflow, analysts automatically or manually select charts showing trends and outliers in a dataset, then use crowd workers to iteratively generate and rate the best candidate explanations for those features.

Making sense of large datasets is fundamentally a human process. While automated data mining tools can find recurring patterns, outliers and other anomalies in data, only people can currently provide the explanations, hypotheses, and insights necessary to really understand the data. As analysts consider larger and larger quantities of data, they need to be able to leverage bigger pools of contributors to help make sense of them. Over the past half-decade, a wide array of social data analysis systems have attempted to address this problem. These systems, including research platforms like Sense.us, Pathfinder, and Many Eyes, and commercial products like the now-defunct Swivel.com and Verifiable.com, were built under the assumption that people would use them to analyze and explore data in parallel.

Experiments have shown that motivated groups of users can use these systems to visualize, share, and discuss data, especially when the right social incentives are present. However, the vast majority of the visualizations in these social data analysis tools yield very little discussion. Even fewer produce high-quality analytical explanations that are clear, plausible, and relevant to a particular analysis question.

To see how these sites are used in practice, we recently surveyed Many Eyes, arguably the most successful and long-lived social data analysis platform. We found that from 2006 to 2010, users published 162,282 datasets via the service, but generated only 77,984 visualizations and left just 15,464 comments. We sampled 100 of the visualizations that contained comments and found that just 11% of the comments included a plausible hypothesis or explanation for the data in the chart. If we extrapolate, that’s less than one good explanation for every fifty visualizations. When comments do appear, they’re often superficial or descriptive rather than explanatory (as in the figure below), and many are actually spam. Higher-quality analyses sometimes take place off-site, but they tend to occur around limited (often single-image) views of the data curated by a single author.

A Many Eyes visualization with two low-quality comments.

Typical comments on social data analysis sites like Many Eyes add little value for analysts seeking to make sense of a dataset.

To an analyst looking for help exploring a big dataset, small numbers of ad-hoc contributions from volunteers offer relatively little value.  Instead, the analyst needs a more systematic approach – one that can predictably engage large numbers of people and encourage them to generate good analytic contributions.

Our work demonstrates one approach for systematically enlisting crowds in social data analysis. Specifically, we enlist paid workers to perform some of the core human tasks in the sensemaking cycle – generating hypotheses and explanations. We introduce an analysis workflow (top) in which an analyst automatically or manually selects charts that show outliers, trends, and other features in the data, then uses crowd workers to generate and rate possible explanations for them. This allows us to quickly generate good candidate explanations like the one below, in which a worker discusses several policy changes that may explain changes in Iran’s oil output. Such analytical explanations are extremely rare in existing social data analysis systems.

A chart from our system along with a high-quality crowdsourced explanation.

Our workflow uses paid workers to generate high-quality hypotheses and candidate explanations for outliers and trends in data. (Emphasis added.)
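
A minimal sketch of this generate-then-rate loop, in Python, might look like the following. This is an illustration rather than the system described in the paper: generate_task and rate_task are hypothetical caller-supplied functions that wrap whatever crowd platform is used (for example, posting a paid microtask to Mechanical Turk and returning a worker’s explanation or numeric rating).

    from statistics import mean

    def crowd_explain(charts, generate_task, rate_task,
                      n_explanations=5, n_ratings=5, top_k=3):
        """For each selected chart, collect candidate explanations from one
        pool of workers, have another pool rate them, and keep the best."""
        results = []
        for chart in charts:  # charts chosen automatically or by the analyst
            # Stage 1: several workers each propose an explanation for the
            # trend or outlier highlighted in this chart
            candidates = [generate_task(chart) for _ in range(n_explanations)]
            # Stage 2: a separate set of workers scores each candidate
            scored = [(mean(rate_task(chart, c) for _ in range(n_ratings)), c)
                      for c in candidates]
            # The analyst reviews only the top-rated explanations
            best = [c for _, c in
                    sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]
            results.append((chart, best))
        return results

In practice each call corresponds to an individual paid microtask; the point of the sketch is simply that generation and rating are separate crowd steps, so the analyst never has to read every raw response.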

However, simply asking workers with varying skills, backgrounds, and motivations to “Explain why a chart is interesting” can yield explanations of highly variable quality, many of them irrelevant, unclear, or speculative. We therefore developed a set of strategies that address these problems and improve the quality of worker-generated explanations. Our seven strategies are to: (S1) use feature-oriented prompts, (S2) provide good examples, (S3) include reference gathering subtasks, (S4) include chart reading subtasks, (S5) include annotation subtasks, (S6) use pre-annotated charts, and (S7) elicit explanations iteratively. We applied these strategies within our workflow to generate 910 explanations from 16 different datasets and found that the majority of those responses (63%) provided high-quality explanations. Moreover, we found that crowd workers can reliably rate these explanations, allowing the analyst to consider only the best responses.
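
As a concrete but purely illustrative example of how several of these strategies shape an individual task, the sketch below assembles one explanation microtask applying S1 through S6; the field names and questions are hypothetical stand-ins, not our actual task templates. S7 corresponds to running such tasks in rounds and feeding the best-rated responses from one round back in as the examples (S2) for the next.

    def build_explanation_task(chart_url, feature_description):
        """Assemble a single explanation microtask (hypothetical format)."""
        return {
            # S1: feature-oriented prompt naming the specific trend or outlier
            "prompt": "Explain why " + feature_description + ".",
            # S2: provide a good example of a clear, plausible explanation
            "example": "Good answer: 'The drop here coincides with a policy "
                       "change that took effect that year (see source).'",
            # S6: show the chart with the feature of interest pre-annotated
            "chart": chart_url,
            "subtasks": [
                # S4: chart reading question to check basic comprehension
                {"type": "chart_reading",
                 "question": "What is the approximate value at the highlighted point?"},
                # S3: reference gathering - ask for a corroborating source
                {"type": "reference_gathering",
                 "question": "Paste a link to a source that supports your explanation."},
                # S5: annotation - have the worker mark the region they explain
                {"type": "annotation",
                 "question": "Mark the part of the chart your explanation refers to."},
            ],
        }

    # Example use, for a chart like the oil-output one above (URL is a placeholder):
    task = build_explanation_task(
        "http://example.com/iran-oil-output.png",
        "Iran's oil output changes sharply at the highlighted point")

The subtasks double as quality checks and as nudges toward explanations that are grounded in the chart itself and in outside evidence.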

We’re excited to have shown how paid workers can be used to systematically generate high-quality explanations. Our findings suggest that, going forward, we can develop tools that allow analysts to interactively marshal the analytic power of hundreds of workers in a predictable way – leveraging human judgment and understanding to explore, question, and make sense of data at a scale not previously possible.

For more, see our full paper (to appear at CHI 2012) and video demo.