Talking in Circles: Selective Sharing in Google+

Sanjay Kairam, Stanford University
Michael J. Brzozowski, Google
David Huffaker, Google
Ed H. Chi, Google

As people move an ever-increasing proportion of their social interactions online, the consequences of ‘over-sharing’ in a new medium have been brought sharply into focus. If users are aware of the consequences of over-sharing, then why do they continue to share so much? This question constitutes what Barnes has called the ‘privacy paradox’. In this paper, we study Google+ to provide a first empirical study of behavior in an online social network (OSN) designed to facilitate selective sharing.

Strategies for Crowdsourcing Social Data Analysis

Wesley Willett, University of California, Berkeley
Jeffrey Heer, Stanford University
Maneesh Agrawala, University of California, Berkeley

Social data analysis – in which large numbers of collaborators work together to make sense of data – is increasingly essential in a data-driven world. However, existing social data analysis tools don’t consistently produce good discussion or useful analysis. Our recent work shows how analysts can enlist paid crowd workers to systematically generate explanations for trends and outliers at scale and demonstrates techniques for encouraging high-quality contributions.

Analysis Workflow

In our workflow, analysts automatically or manually select charts showing trends and outliers in a dataset, then enlist crowd workers to iteratively generate candidate explanations for those features and rate them, surfacing the best ones.
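The generate-and-rate loop described above can be sketched as follows. This is a minimal illustration, not the authors' actual system: the function names, round counts, and rating scheme are assumptions, and the crowd calls are stand-ins for posting tasks to a real crowdsourcing platform.

```python
import random

def crowd_generate(chart, n_workers=5):
    """Hypothetical stand-in for posting a chart to a crowd platform and
    collecting one free-text candidate explanation per worker."""
    return [f"worker {i}'s explanation of {chart}" for i in range(n_workers)]

def crowd_rate(explanations, n_raters=3):
    """Hypothetical stand-in for asking raters to score each candidate 1-5;
    returns the mean rating per explanation."""
    return {e: sum(random.randint(1, 5) for _ in range(n_raters)) / n_raters
            for e in explanations}

def analyze(charts, rounds=2, keep=2):
    """For each selected chart, iteratively generate and rate candidate
    explanations, keeping only the top-rated candidates after each round."""
    results = {}
    for chart in charts:
        candidates = crowd_generate(chart)
        for _ in range(rounds):
            ratings = crowd_rate(candidates)
            # prune to the best-rated candidates, then solicit fresh ones
            candidates = sorted(ratings, key=ratings.get, reverse=True)[:keep]
            candidates += crowd_generate(chart, n_workers=3)
        final = crowd_rate(candidates)
        results[chart] = sorted(final, key=final.get, reverse=True)[:keep]
    return results
```

The key design point is the iteration: rather than accepting the first batch of explanations, later workers act as raters who filter out low-quality contributions before the analyst ever sees them.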

Is This What You Meant? Promoting Listening on the Web with Reflect

Travis Kriplean, Computer Science & Engineering, U. Washington
Michael Toomim, Computer Science & Engineering, U. Washington
Jonathan Morgan, Human Centered Design & Engineering, U. Washington
Alan Borning, Computer Science & Engineering, U. Washington
Lance Bennett, Political Science, Communication, U. Washington
Andrew Ko, The Information School, U. Washington

Communication is about listening as much as speaking. Unfortunately, web interfaces have thus far paid scant attention to supporting listening, creating a feedback gap and likely contributing to the scorched-earth nature of much web dialogue. We have designed Reflect, a simple interface for encouraging listening, and deployed it on Slashdot. As a by-product of their acts of listening, commenters open up new possibilities for crowd-sourced discussion summarization.

Understanding Experts’ and Novices’ Expertise Judgment of Twitter Users

Q. Vera Liao, University of Illinois at Urbana-Champaign
Claudia Wagner, DIGITAL-Institute for Information and Communication Technologies
Peter Pirolli, Palo Alto Research Center
Wai-Tat Fu, University of Illinois at Urbana-Champaign

Twitter users often choose to follow “insiders” in fields they want to learn about, even though they do not necessarily know these people in person. External recommendation systems that suggest expert accounts in a particular field are useful tools for this purpose. However, most Twitter recommendation systems currently in use judge expertise by analyzing the topical relevance of users’ published content (i.e., their tweets), and we questioned the accuracy and overall user experience of such systems.
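Topical-relevance ranking of the kind questioned here can be illustrated with a simple sketch: score each candidate account by the cosine similarity between a topic query and the term counts of the account's tweets. The scoring scheme, function names, and sample data below are illustrative assumptions, not the method evaluated in the paper.

```python
import math
from collections import Counter

def topical_relevance(query_terms, tweets):
    """Cosine similarity between a topic query and the bag-of-words
    term counts of an account's tweets (a toy relevance measure)."""
    counts = Counter(" ".join(tweets).lower().split())
    q = Counter(t.lower() for t in query_terms)
    dot = sum(counts[t] * q[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in counts.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def rank_experts(query_terms, accounts):
    """Rank candidate accounts (name -> list of tweets) by the
    topical relevance of their published content."""
    return sorted(accounts,
                  key=lambda a: topical_relevance(query_terms, accounts[a]),
                  reverse=True)
```

A system like this surfaces accounts that *talk about* a topic, which is precisely why its accuracy as a proxy for genuine expertise is worth questioning.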

Social Desirability Bias and Self-Reports of Motivation: A Study of Amazon Mechanical Turk in the US and India

Judd Antin, Yahoo! Research
Aaron Shaw, UC Berkeley & Berkman Center

In this study we show evidence that self-reports of motivation for doing work on Amazon Mechanical Turk are subject to social desirability bias. We employ a quasi-experimental survey method which mitigates that bias, and reveals different patterns of motivation and social desirability among MTurk workers in the US and in India.

Asking the Right Person: Supporting Expertise Selection in the Enterprise

Svetlana Yarosh, Georgia Institute of Technology
Tara Matthews, IBM Research — Almaden
Michelle Zhou, IBM Research — Almaden

Expert-finding systems help enterprise workers find a person who may be able to help with a project or question, but it is challenging to select the best expert for a specific task. This leads the user to waste time by asking an expert who may not be able to answer the question or who might not take the time to respond (we call this a “strikeout”). Through a lab study of realistic expert-finding tasks with 35 participants, we found that the current leading expert-finding approach leads to a 28% strikeout rate, but our improved interface reduces this rate to 9%.

#EpicPlay: Crowd-sourcing Sports Video Highlights

Anthony Tang, University of Calgary
Sebastian Boring, University of Calgary

While watching live sports events, many fans use social media (e.g., Twitter) to report on their thoughts about the event as it unfolds. We investigated using this information source to semantically annotate live broadcast sports games (here: American Football) to select video highlights that are appropriate for an individual fan rather than for a general audience – showing plays that are (1) interesting, (2) controversial, or (3) skillful.
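One simple way to mine this information source is to flag moments when the volume of fan tweets spikes well above the game's baseline rate, since a burst of chatter is a plausible proxy for an exciting play. The sketch below is an illustrative assumption, not #EpicPlay's actual pipeline; the window size and threshold factor are made-up parameters, and it ignores the tweet text that would be needed to classify plays as interesting, controversial, or skillful.

```python
from collections import Counter

def detect_highlights(tweet_times, window=30, factor=3.0):
    """Given tweet timestamps (seconds into the broadcast), return the
    start times of fixed-width windows whose tweet volume is at least
    `factor` times the mean per-window volume."""
    buckets = Counter(int(t // window) for t in tweet_times)
    if not buckets:
        return []
    mean = sum(buckets.values()) / len(buckets)
    # windows with an unusually high tweet rate are candidate highlights
    return sorted(b * window for b, n in buckets.items() if n >= factor * mean)
```

The flagged windows would then index back into the broadcast footage, and the tweets inside each window could be analyzed further to decide which kind of play occurred and which fans would care about it.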

Distributed Sensemaking: Improving Sensemaking by Leveraging the Efforts of Previous Users

Kristie Fisher, University of Washington & Microsoft Research
Scott Counts, Microsoft Research
Aniket Kittur, Carnegie Mellon University

Making sense of digital information one gathers from the web can be a daunting task. Numerous researchers have explored the process of sensemaking for individuals as well as for groups of people working collaboratively towards a specific goal. In the work we present in our paper, we explore the viability of a distributed sensemaking system to assist users in sensemaking. That is, we tested whether a user’s sensemaking efforts can improve in quality and efficiency when she is able to leverage the sensemaking efforts of others with whom she is not explicitly collaborating and whom she does not even know.

Deploying Crowdsourced Monolingual Translation in the Wild

Chang Hu, University of Maryland
Philip Resnik, University of Maryland
Yakov Kronrod, University of Maryland
Ben Bederson, University of Maryland

In our previous post, we introduced MonoTrans, a system that does crowdsourced translation without bilingual people. In our research papers we presented some promising results from using it to translate children’s books and Haitian earthquake SMS messages. An interesting question remains: beyond these experiments, how do systems like this work in the real world?
