As people move an ever-increasing proportion of their social interactions online, the consequences of ‘over-sharing’ in a new medium have come sharply into focus. If users are aware of the consequences of over-sharing, why do they continue to share so much? This question constitutes what Barnes has called the ‘privacy paradox’. In this paper, we study Google+ to provide a first empirical study of behavior in an online social network (OSN) designed to facilitate selective sharing.
Social data analysis – in which large numbers of collaborators work together to make sense of data – is increasingly essential in a data-driven world. However, existing social data analysis tools don’t consistently produce good discussion or useful analysis. Our recent work shows how analysts can enlist paid crowd workers to systematically generate explanations for trends and outliers at scale and demonstrates techniques for encouraging high-quality contributions.
Travis Kriplean, Computer Science & Engineering, U. Washington
Michael Toomim, Computer Science & Engineering, U. Washington
Jonathan Morgan, Human Centered Design & Engineering, U. Washington
Alan Borning, Computer Science & Engineering, U. Washington
Lance Bennett, Political Science, Communication, U. Washington
Andrew Ko, The Information School, U. Washington
Communication is about listening as much as speaking. Unfortunately, our web interfaces have thus far paid scant attention to supporting listening, creating a feedback gap and likely contributing to the scorched-earth nature of our web dialogue. We have designed Reflect, a simple interface for encouraging listening, and deployed it on Slashdot. As a by-product of their acts of listening, commenters open up new possibilities for crowd-sourced discussion summarization.
Twitter users choose to follow “insiders” in fields they want to know more about, even though they do not necessarily know those people in person. An external recommendation system that suggests expert accounts in a particular field for users to follow is a useful tool for this purpose. However, we question the accuracy and overall user experience of such systems by analyzing topical relevance based on users’ published content (i.e., tweets), which is the dominant approach among Twitter recommendation systems in use.
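To make the content-based approach concrete, here is a minimal sketch of topical-relevance scoring of the kind the abstract critiques: each account’s tweets are treated as a bag of words, weighted by TF-IDF, and candidates are compared to the user by cosine similarity. This is an illustrative baseline, not the authors’ system; the function names and toy data are our own.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Turn token lists (one per account) into TF-IDF weight dicts."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A recommender in this style would rank candidate experts by `cosine` against the user’s own tweet vector; the paper’s point is that high scores on this measure do not guarantee a good follow recommendation in practice.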
In this study we show evidence that self-reports of motivation for doing work on Amazon Mechanical Turk are subject to social desirability bias. We employ a quasi-experimental survey method which mitigates that bias and reveals different patterns of motivation and social desirability among MTurk workers in the US and in India.
Expert finding systems help enterprise workers find a person who may be able to help with a project or question, but it is challenging to select the best expert for a specific task. This leads the user to waste time by asking an expert who may not be able to answer the question or who might not take the time to respond (we call this a “strikeout”). Through a lab study of realistic expert-finding tasks with 35 participants, we found that the current leading expert finding approach leads to a 28% strikeout rate, but our improved interface reduces this rate to 9%.
While watching live sports events, many fans use social media (e.g., Twitter) to report on their thoughts about the event as it unfolds. We investigated using this information source to semantically annotate live broadcast sports games (here: American Football) to select video highlights that are appropriate for an individual fan rather than for a general audience – showing plays that are (1) interesting, (2) controversial, or (3) skillful.
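The simplest signal for annotating a broadcast from fan tweets is a spike in per-minute tweet volume, which tends to coincide with notable plays. The sketch below flags minutes whose volume is a z-score outlier; it is a hypothetical baseline of our own, not the paper’s method, which goes further by classifying highlights semantically (interesting, controversial, skillful) for individual fans.

```python
from statistics import mean, stdev

def detect_highlights(counts, z_threshold=1.5):
    """Return indices of minutes whose tweet volume is an outlier
    (z-score >= z_threshold) relative to the whole game."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []                      # flat timeline: no spikes
    return [i for i, c in enumerate(counts)
            if (c - mu) / sigma >= z_threshold]

# Toy per-minute tweet counts: spikes at minutes 3 and 7.
counts = [12, 15, 14, 90, 13, 16, 11, 85, 14, 12]
print(detect_highlights(counts))       # → [3, 7]
```

Volume alone cannot distinguish a skillful play from a controversial call, which is why the paper turns to the content of the tweets rather than just their rate.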
Making sense of digital information one gathers from the web can be a daunting task. Numerous researchers have explored the process of sensemaking for individuals as well as for groups of people working collaboratively towards a specific goal. In the work we present in our paper, we explore the viability of a distributed sensemaking system to assist users in sensemaking. That is, we tested whether a user’s sensemaking efforts can improve in quality and efficiency when she is able to leverage the sensemaking efforts of others with whom she is not explicitly collaborating and whom she does not even know.
In our previous post, we introduced MonoTrans, a system that does crowdsourced translation without bilingual people. In our research papers we presented some promising results using it to translate children’s books and Haitian Earthquake SMS messages. An interesting question remains: beyond experiments, how do systems like this work in the real world?