Asking the Right Person: Supporting Expertise Selection in the Enterprise

Svetlana Yarosh, Georgia Institute of Technology
Tara Matthews, IBM Research — Almaden
Michelle Zhou, IBM Research — Almaden

Expert-finding systems help enterprise workers find a person who may be able to help with a project or question, but selecting the best expert for a specific task is challenging. A poor choice wastes time: the user asks an expert who may not be able to answer the question or who might not take the time to respond (we call this a “strikeout”). Through a lab study of realistic expert-finding tasks with 35 participants, we found that the current leading expert-finding approach yields a 28% strikeout rate, while our improved interface reduces this rate to 9%.

Imagine that you have come across some legacy Fortran code in your current project. You are not an expert on Fortran, so you decide to find someone to sit with you for an hour or two and help you make sense of it. You type “Fortran” as a query into your company’s Expertise Recommender system. Out comes a list of more than 100 potential experts. So, whom should you actually ask? Figure 1 shows how two potential experts (anonymized for privacy) might appear in the search results. Should you contact Eric Expert, Steve Skillful, or somebody else from the long list?

Figure 1. The original expert-finder interface, showing the information displayed about potential experts.

We asked 35 participants in various positions at IBM (e.g., inventors, programmers, personal assistants, physicists) to try to find an expert to help with scenarios similar to the one above. We selected tasks appropriate for each participant through a two-week diary study of his or her people-finding needs. Each participant was asked to find an expert using two different systems: the standard system above and our version of the system. Though both systems provided the same search results in the same order, our system included additional information about each expert (see Figure 2 below for an example).

Figure 2. The improved interface showing additional information in the search results.

We collected the additional information about each expert from multiple external and IBM-internal sources. In both conditions, the participants were encouraged to look for additional information about experts in whatever way they wanted (for example, they could try looking up Steve Skillful on Facebook to see whether they had friends in common). After each participant decided on the expert they would most likely contact for help, we sent that expert a short questionnaire describing the participant’s query and asking (1) how likely the expert would be to reply to an email from this participant, (2) how likely it was that the expert was the right person to ask this question, and (3) how likely it was that the expert could point the participant to somebody else who might be able to help.
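
To give a concrete flavor of the aggregation involved, here is a minimal sketch in Python of how per-expert fields from several profile sources might be merged into one search-result snippet. The source names, expert IDs, and fields below are hypothetical illustrations, not the actual sources or schema used in the study.

```python
# Minimal sketch: merge whatever fields each profile source knows about
# an expert into one record for display in the search results.
# All sources, IDs, and field names here are hypothetical examples.

def merge_expert_info(expert_id, sources):
    """Collect fields from each source; the first source wins on conflicts."""
    info = {}
    for source in sources:
        record = source.get(expert_id, {})
        for field, value in record.items():
            info.setdefault(field, value)
    return info

# Hypothetical internal and external sources.
internal_directory = {
    "eric.expert": {"job_role": "Senior Software Engineer",
                    "location": "Almaden"},
}
project_wiki = {
    "eric.expert": {"recent_projects": ["Legacy Fortran migration"]},
}

snippet = merge_expert_info("eric.expert", [internal_directory, project_wiki])
print(snippet)
```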

We found that presenting extra information in the search results really helped our users. First of all, the participants decided on an expert to contact more quickly in the condition with extra information, and they were better able to articulate the reasons for their decision (we go into more detail in the paper). Most interestingly, the participants were also less likely to strike out by choosing an inappropriate expert for their query. We count contacting an expert as a strikeout if the expert either (1) said that they were unlikely to respond to the participant’s contact or (2) said that they were unlikely to be the right person for this question AND unlikely to be able to point the participant to the right person. In other words, a strikeout is when the expert you contacted gets you no closer to the answer! Using the system without any extra information in the search results, the participants struck out a whopping 28% of the time. With the extra information, the strikeout rate was only 9%. This was a marked improvement considering that the two systems were identical (interface, search results, results order) except for a few lines of extra information.
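
The strikeout rule above is just a small logical predicate over the three questionnaire answers. Here is a minimal sketch of it in Python, assuming a Likert-type scale where ratings at or below a threshold count as “unlikely”; the exact scale and threshold are assumptions, not details taken from the paper.

```python
# Hypothetical threshold: ratings of 3 or below on a 1-7 scale count
# as "unlikely". The actual scale used in the study is not specified here.
UNLIKELY = 3

def is_strikeout(reply_likelihood, right_person, can_refer):
    unlikely_to_reply = reply_likelihood <= UNLIKELY
    unlikely_right_person = right_person <= UNLIKELY
    unlikely_to_refer = can_refer <= UNLIKELY
    # A strikeout gets the participant no closer to an answer: the expert
    # will not reply, or is neither the right person nor a useful referral.
    return unlikely_to_reply or (unlikely_right_person and unlikely_to_refer)

# An expert who probably can't answer the question themselves but can
# point to the right person does NOT count as a strikeout.
print(is_strikeout(reply_likelihood=6, right_person=2, can_refer=6))  # False
# An expert who is unlikely to reply at all does.
print(is_strikeout(reply_likelihood=2, right_person=6, can_refer=6))  # True
```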

Presenting extra information about each expert in the result list may seem like an obvious change, but it is not something that current Expertise Recommender systems have incorporated. In the paper, we go into more detail about what kinds of information may be most helpful to show. Much of this data may already be available in the various internal and external profiles used by your organization, but we also discuss information that currently cannot be easily acquired and that might point to interesting new directions for data analytics and crowdsourcing approaches.

For more, see our full paper, Asking the Right Person: Supporting Expertise Selection in the Enterprise.

This entry was posted in CHI 2012 by Lana Yarosh.

About Lana Yarosh

Lana Yarosh is a researcher at AT&T Research Labs and a recent graduate of the Ph.D. program in Human-Centered Computing at the Georgia Institute of Technology. Her research interests fall primarily in the area of Human-Computer Interaction, particularly Ubiquitous and Social Computing. She has a passion for empirically investigating real-world needs that may be addressed through computing applications. After identifying these opportunities, she designs and develops technological interventions, evaluating them using a balance of qualitative and quantitative methods. Most recently, she has designed a media space system supporting synchronous remote communication between children and parents to help address the needs of divorced families.