Crowds Helping Make City Sidewalks More Accessible

Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments; however, there are currently few mechanisms to determine accessible areas of a city a priori (i.e., before a person leaves their home). This is a significant problem: according to the most recent U.S. Census data, 30.6 million individuals have physical disabilities that affect their ambulatory activities. In our recent work, we investigated whether crowd workers from Amazon Mechanical Turk (turkers) could accurately find, label, and assess the physical accessibility of sidewalks from Google Street View (GSV) imagery.

[Image: accessibility problems labeled by turkers]

Figure 1: In our recent work, we propose and investigate the use of crowdsourcing to find, label, and assess sidewalk accessibility problems in Google Street View (GSV) imagery. The GSV images and annotations above are from our experiments with Amazon Mechanical Turk crowd workers (turkers).

Our overarching research goal is to build and investigate new, scalable mechanisms that combine crowdsourcing, HCI, and computer vision for determining the accessibility of the physical world. We plan to use data gathered from these approaches to build a suite of accessibility-aware mapping tools that inform governments, policy makers, and pedestrians alike about the inaccessible parts of their cities. Just as modern mapping services provide route recommendations based on traffic conditions or even an area’s historical crime rates, we aim to provide similar “smart routing” services for mobility-impaired pedestrians, tailored to their specific abilities.

Although some tools and mechanisms exist to report on street-level issues, most of these are reactive rather than proactive. For example, SeeClickFix.com allows concerned citizens to report potholes, street lamp outages, and other municipal problems via a website or mobile application. Similarly, many cities in the US offer 311 non-emergency municipal services to capture and track reported issues. Our approach is different in that we are actively building a knowledge base of physical-world accessibility rather than relying on reported problems. We note, however, that our approach is complementary to and can work in concert with existing tools.

For our preliminary study, we showed turkers a manually curated set of 229 static images from GSV and asked them to perform three tasks: (i) identify the location of the sidewalk accessibility problem in the GSV image; (ii) categorize the type of the problem; and (iii) evaluate how severely the problem may obstruct a person’s path. For a detailed explanation of these tasks, please see our instructional video.
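To make the structure of these tasks concrete, here is a minimal sketch of the kind of record a single completed label might produce. The class name, field names, and severity scale below are illustrative assumptions on our part, not the study’s actual data schema:

```python
from dataclasses import dataclass

@dataclass
class SidewalkLabel:
    """One turker's label on one GSV image (illustrative only)."""
    image_id: str   # which GSV image was labeled
    x: int          # pixel location of the problem in the image (task i)
    y: int
    category: str   # problem type, e.g., "obstacle in path" (task ii)
    severity: int   # how severely it obstructs a path, e.g., 1-5 (task iii)

# Hypothetical example of a single label:
label = SidewalkLabel(image_id="gsv_001", x=412, y=305,
                      category="obstacle in path", severity=4)
print(label)
```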

[Image: a correctly labeled scene (left) and a scene with labeling mistakes (right)]

Figure 2: As with most crowdsourcing studies, we encountered issues with work quality. For our tasks, accuracy was often contingent on the particular GSV scene and the salience of the accessibility problems therein. For example, in (a) above, the two cars were correctly identified as “obstacles in path” by most turkers. In contrast, (b) illustrates a scene that many turkers struggled to label: notice that the stop sign, the pole, and the tree were all labeled as obstacles even though none of them is on the sidewalk.

Our study demonstrated that turkers can identify sidewalk accessibility problems in static GSV imagery with 81% accuracy without any quality controls and 93% accuracy with the addition of simple quality control schemes. As readers of this blog likely know, worker quality can vary significantly. In Figure 2a above, for example, turkers provided highly consistent labels marking the cars as obstructing the path. In Figure 2b, on the other hand, many turkers mislabeled objects such as trees, signs, and poles as obstacles even though they are not directly in the pedestrian pathway. We tested multiple quality control methods to filter out such low-quality work. We are currently exploring methods to provide active feedback to our turkers through injected “ground truth” images.
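To illustrate, here is a minimal sketch of one simple quality control scheme in the spirit of those we tested: majority voting over redundant labels from multiple turkers. The data and names are hypothetical, and this is a simplification rather than our actual pipeline:

```python
from collections import Counter

def majority_vote(labels_per_image):
    """Aggregate redundant crowd labels by majority vote.

    labels_per_image maps an image ID to the list of category
    labels that independent turkers assigned to that image.
    """
    aggregated = {}
    for image_id, labels in labels_per_image.items():
        # most_common(1) returns [(label, count)] for the most
        # frequently assigned label; ties break arbitrarily.
        aggregated[image_id] = Counter(labels).most_common(1)[0][0]
    return aggregated

# Hypothetical votes from three turkers on two GSV images:
votes = {
    "gsv_001": ["obstacle in path", "obstacle in path", "surface problem"],
    "gsv_002": ["no curb ramp", "no curb ramp", "no curb ramp"],
}
print(majority_vote(votes))
# {'gsv_001': 'obstacle in path', 'gsv_002': 'no curb ramp'}
```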

The work we describe above is the first study in my ongoing Ph.D. thesis project and lays the foundation for future work, including:

  • Collecting accessibility data beyond sidewalks, including streets, building fronts, and bus stop environments
  • Incorporating computer vision algorithms to automatically locate sidewalks and sidewalk accessibility problems
  • Building a volunteer-based web application where people can work on accessibility assessment tasks and help make their neighborhoods more accessible
  • Building accessibility-aware map tools such as an accessibility score index visualization and accessibility-aware routing algorithms (a rough sketch of the routing idea follows this list)
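As a rough illustration of the routing idea in the last bullet, the sketch below runs Dijkstra’s algorithm over a tiny hypothetical sidewalk graph, adding a cost penalty to edges flagged with accessibility problems. The graph, penalty value, and function names are all assumptions for illustration, not an implemented system:

```python
import heapq

def accessible_route(graph, start, goal, penalty=100.0):
    """Dijkstra's shortest path over a sidewalk graph where edges
    flagged with accessibility problems incur an extra cost.

    graph maps node -> list of (neighbor, length_m, has_problem).
    """
    frontier = [(0.0, start, [start])]  # (cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length, has_problem in graph.get(node, []):
            edge_cost = length + (penalty if has_problem else 0.0)
            heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical sidewalk graph: the B -> C segment has a labeled obstacle.
graph = {
    "A": [("B", 100, False), ("D", 150, False)],
    "B": [("C", 80, True)],
    "D": [("C", 120, False)],
}
print(accessible_route(graph, "A", "C"))  # (270.0, ['A', 'D', 'C'])
```

Setting the penalty to float("inf") would forbid flagged segments entirely; tuning the penalty per problem type and per user is one plausible way to tailor routes to a pedestrian’s specific abilities.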

For more, see our full paper, Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems.

Kotaro Hara, University of Maryland, College Park
Victoria Le, University of Maryland, College Park
Jon E. Froehlich, University of Maryland, College Park

10 thoughts on “Crowds Helping Make City Sidewalks More Accessible”

  1. I think more training might be a good avenue to explore to improve quality as well. We know that for complex visual identification tasks, cartoons and decision-tree-type instructions can make a tremendous difference in recognition rates (see http://wexler.free.fr/library/files/biederman%20(1987)%20sexing%20day-old%20chicks.pdf). I wonder if you could ‘lower the noise floor’ by introducing more rigorous training. I’d be interested to see how interleaving gold standard examples improves performance as well.

    Forgive me for being a bit sci-fi speculative, but accessibility-aware routing algorithms could be a complete game-changer. Imagine augmenting a guide dog with an agent that has accessibility information. The combination of a dog’s ability to sense and adapt to immediate changes with an intelligent agent doing the neocortex’s work of route planning and prediction is really compelling. I can foresee mobile applications that might help people formulate a route that meets some accessibility constraints while at the same time using the camera symbiotically to receive updated data about sidewalk conditions. Realtime crowdwork could then take the updated findings and actively maintain a map of conditions that might even be seasonal or based on day-to-day weather.

    • Hi Jeffrey,

      Thank you for your comments.

      > I think more training might be a good avenue to
      > explore to improve quality as well.
      A great point, and I agree. We are currently building interactive tutorials (i.e., onboarding, like those in many modern video games) to (i) quickly get turkers used to the task and (ii) ensure they actually go through the tutorial. We developed a prototype of this kind of onboarding in another project that I am working on. Though it is a different project, the task complexity is comparable to our sidewalk accessibility project, and turkers seem to perform better when they are trained via onboarding. We still need to investigate this more rigorously, though.

      > Imagine augmenting a guide dog with an agent
      > that has accessibility information.
      Great idea :) We should also have guide dogs wear Google Glass to capture images or video of every corner of a city’s sidewalks for analysis!

  2. Very interesting and helpful work!

    I noticed that your video example for turkers was heavily framed around making sidewalks accessible for wheelchairs. Did you consider testing the factor of ‘altruism’ in addition to quality control schemes? I know some previous work (e.g., Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets, http://arxiv.org/pdf/1210.0962v1.pdf) has shown that, while altruistic motivations may not increase quality, they can increase the quantity of work that a worker performs – perhaps you could ask workers to label multiple images in one task?

    • Hi Erin,

      Thank you for your reply.

      Great point. We did not investigate the effect of task framing, so I cannot really say whether it changed the number of tasks turkers completed. I agree it is important to investigate how best to frame the task so that turkers can be most productive.

      > perhaps you could ask workers to label
      > multiple images in one task?
      We actually did :)

  3. I’m curious about the inverse problem — would it be easier to have crowds label accessibility _problems_, or is it easier to get them to label accessible regions? For example, instead of labeling sidewalks without ramps, what about providing a tool to label all ramps? Or have them mark paths that can be followed by someone in a wheelchair, and the path stops when there’s a car on the sidewalk?

    • At first I was thinking that this could be an issue of updating. I could foresee trouble if a pathfinding system directed a person to an area marked accessible that has now become blocked. Choosing a less optimal path because blockages that used to be present are now gone seems like the better alternative.

      I bet you’re on to something, though. I wonder if the affirmative is just easier to get the crowd to label. It’s the problem of proving a negative through exhaustive search versus proving a positive by seeing it happen. I wonder if we could augment such positive data with actual usage. Just like we instrument cars/roads to monitor traffic flow, we could instrument accessibility aids to collect anonymous, aggregate data about what is actually usable in real time. If people are at the geolocation, then it’s passable?

        • Hello Michael and Jeffrey,

          Thank you for your comments!

          Michael:

          > For example, instead of labeling sidewalks
          > without ramps, what about providing a tool
          > to label all ramps?
          Great point. In our pilot study, we actually asked turkers to label both positive and negative accessibility attributes and to answer much more detailed questions (e.g., if something is blocking the way, is it a tree or a pole?). It turned out, however, that those tasks were a little too challenging and the results were not very accurate. So we simplified the task and asked turkers to label only negative accessibility attributes (e.g., objects blocking paths), because (i) it would be easier for them to complete the task correctly, and (ii) for applications such as an accessibility-aware navigation system, it doesn’t matter whether it is a tree or a fire hydrant blocking a sidewalk.

          I agree that it is interesting to compare labeling positive attributes against labeling negative attributes. I imagine that for some sidewalk attributes it could be cognitively easier to find and mark accessible features.

          Jeffrey:
          That is a great point. I agree it is “safer” to report something that is not a problem as a problem, so that the wayfinding application shows a less optimal but passable route.

          We also discussed instrumenting wheelchairs with GPS devices and tracking where they travel to collect data about accessible routes. But this may not be the best solution, because it is hard to distinguish between (a) routes that wheelchair users did not travel because the sidewalk was not accessible and (b) routes they simply did not travel.

          We also discussed crowdsourcing audits to people who are physically located near potential sidewalk problems. We don’t know the best way to motivate them to participate in such an auditing process, but we definitely want to build a mobile game so that, if users are near an expected accessibility problem (which we bootstrap from our crowdsourcing method), we can ask them to go check whether the problem still exists :)

  4. Hi Erin,

    Thanks for your comment. I agree that tasks with some social appeal or clear social benefit would likely improve turker effort/recruitment (thanks for the pointer to work on this). We are starting to explore task framing in our current work and plan to explore this in more detail in the future.

    Jon

  5. Kotaro (et al.),
    This is great work. Truly inspiring. My 2 cents re:
    “We also discussed instrumenting wheelchairs with GPS devices and tracking where they travel to collect data about accessible routes. But this may not be the best solution, because it is hard to distinguish between (a) routes that wheelchair users did not travel because the sidewalk was not accessible and (b) routes they simply did not travel.”:

    Just a thought: if you’re already thinking down the route of putting GPS devices on wheelchairs, how about also attaching a simple device with a button (or an app on their phone) that lets them indicate: I am blocked *here and now* (and maybe add a couple of words if they are so inclined)? Correlated with the GPS data, this could potentially help you distinguish the two states.
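    A minimal sketch of the correlation I have in mind, with purely hypothetical data and thresholds:

    ```python
    # Timestamped GPS trace and button presses ("I am blocked here and now").
    trace = [  # (timestamp_s, x_m, y_m) in some local projection
        (0, 0.0, 0.0), (30, 40.0, 0.0), (60, 40.0, 35.0),
    ]
    presses = [62]  # timestamps (s) of button presses

    def blocked_points(trace, presses, max_dt=15):
        """Return trace points within max_dt seconds of a button press."""
        return [(t, x, y) for (t, x, y) in trace
                if any(abs(t - p) <= max_dt for p in presses)]

    # Points near a press are candidate blockages; traveled points without
    # presses are evidence of passability, separating your cases (a) and (b).
    print(blocked_points(trace, presses))  # [(60, 40.0, 35.0)]
    ```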

    I’m sure there is much more complexity to this… Will look forward to seeing this project fly.

    (Another thing you probably thought of: cooperating with openstreetmap.org?)
