
Using Computer Simulations to Investigate the Potential
Performance of ‘A to B’ Routing Systems for People with Mobility Impairments

Reuben Kirkham, Monash University, Melbourne, Australia (reuben.kirkham@monash.edu) and Benjamin Tannert, Hochschule Bremen, City University of Applied Sciences, Bremen, Germany (benjamin.tannert@hs-bremen.de)
(2021)
Abstract.

Navigating from ‘A to B’ remains a serious problem for many people with mobility impairments, due to the need to avoid accessibility barriers. Yet there is currently no effective routing tool that is regularly used by people with disabilities in order to effectively avoid accessibility barriers in the built environment. To explore what is required to produce an effective routing tool, we have conducted Monte-Carlo simulations, simulating over 460 million journeys. This work illustrates the need to focus on barrier minimization, instead of barrier avoidance, due to the limitations of what can be achieved by any accessibility documentation tool. We also make a substantial contribution to the concern of meaningful performance metrics for activity recognition, illustrating how simulations can operate as useful real-world performance metrics for information sources utilized by navigation systems.

Accessibility; Disability; Navigation; Routing
copyright: rights retained; journal year: 2021; conference: 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI ’21), September 27 - October 1, 2021, Toulouse & Virtual, France; doi: 10.1145/3447526.3472056; isbn: 978-1-4503-8328-8/21/09; ccs: Human-centered computing → Accessibility

1. Introduction

Accessibility in the built environment remains a perennial problem for the hundreds of millions of people worldwide who have mobility impairments (Organization et al., 2011). A particularly pertinent challenge is how to navigate from ‘A’ to ‘B’ in a reasonable time whilst avoiding accessibility barriers (e.g. (Ding et al., 2007; Wu et al., 2020; Kamaldin et al., 2019; Völkel et al., 2008; Harriehausen-Mühlbauer and Roth, 2016)), i.e. ‘barrier avoidance’. One difficulty is the present lack of effective documentation of the built environment, meaning that the existence of accessibility barriers is not known in advance, even for permanent barriers that are often a longstanding feature of the local landscape. Automated tools have limited accuracy even when focused on identifying well-defined accessibility barriers, whilst the impact of a given accessibility barrier can vary greatly depending on an individual’s disability and personal circumstances (Gupta et al., 2020; Kirkham et al., 2017). This means that outside of areas that have already been subject to expert documentation (which is prohibitively expensive in most circumstances), there is insufficient information available to navigate the built environment reliably. The wider implication is a considerable negative impact on the lives of many people with mobility impairments (Bromley et al., 2007; Hara et al., 2016), limiting their inclusion in wider society.

There has been an increasing amount of research aimed at automatically (or sometimes semi-automatically) documenting the built environment (e.g. (Gani et al., 2019; Hara et al., 2014; Iwasawa et al., 2015a; Kirkham et al., 2017; Mascetti et al., 2020; Mourcou et al., 2013; Saha et al., 2019)). However, the recognition performance of these accessibility documentation systems has not been directly connected with the navigation task that they are supposed to ultimately assist with. At the same time, drafting appropriate descriptive performance metrics is challenging, as the required documentation performance depends on (i) the navigation needs of the end user (which can widely vary depending on their disability and personal circumstances), (ii) the degree of inaccessibility in the environment (which again, is person specific) and (iii) the topology and layout of the built environment itself.

We present and apply a (suitably general) simulation framework that enables us to connect the recognition performance of an accessibility documentation system to the performance of an ‘A’ to ‘B’ navigation system that relies on the data generated by it. This means our work is the first that uses simulations as performance metrics for a human activity recognition system. Our bespoke simulation framework was developed in order to enable the simulation of hundreds of millions of journeys in a reasonable time and itself represents an important contribution to accessibility documentation. By running a wide range of Monte-Carlo simulations using our framework (over 460 million journeys in total), we provide answers to important strategic questions which in turn can help shape the important research agenda of accessibility documentation. We find that even a perfect accessibility documentation tool can only offer a reasonable experience in a city that is already largely accessible. Our wider results suggest a different ‘end game’ for accessibility documentation tools: rather than focusing on barrier avoidance, the emphasis should instead be on minimization and accepting the inevitable imperfection of these navigation systems. We also explain how our simulation approach could support this process, and thus provide candid information to end users about the likelihood a route will be inaccessible, thereby providing meaningful feedback to end users. Finally, we explain how our simulation approach could be adapted for a wide range of navigation scenarios, including navigation scenarios that do not involve people with mobility impairments.

2. Background

2.1. The ‘A to B’ problem and its implications for people with mobility impairments

The ‘A to B’ routing problem (sometimes called the sidewalk accessibility problem) involves finding the best route in the built environment to undertake a journey from one location (‘A’) to another (‘B’). It is distinct from the ‘within A/B’ problem, which addresses whether a given location is itself accessible (as opposed to whether there is an accessible route for getting there in the first place). For many people with disabilities, pre-existing navigation tools are often inadequate, because they do not reliably provide routes that are accessible to the end user (Gupta et al., 2020; Tannert et al., 2019). In the real world, accessibility barriers often permeate the built environment: a recent study reported an average of more than one accessibility problem for every 100m travelled in some cities (Froehlich et al., 2020), whilst another investigation showed that one US city had 80% of curbs that were not ADA compliant (Bagenstos, 2020). Providing effective and accurate navigation tools for people with mobility impairments is thus an important yet difficult challenge (Froehlich et al., 2019), given the large number of pre-existing barriers.

For people with mobility impairments, taking any journey can be burdensome, especially when attempting to do so in a reasonable time; strategies aimed at partially addressing this include ‘reccies’ to avoid barriers (Hara et al., 2016), or avoiding travelling as a pedestrian where possible (Bromley et al., 2007). The overall effect is to “depriv[e] disabled individuals of social opportunities (because they cannot visit friends, family, and places of entertainment or recreation), and [often cause] serious physical injuries when wheelchair users tip over obstacles or are hit by cars when forced to travel in the streets” (Bagenstos, 2020). Other examples include longer-term health damage due to taking suboptimal routes, e.g. damaging whole-body vibration caused to manual wheelchair users (Bowtell, 2015; Rice et al., 2018), or the use of inappropriately designed ramps that require undue amounts of force to access (Velho et al., 2016). At the same time, the range of barriers (and thus implications) is diverse, with many different types of concerns being overlooked or underemphasized in existing studies (Gupta et al., 2020). This leads to two types of concerns in respect of accessible navigation: (i) barrier avoidance (aimed at ‘barrier-free’ routes, e.g. as in (Harriehausen-Mühlbauer and Roth, 2016; Kamaldin et al., 2019; Wu et al., 2020)) and (ii) a ‘quality aware’ approach (Siriaraya et al., 2020), which simply seeks to minimize the number of barriers encountered.

2.2. Accessibility Documentation Systems

An effective navigation tool needs to be informed by accurate information about the accessibility of the built environment. This is easier said than done: indeed, even defining accessibility can be challenging (Church and Marston, 2003; Saha et al., 2021), and there is a wide range of different types of physical barriers (whose effects in turn depend on an individual’s specific disability). Nevertheless, various types of accessibility documentation systems have been designed in an effort to provide this type of information, typically by identifying specific types of physical features. Some of these approaches are human-driven and rely on individuals to collect the relevant information. One such approach is to use experts; however, whilst sufficiently accurate, this is expensive and results in very limited coverage, these exercises being “laborious and time consuming” (Froehlich et al., 2019), meaning that most cities do not actually have anything approaching an accurate map of accessibility barriers (Bagenstos, 2020; Bromley et al., 2007). Another strategy that has been tried is ‘geocrowdsourcing’ (Rice et al., 2018), yet this has encountered significant problems in respect of its ability to accurately document barriers (Rice et al., 2018) and has also lacked effective engagement and thus coverage (Froehlich et al., 2019; Mascetti et al., 2020).

Given the limitations of human-driven approaches, a growing body of accessibility documentation research has focused on automated or semi-automated approaches (e.g. (Hara et al., 2014; Kirkham et al., 2017; Saha et al., 2019; Weld et al., 2019)), with (Lange et al., 2021) providing a body of principles for designing appropriate automated accessibility documentation systems. To date, the most successful automated example is based on Project Sidewalk data (Saha et al., 2019), which used computer vision (specifically a ResNet) to achieve an F-score of 0.85 in a best-case scenario (Weld et al., 2019) (albeit with some variation across classes, with weaker performance on ‘surface problems’). Taking into account the difficulty of documentation, this is a strong recognition performance, especially given the variety of forms that physical barriers can take and the disagreement amongst people with mobility impairments as to the real-world impact of given barriers on accessibility (Kirkham et al., 2017).

We also note that there are numerous papers that claim to offer a viable system for using inertial sensors mounted on mobility aids (e.g. wheelchairs) to automatically measure the accessibility of the built environment (Iwasawa et al., 2015b; Iwasawa and Yairi, 2012; Iwasawa et al., 2016; Kurauchi et al., 2019; Yairi et al., 2019). Unfortunately, the evaluations of these inertial sensor-based systems are typically problematic in two different ways: they are not naturalistic (or ecologically valid), which means that the studies’ findings do not translate to realistic activity recognition problems (Poppe et al., 2007), and they fail to use the appropriate leave-one-out metric of evaluation (Hammerla and Plötz, 2015) (in some cases, participants were left out, but they all followed the same route, which is a different variant of the same mistake). The recent work of (Mascetti et al., 2020) showed that there is a stark difference in reported performance between a leave-one-out approach and a k-fold evaluation in the context of obstacle detection (an F-score of 0.616 for leave-one-out versus 0.84 for the inappropriate k-fold evaluation). Accordingly, whilst there are systems that purport to have an F-score above 0.9, these systems have not been shown to have a realistic chance of working effectively in the real world.
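The evaluation pitfall described above can be illustrated with a small sketch (all names and data here are hypothetical): a person-independent evaluation must hold out every recording from one participant at once, rather than mixing a participant's data across folds.

```python
def leave_one_subject_out(records):
    """Yield one (held_out, train, test) split per participant, where the
    test fold contains every recording from exactly one participant --
    the person-independent scheme appropriate for activity recognition."""
    subjects = sorted({r["subject"] for r in records})
    for held_out in subjects:
        train = [r for r in records if r["subject"] != held_out]
        test = [r for r in records if r["subject"] == held_out]
        yield held_out, train, test

# Toy data: two participants, two recordings each (hypothetical).
records = [
    {"subject": "p1", "route": "campus", "label": "kerb"},
    {"subject": "p1", "route": "park", "label": "smooth"},
    {"subject": "p2", "route": "campus", "label": "kerb"},
    {"subject": "p2", "route": "park", "label": "smooth"},
]

splits = list(leave_one_subject_out(records))
# No recording of a held-out participant ever appears in training.
assert all(r["subject"] != held_out
           for held_out, train, _ in splits for r in train)
```

By contrast, a naive k-fold split over individual recordings would place data from the same participant (or the same route) in both training and test sets, inflating reported performance in the manner described above.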

2.3. The Fair and Meaningful Metrics for Activity Recognition Problem

This paper also offers a different take on the problem of meaningful performance metrics for activity recognition, as existing documentation tools are often activity recognition systems (where the ‘activity’ is some sort of interaction with a potential barrier). At present, we are unaware of any cases where the performance of these automatic documentation systems has been directly connected to real navigation tasks. There is an important gap, in that there are no current performance metrics that convert documentation performance into navigation performance. More generally, the determination of appropriate performance metrics is a matter of active investigation in respect of activity recognition (Hammerla and Plötz, 2015; Ward et al., 2011). Our work accordingly examines how we can translate documentation performance into navigation performance, using computer simulations to do so.

At the same time, the meaningful metrics problem is also of concern in respect of ‘fair AI’ more generally. Whilst fair AI has primarily focused on the protected attributes of gender and ethnicity, people with disabilities have been mostly overlooked, being “largely omitted from the AI-bias conversation” (Whittaker et al., 2019). In particular, there is a real need to ensure that the diverse range of considerations in respect of disability is addressed: the variety of disabilities makes training an AI system based on machine learning more challenging (Morris, 2020), whilst also meaning that inherent biases are less likely to be detected. In the context of accessibility documentation tools, this might arise by inadvertently taking a narrow view of disability, and thus excluding subtle instances of accessibility barriers. If an accessibility documentation system relies on the motion of a disabled person, then a particular impairment (e.g. an unusual gait) may also reduce recognition performance (Morris, 2020), and thus the reliability of documentation. Understanding the real-world implications of a given system’s recognition performance is thus an important consideration for ensuring that people with disabilities are not disadvantaged by the use of AI systems, including those used to document inaccessibility in the built environment. Indeed, this is very similar to design engineering, which has also recently been advanced as a means of more effective technology design for people with disabilities by conducting simulations (Kristensson et al., 2020).

3. Connecting Documentation and Navigation

3.1. An ‘End-to-End’ system for a diverse range of concerns

An end-to-end documentation-routing system involves three different interrelated stages:

  1. The documentation of the built environment and whether a given path segment is accessible.

  2. The design of a routing algorithm that proposes the most appropriate routes (and indicates its confidence in its own predictions).

  3. An interface that sits on top of the routing algorithm, so that the user can apply the results to navigate from ‘A to B’.

The difficulty is that these steps have been siloed from one another, when in reality they are closely interconnected. A routing algorithm can only operate on the accessibility information it is given. And the perceived performance of a navigation ‘interface’ in turn depends on how good the results provided by its underlying algorithm are (as well as how that information is presented). At the same time, better documentation can also come from the reporting of system errors and mistakes by end users of navigation systems: but only if they are satisfied enough with a navigation system to engage with it. The result (Figure 1) is that these navigation and documentation challenges are heavily interconnected; nevertheless, in practice, they are studied in silos that need to be brought together. Notably, user studies deal with only the interface itself, which is several layers removed from the underlying accessibility documentation system.

Figure 1. Illustration of feedback loops that influence the design and performance of an accessibility documentation tool.

This illustrates the relationship between each element of the feedback loop in the design of a navigation tool for people with mobility impairments. The process starts with the built environment, which is observed by the ’accessibility documentation system’. The data from the ’accessibility documentation system’ is then used by a ’routing algorithm’ to propose routes. The results of the ’routing algorithm’ are then presented to the user in the ’user interface’. The ’user interface’ in turn can provide reports of inaccessibility (which feed into the documentation system, improving its performance), as well as allowing the selection of ’routing preferences’, which influence the decision making of the ’routing algorithm’. At the same time, this is difficult to directly study - a user experience study only sees the interface in a given built environment, rather than the wider eco-system.

Adding to that complexity is the inherently person-specific nature of the accessibility documentation problem: different people have different disabilities and personal circumstances, and thus different needs and expectations. Yet as explained in (Gupta et al., 2020), there is a tendency for navigation support approaches to focus on one disability group, even though there is a wide range of impairments that a fully inclusive end-to-end navigation system needs to support. Although existing navigation systems often assume otherwise, different disabled people can have different objectives: one user might be interested in purely avoiding barriers (and thus is concerned with a ‘barrier avoidance’ approach), whereas another might be satisfied with simply encountering them appreciably less often (i.e. the ‘quality aware’ routing approach), as these barriers are discomforts rather than hard-edged dangers or obstacles. As such, any framework needs to be suitably abstract and flexible to deal with a broad range of navigation concerns and objectives: there is no fixed optimal solution, given the diversity amongst disabled people. At the same time, the output representation also has to be optimized to suit the preferences and needs of end users, but determining how best to do this is tricky if studies are confounded by errors in the underlying accessibility documentation tool.

A real-world understanding of what an accessibility documentation tool can accomplish is needed to inform the optimal approach for designing navigation tools. In effect, this is a ‘design engineering’ (Kristensson et al., 2020) problem, wherein simulations can help determine the most appropriate form of system. The distinction is that we are using these simulations to create real-world performance metrics that provide a clear illustration of the navigation performance of a system based on a documentation tool with a given recognition performance. These metrics in turn describe the bounds on what can be accomplished with a navigation system in the real world, thus helping to determine the best strategy for using the information available to it based on an imperfect documentation of the built environment. In turn, these answers inform expectations for any feedback loop that might improve performance.

3.2. Using Monte-Carlo Simulation to Connect Routing and Documentation

To connect the performance of an accessibility documentation system to a navigation system that relies upon the information provided by it, we conducted extensive Monte-Carlo simulations, with over 460 million runs (or simulated journeys) in total. Our simulations were constructed to focus on the cases where a person is seeking to engage in barrier avoidance, as this is the stated goal of a substantial proportion of work that focuses on accessible navigation, with a view towards investigating the viability of this approach. A map contains a set of edges (or path segments) connected at nodes (or junctions): a proportion of these edges/path segments will have accessibility barriers (Figure 2). The Inaccessibility Rate is the proportion of edges that are inaccessible in our simulation for our hypothetical person with a mobility impairment. Our starting point is that an accessibility documentation system has both a True Positive Rate (TPR), which reflects the proportion of edges containing accessibility barriers (for that person) that it accurately detects, and a True Negative Rate (TNR), which indicates the proportion of accessible edges it correctly labels as being such. The Inaccessibility Rate, the TNR and the TPR are all weighted (linearly) by distance, so that longer segments are more likely to have an accessibility barrier. On a map, this translates into marking a specific list of edges as being inaccessible from the perspective of the documentation tool – unless the tool has a perfect TPR and TNR, this perspective will sometimes be inaccurate, potentially leading to the person in question being routed inappropriately (see Figure 3 for examples of this and the different types of difficulties it can cause). A navigation tool will then automatically choose the shortest route based on the barriers identified by the documentation tool.

Figure 2. Illustration of a map, where inaccessible edges or path segments are marked in Red.
Figure 3. Illustration of different routing scenarios and the consequences of different types of documentation error. A shows the most optimal route in the real world. B illustrates a shorter route being taken that encounters an undocumented barrier (a false negative), whilst C illustrates a longer route being taken due to an attempt to avoid a non-existent accessibility barrier.

To enable us to simulate such a large number of journeys, we structured our simulations with a bespoke design that enabled computational efficiency (Figure 4), developing our own novel approach based on ablation and pre-computation. After extracting regions from OpenStreetMap, we generated a stratified sample (by distance) of different location pairs (‘A’ and ‘B’) up to a ‘crow flies’ distance of 1.2km (very short routes of less than 300m were excluded): we selected 60 journeys in total. For each journey, we then pre-computed a list of all possible routes (and their distances) between ‘A’ and ‘B’ up to a limit of 1.5km total edge length, using the CPUs of an HPC cluster. Each simulated journey involved generating a list of inaccessible segments on the map. We then generated a list of segments actually determined to be inaccessible by the documentation tool (based on the information available to it). The route to be followed was then computed using an ablation approach on a GPU (with a speed increase of around 1000× compared to a single CPU core). This ablation approach involves identifying all the edges perceived as being inaccessible and deleting them from the network, which computationally means deleting the precomputed routes that contain one or more such edges and selecting the shortest remaining route, if it exists (if no route exists, the system reports that it is not possible to complete the journey); the deletion is a matrix operation that can be done rapidly on the GPU. The relevant performance metrics, including whether an accessibility barrier was encountered and whether the system sent a person on an unduly long route, were then computed based on the selected route, thus enabling a report on the different types of errors and, in turn, a cost-benefit analysis to be conducted.

Figure 4. Overview of Process for Simulating Journeys. The list of all possible routes is generated by CPUs on a high-memory HPC node (>100GB), whilst the ablation step is computed on an NVidia P100 GPU. Each Route List is used 7.688 million times in our simulations, thus making the simulations computable in a reasonable amount of time.

3.3. Our Simulations in More Detail

3.3.1. Datasets

The performance of a routing algorithm depends partially on the geometry or structure of the city, because the consequences of an error differ depending on factors such as the time taken to ‘go around’, as well as the availability of different options. We therefore extracted the geometries of three different cities (Canberra, Prato and Seattle) from OpenStreetMap. Each city was chosen for having a substantially different geometry, both in terms of the average number of edges connected to each node and the differing volume of street coverage. (To give some examples, according to OSMnx (Boeing, 2017), the regions we selected of Canberra, Prato and Seattle have an average number of streets per intersection of 3.30, 2.84 and 3.71 respectively, whilst the edge density (per km²) was respectively 20512, 18454 and 28554.) These geometries included all pedestrian pathways available in the region. For our analysis, each geometry was converted into a graph comprising a list of edges, their lengths, and the nodes they connect to at each junction.

For each city, we selected 20 routes from ‘A’ to ‘B’ (up to 1.2km apart in ‘crow flies’ distance) between different landmarks in the city, with a view towards creating a representative ‘bag of journeys’ for our simulations. Very short routes (less than 300m) were excluded, given that they are unlikely to be the use case for a navigation tool. This led to a total of 60 journeys in our ‘bag of routes’. The distance range we selected has the advantage of making our simulations tractable, as well as being in accordance with the types of journeys typically taken by pedestrians, with the great majority of journeys falling within this range (see e.g. (Yang and Diez-Roux, 2012) for a summary of trip statistics amongst the general population). We chose accessible bathrooms as landmarks for the reasons given in (Tannert et al., 2019), namely that these are a commonly required destination for people with disabilities.
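The stratified sampling of journey endpoints can be sketched as follows. This is a minimal illustration rather than the authors' actual sampling code: the distance bands, landmark coordinates and function names are assumptions for the example.

```python
import math
import random

def crow_flies_m(a, b):
    """Haversine distance in metres between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

def stratified_pairs(landmarks, n_per_bin,
                     bins=((300, 600), (600, 900), (900, 1200)), seed=0):
    """Sample landmark pairs so that each crow-flies distance band
    (in metres) is equally represented; pairs under 300m are excluded
    by the band boundaries themselves."""
    rng = random.Random(seed)
    candidates = [(a, b) for i, a in enumerate(landmarks)
                  for b in landmarks[i + 1:]]
    rng.shuffle(candidates)
    chosen = {band: [] for band in bins}
    for a, b in candidates:
        d = crow_flies_m(a, b)
        for band in bins:
            if band[0] <= d < band[1] and len(chosen[band]) < n_per_bin:
                chosen[band].append((a, b))
    return chosen
```

A real pipeline would draw the landmark coordinates from OpenStreetMap rather than supplying them by hand, but the stratification logic is the same.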

3.3.2. Modelling Distribution of Accessibility Barriers

The errors of a documentation system have more or less impact depending on how many accessibility barriers there are in the built environment (for a given individual). Our modelling approach is deliberately abstract, in that we are not concerned with the specific form of barrier (given the variety of different barriers and the different views that each disabled person can have in respect of their importance). Instead, we focus on there being an inaccessibility rate, which is unique to a given individual in a given city and represents the likelihood of that person encountering a barrier. We further assume that accessibility barriers are randomly distributed within a given city and model different probabilities of an average-length path segment (i.e. a graph edge) being inaccessible. We assume no priors in distributing accessibility barriers: given the subjective nature of accessibility barriers to an individual and the variety of factors that influence barriers (e.g. level of maintenance, the age of infrastructure and so forth), there were no specific priors we could use, and attempting to use one (e.g. by making major roads less likely to have accessibility barriers) would have introduced a further parameter into our simulation, thus reducing its tractability. Moreover, a city that has launched an improvement campaign extensive enough for barriers to be less than randomly distributed will doubtless have used the ‘expert’ documentation approach to document barriers to begin with, yet it is highly unusual for this to be done (the automated and crowdsourced documentation systems which we model in this work are aimed at overcoming the cost of expert documentation).

In our approach, the probability of a segment being inaccessible was linearly weighted by segment length: for instance, if a segment was twice the length of the average one, then it would be twice as likely to be inaccessible. This makes sense for most types of accessibility barriers, as they are approximately uniformly distributed (e.g. a longer street is more likely to have trip hazards due to surface wear and tear). For our simulations, the inaccessibility rate ranged from 0 to 0.3 (inclusive), with increments of 0.02 (yielding 16 different values). We excluded values above 0.3, as our pilot simulations indicated that such rates would produce very few accessible routes, thus making a navigation tool infeasible.
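The length-weighted barrier placement described above can be sketched as follows. This is a minimal illustration, with hypothetical edge identifiers and lengths; the cap at probability 1 is an assumption for unusually long segments.

```python
import random

def sample_inaccessible(edge_lengths, inaccessibility_rate, rng):
    """Mark each edge inaccessible with probability scaled by its length:
    an edge of average length is inaccessible with probability equal to
    the inaccessibility rate; one twice as long, twice as often
    (capped at 1)."""
    mean_len = sum(edge_lengths.values()) / len(edge_lengths)
    return {e for e, length in edge_lengths.items()
            if rng.random() < min(1.0, inaccessibility_rate * length / mean_len)}

rng = random.Random(42)
edges = {"e1": 50.0, "e2": 100.0, "e3": 200.0}   # lengths in metres
barriers = sample_inaccessible(edges, 0.1, rng)   # one draw of 'ground truth'
```

Each Monte-Carlo run re-draws this ground-truth barrier set, so the same map yields a different configuration of barriers on every simulated journey.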

3.3.3. Modelling Performance of Accessibility Documentation System

As explained in (Tannert et al., 2019) (see also Figure 3), there are two main types of documentation error that can be made by an accessibility documentation system. The first is to state that a street segment (i.e. an edge) is accessible when it is not (i.e. a false negative). The second is to incorrectly mark an accessible edge as inaccessible (i.e. a false positive). For our purposes, this leads to two parameters which describe the performance of our (hypothetical) accessibility documentation systems:

True Positive Rate (TPR): The proportion of inaccessible segments correctly labelled as being such, weighted by distance. In other words, this is the probability of an average-length inaccessible segment being correctly labelled.

True Negative Rate (TNR): The proportion of accessible segments correctly labelled as being such, weighted by distance. In other words, this is the probability of an average length accessible segment being correctly labelled.

As with the inaccessibility rate, both the TPR and TNR are linearly weighted by distance. For both the TNR and TPR, the performance of the documentation tool was in the range of 0.7 to 1 (inclusive), with increments of 0.01 (leading to 31 different values for each of them, or 961 combinations of TPR and TNR). This performance range includes the best-performing existing systems and takes into account that a system performing worse than 0.7 under either metric is unlikely to be useful (which we observed when conducting pilot simulations), as well as the fact that there are systems with results substantially above 0.7 (e.g. (Weld et al., 2019)).
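Applying the TPR and TNR to a ground-truth barrier set can be sketched as below. For brevity this illustration omits the distance weighting described above and treats every edge as average length; the function name is hypothetical.

```python
import random

def perceived_barriers(all_edges, true_barriers, tpr, tnr, rng):
    """Corrupt the ground truth with documentation errors: a truly
    inaccessible edge is detected with probability TPR, whilst an
    accessible edge is falsely flagged with probability (1 - TNR)."""
    perceived = set()
    for e in all_edges:
        if e in true_barriers:
            if rng.random() < tpr:       # true positive: barrier detected
                perceived.add(e)
        elif rng.random() > tnr:         # false positive: phantom barrier
            perceived.add(e)
    return perceived

# One draw: a tool with TPR 0.9 and TNR 0.95 observing one real barrier.
rng = random.Random(7)
perceived = perceived_barriers({"e1", "e2", "e3"}, {"e2"},
                               tpr=0.9, tnr=0.95, rng=rng)
```

The routing algorithm then operates solely on this perceived set, never on the ground truth, which is exactly how documentation errors propagate into navigation errors.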

3.3.4. Routing Approach

We compute the shortest accessible route (i.e. with no reported accessibility barriers) as perceived by the documentation system. If there is no accessible route available, the system reports the route as being ‘impassible’. To put this into practice, we compiled a list of all possible routes (with a route being a list of edges) from ‘A’ to ‘B’ up to a maximum of 1500m route length, and then sorted each list ascending by distance. These lists of routes were computed using an HPC cluster, using 96GB RAM and 11 CPU cores for each journey. (As the cluster made 65 CPU cores available to us, we could run five such jobs at one time; the other CPUs were used to support the GPUs.)
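The route pre-computation step can be sketched as a bounded depth-first enumeration of simple routes. This is a toy illustration of the idea (the real computation ran on an HPC cluster); the adjacency representation and edge names are assumptions.

```python
def enumerate_routes(adj, lengths, start, goal, max_len):
    """Enumerate all simple routes from start to goal whose total edge
    length does not exceed max_len, returned sorted ascending by length.
    adj maps node -> list of (neighbour, edge_id); lengths maps
    edge_id -> metres."""
    routes = []

    def dfs(node, visited, path, dist):
        if node == goal:
            routes.append((dist, tuple(path)))
            return
        for nxt, edge in adj[node]:
            d = dist + lengths[edge]
            if nxt not in visited and d <= max_len:
                dfs(nxt, visited | {nxt}, path + [edge], d)

    dfs(start, {start}, [], 0.0)
    return sorted(routes)

# Toy network: A -e1- B -e2- C, plus a direct longer edge A -e3- C.
adj = {"A": [("B", "e1"), ("C", "e3")],
       "B": [("A", "e1"), ("C", "e2")],
       "C": [("B", "e2"), ("A", "e3")]}
lengths = {"e1": 400.0, "e2": 400.0, "e3": 900.0}
routes = enumerate_routes(adj, lengths, "A", "C", 1500.0)
```

Because the list is computed once per journey and sorted by distance, every subsequent Monte-Carlo run only needs to filter it, which is what makes the ablation step below cheap.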

Computing the perception of the system is done in two stages. The first is generating the ‘ground truth’ of where the accessibility barriers really are: a list of edges is randomly generated in line with the inaccessibility rate. The second is the randomization of errors made by the documentation system based on its performance, where the relevant proportions of errors are applied to the ground truth in line with the TPR and TNR. After both steps are complete, the navigation system has a list of edges which it perceives to contain accessibility barriers. As these are Monte-Carlo simulations, this step is repeated 500 times for each combination of TPR, TNR and inaccessibility rate, meaning 7.688 million journeys are simulated for each pair of locations ‘A’ and ‘B’ (we have 60 pairs in total).
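The journey counts follow directly from the parameter grid described above:

```python
tpr_values = 31    # TPR from 0.70 to 1.00 in steps of 0.01
tnr_values = 31    # TNR from 0.70 to 1.00 in steps of 0.01
rate_values = 16   # inaccessibility rate from 0.00 to 0.30 in steps of 0.02
repeats = 500      # Monte-Carlo repetitions per parameter combination
pairs = 60         # 'A'/'B' location pairs across the three cities

per_pair = tpr_values * tnr_values * rate_values * repeats
total = per_pair * pairs
print(per_pair, total)  # 7688000 461280000
```

This reproduces the 7.688 million journeys per location pair and the "over 460 million" simulated journeys in total.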

The system then selects the optimum route. We do this by ablation (so called because it operates by creating a list of edges that are deemed inaccessible and, in effect, deleting them from the network), where we identify the subset of all possible routes that do not contain an inaccessible edge (from the perspective of the navigation system): i.e. the routes that are deemed accessible. The shortest accessible route is then selected. This is done by way of a matrix operation on an Nvidia P100 GPU within our HPC cluster, which allowed us to use 8 GPUs at the same time.
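Logically, the ablation step reduces to a scan over the distance-sorted route list; the following is a minimal sketch (the toy routes and names are illustrative, and the actual implementation uses GPU matrix operations rather than a loop):

```python
def select_route(sorted_routes, perceived_barriers):
    """Ablation route selection: walk the distance-sorted route list
    and return the first (i.e. shortest) route containing no edge the
    system perceives as inaccessible. None means the system reports
    the journey as 'impassible'."""
    for length, edges in sorted_routes:
        if not any(e in perceived_barriers for e in edges):
            return length, edges
    return None  # no route deemed accessible

# Toy example: three candidate routes, sorted ascending by length.
routes = [
    (900,  [('A', 'X'), ('X', 'B')]),
    (1000, [('A', 'Y'), ('Y', 'X'), ('X', 'B')]),
    (1200, [('A', 'Y'), ('Y', 'B')]),
]
# The system believes edge ('X', 'B') contains a barrier, so the two
# shortest routes are ablated and the 1200m route is selected.
chosen = select_route(routes, {('X', 'B')})
```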

3.3.5. Performance Measures

Measuring performance requires the identification of the mistakes that can be made by a routing tool. There are three main types of navigation error (Table 1) that are of concern: (i) reporting that there is no accessible route when one exists, (ii) providing unnecessarily longer routes, and (iii) heading down a route segment that contains a barrier.

Table 1. Types of navigation error that can be made by a navigation system, and the real-world implications of each.

Error Type A: Reporting a journey is ‘impassible’ when an accessible route exists. Real-world cost: the journey is made by alternative transport (e.g. a taxi) or not made at all by the person with a disability.

Error Type B: Providing an unnecessarily longer route. Real-world cost: the person has to travel this additional distance. Alternatively, they choose to use alternative transport (e.g. a taxi) or not to make the journey at all.

Error Type C: Providing a journey that contains an inaccessible segment. Real-world cost: the person encounters an accessibility barrier. At the least, they have to reroute around the barrier. Depending on the nature of the barrier, there could be wider consequences (e.g. if the barrier in question is an unexpected trip hazard).

In line with (Tannert et al., 2019), we calculate the performance of a tool on a given route by using a distance measure, which is the distance of the route selected, with a further 500m penalty for each accessibility barrier encountered. This penalty is based on a realistic amount of time to select and travel an alternative route that does not contain a barrier. For example, a route of 1100m length on which there are two accessibility barriers is counted as 2100m (the actual distance of 1100m, plus a 500m penalty for each of the two accessibility barriers). Our results are reported in relative terms, with three comparators in mind:

  1. Perfect World: This is the performance relative to a world with no accessibility barriers in it (i.e. an inaccessibility rate equal to 0). It represents the experience of someone without any relevant mobility impairment.

  2. Perfect System: This is a routing algorithm which has a TPR and TNR both equal to 1. It represents the best possible performance of a routing algorithm and always chooses the shortest accessible route (if one exists).

  3. Oblivious System: This system has no knowledge of accessibility barriers and thus assumes every segment is accessible (which is equal to a TNR of 1 and a TPR of 0).

These comparators are chosen as they relate to practical scenarios. A comparison between a ‘perfect world’ and a ‘perfect system’ places a bound on how useful any accessibility documentation tool can be. At the same time, comparing any system to an ‘oblivious system’ provides a positive indication of the utility of an accessibility documentation system in terms of an improvement relative to the status quo of not having any accessibility information. In what follows, we present an exploratory analysis, which illustrates what an accessibility documentation tool is capable of in respect of differing inaccessibility rates, as well as the TPR and TNR of any given tool.
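The distance measure set out above can be expressed as a small helper (a sketch; the function name is ours):

```python
def penalised_distance(route_length_m, barriers_encountered, penalty_m=500):
    """The distance measure from Section 3.3.5: the actual route
    length plus a 500m penalty per accessibility barrier encountered."""
    return route_length_m + penalty_m * barriers_encountered

# The worked example from the text: a 1100m route with two barriers
# is counted as 2100m.
print(penalised_distance(1100, 2))  # 2100
```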

4. Results

4.1. What is an Ideal Documentation Tool Capable of?

We first present our simulation results concerning an ideal documentation tool, which has perfect knowledge of each and every accessibility barrier, as this places a limit on how much benefit can be gained from any documentation tool. Figure 5(i) illustrates the influence of the inaccessibility rate, whilst Figure 5(ii) illustrates the increase in distance required for the remaining routes. As can be seen, a city with even a moderate increase in the volume of accessibility barriers offers a severely impaired navigation experience. There is also a (relatively small) difference between the different cities in our ‘bag of routes’, with our simulation of Prato being the most challenging, and Canberra being somewhat less affected by an increased inaccessibility rate; this difference is limited despite the fact that these cities have markedly different structures, especially in terms of the number of intersections per node (Prato likely performs worse overall due to its relatively reduced number of nodes per intersection and its lower street density; further investigation, beyond the scope of this paper, is needed to indicate the full implications of geometry on the relative (in)accessibility of a city). These results clearly justify our focus on routes with a relatively low proportion of accessibility barriers: with an inaccessibility rate above 0.15, it is unlikely that even a perfect tool would be able to offer a sufficiently useful navigation experience, given the increase in distance and (more markedly) the proportion of routes that are unavailable.

Figure 5. On the left is Figure 5(i), which illustrates the proportion of routes that are actually impassible, whilst on the right is Figure 5(ii), which illustrates the increase in distance (as a proportion of the route length) for the shortest accessible route.

4.2. Accurate and Inaccurate Reporting of Impassible Routes

We consider that a route is impassible if there is no possible way to travel from A to B without interacting with at least one accessibility barrier; otherwise we call it navigable. A perfect tool (where TPR = TNR = 1) will always accurately report on impassible routes. However, a tool that is inaccurate can wrongly report routes to be navigable when they are not, or vice versa. Figure 6(i) illustrates the effect of the TNR upon navigable routes that are falsely reported to be impassible in the case where TPR = 1. As can be seen, a high TNR is necessary if a large proportion of accessible routes are not to be wrongly reported as being impassible. In a realistic scenario of an inaccessibility rate of 0.2, a TNR of 0.95 would still wrongly report nearly 30% of all accessible routes as being impassible, whilst a TNR of 0.9 would lead to over 50% of routes being falsely reported as such. Even in environments where there are no accessibility barriers at all (i.e. an inaccessibility rate of zero), a substantial proportion of routes can falsely be reported as being impassible, even with relatively high TNRs (e.g. at a TNR of 0.9, nearly 15% of routes are falsely reported as being impassible).

Figure 6. On the left, Figure 6(i) illustrates the proportion of navigable routes falsely reported as impassible when the TPR is fixed to 1, with differing TNR values and inaccessibility rates. Figure 6(ii) shows the case where the TNR is fixed to 1, illustrating the influence of the TPR on routes being falsely reported as navigable when they are in fact impassible.

A somewhat lesser problem is the risk of false positives, namely routes being declared navigable when in fact they are not. As can be seen in Figure 6(ii), a lower TPR can lead to this issue occurring quite often: for instance, at a TPR of 0.85 and an inaccessibility rate of 0.2, this happens nearly 20% of the time. The inaccessibility rate is also a particularly influential factor, with the increase being non-linear and this occurrence being rare at lower inaccessibility rates, even with a relatively low TPR; for example, with an inaccessibility rate of 0.08 and a TPR of 0.7, this still happens less than 10% of the time.

4.3. Performance of Navigation Tool versus an Ideal Navigation Tool on Navigable Routes

We now present the performance of a navigation tool for the cases where it reports a navigable route and the reported route is indeed navigable. We do this by applying the distance measure set out in Section 3.3.5, looking at the increase in distance relative to a perfect navigation tool and taking into account the accessibility barriers encountered. Noting that an ideal tool does not encounter accessibility barriers, the score is as follows:

score = \frac{(dist_{tool} + 500 \times nbarriers_{tool}) - dist_{perfect}}{dist_{perfect}}

In that formula, dist_{tool} is the distance of the route selected by the routing tool, dist_{perfect} is that of the journey selected by an ideal tool (i.e. where TNR = TPR = 1), and nbarriers_{tool} is the number of edges with accessibility barriers in the journey proposed by the navigation tool. All distances are in meters. The resulting performance is illustrated by surface plots in Figure 7 (note that the plots in Figures 7-9 are fitted surfaces generated using MATLAB’s Curve Fitting Toolbox and fourth degree polynomials). It is clear that the higher the inaccessibility rate, the worse the given score as a proportion of distance. Both the TNR and TPR have an impact on performance; however, lower TNRs have less of an impact (on routes already reported as being passible), with this being particularly clear when one compares charts with the lower inaccessibility rates. At higher inaccessibility rates, the impact of lower TPRs (and TNRs) is particularly pronounced, with many journeys being substantially increased in relative length. It can be seen from Figure 8(i) that much of this arises from barriers being interacted with, as opposed to someone simply being sent on a longer route.
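The score formula can be sketched as a small helper (the function name is ours):

```python
def score_vs_perfect(dist_tool, nbarriers_tool, dist_perfect, penalty_m=500):
    """Score relative to a perfect tool (TPR = TNR = 1): the tool's
    penalised distance, expressed as a relative increase over the
    perfect tool's (barrier-free) route. Distances in meters;
    0 means performance equal to the perfect tool."""
    return (dist_tool + penalty_m * nbarriers_tool - dist_perfect) / dist_perfect

# Example: a 1100m route with two barriers, where the perfect tool
# found a 1000m barrier-free route.
print(score_vs_perfect(1100, 2, 1000))  # 1.1
```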

Figure 7. Score increase relative to a perfect tool in respect of different inaccessibility rates, for tools with different TNR and TPR values. As can be seen, at lower inaccessibility rates, the TNR has a lesser influence than the TPR. Higher inaccessibility rates also markedly amplify the effect of lower TPR and TNR values.

4.4. Performance of Navigation Tool versus an Oblivious Navigation Tool

The previous three subsections considered the performance of a navigation tool versus an ideal routing tool. We now consider the performance of a routing tool with different TNR and TPR values relative to an oblivious navigation tool (i.e. one with TNR = 1, but TPR = 0), for the cases where the navigation tool reported an accessible route (notably, an oblivious tool will always report there being an accessible route). As can be seen from Figure 8(ii), even with relatively moderate inaccessibility rates, e.g. 0.1, the increase in accessibility barriers encountered on an average route is over 1 (i.e. on each journey, one would expect to encounter an accessibility barrier over and above that captured by the navigation tool): by contrast, even with a TPR of 0.7 and an inaccessibility rate of 0.2, one would not necessarily encounter a barrier any more than 50% of the time (per Figure 8(i)). Thus, even relatively poorly performing tools can have a substantial impact on barrier avoidance and provide a markedly better experience. A strongly performing tool would avoid barriers most of the time. It is also important to consider performance by way of distance. We compute a relative score as follows:

score = \frac{(dist_{tool} + 500 \times nbarriers_{tool}) - (dist_{oblivious} + 500 \times nbarriers_{oblivious})}{dist_{best}}

All the above terms have the same meaning as in the previous score, whilst dist_{oblivious} is the distance of the route proposed by the oblivious tool (i.e. TPR = 0, TNR = 1), and nbarriers_{oblivious} is the number of barriers encountered on it. The formula is weighted in proportion to the best possible performance. The result of the exercise is provided in Figure 9 (with negative distance increases being better than an oblivious system). Aside from cases where there is a relatively poor TPR and TNR, the oblivious tool is always outperformed, even at low inaccessibility rates. At an inaccessibility rate of just over 0.08 or more, even with TPR = TNR = 0.7, the performance is still better than the oblivious tool. In most cases with inaccessibility rates above 0.1, even today’s systems (with TNR = TPR = 0.8-0.85) still greatly outperform the oblivious tool, thereby offering a markedly improved experience (albeit an imperfect one).
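As with the previous score, this formula can be sketched directly (names are ours):

```python
def score_vs_oblivious(dist_tool, nb_tool,
                       dist_oblivious, nb_oblivious,
                       dist_best, penalty_m=500):
    """Score relative to an oblivious tool (TPR = 0, TNR = 1): the
    difference in penalised distances, weighted by the best possible
    (perfect-tool) distance. Negative scores mean the informed tool
    outperforms the oblivious baseline."""
    tool_cost = dist_tool + penalty_m * nb_tool
    oblivious_cost = dist_oblivious + penalty_m * nb_oblivious
    return (tool_cost - oblivious_cost) / dist_best

# Example: an informed tool takes a 1200m detour with no barriers;
# the oblivious tool takes the 1000m route but hits two barriers
# (best possible distance: 1000m). The detour wins.
print(score_vs_oblivious(1200, 0, 1000, 2, 1000))  # -0.8
```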

Figure 8. Illustration of the average number of barriers encountered per journey under different conditions. On the left, Figure 8(i) illustrates the barriers encountered due to lower TPRs (with the TNR fixed to 1). On the right, Figure 8(ii) illustrates the effect of different inaccessibility rates relative to an oblivious tool (i.e. where TPR = 0). As can be seen, even a relatively poorly performing routing tool is an improvement on no tool at all, but a relatively high TPR is required to avoid barriers on most occasions.

Figure 9. Performance plots comparing the navigation tool to an oblivious tool with TPR = 0 (and TNR = 1): negative scores are improvements compared to the ‘oblivious’ tool, and the lower the score the better. As can be seen, at very low inaccessibility rates, an oblivious tool performs slightly better than an informed tool (aside from tools with very high TPR and TNR); however, this advantage quickly dissipates as soon as the inaccessibility rate increases somewhat. Indeed, at realistic levels of performance, an informed routing tool quickly becomes markedly better than an ‘oblivious’ tool.

5. Discussion

5.1. What is a navigation tool capable of?

Our simulations give a clear indication of what can be achieved with a navigation-documentation tool, both with today’s performance and with likely future improvements in practice. The overall real-world implications of our simulations are as follows:

Barrier Avoidance is not realistic in today’s built environment: Our results show that barrier avoidance is unlikely to be a feasible approach towards providing a navigation tool, at least in today’s built environment. As such, the stated goal of barrier avoidance in many works on accessible navigation is practically impossible in today’s circumstances, even with a perfect documentation tool (and thus a fully informed navigation system). This is because even a moderately inaccessible city will lead to a significantly degraded navigation experience: a substantial proportion of journeys will be impossible, and those journeys which remain possible will be a lot longer. For example, at what we expect is a relatively optimistic 20% (or 0.2) inaccessibility rate (it is highly likely for some groups, such as electric wheelchair users, that the inaccessibility rate is over 0.3 in most cities; see (Froehlich et al., 2020), which provides the number of each type of barrier per 100m: in their best case city, Seattle, there were 0.3 obstacles and 0.6 surface problems per 100m, and given the average segment length in Seattle was 71.61m, even half these values would produce an inaccessibility rate well beyond 0.3, i.e. (0.3 + 0.6) × 71.61/100 = 0.64), this leads to 42.6% of all journeys being inaccessible on average (i.e. there is no accessible route that can be taken at all), with an average journey distance increase of 15% on those remaining routes which are navigable. Even at an inaccessibility rate of 10% (an unlikely scenario), 13.25% of journeys are impossible on average, and the average journey distance increase for the remaining journeys is 7.5%, thus being far from an ideal experience.

Barrier Minimization and ‘quality aware’ navigation is feasible: Against this context, we argue that the present goal of an accessibility navigation tool should be the more modest task of barrier minimization rather than avoidance; this might well change if documentation becomes far more accurate (i.e. substantially above 0.95 for the TPR and TNR). Indeed, minimization is more realistic under normal circumstances: as can be seen, even a relatively poorly performing accessibility documentation tool can greatly improve upon an oblivious one, as long as the limitation is made clear. An important implication for design flows from this: accessibility navigation tools should be expressly clear that they are likely to be inaccurate, and promoted as reducing the chance that accessibility barriers will be encountered. Once we design systems based on realistic expectations, we will have made tangible progress towards them being deployed in the day-to-day life of people with mobility impairments.

Better navigation would potentially be possible with a strategic effort to improve the built environment: We assume that accessibility barriers are randomly distributed. However, there are a few cities that make a concerted and strategic effort to provide accessible spaces in the right places, which may produce better navigation performance for a given inaccessibility rate. This strategy would align with work already done by city planners in respect of public transport services (Ceder, 2016; Victor and Ponnuswamy, 2012) (and, separately, the provision of cycling routes (Winters et al., 2013; Larsen et al., 2013)), who carefully configure the spatial layout of routes to optimise performance, suggesting a related approach could potentially work for accessibility as well. For instance, if some accessible ‘trunk routes’ were deliberately built and identified in advance (perhaps in a grid formation, as suggested for public transport in (Ceder, 2016; Pemberton, 2020)), it would presumably be possible to improve substantially upon this performance, although in that case a lot of (expensive) building work would be required by a city or local council. Notably, the results of different approaches towards embedding accessibility in the built environment can be simulated using our framework with a small modification (by making the relevant edges always accessible and automatically detected by the documentation tool as such), meaning these experiments can be done in the future.

5.2. Implications for Human Experience

The likely performance of a navigation tool across our range of scenarios set out in our results also has a range of important consequences on a human level, thereby leading to wider implications for the design of navigation systems for people with mobility impairments.

Navigation is not a panacea: Our results show that even providing a perfect solution to the documentation of the built environment does not solve the navigation problem (although it would help in identifying barriers to be addressed). Instead, the built environment needs to be made a lot more accessible in order to provide a reasonable experience for people with many mobility impairments. There is a real risk that the capability of tools to provide barrier minimization, and thus a substantially better experience for many people with mobility impairments, leads to fewer physical improvements being made. For example, barrier minimization would likely lead to a much better experience for manual wheelchair users (perhaps even a nearly equal one to people without mobility impairments), who would then be less concerned about physical improvements, but it would not address the problem for most electric wheelchair users (for whom physical improvements are necessary to give them a reasonable experience). Given the complicated politics around disability and the built environment (see e.g. (Kirkham et al., 2017; Saha et al., 2021)), the risk is that the most disabled people are disadvantaged by navigation systems, with fewer resources being spent on fixing the built environment, thus marginalizing them even further.

Transparent and explainable navigation tools: By acting as implicit performance metrics, our simulations can be used to manage expectations and so provide better navigation systems: this works by being clear as to the likely navigation performance of a given system, which can be done provided there are estimates available of the key variables (i.e. the inaccessibility rate, as well as the TPR / TNR of a documentation tool). Specifically, someone can be informed of the probability of encountering one or more accessibility barriers – for example, it would be possible to say ‘On average you might encounter 0.3 accessibility barriers on your journey taking this route’, or ‘Your journey will normally take between 15 and 25 minutes, considering the likelihood of having to re-route around accessibility barriers’. Being transparent in this manner may provide a better experience for end users of these systems (e.g. journey planning can better consider the risk of being misdirected), as well as ensuring that these systems are used with realistic expectations in mind, because the expected performance under given circumstances can be directly quantified, especially in cities where the rough inaccessibility rate for different types of mobility impairments is known.

Whilst the need for this has already been identified (see e.g. (Tannert et al., 2019)), we are unaware of any widely used accessible navigation system that presents options and routes in terms of risk. The simulations we have presented allow for this risk to be estimated and presented to end users in the context of a navigation system, thus overcoming this hurdle if the accuracy of the documentation tool can be estimated, likewise with the distribution of accessibility barriers in a city. By being upfront with people with mobility impairments, we can expect that in return they would participate in the reporting of barriers, thus enabling a tool to become gradually more accurate and informative as it is used, thereby closing this important feedback loop.

Studying Navigation Tools: Another implication of our results is that relatively small changes in inaccessibility rate can lead to markedly different user experiences (as the relationship is not linear). This has substantial implications for user experience studies of these tools. In particular, studies based on past experience (e.g. as with (Gupta et al., 2020; Hara et al., 2016)) of using existing tools present this difficulty, because it would be difficult to separate out the performance of the tool from the level of accessibility in the built environment. In the immediate term, this may mean that these investigations should be done in respect of controlled simulations (rather than real world studies), so the performance of the tool and the variance across locations can be disaggregated from the interface design problem. These studies are particularly important going forward, to help determine which types of error should be avoided and for which groups of people. For instance, some people might be less concerned about a tool falsely reporting a journey is impossible, than by being sent on a journey that contains accessibility barriers. The right balance to be struck will likely vary substantially across different populations of people with mobility impairments, as well as being governed somewhat by individual preferences.

5.3. Expanding the role of simulations in Mobile Navigation

The simulation approach that we have developed is potentially applicable in a wide range of circumstances, with some adaptions, making this work of general import to the field of Mobile HCI beyond accessible navigation. There are three main domains in which this work can be expanded.

Other accessibility navigation problems: The modelling approach that we have developed can also be applied to other accessibility related problems, with some modifications or different parameters. For example, a city planner could use this same approach to determine the impact of addressing specific accessibility barriers on the accessible navigation of their city (e.g. by setting purpose-built accessible edges/street segments to be always accessible and marked as such to a documentation tool, which would in turn be 100% accurate in identifying it), thus optimizing the use of scarce public resources to make accessibility improvements. Similarly, our approach could also be applied to optimize the performance of a navigation tool (especially in terms of trade-offs between TNR and TPR) to provide the best experience for individual users, depending on their preferences and needs.

With further expansion, it would also be possible to modify our approach to include user weights for different barriers: our simulations took the view that accessibility barriers should always be avoided (with the result of demonstrating the infeasibility of a barrier avoidance approach) and thus gave an infinite weight to an accessibility barrier, but this value can be changed. For instance, a manual wheelchair user might simply wish to somewhat reduce the number of times they have to ‘curb hop’ due to a missing dropped curb, and would benefit from a system that gave a potential dropped curb a penalty (but a smaller one than automatically avoiding a given route). This would involve adding a distance weight (based on user preferences) for a perceived barrier of this nature and then picking the shortest route with this weight added. This approach can be expanded to include different weights for different types of barriers, depending on the person in question.

Navigation beyond accessibility: Navigation in the real world has been a long-standing topic of interest in the wider MobileHCI community, including the design of routing interfaces. The minimization of the risk of error is one such concern that arises in a range of circumstances, be it as-the-crow-flies navigation (Savino et al., 2020), missing GPS information where there is a degraded signal (Ranasinghe et al., 2019; Burigat and Chittaro, 2011), or the provision of ‘scenic’ or more comfortable routes (Johnson et al., 2017). These types of problems could also benefit from the identification of street elements that likely contain problems, be it a lack of a scenic feature, an area that is likely to have degraded mobile or GPS signals, or other features which make navigation more challenging. Just as with accessibility barriers, there is likely to be imperfect information about these features, and thus a risk of encountering a suboptimal route. This is important for at least two reasons: first, a user can be given the choice of avoiding this risk, and secondly, the risk of encountering a particular barrier can be quantified (which in turn could feed into the interface design of a ‘quality aware’ navigation system (Siriaraya et al., 2020)). At the same time, the concept of an accessibility related feature can be a wide one if accessibility is considered in a holistic manner: for instance, some people seek to avoid areas of crime or poor weather (Johnson et al., 2017), which may sometimes be for reasons arising from a mental health condition or another impairment, but is not generally considered as the primary concern.

Monte-Carlo Simulations as Practical and Implicit Performance Metrics: Performance metrics are descriptive statistics used to summarise the results of a machine learning system, including a plethora of mobile sensing systems (especially activity trackers). Yet providing meaningful metrics is an important challenge, both for evaluating algorithms (by researchers) and for representing results to the public. Our simulation approach is important because it is distinct from the static and hard-edged performance metrics typically used for measuring classifier performance, and instead provides a practical measure of real-world performance focused on the relevant problem at hand. Given that accessibility documentation is similar to activity recognition problems (and indeed, systems using inertial sensors and the motion of users to detect barriers are activity recognition systems), this is thus a new approach towards addressing the ‘performance metrics’ problem for activity recognition (see e.g. (Hammerla and Plötz, 2015; Ward et al., 2011) for a discussion of this problem).

As such, this work demonstrates how to use simulations to construct implicit performance metrics, that meaningfully translate static performance metrics into problem-focused ones. Whilst existing performance metrics are helpful when comparing the relative performance of algorithms, these simulations answer a different question: what performance is ‘good enough’ (in respect of a given real-world problem)? These implicit performance metrics are therefore an important complement to existing metrics traditionally used to measure classifier performance, and provide a platform for further exploration of real-world performance. This approach could in principle be adapted more widely across the domain of Mobile HCI, with Monte-Carlo simulations being used to provide information that is more relevant to end-users in respect of any system that depends on mobile-sensing.

5.4. Limitations

Whilst our work clearly shows the benefits of using simulations to provide an indication of real-world performance, it should be observed that the simulations we have conducted are somewhat exploratory in nature. Our goal was to demonstrate some of the opportunities of following such an approach for the accessibility navigation/documentation problem, as well as illustrating how this can be practically implemented: this has been successful in raising a number of important concerns for the design of these systems and the advancement of this important research agenda. Nevertheless, given the complexity of the issue (and the simplifying assumptions we necessarily made), there are various parameter changes that potentially could lead to different results, including our choice of the bag of routes and city structures, our choice of navigation strategy, and our decision to constrain the length of routes, as well as smaller decisions in the design and implementation of our experiments. Our simulation assumed that the main objective was avoidance, with the effect that encounters with barriers were heavily penalized: some people with disabilities might be less demanding in their expectations (especially if their goal is to minimize discomfort). This means that there is perhaps an element of pessimism in our arguments, as a weaker tool might be useful for a minority of disabled people (even though this would be far away from the ambitions of the accessibility documentation agenda). Given the wide lacuna that this work fits into, we hope to see future investigations and simulations that explore alternative navigation methodologies and objectives.

6. Conclusion

This work illustrates the potential benefit of carefully designed computer simulations in advancing the agenda of accessibility documentation and navigation. In this work, we contributed a tractable approach for simulating a very large number of journeys in a reasonable time. We used this approach to demonstrate the performance boundaries placed on navigation tools for people with mobility impairments, thereby clearly illustrating the circumstances in which such tools can be useful and to what extent. The resulting implicit performance metrics led to clear design principles that respond to the boundaries placed on the performance of any such navigation tool, thus enabling an optimum approach towards the design of this specific type of navigation tool. Our simulations are also a supportive framework that enable the effective design of better navigation tools for people with mobility impairments and provide a means to evaluate their likely performance in a wide range of circumstances, thus enabling the management of the expectations of an end user. At the same time, with some adaption, this work can be used to design simulations that could address other types of navigation challenges, including those that are outside of the space of accessibility for people with mobility impairments.

Acknowledgements.
This research was supported in part by the Monash eResearch Centre and eSolutions-Research Support Services through the use of the MonARCH HPC Cluster.
