Title: An AI-enabled Annotation Platform for Storefront Accessibility and Localization
Although various navigation apps are available, people who are blind or have low vision (PVIB) still face challenges in locating store entrances due to missing geospatial information in existing map services. Previously, we developed a crowdsourcing platform to collect storefront accessibility and localization data to address these challenges. In this paper, we significantly improve the efficiency of data collection and user engagement in our new AI-enabled Smart DoorFront platform by designing and developing several important features, including a gamified credit ranking system, a volunteer contribution estimator, an AI-based pre-labeling function, and an image gallery feature. To achieve this, we integrate a specially designed deep learning model, MultiCLU, into Smart DoorFront. We also introduce an online machine learning mechanism that iteratively trains the MultiCLU model using newly labeled storefront accessibility objects and their locations in images. Our new DoorFront platform not only significantly improves the efficiency of storefront accessibility data collection, but also optimizes the user experience. We conducted interviews with six adults who are blind to better understand their daily travel challenges; their feedback indicated that the storefront accessibility data collected via the DoorFront platform would be very beneficial for them.
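
The online training mechanism described above can be pictured roughly as follows. This is a minimal sketch of a batch-style retraining loop, not the platform's actual implementation; every name in it (StorefrontDetector, LabeledObject, fetch_new_labels, fine_tune) is a hypothetical stand-in for whatever the Smart DoorFront backend and the MultiCLU model actually expose.

    # Minimal sketch of an iterative (online) retraining loop for a storefront
    # object detector. All names below are hypothetical illustrations, not the
    # actual Smart DoorFront / MultiCLU API.
    import time
    from dataclasses import dataclass

    @dataclass
    class LabeledObject:
        image_id: str   # identifier of the GSV panorama or storefront image
        category: str   # e.g. "door", "doorknob", "stairs"
        bbox: tuple     # (x_min, y_min, x_max, y_max) in image coordinates

    def fetch_new_labels(since: float) -> list:
        """Pull labels volunteers have submitted since the given timestamp.
        Placeholder: a real system would query the platform's database."""
        return []

    class StorefrontDetector:
        """Stand-in for a detection model such as MultiCLU."""
        def fine_tune(self, batch: list) -> None:
            ...  # one incremental training pass over the new annotations

        def pre_label(self, image_id: str) -> list:
            ...  # AI-based pre-labeling offered to volunteers for correction
            return []

    def online_training_loop(model: StorefrontDetector,
                             min_batch: int = 100,
                             poll_seconds: int = 3600) -> None:
        """Periodically fold freshly labeled objects back into the model."""
        last_sync = time.time()
        while True:
            new_labels = fetch_new_labels(since=last_sync)
            if len(new_labels) >= min_batch:
                model.fine_tune(new_labels)  # model improves as data accrues
                last_sync = time.time()
            time.sleep(poll_seconds)

The interesting design property is the feedback loop: better pre-labels make volunteer annotation faster, which yields more labeled data, which in turn improves the pre-labels.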
Award ID(s):
2131186, 1827505, 1737533
PAR ID:
10440677
Editor(s):
Robles, A.
Date Published:
Journal Name:
Journal on Technology and Persons with Disabilities
Volume:
11
Issue:
0
ISSN:
2330-4219
Page Range / eLocation ID:
76-94
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.
  2. The goal of the proposed project is to transform a large transportation hub into a smart and accessible hub (SAT-Hub), with minimal infrastructure change. The societal need is significant, especially for people in great need, such as those who are blind and visually impaired (BVI) or with Autism Spectrum Disorder (ASD), as well as those unfamiliar with metropolitan areas. With our interdisciplinary background in urban systems, sensing, AI and data analytics, accessibility, and paratransit and assistive services, our solution is a human-centric system approach that integrates facility modeling, mobile navigation, and user interface designs. We leverage several transportation facilities in the heart of New York City and throughout the State of New Jersey as testbeds to ensure the relevance of the research and a smooth transition to real-world applications.
  3. Santiago, J. (Ed.)
    Storefront accessibility can substantially impact the way people who are blind or visually impaired (BVI) travel in urban environments. Entrance localization is one of the biggest challenges for BVI people. In addition, improperly designed staircases and obstructive store decorations can create considerable mobility challenges for BVI people, making it more difficult for them to navigate their communities and hence reducing their desire to travel. Unfortunately, there are few approaches to acquiring this information in advance through computational tools or services. In this paper, we propose a solution to collect large-scale accessibility data of New York City (NYC) storefronts using a crowdsourcing approach on Google Street View (GSV) panoramas. We develop a web-based crowdsourcing application, DoorFront, which enables volunteers not only to remotely label storefront accessibility data on GSV images, but also to validate the labeling results to ensure high data quality (a minimal sketch of such a validation step appears after this list). To study the usability and user experience of our application, we conducted an informal beta test and designed a user experience survey for test volunteers. The user feedback was very positive, indicating the high potential and usability of the proposed application.
  4. People who are blind share their images and videos with companies that provide visual assistance technologies (VATs) to gain access to information about their surroundings. A challenge is that people who are blind cannot independently validate the content of the images and videos before they share them, and their visual data commonly contains private content. We examine privacy concerns for blind people who share personal visual data with VAT companies that provide descriptions authored by humans or artificial intelligence (AI). We first interviewed 18 people who are blind about their perceptions of privacy when using both types of VATs. Then we asked the participants to rate 21 types of image content according to their level of privacy concern if the information was shared knowingly versus unknowingly with human- or AI-powered VATs. Finally, we analyzed what information VAT companies communicate to users about their collection and processing of users' personal visual data through their privacy policies. Our findings have implications for the development of VATs that safeguard blind users' visual privacy, and our methods may be useful for other camera-based technology companies and their users.
  5. In the last decade, there has been a surge in development and mainstream adoption of Artificial Intelligence (AI) systems that can generate textual image descriptions from images. However, only a few of these, such as Microsoft's SeeingAI, are specifically tailored to the needs of screen reader users who are blind, and none have been brought to bear on the particular challenges faced by parents who desire image descriptions of children's picture books. Such images have distinct qualities, but no research exists exploring the current state of the art and opportunities to improve image-to-text AI systems for this problem domain. We conducted a content analysis of the image descriptions generated for a sample of 20 images selected from 17 recently published children's picture books, using five AI systems: asticaVision, BLIP, SeeingAI, TapTapSee, and VertexAI. We found that descriptions varied widely in their accuracy and completeness, with only 13% meeting both criteria. Overall, our findings suggest a need for AI image-to-text generation systems that are trained on the types, contents, styles, and layouts characteristic of children's picture book images, towards increased accessibility for blind parents.
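
As referenced in item 3 above, the DoorFront label-then-validate workflow can be pictured with a simple quorum rule, sketched below. The data model and the quorum and agreement thresholds are illustrative assumptions, not the application's actual implementation.

    # Minimal sketch of quorum-based validation for crowdsourced storefront
    # labels, in the spirit of DoorFront's label-then-validate workflow.
    # The quorum and agreement threshold are illustrative assumptions.
    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class CandidateLabel:
        label_id: str
        category: str                              # e.g. "door", "ramp"
        votes: list = field(default_factory=list)  # validator verdicts

        def add_vote(self, is_correct: bool) -> None:
            self.votes.append(is_correct)

        def status(self, quorum: int = 3, agreement: float = 0.66) -> str:
            """Accept or reject a label once enough validators have voted."""
            if len(self.votes) < quorum:
                return "pending"
            approve_ratio = Counter(self.votes)[True] / len(self.votes)
            return "accepted" if approve_ratio >= agreement else "rejected"

    label = CandidateLabel(label_id="gsv_123:door_0", category="door")
    for verdict in (True, True, False):
        label.add_vote(verdict)
    print(label.status())  # -> "accepted" (2 of 3 validators agreed)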