Title: Using augmented reality to cue obstacles for people with low vision
Detecting and avoiding obstacles while navigating can pose a challenge for people with low vision, but augmented reality (AR) has the potential to assist by enhancing obstacle visibility. Perceptual and user experience research is needed to understand how to craft effective AR visuals for this purpose. We developed a prototype AR application capable of displaying multiple kinds of visual cues for obstacles on an optical see-through head-mounted display. We assessed the usability of these cues via a study in which participants with low vision navigated an obstacle course. The results suggest that 3D world-locked AR cues were superior to directional heads-up cues for most participants during this activity.
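The abstract does not detail the cue geometry, but the contrast between the two cue types can be illustrated with a little math. The sketch below is a minimal geometric example, not the paper's implementation: a world-locked cue is rendered at the obstacle's 3D position in the world, while a heads-up directional cue reduces the obstacle to a bearing relative to the user's gaze. The coordinate convention and function names are assumptions.

```python
# Minimal geometric sketch (not the paper's implementation): contrasts a
# world-locked cue, anchored at the obstacle's 3D position, with a heads-up
# cue that only reports the obstacle's bearing relative to the user's gaze.
import numpy as np

def to_head_frame(p_world, head_pos, head_rot):
    """Express a world-space point in the user's head coordinate frame.

    head_rot is a 3x3 rotation matrix mapping head axes to world axes
    (+z = forward, +x = right, +y = up in this sketch's convention).
    """
    return head_rot.T @ (np.asarray(p_world) - np.asarray(head_pos))

def world_locked_cue(obstacle_world, head_pos, head_rot):
    """A world-locked cue is rendered at the obstacle's own position;
    here we return that position in the head frame for the renderer."""
    return to_head_frame(obstacle_world, head_pos, head_rot)

def heads_up_cue(obstacle_world, head_pos, head_rot):
    """A heads-up directional cue discards depth and returns only the
    azimuth (left/right) and elevation (up/down) of the obstacle,
    e.g. to drive an on-screen arrow."""
    p = to_head_frame(obstacle_world, head_pos, head_rot)
    azimuth = np.degrees(np.arctan2(p[0], p[2]))    # + = obstacle to the right
    elevation = np.degrees(np.arctan2(p[1], np.hypot(p[0], p[2])))
    return azimuth, elevation

# Example: obstacle 2 m ahead and 0.5 m to the right of a user at the origin.
head_pos, head_rot = np.zeros(3), np.eye(3)
obstacle = np.array([0.5, 0.0, 2.0])
print(world_locked_cue(obstacle, head_pos, head_rot))  # [0.5 0.  2. ]
print(heads_up_cue(obstacle, head_pos, head_rot))      # (~14 deg, 0 deg)
```

The practical difference is that the world-locked cue requires a full 6-DoF head pose and spatially registered rendering on the see-through display, whereas the heads-up cue needs only the obstacle's bearing.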
Award ID(s): 2041726
PAR ID: 10396243
Author(s) / Creator(s):
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Express
Volume: 31
Issue: 4
ISSN: 1094-4087; OPEXFF
Format(s): Medium: X
Size(s): Article No. 6827
Sponsoring Org: National Science Foundation
More Like this
  1. Studies have shown that bats are capable of using visual information for a variety of purposes, including navigation and foraging, but the relative contributions of the visual and auditory modalities to obstacle avoidance have yet to be fully investigated, particularly in laryngeal echolocating bats. A first step requires the characterization of behavioral responses to different combinations of sensory cues. Here, we quantified the behavioral responses of the insectivorous big brown bat, Eptesicus fuscus, in an obstacle avoidance task offering different combinations of auditory and visual cues. To do so, we utilized a new method that eliminates the confounds typically associated with testing bat vision and precludes auditory cues. We found that the presence of visual and auditory cues together enhances bats' avoidance response to obstacles compared with cues requiring either vision or audition alone. Flight and echolocation behaviors, such as speed and call rate, did not vary significantly across obstacle conditions and thus are not informative indicators of a bat's response to obstacle stimulus type. These findings advance the understanding of the relative importance of the visual and auditory modalities in guiding obstacle avoidance behaviors.
  2. This paper proposes an AR-based real-time mobile system for assistive indoor navigation with target segmentation (ARMSAINTS) for both sighted and blind or low-vision (BLV) users to safely explore and navigate an indoor environment. The solution comprises four major components: graph construction, hybrid modeling, real-time navigation, and target segmentation. The system utilizes an automatic graph construction method to generate a graph from a 2D floorplan and a Delaunay triangulation-based localization method to provide precise localization with negligible error. The 3D obstacle detection method integrates the existing capability of AR with a 2D object detector and a semantic target segmentation model to detect and track 3D bounding boxes of obstacles and people, increasing BLV users' safety and understanding when traveling in an indoor environment. The entire system does not require the installation and maintenance of expensive infrastructure, runs in real time on a smartphone, and can easily adapt to environmental changes.
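The abstract above does not spell out how its Delaunay triangulation-based localization works. The sketch below is a hedged illustration of one way such a mapping could be set up: a query position in the AR session frame is located inside the enclosing Delaunay triangle of reference landmarks and carried into 2D floorplan coordinates by barycentric interpolation. All landmark coordinates and names are invented.

```python
# Hedged sketch of a Delaunay-based mapping between an AR session frame and a
# 2D floorplan, loosely inspired by the abstract's "Delaunay triangulation-
# based localization"; the reference points and both coordinate sets are made up.
import numpy as np
from scipy.spatial import Delaunay

# Reference landmarks with coordinates known in both frames (hypothetical).
ar_pts = np.array([[0.0, 0.0], [4.1, 0.2], [0.1, 3.9], [4.0, 4.2]])    # AR/session frame (m)
plan_pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])  # floorplan frame (m)

tri = Delaunay(ar_pts)

def ar_to_floorplan(q):
    """Map an AR-frame position to floorplan coordinates using barycentric
    interpolation within the enclosing Delaunay triangle."""
    q = np.asarray(q, dtype=float)
    s = tri.find_simplex(q)
    if s == -1:
        return None                      # outside the triangulated area
    T = tri.transform[s]                 # affine transform for this simplex
    b = T[:2] @ (q - T[2])
    bary = np.append(b, 1.0 - b.sum())   # barycentric coordinates of q
    return bary @ plan_pts[tri.simplices[s]]

print(ar_to_floorplan([2.0, 2.0]))       # approximately (2, 2) in the floorplan frame
```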
  3. Real-time detection of 3D obstacles and recognition of humans and other objects is essential for blind or low-vision people to travel not only safely and independently but also confidently and interactively, especially in a cluttered indoor environment. Most existing 3D obstacle detection techniques, which are widely applied in robotic applications and outdoor environments, often require high-end devices to ensure real-time performance. There is a strong need for a low-cost, highly efficient technique for 3D obstacle detection and object recognition in indoor environments. This paper proposes an integrated 3D obstacle detection system implemented on a smartphone that utilizes deep-learning-based pre-trained 2D object detectors and ARKit-based point cloud data acquisition to predict and track the 3D positions of multiple objects (obstacles, humans, and other objects) and then provide alerts to users in real time. The system consists of four modules: 3D obstacle detection, 3D object tracking, 3D object matching, and information filtering. Preliminary tests in a small house setting indicated that the application could reliably detect large obstacles, including their 3D positions and sizes in the real world, as well as the positions of small obstacles, without any expensive devices besides an iPhone.
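As a rough illustration of the kind of 2D-detector-plus-point-cloud fusion the abstract describes, the sketch below estimates an obstacle's 3D position from the cloud points that project into a detector's bounding box, then raises an alert when it falls within a distance threshold. The pinhole intrinsics, detection box, and threshold are illustrative, not taken from the paper.

```python
# Minimal sketch (not the paper's pipeline): fuse a 2D detector's bounding box
# with a sparse 3D point cloud projected into the image to estimate an
# obstacle's 3D position, then raise an alert when it is within a threshold.
import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Project 3D points in the camera frame onto the image plane."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1), z

def locate_obstacle(box, points_cam, fx, fy, cx, cy):
    """Median 3D position of cloud points that fall inside a 2D box
    (u_min, v_min, u_max, v_max); returns None if no points hit the box."""
    uv, z = project(points_cam, fx, fy, cx, cy)
    u_min, v_min, u_max, v_max = box
    inside = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
             (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max) & (z > 0)
    if not inside.any():
        return None
    return np.median(points_cam[inside], axis=0)

def alert_if_close(position, threshold_m=1.5):
    if position is not None and np.linalg.norm(position) < threshold_m:
        return f"obstacle ~{np.linalg.norm(position):.1f} m ahead"
    return None

# Example with synthetic data: a cluster of points ~1.2 m in front of the camera.
cloud = np.random.normal(loc=[0.0, 0.0, 1.2], scale=0.05, size=(200, 3))
box = (200, 100, 440, 380)              # hypothetical detector output (pixels)
pos = locate_obstacle(box, cloud, fx=500, fy=500, cx=320, cy=240)
print(alert_if_close(pos))
```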
  4. Social VR has increased in popularity due to its affordances for rich, embodied, and nonverbal communication. However, nonverbal communication remains inaccessible for blind and low vision people in social VR. We designed accessible cues with audio and haptics to represent three nonverbal behaviors: eye contact, head shaking, and head nodding. We evaluated these cues in real-time conversation tasks where 16 blind and low vision participants conversed with two other users in VR. We found that the cues were effective in supporting conversations in VR. Participants had statistically significantly higher scores for accuracy and confidence in detecting attention during conversations with the cues than without. We also found that participants had a range of preferences and uses for the cues, such as learning social norms. We present design implications for handling additional cues in the future, such as the challenges of incorporating AI. Through this work, we take a step towards making interpersonal embodied interactions in VR fully accessible for blind and low vision people. 
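The abstract above does not describe how the nonverbal behaviors are detected or exactly how they map onto audio and haptics. The toy sketch below shows one plausible pipeline: classify nodding versus head shaking from oscillations in tracked head pitch and yaw, then dispatch a placeholder audio or haptic cue. The thresholds, the gesture detector, and the cue mapping are all assumptions.

```python
# Toy sketch, not the study's implementation: classify a short window of head
# rotation samples as nodding (pitch oscillation) or shaking (yaw oscillation)
# and dispatch a non-visual cue. Thresholds and the cue backends are invented.
import numpy as np

def count_reversals(signal, min_amplitude_deg=5.0):
    """Count direction reversals in an angle trace that exceeds a minimum swing."""
    s = np.asarray(signal, dtype=float)
    if np.ptp(s) < min_amplitude_deg:
        return 0
    d = np.sign(np.diff(s))
    d = d[d != 0]
    return int(np.sum(d[1:] != d[:-1]))

def classify_head_gesture(pitch_deg, yaw_deg):
    """Return 'nod', 'shake', or None from windows of pitch/yaw samples."""
    nods, shakes = count_reversals(pitch_deg), count_reversals(yaw_deg)
    if nods >= 2 and nods > shakes:
        return "nod"
    if shakes >= 2 and shakes > nods:
        return "shake"
    return None

def dispatch_cue(gesture, speaker_name):
    # Placeholder outputs standing in for spatialized audio / controller haptics.
    if gesture == "nod":
        print(f"[audio] {speaker_name} is nodding")
    elif gesture == "shake":
        print(f"[haptic] {speaker_name} is shaking their head")

# Example: a synthetic nodding motion (pitch oscillates, yaw stays flat).
t = np.linspace(0, 1.5, 45)
dispatch_cue(classify_head_gesture(12 * np.sin(2 * np.pi * 2 * t), 0 * t), "Avatar B")
```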
  5. For many types of robots, avoiding obstacles is necessary to prevent damage to the robot and environment. As a result, obstacle avoidance has historically been an important problem in robot path planning and control. Soft robots represent a paradigm shift with respect to obstacle avoidance because their low mass and compliant bodies can make collisions with obstacles inherently safe. Here we consider the benefits of intentional obstacle collisions for soft robot navigation. We develop and experimentally verify a model of robot-obstacle interaction for a tip-extending soft robot. Building on the obstacle interaction model, we develop an algorithm to determine the path of a growing robot that takes into account obstacle collisions. We find that obstacle collisions can be beneficial for open-loop navigation of growing robots because the obstacles passively steer the robot, both reducing the uncertainty of the location of the robot and directing the robot to targets that do not lie on a straight path from the starting point. Our work shows that for a robot with predictable and safe interactions with obstacles, target locations in a cluttered, mapped environment can be reached reliably by simply setting the initial trajectory. This has implications for the control and design of robots with minimal active steering.
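A toy 2D model, not the authors' interaction model, can convey why collisions help: in the sketch below, a tip-extending robot grows along a fixed initial heading and is passively redirected along an obstacle's boundary on contact, so the obstacle steers it toward a region it would otherwise miss. The geometry, step size, and obstacle layout are invented.

```python
# Toy 2D sketch (not the authors' model): a tip-extending robot grows along a
# fixed initial heading and, on contact with a circular obstacle, is passively
# deflected along the obstacle boundary instead of steering actively.
import numpy as np

def grow(start, heading, obstacles, length=3.0, ds=0.01):
    """Simulate tip growth; obstacles are (center, radius) pairs in meters."""
    tip = np.asarray(start, dtype=float)
    h = np.asarray(heading, dtype=float)
    h = h / np.linalg.norm(h)
    path = [tip.copy()]
    for _ in range(int(length / ds)):
        step = h.copy()
        for center, radius in obstacles:
            d = tip + ds * step - np.asarray(center, dtype=float)
            if np.linalg.norm(d) < radius:        # proposed step would penetrate
                n = d / np.linalg.norm(d)         # outward normal at contact
                t = np.array([-n[1], n[0]])       # obstacle boundary tangent
                if np.dot(t, step) < 0:           # keep the tangent direction
                    t = -t                        # closest to the current heading
                step = t
        h = step                                  # collisions permanently redirect growth
        tip = tip + ds * step
        path.append(tip.copy())
    return np.array(path)

# An obstacle just below the straight-line path deflects the robot upward,
# toward a target it would not reach by growing straight ahead.
path = grow(start=(0.0, 0.0), heading=(1.0, 0.0), obstacles=[((1.0, -0.15), 0.3)])
print(path[-1])   # final tip position, redirected up and to the right
```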