Title: A Shared Autonomy Approach for Wheelchair Navigation based on Learned User Preferences.
Research on robotic wheelchairs covers a broad range from complete autonomy to shared autonomy to manual navigation by a joystick or other means. Shared autonomy is valuable because it allows the user and the robot to complement each other, to correct each other's mistakes and to avoid collisions. In this paper, we present an approach that can learn to replicate path selection according to the wheelchair user's individual, often subjective, criteria in order to reduce the number of times the user has to intervene during automatic navigation. This is achieved by learning to rank paths using a support vector machine trained on selections made by the user in a simulator. If the classifier's confidence in the top ranked path is high, it is executed without requesting confirmation from the user. Otherwise, the choice is deferred to the user. Simulations and laboratory experiments using two path generation strategies demonstrate the effectiveness of our approach.
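The ranking-and-deferral loop described in the abstract is straightforward to prototype. Below is a minimal sketch, assuming hand-picked path features (length, minimum obstacle clearance, smoothness), a pairwise transformation to train a linear ranking SVM on the user's simulator selections, and a margin between the top two scores as the confidence test; the feature names, the trial data, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of confidence-gated path ranking with a pairwise ranking SVM.
# The features [length, min_clearance, smoothness], the trial data, and the
# margin threshold are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(features, chosen_idx):
    """Turn one user selection into pairwise examples: chosen vs. each rejected path."""
    X, y = [], []
    for j in range(len(features)):
        if j == chosen_idx:
            continue
        X.append(features[chosen_idx] - features[j]); y.append(1)
        X.append(features[j] - features[chosen_idx]); y.append(-1)
    return np.asarray(X), np.asarray(y)

# Hypothetical simulator trials: (candidate path features, index the user picked).
trials = [
    (np.array([[5.0, 0.4, 0.9], [4.2, 0.2, 0.7], [6.1, 0.8, 0.8]]), 2),
    (np.array([[3.1, 0.5, 0.6], [3.0, 0.1, 0.9]]), 0),
]
pairs = [pairwise_transform(f, c) for f, c in trials]
clf = LinearSVC().fit(np.vstack([X for X, _ in pairs]),
                      np.concatenate([y for _, y in pairs]))

def select_or_defer(path_features, margin=0.5):
    """Return the top-ranked path index, or None to defer the choice to the user."""
    scores = path_features @ clf.coef_.ravel()
    order = np.argsort(scores)[::-1]
    if len(order) > 1 and scores[order[0]] - scores[order[1]] < margin:
        return None  # low confidence: ask the user instead of executing
    return int(order[0])  # high confidence: execute without confirmation
```

The margin test between the top two ranked paths is one plausible reading of "the classifier's confidence in the top ranked path"; a calibrated probability would be another.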
Award ID(s):
1637761
PAR ID:
10041311
Author(s) / Creator(s):
Date Published:
Journal Name:
Fifth International Workshop on Assistive Computer Vision and Robotics
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Shared autonomy enables robots to infer user intent and assist in accomplishing it. But when the user wants to do a new task that the robot does not know about, shared autonomy will hinder their performance by attempting to assist them with something that is not their intent. Our key idea is that the robot can detect when its repertoire of intents is insufficient to explain the user's input, and give them back control. This then enables the robot to observe unhindered task execution, learn the new intent behind it, and add it to its repertoire. We demonstrate with both a case study and a user study that our proposed method maintains good performance when the human's intent is in the robot's repertoire, outperforms prior shared autonomy approaches when it isn't, and successfully learns new skills, enabling efficient lifelong learning for confidence-based shared autonomy. (A toy sketch of this mismatch test appears after this list.)
  2. Users play an integral role in the performance of many robotic systems, and robotic systems must account for differences in users to improve collaborative performance. Much of the work in adapting to users has focused on designing teleoperation controllers that adjust to extrinsic user indicators such as force or intent, but do not adjust to intrinsic user qualities. In contrast, the Human-Robot Interaction community has extensively studied intrinsic user qualities, but its results are not rapidly fed back into autonomy design. Here we provide foundational evidence for a new strategy that augments current shared control, and provide a mechanism to directly feed results from the HRI community back into autonomy design. Our evidence is based on a study examining the impact of the user quality "locus of control" on telepresence robot performance. Our results support our hypothesis that key user qualities can be inferred from human-robot interactions (such as through path deviation or time to completion) and that switching or adaptive autonomies might improve shared control performance. (See the sketch after this list.)
  3. Navigating safely and independently presents considerable challenges for people who are blind or have low vision (BLV), as it requires a comprehensive understanding of their neighborhood environments. Our user study reveals that understanding sidewalk materials and objects on the sidewalks plays a crucial role in navigation tasks. This paper presents a pioneering study in the field of navigational aids for BLV individuals. We investigate the feasibility of using auditory data, specifically the sounds produced by cane tips against various sidewalk materials, to achieve material identification. Our approach utilizes machine learning and deep learning techniques to classify sidewalk materials solely based on audio cues, marking a significant step towards empowering BLV individuals with greater autonomy in their navigation. This study contributes in two major ways. First, a lightweight and practical method is developed for volunteers or BLV individuals to autonomously collect auditory data of sidewalk materials using a microphone-equipped white cane, an innovative approach that transforms routine cane usage into an effective data-collection tool. Second, a deep learning-based classifier is designed that leverages a dual architecture to enhance audio feature extraction: a pre-trained Convolutional Neural Network (CNN) for regional feature extraction from two-dimensional Mel-spectrograms, and a booster module for global feature enrichment. Experimental results indicate that the best model achieves an accuracy of 80.96% using audio data only, effectively recognizing sidewalk materials. (A hedged sketch of the spectrogram-plus-CNN idea appears after this list.)
  4. Promising new technology has recently emerged to increase the level of safety and autonomy in driving, including lane- and distance-keeping assist systems, automatic braking systems, and even highway auto-drive systems. Each of these technologies brings cars closer to the ultimate goal of fully autonomous operation. While it is still unclear if and when safe, driverless cars will be released on the mass market, a comparison with the development of aircraft autopilot systems can provide valuable insight. This review article contains several Additional Resources at the end, including key references to support its findings. The article investigates a path towards ensuring safety for "self-driving" or "autonomous" cars by leveraging prior work in aviation. It focuses on navigation, or localization, which is a key aspect of automated operation.
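For the first related record, the give-back-control idea reduces to checking whether any known intent explains the user's input. Here is a toy sketch under strong assumptions: each intent predicts a straight-line action toward its goal, and a Gaussian likelihood with an invented threshold decides when to yield. None of this is the authors' exact model.

```python
# Toy version of the "explainable by a known intent?" test. The straight-line
# action model, Gaussian likelihood, and threshold are invented assumptions.
import numpy as np

class Intent:
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)

    def predicted_action(self, state):
        # Assumed model: the user would head straight toward this intent's goal.
        d = self.goal - state
        n = np.linalg.norm(d)
        return d / n if n > 1e-9 else np.zeros_like(d)

def likelihood(user_action, predicted, sigma=0.5):
    return float(np.exp(-np.sum((user_action - predicted) ** 2) / (2 * sigma**2)))

def assist_or_yield(state, user_action, intents, threshold=0.3):
    """Return the best-matching intent, or None to hand control back to the user."""
    scores = [likelihood(user_action, i.predicted_action(state)) for i in intents]
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None  # no known intent explains the input: yield, observe, learn
    return intents[best]

intents = [Intent([1.0, 0.0]), Intent([0.0, 1.0])]
assist_or_yield(np.zeros(2), np.array([0.9, 0.1]), intents)    # matches first intent
assist_or_yield(np.zeros(2), np.array([-0.7, -0.7]), intents)  # None: unexplained input
```

Trajectories collected while the robot yields would then serve as demonstrations for learning the new intent.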
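For the second record, the switching-autonomy idea can be illustrated with a deliberately crude heuristic: infer a proxy for locus of control from path deviation and time to completion, then pick an assistance level. Every weight, scale, and label below is invented for illustration; the study itself does not prescribe this rule.

```python
# Deliberately crude illustration of metric-based autonomy switching.
# The weights, normalizing scales, and cutoff are all invented assumptions.
def infer_locus_proxy(path_deviation_m, completion_time_s):
    """Combine interaction metrics into a rough user-quality score."""
    return 0.6 * (path_deviation_m / 1.0) + 0.4 * (completion_time_s / 120.0)

def choose_autonomy(path_deviation_m, completion_time_s, cutoff=1.0):
    """Switch to a higher-assistance mode when the proxy score is large."""
    score = infer_locus_proxy(path_deviation_m, completion_time_s)
    return "high_assist" if score > cutoff else "low_assist"

print(choose_autonomy(0.3, 60.0))   # low_assist: small deviation, quick completion
print(choose_autonomy(2.0, 180.0))  # high_assist: large deviation, slow completion
```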
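For the third record, a hedged sketch of the dual-architecture idea: a pretrained CNN extracts regional features from a Mel-spectrogram while a small "booster" branch adds a global summary. The backbone choice (ResNet-18), the tensor shapes, the booster layout, and the number of material classes are assumptions; the paper's exact architecture is not reproduced here.

```python
# Assumed sketch of a dual-branch audio classifier over Mel-spectrograms.
# ResNet-18, the booster layout, and n_classes=5 are illustrative choices.
import torch
import torch.nn as nn
import torchvision.models as models

class SidewalkMaterialNet(nn.Module):
    def __init__(self, n_classes=5):  # number of sidewalk materials is assumed
        super().__init__()
        cnn = models.resnet18(weights=None)  # in practice, load pretrained weights
        cnn.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        cnn.fc = nn.Identity()  # 512-d regional features from the spectrogram
        self.cnn = cnn
        self.booster = nn.Sequential(  # crude global-feature enrichment branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, n_classes)

    def forward(self, mel):  # mel: (batch, 1, n_mels, time)
        regional = self.cnn(mel)
        global_feat = self.booster(mel)
        return self.head(torch.cat([regional, global_feat], dim=1))

net = SidewalkMaterialNet()
logits = net(torch.randn(2, 1, 128, 64))  # two fake spectrograms -> (2, 5) logits
```

The input spectrograms could be produced from cane-tip recordings with, for example, torchaudio.transforms.MelSpectrogram.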