Title: A Shared Autonomy Approach for Wheelchair Navigation Based on Learned User Preferences.
Research on robotic wheelchairs spans a broad range from complete autonomy to shared autonomy to manual navigation via a joystick or other means. Shared autonomy is valuable because it allows the user and the robot to complement each other, to correct each other's mistakes, and to avoid collisions. In this paper, we present an approach that can learn to replicate path selection according to the wheelchair user's individual, often subjective, criteria in order to reduce the number of times the user has to intervene during automatic navigation. This is achieved by learning to rank paths using a support vector machine trained on selections made by the user in a simulator. If the classifier's confidence in the top-ranked path is high, that path is executed without requesting confirmation from the user; otherwise, the choice is deferred to the user. Simulations and laboratory experiments using two path generation strategies demonstrate the effectiveness of our approach.
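As a rough illustration of the ranking and deferral mechanism described in the abstract, here is a minimal sketch assuming a pairwise (RankSVM-style) reduction of the user's recorded selections and a margin-based confidence test; the path features, the threshold value, and the training data below are hypothetical and are not taken from the paper.

```python
# Minimal sketch, not the authors' implementation: rank candidate paths with a
# linear SVM trained on pairwise user selections, and defer to the user when
# the top-ranked path does not clearly beat the runner-up.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical training data: each recorded selection contributes the feature
# difference (chosen - rejected) labeled +1 and the reverse labeled -1,
# the standard pairwise reduction for learning to rank with an SVM.
chosen = rng.normal(size=(200, 6))       # features of paths the user picked
rejected = rng.normal(size=(200, 6))     # features of paths the user passed over
X = np.vstack([chosen - rejected, rejected - chosen])
y = np.hstack([np.ones(200), -np.ones(200)])
ranker = LinearSVC(C=1.0).fit(X, y)

def select_path(paths, margin_threshold=0.5):
    """Return the index of the path to execute autonomously, or None to defer
    the choice to the wheelchair user (hypothetical confidence rule)."""
    scores = paths @ ranker.coef_.ravel()   # learned linear utility of each path
    order = np.argsort(scores)[::-1]
    confident = len(paths) == 1 or scores[order[0]] - scores[order[1]] > margin_threshold
    return int(order[0]) if confident else None

candidates = rng.normal(size=(5, 6))        # hypothetical candidate path features
choice = select_path(candidates)
if choice is None:
    print("low confidence: defer the choice to the user")
else:
    print("high confidence: execute path", choice)
```

In this sketch the "confidence" is simply the score margin between the two best candidates; the paper does not specify its confidence measure, so any margin or probability-based rule could stand in here.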
Award ID(s):
1637761
NSF-PAR ID:
10041311
Author(s) / Creator(s):
Date Published:
Journal Name:
Fifth International Workshop on Assistive Computer Vision and Robotics
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Users play an integral role in the performance of many robotic systems, and robotic systems must account for differences in users to improve collaborative performance. Much of the work on adapting to users has focused on designing teleoperation controllers that adjust to extrinsic user indicators such as force or intent, but do not adjust to intrinsic user qualities. In contrast, the Human-Robot Interaction (HRI) community has extensively studied intrinsic user qualities, but its results may not be rapidly fed back into autonomy design. Here we provide foundational evidence for a new strategy that augments current shared control, and we provide a mechanism to directly feed results from the HRI community back into autonomy design. Our evidence is based on a study examining the impact of the user quality "locus of control" on telepresence robot performance. Our results support our hypothesis that key user qualities can be inferred from human-robot interactions (such as through path deviation or time to completion) and that switching or adaptive autonomies might improve shared control performance.
  2. Each fall, millions of monarch butterflies across the northern US and Canada migrate up to 4,000 km to overwinter in the exact same cluster of mountain peaks in central Mexico. To track monarchs precisely and study their navigation, a monarch tracker must obtain a daily localization of the butterfly as it progresses on its 3-month journey. And the tracker must perform this task while weighing tens of milligrams (mg) and measuring a few millimeters (mm) in size to avoid interfering with the monarch's flight. This paper proposes mSAIL, an 8 × 8 × 2.6 mm, 62 mg embedded system for monarch migration tracking, constructed using 8 prior custom-designed ICs providing solar energy harvesting, an ultra-low-power processor, light/temperature sensors, power management, and a wireless transceiver, all integrated and 3D-stacked on a micro PCB with an 8 × 8 mm printed antenna. The proposed system is designed to record and compress light and temperature data along the migration path while harvesting solar energy for energy autonomy, and to wirelessly transmit the data at the overwintering site in Mexico, from which the daily location of the butterfly can be estimated using a deep-learning-based localization algorithm. A 2-day trial experiment with mSAIL attached to a live butterfly in an outdoor botanical garden demonstrates the feasibility of individual butterfly localization and tracking.
  3. In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, though, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent work on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end-effector manipulability polytopes, geometrical constructs that embed joint-limit and environmental constraints (a minimal computational sketch of such a polytope appears after this list). These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
  4. Many organizations maintain and operate large shared computing clusters, since such clusters can substantially reduce computing costs by leveraging statistical multiplexing to amortize the cluster's cost across all users. Importantly, such shared clusters are generally not free to use, but have an internal pricing model that funds their operation. Since employees at many large organizations, especially universities, have some budgetary autonomy over purchase decisions, internal shared clusters are increasingly competing for users with cloud platforms, which may offer lower costs and better performance. As a result, many organizations are shifting their shared clusters to operate on cloud resources. This paper empirically analyzes the user incentives for shared cloud clusters under two different pricing models using an 8-year job trace from a large shared cluster for a large university system. Our analysis shows that, with either pricing model, a large fraction of users have little financial incentive to participate in a shared cloud cluster compared to directly acquiring resources from a cloud platform. While shared cloud clusters can provide some limited cost reductions by leveraging reserved instances at a discount, realizing these reductions generally requires imposing long job waiting times due to bursty workloads, which for many users are likely not worth the cost reduction. In particular, we show that, assuming users defect from the shared cluster if their wait time is greater than 15x their average job runtime, over 80% of users would defect, which increases the price for the remaining users to the point that it eliminates any incentive to participate in a shared cluster. Thus, while shared cloud clusters may provide users other benefits, their financial incentives are weak.
  5. Navigating unfamiliar websites is challenging for users with visual impairments. Although many websites offer visual cues to facilitate access to pages/features most websites are expected to have (e.g., log in at the top right), such visual shortcuts are not accessible to users with visual impairments. Moreover, although such pages serve the same functionality across websites (e.g., to log in, to sign up), the location, wording, and navigation path of links to these pages vary from one website to another. Such inconsistencies are challenging for users with visual impairments, especially for users of screen readers, who often need to linearly listen to content of pages to figure out how to access certain website features. To study how to improve access to main website features, we iteratively designed and tested a command-based approach for main features of websites via a browser extension powered by machine learning and human input. The browser extension gives users a way to access high-level website features (e.g., log in, find stores, contact) via keyboard commands. We tested the browser extension in a lab setting with 15 Internet users, including 9 users with visual impairments and 6 without. Our study showed that commands for main website features can greatly improve the experience of users with visual impairments. People without visual impairments also found command-based access helpful when visiting unfamiliar, cluttered, or infrequently visited websites, suggesting that this approach can support users with visual impairments while also benefiting other user groups (i.e., universal design). Our study reveals concerns about the handling of unsupported commands and the availability and trustworthiness of human input. We discuss how websites, browsers, and assistive technologies could incorporate a command-based paradigm to enhance web accessibility and provide more consistency on the web to benefit users with varied abilities when navigating unfamiliar or complex websites. 
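Regarding the manipulability polytopes mentioned in item 3 above, the following is a minimal sketch of how an end-effector velocity polytope can be computed from a manipulator Jacobian and joint-velocity limits. The planar 3-link arm, its link lengths, and the velocity bounds are illustrative assumptions rather than the system used in that paper; the polytope is obtained by mapping the vertices of the joint-velocity box through the Jacobian and taking their convex hull.

```python
# Illustrative sketch (not from the cited paper): end-effector velocity polytope
# of a hypothetical planar 3-link arm, computed by mapping the corners of the
# joint-velocity box through the Jacobian and taking their convex hull.
from itertools import product

import numpy as np
from scipy.spatial import ConvexHull

def planar_jacobian(q, lengths):
    """Position Jacobian (2 x n) of a planar serial arm at joint angles q."""
    cum = np.cumsum(q)                      # theta_1, theta_1 + theta_2, ...
    J = np.zeros((2, len(q)))
    for j in range(len(q)):
        J[0, j] = -np.sum(lengths[j:] * np.sin(cum[j:]))
        J[1, j] = np.sum(lengths[j:] * np.cos(cum[j:]))
    return J

def velocity_polytope(J, qdot_max):
    """Vertices of the achievable end-effector velocity set {J qdot : |qdot_i| <= qdot_max_i}."""
    corners = np.array(list(product(*[(-m, m) for m in qdot_max])))
    mapped = corners @ J.T                  # image of each joint-velocity box corner
    return mapped[ConvexHull(mapped).vertices]

# Hypothetical configuration, link lengths (m), and joint-velocity limits (rad/s).
q = np.array([0.3, -0.5, 0.8])
lengths = np.array([0.4, 0.3, 0.2])
qdot_max = np.array([1.0, 1.5, 2.0])
poly = velocity_polytope(planar_jacobian(q, lengths), qdot_max)
print(poly)  # ordered 2D vertices of the velocity polytope
```

In a virtual reality interface such as the one described in item 3, vertices like these could be rendered as a polygon (or polyhedron, for a spatial arm) around the end effector to show the operator their instantaneous space of achievable motion.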