Title: A Novel Perceptive Robotic Cane with Haptic Navigation for Enabling Vision-Independent Participation in the Social Dynamics of Seat Choice
Goal-based navigation in public places is critical for independent mobility and for breaking the barriers that a sight-centric society presents to blind or visually impaired (BVI) people. In this work, we present a proof-of-concept system that leverages perception and goal-based navigation assistance to autonomously identify socially preferred seats and safely guide its user toward them in unknown indoor environments. The robotic system comprises a camera, an IMU, vibration motors, and a white cane, powered by a backpack-mounted laptop. The system combines techniques from computer vision, robotics, and motion planning with insights from psychology to perform 1) SLAM and object localization, 2) goal disambiguation and scoring, and 3) path planning and guidance. We introduce a novel two-motor haptic feedback system on the cane's grip for navigation assistance. Through a pilot user study, we show that the system successfully classifies socially preferred seats and provides haptic navigation guidance toward them, optimizing for users' convenience, privacy, and intimacy while increasing their confidence in independent navigation. The implications are encouraging: with careful design guided by the BVI community, this technology can be adopted and further developed for use with medical devices, enabling the BVI population to engage more independently in socially dynamic situations such as seat choice.
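The abstract outlines a pipeline of seat scoring followed by haptic guidance from the two-motor grip, but gives no implementation details. The following is a minimal illustrative sketch, not the authors' method: the scoring weights, the distance cap, the seat_score and haptic_command names, and the convention of vibrating the motor on the side of the required turn are all assumptions made for illustration.

```python
import math

# Hypothetical weights; the paper's actual scoring criteria and weights are not given here.
W_CONVENIENCE = 1.0   # prefer seats closer to the user
W_PRIVACY = 1.5       # prefer seats farther from already-occupied seats

def seat_score(seat_xy, user_xy, occupied_xy):
    """Score a candidate seat: higher means more socially preferred (illustrative only)."""
    d_user = math.dist(user_xy, seat_xy)
    d_nearest_occupied = min(
        (math.dist(seat_xy, o) for o in occupied_xy), default=float("inf")
    )
    # Cap the privacy term at an assumed 5 m so distant seats are not over-rewarded.
    return W_PRIVACY * min(d_nearest_occupied, 5.0) - W_CONVENIENCE * d_user

def haptic_command(heading_error_rad, max_duty=1.0):
    """Map signed heading error to duty cycles for the two grip motors.

    Positive error -> vibrate the right motor to cue a right turn,
    negative error -> vibrate the left motor (one plausible convention).
    """
    magnitude = min(abs(heading_error_rad) / math.pi, 1.0) * max_duty
    if heading_error_rad > 0:
        return {"left": 0.0, "right": magnitude}
    return {"left": magnitude, "right": 0.0}

# Example: pick the best-scoring seat and issue a guidance cue toward it.
user = (0.0, 0.0)
occupied = [(1.0, 2.0), (4.0, 1.0)]
candidates = [(2.0, 3.0), (6.0, 5.0)]
best = max(candidates, key=lambda s: seat_score(s, user, occupied))
bearing = math.atan2(best[1] - user[1], best[0] - user[0])
print(best, haptic_command(bearing - 0.0))  # user currently facing heading 0
```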
Award ID(s):
1830686
NSF-PAR ID:
10378510
Journal Name:
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems
ISSN:
2153-0858
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Navigation assistive technologies have been designed to support individuals with visual impairments during independent mobility by providing sensory augmentation and contextual awareness of their surroundings. Such information is habitually provided through predefined audio-haptic interaction paradigms. However, the individual capabilities, preferences, and behavior of people with visual impairments are heterogeneous and may change due to experience, context, and necessity. Therefore, the circumstances and modalities for providing navigation assistance need to be personalized to different users, and over time for each user. We conduct a study with 13 blind participants to explore how the desirability of messages provided during assisted navigation varies based on users' navigation preferences and expertise. The participants are guided through two different routes, one without prior knowledge and one previously studied and traversed. The guidance is provided through turn-by-turn instructions, enriched with contextual information about the environment. During navigation and follow-up interviews, we uncover that participants have diversified needs for navigation instructions based on their abilities and preferences. Our study motivates the design of future navigation systems capable of verbosity-level personalization in order to keep users engaged in the current situational context while minimizing distractions.
  2. Consider an assistive system that guides visually impaired users to their destination through speech and haptic feedback. Existing robotic and ubiquitous navigation technologies (e.g., portable, ground, or wearable systems) often operate in a generic, user-agnostic manner. However, to minimize confusion and navigation errors, our real-world analysis reveals a crucial need to adapt the instructional guidance across different end-users with diverse mobility skills. To address this practical issue in scalable system design, we propose a novel model-based reinforcement learning framework for personalizing the system-user interaction experience. When incrementally adapting the system to new users, we propose to use a weighted experts model to address data-efficiency limitations in transfer learning with deep models. A real-world dataset of navigation by blind users is used to show that the proposed approach allows for (1) more accurate long-term human behavior prediction (up to 20 seconds into the future) through improved reasoning over personal mobility characteristics, interaction with surrounding obstacles, and the current navigation goal, and (2) quick adaptation at the onset of learning, when data is limited.
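A weighted experts model of the kind mentioned above can be sketched as a generic multiplicative-weights ensemble. This is not the authors' framework: the expert interface (a callable from a motion history to a predicted next position), the squared-error loss, and the learning rate eta are assumptions made purely for illustration.

```python
import numpy as np

class WeightedExperts:
    """Minimal weighted-experts combiner: each expert predicts the next position
    from the history; weights are updated multiplicatively from observed error
    so the ensemble adapts quickly when data from a new user is limited."""

    def __init__(self, experts, eta=1.0):
        self.experts = experts            # callables: history -> predicted next position
        self.w = np.ones(len(experts))    # start with uniform weights
        self.eta = eta                    # learning rate for the exponential update

    def predict(self, history):
        preds = np.array([e(history) for e in self.experts])
        weights = self.w / self.w.sum()
        return weights @ preds            # weighted average prediction

    def update(self, history, observed):
        # Penalize each expert by its squared error on the newly observed position.
        errors = np.array([np.sum((e(history) - observed) ** 2) for e in self.experts])
        self.w *= np.exp(-self.eta * errors)
        self.w /= self.w.sum()            # renormalize to avoid underflow

# Example with two hypothetical experts: a "stand still" prior and a constant-velocity model.
def still(hist):
    return hist[-1]

def const_vel(hist):
    return hist[-1] + (hist[-1] - hist[-2])

model = WeightedExperts([still, const_vel], eta=0.5)
hist = [np.array([0.0, 0.0]), np.array([0.4, 0.0])]
print(model.predict(hist))                 # blend of the two experts' predictions
model.update(hist, np.array([0.8, 0.0]))   # constant-velocity expert gains weight
```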
  3.
    This paper describes the interface and testing of an indoor navigation app, ASSIST, that guides blind and visually impaired (BVI) individuals through an indoor environment with high accuracy while augmenting their understanding of the surrounding environment. ASSIST features personalized interfaces by considering the unique experiences that BVI individuals have in indoor wayfinding and offers multiple levels of multimodal feedback. After an overview of the technical approach and implementation of the first prototype of the ASSIST system, the results of two pilot studies performed with BVI individuals are presented: a performance study to collect data on mobility (walking speed, collisions, and navigation errors) while using the app, and a usability study to collect user evaluation data on the perceived helpfulness, safety, ease of use, and overall experience while using the app. Our studies show that ASSIST is useful in providing users with navigational guidance, improving their efficiency and (more significantly) their safety and accuracy in wayfinding indoors. Findings and user feedback from the studies confirm some of the previous results, while also providing new insights into the creation of such an app, including the use of customized user interfaces and expanding the types of information provided.
  4. Museums are gradually becoming more accessible to blind people, who have shown interest in visiting museums and in appreciating visual art. Yet, their ability to visit museums still depends on the assistance they get from family and friends or from museum personnel. Based on this observation and on prior research, we developed a solution to support an independent, interactive museum experience that uses continuous tracking of the user's location and orientation to enable seamless interaction between Navigation and Art Appreciation. Accurate localization and context awareness allow for turn-by-turn guidance (Navigation Mode), as well as detailed audio content when facing an artwork within close proximity (Art Appreciation Mode). To evaluate our system, we installed it at The Andy Warhol Museum in Pittsburgh and conducted a user study in which nine blind participants followed routes of interest while learning about the artworks. We found that all participants were able to follow the intended path, immediately grasped how to switch between Navigation and Art Appreciation modes, and valued listening to the audio content in front of each artwork. They also showed high satisfaction and an increased motivation to visit museums more often.
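The mode switch described above, turn-by-turn Navigation versus Art Appreciation when facing an artwork within close proximity, can be illustrated with a short sketch. The 2 m proximity radius, the facing tolerance, and the select_mode helper are hypothetical values and names, not the deployed system's logic.

```python
import math

PROXIMITY_M = 2.0           # assumed "close proximity" radius
FACING_TOLERANCE_RAD = 0.5  # assumed tolerance for "facing" an artwork

def select_mode(user_xy, user_heading_rad, artworks):
    """Return ('art_appreciation', artwork_id) when the user is near and facing
    an artwork, otherwise ('navigation', None). Thresholds are illustrative."""
    for art_id, art_xy in artworks.items():
        dist = math.dist(user_xy, art_xy)
        bearing = math.atan2(art_xy[1] - user_xy[1], art_xy[0] - user_xy[0])
        # Smallest signed angle between the user's heading and the bearing to the artwork.
        delta = (bearing - user_heading_rad + math.pi) % (2 * math.pi) - math.pi
        if dist <= PROXIMITY_M and abs(delta) <= FACING_TOLERANCE_RAD:
            return "art_appreciation", art_id
    return "navigation", None

# Example: user stands 1.5 m from 'artwork_01' and faces it directly.
mode, art = select_mode((0.0, 0.0), 0.0, {"artwork_01": (1.5, 0.0)})
print(mode, art)  # -> art_appreciation artwork_01
```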
  5.

    Successful surgical operations are characterized by preplanned routines to be executed during the actual operation. To achieve this, surgeons rely on experience acquired from the use of cadavers, on enabling technologies like virtual reality (VR), and on clinical years of practice. However, cadavers, which lack dynamism and realism since they have no blood, can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can impact surgical training and increase the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance such as planning and execution. To achieve this, the MRCS charts out a path prior to user task execution, based on the visual, physical, and dynamic state of a target object, by utilizing surgeon-created virtual imagery that, when projected onto a 3D printed biospecimen as AR, reacts visually to user input on its actual physical state. This allows the MRCS to react to the user in real time by displaying new multi-sensory virtual states of an object before the user acts on the actual physical state of that same object, enabling effective task planning. Tracked user actions using an integrated 9-degree-of-freedom IMU demonstrate task execution, showing that a user with limited knowledge of the specific anatomy can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be generally applied in areas such as construction, maintenance, and education.
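One plausible, purely illustrative way to check IMU-tracked user actions against a preplanned step is to compare orientations as quaternions and accept the step when the angular deviation is small. The quaternion convention, the 10-degree tolerance, and the check_step helper below are assumptions, not the MRCS implementation.

```python
import numpy as np

def angular_deviation_deg(q_measured, q_planned):
    """Angle (degrees) between two unit quaternions (w, x, y, z): a simple measure
    of how far the tracked orientation is from the preplanned one."""
    dot = abs(float(np.dot(q_measured, q_planned)))
    return 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))

def check_step(q_measured, q_planned, tolerance_deg=10.0):
    """Return True when the IMU-tracked orientation matches the planned step
    within an assumed tolerance; a system could then advance to the next step."""
    return angular_deviation_deg(q_measured, q_planned) <= tolerance_deg

# Example: planned step is a 15-degree rotation about z; the user has rotated ~14 degrees.
theta_user = np.radians(14.0)
q_user = np.array([np.cos(theta_user / 2), 0.0, 0.0, np.sin(theta_user / 2)])
theta_plan = np.radians(15.0)
q_plan = np.array([np.cos(theta_plan / 2), 0.0, 0.0, np.sin(theta_plan / 2)])
print(check_step(q_user, q_plan))  # -> True (deviation is about 1 degree)
```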

     