Title: The HapBack: Evaluation of Absolute and Relative Distance Encoding to Enhance Spatial Awareness in a Wearable Tactile Device
For the significant global population of individuals who are blind or visually impaired, spatial awareness during navigation remains a challenge. Tactile Electronic Travel Aids have been designed to provide spatiotemporal information, but an intuitive method for mapping this information to patterns on a vibrotactile display remains to be determined. This paper explores two strategies for encoding the distance from a navigator to an object: absolute and relative. A wearable prototype, the HapBack, is presented with two straps of vertically aligned vibrotactile motors mapped to five distances, with each distance mapped to a row on the display. Absolute patterns emit a single vibration at the row corresponding to a distance, while relative patterns emit a sequence of vibrations starting from the bottom row and ending at the row mapped to that distance. The two encoding strategies are comparatively evaluated for identification accuracy and perceived intuitiveness of mapping among ten adult participants who are blind or visually impaired. No significant difference was found between the two encodings on these metrics, and each shows promising results for application during navigation tasks.
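As a concrete illustration of the two encodings, the sketch below maps a distance to vibration rows on a five-row display. The distance bins, row indexing, and function names are illustrative assumptions, not the HapBack's actual interface.

ROWS = 5  # vertically aligned motor rows; row 0 = bottom of the display

def row_for_distance(distance_m, max_m=5.0):
    """Quantize a distance into one of the five rows (an assumed binning)."""
    return min(int(distance_m / (max_m / ROWS)), ROWS - 1)

def absolute_pattern(distance_m):
    """Absolute encoding: a single vibration at the row mapped to the distance."""
    return [row_for_distance(distance_m)]

def relative_pattern(distance_m):
    """Relative encoding: a sweep from the bottom row up to the mapped row."""
    return list(range(row_for_distance(distance_m) + 1))

print(absolute_pattern(2.7))  # [2]: one pulse at row 2
print(relative_pattern(2.7))  # [0, 1, 2]: rising sweep ending at row 2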
Award ID(s): 1828010
PAR ID: 10277360
Author(s) / Creator(s):
Journal Name: Lecture Notes in Computer Science
Volume: 12426 LNCS
ISSN: 1611-3349
Page Range / eLocation ID: 251-266
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore the following questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a preliminary pilot test using absolute identification tasks in which participants learn and recognize vibrotactile stimulations presented through the Haptic Chair, a custom vibrotactile display embedded on the back of a chair. Study results show that individuals who are blind are able to recognize tactile facial action units as well as those who are sighted. These results hint at the potential for tactile facial action units to augment and expand access to social interactions for individuals who are blind. 
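The mapping evaluated in item 1 lends itself to a short sketch: each facial action unit (AU) drives a set of motors on a back-mounted display, and an absolute identification task scores how often participants name the stimulus correctly. The AU labels, motor indices, and helper functions are hypothetical; the Haptic Chair's actual layout is not reproduced here.

AU_TO_MOTORS = {
    "AU4_brow_lowerer":   [0, 1],      # assumed motor indices on a back grid
    "AU6_cheek_raiser":   [3, 4],
    "AU12_lip_corner_up": [6, 7, 8],
}

def stimulate(au_label):
    """Return the motor set to drive for one tactile AU presentation."""
    return AU_TO_MOTORS[au_label]

def identification_accuracy(responses, ground_truth):
    """Score an absolute identification task: fraction of correctly named stimuli."""
    return sum(r == g for r, g in zip(responses, ground_truth)) / len(ground_truth)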
  2. The iASSIST is an iPhone-based assistive sensor solution for independent and safe travel for people who are blind or visually impaired, or anyone who faces challenges navigating an unfamiliar indoor environment. The solution integrates information from Bluetooth beacons, data connectivity, visual models, and user preferences. Hybrid models of interiors are created in a modeling stage from these multimodal data, which are collected and mapped to the floor plan as the modeler walks through the building. A client-server architecture allows scaling to large areas by lazy-loading models according to beacon signals and/or adjacent-region proximity. During the navigation stage, a user with the navigation app is localized within the floor plan using visual, connectivity, and user-preference data, and is guided along an optimal route to their destination. User interfaces for both modeling and navigation use multimedia channels, including visual, audio, and haptic feedback for targeted users. The design of human subject test experiments is also described, along with some preliminary experimental results.
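A minimal sketch of the lazy-loading idea in item 2, assuming illustrative region IDs, an adjacency table, and a fetch_model() placeholder rather than the iASSIST API: the client keeps only the sensed region and its neighbors in memory.

ADJACENT = {"lobby": ["hall_a"], "hall_a": ["lobby", "hall_b"], "hall_b": ["hall_a"]}
loaded = {}  # region_id -> hybrid model currently held in memory

def fetch_model(region_id):
    """Placeholder for a server request returning one region's hybrid model."""
    return {"region": region_id, "beacons": [], "visual_landmarks": []}

def on_beacon(region_id):
    """Lazy-load the sensed region plus its neighbors; evict everything else."""
    keep = {region_id, *ADJACENT.get(region_id, [])}
    for rid in keep:
        if rid not in loaded:
            loaded[rid] = fetch_model(rid)
    for rid in list(loaded):
        if rid not in keep:
            del loaded[rid]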
  3. People who are blind or visually impaired stand to benefit greatly from electronic assistance, and a number of assistive devices already make their daily lives easier. In this paper, we describe the design and implementation of a personal assistant robot for blind people. Visually impaired people need such personal assistants because they provide real-time help with the main problems blind people face: navigating indoors, identifying surrounding objects without having to physically touch them, and sensing the environment, including the distances to multiple objects. The paper discusses the application's key features, such as using LIDAR for local mapping and a 3D camera for understanding the depth of the surroundings, so that the user learns the distances to, and other information about, nearby objects. The design has been experimentally validated, and the relevant observations are reported in this paper.
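One way to realize the depth-sensing cue in item 3 is to report the nearest object in each horizontal region of a depth frame. The meters-per-pixel frame format and three-region split below are assumptions, not the paper's implementation.

def closest_by_region(depth_frame, regions=3):
    """depth_frame: 2-D list of depths in meters (0 marks an invalid pixel)."""
    width = len(depth_frame[0])
    step = width // regions
    nearest = []
    for r in range(regions):
        vals = [row[c] for row in depth_frame
                for c in range(r * step, (r + 1) * step) if row[c] > 0]
        nearest.append(min(vals) if vals else float("inf"))
    return nearest  # e.g. [left, center, right] closest distances

print(closest_by_region([[0.0, 1.2, 2.5],
                         [0.9, 1.1, 2.4]]))  # [0.9, 1.1, 2.4]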
  4. Blind and visually impaired individuals often face challenges in wayfinding in unfamiliar environments, so an accessible indoor positioning and navigation system that safely and accurately positions and guides them would be welcome. In indoor positioning, Bluetooth Low Energy (BLE) beacons and Google Tango each have individual strengths, but also weaknesses that can affect the overall usability of a system that relies solely on either component. We propose a hybrid positioning and navigation system that combines BLE beacons and Google Tango to tap into their strengths while minimizing their individual weaknesses. In this paper, we discuss the approach and implementation of a BLE- and Tango-based hybrid system. The results of pilot tests on the individual components and a human subject test on the full BLE and hybrid systems are also presented. In addition, we have explored the use of vibrotactile devices to provide additional information to a user about their surroundings.
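A minimal sketch of one way the hybrid in item 4 could arbitrate between its sources: prefer the visual (Tango-style) pose while tracking confidence is high, and fall back to the strongest BLE beacon's surveyed location otherwise. The confidence threshold and beacon table are assumptions, not the paper's algorithm.

BEACON_POS = {"b1": (0.0, 0.0), "b2": (12.0, 4.5)}  # surveyed (x, y) in meters

def hybrid_position(tango_pose, tango_confidence, ble_rssi):
    """tango_pose: (x, y) or None; ble_rssi: beacon_id -> RSSI in dBm."""
    if tango_pose is not None and tango_confidence > 0.8:
        return tango_pose, "visual"
    strongest = max(ble_rssi, key=ble_rssi.get)  # RSSI nearest 0 dBm wins
    return BEACON_POS[strongest], "ble"

print(hybrid_position(None, 0.0, {"b1": -78, "b2": -60}))
# ((12.0, 4.5), 'ble')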
  5. Scientific disciplines spanning biology, biochemistry, and biophysics involve the study of proteins and their functions. Visualization of protein structures represents a barrier to education and research in these disciplines for students who are blind or visually impaired. Here, we present a software plugin for readily producing variable-height tactile graphics of proteins using the free biomolecular visualization software Visual Molecular Dynamics (VMD) and protein structure data that is publicly available through the Protein Data Bank. Our method also supports interactive tactile visualization of proteins with VMD on electronic refreshable tactile display devices. Employing our method in an academic laboratory has enabled an undergraduate student who is blind to carry out research alongside her sighted peers. By making the study of protein structures accessible to students who are blind or visually impaired, we aim to promote diversity and inclusion in STEM education and research. 
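The variable-height idea in item 5 can be illustrated by projecting atom coordinates onto a grid and keeping the tallest atom in each cell as the raised height. The grid resolution and input format are assumptions; the paper's VMD plugin is not reproduced here.

def height_map(atoms, cells=8):
    """atoms: list of (x, y, z) in angstroms; returns a cells-by-cells grid."""
    xs, ys = [a[0] for a in atoms], [a[1] for a in atoms]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0  # avoid division by zero on flat input
    span_y = (max(ys) - min_y) or 1.0
    grid = [[0.0] * cells for _ in range(cells)]
    for x, y, z in atoms:
        i = min(int((y - min_y) / span_y * cells), cells - 1)
        j = min(int((x - min_x) / span_x * cells), cells - 1)
        grid[i][j] = max(grid[i][j], z)  # tallest atom sets the cell height
    return grid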