Title: Usability Studies of an Egocentric Vision-Based Robotic Wheelchair
Motivated by the need to improve the quality of life of elderly and disabled individuals who rely on wheelchairs for mobility, and who may have limited or no hand functionality, we propose an egocentric computer-vision-based co-robot wheelchair to enhance their mobility without hand usage. The robot is built on a commercially available powered wheelchair modified to be controlled by head motion. Head motion is measured by tracking an egocentric camera mounted on the user's head and facing outward. Compared with previous approaches to hands-free mobility, our system provides a more natural human-robot interface because it enables the user to control the speed and direction of motion continuously, as opposed to issuing a small number of discrete commands. This article presents three usability studies conducted with 37 subjects. The first two studies compare the proposed control method with existing solutions, while the third assesses the effectiveness of training subjects to operate the wheelchair over several sessions. A limitation of our studies is that they were conducted with healthy participants. Our findings, however, pave the way for further studies with subjects with disabilities.
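The abstract describes continuous, proportional control of speed and direction from head motion. As a minimal sketch of that idea (the gains, angle ranges, deadband, and function name below are illustrative assumptions, not values from the paper):

```python
import math

# Hypothetical gains and limits; the paper does not specify these values.
MAX_LINEAR = 1.0            # m/s
MAX_ANGULAR = 0.8           # rad/s
DEADBAND = math.radians(5)  # ignore small involuntary head motion

def head_pose_to_velocity(pitch, yaw):
    """Map head orientation (radians, from egocentric camera tracking)
    to continuous wheelchair velocity commands.

    pitch: forward/back head tilt -> linear speed
    yaw:   left/right head turn   -> angular speed
    """
    def scaled(angle, limit, full_scale=math.radians(30)):
        if abs(angle) < DEADBAND:
            return 0.0
        # Linear ramp from the deadband edge to full scale, then clamp.
        magnitude = min((abs(angle) - DEADBAND) / (full_scale - DEADBAND), 1.0)
        return math.copysign(magnitude * limit, angle)

    return scaled(pitch, MAX_LINEAR), scaled(yaw, MAX_ANGULAR)

# Example: a slight forward tilt combined with a moderate left turn.
v, w = head_pose_to_velocity(math.radians(12), math.radians(-20))
print(f"linear {v:.2f} m/s, angular {w:.2f} rad/s")
```

The deadband-plus-ramp mapping is one common way to realize "continuous" control while filtering out small unintentional head movements.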
Award ID(s):
1637761
NSF-PAR ID:
10304165
Journal Name:
ACM Transactions on Human-Robot Interaction
Volume:
10
Issue:
1
ISSN:
2573-9522
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There have been significant advances in technologies for robot-assisted lower-limb rehabilitation in the last decade. However, the development of similar systems for children has been slow, despite the fact that children with conditions such as cerebral palsy (CP), spina bifida (SB), and spinal cord injury (SCI) can benefit greatly from these technologies. Robotic-assisted gait therapy (RAGT) has emerged as a way to increase gait-training duration and intensity while decreasing the risk of injury to therapists. Robotic walking devices can be coupled with motion sensing, electromyography (EMG), scalp electroencephalography (EEG), or other noninvasive methods of acquiring information about the user's intent to design brain-computer interfaces (BCIs) for neuromuscular rehabilitation and control of powered exoskeletons. For users with SCI, BCIs could provide a method of overground mobility closer to the natural process of the brain controlling the body's movement during walking than mobility by wheelchair. For adults, there are currently four lower-limb exoskeletons approved by the U.S. Food and Drug Administration (FDA) that could be incorporated into such a BCI system, but there are no similar devices specifically designed for children, who present additional physical, neurological, and cognitive developmental challenges. The current state of the art for pediatric RAGT relies on large clinical devices whose high costs limit accessibility. This can reduce the amount of therapy a child receives and slow rehabilitation progress. In many cases, lack of gait training can result in reduced mobility, independence, and overall quality of life for children with lower-limb disabilities. Thus, it is imperative to facilitate and accelerate the development of pediatric technologies for gait rehabilitation, including their regulatory path. This paper presents an overview of the FDA clearance/approval process and uses an example device to navigate important questions facing developers focused on providing lower-limb rehabilitation to children in home-based or other settings beyond the clinic.
  2. We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features that come as a direct result of these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats such as video and human-readable CSV and YAML files.
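    Since the dataset ships its derived streams in human-readable CSV and YAML, standard tooling suffices to load it. A hypothetical sketch follows; the file names and column labels are illustrative assumptions, so consult the HARMONIC documentation for the actual layout:

    ```python
    import csv
    import yaml  # PyYAML

    # Hypothetical file names and fields, not the dataset's real schema.
    def load_joystick_log(path):
        """Read time-stamped joystick commands from a CSV stream."""
        with open(path, newline="") as f:
            return [
                {"t": float(row["timestamp"]),
                 "x": float(row["axis_x"]),
                 "y": float(row["axis_y"])}
                for row in csv.DictReader(f)
            ]

    def load_session_metadata(path):
        """Read per-participant session metadata from a YAML file."""
        with open(path) as f:
            return yaml.safe_load(f)

    # Example usage (paths are placeholders):
    # commands = load_joystick_log("p01/joystick.csv")
    # meta = load_session_metadata("p01/session.yaml")
    ```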
  3. Tele-operated social robots (telerobots) offer an innovative means of allowing children who are medically restricted to their homes (MRH) to return to their local schools and physical communities. Most commercially available telerobots have three foundational features that facilitate child–robot interaction: remote mobility, synchronous two-way vision capabilities, and synchronous two-way audio capabilities. We conducted a comparative analysis between the Toyota Human Support Robot (HSR) and commercially available telerobots, focusing on these foundational features. Children who used these robots and features daily to attend school were asked to pilot the HSR in a simulated classroom for learning activities. The HSR also has three features that are not available on commercial telerobots: (1) a pan-tilt camera, (2) mapping and autonomous navigation, and (3) a robot arm and gripper that lets children "reach" into remote environments. Participants were therefore also asked to evaluate the use of these features for learning experiences. Expanding on earlier work on the use of telerobots by remote children, this study provides novel empirical findings on (1) the capabilities of the Toyota HSR for robot-mediated learning relative to commercially available telerobots and (2) the efficacy of the novel HSR features (i.e., pan-tilt camera, autonomous navigation, robot arm/hand hardware) for future learning experiences. We found that among our participants, autonomous navigation and the arm/gripper hardware were rated as highly valuable for social and learning activities.
  4. Walking in real-world environments involves constant decision-making: when approaching a staircase, for example, an individual decides whether to engage (climb the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging egocentric images from a head-mounted camera, the authors trained a YOLOv5 object-detection model to detect staircases. Subsequently, AdaBoost and gradient boosting (GB) classifiers were developed to recognize the individual's intention of engaging or avoiding the upcoming stairway. The method has been demonstrated to provide reliable (97.69%) recognition at least two steps before the potential mode transition, which is expected to provide ample time for the controller mode transition in an assistive robot in real-world use.
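    The detector-plus-classifier pipeline described above can be sketched with off-the-shelf components. The sketch below uses a generic pretrained YOLOv5 checkpoint (the paper fine-tunes on staircases) and scikit-learn's gradient boosting classifier; the feature set fed to the classifier is a plausible stand-in, since the paper's actual features are not specified here:

    ```python
    import numpy as np
    import torch
    from sklearn.ensemble import GradientBoostingClassifier

    # Generic pretrained YOLOv5 detector, loaded via torch.hub for illustration.
    detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    def stair_features(frame):
        """Reduce the top detection in an egocentric frame to a small
        feature vector: normalized box center, size, and confidence.
        These features are assumptions, not the paper's exact inputs."""
        results = detector(frame).xywhn[0]  # rows: (x, y, w, h, conf, cls)
        if len(results) == 0:
            return np.zeros(5)
        x, y, w, h, conf, _ = results[0].tolist()
        return np.array([x, y, w, h, conf])

    # Intent classifier over labeled egocentric frames
    # (X_train: feature vectors, y_train: 1 = engage stairs, 0 = avoid).
    clf = GradientBoostingClassifier()
    # clf.fit(X_train, y_train)
    # intent = clf.predict([stair_features(new_frame)])
    ```

    Running the detector on each frame and classifying the resulting features is what allows a prediction several steps before the walking-to-stair-climbing transition.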
  5. Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, keyboards, mice, and displays. Consequently, these systems either rely critically on close human guidance or operate almost independently of the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term "smart" is a misnomer, since the phone has fundamentally no intelligence with which to understand its user. Users still have to type, touch, or speak (to some extent) to express their intentions in a form accessible to the phone. Hence, intelligent decision making is still almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools to agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively. One high-impact application area of this approach is assistive Internet of Things (IoT) technologies for physically challenged individuals. The Annual World Report on Disability reveals that 15% of the world population lives with disability, while 110 to 190 million of these people have difficulty in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices that supply sensory inputs and assist with everyday tasks. This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC). To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalography (EEG), speech commands, and motion sensing; a minimal fusion sketch appears below. We also introduce an IoT co-simulation framework developed using a network simulator (OMNeT++) and the Virtual Robot Experimentation Platform (V-REP) robot simulator. We show how this framework is used to evaluate the effectiveness of different HMC strategies, using automated indoor navigation as a driver application.
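    One simple way to combine discrete commands from EEG, speech, and motion sensing is a confidence-weighted vote. This is a hypothetical sketch of such fusion, not the paper's actual HMC strategy; the function name and confidence values are assumptions:

    ```python
    from collections import defaultdict

    def fuse_commands(observations):
        """observations: list of (modality, command, confidence) tuples,
        e.g. ("eeg", "forward", 0.6). Returns the command with the
        highest total confidence summed across modalities."""
        scores = defaultdict(float)
        for modality, command, confidence in observations:
            scores[command] += confidence
        return max(scores, key=scores.get)

    # Two modalities agree on "forward", outvoting the motion sensor.
    cmd = fuse_commands([
        ("eeg", "forward", 0.55),
        ("speech", "forward", 0.80),
        ("motion", "left", 0.60),
    ])
    print(cmd)  # "forward" (0.55 + 0.80 > 0.60)
    ```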