
Title: Virtual, Augmented, and Mixed Reality for Human-Robot Interaction: A Survey and Virtual Design Element Taxonomy
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has been gaining considerable attention in HRI research in recent years. However, the HRI community lacks shared terminology and a framework for characterizing aspects of mixed reality interfaces, which presents serious problems for future research. It is therefore important to have a common set of terms and concepts that can be used to precisely describe and organize the diverse array of work being done within the field. In this article, we present a novel taxonomic framework for different types of VAM-HRI interfaces, composed of four main categories of virtual design elements (VDEs). We present and justify our taxonomy, explain how its elements have developed over the past 30 years, and discuss the directions in which VAM-HRI is headed in the coming decade.
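
One way to read the value of a shared VDE vocabulary is as a schema for annotating interfaces so that work across the field becomes directly comparable. The sketch below is purely illustrative: the abstract does not name the four VDE categories, so the placeholder strings here are hypothetical stand-ins for the taxonomy's actual labels.

```python
from dataclasses import dataclass, field

# Hypothetical placeholders -- the article defines the actual four VDE categories.
VDE_CATEGORIES = {"category-1", "category-2", "category-3", "category-4"}

@dataclass
class VAMHRIInterface:
    """A VAM-HRI interface annotated with the virtual design elements it uses."""
    name: str
    vdes: set = field(default_factory=set)

    def add_vde(self, category: str) -> None:
        # Restricting annotations to a fixed vocabulary is what makes
        # descriptions of different systems comparable.
        if category not in VDE_CATEGORIES:
            raise ValueError(f"unknown VDE category: {category}")
        self.vdes.add(category)

# Annotating two interfaces exposes their overlap in design elements.
a = VAMHRIInterface("AR arm gestures"); a.add_vde("category-1")
b = VAMHRIInterface("VR teleoperation view"); b.add_vde("category-1")
print(a.vdes & b.vdes)
```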
Award ID(s):
2233316, 1909864
NSF-PAR ID:
10468495
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM Transactions on Human-Robot Interaction
Volume:
12
Issue:
4
ISSN:
2573-9522
Page Range / eLocation ID:
1 to 39
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Augmented Reality (AR) technologies present an exciting new medium for human-robot interaction, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically-limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting that AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.
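
Whether the pointing arm is physical or AR-rendered, the underlying geometry of a deictic gesture is the same: orient the arm so its pointing ray passes through the referent. The sketch below is not from the paper; the coordinate conventions and frame-sharing assumption are mine.

```python
import math

def pointing_angles(arm_base, referent):
    """Compute pan (yaw) and tilt (pitch) angles, in radians, that orient a
    pointing ray from the arm's base toward a referent position.

    Both positions are (x, y, z) tuples in a shared world frame. In a mixed
    reality setup, the AR headset's tracking frame would typically serve as
    the common frame for both virtual and physical referents.
    """
    dx = referent[0] - arm_base[0]
    dy = referent[1] - arm_base[1]
    dz = referent[2] - arm_base[2]
    pan = math.atan2(dy, dx)           # rotation about the vertical axis
    horizontal = math.hypot(dx, dy)    # distance in the ground plane
    tilt = math.atan2(dz, horizontal)  # elevation above the horizontal
    return pan, tilt

# Example: point at a referent one meter ahead and slightly above the arm base.
print(pointing_angles((0.0, 0.0, 1.0), (1.0, 0.0, 1.3)))
```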
  2. Augmented Reality (AR) or Mixed Reality (MR) enables innovative interactions by overlaying virtual imagery on the physical world. For roboticists, this creates new opportunities to apply proven non-verbal interaction patterns, like gesture, to physically-limited robots. However, a wealth of HRI research has demonstrated that there are real benefits to physical embodiment (compared, e.g., to virtual robots displayed on screens), suggesting that AR augmentation of virtual robot parts could face similar challenges. In this work, we present the design of an experiment to objectively and subjectively compare the use of AR and physical arms for deictic gesture, in AR and physical task environments. Our future results will inform robot designers choosing between physical and virtual arms, and provide a new, nuanced understanding of the use of mixed-reality technologies in HRI contexts.
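
The described design crosses arm physicality with task-environment physicality, giving a 2x2 structure. As a sketch of how such conditions might be enumerated and ordered per participant (the paper's actual procedure may well differ, e.g., using a balanced Latin square), consider:

```python
from itertools import product
from random import Random

ARMS = ("physical arm", "AR virtual arm")
ENVIRONMENTS = ("physical referents", "AR virtual referents")

# The four cells of the 2x2 design: every arm type paired with every
# referent/environment type.
CONDITIONS = list(product(ARMS, ENVIRONMENTS))

def condition_order(participant_id: int) -> list:
    """Return a per-participant shuffled ordering of the four conditions.

    Seeding with the participant ID keeps the assignment reproducible.
    """
    rng = Random(participant_id)
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

print(condition_order(participant_id=1))
```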
  3.
    There is a significant amount of synergy between virtual reality (VR) and the field of robotics. However, it is only in approximately the past five years that commercial immersive VR devices have become available to developers. This new availability has led to a rapid increase in research using VR devices in the field of robotics, especially in the development of VR interfaces for operating robots. In this paper, we present a systematic review of VR interfaces for robot operation that utilize commercially available immersive VR devices. A total of 41 papers published between 2016 and 2020 were collected for review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Papers are discussed and categorized into five categories: (1) Visualization, which focuses on displaying data or information to operators; (2) Robot Control and Planning, which focuses on connecting human input or movement to robot movement; (3) Interaction, which focuses on the development of new interaction techniques and/or identifying best interaction practices; (4) Usability, which focuses on user experiences of VR interfaces; and (5) Infrastructure, which focuses on system architectures or software that support connecting VR and robots for interface development. Additionally, we provide future directions for continued development of VR interfaces for operating robots.
  4.
    Though virtual reality (VR) has advanced to a certain level of maturity in recent years, the general public, and especially the population of blind and visually impaired (BVI) people, still cannot enjoy the benefits VR provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either inaccessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills in perceiving and recognizing the virtual environment and the virtual objects in it. The app uses an iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone's real-time pose in an empty space of the real world, which is then synchronized to the long cane in the VR environment. Because it mixes realities (integrating VR and AR), we call it the Mixed Reality cane (MR Cane); it provides BVI users auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify the approximate sizes and locations of objects in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the proposed MR Cane could be effective in helping BVI individuals understand interactions with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort.
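
The core loop the abstract describes — track the phone's pose with AR, mirror it onto a virtual cane, and fire audio/vibration feedback on contact — might look roughly like the sketch below. Every method named here (get_tracked_phone_pose, cane_tip_from_phone_pose, first_object_touching, and the feedback calls) is a hypothetical stand-in for platform services such as ARKit tracking and iOS haptics, not the authors' actual code.

```python
import time

def run_mr_cane(scene, hz=60):
    """Hypothetical main loop for an MR Cane-style prototype.

    `scene` is assumed to expose the virtual objects plus stubs for the
    platform services the abstract mentions (AR pose tracking, audio,
    and vibrotactile feedback).
    """
    period = 1.0 / hz
    while scene.running:
        # 1. AR tracking: real-time 6-DoF pose of the phone on the selfie stick.
        phone_pose = scene.get_tracked_phone_pose()

        # 2. Synchronize: map the phone pose to the virtual long cane's tip.
        cane_tip = scene.cane_tip_from_phone_pose(phone_pose)

        # 3. Contact check: does the cane tip touch any virtual object?
        hit = scene.first_object_touching(cane_tip)
        if hit is not None:
            # 4. Non-visual feedback so a BVI user can perceive the contact.
            scene.play_contact_sound(hit)          # auditory cue
            scene.vibrate(intensity=hit.hardness)  # vibrotactile cue
        time.sleep(period)
```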
  5. The number of patients diagnosed with Alzheimer's disease is increasing significantly, given the boom in the aging population (i.e., those 65 years and older). Approximately 5.5 million people in the United States have been diagnosed with Alzheimer's, and as a result friends and family often need to provide care and support (an estimated 15 million caregivers, at a cost of $1.1 trillion). Common symptoms of Alzheimer's disease include memory loss, drastic behavioral change, depression, and loss of cognitive and/or spatial abilities. To support the growing need for caregivers, this project developed a prototype virtual reality (VR) environment that enables caregivers to experience typical scenarios they may encounter when providing care and support, as well as common strategies for managing each scenario, thereby providing a safe space in which to practice caregiving skills. For instance, a patient may turn on a gas stove and then leave, forgetting that the stove is on; the caregiver would then need to turn the stove off to minimize any potential danger. The prototype environment, CARETAKVR, was developed as an undergraduate research project for learning the process of research as well as the Unity programming environment and VR. The prototype provides a gamified training tool, presenting scenarios as objectives and measuring success with a score, so that the potential caregiver feels rewarded for correctly supporting the patient. The virtual patient is controlled by artificial intelligence and follows an initial set of guidelines to behave as a patient with early-stage Alzheimer's might behave. The caregiver is given a set of tasks to perform, in VR space, to achieve the goals of each scenario. Common tasks include Check Refrigerator, Check Stove, and Comfort Patient. This project has been demonstrated to colleagues in the health care domain and has seeded future collaborations to iterate on the capabilities of this tool. All project artifacts have been open-sourced and are available online.
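
A gamified scenario like the stove example reduces to a small amount of state plus a score. The sketch below is a plausible skeleton of such a scenario, not the project's actual (open-sourced, Unity-based) code; the task names Check Stove and Comfort Patient come from the abstract, while the event names and point values are assumptions.

```python
class Scenario:
    """Minimal gamified caregiving scenario: the virtual patient leaves the
    stove on, and the caregiver scores points by responding correctly."""

    def __init__(self):
        self.stove_on = False
        self.patient_distressed = False
        self.score = 0

    # --- events driven by the AI-controlled virtual patient ---
    def patient_turns_on_stove_and_leaves(self):
        self.stove_on = True

    def patient_becomes_distressed(self):
        self.patient_distressed = True

    # --- caregiver tasks drawn from the abstract's task set ---
    def check_stove(self):
        if self.stove_on:
            self.stove_on = False
            self.score += 10  # reward for removing the hazard

    def comfort_patient(self):
        if self.patient_distressed:
            self.patient_distressed = False
            self.score += 5   # reward for attending to the patient

scenario = Scenario()
scenario.patient_turns_on_stove_and_leaves()
scenario.check_stove()
print(scenario.score)  # 10
```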