
Title: Immersive Commodity Telepresence with the AVATRINA Robot Avatar
Immersive robotic avatars have the potential to aid and replace humans in a variety of applications such as telemedicine and search-and-rescue operations, reducing the need for travel and the risk to people working in dangerous environments. Many challenges, such as kinematic differences between people and robots, reduced perceptual feedback, and communication latency, currently limit how well robot avatars can achieve full immersion. This paper presents AVATRINA, a teleoperated robot designed to address some of these concerns and maximize the operator’s capabilities while using a commodity lightweight human–machine interface. Team AVATRINA took 4th place at the recent $10 million ANA Avatar XPRIZE competition, which required contestants to design avatar systems that could be controlled by novice operators to complete various manipulation, navigation, and social interaction tasks. This paper details the components of AVATRINA and the design process that contributed to our success at the competition. We highlight a novel study on one of these components, namely the effects of baseline-interpupillary distance matching and head mobility for immersive stereo vision and hand-eye coordination.
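The baseline-interpupillary distance matching mentioned in the abstract rests on the standard pinhole stereo relation between camera baseline, focal length, and disparity: when the baseline equals the operator's IPD, the camera pair's disparity-to-depth mapping mirrors the operator's own binocular geometry. The sketch below is a minimal, hypothetical illustration of that relation and is not taken from the paper; the function name, the IPD value, the focal length, and the example disparities are all assumptions for demonstration.

```python
# Hypothetical sketch (not from the paper): depth from stereo disparity with the
# camera baseline matched to the operator's interpupillary distance (IPD).
# All numeric values below are assumptions for illustration only.

def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

operator_ipd_m = 0.063          # assumed operator IPD (~63 mm, a typical adult value)
baseline_m = operator_ipd_m     # camera baseline matched to the operator's IPD
focal_px = 800.0                # hypothetical focal length in pixels

for d_px in (40.0, 20.0, 10.0):  # example disparities in pixels
    z = disparity_to_depth(d_px, focal_px, baseline_m)
    print(f"disparity {d_px:5.1f} px -> depth {z:.2f} m")
```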
Award ID(s):
2025782
NSF-PAR ID:
10486597
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
International Journal of Social Robotics
ISSN:
1875-4791
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Efthimiou, E.; Fotinea, S-E.; Hanke, T.; McDonald, J.; Shterionov, D.; Wolfe, R. (Eds.)
    With improved and more easily accessible technology, immersive virtual reality (VR) head-mounted devices have become more ubiquitous. As signing avatar technology improves, virtual reality presents a new and relatively unexplored application for signing avatars. This paper discusses two primary ways that signed language can be represented in immersive virtual spaces: 1) Third-person, in which the VR user sees a character who communicates in signed language; and 2) First-person, in which the VR user produces signed content themselves, tracked by the head-mounted device and visible to the user herself (and/or to other users) in the virtual environment. We will discuss the unique affordances granted by virtual reality and how signing avatars might bring accessibility and new opportunities to virtual spaces. We will then discuss the limitations of signed content in virtual reality concerning virtual signers shown from both third- and first-person perspectives. 
  2. Abstract

    As the metaverse expands, understanding how people use virtual reality to learn and connect is increasingly important. We used the Transformed Social Interaction paradigm (Bailenson et al., 2004) to examine different avatar identities and environments over time. In Study 1 (n = 81), entitativity, presence, enjoyment, and realism increased over 8 weeks. Avatars that resembled participants increased synchrony, i.e., similarities in moment-to-moment nonverbal behaviors between participants. Moreover, self-avatars increased self-presence and realism, but decreased enjoyment, compared to uniform avatars. In Study 2 (n = 137), participants cycled through 192 unique virtual environments. As visible space increased, so did nonverbal synchrony, perceived restorativeness, entitativity, pleasure, arousal, self- and spatial presence, enjoyment, and realism. Outdoor environments increased perceived restorativeness and enjoyment more than indoor environments. Self-presence and realism increased over time in both studies. We discuss the implications of avatar appearance and environmental context for social behavior in classroom settings over time.

     
  3. This paper explores avatar identification in creative storytelling applications where users create their own story and environment. We present a study that investigated the effects of avatar facial similarity to the user on the quality of the story product they create. Child participants told a story using a digital puppet-based storytelling system by interacting with a physical puppet box that was augmented with a real-time video feed of the puppet enactment. We used a facial morphing technique to manipulate avatar facial similarity to the user. The resulting morphed image was applied to each participant's puppet character, thus creating a custom avatar for each child to use in story creation. We hypothesized that the more familiar avatars appeared to participants, the stronger the sense of character identification would be, resulting in higher story quality. The proposed rationale is that visual familiarity may lead participants to draw richer story details from their past real-life experiences. Qualitative analysis of the stories supported our hypothesis. Our results contribute to avatar design in children's creative storytelling applications.
  4. Storytelling is a critical step in the cognitive development of children. In particular, it requires children to mentally project into the story context and to identify with the thoughts of the characters in their stories. We propose to support free imagination in creative storytelling through an enactment-based approach that allows children to embody an avatar and perform as the story character. We designed our story creation interface with two avatar modes, a story-relevant avatar and a self-avatar, to investigate the effects of avatar design on the quality of children’s creative products. In our study with 20 child participants, the results indicate that self-avatars can create a stronger sense of identification and embodied presence, while story-relevant avatars can provide a scaffold for mental projection.
  5. Although Augmented Reality (AR) can be easily implemented with most smartphones and tablets today, the investigation of distance perception with these types of devices has been limited. In this paper, we question whether the distance of a virtual human, e.g., an avatar, seen through a smartphone or tablet display is perceived accurately. Due to the COVID-19 pandemic and increased sensitivity to distances from others, we also investigate whether a coughing avatar, with or without a mask, affects distance estimates compared to a static avatar. We performed an experiment in which all participants estimated the distances to avatars that were either static or coughing, with and without masks on. Avatars were placed at a range of distances that would be typical for interaction, i.e., action space. Data on judgments of distance to the varying avatars were collected in a distributed manner by deploying an app for smartphones. Results showed that participants were fairly accurate in estimating the distance to all avatars, regardless of coughing condition or mask condition. Such findings suggest that mobile AR applications can be used to obtain accurate estimates of distances to virtual others "in the wild," which is promising for using AR in simulations and training applications that require precise distance estimates.