Title: Embodied Expressive Gestures in Telerobots: A Tale of Two Users
Despite their technical advancements, commercially available telerobots are limited in social interaction capabilities for both pilot and local users, specifically in nonverbal communication. Our group hypothesizes that the introduction of expressive gesturing and tangible interaction capabilities (e.g., handshakes, fist bumps) will enhance telerobotic interactions and increase social connection between users. To investigate the affordances to social connection that gestures and tangible interactions provide in telerobot-mediated interactions, we designed and integrated a lightweight manipulator terminating in an anthropomorphic end effector onto a commercially available telerobot (Anybots QB 2.0). Through virtual reality tracking of the pilot user’s arm and hand, expressive gestures and social contact interactions are recreated via the manipulator, enabling a pilot user and a local user to engage in a tangible exchange. To assess the usability and effectiveness of the gesturing system, we present evaluations from both the local and pilot user perspectives. First, we present a validation study to assess usability of the control system by the pilot user. Our results demonstrate that pilot user interactions can be replicated with a greater than 80% pass rate and a mean ease-of-use rating of 7.08 ± 1.32 (out of 10) after brief training. Second, we present a user study to assess the social impacts of (1) using the telerobot without the manipulator from both the pilot user and local user perspectives and (2) using the control system and telerobotic manipulator from both the pilot user and local user perspectives. Results demonstrate that the robot with the manipulator elicited a more positive social experience than the robot without the arm for local users, but no significant difference between conditions for pilot users. Future work will focus on improving the pilot user experience to support social contact interactions.
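The control pipeline described above, in which the pilot's VR-tracked arm pose drives the manipulator, typically reduces to a retargeting step: map the tracked wrist position into the robot's frame and solve inverse kinematics for joint commands. The sketch below is illustrative only, not the authors' implementation; the planar two-link simplification and link lengths are assumptions.

```python
import math

def two_link_ik(x, y, l1=0.30, l2=0.25):
    """Map a tracked wrist position (x, y), expressed in the robot's
    shoulder frame, to (shoulder, elbow) angles in radians for a planar
    2-link arm with link lengths l1 and l2 (meters, assumed values).
    Returns None if the target is outside the reachable workspace."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c_elbow) > 1.0:
        return None  # target out of reach; a real system would clamp
    elbow = math.acos(c_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

A real telerobot controller would additionally track wrist orientation and finger poses for the anthropomorphic end effector, and filter the tracked signal before commanding the arm.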
Award ID(s):
1618926
PAR ID:
10655457
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM Transactions on Human-Robot Interaction
Volume:
12
Issue:
2
ISSN:
2573-9522
Page Range / eLocation ID:
1 to 20
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw 
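The Weber fraction reported in the abstract above is the just-noticeable difference (JND) in stimulus intensity divided by the reference intensity; a value of 11.4% means users reliably detect inflation changes of roughly 11% of the current level. A minimal sketch of the computation follows; the function name and the example reference/JND values are hypothetical, not the study's data.

```python
def weber_fraction(jnd, reference):
    """Weber fraction = just-noticeable difference / reference intensity.
    E.g., 0.114 means ~11.4% changes relative to the reference are detectable."""
    if reference <= 0:
        raise ValueError("reference intensity must be positive")
    return jnd / reference

# Hypothetical measurements: (JND, reference) pairs, e.g., inflation in kPa.
measurements = [(1.0, 10.0), (2.3, 20.0), (3.6, 30.0)]
mean_wf = sum(weber_fraction(j, r) for j, r in measurements) / len(measurements)
```

In practice the per-reference JNDs would come from a psychophysical staircase or constant-stimuli procedure, and the fractions would be averaged across participants.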
  2. Mixed Reality visualizations provide a powerful new approach for enabling gestural capabilities on non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). Specifically, we present the results of a within-subjects Mixed Reality HRI experiment (N=23) exploring the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show a clear trade-off between performance and social perception, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability.
  3. Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance nonverbal capabilities for armless robots, this article presents two studies that explore two different categories of mixed reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the tradeoffs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that in fact these different gestures should not be viewed as mutually exclusive alternatives, and that by using them together, robots can achieve both task-oriented and social benefits. 
  4. Tele-operated social robots (telerobots) offer an innovative means of allowing children who are medically restricted to their homes (MRH) to return to their local schools and physical communities. Most commercially available telerobots have three foundational features that facilitate child–robot interaction: remote mobility, synchronous two-way vision capabilities, and synchronous two-way audio capabilities. We conducted a comparative analysis between the Toyota Human Support Robot (HSR) and commercially available telerobots, focusing on these foundational features. Children who used these robots and these features on a daily basis to attend school were asked to pilot the HSR in a simulated classroom for learning activities. As the HSR has three additional features that are not available on commercial telerobots: (1) pan-tilt camera, (2) mapping and autonomous navigation, and (3) robot arm and gripper for children to “reach” into remote environments, participants were also asked to evaluate the use of these features for learning experiences. To expand on earlier work on the use of telerobots by remote children, this study provides novel empirical findings on (1) the capabilities of the Toyota HSR for robot-mediated learning similar to commercially available telerobots and (2) the efficacy of novel HSR features (i.e., pan-tilt camera, autonomous navigation, robot arm/hand hardware) for future learning experiences. We found that among our participants, autonomous navigation and arm/gripper hardware were rated as highly valuable for social and learning activities. 
  5. Social touch provides a rich non-verbal communication channel between humans and robots. Prior work has identified a set of touch gestures for human-robot interaction and described them with natural language labels (e.g., stroking, patting). Yet, no data exists on the semantic relationships between the touch gestures in users’ minds. To endow robots with touch intelligence, we investigated how people perceive the similarities of social touch labels from the literature. In an online study, 45 participants grouped 36 social touch labels based on their perceived similarities and annotated their groupings with descriptive names. We derived quantitative similarities of the gestures from these groupings and analyzed the similarities using hierarchical clustering. The analysis resulted in 9 clusters of touch gestures formed around the social, emotional, and contact characteristics of the gestures. We discuss the implications of our results for designing and evaluating touch sensing and interactions with social robots. 
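The similarity analysis in the last abstract, deriving quantitative similarities from participants' groupings, can be sketched as pairwise co-occurrence: the similarity of two touch labels is the fraction of participants who placed them in the same group. The sketch below is an illustration under that assumption (the function name and example labels are hypothetical); the study then applies hierarchical clustering to the corresponding distances (1 − similarity).

```python
from collections import defaultdict
from itertools import combinations

def pairwise_similarity(groupings, labels):
    """Estimate label-pair similarity as the fraction of participants who
    placed both labels in the same group. `groupings` is a list with one
    entry per participant; each entry is a list of groups of labels."""
    counts = defaultdict(int)
    for groups in groupings:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    n = len(groupings)
    # Similarity for every label pair, zero if never grouped together.
    return {pair: counts[pair] / n for pair in combinations(sorted(labels), 2)}
```

Feeding the distances 1 − similarity to an agglomerative routine (e.g., SciPy's `scipy.cluster.hierarchy.linkage`) yields a dendrogram that can be cut into clusters like the nine reported in the abstract.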