

This content will become publicly available on August 13, 2025

Title: Understanding the Influence of Fatigue on Full Arm Gestures in Augmented Reality Environments

This research investigates the impact of fatigue on full-arm gestures in augmented reality (AR) environments. Through analysis of the gathered data, we aim to develop a comprehensive understanding of the constraints and characteristics affecting arm-gesture performance when individuals are fatigued. We found that prolonged engagement in full-arm gestures under fatigue reduced muscle strength in the upper body segments, and that this decline in turn caused a notable drop in gesture-detection accuracy in the AR environment, from an initial 97.7% to 75.9%. We also found that changes in torso movement can have a ripple effect on the upper-arm and forearm regions. This knowledge will enable us to refine our gesture detection algorithms, improving their precision and accuracy even under fatigue.

 
PAR ID:
10532913
Publisher / Repository:
SAGE Publications
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
68
Issue:
1
ISSN:
1071-1813
Format(s):
Medium: X
Size(s):
p. 1194-1199
Sponsoring Org:
National Science Foundation
More Like This
  1.
    Mixed Reality visualizations provide a powerful new approach for enabling gestural capabilities on non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). Specifically, we present the results of a within-subjects Mixed Reality HRI experiment (N=23) exploring the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show a clear trade-off between performance and social perception, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. 
  2. Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance nonverbal capabilities for armless robots, this article presents two studies that explore two different categories of mixed reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the tradeoffs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that in fact these different gestures should not be viewed as mutually exclusive alternatives, and that by using them together, robots can achieve both task-oriented and social benefits. 
  3. Understanding abstract concepts in mathematics has long presented a challenge, but directed and spontaneous gestures have been shown to support learning and ground higher-order thought. Within embodied learning, gesture has been investigated as part of a multimodal assemblage with speech and movement, centering the body in interaction with the environment. We present a case study of one dyad undertaking a robotic arm activity targeting learning outcomes in matrix algebra, robotics, and spatial thinking. Through a body syntonicity lens, and drawing on video and pre- and post-assessment data, we evaluate learning gains and investigate the multimodal processes contributing to them. We found that gesture, speech, and body movement grounded understanding of vector and matrix operations, spatial reasoning, and robotics, anchored by the physical robotic arm, with implications for the design of learning environments that employ directed gestures.
  4. Mixed reality visualizations provide a powerful new approach for enabling gestural capabilities for non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arrow positioned over the robot (an ego-sensitive allocentric gesture). We explore the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. We conducted a 24-participant within-subjects experiment in which a HoloLens-wearing participant interacted with a robot that used these two types of gestures to refer to objects at two different distances. Our results demonstrate a clear trade-off between performance and social perception: non-ego-sensitive allocentric gestures led to quicker reaction times and higher accuracy, but ego-sensitive gestures led to higher perceived social presence, anthropomorphism, and likability. These results present a challenging design decision for creators of mixed-reality robotic systems.
  5. Abstract

    Human-robot collaboration (HRC) is a challenging task in modern industry, and gesture communication in HRC has attracted much interest. This paper proposes and demonstrates a dynamic gesture recognition system based on Motion History Images (MHI) and Convolutional Neural Networks (CNN). First, ten dynamic gestures are designed for a human worker to communicate with an industrial robot. Second, the MHI method is adopted to extract gesture features from video clips and generate static images of the dynamic gestures as inputs to the CNN. Finally, a CNN model is constructed for gesture recognition. The experimental results show very promising classification accuracy with this method.
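    The MHI step in this pipeline is simple enough to sketch: each new frame's motion mask stamps moving pixels with the current time, while pixels without recent motion decay to zero, so an entire clip collapses into one grayscale image whose brightness encodes how recently each pixel moved. The Python sketch below illustrates the idea under our own assumptions; the differencing threshold, decay duration, 64x64 input size, and the small PyTorch CNN are illustrative stand-ins, not the paper's reported parameters or architecture.

        import numpy as np
        import torch.nn as nn

        def update_mhi(mhi, motion_mask, timestamp, duration):
            # Standard MHI update: moving pixels take the current timestamp;
            # pixels whose last motion is older than `duration` are cleared.
            mhi[motion_mask] = timestamp
            mhi[~motion_mask & (mhi < timestamp - duration)] = 0.0
            return mhi

        def clip_to_mhi(frames, diff_thresh=30, duration=20):
            # Collapse a clip (a list of grayscale uint8 frames) into a single
            # MHI, rescaled to a 0-255 image suitable as CNN input.
            mhi = np.zeros(frames[0].shape, dtype=np.float32)
            for t in range(1, len(frames)):
                diff = np.abs(frames[t].astype(np.int16) - frames[t - 1].astype(np.int16))
                mhi = update_mhi(mhi, diff > diff_thresh, float(t), duration)
            t_final = len(frames) - 1
            img = np.clip((mhi - (t_final - duration)) / duration, 0.0, 1.0)
            return (img * 255).astype(np.uint8)  # brighter = more recent motion

        class GestureCNN(nn.Module):
            # Minimal CNN over single-channel 64x64 MHI images; the ten output
            # classes match the ten designed gestures. Layer sizes are guesses.
            def __init__(self, num_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

    Training then reduces to ordinary image classification on the MHI images, which is what makes the MHI-plus-CNN decomposition attractive: the temporal dynamics of a gesture are folded into a single static image, so an off-the-shelf 2D CNN can perform the recognition.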

     