
Title: Eye-Tracking in Physical Human–Robot Interaction: Mental Workload and Performance Prediction
Background

In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

Objective

The aim of this study was to test the sensitivity and reliability of eye-tracking measures to variations in task difficulty, as well as their ability to predict performance, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation.

Methods

Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.

Results

Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.
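A minimal sketch of how these measures can be computed and used for per-trial prediction follows, assuming gaze coordinates normalized to [0, 1], an 8x8 spatial grid, and a logistic-regression classifier. The paper's actual pipeline, feature set, and model are not published here, so every name and parameter below is illustrative.

```python
# Illustrative sketch only: grid size, features, and classifier are
# assumptions, not the authors' published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import cross_val_predict

def stationary_gaze_entropy(gx, gy, bins=8):
    """Shannon entropy (bits) of the gaze-position distribution over a grid.

    gx, gy: gaze coordinates for one trial, normalized to [0, 1].
    bins: grid cells per axis (an assumed discretization).
    """
    hist, _, _ = np.histogram2d(gx, gy, bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()        # probability of visiting each cell
    p = p[p > 0]                         # 0 * log(0) is treated as 0
    return -np.sum(p * np.log2(p))       # H_s = -sum_i p_i log2 p_i

def trial_features(trial):
    """Per-trial feature vector: [SGE, mean pupil diameter]."""
    return [stationary_gaze_entropy(trial["gx"], trial["gy"]),
            np.mean(trial["pupil_mm"])]

def evaluate(trials):
    """Cross-validated success/failure prediction from eye-tracking features.

    trials: list of dicts with keys "gx", "gy", "pupil_mm", "success"
    (a hypothetical data layout).
    """
    X = np.array([trial_features(t) for t in trials])
    y = np.array([t["success"] for t in trials])
    pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
    # Sensitivity = recall on the positive class; accuracy over all trials.
    return recall_score(y, pred), accuracy_score(y, pred)
```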

Conclusion

The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task-difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely to be strong contributors to workload.

Application

Future collaborative robots can adapt to the human's cognitive state and skill level, measured using eye-tracking metrics of workload and visual attention.

 
NSF-PAR ID:
10467647
Publisher / Repository:
SAGE Publications
Journal Name:
Human Factors: The Journal of the Human Factors and Ergonomics Society
ISSN:
0018-7208
Sponsoring Org:
National Science Foundation
More Like this
  1. In the realm of virtual reality (VR) research, the synergy of methodological advancement, technical innovation, and novel application is paramount. Our work encapsulates these facets in the context of spatial ability assessments conducted within a VR environment. This paper presents a comprehensive, integrated framework of VR, eye-tracking, and electroencephalography (EEG) that combines measurement of participants' behavioral performance with simultaneous collection of time-stamped eye-tracking and EEG data, enabling an understanding of how spatial ability is affected under certain conditions and whether such conditions demand increased attention and allocation of mental resources. The framework encompasses measurement of participants' gaze patterns (e.g., fixations and saccades), EEG data (e.g., Alpha, Beta, Gamma, and Theta wave patterns), and psychometric and behavioral test performance. On the technical front, we used the Unity 3D game engine as the core for running our spatial ability tasks, simulating altered conditions of space exploration. We simulated two types of space-exploration conditions: (1) a microgravity condition in which participants' idiotropic (body) axis is statically and dynamically misaligned with their visual axis; and (2) a Martian-terrain condition that offers a visual frame of reference (FOR) but only limited and unfamiliar landmark objects. We specifically targeted the assessment of human spatial ability and spatial perception. To assess spatial ability, we digitized the Purdue Spatial Visualization Test: Rotations (PSVT:R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test and integrated them into the VR settings to evaluate participants' spatial visualization, spatial relations, and spatial orientation abilities, respectively. For spatial perception, we applied digitized versions of size- and distance-perception tests to measure participants' subjective perception of size and distance. A suite of C# scripts orchestrated the VR experience, enabling real-time data collection and synchronization. This technical innovation includes the integration of data streams from diverse sources, such as VIVE controllers, eye-tracking devices, and EEG hardware, to ensure a cohesive and comprehensive dataset. A pivotal challenge in our research was synchronizing data from EEG, eye tracking, and VR tasks to facilitate comprehensive analysis. To address this challenge, we employed the Unity interface of the OpenSync library, a tool designed to unify disparate data sources in the fields of psychology and neuroscience. This approach ensures that all collected measures share a common time reference, enabling meaningful analysis of participant performance, gaze behavior, and EEG activity. The Unity-based system seamlessly incorporates task parameters, participant data, and VIVE controller inputs, providing a versatile platform for conducting assessments in diverse domains. In the end, we collected synchronized measurements of participants' scores on the behavioral tests of spatial ability and spatial perception, their gaze data, and their EEG data. In this paper, we present the whole process of combining the eye-tracking and EEG workflows with the VR settings and collecting the relevant measurements. We believe that our work not only advances the state of the art in spatial ability assessment but also underscores the potential of virtual reality as a versatile tool in cognitive research, therapy, and rehabilitation.
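    The cross-stream synchronization described above amounts to putting every sample on one reference clock and joining by timestamp. The sketch below shows that general idea with pandas; the column names, sampling rates, and merge tolerance are hypothetical, and this is not the OpenSync API itself.

    ```python
    # Hypothetical illustration of merging two time-stamped streams on a shared
    # clock. OpenSync/LSL-style tooling provides the common timestamps; this is
    # not OpenSync's actual API.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    eeg = pd.DataFrame({"t": np.arange(0.0, 10.0, 1 / 256)})    # 256 Hz EEG
    eeg["alpha_power"] = rng.random(len(eeg))

    gaze = pd.DataFrame({"t": np.arange(0.0, 10.0, 1 / 120)})   # 120 Hz gaze
    gaze["fix_x"] = rng.random(len(gaze))

    # For each gaze sample, attach the nearest preceding EEG sample; the
    # tolerance keeps samples from being glued across recording dropouts.
    merged = pd.merge_asof(gaze, eeg, on="t",
                           direction="backward", tolerance=1 / 256 + 1e-9)
    print(merged.head())
    ```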

     
  2. Background

    Spatial problem-solving is an essential skill for success in many engineering disciplines; thus, understanding the cognitive processes involved could help inform the design of training interventions for students trying to improve this skill. Prior research has yet to investigate how cognitive processes differ between spatial tasks during problem-solving, in a way that could inform timely feedback for learners.

    Purpose/Hypothesis

    In this study, we investigated how different spatial tasks change the cognitive processes and problem‐solving strategies used by engineering students with low spatial ability.

    Design/Method

    Study participants completed mental rotation and mental cutting tasks of high and low difficulty. Eye‐tracking data were collected and categorized as encoding, transformation, and confirmation cognitive processes. The adoption of either a holistic or piecemeal strategy and response accuracy were also measured.

    Results

    Mental rotation was found to have a higher number of fixations for each cognitive process than the mental cutting task. The holistic strategy was used in both difficulty levels of the mental cutting task, while the piecemeal strategy was adopted for the mental rotation task at a high difficulty level. Only encoding fixations were significantly correlated with accuracy and most strongly correlated with strategy.

    Conclusion

    Encoding is an important cognitive process that could affect subsequent cognitive processes and strategies and could, thus, play an important role in performance. Future development in spatial training should consider how to enhance encoding to aid students with low spatial ability. Educators can utilize gaze metrics and empirical research to provide tailored and timely feedback to learners.
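    As a rough illustration of the reported link between encoding fixations and accuracy, the sketch below counts fixations per labeled cognitive process and correlates encoding counts with trial correctness. The data layout and the use of a Pearson (point-biserial) correlation are assumptions for illustration, not the study's exact analysis.

    ```python
    # Illustrative sketch, assuming fixations are already labeled by process.
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-trial records: fixation labels plus 0/1 correctness.
    trials = [
        {"labels": ["encoding", "transformation", "encoding", "confirmation"], "correct": 1},
        {"labels": ["encoding", "transformation", "confirmation"], "correct": 1},
        {"labels": ["encoding", "transformation"], "correct": 0},
        {"labels": ["transformation", "confirmation"], "correct": 0},
    ]

    encoding_counts = np.array([t["labels"].count("encoding") for t in trials])
    correct = np.array([t["correct"] for t in trials])

    # Pearson r between a count and a binary outcome is the point-biserial r.
    r, p = pearsonr(encoding_counts, correct)
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```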

     
  3. Changes in task demands can have delayed adverse impacts on performance. This phenomenon, known as the workload history effect, is of particular concern in dynamic work domains where operators manage fluctuating task demands. The existing workload history literature does not paint a consistent picture of how these effects manifest, prompting researchers to consider measures that are informative about the operator's underlying process. One promising measure is visual attention patterns, given how much they reveal about various cognitive processes. To explore their ability to explain workload history effects, participants completed a task in an unmanned aerial vehicle command-and-control testbed in which workload transitioned both gradually and suddenly. The participants' performance and visual attention patterns were studied over time to identify workload history effects. The eye-tracking analysis used a recently developed metric called coefficient K, which indicates whether visual attention is more focal or ambient. The performance results showed workload history effects, but they depended on the workload level, the time elapsed, and the performance measure. The eye-tracking analysis suggested that performance suffered when focal attention was deployed during low workload, an unexpected finding. Taken together, these results suggest that unexpected visual attention patterns can impact performance both immediately and over time. Further research is needed; however, this work shows the value of including a real-time visual attention measure, such as coefficient K, as a means to understand how operators manage varying task demands in complex work environments.
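    Coefficient K contrasts each fixation's standardized duration with the standardized amplitude of the saccade that follows it; positive values indicate focal and negative values ambient attention. A minimal sketch of that computation follows, assuming fixation durations and following-saccade amplitudes are already extracted and aligned.

    ```python
    # Minimal sketch of coefficient K: K_i = z(duration_i) - z(amplitude_{i+1}),
    # averaged over fixations. Positive K -> focal, negative K -> ambient.
    import numpy as np

    def coefficient_k(fix_durations, next_saccade_amplitudes):
        """fix_durations: duration of each fixation (e.g., seconds).
        next_saccade_amplitudes: amplitude of the saccade following each
        fixation (e.g., degrees); both arrays aligned and equal length.
        """
        d = np.asarray(fix_durations, dtype=float)
        a = np.asarray(next_saccade_amplitudes, dtype=float)
        z_d = (d - d.mean()) / d.std()
        z_a = (a - a.mean()) / a.std()
        return float(np.mean(z_d - z_a))
    ```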
  4. Understanding the human motor control strategy during physical interaction tasks is crucial for developing future robots for physical human–robot interaction (pHRI). In physical human–human interaction (pHHI), small interaction forces are known to convey intent between partners for effective motor communication. The aim of this work is to investigate what affects the human's sensitivity to externally applied interaction forces. The hypothesis is that one way the small interaction forces are sensed is through the movement of the arm and the resulting proprioceptive signals. A pHRI setup was used to apply small interaction forces to the hand of seated participants in one of four directions, and the participants were asked to identify the direction of the push while blindfolded. The results show that participants' ability to correctly report the direction of the interaction force was lower with low interaction forces as well as with high muscle contraction. Sensitivity to the interaction force direction increased with the radial displacement of the participant's hand from the initial position: the farther they moved, the more accurate their responses were. It was also observed that the estimated stiffness of the arm varied with the level of muscle contraction and the robot's interaction force.
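    The arm-stiffness estimate mentioned at the end can be illustrated with the usual linear spring model, F = Kx, along the perturbation direction. The least-squares fit below is a simplified scalar version under that assumption; the paper's actual estimation procedure is not specified here.

    ```python
    # Simplified scalar endpoint-stiffness estimate under a linear spring model
    # F = K * x (an illustrative assumption, not the paper's method).
    import numpy as np

    def estimate_stiffness(forces_n, displacements_m):
        """Least-squares K from paired force (N) and displacement (m) samples,
        both projected on the perturbation direction."""
        f = np.asarray(forces_n, dtype=float)
        x = np.asarray(displacements_m, dtype=float)
        # Minimize sum (f - K x)^2  =>  K = (x . f) / (x . x)
        return float(np.dot(x, f) / np.dot(x, x))

    # Example: ~5 N pushes producing ~1 cm displacements => K near 500 N/m.
    print(estimate_stiffness([4.8, 5.1, 5.0], [0.010, 0.0105, 0.0098]))
    ```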
  5. Effective physical human-robot interaction (pHRI) depends on how well humans can communicate their movement intentions to others. While it is speculated that small interaction forces carry significant information conveying specific movement intentions in physical human-human interaction (pHHI), the underlying mechanism by which humans infer intention from such small forces is largely unknown. The hypothesis in this work is that sensitivity to a small interaction force applied at the hand is affected by the movement of the arm, which in turn is affected by arm stiffness. A haptic robot was used to apply endpoint interaction forces to the arm of seated human participants, who were asked to determine which of four directions the robot's interaction force was applied in, without visual feedback. The level of interaction force and the level of arm muscle contraction were both varied. The results imply that humans' ability to identify and respond to the correct direction of small interaction forces was lower when their arm movement was more closely aligned with the force direction. In addition, sensitivity to the direction of the small interaction force was high when arm stiffness was low. It is also speculated that humans lower their arm stiffness to be more sensitive to smaller interaction forces. These results will help develop human-like pHRI systems for various applications.