Title: Predictive Power of Pupil Dynamics in a Team Based Virtual Reality Task
Assessing and tracking the physiological and cognitive states of multiple individuals interacting in virtual environments is of increasing interest to the virtual reality (VR) community. In this paper, we describe a team-based VR task, termed the Apollo Distributed Control Task (ADCT), in which individuals, each controlling a single independent degree of freedom and given only a limited view of the environment, must work together to guide a virtual spacecraft back to Earth. Novel to the experiment is that 1) we simultaneously collect multiple physiological measures, including electroencephalography (EEG), pupillometry, speech signals, and individuals' actions, and 2) we regulate the difficulty of the task and the type of communication between teammates. Focusing on the analysis of pupil dynamics, which have been linked to a number of cognitive and physiological processes such as arousal, cognitive control, and working memory, we find that changes in pupil diameter are predictive of multiple task-related dimensions, including the difficulty of the task, the role of the team member, and the type of communication.
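The abstract reports only the finding, not the analysis code. As a rough illustration, the sketch below shows how per-trial pupil-diameter traces might be summarized into features and used to classify task difficulty; the feature set, sampling rate, placeholder data, and logistic-regression model are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: classifying task difficulty from pupil-diameter dynamics.
# Hypothetical setup -- assumes per-trial pupil traces sampled at 60 Hz.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def pupil_features(trace):
    """Summarize one trial's pupil-diameter trace (mm)."""
    baseline = trace[:60].mean()          # first second as baseline
    evoked = trace - baseline
    return np.array([
        evoked.mean(),                    # mean dilation
        evoked.max(),                     # peak dilation
        np.argmax(evoked) / 60.0,         # time-to-peak (s)
        np.diff(trace).std(),             # variability of change
    ])

# Placeholder data: 100 trials x 10 s at 60 Hz, with binary difficulty labels.
traces = rng.normal(3.5, 0.3, size=(100, 600))
difficulty = rng.integers(0, 2, size=100)

X = np.vstack([pupil_features(t) for t in traces])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, difficulty, cv=5).mean())
```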
Award ID(s):
1934968
NSF-PAR ID:
10397444
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Page Range / eLocation ID:
592 to 593
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

    In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

    Objective

    The aim of this study was to test eye-tracking measures’ sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation.

    Methods

    Participants (9M, 9F) learned to co-perform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of the eye-tracking measures and their ability to predict performance were investigated.

    Results

    Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.

    Conclusion

    The sensitivity and reliability of the eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behavior indicative of visual monitoring strategies were most sensitive to task-difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely to be strong contributors to workload.

    Application

    Future collaborative robots could adapt to the human's cognitive state and skill level, measured through eye-tracking metrics of workload and visual attention.
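    The Results above single out Stationary Gaze Entropy (SGE) as a reliable workload measure. The following minimal sketch computes SGE under its common definition, the Shannon entropy of the fixation distribution over areas of interest (AOIs); the AOI labels and example data are hypothetical, not the study's.

```python
# Sketch: Stationary Gaze Entropy (SGE) over areas of interest (AOIs).
# Assumes each fixation has already been assigned an AOI label.
import numpy as np

def stationary_gaze_entropy(aoi_labels):
    """Shannon entropy (bits) of the fixation distribution across AOIs."""
    _, counts = np.unique(aoi_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Example: fixations concentrated on the robot suggest a monitoring
# strategy and yield lower entropy than evenly spread gaze.
monitoring = ["robot"] * 8 + ["object", "target"]
scanning = ["robot", "object", "target", "path", "hand"] * 2
print(stationary_gaze_entropy(monitoring))  # lower
print(stationary_gaze_entropy(scanning))    # higher
```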

     
  2. Abstract

    Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with the EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier integrating EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify the target vs. distractor objects to which the subject orients.
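    For readers unfamiliar with HDCA, the sketch below gives a two-level, HDCA-style classifier: per-time-window spatial discriminators over EEG, whose window scores are then combined with pupil and dwell-time features. Dimensions, window length, placeholder data, and the in-sample evaluation are illustrative assumptions, not the paper's exact method.

```python
# Sketch of an HDCA-style two-level classifier over epoched EEG plus
# per-trial pupil and dwell-time features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_trials, n_chan, n_time, win = 200, 32, 100, 20

eeg = rng.normal(size=(n_trials, n_chan, n_time))   # trials x channels x samples
pupil = rng.normal(size=(n_trials, 1))              # e.g., peak dilation per trial
dwell = rng.normal(size=(n_trials, 1))              # dwell time per trial
y = rng.integers(0, 2, size=n_trials)               # target vs. distractor

# Level 1: one spatial discriminator per time window -> scalar score per trial.
scores = []
for start in range(0, n_time, win):
    Xw = eeg[:, :, start:start + win].mean(axis=2)  # window-mean per channel
    lr = LogisticRegression(max_iter=1000).fit(Xw, y)
    scores.append(lr.decision_function(Xw))
level1 = np.column_stack(scores)

# Level 2: combine the window scores with pupil and dwell-time features.
X2 = np.hstack([level1, pupil, dwell])
clf = LogisticRegression(max_iter=1000).fit(X2, y)

# In-sample AUC for brevity; a real analysis would cross-validate.
print("AUC:", roc_auc_score(y, clf.decision_function(X2)))
```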

     
  3. A solid understanding of electromagnetic (E&M) theory is key to the education of electrical engineering students. However, these concepts are notoriously challenging for students to learn, due to the difficulty of grasping abstractions such as the electric force, an invisible force acting at a distance, or the way electromagnetic radiation permeates and propagates through space. Building the physical intuition needed to manipulate these abstractions requires a means of visualizing them in three-dimensional space. This project develops 3D visualizations of abstract E&M concepts in Virtual Reality (VR), in an immersive, exploratory, and engaging environment. VR provides the means of exploration: visuals and manipulable objects are constructed to represent knowledge. This leads to a constructivist way of learning, in the sense that students build their own knowledge from meaningful experiences. In addition, the VR labs avoid the cost of hands-on labs by recreating the experiments and experiences on VR platforms.

    The development of the VR labs for E&M courses involves four distinct phases: (I) Lab Design, (II) Experience Design, (III) Software Development, and (IV) User Testing. During phase I, the learning goals and possible outcomes are clearly defined, to provide context for the VR laboratory experience and to identify technical constraints pertaining to the specific laboratory exercise. During phase II, the environment (the world) the player (user) will experience is designed, along with foundational elements such as ways of navigation, key actions, and immersion elements. During phase III, the software is developed, either as part of course projects for the Virtual Reality course taught in the Computer Science Department at the same university or as part of independent research projects involving engineering students. This reflects the strong educational impact of the project, as it allows students to contribute to the educational experiences of their peers. During phase IV, the VR experiences are played by different audiences that fit the player type; the team collects feedback and, if needed, implements changes.

    The pilot VR lab, introduced as an additional instructional tool for the E&M course during Fall 2019, engaged over 100 students in the program: in addition to the regular lectures, students attended one hour per week in the E&M VR lab. Student competencies in conceptual understanding of electromagnetism topics are measured via formative and summative assessments. To evaluate the effectiveness of VR learning, each lab is followed by a 10-minute multiple-choice test designed to measure conceptual understanding of the various topics rather than the ability to simply manipulate equations. This paper discusses the implementation and pedagogy of the VR laboratory experiences for visualizing concepts in E&M, with examples from specific labs, as well as the challenges encountered and student feedback on the new approach. We also discuss the integration of the 3D visualizations into lab exercises and the design of the student assessment tools used to measure knowledge gain when the VR technology is employed.
  4. INTRODUCTION: Apollo-11 (A-11) was the first manned space mission to successfully bring astronauts to the moon and return them safely. Effective team-based communication is required for mission specialists to work collaboratively to learn, engage, and solve complex problems. As part of NASA's goal of assessing team and mission success, all vital speech communications between these personnel were recorded using the multi-track SoundScriber system onto analog tapes, preserving their contribution to one of the greatest achievements in human history. More than 400 personnel served as mission specialists/support, communicating across 30 audio loops and producing more than 9,000 hours of data for A-11. To ensure the success of the mission, teams had to communicate, learn, and address problems in a timely manner. Previous research has found that the compatibility of individual personalities within teams is important for effective collaboration. Hence, it is essential to identify each speaker's role during an Apollo mission and to analyze group communications for knowledge exchange and problem solving toward a common goal. Assessing and analyzing speaker roles during the mission opens the way to engagement analysis in multi-party speaker situations.

    METHOD: The UTDallas Fearless Steps Apollo data comprises 19,000 hours (A-11, A-13, A-1) and poses unique and multiple challenges, as it is characterized by severe noise and degradation as well as overlapping speech across the 30 channels. For our study, we selected a subset of 100 hours manually transcribed by professional annotators for speaker labels, drawn from three mission-critical events: 1. Lift-Off (25 hours), 2. Lunar-Landing (50 hours), and 3. Lunar-Walking (25 hours). Five channels of interest, out of the 30, were selected for having the most speech activity; the primary speakers operating these five channels are the commanders/owners of those channels. For our analysis, we consider five speaker roles: Flight Director (FD), Capsule Communicator (CAPCOM), Guidance, Navigation, and Control (GNC), Electrical, Environmental, and Consumables Manager (EECOM), and Network (NTWK). To track and tag individual speakers across the Fearless Steps audio dataset, we use a 'Where's Waldo' approach to identify all instances of our speakers of interest within a cluster of other speakers. To understand the roles of these speakers of interest, we use primary- versus secondary-speaker duration and speaker turns as metrics to determine each speaker's role and responsibility during the three critical phases of the mission. This enables a content-linking capability and provides a pathway to analyzing group engagement, the group dynamics of people working together in an enclosed space, psychological effects, and cognition in such individuals.

    IMPACT: NASA's Apollo Program stands as one of the most significant contributions to humankind. This collection opens new research options for recognizing team communication, group dynamics, and human engagement/psychology for future deep-space missions. Analyzing team communications toward such goals would allow for the formulation of educational and training technologies for the assessment of STEM knowledge, task learning, and educational feedback. Identifying these personnel can also help pay tribute and give personal recognition to the hundreds of notable engineers and scientists who made this feat possible.

    ILLUSTRATION: In this work, we propose to illustrate how a pre-trained speech/language network can be used to obtain the powerful speaker embeddings needed for speaker diarization. This framework builds on these learned embeddings to label unique speakers over sustained audio streams. To train and test our system, we make use of the Fearless Steps Apollo corpus, allowing us to leverage a limited labeling resource effectively (100 hours of labeled data out of more than 9,000 hours). Furthermore, we use the 'Where's Waldo' concept to identify key speakers of interest (SOI) throughout the Apollo-11 mission audio across multiple channel audio streams.
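    As a hedged illustration of the 'Where's Waldo' tagging and the duration/turn metrics described above, the sketch below matches segment embeddings against enrolled speakers of interest. `embed_segment` is a hypothetical stand-in for any pretrained speaker-embedding network (e.g., an x-vector/ECAPA-style model), and the enrollment data, segment duration, and threshold are illustrative.

```python
# Sketch: tagging speakers of interest (SOI) across long audio streams.
import numpy as np

def embed_segment(audio_segment):
    """Hypothetical: map an audio segment to a unit-norm speaker embedding.
    In practice this would call a pretrained embedding network."""
    vec = np.resize(np.asarray(audio_segment, dtype=float), 192)
    return vec / (np.linalg.norm(vec) + 1e-9)

def tag_speakers(segments, enrollment, threshold=0.6):
    """Label each segment with the closest enrolled role
    (FD, CAPCOM, GNC, EECOM, NTWK) or 'other' if similarity is low."""
    roles = list(enrollment)
    protos = np.vstack([embed_segment(enrollment[r]) for r in roles])
    labels = []
    for seg in segments:
        sims = protos @ embed_segment(seg)   # cosine similarity (unit norms)
        best = int(np.argmax(sims))
        labels.append(roles[best] if sims[best] >= threshold else "other")
    return labels

def role_metrics(labels, seg_dur=2.0):
    """Speaking time and turn counts per role -- the metrics used above
    to characterize primary vs. secondary speakers."""
    durations, turns, prev = {}, {}, None
    for lab in labels:
        durations[lab] = durations.get(lab, 0.0) + seg_dur
        if lab != prev:
            turns[lab] = turns.get(lab, 0) + 1
        prev = lab
    return durations, turns
```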
  5. Purpose of Study: Assessment of an individual's postural stability serves as an indirect measure of both the physiological and biomechanical stresses placed on that individual. More recently, some individuals have been identified with neurological complaints after COVID-19 (SARS-CoV-2) infection (Post-Acute Sequelae of COVID - PASC). These individuals can also be predisposed to decreased postural stability and an increased risk of falls. The purpose of the project was to incorporate two different wearable technologies, a virtual reality (VR)-based virtual immersive sensorimotor test (VIST) and a pressure sensor-based smart sock, to assess postural stability in healthy individuals and individuals with PASC and to quantify the overall status of the postural control system.

    Methods Used: All methods were conducted under the University's Institutional Review Board approval (IRB# 21-296) with informed consent. A total of 12 males and females (six healthy and six with self-reported complaints of PASC) have completed the study so far. All participants were tested using the VIST while standing on a force platform and simultaneously wearing the smart sock. The VIST uses a VR headset and proprietary software to test an individual's integrated sensory, motor, and cognitive processes through eight unique tests (smooth pursuits, saccades, convergence, peripheral vision, object discrimination, gaze stability, head-eye coordination, and cervical neuromotor control). Center of pressure (COP) data from the force platform and pressure data from the smart socks were used to calculate anterior-posterior and medial-lateral postural sway variables. These postural sway variables were analyzed using independent samples t-tests between the healthy and PASC groups with alpha set at 0.05. Summary of
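    To make the sway analysis concrete, the sketch below computes common anterior-posterior and medial-lateral COP sway variables and runs the independent samples t-test described above; the specific variable choices, sampling rate, and placeholder values are assumptions, not the study's protocol.

```python
# Sketch: COP-based postural sway variables and a group comparison.
import numpy as np
from scipy import stats

def sway_variables(cop_ap, cop_ml, fs=100.0):
    """Common COP summary measures (units follow the input, e.g., mm)."""
    ap = cop_ap - cop_ap.mean()                       # anterior-posterior
    ml = cop_ml - cop_ml.mean()                       # medial-lateral
    path = np.sum(np.hypot(np.diff(ap), np.diff(ml))) # total sway path
    return {
        "ap_rms": float(np.sqrt(np.mean(ap**2))),
        "ml_rms": float(np.sqrt(np.mean(ml**2))),
        "ap_range": float(ap.max() - ap.min()),
        "ml_range": float(ml.max() - ml.min()),
        "mean_velocity": float(path / (len(ap) / fs)),
    }

# Independent samples t-test on one variable, e.g., AP RMS (alpha = 0.05).
rng = np.random.default_rng(2)
healthy = rng.normal(4.0, 1.0, size=6)   # placeholder AP-RMS values (mm)
pasc = rng.normal(5.5, 1.2, size=6)
t, p = stats.ttest_ind(healthy, pasc)
print(f"t = {t:.2f}, p = {p:.3f}")
```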