Title: Integrated Eye-Tracking and EEG Data Collection and Synchronization for Virtual Reality-Based Spatial Ability Assessments
In the realm of virtual reality (VR) research, the synergy of methodological advancements, technical innovation, and novel applications is paramount. Our work encapsulates these facets in the context of spatial ability assessments conducted within a VR environment. This paper presents a comprehensive, integrated framework of VR, eye tracking, and electroencephalography (EEG) that combines measurement of participants' behavioral performance with simultaneous collection of time-stamped eye-tracking and EEG data, enabling us to examine how spatial ability is affected under certain conditions and whether those conditions demand increased attention and mental resource allocation. The framework encompasses the measurement of participants' gaze patterns (e.g., fixations and saccades), EEG data (e.g., Alpha, Beta, Gamma, and Theta wave patterns), and psychometric and behavioral test performance. On the technical front, we used the Unity 3D game engine as the core for running our spatial ability tasks, simulating altered conditions of space exploration. We simulated two types of space exploration conditions: (1) a microgravity condition in which participants' idiotropic (body) axis is statically or dynamically misaligned with their visual axis; and (2) a Martian terrain condition that offers a visual frame of reference (FOR) but only limited and unfamiliar landmark objects. We specifically targeted the assessment of human spatial ability and spatial perception. To assess spatial ability, we digitized the Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test and integrated them into the VR environment to evaluate participants' spatial visualization, spatial relations, and spatial orientation abilities, respectively. For spatial perception, we applied digitized versions of size and distance perception tests to measure participants' subjective perception of size and distance. A suite of C# scripts orchestrated the VR experience, enabling real-time data collection and synchronization. This technical innovation includes the integration of data streams from diverse sources, such as VIVE controllers, eye-tracking devices, and EEG hardware, to ensure a cohesive and comprehensive dataset. A pivotal challenge in our research was synchronizing data from EEG, eye tracking, and VR tasks to facilitate comprehensive analysis. To address this challenge, we employed the Unity interface of the OpenSync library, a tool designed to unify disparate data sources in psychology and neuroscience. This approach ensures that all collected measures share a common time reference, enabling meaningful joint analysis of participant performance, gaze behavior, and EEG activity. The Unity-based system seamlessly incorporates task parameters, participant data, and VIVE controller inputs, providing a versatile platform for conducting assessments in diverse domains. Finally, we were able to collect synchronized measurements of participants' scores on the behavioral tests of spatial ability and spatial perception, their gaze data, and their EEG data. In this paper, we present the whole process of combining the eye-tracking and EEG workflows into the VR setting and collecting the relevant measurements. We believe that our work not only advances the state of the art in spatial ability assessments but also underscores the potential of virtual reality as a versatile tool in cognitive research, therapy, and rehabilitation.
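The actual marker streaming in the study goes through the Unity interface of the OpenSync library; the C# sketch below is only a minimal illustration of the underlying idea, namely that every gaze sample and task event is stamped with a single shared clock so that EEG, eye-tracking, and behavioral streams can later be aligned on a common time reference. The class and method names (SyncedEventLogger, LogGazeSample, LogTaskEvent) are hypothetical stand-ins, not the OpenSync API.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch of common-time-reference logging in Unity.
// Not the OpenSync API; names and file format are illustrative only.
public class SyncedEventLogger : MonoBehaviour
{
    private StreamWriter _writer;

    void Awake()
    {
        // One log per session; EEG markers would be stamped with the same clock.
        _writer = new StreamWriter(
            Path.Combine(Application.persistentDataPath, "session_log.csv"));
        _writer.WriteLine("timestamp,stream,payload");
    }

    // Called by the eye-tracking callback with the current gaze direction.
    public void LogGazeSample(Vector3 gazeDirection)
    {
        Write("gaze", gazeDirection.ToString("F4"));
    }

    // Called when a task event occurs (e.g., a PSVT: R item is shown or answered).
    public void LogTaskEvent(string eventName)
    {
        Write("task", eventName);
    }

    private void Write(string stream, string payload)
    {
        // A single monotonic clock is the key: all streams share this timestamp.
        _writer.WriteLine($"{Time.realtimeSinceStartup:F6},{stream},{payload}");
    }

    void OnDestroy()
    {
        _writer?.Close();
    }
}
```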
Award ID(s):
1928695
PAR ID:
10502792
Publisher / Repository:
Intelligent Human Systems Integration (IHSI 2024), Vol. 119, 2024, 348–360
Sponsoring Org:
National Science Foundation
More Like this
  1. Spatial ability is the ability to generate, store, retrieve, and transform visual information to mentally represent a space and make sense of it. This ability is a critical facet of human cognition that affects knowledge acquisition, productivity, and workplace safety. Although improved spatial ability is essential for safely navigating and perceiving a space on Earth, it is even more critical in the altered environments of other planets and deep space, which may pose extreme and unfamiliar visuospatial conditions. Such conditions may range from microgravity settings with misalignment of the body and visual axes to a lack of landmark objects that offer spatial cues for perceiving size, distance, and speed. These altered visuospatial conditions may challenge human spatial cognitive processing, which helps humans locate objects in space, perceive them visually, and comprehend spatial relationships between objects and their surroundings. The main goal of this paper is to examine whether eye-tracking data on gaze patterns can indicate that such altered conditions demand more mental effort and attention. The key dimensions of spatial ability (i.e., spatial visualization, spatial relations, and spatial orientation) are examined under three simulated conditions: (1) aligned body and visual axes (control group); (2) statically misaligned body and visual axes (experiment group I); and (3) dynamically misaligned body and visual axes (experiment group II). The three conditions were simulated in Virtual Reality (VR) using the Unity 3D game engine. Participants were recruited from the Texas A&M University student population and wore HTC VIVE Head-Mounted Displays (HMDs) equipped with eye-tracking technology to work on three spatial tests measuring spatial visualization, orientation, and relations. The Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test were used to evaluate the spatial visualization, spatial relations, and spatial orientation of 78 participants, respectively. For each test, gaze data were collected through the Tobii eye tracker integrated into the HTC Vive HMDs. Quick eye movements, known as saccades, were identified from the raw eye-tracking data using the rate of change of gaze position over time, and the number of saccades served as a measure of mental effort. The results showed that the mean number of saccades in the MCT and PSVT: R tests was statistically larger in experiment group II than in the control group or experiment group I. However, the PTA test data did not meet the required assumptions for comparing the mean number of saccades across the three groups. The results suggest that spatial relations and visualization may require more mental effort under dynamically misaligned idiotropic and visual axes than under aligned or statically misaligned axes. However, the data could not reveal whether spatial orientation requires more or less mental effort under aligned, statically misaligned, or dynamically misaligned idiotropic and visual axes. The results of this study are important for understanding how altered visuospatial conditions impact spatial cognition and how simulation- or game-based training tools can be developed to train people to adapt to extreme or altered work environments and to work more productively and safely.
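As a concrete illustration of this velocity-based approach, the following C# sketch counts saccades by thresholding the angular velocity between consecutive gaze samples (an I-VT style detector). The GazeSample struct, helper names, and the 100 deg/s threshold are assumptions made for demonstration, not the study's exact parameters; the abstract only states that saccades were identified from the rate of change of gaze position over time.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative gaze sample: a timestamp on the shared session clock plus a
// normalized gaze direction as reported by the HMD's eye tracker.
public struct GazeSample
{
    public float Time;          // seconds
    public Vector3 Direction;   // normalized gaze direction
}

public static class SaccadeCounter
{
    // Counts saccades as runs of samples whose angular velocity exceeds the
    // threshold; each run is counted once at its onset.
    public static int CountSaccades(
        IReadOnlyList<GazeSample> samples,
        float velocityThresholdDegPerSec = 100f) // assumed threshold
    {
        int saccades = 0;
        bool inSaccade = false;

        for (int i = 1; i < samples.Count; i++)
        {
            float dt = samples[i].Time - samples[i - 1].Time;
            if (dt <= 0f) continue; // skip duplicate or out-of-order samples

            // Angular velocity (deg/s) between consecutive gaze directions.
            float velocity =
                Vector3.Angle(samples[i - 1].Direction, samples[i].Direction) / dt;

            if (velocity >= velocityThresholdDegPerSec)
            {
                if (!inSaccade) saccades++;
                inSaccade = true;
            }
            else
            {
                inSaccade = false;
            }
        }
        return saccades;
    }
}
```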
  2. Face perception is a fundamental aspect of human social interaction, yet most research on this topic has focused on single modalities and specific aspects of face perception. Here, we present a comprehensive multimodal dataset for examining facial emotion perception and judgment. This dataset includes EEG data from 97 unique neurotypical participants across 8 experiments, fMRI data from 19 neurotypical participants, single-neuron data from 16 neurosurgical patients (22 sessions), eye-tracking data from 24 neurotypical participants, behavioral and eye-tracking data from 18 participants with ASD and 15 matched controls, and behavioral data from 3 rare patients with focal bilateral amygdala lesions. Notably, participants from all modalities performed the same task. Overall, this multimodal dataset provides a comprehensive exploration of facial emotion perception, emphasizing the importance of integrating multiple modalities to gain a holistic understanding of this complex cognitive process. This dataset serves as a key missing link between the human neuroimaging and neurophysiology literature, and it facilitates the study of neuropsychiatric populations.
  3. Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., the HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users using minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that the ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
  4. Virtual Reality (VR) headsets with embedded eye trackers are appearing as consumer devices (e.g., HTC Vive Eye, FOVE). These devices could be used in VR-based education (e.g., a virtual lab, a virtual field trip) in which a live teacher guides a group of students. The eye tracking could enable better insights into students' activities and behavior patterns. For real-time insight, a teacher's VR environment can display student eye gaze. These visualizations would help identify students who are confused or distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher's view, and we present a user study comparing these techniques. The results suggest that a short particle trail representing the eye trajectory is promising. In contrast, 3D heatmaps (an adaptation of traditional 2D heatmaps) for visualizing gaze over a short time span are problematic.
  5. This study presents a comprehensive analysis of three types of multimodal data (response accuracy, response times, and eye-tracking data) derived from a computer-based spatial rotation test. To tackle the complexity of high-dimensional data analysis, we have developed a methodological framework incorporating various statistical and machine learning methods. The results of our study reveal that hidden state transition probabilities, based on eye-tracking features, may be contingent on skill mastery estimated from the fluency CDM model. The hidden state trajectory offers additional diagnostic insights into spatial rotation problem-solving, surpassing the information provided by the fluency CDM alone. Furthermore, the distribution of participants across different hidden states reflects the intricate nature of visualizing the objects in each item, adding a nuanced dimension to the characterization of item features. This complements the information obtained from the item parameters in the fluency CDM model, which relies on response accuracy and response time. Our findings have the potential to pave the way for the development of new psychometric and statistical models capable of seamlessly integrating various types of multimodal data. This integrated approach promises more meaningful and interpretable results, with implications for advancing the understanding of cognitive processes involved in spatial rotation tests.