

Title: Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality
Virtual Reality (VR) technology has advanced to include eye-tracking, allowing novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near-to-far depth significantly decreased EVA and increased IPD, whereas the opposite pattern was observed when shifting from far-to-near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, supporting the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could serve as a new tool to adaptively render information based on depth and improve the VR user experience.
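The abstract does not specify how the two variables are computed, but both follow directly from per-eye tracker output. A minimal sketch, assuming the tracker reports a 3D gaze direction vector and a 3D pupil position for each eye (all names and sample values below are hypothetical, not taken from the paper):

```python
import numpy as np

def eye_vergence_angle(gaze_left, gaze_right):
    """Angle in degrees between the two gaze direction vectors.

    Shifting fixation from near to far depth makes the gaze rays more
    parallel, so this angle decreases; near fixation increases it.
    """
    l = gaze_left / np.linalg.norm(gaze_left)
    r = gaze_right / np.linalg.norm(gaze_right)
    cos_angle = np.clip(np.dot(l, r), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def interpupillary_distance(pupil_left, pupil_right):
    """Euclidean distance between pupil centers (same units as input)."""
    return np.linalg.norm(np.asarray(pupil_right) - np.asarray(pupil_left))

# Hypothetical samples: eyes converged on a near vs. a far target.
near_eva = eye_vergence_angle(np.array([0.10, 0.0, 1.0]), np.array([-0.10, 0.0, 1.0]))
far_eva = eye_vergence_angle(np.array([0.01, 0.0, 1.0]), np.array([-0.01, 0.0, 1.0]))
print(f"near EVA = {near_eva:.2f} deg > far EVA = {far_eva:.2f} deg")
```

Because the gaze rays approach parallel at far fixation, EVA falls toward zero as depth increases; this monotonic relationship is what makes the variable usable as a depth signal.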
Award ID(s):
1937565
NSF-PAR ID:
10390194
Author(s) / Creator(s):
; ; ;
Editor(s):
Blascheck, Tanja; Bradshaw, Jessica; Vrzakova, Hana
Date Published:
Journal Name:
ACM Symposium on Eye Tracking Research and Applications
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Spatial ability is the ability to generate, store, retrieve, and transform visual information to mentally represent a space and make sense of it. This ability is a critical facet of human cognition that affects knowledge acquisition, productivity, and workplace safety. Although improved spatial ability is essential for safely navigating and perceiving a space on earth, it is even more critical in the altered environments of other planets and deep space, which may pose extreme and unfamiliar visuospatial conditions. Such conditions may range from microgravity settings with misalignment of the body and visual axes to a lack of landmark objects that offer spatial cues to perceive size, distance, and speed. These altered visuospatial conditions may pose challenges to human spatial cognitive processing, which assists humans in locating objects in space, perceiving them visually, and comprehending spatial relationships between the objects and their surroundings. The main goal of this paper is to examine whether eye-tracking data on gaze patterns can indicate whether such altered conditions demand more mental effort and attention. The key dimensions of spatial ability (i.e., spatial visualization, spatial relations, and spatial orientation) are examined under three simulated conditions: (1) aligned body and visual axes (control group); (2) statically misaligned body and visual axes (experiment group I); and (3) dynamically misaligned body and visual axes (experiment group II). The three conditions were simulated in Virtual Reality (VR) using the Unity 3D game engine. Participants were recruited from the Texas A&M University student population; they wore HTC VIVE Head-Mounted Displays (HMDs) equipped with eye-tracking technology and completed three spatial tests measuring spatial visualization, orientation, and relations. The Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test were used to evaluate the spatial visualization, spatial relations, and spatial orientation of 78 participants, respectively. For each test, gaze data was collected through the Tobii eye tracker integrated into the HTC VIVE HMDs. Quick eye movements, known as saccades, were identified by analyzing raw eye-tracking data using the rate of change of gaze position over time, as a measure of mental effort (a sketch of this approach follows below). The results showed that the mean number of saccades in the MCT and PSVT: R tests was statistically larger in experiment group II than in the control group or experiment group I. However, PTA test data did not meet the required assumptions for comparing the mean number of saccades across the three groups. The results suggest that spatial relations and visualization may require more mental effort under dynamically misaligned idiotropic and visual axes than under aligned or statically misaligned axes. However, the data could not reveal whether spatial orientation requires more or less mental effort under aligned, statically misaligned, and dynamically misaligned idiotropic and visual axes. The results of this study are important for understanding how altered visuospatial conditions impact spatial cognition and how simulation- or game-based training tools can be developed to train people to adapt to extreme or altered work environments and to work more productively and safely.
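The saccade-identification step described above amounts to velocity thresholding. A minimal I-VT-style sketch; the 30 deg/s threshold and the 120 Hz sampling rate are assumed placeholders, not values reported by the authors:

```python
import numpy as np

def count_saccades(gaze_xy, sample_rate_hz, velocity_threshold=30.0):
    """Count saccades in a gaze trace using a simple velocity threshold.

    gaze_xy: (N, 2) array of gaze positions in degrees of visual angle.
    A sample is 'saccadic' when angular speed exceeds the threshold
    (deg/s); each contiguous run of such samples counts as one saccade.
    The 30 deg/s default is a common I-VT choice, not from the paper.
    """
    velocity = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * sample_rate_hz
    is_saccade = velocity > velocity_threshold
    # Count rising edges: transitions from fixation (False) to saccade (True).
    return int(np.sum(is_saccade[1:] & ~is_saccade[:-1]) + is_saccade[0])

# Hypothetical 120 Hz trace: fixation, an 8-degree jump, fixation again.
trace = np.vstack([np.zeros((50, 2)),
                   np.linspace([0, 0], [8, 0], 5),
                   np.full((50, 2), [8.0, 0.0])])
print(count_saccades(trace, sample_rate_hz=120))  # -> 1
```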

     
  2. Eye tracking has become an essential human-machine interaction modality for providing immersive experiences in numerous virtual and augmented reality (VR/AR) applications that demand high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by: (1) their large form factor, largely due to the adopted bulky lens-based cameras; (2) the high communication cost required between the camera and backend processor; and (3) low visual privacy, a potential concern that prohibits more extensive application. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework, dubbed EyeCoD, to enable eye tracking systems with a much reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to meet the small form-factor need of mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design to reduce the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then focuses only on the ROI to estimate gaze directions, greatly reducing redundant computations and data movements. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities for depth-wise layers, (3) utilizes input feature-wise partition to save activation memory size, and (4) develops a sequential-write-parallel-read input buffer to alleviate the bandwidth requirement on the activation global buffer. On-silicon measurement and extensive experiments validate that EyeCoD consistently reduces both communication and computation costs, leading to an overall system speedup of 10.95×, 3.21×, and 12.85× over general computing platforms (CPUs and GPUs) and a prior-art eye tracking processor called CIS-GEP, respectively, while maintaining tracking accuracy. Code is available at https://github.com/RICE-EIC/EyeCoD.
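The predict-then-focus idea can be sketched independently of the FlatCam hardware. In the sketch below, segment_fn and gaze_fn are placeholders standing in for the paper's segmentation and gaze-estimation models (the actual implementation lives in the linked repository):

```python
import numpy as np

def predict_then_focus(frame, segment_fn, gaze_fn, pad=4):
    """Sketch of a predict-then-focus pipeline (models are placeholders).

    1. segment_fn predicts a binary eye-region mask on the full frame.
    2. Only the bounding box of that mask (the ROI) is cropped.
    3. gaze_fn estimates gaze from the small crop, skipping the
       computation and data movement a full-frame pass would incur.
    """
    mask = segment_fn(frame)                      # (H, W) boolean eye-region mask
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                               # no eye detected in this frame
    top = max(rows.min() - pad, 0)
    bottom = min(rows.max() + pad, frame.shape[0])
    left = max(cols.min() - pad, 0)
    right = min(cols.max() + pad, frame.shape[1])
    roi = frame[top:bottom, left:right]
    return gaze_fn(roi)                           # e.g. (yaw, pitch) in degrees
```

The saving comes from the second stage operating on a small crop rather than the full sensor readout, which is what cuts both redundant computation and camera-processor traffic.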
  3. As virtual reality (VR) garners more attention for eye tracking research, knowledge of the accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about an eye tracker's accuracy and precision. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision with a variety of HMD-based eye trackers. The tool is customizable without altering the source code, and changes to the code allow for further customization. The outputs are available in real time and easy to interpret, making eye tracking in VR more approachable for all users.
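The abstract does not define its metrics, but accuracy and precision for eye trackers are conventionally expressed as angular quantities. A minimal sketch of those standard definitions, assuming gaze samples arrive as 3D direction vectors (conventions assumed here, not taken from the suite):

```python
import numpy as np

def angular_error_deg(gaze_dirs, target_dir):
    """Angle in degrees between each gaze direction and the target direction."""
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    t = target_dir / np.linalg.norm(target_dir)
    return np.degrees(np.arccos(np.clip(g @ t, -1.0, 1.0)))

def accuracy_deg(gaze_dirs, target_dir):
    """Accuracy: mean angular offset between gaze and a known fixation target."""
    return angular_error_deg(gaze_dirs, target_dir).mean()

def precision_rms_deg(gaze_dirs):
    """Precision: RMS of successive sample-to-sample angular distances."""
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    step = np.degrees(np.arccos(np.clip(np.sum(g[1:] * g[:-1], axis=1), -1.0, 1.0)))
    return np.sqrt(np.mean(step ** 2))
```

Accuracy captures systematic offset from a known target, while RMS precision captures sample-to-sample scatter; a tracker can score well on one and poorly on the other, which is why both are reported.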
  4. We describe the design and performance of a high-fidelity wearable head-, body-, and eye-tracking system that offers significant improvements over previous such devices. The device's sensors include a binocular eye tracker, an RGB-D scene camera, a high-frame-rate scene camera, and two visual odometry sensors, for a total of ten cameras, which we synchronize and record at a combined data rate of over 700 MB/s. The sensors are operated by a mini-PC optimized for fast data collection and powered by a small battery pack. The device records a subject's eye, head, and body positions, simultaneously with RGB and depth data from the subject's visual environment, measured with high spatial and temporal resolution. The headset weighs only 1.4 kg, and the backpack with batteries 3.9 kg. The device can be comfortably worn by the subject, allowing a high degree of mobility. Together, this system overcomes many limitations of previous systems, allowing high-fidelity characterization of the dynamics of natural vision.
  5. In the realm of virtual reality (VR) research, the synergy of methodological advancements, technical innovation, and novel applications is paramount. Our work encapsulates these facets in the context of spatial ability assessments conducted within a VR environment. This paper presents a comprehensive and integrated framework of VR, eye-tracking, and electroencephalography (EEG), which seamlessly combines measuring participants' behavioral performance with simultaneously collecting time-stamped eye tracking and EEG data, enabling us to understand how spatial ability is impacted in certain conditions and whether such conditions demand increased attention and mental resources. This framework encompasses the measurement of participants' gaze patterns (e.g., fixations and saccades), EEG data (e.g., Alpha, Beta, Gamma, and Theta wave patterns), and psychometric and behavioral test performance. On the technical front, we utilized the Unity 3D game engine as the core for running our spatial ability tasks by simulating altered conditions of space exploration. We simulated two types of space exploration conditions: (1) a microgravity condition in which participants' idiotropic (body) axis is statically or dynamically misaligned with their visual axis; and (2) a Martian terrain condition that offers a visual frame of reference (FOR) but with limited and unfamiliar landmark objects. We specifically targeted assessing human spatial ability and spatial perception. To assess spatial ability, we digitalized the behavioral tests of the Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test and integrated them into the VR settings to evaluate participants' spatial visualization, spatial relations, and spatial orientation ability, respectively. For spatial perception, we applied digitalized versions of size and distance perception tests to measure participants' subjective perception of size and distance. A suite of C# scripts orchestrated the VR experience, enabling real-time data collection and synchronization. This technical innovation includes the integration of data streams from diverse sources, such as VIVE controllers, eye-tracking devices, and EEG hardware, to ensure a cohesive and comprehensive dataset. A pivotal challenge in our research was synchronizing data from EEG, eye tracking, and VR tasks to facilitate comprehensive analysis. To address this challenge, we employed the Unity interface of the OpenSync library, a tool designed to unify disparate data sources in the fields of psychology and neuroscience. This approach ensures that all collected measures share a common time reference, enabling meaningful analysis of participant performance, gaze behavior, and EEG activity. The Unity-based system seamlessly incorporates task parameters, participant data, and VIVE controller inputs, providing a versatile platform for conducting assessments in diverse domains. Finally, we were able to collect synchronized measurements of participants' scores on the behavioral tests of spatial ability and spatial perception, their gaze data, and their EEG data. In this paper, we present the whole process of combining the eye-tracking and EEG workflows into the VR settings and collecting the relevant measurements. We believe that our work not only advances the state of the art in spatial ability assessments but also underscores the potential of virtual reality as a versatile tool in cognitive research, therapy, and rehabilitation.
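OpenSync's Unity interface is not reproduced here; as a generic illustration of the common-time-reference idea, the sketch below resamples one timestamped stream onto another stream's clock by nearest-neighbor matching (all rates, names, and the gap tolerance are hypothetical):

```python
import numpy as np

def align_to_reference(ref_times, stream_times, stream_values, max_gap_s=0.05):
    """Map each reference timestamp to the nearest sample of another stream.

    All timestamps are assumed to already share one clock (the property a
    synchronization layer like OpenSync provides); samples farther than
    max_gap_s from a reference tick are left as NaN.
    """
    idx = np.searchsorted(stream_times, ref_times)
    idx = np.clip(idx, 1, len(stream_times) - 1)
    # For each reference tick, pick the closer of the two neighboring samples.
    left_closer = (ref_times - stream_times[idx - 1]) < (stream_times[idx] - ref_times)
    idx = np.where(left_closer, idx - 1, idx)
    aligned = stream_values[idx].astype(float)
    aligned[np.abs(stream_times[idx] - ref_times) > max_gap_s] = np.nan
    return aligned

# Hypothetical: align 60 Hz eye-tracking samples onto 250 Hz EEG timestamps.
eeg_t = np.arange(0.0, 1.0, 1 / 250)
eye_t = np.arange(0.0, 1.0, 1 / 60)
eye_x = np.sin(eye_t)                      # stand-in gaze signal
eye_on_eeg_clock = align_to_reference(eeg_t, eye_t, eye_x)
```

Once every stream is expressed on the same time base, per-trial windows of behavior, gaze, and EEG can be extracted with a single set of event timestamps.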

     