Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents with a shared platform for collaboration in a spatial context. Because traditional face-to-face communication is limited by users’ proximity, meaning that another human’s non-verbal embodied cues become more difficult to perceive the farther away one is from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals with XR technologies to counteract the effects of distance. In this article, we describe and evaluate the Big Head technique, in which a human’s head in VR/AR is scaled up relative to its distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants’ ability to perceive facial expressions, as well as their sense of comfort and feeling of “uncanniness,” over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and to identify a virtual human, over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances.
We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
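The abstract does not give the exact scaling functions used in the study, but the core idea of distance-dependent head scaling can be sketched as follows. The `onset_m`, `gain`, and `max_scale` parameters are illustrative assumptions, not values from the paper:

```python
def big_head_scale(distance_m: float,
                   onset_m: float = 2.0,
                   gain: float = 0.15,
                   max_scale: float = 3.0) -> float:
    """Return a head-scale multiplier that grows with observer distance.

    Below `onset_m` the head keeps its natural size (scale 1.0); beyond
    it, the scale grows linearly with distance and is clamped at
    `max_scale` to limit the "uncanny" effect of oversized heads.
    All parameter values here are hypothetical, for illustration only.
    """
    if distance_m <= onset_m:
        return 1.0
    return min(1.0 + gain * (distance_m - onset_m), max_scale)

print(big_head_scale(1.0))    # natural size at close range -> 1.0
print(big_head_scale(10.0))   # enlarged head at 10 m -> 2.2
```

A per-task variant would simply swap in different `gain`/`max_scale` values, matching the finding that ideal head scales differ across distances and tasks.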
Effects of a Distracting Background and Focal Switching Distance in an Augmented Reality System
Many augmented reality (AR) applications require observers to shift their gaze between AR and real-world content. To date, commercial optical see-through (OST) AR displays have presented content at either a single focal distance or a small number of fixed focal distances, whereas real-world stimuli can occur at a variety of focal distances. Therefore, when shifting gaze between AR and real-world content, observers must often change their eyes’ accommodative state in order to view the new content in sharp focus. When performed repetitively, this can negatively affect task performance and eye fatigue. However, these effects may be underreported, because past research has not yet considered the potential additional effect of distracting real-world backgrounds. An experimental method that analyzes background effects is presented, using a text-based visual search task that requires integrating information presented in both AR and the real world. An experiment is reported, which examined the effect of a distracting background versus a blank background at focal switching distances of 0, 1.33, 2.0, and 3.33 meters. Qualitatively, a majority of the participants reported that the distracting background made the task more difficult and fatiguing. Quantitatively, increasing the focal switching distance reduced task performance and increased eye fatigue; however, changing the background between blank and distracting did not result in significant measured differences. Suggestions are given for further efforts to examine background effects.
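The accommodative cost of such a focal switch is naturally expressed in diopters, the reciprocal of the focal distance in meters. As a rough illustration of the quantity being varied (not the paper’s procedure, and with purely illustrative example distances):

```python
def accommodative_shift(ar_focal_m: float, world_focal_m: float) -> float:
    """Change in accommodative demand, in diopters, when the eye refocuses
    from AR content at one focal distance to real-world content at another.
    Demand at a focal distance of d meters is 1/d diopters.
    """
    return abs(1.0 / ar_focal_m - 1.0 / world_focal_m)

# Illustrative example: AR content at 2.0 m, real-world text at 0.5 m.
print(accommodative_shift(2.0, 0.5))  # -> 1.5 diopters
```

A switching distance of 0 corresponds to AR and real-world content sharing a focal plane, i.e., zero change in accommodative demand.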
- Award ID(s):
- 1937565
- PAR ID:
- 10316626
- Date Published:
- Journal Name:
- Proceedings of the Workshop on Perceptual and Cognitive Issues in XR (PERCxR), IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
A visual experiment using a beam-splitter-based optical see-through augmented reality (OST-AR) setup tested the effect of the size and alignment of AR overlays with a brightness-matching task using physical cubes. Results indicate that more luminance is required when AR overlays are oversized with respect to the cubes, showing that observers discount the AR overlay to a greater extent when it is more obviously a transparent layer. This is not explained by conventional color appearance modeling but supports an AR-specific model based on foreground-background discounting. The findings and model will help determine parameters for creating convincing AR manipulation of real-world objects.
-
Real-world work environments require operators to perform multiple tasks with continual support from an automated system. Eye movement is often used as a surrogate measure of operator attention, yet conventional summary measures such as percent dwell time do not capture dynamic transitions of attention in a complex visual workspace. This study analyzed eye movement data collected in a controlled MATB-II task environment using gaze transition entropy analysis. In the study, human subjects performed a compensatory tracking task, a system monitoring task, and a communication task concurrently. The results indicate that both gaze transition entropy and stationary gaze entropy, measures of randomness in eye movements, decreased when the compensatory tracking task required more continuous monitoring. The findings imply that gaze transition entropy consistently reflects the attention allocation of operators performing dynamic operational tasks.
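Both entropy measures can be estimated from a sequence of fixated areas of interest (AOIs): stationary entropy is the Shannon entropy of the AOI proportions, and transition entropy is the proportion-weighted entropy of the first-order transition matrix. The sketch below is a minimal illustration of that computation, not the study’s analysis code:

```python
import numpy as np

def gaze_entropies(aoi_sequence):
    """Estimate stationary gaze entropy and gaze transition entropy (bits)
    from a sequence of fixated areas of interest (AOIs)."""
    aois = sorted(set(aoi_sequence))
    idx = {a: i for i, a in enumerate(aois)}
    n = len(aois)
    # First-order transition counts between consecutive fixations.
    counts = np.zeros((n, n))
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[idx[src], idx[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, row_sums, out=np.zeros_like(counts),
                  where=row_sums > 0)
    # Stationary distribution estimated from observed AOI proportions.
    pi = np.bincount([idx[a] for a in aoi_sequence], minlength=n) / len(aoi_sequence)
    log_p = np.zeros_like(p)
    mask = p > 0
    log_p[mask] = np.log2(p[mask])
    h_transition = -np.sum(pi[:, None] * p * log_p)
    h_stationary = -np.sum(pi[pi > 0] * np.log2(pi[pi > 0]))
    return h_stationary, h_transition
```

Lower values of either entropy indicate more structured, less random scanning, which matches the decrease reported for the condition demanding more continuous monitoring.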
-
Across a wide variety of research environments, the recording of microsaccades and other fixational eye movements has provided insight and solutions into practical problems. Here we review the literature on fixational eye movements—especially microsaccades—in applied and ecologically valid scenarios. Recent technical advances allow noninvasive fixational eye movement recordings in real-world contexts, while observers perform a variety of tasks. Thus, fixational eye movement measures have been obtained in a host of real-world scenarios, such as in connection with driver fatigue, vestibular sensory deprivation in astronauts, and elite athletic training, among others. Here we present the state of the art in the practical applications of fixational eye movement research, examine its potential future uses, and discuss the benefits of including microsaccade measures in existing eye movement detection technologies. Current evidence supports the inclusion of fixational eye movement measures in real-world contexts, as part of the development of new or improved oculomotor assessment tools. The real-world applications of fixational eye movement measurements will only grow larger and wider as affordable high-speed, high-spatial-resolution eye trackers become increasingly prevalent.
-
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate. Segmentation models trained using supervised machine learning can excel at this task, but their effectiveness is determined by the degree of overlap between the narrow distributions of image properties defined by the target dataset and the highly specific training datasets, of which there are few. Attempts to broaden the distribution of existing eye image datasets through the inclusion of synthetic eye images have found that a model trained on synthetic images will often fail to generalize back to real-world eye images. As a remedy, we use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data, and to prune the training dataset in a manner that maximizes distribution overlap. We demonstrate that our methods result in robust, improved performance when tackling the discrepancy between simulated and real-world data samples.
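A minimal sketch of this kind of overlap-based pruning, assuming PCA as the dimensionality-reduction technique and nearest-neighbor distance in the reduced space as the overlap measure — both illustrative choices, not necessarily the authors’ method:

```python
import numpy as np

def prune_synthetic(real, synthetic, n_components=2, keep_fraction=0.5):
    """Prune synthetic training samples toward the real-image distribution.

    Projects both sets into a PCA subspace fitted on the real data, then
    keeps the fraction of synthetic samples closest (Euclidean distance)
    to any real sample in that subspace. `real` and `synthetic` are
    (n_samples, n_features) arrays of image feature vectors.
    """
    mean = real.mean(axis=0)
    # PCA basis via SVD of the mean-centered real data.
    _, _, vt = np.linalg.svd(real - mean, full_matrices=False)
    basis = vt[:n_components].T                      # (n_features, k)
    real_z = (real - mean) @ basis
    syn_z = (synthetic - mean) @ basis
    # Distance from each synthetic sample to its nearest real sample.
    d = np.linalg.norm(syn_z[:, None, :] - real_z[None, :, :], axis=2).min(axis=1)
    keep = np.argsort(d)[: int(len(synthetic) * keep_fraction)]
    return synthetic[keep]
```

Synthetic samples far from every real sample in the reduced space are the ones most likely to pull the model away from real-world eye images, so they are dropped first.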