Title: Effects of a Distracting Background and Focal Switching Distance in an Augmented Reality System
Many augmented reality (AR) applications require observers to shift their gaze between AR and real-world content. To date, commercial optical see-through (OST) AR displays have presented content at either a single focal distance or at a small number of fixed focal distances. Meanwhile, real-world stimuli can occur at a variety of focal distances. Therefore, when shifting gaze between AR and real-world content, observers must often change their eye's accommodative state in order to view the new content in sharp focus. When performed repetitively, this can degrade task performance and increase eye fatigue. However, these effects may be underreported, because past research has not yet considered the potential additional effect of distracting real-world backgrounds. An experimental method that analyzes background effects is presented, using a text-based visual search task that requires integrating information presented in both AR and the real world. An experiment is reported, which examined the effect of a distracting background versus a blank background, at focal switching distances of 0, 1.33, 2.0, and 3.33 meters. Qualitatively, a majority of the participants reported that the distracting background made the task more difficult and fatiguing. Quantitatively, increasing the focal switching distance reduced task performance and increased eye fatigue. However, changing the background between blank and distracting did not result in significant measured differences. Suggestions are given for further efforts to examine background effects.
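As a rough illustration of why focal switching matters, the switching distances above can be restated as changes in accommodative demand, using the standard optical relation demand (diopters) = 1 / distance (meters). The Python sketch below assumes a hypothetical fixed AR focal depth and places the real-world target at that depth plus the switching distance; both assumptions are illustrative, not values from the paper.

```python
# Minimal sketch: how a focal switch maps to a change in accommodative demand.
# Uses the standard relation demand (diopters) = 1 / distance (meters).
# d_ar is a hypothetical AR focal depth, NOT a value taken from the study.

def demand(d_m: float) -> float:
    """Accommodative demand in diopters for a target at d_m meters."""
    return 1.0 / d_m

d_ar = 0.67  # hypothetical fixed AR focal depth (m); illustrative placeholder
for switch in (0.0, 1.33, 2.0, 3.33):  # focal switching distances from the abstract
    d_real = d_ar + switch  # real-world target assumed at d_ar + switch
    delta = abs(demand(d_ar) - demand(d_real))
    print(f"switch {switch:.2f} m -> accommodation change {delta:.2f} D")
```

Under these assumed depths, larger switching distances translate into larger dioptric changes the eye must accommodate across, which is consistent with the reported decline in performance and increase in fatigue.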
Award ID(s):
1937565
PAR ID:
10316626
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Workshop on Perceptual and Cognitive Issues in XR (PERCxR), IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Although the “eye-mind link” hypothesis posits that eye movements provide a direct window into cognitive processing, linking eye movements to specific cognitions in real-world settings remains challenging. This challenge may arise because gaze metrics such as fixation duration, pupil size, and saccade amplitude are often aggregated across timelines that include heterogeneous events. To address this, we tested whether aggregating gaze parameters across participant-defined events could support the hypothesis that increased focal processing, indicated by greater gaze duration and pupil diameter, and decreased scene exploration, indicated by smaller saccade amplitude, would predict effective task performance. Using head-mounted eye trackers, nursing students engaged in simulation learning and later segmented their simulation footage into meaningful events, categorizing their behaviors, task outcomes, and cognitive states at the event level. Increased fixation duration and pupil diameter predicted higher student-rated teamwork quality, while increased pupil diameter predicted judgments of effective communication. Additionally, increased saccade amplitude positively predicted students’ perceived self-efficacy. These relationships did not vary across event types, and gaze parameters did not differ significantly between the beginning, middle, and end of events. However, there was a significant increase in fixation duration during the first five seconds of an event compared to the last five seconds of the previous event, suggesting an initial encoding phase at an event boundary. In conclusion, event-level gaze parameters serve as valid indicators of focal processing and scene exploration in natural learning environments, generalizing across event types.
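A minimal sketch of the event-level aggregation idea described above: rather than pooling gaze metrics across a whole session, samples are grouped by participant-defined event before averaging. The column names and values below are invented for illustration.

```python
# Sketch: event-level aggregation of gaze metrics. Samples are grouped by
# participant-defined event boundaries instead of being pooled session-wide.
# Column names and values are illustrative, not data from the study.
import pandas as pd

samples = pd.DataFrame({
    "event_id":        [1, 1, 1, 2, 2],
    "fixation_dur_ms": [220, 310, 180, 450, 390],
    "pupil_diam_mm":   [3.1, 3.3, 3.0, 3.8, 3.9],
    "saccade_amp_deg": [4.2, 2.1, 3.7, 1.2, 0.9],
})

# One row of mean gaze parameters per event, ready to relate to event-level
# ratings such as teamwork quality or self-efficacy.
event_level = samples.groupby("event_id").mean()
print(event_level)
```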
  2. A visual experiment using a beam-splitter-based optical see-through augmented reality (OST-AR) setup tested the effect of the size and alignment of AR overlays with a brightness-matching task using physical cubes. Results indicate that more luminance is required when AR overlays are oversized with respect to the cubes, showing that observers discount the AR overlay to a greater extent when it is more obviously a transparent layer. This is not explained by conventional color appearance modeling but supports an AR-specific model based on foreground-background discounting. The findings and model will help determine parameters for creating convincing AR manipulation of real-world objects. 
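One hypothetical way to formalize such a foreground-background discounting account, purely as an illustration and not the authors' model: in a beam-splitter display, overlay and background luminances add, and the observer discounts a fraction w of the overlay when it reads as a transparent layer. The discount weight and the percept equation below are assumptions of this sketch.

```python
# Illustrative formalization of foreground-background discounting (NOT the
# authors' model). Assumed percept: perceived = (1 - w) * overlay + background,
# where w is the fraction of the overlay the observer discounts as a
# transparent layer; w is presumed to grow when the overlay is oversized.

def required_overlay_luminance(target_cd_m2: float, background_cd_m2: float,
                               discount_w: float) -> float:
    """Overlay luminance needed for the discounted percept to match a target."""
    return (target_cd_m2 - background_cd_m2) / (1.0 - discount_w)

# A more obviously transparent overlay (higher w) needs more luminance to
# reach the same match, consistent with the reported result.
for w in (0.0, 0.2, 0.4):
    print(f"w = {w:.1f} -> overlay {required_overlay_luminance(50.0, 10.0, w):.1f} cd/m^2")
```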
  3. Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Although traditional face-to-face communication is limited by users’ proximity, meaning that another human’s non-verbal embodied cues become more difficult to perceive the farther one is away from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human’s head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments in this article. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants’ ability to perceive facial expressions as well as their sense of comfort and feeling of “uncanniness” over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and identify a virtual human over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
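A minimal sketch of a distance-dependent head-scaling rule in the spirit of the Big Head technique; the linear gain and clamp values are placeholders, not the thresholds measured in the experiments.

```python
# Sketch: scale an avatar's head with observer distance so facial cues stay
# legible, clamped between minimum and maximum scales. The gain k and the
# clamp bounds are illustrative placeholders, not values from the study.

def head_scale(distance_m: float, k: float = 0.05,
               s_min: float = 1.0, s_max: float = 4.0) -> float:
    """Scale factor applied to the avatar's head at the given distance."""
    return max(s_min, min(s_max, 1.0 + k * distance_m))

for d in (10, 30, 60, 90):  # distances used across the two experiments
    print(f"{d:2d} m -> head scale {head_scale(d):.2f}x")
```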
  4. Objective: The goal of this study was to assess how different real-time gaze sharing visualization techniques affect eye tracking metrics, workload, team situation awareness (TSA), and team performance. Background: Gaze sharing is a real-time visualization technique that allows teams to know where their team members are looking on a shared display. Gaze sharing visualization techniques are a promising means to improve collaborative performance on simple tasks; however, there needs to be validation of gaze sharing with more complex and dynamic tasks. Methods: This study evaluated the effect of gaze sharing on eye tracking metrics, workload, team SA, and team performance in a simulated unmanned aerial vehicle (UAV) command-and-control task. Thirty-five teams of two performed UAV tasks under three conditions: one with no gaze sharing and two with gaze sharing. Gaze sharing was presented using a fixation dot (i.e., a translucent colored dot) and a fixation trail (i.e., a trail of the most recent fixations). Results: The results showed that the fixation trail significantly reduced saccadic activity, lowered workload, supported team SA at all levels, and improved performance compared to no gaze sharing; however, the fixation dot had the opposite effect on performance and SA. In fact, having no gaze sharing outperformed the fixation dot. Participants also preferred the fixation trail for its visibility and ability to track and monitor the history of their partner’s gaze. Conclusion: The results showed that gaze sharing has the potential to support collaboration, but its effectiveness depends highly on the design and context of use. Application: The findings suggest that gaze sharing visualization techniques, like the fixation trail, have the potential to improve teamwork in complex UAV tasks and could have broader applicability in a variety of collaborative settings.
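A minimal sketch of the fixation-trail idea: only the most recent partner fixations are kept, with older ones faded, in contrast to a single fixation dot. The trail length and alpha falloff are illustrative choices, not the parameters used in the study.

```python
# Sketch: a fixation trail keeps the most recent partner fixations and fades
# older ones. TRAIL_LEN and the linear alpha falloff are illustrative choices.
from collections import deque

TRAIL_LEN = 5
trail = deque(maxlen=TRAIL_LEN)  # (x, y) screen positions of partner fixations

def on_partner_fixation(x, y):
    """Record a partner fixation; the deque drops the oldest automatically."""
    trail.append((x, y))

def render_trail():
    """Return (x, y, alpha) triples, with the newest fixation most opaque."""
    n = len(trail)
    return [(x, y, (i + 1) / n) for i, (x, y) in enumerate(trail)]

for pos in [(100, 200), (120, 210), (300, 180)]:
    on_partner_fixation(*pos)
print(render_trail())
```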
  5. Abstract: Real-world work environments require operators to perform multiple tasks with continual support from an automated system. Eye movement is often used as a surrogate measure of operator attention, yet conventional summary measures such as percent dwell time do not capture dynamic transitions of attention in a complex visual workspace. This study analyzed eye movement data collected in a controlled MATB-II task environment using gaze transition entropy analysis. In the study, human subjects performed a compensatory tracking task, a system monitoring task, and a communication task concurrently. The results indicate that both gaze transition entropy and stationary gaze entropy, measures of randomness in eye movements, decreased when the compensatory tracking task required more continuous monitoring. The findings imply that gaze transition entropy consistently reflects the attention allocation of operators performing dynamic operational tasks.
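For reference, gaze transition entropy and stationary gaze entropy (in the sense of Krejtz et al.) can be computed from a sequence of area-of-interest (AOI) fixations as sketched below; the AOI labels in the example are invented for illustration.

```python
# Sketch: gaze transition entropy H_t = -sum_i pi_i sum_j p_ij log2 p_ij and
# stationary gaze entropy H_s = -sum_i pi_i log2 pi_i, where p_ij are
# first-order AOI transition probabilities and pi is estimated from fixation
# proportions. AOI labels below are invented for illustration.
import numpy as np

def gaze_entropies(aoi_seq):
    aois = sorted(set(aoi_seq))
    idx = {a: i for i, a in enumerate(aois)}
    n = len(aois)

    # First-order transition counts between consecutive fixations.
    counts = np.zeros((n, n))
    for a, b in zip(aoi_seq, aoi_seq[1:]):
        counts[idx[a], idx[b]] += 1

    row_sums = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # Stationary distribution estimated from fixation proportions.
    pi = np.bincount([idx[a] for a in aoi_seq], minlength=n) / len(aoi_seq)

    logp = np.where(p > 0, np.log2(p, where=p > 0), 0.0)
    h_transition = -np.sum(pi[:, None] * p * logp)
    h_stationary = -np.sum(pi[pi > 0] * np.log2(pi[pi > 0]))
    return h_transition, h_stationary

print(gaze_entropies(["track", "monitor", "track", "comms", "track", "monitor"]))
```

Lower values of both entropies indicate more structured, less random scanning, matching the abstract's observation that entropy decreased when the tracking task demanded more continuous monitoring.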