Complex and dynamic domains rely on operators to collaborate on multiple tasks and cope with changes in task demands. Gaze sharing is a means of communication for exchanging visual information by allowing teammates to view each other's gaze points on their displays. Existing work on gaze sharing focuses on relatively simple, task-specific domains, and no work to date addresses how to use gaze sharing in data-rich environments. For this study, nine pairs of participants completed a UAV search and rescue command-and-control task under three visualization conditions: no gaze sharing, gaze sharing with a real-time dot, and gaze sharing with a real-time fixation trail. Our preliminary results show that performance scores with the real-time fixation trail were significantly higher than with no gaze sharing. This suggests that the real-time fixation trail is a promising tool for better understanding operators' strategies and could form the basis of an adaptive display.
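To make the two visualizations concrete, below is a minimal, hypothetical sketch of the data structure behind a real-time fixation trail: a fixed-length buffer of a partner's most recent fixations, with older entries faded, versus a "real-time dot" that shows only the latest gaze point. All names (`Fixation`, `GazeTrail`) and parameters (trail length, linear fade) are illustrative assumptions, not the study's implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # screen x in pixels
    y: float          # screen y in pixels
    duration: float   # fixation duration in seconds

class GazeTrail:
    """Sketch of a real-time fixation trail: keep the N most recent
    fixations and fade older ones, versus a single real-time dot."""

    def __init__(self, max_fixations: int = 5):
        self.fixations: deque[Fixation] = deque(maxlen=max_fixations)

    def add(self, fix: Fixation) -> None:
        self.fixations.append(fix)  # oldest fixation drops off automatically

    def trail_spec(self) -> list[tuple[float, float, float]]:
        """Return (x, y, alpha) draw commands; the newest fixation is opaque."""
        n = len(self.fixations)
        return [(f.x, f.y, (i + 1) / n) for i, f in enumerate(self.fixations)]

    def dot_spec(self) -> tuple[float, float] | None:
        """The real-time dot alternative: only the current gaze point."""
        return (self.fixations[-1].x, self.fixations[-1].y) if self.fixations else None

if __name__ == "__main__":
    trail = GazeTrail(max_fixations=3)
    for x, y, d in [(100, 200, 0.3), (450, 220, 0.5), (460, 600, 0.2), (900, 580, 0.4)]:
        trail.add(Fixation(x, y, d))
    print(trail.trail_spec())  # three most recent fixations, faded by age
    print(trail.dot_spec())    # the single-dot visualization
```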
This content will become publicly available on March 1, 2026

Gaze Sharing, a Double-Edged Sword: Examining the Effect of Real-Time Gaze Sharing Visualizations on Team Performance and Situation Awareness
Objective: The goal of this study was to assess how different real-time gaze sharing visualization techniques affect eye tracking metrics, workload, team situation awareness (TSA), and team performance.

Background: Gaze sharing is a real-time visualization technique that lets teams see where their teammates are looking on a shared display. Gaze sharing visualization techniques are a promising means of improving collaborative performance on simple tasks; however, they have yet to be validated on more complex and dynamic tasks.

Methods: This study evaluated the effect of gaze sharing on eye tracking metrics, workload, TSA, and team performance in a simulated unmanned aerial vehicle (UAV) command-and-control task. Thirty-five teams of two performed UAV tasks under three conditions: one with no gaze sharing and two with gaze sharing. Gaze sharing was presented using a fixation dot (i.e., a translucent colored dot) and a fixation trail (i.e., a trail of the most recent fixations).

Results: The fixation trail significantly reduced saccadic activity, lowered workload, supported TSA at all levels, and improved performance compared to no gaze sharing; the fixation dot, however, had the opposite effect on performance and TSA. In fact, having no gaze sharing outperformed the fixation dot. Participants also preferred the fixation trail for its visibility and for letting them track and monitor the history of their partner's gaze.

Conclusion: Gaze sharing has the potential to support collaboration, but its effectiveness depends heavily on the design and the context of use.

Application: The findings suggest that gaze sharing visualization techniques like the fixation trail can improve teamwork in complex UAV tasks and could have broader applicability in a variety of collaborative settings.
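The saccadic-activity metric reported above is typically computed with a velocity-threshold (I-VT) classifier over the gaze stream. The sketch below is a minimal stdlib-only version; the threshold value, sample format, and function name are illustrative assumptions, not the study's actual analysis pipeline.

```python
import math

def saccade_rate(samples, velocity_threshold=30.0):
    """Classify gaze samples into fixation vs. saccade states with a simple
    velocity-threshold (I-VT) rule, then return saccade onsets per second.

    samples: list of (t_seconds, x_deg, y_deg) gaze points in degrees of
    visual angle; velocity_threshold: deg/s (a common I-VT default).
    """
    saccades = 0
    in_saccade = False
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip dropped or duplicated samples
        velocity = math.hypot(x1 - x0, y1 - y0) / dt
        if velocity > velocity_threshold:
            if not in_saccade:
                saccades += 1  # count saccade onsets, not samples
            in_saccade = True
        else:
            in_saccade = False
    duration = samples[-1][0] - samples[0][0]
    return saccades / duration if duration > 0 else 0.0

if __name__ == "__main__":
    # One fixation at (0, 0), a fast jump, then a fixation at (10, 0), at 60 Hz.
    pts  = [(i / 60.0, 0.0, 0.0) for i in range(30)]          # stable gaze
    pts += [(30 / 60.0, 5.0, 0.0), (31 / 60.0, 10.0, 0.0)]    # ~300 deg/s jump
    pts += [((32 + i) / 60.0, 10.0, 0.0) for i in range(30)]  # stable again
    print(f"{saccade_rate(pts):.2f} saccades/s")
```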
- Award ID(s): 2008680
- PAR ID: 10616323
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: Human Factors: The Journal of the Human Factors and Ergonomics Society
- Volume: 67
- Issue: 3
- ISSN: 0018-7208
- Page Range / eLocation ID: 196 to 224
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Virtual Reality (VR) headsets with embedded eye trackers are appearing as consumer devices (e.g., HTC Vive Eye, FOVE). These devices could be used in VR-based education (e.g., a virtual lab, a virtual field trip) in which a live teacher guides a group of students. The eye tracking could enable better insights into students' activities and behavior patterns. For real-time insight, a teacher's VR environment can display student eye gaze. These visualizations would help identify students who are confused or distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher's view, and we present a user study to compare these techniques. The results suggest that a short particle trail representing eye trajectory is promising. In contrast, 3D heatmaps (an adaptation of traditional 2D heatmaps) for visualizing gaze over a short time span are problematic.
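For context on the 3D heatmap mentioned above, one plausible construction is to bin 3D gaze-ray hit points into voxels and count hits per voxel, the volumetric analogue of a 2D screen heatmap. The sketch below illustrates that idea under stated assumptions; the function name, voxel size, and input format are hypothetical, not the paper's implementation.

```python
from collections import defaultdict

def accumulate_gaze_heatmap(gaze_hits, voxel_size=0.25):
    """Bin 3D gaze-ray hit points (in meters) into voxels and count hits
    per voxel: a simple 3D adaptation of a 2D gaze heatmap."""
    heat = defaultdict(int)
    for x, y, z in gaze_hits:
        voxel = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        heat[voxel] += 1
    return heat

if __name__ == "__main__":
    # Two nearby hits on one object, one hit elsewhere in the scene.
    hits = [(1.02, 1.52, 2.00), (1.05, 1.53, 2.01), (3.0, 0.5, 1.0)]
    for voxel, count in accumulate_gaze_heatmap(hits).items():
        print(voxel, count)
```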
- Human-robot collaboration systems benefit from recognizing people's intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users' intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points and assign fixation position to one of those known points. We validate our approach on synthetic data to demonstrate that classifying using velocity features is more robust than a position matching approach. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions.
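A minimal sketch of the velocity-feature idea described above: because calibration drift shifts absolute gaze positions but largely preserves relative motion, each fixation-to-fixation displacement can be matched against the displacements between ordered pairs of known points, and the fixation assigned to the endpoint of the best-matching pair. The function name and the nearest-displacement scoring are illustrative assumptions, not the paper's exact classifier.

```python
import math
from itertools import product

def assign_by_velocity(fixations, known_points):
    """Assign each fixation (after the first) to a known point by comparing
    the observed fixation-to-fixation displacement against the displacement
    between every ordered pair of known points. Constant drift cancels out
    of displacements, so this is more robust than nearest-point matching."""
    labels = []
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        dx, dy = x1 - x0, y1 - y0  # observed relative motion (drift cancels)
        best = min(
            product(range(len(known_points)), repeat=2),
            key=lambda ij: math.hypot(
                dx - (known_points[ij[1]][0] - known_points[ij[0]][0]),
                dy - (known_points[ij[1]][1] - known_points[ij[0]][1]),
            ),
        )
        labels.append(best[1])  # label = endpoint of the best-matching pair
    return labels

if __name__ == "__main__":
    objects = [(0, 0), (10, 0), (0, 8)]  # known object locations
    drift = (2.5, -1.5)                  # constant calibration error
    truth = [0, 1, 2, 1]                 # objects actually looked at
    gaze = [(objects[i][0] + drift[0], objects[i][1] + drift[1]) for i in truth]
    print(assign_by_velocity(gaze, objects))  # -> [1, 2, 1] despite the drift
```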
- Eleni Papageorgiou (Ed.)
  Background: Research on task performance under visual field loss is often limited due to small and heterogeneous samples. Simulations of visual impairments hold the potential to account for many of those challenges. Digitally altered pictures, glasses, and contact lenses with partial occlusions have been used in the past. One of the most promising methods is the use of a gaze-contingent display that occludes parts of the visual field according to the current gaze position. In this study, the gaze-contingent paradigm was implemented in a static driving simulator to simulate visual field loss and to evaluate parallels in the resulting driving and gaze behavior in comparison to patients.
  Methods: The sample comprised 15 participants without visual impairment. All subjects performed three drives: with full vision, with simulated left-sided homonymous hemianopia, and with simulated right-sided homonymous hemianopia. During each drive, the participants drove through an urban environment where they had to maneuver through intersections by crossing straight ahead, turning left, and turning right.
  Results: The subjects reported reduced safety and increased workload during simulated visual field loss, which was reflected in reduced lane position stability and a greater absence of large gaze movements. Initial compensatory strategies were found in a dislocated gaze position and a distorted fixation ratio toward the blind side, which was more pronounced for right-sided visual field loss. During left-sided visual field loss, the participants showed a smaller horizontal range of gaze positions, longer fixation durations, and smaller saccadic amplitudes compared to right-sided homonymous hemianopia and, more distinctly, compared to normal vision.
  Conclusion: The results largely mirror reports from driving and visual search tasks under simulated and pathological homonymous hemianopia concerning driving and scanning challenges, initially adopted compensatory strategies, and driving safety. This supports the notion that gaze-contingent displays can be a useful addendum to driving simulator research on visual impairments, provided the results are interpreted with methodological limitations and inherent differences from the pathological impairment in mind.
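The gaze-contingent occlusion described above reduces to a simple per-frame visibility test: anything on the blind side of the current gaze position is masked. The sketch below is a minimal, display-free illustration of that rule; the names and the half-field-in-pixels geometry are simplifying assumptions (actual hemianopia simulations define the blind field about the vertical meridian in degrees of visual angle, not raw pixels).

```python
from enum import Enum

class Hemianopia(Enum):
    LEFT = "left"    # left half of the visual field is blind
    RIGHT = "right"  # right half of the visual field is blind

def is_visible(point_x, gaze_x, condition):
    """Gaze-contingent half-field mask: a scene point is visible only if it
    lies on the sighted side of the current horizontal gaze position."""
    if condition is Hemianopia.LEFT:
        return point_x >= gaze_x  # everything left of gaze is occluded
    return point_x <= gaze_x      # everything right of gaze is occluded

if __name__ == "__main__":
    gaze_x = 960  # current gaze, center of a 1920-px-wide display
    hazards = {"pedestrian_left": 400, "car_right": 1500}
    for name, x in hazards.items():
        for cond in Hemianopia:
            print(f"{name} visible under {cond.value} hemianopia:",
                  is_visible(x, gaze_x, cond))
```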