Title: Does What We See Shape History? Examining Workload History as a Function of Performance and Ambient/Focal Visual Attention
Changes in task demands can have delayed adverse impacts on performance. This phenomenon, known as the workload history effect, is of particular concern in dynamic work domains where operators manage fluctuating task demands. The existing workload history literature does not present a consistent picture of how these effects manifest, prompting researchers to consider measures that shed light on the operator's underlying process. One promising measure is the pattern of visual attention, given its informativeness about a range of cognitive processes. To explore its ability to explain workload history effects, participants completed a task in an unmanned aerial vehicle command and control testbed in which workload transitioned both gradually and suddenly. Participants' performance and visual attention patterns were studied over time to identify workload history effects. The eye-tracking analysis used a recently developed metric, coefficient K, which indicates whether visual attention is more focal or ambient. The performance results revealed workload history effects, but they depended on the workload level, the time elapsed, and the performance measure. The eye-tracking analysis suggested that performance suffered when focal attention was deployed during low workload, an unexpected finding. Synthesized, these results suggest that unexpected visual attention patterns can impact performance both immediately and over time. Further research is needed; however, this work shows the value of including a real-time visual attention measure, such as coefficient K, as a means to understand how operators manage varying task demands in complex work environments.
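Coefficient K contrasts each fixation's duration with the amplitude of the saccade that follows it: long fixations followed by short saccades indicate focal attention, and the reverse indicates ambient attention. A minimal sketch of the computation, assuming the input arrays are hypothetical per-trial recordings and leaving windowing choices to the analyst, might look like:

```python
import numpy as np

def coefficient_k(fix_durations, saccade_amplitudes):
    """Per-fixation coefficient K: z-scored fixation duration minus the
    z-scored amplitude of the saccade that follows it. K > 0 suggests
    focal attention (long fixations, short saccades); K < 0 suggests
    ambient attention. Expects n fixation durations and n-1 saccade
    amplitudes (one saccade between consecutive fixations)."""
    d = np.asarray(fix_durations, dtype=float)[:-1]   # fixation i
    a = np.asarray(saccade_amplitudes, dtype=float)   # saccade i+1
    zd = (d - d.mean()) / d.std()
    za = (a - a.mean()) / a.std()
    return zd - za  # average within moving windows to track K over time
```

Because each z-score is centered within the recording, the K series averages to roughly zero overall; the ambient/focal signal lives in how K varies across time windows.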
Award ID(s):
2008680
NSF-PAR ID:
10250012
Date Published:
Journal Name:
ACM Transactions on Applied Perception
Volume:
18
Issue:
2
ISSN:
1544-3558
Page Range / eLocation ID:
1 to 17
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

    In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

    Objective

    The aim of this study was to test eye-tracking measures’ sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation.

    Methods

Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures, and their ability to predict performance, were investigated.

    Results

    Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.

    Conclusion

The sensitivity and reliability of eye-tracking measures were acceptable, although the values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely strong contributors to workload.

    Application

Future collaborative robots could adapt to the human's cognitive state and skill level, measured using eye-tracking indices of workload and visual attention.

     
  2. Data-rich environments rely on operators to collaborate, especially in light of workload changes. This work explores the relationship between operators' shared visual attention patterns on a target area of interest (AOI), i.e., the AOI causing a workload change, and collaborative performance. Eye-tracking data were collected from ten pairs of participants who completed two scenarios, the first low workload and the second high workload, in an unmanned aerial vehicle (UAV) command and control testbed. The best- and worst-performing pairs were then compared on two shared visual attention metrics: (1) percent gaze overlap and (2) the phi coefficient for the target AOI. The results showed that coordinated visits to and from the target AOI were associated with better performance during high workload. These results suggest that quantitative measures of visual attention can serve as real-time indicators of the adaptation process.
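The phi coefficient for a target AOI can be computed from two binary gaze streams, one per partner, where a 1 means that partner's gaze falls on the AOI in a given time sample. A minimal sketch with hypothetical 0/1 input lists (the exact sampling and AOI coding in the study may differ):

```python
import math

def phi_coefficient(a_on_target, b_on_target):
    """Phi coefficient between two binary gaze streams (1 = looking at
    the target AOI in that time sample). Positive values mean the
    partners tend to visit the AOI at the same time; negative values
    mean their visits tend to alternate."""
    n11 = sum(1 for a, b in zip(a_on_target, b_on_target) if a and b)
    n00 = sum(1 for a, b in zip(a_on_target, b_on_target) if not a and not b)
    n10 = sum(1 for a, b in zip(a_on_target, b_on_target) if a and not b)
    n01 = sum(1 for a, b in zip(a_on_target, b_on_target) if not a and b)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0
```

Perfectly coordinated streams yield phi = 1.0, perfectly alternating streams yield -1.0, and independent streams hover near 0.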
  3. Given there is no unifying theory or design guidance for workload transitions, this work investigated how visual attention allocation patterns could inform both topics by examining whether scan-based eye-tracking metrics could predict workload transition performance trends in a context-relevant domain. The eye movements of sixty Naval flight students were tracked as workload transitioned at a slow, medium, or fast pace in an unmanned aerial vehicle testbed. Four scan-based metrics were significant predictors across the growth curve models of response time and accuracy. Stationary gaze entropy (a measure of how dispersed visual attention transitions are across tasks) was predictive across all three transition rates. The other three predictive scan-based metrics captured different aspects of visual attention, including its spread, directness, and duration. The findings specify several previously missing details in both theory and design guidance and serve as a basis for future workload transition research.
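Stationary gaze entropy can be understood as the Shannon entropy of the distribution of fixations across AOIs: higher values mean attention is spread more evenly across tasks, lower values mean it concentrates on a few. A minimal sketch, with hypothetical AOI labels (the study's exact AOI definitions are not reproduced here):

```python
import math
from collections import Counter

def stationary_gaze_entropy(aoi_sequence):
    """Shannon entropy (in bits) of the distribution of fixations across
    AOIs. 0 bits = all fixations on one AOI; log2(k) bits = fixations
    spread uniformly across k AOIs."""
    counts = Counter(aoi_sequence)
    n = len(aoi_sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, fixations spread evenly over four AOIs give 2.0 bits, while fixations locked on a single AOI give 0.0 bits.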
  5. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate: in using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigation to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With recent growth and advances in augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and their impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. One impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation.

Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of and justification for investment in larger field-of-view AR HUDs with potentially multiple focal planes.

Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two HUD interface designs: a world-relative arrow post sign or a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time.

Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher rate of perceived workload; overall, however, the results suggest no significant difference in perceived workload between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes was statistically similar across conditions, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task (independent of HUD interface design).

Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.