- Award ID(s):
- 2245055
- PAR ID:
- 10440943
- Date Published:
- Journal Name:
- Human Factors: The Journal of the Human Factors and Ergonomics Society
- ISSN:
- 0018-7208
- Page Range / eLocation ID:
- 001872082311686
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Although driving is a complex, multitask activity, it is not unusual for drivers to simultaneously engage in non-driving related tasks using secondary in-vehicle information systems (IVIS). The use of IVIS and its potential negative safety consequences have been investigated over the years. However, with the advent and advance of in-vehicle technologies such as augmented-reality head-up displays (AR HUDs), there are increasing opportunities for improving secondary task engagement and decreasing negative safety consequences. In this study, we aim to understand the effects of low-cognitive-load AR HUD tasks on driving performance during monotonous driving. Adapting NHTSA's driver distraction guidelines, we conducted a user study with twenty-four gender-balanced participants who performed secondary AR HUD tasks of different durations while driving in a monotonous environment using a medium-fidelity driving simulator. We performed a mixed-methods analysis to evaluate drivers' perceived workload (NASA-TLX) and lateral and longitudinal driving performance. Although drivers subjectively perceived AR HUD tasks as more cognitively demanding, AR tasks resulted in improved driving performance. Conversely, the duration of the secondary tasks had no measurable impact on performance, which suggests that the amount of time spent on tasks has neither negative nor positive implications for driving performance. We provide evidence of potential benefits of secondary AR task engagement; in fact, there are situations in which AR HUDs can improve drivers' alertness and vigilance.
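To illustrate the kind of lateral and longitudinal driving performance measures typically computed in simulator studies like this one, here is a minimal sketch; the function name, metric set, and sample values are hypothetical and not taken from the study's actual analysis pipeline.

```python
import statistics

def driving_performance_metrics(lane_positions, speeds):
    """Summarize lateral and longitudinal control from simulator samples.

    lane_positions: lateral offsets from lane center (m)
    speeds: instantaneous speeds (km/h)
    Hypothetical helper; the study's exact measures may differ.
    """
    return {
        # Standard deviation of lane position (SDLP): a common lateral measure
        "sdlp": statistics.stdev(lane_positions),
        # Mean speed and speed variability: common longitudinal measures
        "mean_speed": statistics.mean(speeds),
        "speed_sd": statistics.stdev(speeds),
    }

# Toy simulator log: five samples per measure
metrics = driving_performance_metrics(
    lane_positions=[0.1, -0.2, 0.05, 0.15, -0.1],
    speeds=[98.0, 101.0, 100.0, 99.5, 101.5],
)
```

Lower SDLP and lower speed variability would indicate steadier lane keeping and speed maintenance, the direction of the improvement reported above.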
- Vigilance refers to an individual's ability to maintain attention over time. Vigilance decrement is particularly concerning in clinical environments, where shift work and long working hours are common. This study identifies significant factors and indicators for predicting and monitoring individuals' vigilance decrement. We enrolled 11 participants and measured their vigilance levels by recording their reaction times while they completed the Psychomotor Vigilance Test. Additionally, we measured participants' physiological responses and collected their sleep deprivation data, demographic information, and self-reported anxiety levels. Using repeated-measures correlation analysis, we found that decreased vigilance levels, indicated by longer reaction times, were associated with higher electrodermal activity (p < .01), lower skin temperature (p < .001), shorter fixation durations (p < .05), and increased saccade frequency (p < .05). Moreover, sleep deprivation significantly affected vigilance decrement (p < .001). Our findings suggest the potential to develop a predictive model of vigilance decrement using physiological signals collected from non-intrusive devices, as an alternative to current behavior-based methods.
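Repeated-measures correlation pools within-participant associations rather than correlating across participants, so each person's baseline reaction time or skin response does not inflate the estimate. A minimal sketch of the underlying idea, assuming a toy data layout (dict of participant → (x, y) pairs): center both variables within each participant, then correlate the pooled deviations. This approximates, but is not identical to, the ANCOVA-based rm_corr statistic.

```python
from statistics import mean

def within_subject_corr(data):
    """Approximate a repeated-measures correlation: remove each
    participant's own means from both variables, then compute a Pearson
    correlation over the pooled within-participant deviations.

    data: dict mapping participant id -> list of (x, y) pairs,
    e.g. reaction time (ms) vs. electrodermal activity (uS).
    """
    xs, ys = [], []
    for pairs in data.values():
        mx = mean(p[0] for p in pairs)
        my = mean(p[1] for p in pairs)
        xs.extend(p[0] - mx for p in pairs)
        ys.extend(p[1] - my for p in pairs)
    # Pearson correlation of the pooled deviations
    sxx = sum(v * v for v in xs)
    syy = sum(v * v for v in ys)
    sxy = sum(a * b for a, b in zip(xs, ys))
    return sxy / (sxx * syy) ** 0.5

# Toy example: two participants with different baselines but a shared
# within-person trend (longer reaction times with higher EDA)
r = within_subject_corr({
    "P1": [(300, 4.1), (320, 4.4), (350, 4.9)],
    "P2": [(280, 2.0), (310, 2.6), (330, 3.0)],
})
```

Despite the two participants' very different baseline values, the pooled within-person correlation is strongly positive, which is the pattern the study reports for reaction time and electrodermal activity.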
- We explore the transfer of control from an automated vehicle to the driver. Based on data from N = 19 participants in a driving simulator experiment, we find evidence that the transfer of control often does not take place in one step. In other words, when the automated system requests the transfer of control back to the driver, the driver often does not simply stop the non-driving task. Rather, the transfer unfolds as a process of interleaving the non-driving and driving tasks. We also find that this process is moderated by the length of time available for the transfer of control: interleaving is more likely when more time is available. Interface designs for automated vehicles must take these results into account to allow drivers to safely take back control from automation.
- Augmented-reality (AR) head-up displays (HUDs) are among the promising solutions for reducing distraction while driving and performing secondary visual tasks; however, we currently do not know how to effectively evaluate interfaces in this area. In this study, we show that current visual distraction standards for evaluating in-vehicle displays may not be applicable to AR HUDs. We provide evidence that AR HUDs can afford longer glances with no decrement in driving performance. We propose that the selection of measurement methods for driver distraction research should be guided not only by the nature of the task under evaluation but also by the properties of the method itself.
- Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech, from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses while human participants performed a challenging primary visual task, imposing low or high cognitive load, and listened to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to the stories while ignoring visual stimuli. Behaviorally, the high-load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence that the high-load dual-task condition was more demanding. Compared to the auditory single-task condition, the dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies > 200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that the behavioral effects of bimodal divided attention on continuous speech processing arise not from impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
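Temporal response function (TRF) encoding models of this kind regress neural data on time-lagged copies of stimulus features. A minimal single-feature, single-channel sketch using closed-form ridge regression; the real analysis is multivariate, with many features, channels, and cross-validated regularization, and the names and synthetic data here are illustrative only.

```python
import numpy as np

def fit_trf(stimulus, eeg, lags, alpha=1.0):
    """Fit a temporal response function by ridge regression:
    predict one EEG channel from time-lagged copies of one stimulus
    feature. A sketch of the encoding-model idea, not the full
    multivariate pipeline used in the study.
    """
    n = len(stimulus)
    # Design matrix: one column per lag (zero-padded at the edges)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[:n - lag] if lag > 0 else stimulus
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    return w

# Synthetic check: the "EEG" responds to the stimulus 2 samples later
# with gain 0.8, plus a little noise; the fitted TRF should peak at lag 2.
rng = np.random.default_rng(0)
stim = rng.standard_normal(500)
eeg = 0.8 * np.roll(stim, 2) + 0.01 * rng.standard_normal(500)
weights = fit_trf(stim, eeg, lags=[0, 1, 2, 3], alpha=0.1)
```

In this framing, "reduced neural tracking" under dual-task load corresponds to weaker or less predictive TRF weights at the affected latencies, which is how latency-specific effects (e.g. > 200 ms) can be localized.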