

Title: How Long Can a Driver (Safely) Glance at an Augmented-Reality Head-Up Display?
Augmented-reality (AR) head-up displays (HUDs) are a promising way to reduce the distraction potential of performing secondary visual tasks while driving; however, we currently do not know how to effectively evaluate interfaces in this area. In this study, we show that current visual distraction standards for evaluating in-vehicle displays may not be applicable to AR HUDs. We provide evidence that AR HUDs can afford longer glances with no decrement in driving performance. We propose that the selection of measurement methods for driver distraction research should be guided not only by the nature of the task under evaluation but also by the properties of the method itself.
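For readers unfamiliar with how glance-based distraction standards are operationalized, the sketch below computes common eye-glance metrics and checks them against the acceptance thresholds commonly cited from NHTSA's visual-manual driver distraction guidelines (mean glance duration no more than 2.0 s, no more than 15% of glances longer than 2.0 s, and total eyes-off-road time no more than 12.0 s). The data and function are illustrative assumptions, not the paper's evaluation code.

```python
# Illustrative sketch: NHTSA-style eye-glance acceptance criteria.
# Glance durations (seconds) for one participant performing a secondary
# task; the values here are made up for demonstration.
glances = [0.8, 1.2, 1.9, 2.4, 1.1, 0.9]

def nhtsa_glance_check(durations, max_mean=2.0, max_single=2.0,
                       max_long_share=0.15, max_total=12.0):
    """Check glance durations against the three commonly cited NHTSA
    eye-glance criteria: mean glance duration <= 2.0 s, no more than
    15% of glances > 2.0 s, and total eyes-off-road time <= 12.0 s."""
    mean_dur = sum(durations) / len(durations)
    long_share = sum(d > max_single for d in durations) / len(durations)
    total = sum(durations)
    return {
        "mean_glance_ok": mean_dur <= max_mean,
        "long_glance_share_ok": long_share <= max_long_share,
        "total_time_ok": total <= max_total,
    }

print(nhtsa_glance_check(glances))
```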
Award ID(s):
1816721
NSF-PAR ID:
10283671
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
64
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
42 to 46
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As the automotive industry progresses toward the car of the future, we have seen increasing interest in using augmented reality (AR) head-up displays (HUDs) in driving. AR HUDs provide a fundamentally new driving experience in which drivers still have to respond to both the road and the information provided by the system, creating the perfect atmosphere for potentially unsafe and distracting interfaces. As we start fielding and designing for new AR HUD displays, the complexities of interface design and its impacts on driver performance must be further understood before AR HUDs can be broadly and safely incorporated into vehicles. Nevertheless, existing methods for assessing the usefulness of computer-based user interfaces may not be sufficiently rich to measure the overall impact of AR HUD interfaces on human performance. Therefore, in my Ph.D. research, I focus on developing and testing methods to evaluate AR HUDs' effects on driver distraction and performance. My primary goal is to assess the glance allocation and visual capabilities of drivers using AR HUDs and to apply this knowledge to inform new methods of AR HUD assessment that account for inattentional blindness and cognitive tunneling.
  2. Although driving is a complex, multitask activity, it is not unusual for drivers to simultaneously engage in other, non-driving-related tasks using secondary in-vehicle information systems (IVIS). The use of IVIS and its potential negative safety consequences have been investigated over the years. However, with the advent and advance of in-vehicle technologies such as augmented-reality head-up displays (AR HUDs), there are increasing opportunities for improving secondary task engagement and decreasing negative safety consequences. In this study, we aim to understand the effects of low-cognitive-load AR HUD tasks on driving performance during monotonous driving. Adapting NHTSA's driver distraction guidelines, we conducted a user study in which twenty-four gender-balanced participants performed secondary AR HUD tasks of different durations while driving in a monotonous environment using a medium-fidelity driving simulator. We performed a mixed-methods analysis to evaluate drivers' perceived workload (NASA-TLX) and lateral and longitudinal driving performance. Although drivers subjectively perceived AR HUD tasks as having higher cognitive demand, the AR tasks resulted in improved driving performance. Conversely, the duration of the secondary tasks had no measurable impact on performance, which suggests that the amount of time spent on tasks has neither negative nor positive implications for driving performance. We provide evidence of potential benefits of secondary AR task engagement; in fact, there are situations in which AR HUDs can improve drivers' alertness and vigilance.
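In studies like this one, lateral and longitudinal driving performance are commonly summarized with measures such as the standard deviation of lane position (SDLP) and speed variability. The sketch below shows one plausible way to compute these from simulator time-series logs; the data and variable names are hypothetical, not the study's actual analysis code.

```python
import numpy as np

# Hypothetical simulator log: lane position (m) and speed (m/s) at 60 Hz.
lane_pos = np.random.normal(0.0, 0.25, size=600)  # placeholder data
speed = np.random.normal(27.0, 1.5, size=600)     # placeholder data

# Standard deviation of lane position (SDLP): a common lateral measure.
sdlp = np.std(lane_pos, ddof=1)

# Speed variability: a common longitudinal-control measure.
speed_sd = np.std(speed, ddof=1)

print(f"SDLP = {sdlp:.3f} m, speed SD = {speed_sd:.3f} m/s")
```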
  3. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information they need to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility of, and justification for, investment in generating larger field-of-view (FOV) AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher perceived workload; overall, however, results suggest no significant difference in perceived workload between the interface conditions. Landmark knowledge results suggest that the mean number of remembered scenes in the two conditions is statistically similar, indicating that participants using either interface design remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task, independent of HUD interface design. Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
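As an aside on how a scene-ordering score might be quantified, one common approach is to compare a participant's recalled scene order against the true route order with a rank correlation, alongside the proportion of adjacent scene pairs placed in the correct relative order. The sketch below uses Kendall's tau from SciPy; the data and variable names are hypothetical, not the study's materials or analysis code.

```python
from scipy.stats import kendalltau

# True on-route scene order and one participant's recalled positions
# for each scene (hypothetical data).
true_order = [1, 2, 3, 4, 5, 6]
recalled_order = [1, 3, 2, 4, 6, 5]

# Kendall's tau measures agreement between the orderings (1.0 = perfect).
tau, p_value = kendalltau(true_order, recalled_order)

# Proportion of adjacent scene pairs placed in the correct relative order.
correct_pairs = sum(
    recalled_order.index(a) < recalled_order.index(b)
    for a, b in zip(true_order, true_order[1:])
)
print(f"tau = {tau:.2f}, "
      f"correct adjacent pairs = {correct_pairs}/{len(true_order) - 1}")
```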
  4. Objective: We controlled participants' glance behavior while they used head-down displays (HDDs) and head-up displays (HUDs) in order to isolate driving behavioral changes due to display type across different driving environments. Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers, in theory, to gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs and traditional displays on human performance show differences in both drivers' visual attention and driving performance. Yet no studies have isolated glance behavior from driving behavior, which limits our ability to understand the cause of these differences and the resulting impact on display design. Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving both in a simple, straight, empty roadway environment and in a more realistic driving environment that included traffic and turns. Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions even though participants' glance behavior was similar. Conclusion: The assumption that visual attention can be evaluated in the same way for different types of vehicle displays may therefore be inaccurate. Differences between driving environments also call into question the validity of testing HUDs in simplistic driving environments. Application: As we move toward integrating HUD user interfaces into vehicles, it is important that we develop new, sensitive assessment methods to ensure that HUD interfaces are indeed safe for driving.
  5. Objective: We measured how long distraction by a smartphone affects simulated driving behaviors after the distracting tasks are completed (i.e., the distraction hangover). Background: Most drivers know that smartphones are distracting. To limit distraction, drivers can use hands-free devices that require only brief glances at the smartphone. However, the cognitive cost of switching tasks from driving to communicating and back to driving adds an underappreciated, potentially long period to the total distraction time. Method: Ninety-seven 21- to 78-year-old individuals who self-identified as active drivers and smartphone users engaged in a simulated driving scenario that included smartphone distractions. Peripheral-cue and car-following tasks were used to assess driving behavior, along with synchronized eye tracking. Results: Participants' lateral speed was greater than baseline for 15 s after the end of a voice distraction and for up to 25 s after a text distraction. Correct identification of peripheral cues dropped about 5% per decade of age, and participants in the 71+ age group missed about 50% of peripheral cues within 4 s of the distraction. During distraction, coherence with the lead car in the car-following task dropped from 0.54 to 0.045, and seven participants rear-ended the lead car. Breadth of scanning contracted by 50% after distraction. Conclusion: Simulated driving performance drops dramatically after smartphone distraction for all ages and for both voice and texting. Application: Public education should include the dangers of any smartphone use during driving, including hands-free.
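The car-following coherence reported here is typically a spectral measure of how closely the following car's speed trace tracks the lead car's. A minimal sketch of how such a value could be computed with SciPy is below; the sampling rate and speed signals are hypothetical, and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 60.0                      # hypothetical simulator sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)  # two minutes of driving

# Hypothetical speed traces (m/s): the following car tracks the lead
# car's slow speed oscillation with a lag and some noise.
lead_speed = 25 + 2 * np.sin(2 * np.pi * 0.05 * t)
follow_speed = (25 + 2 * np.sin(2 * np.pi * 0.05 * (t - 1.5))
                + np.random.normal(0, 0.3, t.size))

# Magnitude-squared coherence across frequencies; car-following studies
# often summarize it in the low-frequency band where speed variations occur.
f, cxy = coherence(lead_speed, follow_speed, fs=fs, nperseg=1024)
band = (f > 0.0) & (f < 0.5)
print(f"mean coherence below 0.5 Hz: {cxy[band].mean():.2f}")
```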