

Title: A Simulator Study on Drivers’ Response and Perception Towards Vehicle Cyberattacks
The introduction of advanced technologies has made driving an increasingly automated activity. However, most vehicles are not designed with cybersecurity in mind and are therefore susceptible to cyberattacks. When such incidents happen, it is critical for drivers to respond properly. The goal of this study was to observe drivers' responses to unexpected vehicle cyberattacks while driving in a simulated environment and to gain deeper insights into their perceptions of vehicle cybersecurity. Ten participants completed the experiment, and the results showed that they perceived and responded differently to each vehicle cyberattack. Participants correctly identified the cybersecurity issue and took appropriate action when the issue produced a noticeable visual and auditory response. Participants preferred to be clearly informed about what had happened and what to do through a combination of visual, tactile, and auditory warnings. A lack of knowledge of vehicle cybersecurity was evident among participants.
Award ID(s):
1755795
NSF-PAR ID:
10160586
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
63
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
1498 to 1502
Sponsoring Org:
National Science Foundation
More Like this
  1. The rapid growth of autonomous vehicles is expected to improve roadway safety. However, certain levels of vehicle automation will still require drivers to take over during abnormal situations, which may lead to breakdowns in driver-vehicle interactions. To date, there is no agreement on how best to support drivers in accomplishing a takeover task. Therefore, the goal of this study was to investigate the effectiveness of multimodal alerts as a feasible approach. In particular, we examined the effects of uni-, bi-, and trimodal combinations of visual, auditory, and tactile cues on response times to takeover alerts. Sixteen participants were asked to detect 7 multimodal signals (i.e., visual, auditory, tactile, visual-auditory, visual-tactile, auditory-tactile, and visual-auditory-tactile) while driving under two conditions: with SAE Level 3 automation alone, or with SAE Level 3 automation while also performing a road sign detection task. Performance on the signal and road sign detection tasks, pupil size, and perceived workload were measured. Findings indicate that trimodal combinations result in the shortest response times. Also, response times were longer and perceived workload was higher when participants were engaged in the secondary task. Findings may contribute to the development of theory regarding the design of takeover request alert systems within (semi-)autonomous vehicles.
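The seven signal conditions above are exactly the non-empty combinations of the three sensory channels (3 unimodal, 3 bimodal, 1 trimodal). A minimal sketch of that enumeration — illustrative only, not taken from the study's materials:

```python
from itertools import combinations

# The three sensory channels used for takeover alerts in the study.
modalities = ["visual", "auditory", "tactile"]

# All non-empty subsets: 3 unimodal + 3 bimodal + 1 trimodal = 7 conditions.
signal_conditions = [
    "-".join(combo)
    for r in range(1, len(modalities) + 1)
    for combo in combinations(modalities, r)
]
```

Enumerating conditions this way guarantees no combination is missed when building the experimental design matrix.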
  2. Adults aged 65 years and older are the fastest growing age group worldwide. Future autonomous vehicles may help to support the mobility of older individuals; however, these cars will not be widely available for several decades, and current semi-autonomous vehicles often require manual takeover in unusual driving conditions. In these situations, the vehicle issues a takeover request in any uni-, bi-, or trimodal combination of visual, auditory, or tactile alerts to signify the need for manual intervention. However, to date, it is not clear whether age-related differences exist in the perceived ease of detecting these alerts. Also, the extent to which engagement in non-driving-related tasks affects this perception in younger and older drivers is not known. Therefore, the goal of this study was to examine the effects of age on the ease of perceiving takeover requests in different sensory channels and on attention allocation during conditional driving automation. Twenty-four younger and 24 older adults drove a simulated SAE Level 3 vehicle under three conditions: baseline, while performing a non-driving-related task, and while engaged in a driving-related task, and were asked to rate the ease of detecting uni-, bi-, or trimodal combinations of visual, auditory, or tactile signals. Both age groups found the trimodal alert the easiest to detect. Also, older adults focused more on the road than on the secondary task compared to younger drivers. Findings may inform the development of the next generation of autonomous vehicle systems so that they are safe for a wide range of age groups.
  3. Objective

    This study examined the impact of monitoring instructions when using an automated driving system (ADS) and of road obstructions on post-takeover performance in near-miss scenarios.

    Background

    Past research indicates that partial ADS use reduces drivers' situation awareness and degrades post-takeover performance. Connected vehicle technology may alert drivers to impending hazards in time to safely avoid near-miss events.

    Method

    Forty-eight licensed drivers using ADS were randomly assigned to either the active driving or passive driving condition. Participants navigated eight scenarios with or without a visual obstruction in a distributed driving simulator. The experimenter drove the other simulated vehicle to manually cause near-miss events. Participants’ mean longitudinal velocity, standard deviation of longitudinal velocity, and mean longitudinal acceleration were measured.
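The three dependent measures named above can all be derived from a sampled longitudinal velocity trace. A minimal sketch, assuming uniformly sampled velocities (m/s) at interval dt (seconds); the function and variable names are illustrative, not the study's analysis code:

```python
import statistics

def longitudinal_measures(velocities, dt):
    """Return mean velocity, SD of velocity, and mean acceleration
    for a uniformly sampled longitudinal velocity trace."""
    mean_v = statistics.mean(velocities)
    sd_v = statistics.stdev(velocities)  # sample standard deviation
    # Finite-difference acceleration between successive samples.
    accels = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]
    mean_a = statistics.mean(accels)
    return mean_v, sd_v, mean_a
```

For example, a steady 2 m/s² slow-down sampled once per second gives a constant mean acceleration and a nonzero velocity SD, which is the pattern the results below compare across groups.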

    Results

    Participants in the passive ADS group showed greater, and more variable, deceleration rates than those in the active ADS group. Despite a reliable audiovisual warning, participants failed to slow down in the red-light-running scenario when the conflict vehicle was occluded. Participants' trust in the automated driving system did not vary between the beginning and end of the experiment.

    Conclusion

    Drivers interacting with ADS in a passive manner may continue to show increased and more variable deceleration rates in near-miss scenarios even with reliable connected vehicle technology. Future research may focus on interactive effects of automated and connected driving technologies on drivers’ ability to anticipate and safely navigate near-miss scenarios.

    Application

    Designers of automated and connected vehicle technologies may consider different timings and types of cues to inform drivers of imminent hazards in high-risk scenarios prone to near-miss events.

     
  4. Objective

    We controlled participants' glance behavior while using head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to the use of different display types across different driving environments.

    Background

    Recently, HUD technology has been incorporated into vehicles, allowing drivers to, in theory, gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers' visual attention and driving performance. Yet no studies have isolated glance from driving behaviors, which limits our ability to understand the cause of these differences and the resulting impact on display design.

    Method

    We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple straight and empty roadway environment and a more realistic driving environment that included traffic and turns.

    Results

    In the realistic environment, but not the simpler environment, we found evidence of differing driving behaviors between display conditions, even though participants' glance behavior was similar.

    Conclusion

    Thus, the assumption that visual attention can be evaluated in the same way for different types of vehicle displays may be inaccurate. Differences between driving environments bring into question the validity of testing HUDs using simplistic driving environments.

    Application

    As we move toward the integration of HUD user interfaces into vehicles, it is important that we develop new, sensitive assessment methods to ensure HUD interfaces are indeed safe for driving.
  5. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. Specifically, one impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987).
Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility of, and justification for, investment in generating larger-FOV AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher level of perceived workload. However, overall perceived workload results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in both conditions is statistically similar, indicating that participants using both interface designs remembered the same proportion of on-route scenes.
Deviance analyses show that only maneuver direction influenced landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants exhibited poorer performance on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
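The NASA-TLX questionnaire used above yields six subscale ratings (mental demand, physical demand, temporal demand, performance, effort, frustration). One common scoring variant is the Raw TLX, the unweighted mean of the six ratings; the abstract does not state which variant the study used, so this sketch is illustrative only, with assumed subscale names:

```python
# The six NASA-TLX subscales (each typically rated 0-100).
TLX_SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six subscale ratings.

    `ratings` maps each subscale name to a 0-100 rating. The dict keys
    and the unweighted scoring are assumptions, not the study's method.
    """
    missing = [s for s in TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)
```

Note that the study also analyzed the performance subscale on its own, which is why keeping subscale ratings separate (rather than only storing the composite) matters.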