Title: Age-Related Differences in Takeover Request Modality Preferences and Attention Allocation During Semi-autonomous Driving
Adults aged 65 years and older are the fastest-growing age group worldwide. Future autonomous vehicles may help to support the mobility of older individuals; however, these cars will not be widely available for several decades, and current semi-autonomous vehicles often require manual takeover in unusual driving conditions. In these situations, the vehicle issues a takeover request in some uni-, bi-, or trimodal combination of visual, auditory, or tactile alerts to signify the need for manual intervention. However, to date, it is not clear whether age-related differences exist in the perceived ease of detecting these alerts. Nor is it known to what extent engagement in non-driving-related tasks affects this perception in younger and older drivers. Therefore, the goal of this study was to examine the effects of age on the ease of perceiving takeover requests in different sensory channels and on attention allocation during conditional driving automation. Twenty-four younger and 24 older adults drove a simulated SAE Level 3 vehicle under three conditions (baseline, while performing a non-driving-related task, and while engaged in a driving-related task) and rated the ease of detecting uni-, bi-, and trimodal combinations of visual, auditory, and tactile signals. Both age groups found the trimodal alert the easiest to detect. Older adults also focused more on the road than on the secondary task compared with younger drivers. Findings may inform the development of next-generation autonomous vehicle systems that are safe for a wide range of age groups.
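As a concrete illustration of the alert design space, the short Python sketch below (ours, not from the paper) enumerates the seven possible alerts: every non-empty uni-, bi-, or trimodal combination of the three sensory channels.

# Enumerate all non-empty combinations of the three alert modalities.
from itertools import combinations

MODALITIES = ("visual", "auditory", "tactile")

alerts = [
    "-".join(combo)
    for r in range(1, len(MODALITIES) + 1)
    for combo in combinations(MODALITIES, r)
]
print(alerts)
# ['visual', 'auditory', 'tactile', 'visual-auditory', 'visual-tactile',
#  'auditory-tactile', 'visual-auditory-tactile']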
Award ID(s): 1755746
NSF-PAR ID: 10171815
Journal Name: International Conference on Human-Computer Interaction
Page Range / eLocation ID: 135-146
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The rapid growth of autonomous vehicles is expected to improve roadway safety. However, certain levels of vehicle automation will still require drivers to ‘take over’ during abnormal situations, which may lead to breakdowns in driver-vehicle interactions. To date, there is no agreement on how best to support drivers in accomplishing a takeover task. Therefore, the goal of this study was to investigate the effectiveness of multimodal alerts as a feasible approach. In particular, we examined the effects of uni-, bi-, and trimodal combinations of visual, auditory, and tactile cues on response times to takeover alerts. Sixteen participants were asked to detect seven signals (visual, auditory, tactile, visual-auditory, visual-tactile, auditory-tactile, and visual-auditory-tactile) while driving under two conditions: with SAE Level 3 automation alone or with SAE Level 3 automation while also performing a road-sign detection task. Performance on the signal and road-sign detection tasks, pupil size, and perceived workload were measured. Findings indicate that trimodal combinations produced the shortest response times. Response times were also longer, and perceived workload higher, when participants were engaged in the secondary task. Findings may contribute to theory regarding the design of takeover request alert systems within (semi-)autonomous vehicles.
  2. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.

  3. The introduction of advanced technologies has made driving a more automated activity. However, most vehicles are not designed with cybersecurity in mind and are therefore susceptible to cyberattacks. When such incidents happen, it is critical for drivers to respond properly. The goal of this study was to observe drivers’ responses to unexpected vehicle cyberattacks while driving in a simulated environment and to gain deeper insights into their perceptions of vehicle cybersecurity. Ten participants completed the experiment, and the results showed that they perceived and responded differently to each vehicle cyberattack. Participants correctly identified the cybersecurity issue and took appropriate action when the issue caused a noticeable visual and auditory response. Participants preferred to be clearly informed about what happened and what to do through a combination of visual, tactile, and auditory warnings. A lack of knowledge of vehicle cybersecurity was evident among participants.
  4. Learning human-mobility interaction (HMI) in interactive scenes (e.g., how a vehicle turns at an intersection in response to traffic lights and other oncoming vehicles) can enhance the safety, efficiency, and resilience of smart mobility systems (e.g., autonomous vehicles) and many other ubiquitous computing applications. Toward ubiquitous and understandable HMI learning, this paper considers both spoken language (e.g., human textual annotations) and unspoken language (e.g., visual and sensor-based behavioral mobility information related to the HMI scenes) as the information modalities drawn from real-world HMI scenarios. We aim to extract the important but possibly implicit HMI concepts (as named entities) from the textual annotations (provided by human annotators) through a novel human language and sensor data co-learning design.

    To this end, we propose CG-HMI, a novel Cross-modality Graph fusion approach for extracting important human-mobility interaction concepts by co-learning from textual annotations as well as visual and behavioral sensor data. In order to fuse the unspoken and spoken languages, we designed a unified representation called the human-mobility interaction graph (HMIG) for each modality related to the HMI scenes, i.e., textual annotations, visual video frames, and behavioral sensor time-series (e.g., from on-board or smartphone inertial measurement units). The nodes of the HMIG in these modalities correspond to the textual words (tokenized for ease of processing) related to HMI concepts, the detected traffic participant/environment categories, and the vehicle maneuver behavior types determined from the behavioral sensor time-series. To extract the inter- and intra-modality semantic correspondences and interactions in the HMIG, we designed a novel graph interaction fusion approach with differentiable pooling-based graph attention. The resulting graph embeddings are then processed to identify and retrieve the HMI concepts within the annotations, which can benefit downstream human-computer interaction and ubiquitous computing applications. We developed and implemented CG-HMI in a system prototype and performed extensive studies on three real-world HMI datasets (two on car driving and one on e-scooter riding). We corroborated the excellent performance (on average 13.11% higher accuracy than the baselines in terms of precision, recall, and F1 measure) and effectiveness of CG-HMI in recognizing and extracting important HMI concepts through cross-modality learning. Our CG-HMI studies also provide real-world implications (e.g., about road safety and driving behaviors) regarding the interactions between drivers and other traffic participants.
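    As a rough illustration of the fusion idea, here is a minimal, self-contained Python sketch (our simplification, not the authors' code): toy random node features stand in for the three HMIG modalities, the intra- and cross-modality edges are hand-built rather than learned, and a single graph-attention pass replaces the paper's differentiable-pooling attention layers.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 16

    def masked_softmax(scores, mask):
        # Attention weights over each node's neighbors; non-edges get weight 0.
        scores = np.where(mask, scores, -np.inf)
        e = np.exp(scores - scores.max(axis=1, keepdims=True))
        e = np.where(mask, e, 0.0)
        return e / e.sum(axis=1, keepdims=True)

    def graph_attention(h, adj, W, a):
        # One pass: score each edge as a . [W h_i || W h_j], then aggregate neighbors.
        z = h @ W
        n = z.shape[0]
        pairs = np.concatenate(
            [np.repeat(z, n, axis=0), np.tile(z, (n, 1))], axis=1
        ).reshape(n, n, -1)
        alpha = masked_softmax(pairs @ a, adj.astype(bool))
        return alpha @ z

    # Toy node features for the three per-modality HMIGs: textual tokens,
    # detected traffic participants, and maneuver types from sensor series.
    text_nodes   = rng.normal(size=(4, DIM))  # e.g. "vehicle", "turns", "light", "stops"
    vision_nodes = rng.normal(size=(3, DIM))  # e.g. car, pedestrian, traffic light
    sensor_nodes = rng.normal(size=(2, DIM))  # e.g. left-turn, braking
    nodes = np.vstack([text_nodes, vision_nodes, sensor_nodes])
    n = nodes.shape[0]

    # Fully connect nodes within each modality; add a few hand-picked
    # cross-modality edges standing in for learned correspondences.
    adj = np.zeros((n, n), dtype=int)
    for s in (slice(0, 4), slice(4, 7), slice(7, 9)):
        adj[s, s] = 1
    adj[0, 4] = adj[4, 0] = 1  # token "vehicle" <-> detected car
    adj[1, 8] = adj[8, 1] = 1  # token "turns"   <-> turning maneuver

    W = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)
    a = rng.normal(size=2 * DIM)

    fused = graph_attention(nodes, adj, W, a)
    # Text-node embeddings now mix in visual/sensor context; a downstream
    # classifier would score each token as an HMI concept or not.
    print(fused[:4].shape)  # (4, 16)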

  5. Autonomous Vehicle (AV) technology has the potential to significantly improve driver safety. Unfortunately, drivers could be reluctant to ride with AVs due to a lack of trust in, and acceptance of, AVs’ driving styles. The present study investigated the impact of drivers’ driving styles (aggressive/defensive) and the designed driving styles of AVs (aggressive/defensive) on drivers’ trust, acceptance, and takeover behavior in fully autonomous vehicles. Thirty-two participants were classified into two groups based on their driving styles using the Aggressive Driving Scale and experienced twelve scenarios in either an aggressive AV or a defensive AV. Results revealed that drivers’ trust, acceptance, and takeover frequency were significantly influenced by the interaction between the AV’s driving style and the driver’s own driving style. The findings imply that drivers’ individual differences should be considered in the design of AVs’ driving styles to enhance trust and acceptance and to reduce undesired takeover behavior.
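    To make "interaction effect" concrete, the toy numbers below (invented for illustration, not the study's data) show the pattern the abstract describes: the effect of the AV's style reverses with the driver's own style rather than shifting all drivers the same way.

    import numpy as np

    # Hypothetical mean trust ratings; rows = driver style, cols = AV style.
    #                  aggressive AV, defensive AV
    trust = np.array([[5.8, 4.1],    # aggressive drivers
                      [3.9, 6.2]])   # defensive drivers

    effect_of_av_style = trust[:, 0] - trust[:, 1]
    print(effect_of_av_style)  # [ 1.7 -2.3] -> opposite signs: an interaction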