

Title: Bimodal Trust: High and Low Trust in Vehicle Automation Influence Response to Automation Errors
Extended exposure to reliable automation may lead to overreliance, as evidenced by poor responses to automation errors. Individual differences in trust may also influence responses. We investigated how these factors affect responses to automation errors in a driving simulator study comprising stop-controlled and uncontrolled intersections. Drivers experienced reliable vehicle automation during six drives in which they indicated whether they felt the automation was going too slow or too fast by pressing the accelerator or brake pedal. Engagement via pedal presses did not affect the automation but offered an objective measure of trust in automation. In the final drive, an error occurred in which the vehicle failed to stop at a stop-controlled intersection. Drivers’ response to the error was inferred from brake presses. Mixture models showed bimodal response times and revealed that drivers with high trust were less likely to respond to automation errors than drivers with low trust.
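The bimodal response-time analysis can be illustrated with a minimal sketch: expectation–maximization for a two-component one-dimensional Gaussian mixture, fit to synthetic brake response times. The data, the initialization, and the Gaussian component form are assumptions for illustration; the study's actual mixture specification may differ (response-time analyses often use ex-Gaussian or lognormal components).

```python
import math
import random

def fit_two_gaussian_mixture(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns (weights, means, stds). Illustrative sketch only.
    """
    data = sorted(data)
    n = len(data)
    # Initialize components at the lower and upper halves of the data.
    m1 = sum(data[: n // 2]) / (n // 2)
    m2 = sum(data[n // 2 :]) / (n - n // 2)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    w1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = w1 * math.exp(-((x - m1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = (1 - w1) * math.exp(-((x - m2) ** 2) / (2 * s2 ** 2)) / s2
            resp.append(p1 / (p1 + p2 + 1e-300))
        # M-step: re-estimate weight, means, and standard deviations.
        r1 = sum(resp)
        w1 = r1 / n
        m1 = sum(r * x for r, x in zip(resp, data)) / r1
        m2 = sum((1 - r) * x for r, x in zip(resp, data)) / (n - r1)
        s1 = math.sqrt(sum(r * (x - m1) ** 2 for r, x in zip(resp, data)) / r1) or 1e-6
        s2 = math.sqrt(sum((1 - r) * (x - m2) ** 2 for r, x in zip(resp, data)) / (n - r1)) or 1e-6
    return (w1, 1 - w1), (m1, m2), (s1, s2)

# Synthetic bimodal "brake response times": fast responders around 1.5 s,
# slow or non-responders around 6 s (hypothetical values, not the study's data).
rng = random.Random(42)
rts = [rng.gauss(1.5, 0.3) for _ in range(60)] + [rng.gauss(6.0, 1.0) for _ in range(40)]
weights, means, stds = fit_two_gaussian_mixture(rts)
```

With well-separated modes like these, the two component means recover the fast and slow response clusters, and the mixing weight estimates the share of drivers in each mode.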
Award ID(s):
1739869
PAR ID:
10581008
Author(s) / Creator(s):
Publisher / Repository:
Sage Journals
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
67
Issue:
1
ISSN:
1071-1813
Page Range / eLocation ID:
1144 to 1149
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Trust calibration poses a significant challenge in the interaction between drivers and automated vehicles (AVs) in the context of human-automation collaboration. To calibrate trust effectively, it is crucial to accurately measure drivers’ trust levels in real time, allowing for timely interventions or adjustments in the automated driving. One viable approach involves employing machine learning models and physiological measures to model the dynamic changes in trust. This study introduces a technique that leverages machine learning models to predict drivers’ real-time dynamic trust in conditional AVs using physiological measurements. We conducted the study in a driving simulator where participants were requested to take over control from automated driving in three conditions: a control condition, a false alarm condition, and a miss condition. Each condition had eight takeover requests (TORs) in different scenarios. Drivers’ physiological measures were recorded during the experiment, including galvanic skin response (GSR), heart rate (HR) indices, and eye-tracking metrics. Among five machine learning models, eXtreme Gradient Boosting (XGBoost) performed best, predicting drivers’ trust in real time with an F1-score of 89.1%, compared to 84.5% for a K-nearest-neighbor baseline. Our findings have implications for the design of in-vehicle trust monitoring systems that calibrate drivers’ trust and facilitate real-time interaction between the driver and the AV.
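For context on the reported classification metrics, the sketch below shows how a binary F1-score is computed from predictions. The labels and predictions are hypothetical, and the study itself trained XGBoost and K-nearest-neighbor classifiers on physiological features rather than this toy setup.

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 = harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical trust labels (1 = high trust, 0 = low trust) for ten windows
# of physiological data, with a classifier's predictions for each window.
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
score = f1_score(y_true, y_pred)  # one false positive, one false negative
```

Here precision and recall are both 5/6, so the F1-score is about 0.83; the paper's 89.1% figure corresponds to the same metric computed over its own test windows.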
  2. Transitions of control are an important safety concern for human-automation teams and automated vehicle safety. While trust and situation awareness have been observed to influence transitions of control in automated vehicles, there are few objective measurements, making these concepts difficult to operationalize in increasingly automated decision systems. In this study, we take a step towards quantifying trust by mapping latent driver beliefs extracted from an active inference-factor analysis model of driver behavior and cognitive dynamics to subjective responses to trust questionnaires. Our results show that subjective trust is primarily correlated with model parameters affecting perceptual evidence accumulation rate, and the same parameters are significantly correlated with driver age. 
  3. Objective: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Background: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. Method: Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale. Results: Outcome bias and the contrast effect significantly influence human operators’ trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. Conclusion: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics/biases. Application: Understanding the trust adjustment process enables accurate prediction of operators’ moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
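Two of the asymmetries described in these results can be captured by a toy trial-by-trial update rule: failures decrease trust more than successes increase it, and an undesirable final outcome amplifies the decrement after a failure (outcome bias). This is an illustrative sketch, not the paper's fitted model; the gain, loss, and bias parameters are invented for demonstration.

```python
def update_trust(trust, automation_correct, outcome_good,
                 gain=0.05, loss=0.12, outcome_bias=0.04):
    """One-trial trust adjustment on a 0-1 scale (illustrative only).

    loss > gain encodes the finding that failures weigh more than
    successes; outcome_bias adds an extra decrement when a failure
    is followed by an undesirable final outcome.
    """
    if automation_correct:
        delta = gain
    else:
        delta = -(loss + (outcome_bias if not outcome_good else 0.0))
    return min(1.0, max(0.0, trust + delta))

# One failure with a bad outcome, then one success: trust does not
# recover to its starting level, reflecting the asymmetry.
t0 = 0.6
t1 = update_trust(t0, automation_correct=False, outcome_good=False)
t2 = update_trust(t1, automation_correct=True, outcome_good=True)
```

A fitted version of such a rule would estimate the three parameters per participant from the visual-analog-scale trust reports collected after each trial.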
  4. Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches, their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel and dynamic approach to model trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Due to the complex nature of trust in automation, it is also suggested to combine machine learning and dynamic modeling approaches, as well as incorporating physiological data. 
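The multi-timescale idea proposed here can be sketched as two exponential filters with different time constants: a fast component tracking recent experience (situational trust) and a slow component drifting toward long-run reliability (learned trust). The structure, parameters, and event coding below are assumptions for illustration, not a model from the paper.

```python
import math

def simulate_trust(events, dt=1.0, tau_fast=5.0, tau_slow=60.0,
                   w_fast=0.4, w_slow=0.6, t0=0.5):
    """Toy two-timescale trust dynamic (illustrative sketch).

    Each event is +1 (automation success) or -1 (automation failure).
    Both components relax toward the event's target (1.0 or 0.0), but
    with different time constants; reported trust is their weighted sum.
    """
    fast = slow = t0
    trace = []
    for e in events:
        target = 1.0 if e > 0 else 0.0
        fast += (target - fast) * (1 - math.exp(-dt / tau_fast))
        slow += (target - slow) * (1 - math.exp(-dt / tau_slow))
        trace.append(w_fast * fast + w_slow * slow)
    return trace

# Ten successes, one failure, then recovery: the fast component dips
# sharply at the failure while the slow component barely moves.
events = [1] * 10 + [-1] + [1] * 10
trace = simulate_trust(events)
```

Separating the timescales this way makes both components measurable in principle, which is the kind of decomposition the overview argues mathematical trust models need.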
  5. Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs), and if trust is lacking, drivers and the general public may hesitate to embrace this technology. This research seeks to investigate contextualized trust profiles in order to create personalized experiences for drivers in AVs with varying levels of reliability. A driving simulator experiment involving 70 participants revealed three distinct contextualized trust profiles (i.e., confident copilots, myopic pragmatists, and reluctant automators) identified through K-means clustering, and analyzed in relation to drivers' dynamic trust, dispositional trust, initial learned trust, personality traits, and emotions. The experiment encompassed eight scenarios where participants were requested to take over control from the AV in three conditions: a control condition, a false alarm condition, and a miss condition. To validate the models, a multinomial logistic regression model was constructed using a Shapley Additive Explanations (SHAP) explainer to determine the most influential features in predicting contextualized trust profiles, achieving an F1-score of 0.90 and an accuracy of 0.89. In addition, an examination of how individual factors impact contextualized trust profiles provided valuable insights into trust dynamics from a user-centric perspective. The outcomes of this research hold significant implications for the development of personalized in-vehicle trust monitoring and calibration systems to modulate drivers' trust levels, thereby enhancing safety and user experience in automated driving.
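The profile-discovery step can be illustrated with a plain K-means implementation on hypothetical two-dimensional trust scores. The features, values, and cluster count below are invented for demonstration; the study clustered richer trust and individual-difference measures across its 70 participants.

```python
import random

def kmeans(points, k=3, iters=50, seed=1):
    """Plain K-means on small numeric tuples (illustrative sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: move each center to its cluster's mean
        # (empty clusters keep their previous center).
        centers = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical (dynamic trust, dispositional trust) scores for nine drivers,
# loosely echoing three profiles: confident, pragmatic, and reluctant.
drivers = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.85),
           (0.6, 0.4), (0.55, 0.5), (0.65, 0.45),
           (0.2, 0.3), (0.25, 0.2), (0.15, 0.25)]
centers, clusters = kmeans(drivers, k=3)
```

In practice one would standardize the features and run multiple random initializations, since K-means can converge to a poor local optimum from an unlucky starting assignment.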