Title: Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign
Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function performed by AI is the computer-vision detection of roadway signs. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised, and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.
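For intuition about why a manipulated sign can be harder for AI than for a human, here is a minimal adversarial-perturbation sketch (FGSM) in Python/PyTorch. The model, image, and class index are stand-ins, not the study’s materials, and the study’s actual manipulation method is not specified here:

```python
# Hypothetical FGSM sketch: a small pixel-level change that degrades a
# sign classifier while leaving the image unchanged to a human eye.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # stand-in sign classifier (untrained)
image = torch.rand(1, 3, 224, 224)      # stand-in photo of a stop sign
label = torch.tensor([0])               # assume class index 0 = "stop sign"

image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
loss.backward()

eps = 0.03                               # small perturbation budget
adversarial = (image + eps * image.grad.sign()).clamp(0, 1).detach()
# To a human the sign still looks the same; the classifier's output can flip.
print(F.softmax(model(adversarial), dim=1)[0, label.item()].item())
```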
Award ID(s):
2245055
PAR ID:
10371002
Author(s) / Creator(s):
 ;  ;  ;  ;  ;  ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Journal of Cognitive Engineering and Decision Making
Volume:
16
Issue:
4
ISSN:
1555-3434
Format(s):
Medium: X
Size(s):
p. 237-251
Sponsoring Org:
National Science Foundation
More Like this
  1. Traffic accident anticipation is a vital function of Automated Driving Systems (ADS) for providing a safe driving experience. An accident anticipation model aims to predict accidents promptly and accurately before they occur. Existing Artificial Intelligence (AI) models of accident anticipation lack a human-interpretable explanation of their decision making. Although these models perform well, they remain a black box to ADS users, who find it difficult to trust them. To this end, this paper presents a gated recurrent unit (GRU) network that learns spatio-temporal relational features for the early anticipation of traffic accidents from dashcam video data. A post-hoc attention mechanism named Grad-CAM (Gradient-weighted Class Activation Mapping) is integrated into the network to generate saliency maps as the visual explanation of the accident anticipation decision. An eye tracker captures human eye-fixation points for generating human attention maps, and the explainability of the network-generated saliency maps is evaluated against these human attention maps. Qualitative and quantitative results on a public crash dataset confirm that the proposed explainable network can anticipate an accident on average 4.57 s before it occurs, with 94.02% average precision. Various post-hoc attention-based XAI methods are then evaluated and compared, confirming that the Grad-CAM chosen by this study generates high-quality, human-interpretable saliency maps (with 1.23 Normalized Scanpath Saliency) for explaining the crash anticipation decision. Importantly, the results confirm that the proposed AI model, with a human-inspired design, can outperform humans in accident anticipation.
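For intuition, a minimal Grad-CAM sketch is shown below. It uses a generic CNN backbone as a stand-in for the paper’s GRU-based video network, so the model, layer choice, and input are assumptions, not the paper’s implementation:

```python
# Minimal Grad-CAM sketch (PyTorch): weight the target layer's activations
# by the gradient of the score being explained, then build a saliency map.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # stand-in CNN, not the paper's network
target_layer = model.layer4             # last conv block (an assumed choice)

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(v=go[0]))

frame = torch.randn(1, 3, 224, 224)     # stand-in dashcam frame
score = model(frame)[0].max()           # top-class score to explain
score.backward()

weights = gradients["v"].mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * activations["v"]).sum(1, keepdim=True))
cam = F.interpolate(cam, size=frame.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized saliency map
```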
  2. Recent work has considered personalized route planning based on user profiles, but none of it accounts for human trust. We argue that human trust is an important factor to consider when planning routes for automated vehicles. This article presents a trust-based route-planning approach for automated vehicles. We formalize the human-vehicle interaction as a partially observable Markov decision process (POMDP) and model trust as a partially observable state variable of the POMDP, representing the human’s hidden mental state. We build data-driven models of human trust dynamics and takeover decisions, which are incorporated into the POMDP framework, using data collected from an online user study with 100 participants on the Amazon Mechanical Turk platform. We compute optimal routes for automated vehicles by solving for optimal policies in the POMDP, and we evaluate the resulting routes via human-subject experiments with 22 participants on a driving simulator. The experimental results show that participants taking the trust-based route generally reported more positive responses in the after-driving survey than those taking the baseline (trust-free) route. In addition, we analyze the trade-offs between multiple planning objectives (e.g., trust, distance, energy consumption) via multi-objective optimization of the POMDP. We also identify a set of open issues and implications for real-world deployment of the proposed approach in automated vehicles.
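The core mechanism here can be sketched as a discrete Bayes filter over hidden trust levels. The transition and observation probabilities below are illustrative assumptions, not the data-driven models fitted in the paper:

```python
# Hypothetical POMDP belief update: trust is a latent state, and observed
# takeover behavior is the evidence used to correct the belief.
import numpy as np

trust_levels = ["low", "medium", "high"]
belief = np.array([1/3, 1/3, 1/3])           # uniform prior over hidden trust

# P(trust' | trust) on an uneventful route segment: trust tends to rise
T_uneventful = np.array([[0.7, 0.3, 0.0],
                         [0.1, 0.7, 0.2],
                         [0.0, 0.2, 0.8]])

# P(takeover | trust): low-trust drivers take over more often
p_takeover = np.array([0.8, 0.4, 0.1])

def update(belief, took_over):
    """One belief update: predict with the transition model, then
    correct with the takeover observation likelihood."""
    predicted = belief @ T_uneventful
    likelihood = p_takeover if took_over else 1 - p_takeover
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = update(belief, took_over=False)
print(dict(zip(trust_levels, belief.round(3))))
```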
  3. State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees’ inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI’s classifications will match their own, but explanations generated by Bayesian teaching improve their ability to predict the AI’s judgements by moving them away from this prior belief. Bayesian teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
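A toy sketch of the Bayesian-teaching criterion: candidate explanations are scored by how far they move a simulated explainee’s posterior toward the AI’s actual decision rule. The hypotheses and likelihoods below are invented for illustration, not taken from the paper:

```python
# Hypothetical Bayesian-teaching scorer: pick the explanation example that
# most shifts the explainee's belief toward the AI's true decision rule.
import numpy as np

hypotheses = ["matches_me", "ai_rule"]   # explainee's candidate models of the AI
prior = np.array([0.8, 0.2])             # default belief: "the AI judges like I do"

# P(example | hypothesis) for three candidate explanation examples
likelihood = np.array([[0.5, 0.5],       # uninformative example
                       [0.4, 0.7],       # mildly diagnostic
                       [0.1, 0.9]])      # strongly diagnostic of the AI's rule

def posterior_on_target(example_idx):
    post = prior * likelihood[example_idx]
    post /= post.sum()
    return post[1]                        # belief mass on the true AI rule

scores = [posterior_on_target(i) for i in range(3)]
best = int(np.argmax(scores))             # the teaching criterion selects this one
print(best, scores)
```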
  4. As artificial intelligence (AI) becomes more integrated into creative and problem-solving tasks, it is critical to understand how to use AI to foster innovation. Using the sustainable washing machine as a primary example, this study designed and developed AIDA, an AI design assistant implemented as a web-based chatbot that leverages large language models to facilitate design ideation. AIDA prompts design tasks and assesses user-generated ideas for validity, novelty, and feasibility using RoBERTa-based models. In the initial phase of this ongoing project, we conducted a human-subject experiment to validate a baseline version of AIDA and examined user performance and perceptions. Participants interacted smoothly with AIDA, performed consistently, and reported mostly positive perceived usefulness, enjoyment, and trust. Moreover, female participants and those aged 25 or older showed comparable levels of trust in general automated systems and in AIDA, whereas male participants and those under 25 were more skeptical of AIDA. This research offers a framework for technical development, tailored interactions, and real-time feedback, as well as insights into the use of AI chatbots to mediate engineering design. By analyzing user behavior and survey responses, we identified future directions for designing AI systems in engineering education and early-stage design.
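A hedged sketch of the idea-scoring step with a RoBERTa-based classifier, in the spirit of (but not identical to) AIDA’s validity/novelty/feasibility models. The checkpoint below is a generic stand-in whose classification head is untrained, so its scores are meaningless until the model is fine-tuned on labeled design ideas:

```python
# Hypothetical idea scorer: a RoBERTa text-classification pipeline applied
# to a user-generated design idea. "roberta-base" is a placeholder; a
# fine-tuned checkpoint with task-specific labels would be used in practice.
from transformers import pipeline

scorer = pipeline("text-classification",
                  model="roberta-base",   # stand-in, untrained head
                  top_k=None)             # return scores for all labels

idea = "A washing machine that reuses rinse water for the next pre-wash cycle."
print(scorer(idea))                       # per-label scores for the candidate idea
```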
  5. The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans’ reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by affective factors such as humans’ trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on that trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans’ reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans’ trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
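The hidden-Markov idea can be sketched as a filter over a latent trust state whose transitions depend on whether the AI’s last recommendation was correct and whose emissions are the observed reliance decisions. All parameters below are made-up illustrations, not the fitted model from the paper:

```python
# Hypothetical HMM of trust dynamics: latent trust evolves with AI
# correctness; accepting or overriding the AI is the observable emission.
import numpy as np

# states: 0 = low trust, 1 = high trust
T_after_ai_correct = np.array([[0.6, 0.4],
                               [0.1, 0.9]])  # trust tends to grow
T_after_ai_wrong = np.array([[0.9, 0.1],
                             [0.5, 0.5]])    # trust tends to decay

p_rely = np.array([0.3, 0.9])                # P(accept AI advice | trust state)

belief = np.array([0.5, 0.5])                # initial belief over trust
for ai_correct, relied in [(True, True), (False, True), (False, False)]:
    T = T_after_ai_correct if ai_correct else T_after_ai_wrong
    belief = belief @ T                      # latent trust evolves
    obs = p_rely if relied else 1 - p_rely   # reliance observation likelihood
    belief = belief * obs
    belief /= belief.sum()

print(belief)  # filtered estimate of the decision maker's trust
```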