Title: How Decision Making Develops: Adolescents, Irrational Adults, and Should AI be Trusted With the Car Keys?
Abstract:
This paper reviews the developmental literature on decision making, discussing how increased reliance on gist thinking explains the surprising finding that important cognitive biases increase from childhood to adulthood. This developmental trend can be induced experimentally by encouraging verbatim (younger) versus gist (older) ways of thinking. We then build on this developmental literature to assess the developmental stage of artificial intelligence (AI) and how its decision making compares with humans, finding that popular models are not only irrational but sometimes resemble immature adolescents. To protect public safety and avoid risk, we propose that AI models build on policy frameworks already established to regulate other immature decision makers such as adolescents.
Award ID(s):
2229885
PAR ID:
10483063
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Policy Insights from the Behavioral and Brain Sciences
Volume:
11
Issue:
1
ISSN:
2372-7322
Format(s):
Medium: X
Size(s):
p. 11-18
Sponsoring Org:
National Science Foundation
More Like this
  1. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights into how to advance AI systems towards some of these capabilities. In this paper, we propose a general architecture that is based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities, which can be implemented by learning and reasoning components respectively, allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly improves decision quality, resource consumption, and efficiency.
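The fast/slow architecture described in the abstract above can be pictured as a metacognitive module that arbitrates between a cheap learned solver and an expensive deliberative solver, with experience gradually migrating from slow to fast. The following is a minimal, hypothetical sketch of that idea; the class names, confidence rule, and toy environment are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a fast/slow decision architecture with a
# metacognitive governor. All names and rules here are illustrative.

import random
from collections import defaultdict


class SlowSolver:
    """Deliberative 'System 2' solver: accurate but expensive."""

    def decide(self, state):
        # Stand-in for search/reasoning; here, a fixed rule.
        return "left" if state % 2 == 0 else "right"


class FastSolver:
    """Learned 'System 1' solver: cheap lookups built from past slow decisions."""

    def __init__(self):
        self.memory = {}
        self.counts = defaultdict(int)

    def decide(self, state):
        return self.memory.get(state)

    def learn(self, state, action):
        self.memory[state] = action
        self.counts[state] += 1

    def confidence(self, state):
        # Confidence grows with the number of times this state was practiced.
        return min(1.0, self.counts[state] / 5)


class Metacognition:
    """Routes each decision to the fast or slow solver based on confidence."""

    def __init__(self, fast, slow, threshold=0.8):
        self.fast, self.slow, self.threshold = fast, slow, threshold

    def decide(self, state):
        if self.fast.confidence(state) >= self.threshold:
            return self.fast.decide(state), "fast"
        action = self.slow.decide(state)
        self.fast.learn(state, action)   # experience migrates slow -> fast
        return action, "slow"


if __name__ == "__main__":
    agent = Metacognition(FastSolver(), SlowSolver())
    for step in range(30):
        state = random.randint(0, 3)      # toy constrained environment
        action, mode = agent.decide(state)
        print(f"step {step:2d}  state {state}  action {action:5s}  via {mode}")
```

In this toy run, early decisions are routed to the slow solver and cached; once a state has been practiced enough, the metacognitive check lets the fast solver answer, mirroring the slow-to-fast transition the abstract describes.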
  2. A framework is presented for understanding how misinformation shapes decision-making, which has cognitive representations of gist at its core. I discuss how the framework goes beyond prior work, and how it can be implemented so that valid scientific messages are more likely to be effective, remembered, and shared through social media, while misinformation is resisted. The distinction between mental representations of the rote facts of a message—its verbatim representation—and its gist explains several paradoxes, including the frequent disconnect between knowing facts and, yet, making decisions that seem contrary to those facts. Decision makers can falsely remember the gist as seen or heard even when they remember verbatim facts. Indeed, misinformation can be more compelling than information when it provides an interpretation of reality that makes better sense than the facts. Consequently, for many issues, scientific information and misinformation are in a battle for the gist. A fuzzy-processing preference for simple gist explains expectations for antibiotics, the spread of misinformation about vaccination, and responses to messages about global warming, nuclear proliferation, and natural disasters. The gist, which reflects knowledge and experience, induces emotions and brings to mind social values. However, changing mental representations is not sufficient by itself; gist representations must be connected to values. The policy choice is not simply between constraining behavior or persuasion—there is another option. Science communication needs to shift from an emphasis on disseminating rote facts to achieving insight, retaining its integrity but without shying away from emotions and values. 
  3. Recent years have witnessed a growing literature on the empirical evaluation of explainable AI (XAI) methods. This study contributes to this ongoing conversation by presenting a comparison of the effects of a set of established XAI methods in AI-assisted decision making. Based on our review of previous literature, we highlight three desirable properties that ideal AI explanations should satisfy: improve people's understanding of the AI model, help people recognize the model uncertainty, and support people's calibrated trust in the model. Through three randomized controlled experiments, we evaluate whether four types of common model-agnostic explainable AI methods satisfy these properties on two types of AI models of varying levels of complexity, and in two kinds of decision making contexts where people perceive themselves as having different levels of domain expertise. Our results demonstrate that many AI explanations do not satisfy any of the desirable properties when used on decision making tasks in which people have little domain expertise. On decision making tasks about which people are more knowledgeable, the feature contribution explanation is shown to satisfy more desiderata of AI explanations, even when the AI model is inherently complex. We conclude by discussing the implications of our study for improving the design of XAI methods to better support human decision making, and for advancing more rigorous empirical evaluation of XAI methods.
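For readers unfamiliar with the "feature contribution explanation" named in the abstract above, the sketch below shows the simplest version of the idea for a linear model, where each feature's contribution to a single prediction is just its weight times its value; practical model-agnostic methods approximate an analogous decomposition for complex models. The model weights and feature names here are invented for illustration, not taken from the study.

```python
# Minimal, hypothetical illustration of a feature contribution explanation.
# For a linear model, each feature's contribution to one prediction is
# simply weight * feature value.

weights = {"income": 0.6, "debt": -0.9, "age": 0.1}   # invented model weights
bias = 0.2

applicant = {"income": 1.5, "debt": 2.0, "age": 0.4}  # standardized features

contributions = {name: weights[name] * applicant[name] for name in weights}
prediction = bias + sum(contributions.values())

print(f"prediction score: {prediction:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>6s} contributes {value:+.2f}")
```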
  4. Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind the model's decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits in improving how users interact and collaborate with AI have spurred the AI/ML community to develop more understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems; one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
  5. The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions like humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
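The trust-and-reliance model summarized in the abstract above is a hidden Markov model: trust is a latent state that evolves over time and is observed only through reliance decisions. The snippet below is a minimal sketch of such a model with invented transition and emission probabilities; it illustrates the structure rather than reproducing the paper's fitted parameters.

```python
# Illustrative hidden Markov model of trust and reliance: hidden states are
# coarse trust levels, observations are reliance choices. All probabilities
# here are made up for demonstration, not estimated from data.

import numpy as np

states = ["low_trust", "high_trust"]
obs_symbols = {"override": 0, "rely": 1}

# P(next trust | current trust): trust tends to persist across rounds.
transition = np.array([[0.8, 0.2],
                       [0.1, 0.9]])

# P(reliance decision | trust): high trust -> usually rely on the AI.
emission = np.array([[0.7, 0.3],    # low_trust:  override 0.7, rely 0.3
                     [0.2, 0.8]])   # high_trust: override 0.2, rely 0.8

belief = np.array([0.5, 0.5])       # prior over trust before any interaction


def update(belief, observation):
    """Forward-filter one step: propagate trust, then weight by the observation."""
    predicted = belief @ transition
    weighted = predicted * emission[:, obs_symbols[observation]]
    return weighted / weighted.sum()


def predict_reliance(belief):
    """Probability the decision maker relies on the AI in the next round."""
    return float((belief @ transition) @ emission[:, obs_symbols["rely"]])


if __name__ == "__main__":
    for obs in ["rely", "rely", "override", "rely"]:
        belief = update(belief, obs)
        print(f"observed {obs:8s}  P(high trust)={belief[1]:.2f}  "
              f"P(rely next)={predict_reliance(belief):.2f}")
```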