


Title: Building Human-Like Artificial Agents: A General Cognitive Algorithm for Emulating Human Decision-Making in Dynamic Environments

One of the early goals of artificial intelligence (AI) was to create algorithms that exhibited behavior indistinguishable from human behavior (i.e., human-like behavior). Today, AI has diverged, often aiming to excel in tasks inspired by human capabilities and to outperform humans, rather than to replicate human cognition and action. In this paper, I explore the overarching question of whether computational algorithms have achieved this initial goal of AI. I focus on dynamic decision-making, approaching the question from the perspective of computational cognitive science. I present a general cognitive algorithm intended to emulate human decision-making in dynamic environments, as defined in instance-based learning theory (IBLT). I use the cognitive steps proposed in IBLT to organize and discuss current evidence that supports the human-likeness of some of the decision-making mechanisms. I also highlight the significant research gaps that must be addressed to improve current models and to achieve higher fidelity in computational algorithms that represent human decision processes. I conclude with concrete steps toward advancing the construction of algorithms that exhibit human-like behavior, with the ultimate goal of supporting human dynamic decision-making.
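The abstract refers to the decision mechanisms defined in IBLT without reproducing them. As a rough illustration of those mechanisms (activation-based memory retrieval and outcome blending), here is a minimal Python sketch that assumes IBLT's commonly published equations; the class name, parameter values, and the simple binary-choice usage are illustrative, not the paper's implementation.

```python
import math
import random

class IBLAgent:
    """Minimal instance-based learning (IBL) agent sketch.

    Experiences are stored as instances (state, action, outcome) together
    with the times at which they occurred. A decision blends the stored
    outcomes of each candidate action, weighted by memory activation.
    """

    def __init__(self, default_utility=10.0, decay=0.5, noise=0.25):
        self.default_utility = default_utility  # optimistic value for untried options
        self.decay = decay                      # memory decay parameter d
        self.noise = noise                      # activation noise sigma
        self.t = 0                              # model time (decision counter)
        self.instances = {}                     # (state, action, outcome) -> occurrence times

    def _activation(self, occurrences):
        # Power-law decay over past occurrences, plus Gaussian noise.
        base = math.log(sum((self.t - t_j) ** (-self.decay) for t_j in occurrences))
        return base + self.noise * random.gauss(0.0, 1.0)

    def _blended_value(self, state, action):
        matches = [(key, times) for key, times in self.instances.items()
                   if key[0] == state and key[1] == action]
        if not matches:
            return self.default_utility
        tau = self.noise * math.sqrt(2.0)  # Boltzmann temperature
        acts = [self._activation(times) for _, times in matches]
        weights = [math.exp(a / tau) for a in acts]
        total = sum(weights)
        # Blended value: retrieval-probability-weighted average of outcomes.
        return sum((w / total) * key[2] for w, (key, _) in zip(weights, matches))

    def choose(self, state, actions):
        self.t += 1
        return max(actions, key=lambda a: self._blended_value(state, a))

    def feedback(self, state, action, outcome):
        # Record (or reinforce) the instance experienced on this decision.
        self.instances.setdefault((state, action, outcome), []).append(self.t)


# Illustrative usage: repeated choice between a safe and a risky option.
agent = IBLAgent()
for _ in range(100):
    action = agent.choose("task", ["safe", "risky"])
    payoff = 3.0 if action == "safe" else random.choice([0.0, 7.0])
    agent.feedback("task", action, payoff)
```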

 
NSF-PAR ID: 10471988
Author(s) / Creator(s):
Publisher / Repository: SAGE Publications
Date Published:
Journal Name: Perspectives on Psychological Science
ISSN: 1745-6916
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules, or do we violate the speed limit in an emergency? In general, how should we account for and balance ethical values, safety recommendations, and societal norms when we are trying to achieve a certain objective? To enable effective AI-human collaboration, we must equip AI agents with a model of how humans make such trade-offs in environments where there is not only a goal to be reached but also ethical constraints to be considered and possibly aligned with. These ethical constraints could be either deontological rules on actions that should not be performed or consequentialist policies that recommend avoiding certain states of the world. Our purpose is to build AI agents that can mimic human behavior in these ethically constrained decision environments, with the long-term research goal of using AI to help humans make better moral judgments and take better actions. To this end, we propose a computational approach in which competing objectives and ethical constraints are orchestrated through a method that leverages a cognitive model of human decision making, called multi-alternative decision field theory (MDFT). Using MDFT, we build an orchestrator, called MDFT-Orchestrator (MDFT-O), that is both general and flexible. We also show experimentally that MDFT-O not only generates better decisions than a heuristic that takes a weighted average of competing policies (WA-O), but also performs better at mimicking human decisions as collected through Amazon Mechanical Turk (AMT). Our methodology is therefore able to faithfully model human decisions in ethically constrained decision environments.
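    The abstract above names MDFT as the cognitive model behind the orchestrator but does not spell out its dynamics. The sketch below shows the standard MDFT preference-accumulation process; the feedback-matrix values, step count, and the two-route example are assumptions for illustration, not the MDFT-O implementation.

```python
import numpy as np

def mdft_choice(M, attention_weights, S=None, steps=200, rng=None):
    """One simulated MDFT deliberation (illustrative sketch).

    M                 : (n_options, n_attributes) subjective evaluations
    attention_weights : probability of attending to each attribute on a step
    S                 : (n_options, n_options) feedback matrix (memory decay
                        plus lateral inhibition); a simple default is used if omitted
    Returns the index of the option with the highest final preference.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_options, n_attributes = M.shape
    if S is None:
        S = 0.95 * np.eye(n_options) - 0.02 * (np.ones((n_options, n_options)) - np.eye(n_options))
    # Contrast matrix: compares each option against the average of the others.
    C = np.eye(n_options) - np.ones((n_options, n_options)) / n_options
    P = np.zeros(n_options)  # accumulated preference states
    for _ in range(steps):
        # Attention switches stochastically between attributes on every step.
        W = np.zeros(n_attributes)
        W[rng.choice(n_attributes, p=attention_weights)] = 1.0
        valence = C @ M @ W   # momentary advantage of each option
        P = S @ P + valence   # leaky, competitive accumulation of preference
    return int(np.argmax(P))

# Illustrative trade-off: a fast route that violates a norm vs. a slower compliant one.
M = np.array([[0.9, 0.2],
              [0.4, 0.9]])
print(mdft_choice(M, attention_weights=[0.5, 0.5]))
```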
  2. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights into how to advance AI systems toward some of these capabilities. In this paper, we propose a general architecture that is based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities, which can be implemented by learning and reasoning components respectively, allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly helps in decision quality, resource consumption, and efficiency.
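    The fast/slow architecture and its metacognitive component are described only at a high level in this abstract. The toy sketch below shows one way such gating can work, with experience gradually shifting decisions from a slow solver to a cached fast pathway; the confidence heuristic and class names are illustrative assumptions, not the paper's system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FastSlowAgent:
    """Toy fast/slow decision agent with a metacognitive gate (illustrative)."""
    slow_solver: Callable          # expensive reasoner, e.g. a search-based planner
    confidence_threshold: float = 0.8
    experience: dict = field(default_factory=dict)  # state -> (cached action, count)
    slow_calls: int = 0

    def act(self, state):
        # "System 1": cheap lookup of a previously computed decision, with
        # confidence that grows as the same state is encountered repeatedly.
        action, count = self.experience.get(state, (None, 0))
        confidence = count / (count + 1.0)
        # Metacognitive gate: fall back to slow reasoning when unsure.
        if action is None or confidence < self.confidence_threshold:
            self.slow_calls += 1
            action = self.slow_solver(state)
        self.experience[state] = (action, count + 1)
        return action


# After enough repetitions of a state, the slow solver is no longer consulted.
agent = FastSlowAgent(slow_solver=lambda state: min(state))  # stand-in planner
for _ in range(10):
    agent.act((3, 1, 2))
print(agent.slow_calls)  # fewer than 10 once the fast pathway takes over
```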
  3. Interconnected food, energy, and water (FEW) nexus systems face many challenges in supporting human well-being (HWB) and maintaining resilience, especially in arid and semiarid regions like New Mexico (NM), United States (US). Insufficient FEW resources, unstable economic growth due to fluctuations in the prices of crude oil and natural gas, inequitable education and employment, and climate change are some of these challenges. Enhancing the resilience of such coupled socio-environmental systems depends on the efficient use of resources, an improved understanding of the interlinkages across FEW system components, and the adoption of adaptable alternative management strategies. The goal of this study was to develop a framework that can be used to enhance the resilience of these systems. An integrated food, energy, water, well-being, and resilience (FEW-WISE) framework was developed and introduced in this study. This framework consists mainly of five steps: qualitatively and quantitatively assess FEW system relationships, identify important external drivers, integrate FEW systems using system dynamics models, develop FEW and HWB performance indices, and develop a resilience monitoring criterion using a threshold-based approach that integrates these indices. The FEW-WISE framework can be used to evaluate and predict the dynamic behavior of FEW systems in response to environmental and socioeconomic changes using resilience indicators. In conclusion, the derived resilience index can be used to inform decision-making processes and guide the development of alternative scenario-based management strategies to enhance the resilience of the ecological and socioeconomic well-being of vulnerable regions like NM.
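    The abstract describes threshold-based resilience monitoring built from FEW and HWB performance indices but gives no formulas. The sketch below illustrates one plausible form of such a criterion; the weighting scheme, variable names, and example numbers are assumptions, not the FEW-WISE model, which is built on system dynamics simulations.

```python
def resilience_index(indicators, thresholds, weights=None):
    """Composite index with a threshold-based resilience check (illustrative).

    indicators : subsystem performance indices scaled to [0, 1],
                 e.g. {"food": 0.7, "energy": 0.5, "water": 0.4, "well_being": 0.6}
    thresholds : minimum acceptable value for each subsystem
    weights    : optional relative importance of each subsystem
    Returns (composite, resilient); the system counts as resilient only when
    every subsystem stays at or above its threshold.
    """
    weights = weights or {key: 1.0 for key in indicators}
    total = sum(weights[key] for key in indicators)
    composite = sum(weights[key] * indicators[key] for key in indicators) / total
    resilient = all(indicators[key] >= thresholds[key] for key in indicators)
    return composite, resilient


# Example: water falls below its threshold, so the system is flagged as not resilient.
print(resilience_index(
    indicators={"food": 0.7, "energy": 0.5, "water": 0.4, "well_being": 0.6},
    thresholds={"food": 0.5, "energy": 0.4, "water": 0.5, "well_being": 0.5},
))
```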
  4. The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
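    The hidden Markov model in this abstract is described only conceptually (latent trust driving observable reliance decisions). Below is a minimal two-state forward-filtering sketch under assumed transition and emission probabilities; the paper's actual model has its own structure, parameters learned from data, and contextual covariates such as decision stakes.

```python
import numpy as np

# Hidden states: 0 = low trust, 1 = high trust (assumed two-state illustration).
# Observations: 0 = overrode the AI recommendation, 1 = relied on it.
TRANSITION = np.array([[0.8, 0.2],   # trust tends to persist across decisions
                       [0.1, 0.9]])
EMISSION = np.array([[0.7, 0.3],     # low trust -> mostly override
                     [0.2, 0.8]])    # high trust -> mostly rely
PRIOR = np.array([0.5, 0.5])

def predict_reliance(reliance_history):
    """Forward filtering: P(rely on the next decision | observed reliance history)."""
    belief = PRIOR.copy()
    for obs in reliance_history:
        belief = belief * EMISSION[:, obs]   # condition on the observed behavior
        belief = belief / belief.sum()
        belief = TRANSITION.T @ belief       # propagate trust to the next decision
    # Probability of relying next, marginalized over the latent trust state.
    return float(belief @ EMISSION[:, 1])

# Example: a decision maker who relied twice and then overrode the AI.
print(predict_reliance([1, 1, 0]))
```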
  5. While a vast collection of explainable AI (XAI) algorithms has been developed in recent years, they have been criticized for significant gaps with how humans produce and consume explanations. As a result, current XAI techniques are often found to be hard to use and to lack effectiveness. In this work, we attempt to close these gaps by making AI explanations selective, a fundamental property of human explanations, by presenting only the subset of model reasoning that aligns with the recipient's preferences. We propose a general framework for generating selective explanations by leveraging human input on a small dataset. This framework opens up a rich design space that accounts for different selectivity goals, types of input, and more. As a showcase, we use a decision-support task to explore selective explanations based on what the decision maker would consider relevant to the decision task. We conducted two experimental studies to examine three paradigms based on our proposed framework: in Study 1, we ask the participants to provide critique-based or open-ended input to generate selective explanations (self-input); in Study 2, we show the participants selective explanations based on input from a panel of similar users (annotator input). Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI and improving collaborative decision making and subjective perceptions of the AI system, but they also paint a nuanced picture that attributes some of these positive effects to the opportunity to provide one's own input to augment AI explanations. Overall, our work proposes a novel XAI framework inspired by human communication behaviors and demonstrates its potential to encourage future work to make AI explanations more human-compatible.
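    A minimal sketch of the selectivity idea described above: given feature attributions from any XAI method and a recipient's stated preferences, present only a small, preference-aligned subset of the reasoning. The function name, ranking rule, and loan example are illustrative assumptions; the paper's framework derives selectivity from human input collected on a small dataset rather than a fixed rule.

```python
def selective_explanation(attributions, preferred_features, k=3):
    """Select a small subset of model reasoning to present (illustrative).

    attributions       : dict of feature -> importance score from any XAI method
    preferred_features : features the recipient marked as relevant (e.g., from
                         critique-based input on a small dataset)
    k                  : number of reasons to show
    Preferred features are ranked first; remaining slots are filled by importance.
    """
    ranked = sorted(attributions, key=attributions.get, reverse=True)
    preferred = [f for f in ranked if f in preferred_features]
    others = [f for f in ranked if f not in preferred_features]
    selected = (preferred + others)[:k]
    return {f: attributions[f] for f in selected}


# Example: show only the reasons this recipient considers relevant to the decision.
scores = {"income": 0.4, "credit_history": 0.3, "zip_code": 0.2, "age": 0.1}
print(selective_explanation(scores, preferred_features={"income", "credit_history"}, k=2))
```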

     