

Title: Utility Theory based Cognitive Modeling in the Application of Robotics: A Survey
Cognitive modeling, which explores the essence of cognition, including motivation, emotion, and perception, has been widely applied in artificial intelligence (AI) agent domains such as robotics. From the computational perspective, various cognitive functionalities have been developed through utility theory to provide a detailed, process-based understanding for specifying corresponding computational models of representations, mechanisms, and processes. Especially for decision-making and learning in multi-agent/multi-robot systems (MAS/MRS), a suitable cognitive model can guide agents in choosing reasonable strategies to satisfy their current needs and in learning to cooperate and organize their behaviors: optimizing the system's utility, building stable and reliable relationships, and guaranteeing each group member's sustainable development, much as in human society. This survey examines existing robotic systems for developmental cognitive models in the context of utility theory. We discuss the evolution of cognitive modeling in robotics from behavior-based robotics (BBR) and cognitive architectures to the properties of value systems in robots, such as studies on motivations as artificial value systems, and utility-theory-based cognitive modeling for generating and updating strategies in robotic interactions. Then, we examine the extent to which existing value systems support robotic applications from an AI agent cognitive modeling perspective, including single-agent and multi-agent systems, trust among agents, and human-robot interaction. Finally, we survey the existing literature on current value systems in relevant fields and propose several promising research directions, along with some open problems that we deem necessary for further investigation.
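The survey's core idea, that a utility-based value system lets an agent choose strategies that best serve its current needs, can be illustrated with a minimal sketch. The needs, weights, and candidate strategies below are hypothetical examples, not taken from the survey.

```python
# Hypothetical sketch: utility-based strategy selection for a single agent.
# Each strategy scores how well it satisfies each need; the agent picks the
# strategy maximizing the needs-weighted expected utility.

def expected_utility(strategy, needs, weights):
    """Weighted sum of how well a strategy satisfies each need."""
    return sum(weights[n] * strategy.get(n, 0.0) for n in needs)

def choose_strategy(strategies, needs, weights):
    """Pick the strategy name with the highest expected utility."""
    return max(strategies, key=lambda name: expected_utility(strategies[name], needs, weights))

needs = ["safety", "energy", "task_progress"]
weights = {"safety": 0.5, "energy": 0.2, "task_progress": 0.3}
strategies = {
    "explore":  {"safety": 0.3, "energy": 0.2, "task_progress": 0.8},
    "recharge": {"safety": 0.9, "energy": 1.0, "task_progress": 0.1},
}
best = choose_strategy(strategies, needs, weights)
```

With these illustrative weights, the agent's current emphasis on safety and energy makes recharging the higher-utility choice; shifting weight toward task progress would flip the decision.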
Award ID(s):
2348013
PAR ID:
10671022
Author(s) / Creator(s):
Publisher / Repository:
arxiv
Date Published:
Subject(s) / Keyword(s):
Cognition; Utility; Needs; Motivation; Value Systems; Trust; Human-Robot Interaction
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Many established domains of AI rely on integrated cognitive modeling of human and AI agents: collecting and representing knowledge at the human level, and maintaining decision-making processes that lead to physical action intelligible to, and in cooperation with, humans. Especially in human-robot interaction, many AI and robotics technologies focus on human-robot cognitive modeling, from visual processing to symbolic reasoning and from reactive control to action recognition and learning, which will support humans and multi-agent systems in cooperatively achieving tasks. However, the main challenge is efficiently combining human motivations and AI agents' purposes in a shared architecture and reaching consensus in complex environments and missions. To fill this gap, this workshop brings together researchers from different communities interested in multi-agent systems (MAS) and human-robot interaction (HRI) to explore potential approaches, future research directions, and domains in human-multi-agent cognitive fusion.
  2. Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
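The distinction between direct and indirect experiences can be sketched with simple trust-update rules. The exponential-smoothing updates below are illustrative stand-ins, not the actual TIP equations; all parameters are assumptions.

```python
# Illustrative sketch of direct vs. indirect trust updates in a
# multi-human multi-robot team. These smoothing rules are hypothetical,
# not the TIP model's published mathematics.

def update_direct(trust, outcome, alpha=0.3):
    """Move trust toward an observed interaction outcome (0=failure, 1=success)."""
    return (1 - alpha) * trust + alpha * outcome

def update_indirect(trust, peer_trust_in_robot, trust_in_peer, beta=0.2):
    """Move trust toward a teammate's reported trust, scaled by trust in that teammate."""
    w = beta * trust_in_peer
    return (1 - w) * trust + w * peer_trust_in_robot

t = 0.5
t = update_direct(t, 1.0)          # a successful direct interaction raises trust
t = update_indirect(t, 0.9, 0.8)   # a trusted teammate also reports high trust
```

The key property the sketch preserves is that indirect evidence is discounted by how much the human trusts the teammate who supplies it.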
  3. As intelligent systems gain autonomy and capability, it becomes vital to ensure that their objectives match those of their human users; this is known as the value-alignment problem. In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go. We argue that a meaningful solution to value alignment must combine multi-agent decision theory with rich mathematical models of human cognition, enabling robots to tap into people's natural collaborative capabilities. We present a solution to the cooperative inverse reinforcement learning (CIRL) dynamic game based on well-established cognitive models of decision making and theory of mind. The solution captures a key reciprocity relation: the human will not plan her actions in isolation, but rather reason pedagogically about how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human's actions pragmatically. To our knowledge, this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models.
  4. Computational metacognition represents a cognitive systems perspective on high-order reasoning in integrated artificial systems that seeks to leverage ideas from human metacognition and from metareasoning approaches in artificial intelligence. The key characteristic is to declaratively represent and then monitor traces of cognitive activity in an intelligent system in order to manage the performance of cognition itself. Improvements in cognition then lead to improvements in behavior and thus performance. We illustrate these concepts with an agent implementation in a cognitive architecture called MIDCA and show the value of metacognition in problem-solving. The results illustrate how computational metacognition improves performance by changing cognition through meta-level goal operations and learning. 
  5. Studying social-ecological systems, in which agents interact with each other and their environment, is important both for sustainability applications and for understanding how human cognition functions in context. In such systems, the environment shapes the agents' experience and actions, and in turn the collective action of agents changes social and physical aspects of the environment. Here we review current investigation approaches, which rely on a lean design, with discrete actions and outcomes and little scope for varying environmental parameters and cognitive demands. We then introduce a multiagent reinforcement learning (MARL) approach, which builds on modern artificial intelligence techniques and provides new avenues to model complex social worlds while preserving more of their characteristics, allowing models to capture a variety of social phenomena. These techniques can be fed back to the laboratory, where they make it easier to design experiments in complex social situations without compromising their tractability for computational modeling. We showcase the potential of MARL by discussing several recent studies that have used it, detailing the way environmental settings and cognitive constraints can lead to the emergence of complex cooperation strategies. This novel approach can help researchers bring together insights from human cognition, sustainability, and AI to tackle real-world problems of social-ecological systems.
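A minimal MARL setup of the kind the abstract describes can be sketched as two independent Q-learners sharing a common-pool resource. The payoff matrix and learning parameters below are illustrative assumptions, far simpler than the environments used in the cited studies.

```python
import random

# Minimal sketch: two independent Q-learners in a stateless common-pool
# resource game. Payoffs are hypothetical: harvesting pays more unless both
# agents harvest at once and deplete the resource.

ACTIONS = ["restrain", "harvest"]

def payoff(a, b):
    """Return (reward_a, reward_b); joint harvesting depletes the resource."""
    if a == "harvest" and b == "harvest":
        return 0.0, 0.0  # depletion hurts everyone
    rewards = {"harvest": 3.0, "restrain": 1.0}
    return rewards[a], rewards[b]

def train(episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Independent Q-learning: each agent updates only its own action values."""
    random.seed(seed)
    q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
    for _ in range(episodes):
        acts = [random.choice(ACTIONS) if random.random() < eps
                else max(q[i], key=q[i].get) for i in range(2)]
        r = payoff(acts[0], acts[1])
        for i in range(2):
            q[i][acts[i]] += lr * (r[i] - q[i][acts[i]])
    return q

q = train()
```

Even this toy version exhibits the tension the abstract points to: each agent's best response depends on whether the other restrains, so the learned values reflect an emergent division of the shared resource rather than a fixed optimum.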