

Title: Too Many Cooks: Bayesian Inference for Coordinating Multi‐Agent Collaboration
Abstract

Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub‐tasks to work on in parallel. Underlying the human ability to collaborate is theory‐of‐mind (ToM), the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi‐agent learning mechanism with these abilities. Bayesian Delegation enables agents to rapidly infer the hidden intentions of others by inverse planning. We test Bayesian Delegation in a suite of multi‐agent Markov decision processes inspired by cooking problems. On these tasks, agents with Bayesian Delegation coordinate both their high‐level plans (e.g., what sub‐task they should work on) and their low‐level actions (e.g., avoiding getting in each other's way). When matched with partners that act using the same algorithm, Bayesian Delegation outperforms alternatives. Bayesian Delegation is also a capable ad hoc collaborator and successfully coordinates with other agent types even in the absence of prior experience. Finally, in a behavioral experiment, we show that Bayesian Delegation makes inferences similar to human observers about the intent of others. Together, these results argue for the centrality of ToM for successful decentralized multi‐agent collaboration.
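The core computation behind Bayesian Delegation is a repeated Bayesian update of a belief over sub-task allocations, where each allocation's likelihood is scored by inverse planning over the partners' observed actions. The following is a minimal, hypothetical Python sketch of that update step only; the sub-tasks, the toy likelihood function, and all names are illustrative assumptions rather than the paper's implementation or environment.

```python
import itertools

def bayesian_delegation_update(prior, likelihood_fn, observed_actions):
    """One Bayesian update of the belief over sub-task allocations.

    prior            : dict mapping each allocation (a tuple assigning one
                       sub-task to every agent) to its prior probability.
    likelihood_fn    : P(observed_actions | allocation); in the paper this
                       likelihood comes from inverse planning.
    observed_actions : the joint action observed at the current step.
    """
    posterior = {alloc: p * likelihood_fn(observed_actions, alloc)
                 for alloc, p in prior.items()}
    total = sum(posterior.values())
    if total == 0.0:
        return dict(prior)  # nothing explains the data; keep the prior
    return {alloc: p / total for alloc, p in posterior.items()}

# Toy illustration (hypothetical sub-tasks, not the paper's kitchens):
# two agents choosing between the sub-tasks "chop" and "plate".
allocations = list(itertools.product(["chop", "plate"], repeat=2))
prior = {alloc: 1.0 / len(allocations) for alloc in allocations}

def toy_likelihood(actions, allocation):
    # Stand-in for inverse planning: observed actions consistent with an
    # allocation are simply more probable under it.
    return 0.8 if actions == allocation else 0.2

posterior = bayesian_delegation_update(prior, toy_likelihood, ("chop", "plate"))
print(max(posterior, key=posterior.get))  # most probable allocation
```

In the paper the likelihood term is derived from (approximately) rational planning toward each candidate allocation; the toy version above only preserves the structure of the inference, not that planning model.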

 
NSF-PAR ID:
10220850
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Topics in Cognitive Science
Volume:
13
Issue:
2
ISSN:
1756-8757
Page Range / eLocation ID:
p. 414-432
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. “Theory of Mind” (ToM; people’s ability to infer and use information about others’ mental states) varies across cultures. In four studies (N = 881), including two preregistered replications, we show that social class predicts performance on ToM tasks. In Studies 1A and 1B, we provide new evidence for a relationship between social class and emotion perception: Higher-class individuals performed more poorly than their lower-class counterparts on the Reading the Mind in the Eyes Test, which has participants infer the emotional states of targets from images of their eyes. In Studies 2A and 2B, we provide the first evidence that social class predicts visual perspective taking: Higher-class individuals made more errors than lower-class individuals in the Director Task, which requires participants to assume the visual perspective of another person. Potential mechanisms linking social class to performance in different ToM domains, as well as implications for deficiency-centered perspectives on low social class, are discussed.
  2. Collaborative tasks often begin with partial task knowledge and incomplete initial plans from each partner. To complete these tasks, agents need to engage in situated communication with their partners and coordinate their partial plans towards a complete plan to achieve a joint task goal. While such collaboration seems effortless in a human-human team, it is highly challenging for human-AI collaboration. To address this limitation, this paper takes a step towards collaborative plan acquisition, where humans and agents strive to learn and communicate with each other to acquire a complete plan for joint tasks. Specifically, we formulate a novel problem in which agents predict the missing task knowledge for themselves and for their partners based on rich perceptual and dialogue history. We extend a situated dialogue benchmark for symmetric collaborative tasks in a 3D blocks world and investigate computational strategies for plan acquisition. Our empirical results suggest that predicting the partner’s missing knowledge is a more viable approach than predicting one’s own. We also show that explicitly modeling the partner’s dialogue moves and mental states produces better and more stable results than omitting them. These results point toward future AI agents that can predict what knowledge their partner is missing and proactively communicate it, helping both partners reach a common understanding of the joint task.
  3. Andreas Krause, Emma Brunskill (Eds.)
    Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, and it is potentially beneficial for cooperative multi-agent reinforcement learning (MARL) as well. However, the recent success of MARL relies heavily on the convenient paradigm of purely decentralized execution, where, for scalability, there is no action correlation among agents. In this work, we introduce a Bayesian network to establish correlations between agents’ action selections in their joint policy. Theoretically, we justify why such action dependencies are beneficial by deriving the multi-agent policy gradient formula under a Bayesian-network joint policy and proving its global convergence to Nash equilibria under tabular softmax policy parameterization in cooperative Markov games. Further, by equipping existing MARL algorithms with a recent method for learning differentiable directed acyclic graphs (DAGs), we develop practical algorithms that learn context-aware Bayesian network policies in scenarios with partial observability and varying difficulty. We also dynamically decrease the sparsity of the learned DAG throughout training, which leads to weakly or even purely independent policies for decentralized execution. Empirical results on a range of MARL benchmarks show the benefits of our approach. A minimal sketch of sampling a joint action from such a DAG-factored policy appears after this list.
  4. In many real-world multiagent systems, agents must learn diverse tasks and coordinate with other agents. This paper introduces a method that allows heterogeneous agents to specialize and learn only the complementary, divergent behaviors needed for coordination in a shared environment. We use a hierarchical decomposition of diversity search and fitness optimization to let agents speciate and learn diverse temporally extended actions. Within an agent population, diversity across niches is favored, and agents within a niche compete to optimize the higher-level coordination task. Experimental results in a multiagent rover exploration task demonstrate the diversity of acquired agent behavior and how it promotes coordination.
    Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposing a centralized shield, allowing shields to be used in decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields on several tasks, enabling shielding in environments with decentralized training and execution for the first time. A minimal sketch of shield-style action masking appears after this list.
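For item 3 above, the key mechanism is a joint policy factored along a directed acyclic graph, so that an agent's action distribution can condition on the actions its parents have already sampled. The sketch below is a hypothetical illustration of that sampling step only; the toy logits functions, agent ordering, and names are assumptions, not the paper's learned, context-aware DAG policy.

```python
import numpy as np

def sample_correlated_joint_action(logits_fns, dag_parents, obs, rng):
    """Sample a joint action factored along a DAG: agent i's action
    distribution may condition on the actions its parents already sampled.

    logits_fns  : logits_fns[i](obs_i, parent_actions) -> unnormalized
                  log-probabilities over agent i's discrete actions.
    dag_parents : dag_parents[i] lists agent i's parents; agents are
                  assumed to be indexed in a topological order of the DAG.
    """
    actions = {}
    for i, logits_fn in enumerate(logits_fns):
        parent_actions = tuple(actions[j] for j in dag_parents[i])
        logits = logits_fn(obs[i], parent_actions)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        actions[i] = int(rng.choice(len(probs), p=probs))
    return actions

# Toy illustration: agent 0 acts uniformly over 3 actions; agent 1 tends
# to mirror agent 0's sampled action (a hypothetical coordination rule).
rng = np.random.default_rng(0)
logits_fns = [
    lambda obs_i, parents: np.zeros(3),
    lambda obs_i, parents: 2.0 * np.eye(3)[parents[0]],
]
print(sample_correlated_joint_action(logits_fns, [[], [0]], [None, None], rng))
```

If the DAG has no edges, this reduces to the usual fully decentralized, independent action sampling, which is the limiting case the abstract reaches by sparsifying the graph.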
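For item 5, a shield restricts an agent to actions it certifies as safe. The sketch below shows only the generic action-masking idea for a single agent under an assumed Boolean safety mask; it does not reproduce the abstract's decomposition of a centralized shield into decentralized ones.

```python
import numpy as np

def shielded_action(q_values, safe_mask):
    """Greedy action selection restricted to actions the shield allows.

    q_values  : the agent's action values for its current local state.
    safe_mask : Boolean vector from the agent's shield; True marks actions
                the shield certifies as safe without any communication.
    """
    if not safe_mask.any():
        # No action is certified; fall back to the unshielded choice
        # (a real shield is usually constructed so this cannot happen).
        return int(np.argmax(q_values))
    masked = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(masked))

# Toy use: the shield forbids action 0 even though it has the highest value.
print(shielded_action(np.array([5.0, 1.0, 3.0]),
                      np.array([False, True, True])))  # -> 2
```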