This article presents a new decentralized multi-agent information-theoretic (DeMAIT) control algorithm for mobile sensors (agents). The algorithm leverages Bayesian estimation and information-theoretic motion planning for efficient and effective estimation and localization of a target, such as a chemical gas leak. The algorithm consists of: (1) a non-parametric Bayesian estimator, (2) an information-theoretic trajectory planner that generates “informative trajectories” for agents to follow, and (3) a controller and collision avoidance algorithm to ensure that each agent follows its trajectory as closely as possible in a safe manner. Advances include the use of a new information-gain metric and its analytical gradient, which, unlike prior information metrics, do not depend on an infinite series. Dynamic programming and multi-threading techniques are applied to efficiently compute the mutual information used to minimize measurement uncertainty. The estimation and motion planning processes also take into account the dynamics of the sensors and agents. Extensive simulations are conducted to compare the performance of the DeMAIT algorithm with a traditional raster-scanning method and a clustering method with coordination. The main hypothesis, that the DeMAIT algorithm outperforms the other two methods, is validated: the average localization success rate of the DeMAIT algorithm is (a) higher and (b) more robust to changes in the source location, robot team size, and search area size than that of the raster-scanning and clustering methods. Finally, outdoor field experiments are conducted using a team of custom-built aerial robots equipped with gas concentration sensors to demonstrate the efficacy of the DeMAIT algorithm in estimating and finding the source of a propane gas leak.
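The abstract above describes the general recipe of coupling a non-parametric (particle-based) Bayesian estimator with an information-gain criterion for motion planning. The sketch below illustrates that idea only in broad strokes; it is not the paper's DeMAIT metric or its analytical gradient, and the particle belief, the inverse-square concentration model, and the Monte Carlo entropy estimate are all illustrative assumptions.

```python
import numpy as np

def expected_information_gain(particles, weights, waypoint, noise_std=0.1, n_samples=50):
    """Expected entropy reduction of a particle belief over the source location
    if a concentration measurement were taken at `waypoint` (all models hypothetical)."""
    # Hypothetical forward model: concentration decays with distance to the source.
    dists = np.linalg.norm(particles - waypoint, axis=1)
    predicted = 1.0 / (1.0 + dists**2)

    # Entropy of the current (discrete, particle-based) belief.
    h_prior = -np.sum(weights * np.log(weights + 1e-12))

    # Monte Carlo estimate of the expected posterior entropy over simulated readings.
    rng = np.random.default_rng(0)
    h_post = 0.0
    for _ in range(n_samples):
        z = rng.normal(predicted @ weights, noise_std)            # simulated measurement
        lik = np.exp(-0.5 * ((z - predicted) / noise_std) ** 2)   # per-particle likelihood
        w_post = weights * lik
        w_post /= w_post.sum() + 1e-12
        h_post += -np.sum(w_post * np.log(w_post + 1e-12)) / n_samples
    return h_prior - h_post

def best_waypoint(particles, weights, candidates):
    """Greedy single-step planner: pick the candidate waypoint with the largest gain."""
    gains = [expected_information_gain(particles, weights, c) for c in candidates]
    return candidates[int(np.argmax(gains))]
```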
Contingency Games for Multi-Agent Interaction
Contingency planning, wherein an agent generates a set of possible plans conditioned on the outcome of an uncertain event, is an increasingly popular way for robots to act under uncertainty. In this work we take a game-theoretic perspective on contingency planning, tailored to multi-agent scenarios in which a robot’s actions impact the decisions of other agents and vice versa. The resulting contingency game allows the robot to efficiently interact with other agents by generating strategic motion plans conditioned on multiple possible intents for other actors in the scene. Contingency games are parameterized via a scalar variable which represents a future time when intent uncertainty will be resolved. By estimating this parameter online, we construct a game-theoretic motion planner that adapts to changing beliefs while anticipating future certainty. We show that existing variants of game-theoretic planning under uncertainty are readily obtained as special cases of contingency games. Through a series of simulated autonomous driving scenarios, we demonstrate that contingency games close the gap between certainty-equivalent games that commit to a single hypothesis and non-contingent multi-hypothesis games that do not account for future uncertainty reduction.
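A minimal sketch of the contingency-plan structure described above, not the authors' game solver: a single shared "trunk" of controls up to the branch time, one branch per hypothesized intent afterward, and a cost that averages the branches under the current belief. The class and function names and the `rollout_cost` interface are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict
import numpy as np

@dataclass
class ContingencyPlan:
    t_b: int                          # branch time: when intent uncertainty is assumed resolved
    trunk: np.ndarray                 # controls shared by all hypotheses, shape (t_b, m)
    branches: Dict[str, np.ndarray]   # per-hypothesis controls after t_b, shape (T - t_b, m)

def expected_cost(plan: ContingencyPlan,
                  belief: Dict[str, float],
                  rollout_cost: Callable[[np.ndarray, np.ndarray, str], float]) -> float:
    """Belief-weighted cost: the trunk is executed regardless of which intent is true,
    then the branch matching the realized intent is executed."""
    return sum(p * rollout_cost(plan.trunk, plan.branches[h], h)
               for h, p in belief.items())

# Roughly, the special cases noted in the abstract:
#   - a point-mass belief recovers a certainty-equivalent plan committed to one hypothesis;
#   - t_b equal to the full horizon recovers a non-contingent plan that never branches.
```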
- Award ID(s): 2211548
- PAR ID: 10511428
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Robotics and Automation Letters
- Volume: 9
- Issue: 3
- ISSN: 2377-3774
- Page Range / eLocation ID: 2208 to 2215
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Digital games featuring programmable agents are popular tools for teaching coding and computational thinking skills. However, today's games perpetuate an arguably obsolete relationship between programmable agents and human operators. Borrowing from the field of human-robotics interaction, we argue that collaborative robots, or cobots, are a better model for thinking about computational agents, working directly with humans rather than in place of or at arm's length from them. In this paper, we describe an initial design inquiry into the design of “cobot games”, programmable agent scenarios in which players program an in-game ally to assist them in accomplishing gameplay objectives. We detail three questions that emerged out of this exploration, our present thinking on them, and plans for deepening inquiry into cobot game design moving forward.
- Effectively predicting intent and behavior requires inferring leadership in multi-agent interactions. Dynamic games provide an expressive theoretical framework for modeling these interactions. Employing this framework, we propose a novel method to infer the leader in a two-agent game by observing the agents’ behavior in complex, long-horizon interactions. We make two contributions. First, we introduce an iterative algorithm that solves dynamic two-agent Stackelberg games with nonlinear dynamics and nonquadratic costs, and demonstrate that it consistently converges in repeated trials. Second, we propose the Stackelberg Leadership Filter (SLF), an online method for identifying the leading agent in interactive scenarios based on observations of the game interactions. We validate the leadership filter’s efficacy on simulated driving scenarios to demonstrate that the SLF can draw conclusions about leadership that match right-of-way expectations. (A hedged sketch of the leadership belief update appears after this list.)
- We provide a polynomial-time, scalable algorithm for equilibrium computation in multi-agent influence games on networks, extending work of Bindel, Kleinberg, and Oren (2015) from the single-agent to the multi-agent setting. In games of influence, agents have a limited advertising budget to influence the initial predisposition of nodes in some network towards their products, but the eventual decisions of the nodes are determined by the stationary state of DeGroot opinion dynamics on the network, which takes over after the seeding (Ahmadinejad et al. 2014, 2015). In multi-agent systems, how should agents spend their budgets to seed the network to maximize their utility in anticipation of other advertising agents and the network dynamics? We show that Nash equilibria of this game are pure and (under weak assumptions) unique, and can be computed in polynomial time; we test our model by computing equilibria using mirror descent for the two-agent case on random graphs. (A sketch of the DeGroot dynamics and a mirror-descent step appears after this list.)
- In a group anagram game, players are provided letters to form as many words as possible. They can also request letters from their neighbors and reply to letter requests. Currently, a single agent-based model is produced from all experimental data, with dependence only on the number of neighbors. In this work, we build, exercise, and evaluate enhanced agent behavior models for networked group anagram games under an uncertainty quantification framework. Specifically, we cluster game data for players based on their skill levels (forming words, requesting letters, and replying to requests), perform multinomial logistic regression for transition probabilities, and quantify uncertainty within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. We conduct simulations of ego agents with neighbors to demonstrate the efficacy of our proposed methods. (A sketch of the cluster-then-regress step appears after this list.)
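For the Stackelberg leadership item above, a minimal sketch of the underlying idea, not the authors' SLF: maintain a posterior over which agent leads by comparing observed states against predictions from a rollout under each leadership hypothesis. The Gaussian residual model, noise level, and `predict` interface are assumptions.

```python
import numpy as np

def update_leadership_belief(belief, observation, predict, noise_std=0.5):
    """belief: {leader_id: prob}; predict(leader_id) -> predicted next joint state."""
    posterior = {}
    for leader, prior in belief.items():
        err = np.linalg.norm(observation - predict(leader))
        likelihood = np.exp(-0.5 * (err / noise_std) ** 2)  # assumed Gaussian residual model
        posterior[leader] = prior * likelihood
    total = sum(posterior.values()) or 1.0                  # guard against all-zero likelihoods
    return {k: v / total for k, v in posterior.items()}
```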
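For the network influence-game item above, a minimal sketch of the two named ingredients, not the paper's equilibrium algorithm: iterating DeGroot opinion dynamics toward a stationary state from seeded initial opinions, and an entropic mirror-descent (exponentiated-gradient) step that keeps an agent's budget allocation on the simplex. Step size, tolerances, and the gradient input are illustrative.

```python
import numpy as np

def degroot_stationary(W, x0, iters=1000, tol=1e-10):
    """Iterate x <- W @ x from seeded opinions x0 (W row-stochastic) until convergence."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        x_next = W @ x
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

def mirror_descent_step(alloc, grad, step=0.1):
    """One entropic mirror-descent update of a budget allocation on the simplex."""
    logits = np.log(alloc + 1e-12) - step * grad
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()
```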
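For the group anagram item above, a minimal sketch of the cluster-then-regress modeling step, with synthetic data standing in for the real game logs: players are clustered by skill rates, and a multinomial logistic regression is fit per cluster to estimate action-transition probabilities. Feature choices, cluster count, and array shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
skill = rng.random((60, 3))          # per-player rates: form words, request letters, reply
X = rng.random((600, 4))             # per-step game features (synthetic placeholders)
y = rng.integers(0, 3, 600)          # next action: 0=form word, 1=request, 2=reply
player = rng.integers(0, 60, 600)    # which player produced each step

# Cluster players by skill level.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skill)

# Fit one multinomial logistic regression per skill cluster (lbfgs solver handles multi-class).
models = {}
for c in range(3):
    in_cluster = np.isin(player, np.where(clusters == c)[0])
    models[c] = LogisticRegression(max_iter=500).fit(X[in_cluster], y[in_cluster])

# Action-transition probabilities for a new game state, by skill cluster.
transition_probs = {c: m.predict_proba(X[:1])[0] for c, m in models.items()}
```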