We address the problem of learning the legitimacy of other agents in a multiagent network when an unknown subset of them is malicious. We derive results specifically for directed graphs in which stochastic side information, in the form of observations of trust, is available. We refer to this as "learning trust," since agents must identify which neighbors in the network are reliable, and we derive a protocol to achieve this. We also provide analytical results showing that under this protocol (i) agents can learn the legitimacy of all other agents almost surely, and (ii) the opinions of the agents converge in mean to the true legitimacy of all other agents in the network. Lastly, we provide numerical studies showing that our convergence results hold in practice for various network topologies and for varying numbers of malicious agents in the network.
Keywords: multiagent systems, adversarial learning, directed graphs, networked systems
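As a rough illustration of the idea (a minimal sketch, not the protocol derived in the paper), the following Python snippet shows one agent averaging noisy, binary trust observations of its in-neighbors on a directed graph and thresholding the running mean; by the law of large numbers, each running mean converges to the neighbor's true legitimacy. The observation model and threshold are assumptions made for illustration.

```python
import random

# Hypothetical sketch: an agent keeps a running mean of noisy trust
# observations for each in-neighbor and thresholds it to form an opinion.
# The observation model (correct with probability p_correct) is assumed
# for illustration and is not the paper's model.

def observe_trust(is_legitimate, p_correct=0.7):
    """Noisy binary trust observation: correct with probability p_correct."""
    truth = 1.0 if is_legitimate else 0.0
    return truth if random.random() < p_correct else 1.0 - truth

def learn_trust(in_neighbors, num_rounds=500, threshold=0.5):
    """Return an opinion (True = legitimate) for each in-neighbor."""
    running_mean = {j: 0.0 for j in in_neighbors}
    for t in range(1, num_rounds + 1):
        for j, legit in in_neighbors.items():
            obs = observe_trust(legit)
            # Incremental update of the average of all observations so far.
            running_mean[j] += (obs - running_mean[j]) / t
    return {j: m > threshold for j, m in running_mean.items()}

if __name__ == "__main__":
    # In-neighbors of one agent on a directed graph; True = legitimate.
    print(learn_trust({"a": True, "b": False, "c": True}))
```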
Consent as a Foundation for Responsible Autonomy
This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users.
The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how understanding consent can help achieve responsible autonomy. Second, it outlines challenges for AI (in particular, for agents and multiagent systems) that merit investigation to form a basis for modeling consent in multiagent systems and for applying consent to achieve responsible autonomy.
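To make the modeling challenge concrete, here is a deliberately small sketch of how consent might be represented as a first-class, revocable artifact between agents. The fields and method are illustrative assumptions, not the paper's formalization.

```python
from dataclasses import dataclass

# Illustrative sketch only: consent as an explicit, revocable grant with a
# bounded scope. The field names are assumptions, not the paper's model.

@dataclass
class Consent:
    grantor: str           # user (via their agent) who gives consent
    grantee: str           # agent whose actions the consent covers
    scope: frozenset       # set of actions the consent covers
    revoked: bool = False  # consent can be withdrawn at any time

    def permits(self, action: str) -> bool:
        """An action is permitted only while consent stands and is in scope."""
        return (not self.revoked) and action in self.scope

if __name__ == "__main__":
    c = Consent("alice", "scheduler_agent", frozenset({"share_calendar"}))
    print(c.permits("share_calendar"))  # True
    c.revoked = True
    print(c.permits("share_calendar"))  # False: acting now would be a misuse
```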
- Award ID(s): 2116751
- PAR ID: 10356633
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
- Volume: 36
- Issue: 11
- ISSN: 2159-5399
- Page Range / eLocation ID: 12301 to 12306
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This paper analyzes the consensus problem in heterogeneous nonlinear multiagent systems. The multiagent systems not only have nonidentical nonlinear dynamics for all agents but also different network topologies for position and velocity interactions. An asynchronous sampled-data control without input delays is first proposed, in which each agent's information is sampled only at its own sampling instants and need not be sampled at the sampling instants of other agents. Quasi-consensus in heterogeneous multiagent systems is then proved by Lyapunov stability theory. When the asynchronous sampled-data control has nonuniform input delays, sufficient conditions for quasi-consensus in heterogeneous multiagent systems are further obtained, and the upper bound of the quasi-consensus errors is estimated. Finally, numerical simulations are provided to verify the effectiveness of the theoretical results.
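The asynchronous-sampling idea can be illustrated with a toy simulation, simplified to single-integrator agents with linear coupling (the paper itself treats heterogeneous nonlinear dynamics, separate position and velocity topologies, and proves quasi-consensus via Lyapunov theory). Each agent below re-samples its state and recomputes its piecewise-constant control only at its own sampling instants.

```python
import numpy as np

# Toy sketch of asynchronous sampled-data consensus with single-integrator
# agents; a simplification of the heterogeneous nonlinear setting in the paper.

def simulate(adjacency, x0, periods, dt=0.001, t_end=10.0, gain=1.0):
    n = len(x0)
    x = np.array(x0, dtype=float)
    held = x.copy()           # each agent's most recently sampled state
    u = np.zeros(n)           # piecewise-constant control inputs
    for k in range(int(t_end / dt)):
        for i in range(n):
            # Agent i samples and updates its control only at multiples of
            # its own sampling period; no sampling at other agents' instants.
            if k % round(periods[i] / dt) == 0:
                held[i] = x[i]
                u[i] = gain * sum(adjacency[i][j] * (held[j] - held[i])
                                  for j in range(n))
        x += dt * u           # integrator dynamics: x_i' = u_i
    return x

if __name__ == "__main__":
    A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path-graph interaction topology
    print(simulate(A, x0=[0.0, 2.0, 5.0], periods=[0.05, 0.08, 0.11]))
```

In the paper's setting, heterogeneous nonlinear dynamics and input delays lead to quasi-consensus, that is, agreement up to a bounded error, rather than the exact agreement this idealized toy can approach.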
Agmon, N.; An, B.; Ricci, A.; Yeoh, W. (Eds.) In multiagent systems that require coordination, agents must learn diverse policies that enable them to achieve their individual and team objectives. Multiagent Quality-Diversity methods partially address this problem by filtering the joint space of policies to smaller sub-spaces that make the diversification of agent policies tractable. However, in teams of asymmetric agents (agents with different objectives and capabilities), the search for diversity is primarily driven by the need to find policies that allow agents to assume the complementary roles required to work together in teams. This work introduces the Asymmetric Island Model (AIM), a multiagent framework that enables populations of asymmetric agents to learn diverse complementary policies that foster teamwork via dynamic population size allocation on a wide variety of team tasks. The key insight of AIM is that the competitive pressure arising from the distribution of policies on different team-wide tasks drives the agents to explore regions of the policy space that yield specializations that generalize across tasks. Simulation results on multiple variations of a remote habitat problem highlight the strength of AIM in discovering robust synergies that allow agents to operate near-optimally in response to the changing team composition and policies of other agents.
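A much-simplified sketch of the island-model mechanism follows: each island evolves policies for one team task, and population slots are periodically reallocated toward islands whose best fitness lags, creating the competitive pressure described above. The scalar "policies", the fitness function, and the allocation rule are stand-ins for illustration, not AIM's actual components.

```python
import random

# Simplified island-model sketch with dynamic population size allocation.
# Policies are scalars and fitness is synthetic; this illustrates the
# mechanism only and is not the AIM algorithm.

def evaluate(policy, task_difficulty):
    """Stand-in fitness: higher policy values help, harder tasks score lower."""
    return policy - task_difficulty + random.gauss(0, 0.1)

def evolve_islands(task_difficulties, total_pop=60, generations=50):
    n = len(task_difficulties)
    sizes = [total_pop // n] * n
    islands = [[random.random() for _ in range(sizes[k])] for k in range(n)]
    for _ in range(generations):
        best = []
        for k, pop in enumerate(islands):
            ranked = sorted(pop, key=lambda p: evaluate(p, task_difficulties[k]),
                            reverse=True)
            parents = ranked[:max(2, len(ranked) // 2)]
            # Truncation selection plus Gaussian mutation of the survivors.
            islands[k] = [p + random.gauss(0, 0.05)
                          for p in random.choices(parents, k=sizes[k])]
            best.append(evaluate(parents[0], task_difficulties[k]))
        # Dynamic allocation: islands lagging behind the best one get more slots.
        gaps = [max(1e-3, max(best) - b) for b in best]
        sizes = [max(4, int(total_pop * g / sum(gaps))) for g in gaps]
    return best

if __name__ == "__main__":
    print(evolve_islands(task_difficulties=[0.2, 0.5, 0.8]))
```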
We model a multiagent system (MAS) in socio-technical terms, combining a social layer consisting of norms with a technical layer consisting of actions that the agents execute. This approach emphasizes autonomy and makes assumptions about both the social and technical layers explicit. Autonomy means that agents may violate norms. In our approach, agents are computational entities, each representing a different stakeholder. We express stakeholder requirements of the form that a MAS is resilient in that it can recover (sufficiently) from a failure within a (sufficiently short) duration. We present ReNo, a framework that computes probabilistic and temporal guarantees on whether the underlying requirements are met or, if failed, recovered from. ReNo supports the refinement of the specification of a socio-technical system through methodological guidelines to meet the stated requirements. An important contribution of ReNo is that it shows how the social and technical layers can be modeled jointly to enable the construction of resilient systems of autonomous agents. We demonstrate ReNo using a manufacturing scenario with competing public, industrial, and environmental requirements.
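The kind of probabilistic, temporal guarantee mentioned here can be illustrated with a toy Monte Carlo estimate of a resilience requirement such as "after a failure, the system recovers within five steps with sufficiently high probability". The failure and recovery probabilities below are invented parameters, and this is not how ReNo itself computes its guarantees.

```python
import random

# Toy Monte Carlo estimate of a resilience requirement of the form
# "given a failure, recovery occurs within max_delay steps with high
# probability". Parameters are invented; this is not ReNo's computation.

def simulate_trace(p_fail=0.05, p_recover=0.4, horizon=200):
    """Return (failed, recovery_delay) for one simulated run."""
    for t in range(horizon):
        if random.random() < p_fail:           # a failure (e.g., norm violation)
            for d in range(1, horizon - t):
                if random.random() < p_recover:
                    return True, d             # recovered d steps after failing
            return True, None                  # never recovered within the horizon
    return False, None                         # no failure in this run

def resilience_estimate(max_delay=5, runs=10_000):
    failures = recovered_in_time = 0
    for _ in range(runs):
        failed, delay = simulate_trace()
        if failed:
            failures += 1
            if delay is not None and delay <= max_delay:
                recovered_in_time += 1
    return recovered_in_time / max(1, failures)

if __name__ == "__main__":
    print(f"P(recover within 5 steps | failure) ~ {resilience_estimate():.3f}")
```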
In many real-world multiagent systems, agents must learn diverse tasks and coordinate with other agents. This paper introduces a method that allows heterogeneous agents to specialize and learn only the complementary, divergent behaviors needed for coordination in a shared environment. We use a hierarchical decomposition of diversity search and fitness optimization to allow agents to speciate and learn diverse temporally extended actions. Within an agent population, diversity across niches is favored, while agents within a niche compete on the higher-level coordination task. Experimental results on a multiagent rover exploration task demonstrate the diversity of the acquired agent behaviors, which promotes coordination.
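The hierarchical decomposition can be sketched as a two-level loop: candidates are first grouped into behavioral niches (so diversity across niches is preserved), and only within a niche do they compete on task fitness. The scalar policies, descriptor, and fitness below are stand-ins; the paper's setting is a multiagent rover exploration task.

```python
import math
import random

# Two-level sketch: niche by behavior descriptor (diversity), then select by
# fitness within each niche (optimization). All functions here are stand-ins.

def behavior_descriptor(policy):
    """Coarse descriptor: which of four behavioral niches the policy falls in."""
    return min(3, int(policy * 4))

def team_fitness(policy):
    """Synthetic fitness with several local optima."""
    return math.sin(6.28 * policy) + policy

def evolve(pop_size=40, generations=100):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        niches = {}
        for p in population:
            niches.setdefault(behavior_descriptor(p), []).append(p)
        # Every niche keeps a share of offspring slots (diversity is favored);
        # within a niche, only the fitter half reproduces (fitness competition).
        per_niche = max(1, pop_size // len(niches))
        population = []
        for members in niches.values():
            members.sort(key=team_fitness, reverse=True)
            survivors = members[:max(1, len(members) // 2)]
            for _ in range(per_niche):
                parent = random.choice(survivors)
                population.append(min(1.0, max(0.0, parent + random.gauss(0, 0.05))))
    best = max(population, key=team_fitness)
    return round(best, 3), behavior_descriptor(best)

if __name__ == "__main__":
    print(evolve())
```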