
Title: Nova: Value-based Negotiation of Norms
Specifying a normative multiagent system (nMAS) is challenging, because different agents often have conflicting requirements. Whereas existing approaches can resolve clear-cut conflicts, tradeoffs might occur in practice among alternative nMAS specifications with no apparent resolution. To produce an nMAS specification that is acceptable to each agent, we model the specification process as a negotiation over a set of norms. We propose an agent-based negotiation framework, where agents’ requirements are represented as values (e.g., patient safety, privacy, and national security), and an agent revises the nMAS specification to promote its values by executing a set of norm revision rules that incorporate ontology-based reasoning. To demonstrate that our framework supports creating a transparent and accountable nMAS specification, we conduct an experiment with human participants who negotiate against our agent. Our findings show that our negotiation agent reaches better agreements (with small p-value and large effect size) faster than a baseline strategy. Moreover, participants perceive that our agent enables more collaborative and transparent negotiations than the baseline (with small p-value and large effect size in particular settings) toward reaching an agreement.
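As a rough illustration of the value-based scoring idea described in the abstract, the sketch below scores a candidate set of norms against an agent's weighted values and accepts a counteroffer only if the score does not decrease. The value weights, norm names, and their effects are entirely hypothetical, and the sketch omits the article's norm revision rules and ontology-based reasoning.

```python
# Toy sketch (hypothetical values and norms, not the article's protocol):
# score an nMAS specification -- here just a set of norms -- against an
# agent's weighted values, and accept a counteroffer only if the score
# does not decrease.
VALUE_WEIGHTS = {"patient_safety": 0.5, "privacy": 0.3, "national_security": 0.2}

# Hypothetical mapping from norms to the values they promote (+1) or demote (-1).
NORM_EFFECTS = {
    "share_records_with_er": {"patient_safety": +1, "privacy": -1},
    "encrypt_all_messages":  {"privacy": +1},
    "report_to_authorities": {"national_security": +1, "privacy": -1},
}

def score(norms, weights):
    """Weighted sum of value promotion/demotion over a candidate norm set."""
    return sum(weights[v] * effect
               for n in norms
               for v, effect in NORM_EFFECTS[n].items())

current = {"share_records_with_er"}
counteroffer = {"share_records_with_er", "encrypt_all_messages"}

if score(counteroffer, VALUE_WEIGHTS) >= score(current, VALUE_WEIGHTS):
    print("accept revision:", sorted(counteroffer))
else:
    print("reject; propose a revision that better promotes this agent's values")
```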
Award ID(s):
1908374
PAR ID:
10287342
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Intelligent Systems and Technology
Volume:
12
Issue:
4
ISSN:
2157-6904
Page Range / eLocation ID:
1 to 29
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This article studies the use of additive obfuscation signals to keep the reference values of the agents in the continuous-time Laplacian average consensus algorithm private from eavesdroppers. Obfuscation signals are perturbations that agents add to their local dynamics and their transmitted-out messages to conceal their private reference values. An eavesdropper is an agent inside or outside the network that has access to some subset of the interagent communication messages, and whose knowledge set also includes the network topology. Rather than focusing on zero-sum and vanishing additive signals, our work determines the necessary and sufficient conditions that define the set of admissible obfuscation signals, i.e., those that do not perturb the convergence point of the algorithm from the average of the reference values of the agents. Of theoretical interest, our results show that this class includes nonvanishing signals as well. Given this broader class of admissible obfuscation signals, we define a deterministic notion of privacy preservation. In this definition, privacy preservation for an agent means that neither its private reference value nor any finite set of values to which that reference value belongs can be obtained. Then, we evaluate the agents’ privacy against eavesdroppers with different knowledge sets.
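The sketch below illustrates the baseline class of signals this abstract contrasts with: zero-sum, vanishing obfuscation added to a discrete-time average consensus iteration. It is a simplified stand-in for the continuous-time setting analyzed in the article; the graph, weights, and signal shapes are hypothetical. Each agent's first transmitted state conceals its reference value, yet the network still converges to the true average.

```python
import numpy as np

# Illustrative sketch (not the article's construction): discrete-time average
# consensus on a 5-agent cycle, where each agent hides its reference value
# behind an additive obfuscation signal that is zero-sum over time and
# vanishing -- the baseline class of signals the abstract contrasts with.
rng = np.random.default_rng(0)
n, steps, rho = 5, 400, 0.9

# Doubly stochastic weight matrix W = I - 0.25 * L for a cycle graph.
L = 2 * np.eye(n)
for i in range(n):
    L[i, (i + 1) % n] = L[i, (i - 1) % n] = -1.0
W = np.eye(n) - 0.25 * L

r = rng.uniform(0, 10, n)          # private reference values
delta = rng.normal(0, 5, n)        # initial obfuscation offsets
x = r + delta                      # first transmitted states conceal r

for k in range(steps):
    d_k = -delta * (1 - rho) * rho**k   # vanishing signal; sums to -delta over time
    x = W @ x + d_k

print("true average of references:", r.mean())
print("final states:", x)               # all entries approach r.mean()
```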
  2. Prior research has highlighted users’ preferences for embodiment when interacting with virtual agents in augmented reality headsets. However, open questions remain regarding users’ preferences towards agent placement and gaze direction. In our study, we asked 48 adults to wear the Microsoft HoloLens 2 and find objects in a hidden object game with the help of embodied agents. We examined four distinct agent configurations for both male and female agents: a human-size agent standing beside participants, a human-size agent sitting beside participants, a small desk agent facing the screen, and a small desk agent facing the participant. Overall, participants preferred male over female virtual agents when receiving assistance, and no consistent preference emerged regarding the agents’ position or gaze direction. From our results, we build upon existing guidelines for designing better virtual agents for AR with headsets. 
  3. Multi-agent systems (MASs) consisting of a number of autonomous agents that communicate, coordinate, and jointly sense the environment to achieve complex missions can be found in a variety of applications such as robotics, smart cities, and internet-of-things applications. Modeling and monitoring MAS requirements to guarantee overall mission objectives, safety, and reliability is an important problem. Such requirements implicitly require reasoning about diverse sensing and communication modalities between agents, analysis of the dependencies between agent tasks, and the spatial or virtual distance between agents. To capture such rich MAS requirements, we model agent interactions via multiple directed graphs, and introduce a new logic, Spatio-Temporal Logic with Graph Operators (STL-GO). The key innovation in STL-GO is its graph operators, which enable us to reason about the number of agents along either the incoming or outgoing edges of the underlying interaction graph that satisfy a given property of interest; for example, the requirement that an agent should sense at least two neighboring agents whose task graphs indicate the ability to collaborate. We then propose novel distributed monitoring conditions for individual agents that use only local information to determine whether or not an STL-GO specification is satisfied. We compare the expressivity of STL-GO against existing spatio-temporal logic formalisms, and demonstrate the utility of STL-GO and our distributed monitors in a bike-sharing and a multi-drone case study.
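As a loose illustration of the counting idea behind the graph operators (not the formal STL-GO semantics), the sketch below checks whether an agent has at least two in-neighbors, along a hypothetical sensing graph, whose labels indicate the ability to collaborate.

```python
# Illustrative sketch of the counting idea behind graph operators (not the
# formal STL-GO semantics): check whether an agent has at least `k`
# in-neighbors, along a given interaction graph, that satisfy a predicate.

def count_satisfying_in_neighbors(edges, labels, agent, predicate):
    """Count in-neighbors j of `agent` (edges j -> agent) with predicate(labels[j])."""
    return sum(1 for (j, i) in edges if i == agent and predicate(labels[j]))

# Hypothetical sensing graph and per-agent capability labels.
sensing_edges = [("a2", "a1"), ("a3", "a1"), ("a4", "a1"), ("a1", "a2")]
can_collaborate = {"a1": True, "a2": True, "a3": False, "a4": True}

# Requirement: agent a1 should sense at least two neighbors able to collaborate.
ok = count_satisfying_in_neighbors(
    sensing_edges, can_collaborate, "a1", lambda c: c) >= 2
print("requirement satisfied for a1:", ok)   # True (a2 and a4 qualify)
```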
  4. In this paper, we study the problem of privacy preservation in the continuous-time Laplacian static average consensus algorithm using additive perturbation signals. We consider this problem over a strongly connected and weight-balanced digraph. Starting from a local reference value, in the static average consensus algorithm each agent continually communicates with its neighboring agents to update its local state and compute the average of the reference values across the network. Since every agent transmits its local reference value to its in-neighbors, the reference values of the agents are trivially disclosed. In this paper, we investigate the possibility of preserving the privacy of the agents’ reference values by adding admissible perturbation signals to the local dynamics and the transmitted-out signals of the agents. Admissible additive perturbation signals are those signals that do not perturb the final convergence point of the algorithm from the average of the reference values of the agents. Our results show that if an adversarial agent has access to the output of another agent and all the input signals transmitted to that agent, the adversary can discover the private reference value of that agent, regardless of the perturbation signals. Otherwise, the privacy of the agent can be preserved. We demonstrate our results through a numerical example.
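The sketch below illustrates what "admissible" means in this setting: a perturbation whose net time integral is zero leaves the convergence point at the average of the reference values, while one with a nonzero net integral shifts it. The directed cycle, step size, and signal shapes are hypothetical, and forward Euler stands in for the continuous-time dynamics.

```python
import numpy as np

# Sketch (hypothetical example, not the paper's construction): the static
# average consensus dynamics xdot = -L x + d(t) on a weight-balanced directed
# cycle, integrated with forward Euler. A perturbation with zero total time
# integral is admissible: the states still converge to the average of the
# reference values. A nonzero total integral shifts the convergence point.
n, dt, steps = 4, 0.01, 4000
L = np.eye(n)
for i in range(n):
    L[i, (i + 1) % n] = -1.0            # directed cycle; weight-balanced

r = np.array([3.0, -1.0, 6.0, 2.0])      # private reference values

def run(perturbation):
    x = r.copy()                          # agents start from their references
    for k in range(steps):
        x = x + dt * (-L @ x + perturbation(k * dt))
    return x

admissible = lambda t: 2.0 * (np.exp(-t) - 2.0 * np.exp(-2.0 * t)) * np.ones(n)  # integral 0
inadmissible = lambda t: 2.0 * np.exp(-t) * np.ones(n)                           # integral 2 per agent

print("average of references:", r.mean())                      # 2.5
print("with admissible perturbation:", run(admissible))        # entries near 2.5
print("with inadmissible perturbation:", run(inadmissible))    # entries shifted away from 2.5
```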
  5. Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people—socially situated learning—is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence—agents that seek out new information through social interactions with people—as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested in answering. Through an 8-month deployment in which our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments.
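A toy sketch of the reward signal described above: an epsilon-greedy bandit that learns which question types people tend to answer, treating "answered or ignored" as the reward from social interaction. The question types and answer rates are hypothetical; the deployed agent described in the abstract operates over natural language questions and images.

```python
import random

# Toy sketch (not the deployed system): an epsilon-greedy bandit that learns
# which question types people tend to answer, using "answered or ignored"
# as the social-interaction reward described in the abstract.
random.seed(0)
question_types = ["what_is_this", "where_was_this_taken", "why_posted"]
answer_rates = {"what_is_this": 0.6, "where_was_this_taken": 0.4, "why_posted": 0.1}  # hypothetical

values = {q: 0.0 for q in question_types}   # estimated answer rate per question type
counts = {q: 0 for q in question_types}

for step in range(2000):
    if random.random() < 0.1:                       # explore
        q = random.choice(question_types)
    else:                                           # exploit current estimate
        q = max(values, key=values.get)
    reward = 1.0 if random.random() < answer_rates[q] else 0.0
    counts[q] += 1
    values[q] += (reward - values[q]) / counts[q]   # incremental mean update

print(values)   # rough estimates of the underlying answer rates
```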