Title: Public and Private Affairs in Strategic Reasoning
Do agents know each other's strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce their strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. In this work, we study how it impacts strategic reasoning. To do so, we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call the “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics called the “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than the white-box semantics for an important fragment of Strategy Logic.
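To make the white-box/black-box contrast concrete, here is a schematic Strategy Logic formula (standard SL syntax; the agents a, b and the atom “safe” are illustrative, not taken from the paper):

```latex
% Read: there exists a strategy x such that, for all strategies y, if agent a
% follows x and agent b follows y, the resulting play always satisfies "safe".
\langle\!\langle x \rangle\!\rangle \,[\![\, y \,]\!]\, (a, x)(b, y)\, \mathbf{G}\, \mathit{safe}
```

Under the white-box semantics, the strategy bound to x is effectively broadcast, so play unfolds as if b could observe it; under the black-box semantics, the evaluation game hides x from the player resolving y, keeping a's strategy private.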
Award ID(s): 1704883
PAR ID: 10391913
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning
Page Range / eLocation ID: 132 to 140
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. We consider the problem of stopping a diffusion process with a payoff functional that renders the problem time-inconsistent. We study stopping decisions of naïve agents who reoptimize continuously in time, as well as equilibrium strategies of sophisticated agents who anticipate, but lack control over, their future selves' behaviors. When the state process is one-dimensional and the payoff functional satisfies some regularity conditions, we prove that any equilibrium can be obtained as a fixed point of an operator. This operator represents strategic reasoning that takes the future selves' behaviors into account. We then apply the general results to the case in which the agents distort probability and the diffusion process is a geometric Brownian motion. The problem is inherently time-inconsistent because the level of distortion of the same event changes over time. We show how strategic reasoning may turn a naïve agent into a sophisticated one. Moreover, we derive stopping strategies of the two types of agents for various parameter specifications of the problem, illustrating rich behaviors beyond extreme ones such as “never stopping” or “never starting.”
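As a hedged sketch of the kind of fixed-point characterization this describes (the distorted objective and the operator below follow a common formalization in the probability-distortion stopping literature; they are not necessarily the paper's exact definitions):

```latex
% X: the diffusion (e.g., geometric Brownian motion); u: payoff function;
% w: probability-distortion function; \tau_R: first entry time of region R.
J(x;\,R) \;=\; \int_0^{\infty} w\!\big(\mathbb{P}_x\big(u(X_{\tau_R}) > y\big)\big)\,dy,
\qquad
\Theta(R) \;=\; \{\, x : u(x) \,\ge\, J(x;\,R) \,\}.
```

An equilibrium stopping region R* is then a fixed point, R* = Θ(R*): stopping immediately is at least as good as continuing under the policy the future selves will follow, precisely on R* itself.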
2. Many established domains of AI rely on modeling the cognition of human and AI agents, collecting and representing knowledge at the human level, and maintaining decision-making processes that lead to physical action compatible with, and in cooperation with, humans. Especially in human-robot interaction, many AI and robotics technologies focus on human-robot cognitive modeling, from visual processing to symbolic reasoning and from reactive control to action recognition and learning, in support of cooperative task achievement between humans and multiple agents. However, the main challenge is efficiently combining human motivations and AI agents' purposes in a shared architecture and reaching consensus in complex environments and missions. To fill this gap, this workshop brings together researchers from different communities interested in multi-agent systems (MAS) and human-robot interaction (HRI) to explore potential approaches, future research directions, and application domains in human-multi-agent cognitive fusion.
3. Understanding spatial expressions and using them appropriately is necessary for seamless and natural human-machine interaction. However, capturing the semantics and appropriate usage of spatial prepositions is notoriously difficult because of their vagueness and polysemy. Although modern data-driven approaches are good at capturing statistical regularities in usage, they usually require substantial sample sizes, often do not generalize well to unseen instances, and, most importantly, their structure is essentially opaque to analysis, which makes diagnosing problems and understanding their reasoning process difficult. In this work, we discuss our attempt at modeling the spatial senses of prepositions in English using a combination of rule-based and statistical learning approaches. Each preposition model is implemented as a tree in which each node computes certain intuitive relations associated with the preposition, with the root computing the final value of the prepositional relation itself. The models operate on a set of artificial 3D “room world” environments, designed in Blender, taking the scene itself as input. We also discuss the annotation framework used to collect the human judgments employed in model training. Both our factored models and black-box baseline models perform quite well, but the factored models enable reasoned explanations of spatial relation judgments.
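A minimal sketch of that factored, tree-structured idea (the relations, features, and weights below are hypothetical and hand-set; the paper's models are trained from annotated “room world” scenes):

```python
import math
from dataclasses import dataclass

@dataclass
class Obj:
    # Object centroid in the 3D scene, plus the height of its top surface.
    x: float
    y: float
    z: float
    top_z: float

def above(figure: Obj, ground: Obj) -> float:
    """Leaf relation node: degree to which the figure sits higher than the ground's top."""
    return 1.0 / (1.0 + math.exp(-(figure.z - ground.top_z)))

def horizontally_aligned(figure: Obj, ground: Obj) -> float:
    """Leaf relation node: closeness of the two objects' horizontal projections."""
    d = math.hypot(figure.x - ground.x, figure.y - ground.y)
    return math.exp(-d)

def on(figure: Obj, ground: Obj) -> float:
    """Root node for 'on': combines child relations (weights would be learned)."""
    w_above, w_aligned = 0.6, 0.4  # hypothetical learned weights
    return w_above * above(figure, ground) + w_aligned * horizontally_aligned(figure, ground)

book = Obj(x=0.1, y=0.0, z=0.80, top_z=0.83)
table = Obj(x=0.0, y=0.0, z=0.40, top_z=0.75)
print(f"on(book, table) = {on(book, table):.2f}")  # interpretable score in [0, 1]
```

Because every node exposes an intuitive intermediate value, a judgment like on(book, table) can be explained by inspecting which child relations supported or opposed it; this is the kind of explainability the factored models offer over black-box baselines.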
4. The ability to determine whether a robot's grasp has a high chance of failing, before it actually does, can save significant time and avoid failures by planning to re-grasp or changing the strategy for that special case. Machine learning (ML) offers one way to learn to predict grasp failure from historical data consisting of a robot's attempted grasps alongside labels of success or failure. Unfortunately, most powerful ML models are black-box models that do not explain the reasons behind their predictions. In this paper, we investigate how ML can be used to predict robot grasp failure and study the tradeoff between accuracy and interpretability by comparing interpretable (white-box) ML models, which are inherently explainable, with more accurate black-box ML models, which are inherently opaque. Our results show that one does not necessarily have to compromise accuracy for interpretability if we use an explanation-generation method, such as SHapley Additive exPlanations (SHAP), to add explainability to the accurate predictions made by black-box models. An explanation of a predicted failure can guide an efficient choice of corrective action in the robot's design to avoid future failures.
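A minimal, self-contained sketch of that workflow (the data and feature names are synthetic and hypothetical; assumes the scikit-learn and shap packages are installed):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["grip_force", "contact_area", "object_slip", "wrist_torque"]  # hypothetical
X = rng.normal(size=(500, 4))
# Synthetic labels: failures driven mostly by low grip force and high object slip.
y = ((0.8 * X[:, 0] - 1.2 * X[:, 2] + rng.normal(scale=0.5, size=500)) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # accurate black-box model
print("test accuracy:", model.score(X_te, y_te))

# SHAP attributes each failure prediction to the features that drove it.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_te)
# Depending on the shap version, sv is a per-class list or a 3-D array.
sv_fail = sv[1] if isinstance(sv, list) else sv[..., 1]
for name, contrib in zip(feature_names, sv_fail[0]):
    print(f"{name}: {contrib:+.3f}")  # contribution toward predicting failure for grasp 0
```

Large positive contributions point at the features a designer might act on, e.g., raising grip force or reducing slip, which is the kind of corrective guidance the abstract describes.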
5. In this paper, we argue that, as HCI becomes more multimodal with the integration of gesture, gaze, posture, and other nonverbal behavior, it is important to understand the role played by affordances and their associated actions in human-object interactions (HOI), so as to facilitate reasoning in HCI and HRI environments. We outline the requirements and challenges involved in developing a multimodal semantics for human-computer and human-robot interactions. Unlike unimodal interactive agents (e.g., text-based chatbots or voice-based personal digital assistants), multimodal HCI and HRI inherently require a notion of embodiment: an understanding of the agent's placement within the environment and that of its interlocutor. We present a dynamic semantics of the language, VoxML, to model human-computer, human-robot, and human-human interactions by creating multimodal simulations of both the communicative content and the agents' common ground, and we show the utility of VoxML information, reified within the environment, for the computational understanding of objects in HOI.
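As a loose illustration of the kind of information such a semantics encodes, here is a Python stand-in for a VoxML-style object entry (the field names and values are illustrative, informally mirroring the entity/habitat/affordance layout described in the VoxML literature; this is not actual VoxML syntax):

```python
# Illustrative stand-in for a VoxML-style "voxeme": an object annotated with the
# habitats (canonical configurations) and affordances that license actions on it.
cup = {
    "lex": {"pred": "cup", "type": "physobj"},
    "type": {"concavity": "concave", "rotational_symmetry": ["Y"]},
    "habitat": {
        "intrinsic": {"up": "+Y"},       # canonical upright orientation
        "extrinsic": {"on": "surface"},  # typically rests on a supporting surface
    },
    "afford_str": [
        "H[up] -> grasp(agent, this)",               # graspable when upright
        "H[up] -> pour(agent, liquid, from(this))",  # pourable-from when upright
    ],
}

def afforded_actions(voxeme: dict, habitat: str) -> list:
    """Return the actions licensed by a given habitat of the object."""
    return [a for a in voxeme["afford_str"] if a.startswith(f"H[{habitat}]")]

print(afforded_actions(cup, "up"))
```

An embodied agent can treat such entries as common ground: it checks whether the current scene satisfies a habitat before simulating the afforded action on the object.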