Title: Public and Private Affairs in Strategic Reasoning

Do agents know each other’s strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce their strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. In this work, we study how it impacts strategic reasoning. To do so, we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics called “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than the white-box semantics for an important fragment of Strategy Logic.
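As a minimal illustration (our own example, not taken from the paper), Strategy Logic treats strategies as first-class objects: quantifiers range over strategies, and binding operators assign them to agents. The formula

    \exists x\, \forall y\; (a,x)(b,y)\; \mathbf{F}\, \mathit{goal}

says that agent a has a strategy x such that, against every strategy y of agent b, the resulting play eventually reaches goal. The two semantics differ in how such formulas are evaluated: under the white-box semantics, strategies already chosen are visible to subsequent choices, whereas the black-box semantics turns evaluation into a game of imperfect information in which they remain hidden.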

Award ID(s):
1704883
NSF-PAR ID:
10391913
Date Published:
Journal Name:
Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning
Page Range / eLocation ID:
132 to 140
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the problem of stopping a diffusion process with a payoff functional that renders the problem time-inconsistent. We study stopping decisions of naïve agents who reoptimize continuously in time, as well as equilibrium strategies of sophisticated agents who anticipate but lack control over their future selves' behaviors. When the state process is one-dimensional and the payoff functional satisfies some regularity conditions, we prove that any equilibrium can be obtained as a fixed point of an operator. This operator represents strategic reasoning that takes the future selves' behaviors into account. We then apply the general results to the case when the agents distort probability and the diffusion process is a geometric Brownian motion. The problem is inherently time-inconsistent because the level of distortion of the same event changes over time. We show how the strategic reasoning may turn a naïve agent into a sophisticated one. Moreover, we derive stopping strategies of the two types of agents for various parameter specifications of the problem, illustrating rich behaviors beyond extreme ones such as “never stopping” or “never starting.”
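     Schematically (the notation below is ours, not the paper's), the operator maps a candidate stopping region R to the set of states where stopping immediately is at least as good as waiting until the process first enters R; an equilibrium is any region that the operator leaves fixed:

         \Theta(R) \;=\; \{\, x : u(x) \ge J(x;\, \tau_R) \,\}, \qquad R^{*} = \Theta(R^{*}),

     where u(x) is the payoff of stopping immediately at state x, \tau_R is the first entry time of R, and J(x;\, \tau_R) is the (possibly probability-distorted) value of stopping at \tau_R when starting from x. Iterating \Theta from the naïve agent's stopping region is the sense in which strategic reasoning can turn a naïve agent into a sophisticated one.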

  2. In this paper, we argue that, as HCI becomes more multimodal with the integration of gesture, gaze, posture, and other nonverbal behavior, it is important to understand the role played by affordances and their associated actions in human-object interactions (HOI), so as to facilitate reasoning in HCI and HRI environments. We outline the requirements and challenges involved in developing a multimodal semantics for human-computer and human-robot interactions. Unlike unimodal interactive agents (e.g., text-based chatbots or voice-based personal digital assistants), multimodal HCI and HRI inherently require a notion of embodiment, or an understanding of the agent's placement within the environment and that of its interlocutor. We present a dynamic semantics of the language VoxML to model human-computer, human-robot, and human-human interactions by creating multimodal simulations of both the communicative content and the agents' common ground, and we show the utility of VoxML information, reified within the environment, for the computational understanding of objects in HOI.
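     As a purely illustrative sketch (a Python stand-in with invented names, not actual VoxML syntax), an object entry in such a semantics pairs a lexical item with geometry, concavity, and habitat-gated affordances, linking an object's placement in the environment to the actions it makes available:

         from dataclasses import dataclass, field

         @dataclass
         class Affordance:
             # An action the object affords, gated by a habitat (a placement
             # condition under which the action is available). Notation invented.
             habitat: str    # e.g. "upright(this)"
             action: str     # e.g. "put(x, in(this))"
             result: str     # e.g. "contain(this, x)"

         @dataclass
         class Voxeme:
             # Hypothetical VoxML-style object entry: lexical item plus the
             # embodied information (geometry, concavity, affordances).
             lex: str
             geometry: str                  # rough 3D shape class
             concave: bool                  # concavity distinguishes "in" from "on"
             affordances: list = field(default_factory=list)

         # A cup affords containment only while upright; the habitat encodes that
         # link between placement in the environment and available actions.
         cup = Voxeme(
             lex="cup",
             geometry="cylindroid",
             concave=True,
             affordances=[
                 Affordance("upright(this)", "put(x, in(this))", "contain(this, x)"),
                 Affordance("reachable(agent, this)", "grasp(agent, this)", "hold(agent, this)"),
             ],
         )
         print(f"{cup.lex}: {len(cup.affordances)} affordances, concave={cup.concave}")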
  3. Understanding spatial expressions and using them appropriately is necessary for seamless and natural human-machine interaction. However, capturing the semantics and appropriate usage of spatial prepositions is notoriously difficult, because of their vagueness and polysemy. Although modern data-driven approaches are good at capturing statistical regularities in usage, they usually require substantial sample sizes, often do not generalize well to unseen instances and, most importantly, their structure is essentially opaque to analysis, which makes diagnosing problems and understanding their reasoning process difficult. In this work, we discuss our attempt at modeling spatial senses of prepositions in English using a combination of rule-based and statistical learning approaches. Each preposition model is implemented as a tree where each node computes certain intuitive relations associated with the preposition, with the root computing the final value of the prepositional relation itself. The models operate on a set of artificial 3D “room world” environments, designed in Blender, taking the scene itself as an input. We also discuss the annotation framework used to collect the human judgments employed in model training. Both our factored models and black-box baseline models perform quite well, but the factored models enable reasoned explanations of spatial relation judgments.
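     A minimal sketch of the factored-model idea (the features, weights, and functional forms below are invented for illustration; the paper's models are trained on human judgments): each node scores one intuitive sub-relation from scene geometry, and the root combines its children into the preposition's final value:

         import numpy as np

         def vertical_alignment(figure, ground):
             # Sub-relation: how directly 'figure' sits over 'ground' (0..1).
             dx = np.linalg.norm(figure["pos"][:2] - ground["pos"][:2])
             return float(np.exp(-dx))          # illustrative decay, not the paper's

         def height_gap(figure, ground):
             # Sub-relation: is 'figure' higher than the top of 'ground'? (0..1)
             gap = figure["pos"][2] - (ground["pos"][2] + ground["height"])
             return float(1.0 / (1.0 + np.exp(-4.0 * gap)))   # smooth step

         def above(figure, ground, w=(0.5, 0.5)):
             # Root node: weighted combination of the sub-relations; the weights
             # would be fit to collected human judgments.
             return w[0] * vertical_alignment(figure, ground) + w[1] * height_gap(figure, ground)

         lamp  = {"pos": np.array([0.1, 0.0, 2.0]), "height": 0.3}
         table = {"pos": np.array([0.0, 0.0, 0.0]), "height": 0.8}
         print(f"above(lamp, table) = {above(lamp, table):.2f}")   # high score expected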
  4. The ability to determine whether a robot's grasp has a high chance of failing, before it actually does, can save significant time and avoid failures by planning to re-grasp or to change strategy for that case. Machine learning (ML) offers one way to learn to predict grasp failure from historical data consisting of a robot's attempted grasps alongside labels of success or failure. Unfortunately, most powerful ML models are black-box models that do not explain the reasons behind their predictions. In this paper, we investigate how ML can be used to predict robot grasp failure, and we study the tradeoff between accuracy and interpretability by comparing interpretable (white-box) ML models that are inherently explainable with more accurate black-box ML models that are inherently opaque. Our results show that one does not necessarily have to compromise accuracy for interpretability if we use an explanation-generation method, such as SHapley Additive exPlanations (SHAP), to add explainability to the accurate predictions made by black-box models. An explanation of a predicted fault can guide an efficient choice of corrective action in the robot's design to avoid future failures.
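     A hedged sketch of the comparison the paper describes (the data and feature names here are synthetic stand-ins for grasp logs): train an inherently interpretable model and an opaque one, then use SHAP to attribute the opaque model's individual predictions to features:

         import numpy as np
         import shap
         from sklearn.ensemble import GradientBoostingClassifier
         from sklearn.model_selection import train_test_split
         from sklearn.tree import DecisionTreeClassifier

         rng = np.random.default_rng(0)
         # Synthetic stand-in: [finger_force, slip, contact_area, wrist_torque]
         X = rng.normal(size=(1000, 4))
         y = (0.8 * X[:, 1] - 0.5 * X[:, 2] + 0.2 * rng.normal(size=1000) > 0).astype(int)  # 1 = failure
         X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

         white_box = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # interpretable
         black_box = GradientBoostingClassifier().fit(X_tr, y_tr)          # accurate, opaque
         print("tree accuracy:    ", white_box.score(X_te, y_te))
         print("boosting accuracy:", black_box.score(X_te, y_te))

         # SHAP attributes one prediction to per-feature contributions, recovering
         # an explanation from the otherwise opaque model.
         explainer = shap.TreeExplainer(black_box)
         shap_values = explainer.shap_values(X_te[:1])
         print("per-feature attributions for one grasp:", np.round(shap_values, 3))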
  5. In a black-box setting, the adversary only has API access to the target model and each query is expensive. Prior work on black-box adversarial examples follows one of two main strategies: (1) transfer attacks use white-box attacks on local models to find candidate adversarial examples that transfer to the target model, and (2) optimization-based attacks use queries to the target model and apply optimization techniques to search for adversarial examples. We propose hybrid attacks that combine both strategies, using candidate adversarial examples from local models as starting points for optimization-based attacks and using labels learned in optimization-based attacks to tune local models for finding transfer candidates. We empirically demonstrate on the MNIST, CIFAR10, and ImageNet datasets that our hybrid attack strategy reduces cost and improves success rates, and in combination with our seed prioritization strategy, enables batch attacks that can efficiently find adversarial examples with only a handful of queries. 
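     In outline (a self-contained toy sketch, not the authors' code; the models, the FGSM-style local step, and the random-search refinement are simplified stand-ins for the attacks they evaluate): a white-box step on a local substitute produces a transfer candidate, which then seeds a query-based search against the target:

         import numpy as np
         from sklearn.ensemble import RandomForestClassifier
         from sklearn.linear_model import LogisticRegression

         rng = np.random.default_rng(0)
         X = rng.normal(size=(2000, 10))
         y = (X[:, 0] + X[:, 1] > 0).astype(int)

         # Black-box target: only predict() (one paid query per call) is available.
         target = RandomForestClassifier(random_state=0).fit(X, y)
         # Local substitute trained on the target's observed labels.
         local = LogisticRegression().fit(X, target.predict(X))

         def transfer_candidate(x, eps=0.8):
             # Stage 1: white-box step against the local model's linear decision
             # function to flip its label (FGSM-like, gradient-sign direction).
             w = local.coef_[0]
             sign = -1.0 if local.predict([x])[0] == 1 else 1.0
             return x + sign * eps * np.sign(w)

         def refine(x_start, y_orig, budget=100, step=0.3):
             # Stage 2: query-based search seeded at the transfer candidate.
             # (The full hybrid attack also reuses these labels to retune `local`.)
             for used in range(budget):
                 x = x_start + step * rng.normal(size=x_start.shape)
                 if target.predict([x])[0] != y_orig:
                     return x, used + 1      # success: target's label flipped
             return x_start, budget

         x0 = X[0]
         y0 = target.predict([x0])[0]
         adv, cost = refine(transfer_candidate(x0), y0)
         print("flipped:", target.predict([adv])[0] != y0, "| queries:", cost)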