Title: Learnability with PAC Semantics for Multi-agent Beliefs
The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition, and artificial intelligence. In an influential paper, Valiant recognized that the challenge of learning should be integrated with deduction. In particular, he proposed a semantics to capture the quality possessed by the output of probably approximately correct (PAC) learning algorithms when formulated in a logic. Although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries. In this paper, we provide a new technical foundation to demonstrate PAC learning with multi-agent epistemic logics. To circumvent the negative results in the literature on the difficulty of robust learning with the PAC semantics, we consider so-called implicit learning, where we incorporate observations into the background theory in service of deciding the entailment of an epistemic query. We prove correctness of the learning procedure and discuss results on the sample complexity, that is, how many observations we need to provably assert that the query is entailed given a user-specified error bound. Finally, we investigate under what circumstances this algorithm can be made efficient. On the last point, given that reasoning in epistemic logics, especially multi-agent epistemic logics, is PSPACE-complete, it might seem like there is no hope for this problem. We leverage recent results on the so-called Representation Theorem explored for single-agent and multi-agent epistemic logics with the only knowing operator to reduce modal reasoning to propositional reasoning.
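To make the implicit-learning idea concrete, here is a minimal propositional sketch (not the paper's procedure, which handles multi-agent epistemic queries): each partial observation is conjoined with the background theory, and the query is accepted if it is entailed under at least a (1 - eps) fraction of the observations. The atoms, knowledge base, and observations below are illustrative assumptions.

```python
from itertools import product

ATOMS = ["rain", "wet", "slippery"]

def models(formula, observation):
    """Truth assignments over ATOMS that satisfy `formula`, with the atoms
    fixed by the partial observation held constant."""
    free = [a for a in ATOMS if a not in observation]
    for bits in product((True, False), repeat=len(free)):
        v = {**observation, **dict(zip(free, bits))}
        if formula(v):
            yield v

def entails(kb, observation, query):
    """KB plus the partial observation entails the query iff the query holds
    in every model consistent with both."""
    return all(query(v) for v in models(kb, observation))

def pac_accepts(kb, observations, query, eps=0.2):
    """Accept the query if it is entailed under at least (1 - eps) of the samples."""
    hits = sum(entails(kb, obs, query) for obs in observations)
    return hits >= (1 - eps) * len(observations)

# Toy background theory: rain -> wet, wet -> slippery.
kb = lambda v: (not v["rain"] or v["wet"]) and (not v["wet"] or v["slippery"])
observations = [{"rain": True}, {"rain": True}, {"wet": False}]   # partial samples
query = lambda v: v["slippery"]
print(pac_accepts(kb, observations, query, eps=0.34))             # True: 2 of 3 samples entail it
```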
Award ID(s):
1942336 1939677 1908287
PAR ID:
10467344
Author(s) / Creator(s):
Publisher / Repository:
Cambridge University Press
Date Published:
Journal Name:
Theory and Practice of Logic Programming
Volume:
23
Issue:
4
ISSN:
1471-0684
Page Range / eLocation ID:
730 to 747
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract: Designing agents that reason and act upon the world has always been one of the main objectives of the Artificial Intelligence community. While for planning in “simple” domains the agents can rely solely on facts about the world, in several contexts, e.g., economy, security, justice, and politics, mere knowledge of the world can be insufficient to reach a desired goal. In these scenarios, epistemic reasoning, i.e., reasoning about agents’ beliefs about themselves and about other agents’ beliefs, is essential to design winning strategies. This paper addresses the problem of reasoning in multi-agent epistemic settings by exploiting declarative programming techniques. In particular, the paper presents an actual implementation of a multi-shot Answer Set Programming (ASP)-based planner that can reason in multi-agent epistemic settings, called PLATO (ePistemic muLti-agent Answer seT programming sOlver). The ASP paradigm enables a concise and elegant design of the planner compared with imperative implementations, facilitating formal verification of correctness. The paper shows how the planner, exploiting an ad-hoc epistemic state representation and the efficiency of ASP solvers, achieves competitive performance on benchmarks collected from the literature.
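As a rough illustration of the multi-shot ASP style such planners build on, here is a minimal sketch using clingo's Python API (not PLATO itself; the fluents holds/2, the action do/2, and the toy domain are assumptions): a step subprogram is grounded incrementally and the goal is queried at increasing horizons.

```python
import clingo

PROGRAM = """
#program base.
holds(p, 0).

#program step(t).
{ do(a, t) }.
holds(q, t) :- do(a, t), holds(p, t-1).
holds(F, t) :- holds(F, t-1).

#program check(t).
#external query(t).
:- query(t), not holds(q, t).
"""

def plan(max_horizon=5):
    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)                     # declares all three subprograms
    ctl.ground([("base", [])])
    for t in range(1, max_horizon + 1):
        # grow the horizon by one step and require the goal at the new horizon
        ctl.ground([("step", [clingo.Number(t)]), ("check", [clingo.Number(t)])])
        ctl.assign_external(clingo.Function("query", [clingo.Number(t)]), True)
        with ctl.solve(yield_=True) as handle:
            for model in handle:
                return [str(atom) for atom in model.symbols(atoms=True)
                        if atom.name == "do"]        # the chosen actions form the plan
        # no plan of this length: retract the query and try a longer horizon
        ctl.assign_external(clingo.Function("query", [clingo.Number(t)]), False)
    return None

print(plan())    # e.g. ['do(a,1)']
```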
  2.
    In explainable planning, the planning agent needs to explain its plan to a human user, especially when the plan appears infeasible or suboptimal to the user. A popular approach is called model reconciliation, where the agent reconciles the differences between its model and the model of the user so that its plan is also feasible and optimal for the user. This can be cast as a more general problem: given two knowledge bases πa and πh and a query q such that πa entails q and πh does not entail q, where the notion of entailment depends on the logical theories underlying πa and πh, how do we change πh, given πa and the support for q in πa, so that πh does entail q? In this paper, we study this problem in the context of answer set programming. To achieve this goal, we (1) define the notion of a conditional update between two logic programs πa and πh with respect to a query q; (2) define the notion of an explanation for a query q from a program πa to a program πh using conditional updates; (3) develop algorithms for computing explanations; and (4) show how the notion of explanation based on conditional updates can be used in explainable planning.
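A naive, brute-force sketch of this idea (not the paper's algorithm, which also allows changes beyond adding rules): search for a small set of rules from πa whose addition to πh makes q cautiously entailed, checking entailment with an ASP solver. The toy programs and the choice of cautious entailment for an atom are assumptions.

```python
from itertools import combinations
import clingo

def satisfiable(rules):
    ctl = clingo.Control()
    ctl.add("base", [], "\n".join(rules))
    ctl.ground([("base", [])])
    return ctl.solve().satisfiable

def cautiously_entails(rules, q):
    # q is cautiously entailed iff the program is satisfiable but adding ":- q." makes it unsatisfiable
    return satisfiable(rules) and not satisfiable(rules + [f":- {q}."])

def explain(pi_a, pi_h, q):
    # try candidate rule sets from pi_a, smallest first
    for k in range(len(pi_a) + 1):
        for subset in combinations(pi_a, k):
            if cautiously_entails(pi_h + list(subset), q):
                return list(subset)
    return None

pi_a = ["p.", "q :- p."]             # the agent's program entails q
pi_h = ["r."]                        # the human's program does not
print(explain(pi_a, pi_h, q="q"))    # ['p.', 'q :- p.']
```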
    Earlier epistemic planning systems for multi-agent domains generate plans that contain various types of actions such as ontic, sensing, or announcement actions. However, none of these systems consider untruthful announcements, i.e., none can generate plans that contain a lying or misleading announcement. In this paper, we present a novel epistemic planner, called EFP3.0, for multi-agent domains with untruthful announcements. Like EFP and EFP2.0, it is a forward-search planner that can deal with unlimited nested beliefs and common knowledge by employing a Kripke-based state representation and an update-model-based transition function. Unlike EFP, EFP3.0 employs a specification language that uses edge-conditioned update models for reasoning about the effects of actions in multi-agent domains. We describe the basics of EFP3.0 and conduct experimental evaluations of the system against state-of-the-art epistemic planners. We discuss potential improvements that could be useful for the scalability and efficiency of the system.
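To illustrate the kind of machinery involved, here is a self-contained sketch of Kripke states and the standard product update applied to a lying announcement (not EFP3.0's edge-conditioned update models; the atoms, agents, and toy scenario are assumptions).

```python
from itertools import product

class Kripke:
    def __init__(self, worlds, val, rel, actual):
        self.worlds = worlds          # world names
        self.val = val                # world -> frozenset of true atoms
        self.rel = rel                # agent -> set of (w, w') accessibility pairs
        self.actual = actual          # the designated (actual) world

def update(state, events, pre, ev_rel, ev_actual):
    """Product update: keep pairs (w, e) whose world satisfies the event's precondition."""
    worlds = [(w, e) for w, e in product(state.worlds, events) if pre[e](state.val[w])]
    val = {we: state.val[we[0]] for we in worlds}
    rel = {ag: {(u, v) for u in worlds for v in worlds
                if (u[0], v[0]) in state.rel[ag] and (u[1], v[1]) in ev_rel[ag]}
           for ag in state.rel}
    return Kripke(worlds, val, rel, (state.actual, ev_actual))

def believes(state, agent, atom):
    """Agent believes atom iff it holds in every world accessible from the actual one."""
    reachable = [v for (u, v) in state.rel[agent] if u == state.actual]
    return all(atom in state.val[w] for w in reachable)

# Initially neither agent knows whether p; in the actual world w0, p is false.
init = Kripke(
    worlds=["w0", "w1"],
    val={"w0": frozenset(), "w1": frozenset({"p"})},
    rel={ag: {(u, v) for u in ("w0", "w1") for v in ("w0", "w1")} for ag in ("a", "b")},
    actual="w0",
)

# Agent a lies to b that p holds: the actual event requires not-p, but b
# mistakes it for a truthful announcement of p.
after = update(
    init,
    events=["lie", "truth"],
    pre={"lie": lambda v: "p" not in v, "truth": lambda v: "p" in v},
    ev_rel={"a": {("lie", "lie"), ("truth", "truth")},
            "b": {("lie", "truth"), ("truth", "truth")}},
    ev_actual="lie",
)

print(believes(after, "b", "p"))        # True: b now (falsely) believes p
print("p" in after.val[after.actual])   # False: p is actually false
```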
    Do agents know each other’s strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce their strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. In this work, we study how it impacts strategic reasoning. To do so, we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics, called “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than the white-box semantics for an important fragment of Strategy Logic.
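A toy finite-horizon illustration of why the two readings can disagree (not an SL model checker; the game and the quantifier encoding are simplifying assumptions): two agents pick bits simultaneously for two rounds, and we ask whether for every strategy of agent 1 there is a strategy of agent 2 that always matches it. If agent 2 may inspect agent 1's whole strategy (white-box reading), the answer is yes; if agent 2 must commit first and can only react to the moves actually played (black-box reading, approximated below by swapping the quantifiers over history-dependent strategies), the answer is no.

```python
from itertools import product

# Histories a strategy can react to: the empty history (round 1) and each
# possible pair of round-1 moves (round 2).
HISTORIES = [()] + [((a, b),) for a in (0, 1) for b in (0, 1)]

def strategies():
    """Every function from histories to a bit, represented as a dict."""
    for bits in product((0, 1), repeat=len(HISTORIES)):
        yield dict(zip(HISTORIES, bits))

def always_match(s1, s2):
    a1, b1 = s1[()], s2[()]        # round 1: simultaneous moves
    h = ((a1, b1),)
    a2, b2 = s1[h], s2[h]          # round 2: both react to the round-1 moves
    return a1 == b1 and a2 == b2

# White-box reading: agent 2's strategy may depend on agent 1's whole strategy.
white_box = all(any(always_match(s1, s2) for s2 in strategies()) for s1 in strategies())
# Black-box reading (approximation): agent 2 commits first and sees only the play.
black_box = any(all(always_match(s1, s2) for s1 in strategies()) for s2 in strategies())

print(white_box, black_box)        # True False: the readings disagree on this game
```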
  5.
    Robust learning in expressive languages with real-world data continues to be a challenging task. Numerous conventional methods appeal to heuristics without any assurances of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called “implicit learning” has shown tremendous promise in terms of obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework retains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
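A rough sketch of how interval observations can be folded into the entailment check (an illustration under simplifying assumptions, not the paper's algorithm or benchmarks): each noisy sample contributes per-variable bounds, and a query c·x ≤ b is accepted if the background constraints plus the sample entail it in at least a (1 - eps) fraction of the samples, with each check performed by maximizing c·x using scipy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

def entailed(A_ub, b_ub, bounds, c, b):
    """Does  A_ub x <= b_ub  together with the variable bounds entail  c.x <= b ?
    Check by maximizing c.x over the feasible region (linprog minimizes, so negate c)."""
    res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.status == 2:                 # infeasible: the sample contradicts the background,
        return True                     # so it vacuously entails the query
    return res.status == 0 and -res.fun <= b + 1e-9

def pac_accepts(background, observations, query, eps=0.1):
    """background: (A_ub, b_ub); observations: list of per-variable (lo, hi) bounds;
    query: (c, b) meaning c.x <= b. Accept if entailed on >= (1 - eps) of the samples."""
    A_ub, b_ub = background
    c, b = query
    hits = sum(entailed(A_ub, b_ub, bounds, c, b) for bounds in observations)
    return hits >= (1 - eps) * len(observations)

# Toy example: background x1 + x2 <= 10, noisy interval observations of x1 and x2,
# query x1 + 2*x2 <= 18.
background = (np.array([[1.0, 1.0]]), np.array([10.0]))
observations = [[(0, 4), (0, 6)], [(0, 5), (0, 6)], [(0, 4), (0, 9)]]
print(pac_accepts(background, observations, (np.array([1.0, 2.0]), 18.0), eps=0.34))  # True
```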