Title: An Answer Set Programming Framework for Reasoning about Agents' Beliefs and Truthfulness of Statements
The paper proposes a framework for capturing how an agent's beliefs evolve over time in response to observations, and for answering the question of whether statements made by a third party can be believed. The basic components of the framework are a formalism for reasoning about actions, changes, and observations, and a formalism for default reasoning. The paper describes a concrete implementation that leverages answer set programming to determine the evolution of an agent's "belief state", based on observations, knowledge about the effects of actions, and a theory of how these influence an agent's beliefs. The beliefs are then used to assess whether statements made by a third party can be accepted as truthful. The paper investigates an application of the proposed framework to the detection of man-in-the-middle attacks targeting computers and cyber-physical systems. Finally, the paper briefly discusses related work and possible extensions.
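
As a rough illustration of the kind of encoding the abstract describes, here is a minimal clingo-style sketch: inertia rules let beliefs persist, constraints reject belief states that conflict with observations, and a third-party statement is accepted only when it agrees with the computed beliefs. All predicate names (holds/2, occurs/2, obs/2, statement/3, believable/1) and the man-in-the-middle-flavored scenario are assumptions of this sketch, not the paper's actual program.

    % Minimal sketch in clingo syntax (assumed names, toy scenario).
    #const n = 2.
    step(0..n).

    % Initial belief: the channel is secure.
    holds(secure_channel, 0).

    % Effect of a hypothetical interception action.
    -holds(secure_channel, T+1) :- occurs(intercept, T), step(T), T < n.

    % Inertia: beliefs persist unless contradicted.
    holds(F, T+1)  :- holds(F, T),  step(T), T < n, not -holds(F, T+1).
    -holds(F, T+1) :- -holds(F, T), step(T), T < n, not holds(F, T+1).

    % Belief states must be consistent with observations.
    :- obs(F, T), -holds(F, T).
    :- obs(neg(F), T), holds(F, T).

    % A statement asserting F at step T is accepted as truthful
    % only if it agrees with the agent's beliefs.
    believable(S) :- statement(S, F, T), holds(F, T).

    % Scenario: a certificate mismatch is observed at step 1,
    % explained by an interception at step 0; the peer's claim
    % that the channel is still secure should then be rejected.
    occurs(intercept, 0).
    obs(neg(secure_channel), 1).
    statement(peer_claim, secure_channel, 1).
    #show believable/1.

Under this encoding, the answer set contains no believable/1 atoms: the interception at step 0 explains the mismatch observed at step 1, so the peer's claim that the channel is secure is not accepted.
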
Award ID(s):
1914635, 1757207
PAR ID:
10208820
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning
Page Range / eLocation ID:
69 to 78
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In multi-agent domains (MADs), an agent's action may change not only the world and the agent's own knowledge and beliefs about the world, but also other agents' knowledge and beliefs about the world, and their knowledge and beliefs about other agents' knowledge and beliefs about the world. The goals of an agent in a multi-agent world may likewise involve manipulating the knowledge and beliefs of other agents, and again not just their knowledge and beliefs about the world, but also their knowledge about other agents' knowledge about the world. Our goal is to present an action language (mA+) that has the necessary features to address these aspects of representing and reasoning about actions and change (RAC) in MADs. mA+ allows the representation of, and reasoning about, the different types of actions an agent can perform in a domain where many other agents might be present, such as world-altering actions, sensing actions, and announcement/communication actions. It also allows the specification of agents' dynamic awareness of action occurrences, which has implications for what agents know about the world and about other agents' knowledge of the world. mA+ considers three levels of awareness: full awareness, partial awareness, and complete oblivion of an action occurrence and its effects. This keeps the language simple, yet powerful enough to address a large variety of knowledge-manipulation scenarios in MADs. The semantics of mA+ relies on the notion of a state, described by a pointed Kripke model, which encodes both the agents' knowledge and the real state of the world; the semantics is given by a transition function that maps pairs of actions and states into sets of states. We illustrate properties of the resulting action theories, including properties that guarantee the finiteness of the set of initial states and hence their practical implementability. Finally, we relate mA+ to other formalisms that contribute to RAC in MADs.
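    To make the awareness levels above concrete, here is a deliberately crude, flattened clingo-style sketch in which a fully aware listener adopts a truthful announcement (and learns that the announcer believes it), while an oblivious agent learns nothing. Every predicate name is invented for illustration; the encoding does not capture mA+'s actual pointed-Kripke-model semantics, and partial awareness is omitted.

        % Crude flat sketch of mA+-style awareness (invented predicates;
        % the real mA+ semantics operates on pointed Kripke models).
        agent(a; b; c).
        step(0..1).

        % Agent a truthfully announces fluent f at step 0; b is fully
        % aware of the announcement, c is completely oblivious.
        occurs(announce(a, f), 0).
        aware(b, announce(a, f), 0).

        % The announcer believes what it announces.
        believes(A, F, T) :- occurs(announce(A, F), T).

        % A fully aware listener adopts a truthful announcement ...
        believes(B, F, T+1) :- occurs(announce(A, F), T),
                               aware(B, announce(A, F), T).

        % ... and also learns that the announcer believes it
        % (a nested belief, flattened into a term here).
        believes(B, believes(A, F), T+1) :-
            occurs(announce(A, F), T), aware(B, announce(A, F), T).

        % Beliefs persist; the oblivious agent c derives nothing.
        believes(A, F, T+1) :- believes(A, F, T), step(T), T < 1.

        #show believes/3.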
  2. This paper explores the challenge of encountering incorrect beliefs in the context of reasoning about actions and change using action languages with sensing actions. An incorrect belief arises when observations conflict with the agent's own beliefs. A common way to recover from this situation is to replace the initial beliefs with beliefs that conform to the sequence of actions and the observations. The paper introduces a regression-based and revision-based approach for computing a correct initial belief. Starting from an inconsistent history of actions and observations, the proposed framework (1) computes the initial belief states that support the actions and observations, and (2) uses a belief-revision operator to repair the false initial belief state. The framework operates on domains with static causal laws, supports arbitrary sequences of actions, and integrates belief-revision methods to select a meaningful initial belief state among the possible alternatives.
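    A guess-and-check sketch of the regression-and-revision idea above, again in assumed clingo syntax: a choice rule enumerates candidate initial belief states, a constraint keeps only those that support the observed history, and minimizing the change from the original belief stands in for the paper's belief-revision operator. The toy domain and all names are illustrative.

        % Guess-and-check sketch (invented names) of regression plus
        % revision over an inconsistent history.
        fluent(f; g).
        #const n = 1.
        step(0..n).

        % Regression step: guess an initial belief state that must
        % support the observed history.
        { holds(F, 0) } :- fluent(F).

        % Static causal law (illustrative): f causes g.
        holds(g, T) :- holds(f, T), step(T).

        % Inertia (positive fluents only, for brevity).
        holds(F, T+1) :- holds(F, T), step(T), T < n.

        % The history: g is observed at step 1, which the original
        % belief (everything false; no prior/1 facts) cannot explain.
        obs(g, 1).
        :- obs(F, T), not holds(F, T).

        % Revision step: prefer repaired initial states that differ
        % minimally from the original belief, a crude stand-in for
        % the paper's belief-revision operator.
        #defined prior/1.
        changed(F) :- fluent(F), holds(F, 0), not prior(F).
        changed(F) :- fluent(F), prior(F), not holds(F, 0).
        #minimize { 1,F : changed(F) }.

        #show holds/2.

    Here the optimal answer set makes only g true initially (cost 1), since making f true instead would also force g via the causal law (cost 2).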
  3. We present a novel experiment demonstrating the strategies selfish individuals use to avoid social pressure to be altruistic. Subjects participate in a trust game, after which they have an opportunity to state their beliefs about their opponent's actions. Subsequently, subjects participate in a task designed to “reveal” their true beliefs. Subjects who initially made selfish choices falsely state their beliefs about their opponent's kindness. Their “revealed” beliefs are significantly more accurate, exposing the subjects' knowledge that their selfishness was unjustifiable given their opponent's behavior. The initial false statements complied with social norms, suggesting the subjects' attempts to project a more favorable social image. (JEL C9, D03, D83)
  4. Explanations of AI agents' actions are considered an important factor in improving users' trust in the decisions made by autonomous AI systems. However, as these systems evolve from reactive (acting on user input) to proactive (acting without requiring user intervention), there is a need to explore how explanations of their actions should evolve as well. In this work, we explore the design of explanations, through participatory design methods, for a proactive auto-response messaging agent that can reduce perceived obligations and social pressure to respond quickly to incoming messages by providing unavailability-related context. We recruited 14 participants who worked in pairs during collaborative design sessions in which they reasoned about the agent's design and actions. We qualitatively analyzed the data collected in these sessions and found that participants' reasoning about the agent's actions led them to speculate heavily about its design. These speculations significantly influenced participants' desire for explanations and the controls they sought over the agent's behavior. Our findings indicate a need to transform users' speculations into accurate mental models of agent design. Further, since the agent acts as a mediator in human-human communication, its explanation design must also account for social norms. Finally, users' expertise about their own habits and behaviors allows the agent to learn their preferences from them when justifying its actions.