-
The paper introduces the notion of an epistemic argumentation framework (EAF) as a means to integrate the beliefs of a reasoner with argumentation. Intuitively, an EAF encodes the beliefs of an agent who reasons about arguments. Formally, an EAF is a pair of an argumentation framework and an epistemic constraint. The semantics of the EAF is defined by the notion of an ω-epistemic labelling set, where ω is complete, stable, grounded, or preferred: a set of ω-labellings that collectively satisfies the epistemic constraint of the EAF. The paper shows how an EAF can represent different views of reasoners on the same argumentation framework, and how preferences and multi-agent argumentation can be represented in EAFs. Finally, the paper discusses complexity issues and computation using epistemic logic programming.
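As a compact restatement of the definition above (the symbols F, φ, and Λ_ω are our own shorthand, not necessarily the paper's notation): an EAF is a pair (F, φ), where F = (A, R) is an argumentation framework and φ is an epistemic constraint. For ω ∈ {complete, stable, grounded, preferred}, writing Λ_ω(F) for the set of all ω-labellings of F, a set S is an ω-epistemic labelling set iff

    ∅ ≠ S ⊆ Λ_ω(F)   and   S ⊨ φ,

i.e., S is a nonempty collection of ω-labellings that jointly satisfies the epistemic constraint.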
-
In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.
-
The paper proposes a framework for capturing how an agent’s beliefs evolve over time in response to observations and for answering the question of whether statements made by a third party can be believed. The basic components of the framework are a formalism for reasoning about actions, changes, and observations and a formalism for default reasoning.
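To make the combination concrete, here is a minimal sketch of the default-reasoning side in ASP; the predicates told/1, obs/1, and contradicts/2 are hypothetical names chosen for illustration, not the paper's formalism:

    % A third-party statement is believed by default,
    % unless a recorded observation contradicts it.
    told(door_open).
    told(light_on).
    obs(door_closed).
    contradicts(door_open, door_closed).

    believe(S)     :- told(S), not disbelieved(S).
    disbelieved(S) :- told(S), obs(O), contradicts(S, O).

    #show believe/1.

Run under clingo, this yields believe(light_on) but not believe(door_open), since an observation contradicts the latter.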
-
The paper describes an ongoing effort in developing a declarative system for supporting operators in the Nuclear Power Plant (NPP) control room. The focus is on two modules: diagnosis and explanation of events that happened in NPPs. We describe an Answer Set Programming (ASP) representation of an NPP, which consists of declarations of state variables, components, their connections, and rules encoding the plant behavior. We then show how the ASP program can be used to explain the series of events that occurred in the Three Mile Island, Unit 2 (TMI-2) NPP accident, the most severe accident in US nuclear power plant operating history. We also describe an explanation module aimed at answering questions such as “Why did an event occur?” or “What should be done?” given the collected data.
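As a small illustration of the kind of encoding the abstract describes (the plant fragment, component names, and rules below are invented for exposition and are not the paper's actual representation):

    % State variables and components of a hypothetical plant fragment.
    time(0..3).
    component(pump1). component(valve1).
    connected(pump1, valve1).

    % Commanded state and collected observations.
    state(pump1, running, T) :- time(T).
    obs(flow, 0). obs(flow, 1).

    % Behavior rule: a running pump whose line shows no flow is a suspect.
    suspect(pump1, T) :- state(pump1, running, T), not obs(flow, T).

    #show suspect/2.

On this fragment, clingo reports suspect(pump1,2) and suspect(pump1,3): the pump is suspected exactly at the times flow is expected but not observed.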
-
In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach, called model reconciliation, has the planning agent reconcile the differences between its model and the model of the user so that its plan is also feasible and optimal for the user. This can be viewed as an optimization problem whose goal is to find a subset-minimal explanation with which to modify the model of the user so that the agent’s plan becomes feasible and optimal for the user. This paper presents an algorithm for solving such problems using answer set programming.
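A toy rendering of the optimization view in ASP; diff/1 stands for hypothetical differences between the agent's and the user's models, and the validity rule is a stub standing in for the actual planning check:

    % Candidate updates: differences between the agent's and the user's models.
    diff(d1). diff(d2). diff(d3).

    % Choose a subset of differences to explain to the user.
    { explain(D) : diff(D) }.

    % Stub: the agent's plan becomes feasible and optimal for the user
    % once d1 and d2 are reconciled (in the real problem, a planning check).
    valid :- explain(d1), explain(d2).
    :- not valid.

    % Prefer the smallest explanation; a cardinality-minimal answer set
    % is in particular subset-minimal.
    #minimize { 1,D : explain(D) }.

    #show explain/1.

Here the optimal answer set contains explain(d1) and explain(d2) only, i.e., d3 need not be disclosed.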
-
In the Nuclear Power Plant (NPP) control room, the operators’ performance in emergencies is impacted by the need to monitor many indicators on the control room boards, the limited time to react to dynamic events, and the incompleteness of the operators’ knowledge. Recent research has been directed toward increasing the level of automation in the NPP system by employing modern AI techniques that support the operators’ decisions. In previous work, the authors employed an AI-guided declarative approach, namely Answer Set Programming (ASP), to represent and reason with qualitative human knowledge. The represented knowledge is structured to form a reasoning-based operator support system that assists the operator and compensates for knowledge incompleteness by performing reasoning to diagnose failures and recommend actions in real time. A general ASP code structure was proposed and tested against simple scenarios, e.g., diagnosing pump failures that result in loss-of-flow transients and generating the plans needed to resolve stuck valves in the secondary loop.
In this work, we investigate the potential of the previously proposed ASP structure by applying it to a realistic case study: the Three Mile Island, Unit 2 (TMI-2) accident event sequence (in particular, the first 142 minutes). The TMI scenario presents many challenges for a reasoning system, including a large number of variables, the complexity of the scenario, and misleading readings. The capability of the ASP-based reasoning system is tested for diagnosing failures and recommending actions throughout the scenario. This paper is the first work to test and demonstrate the capability of an automated reasoning system on a realistic nuclear accident scenario such as the TMI-2 accident.
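An illustrative fragment of the diagnose-then-recommend pattern described above; the reading, fault, and action names are invented and are not taken from the TMI-2 encoding:

    % A fault is diagnosed when a reading disagrees with the expected value.
    reading(pressurizer_level, high, 3).
    expected(pressurizer_level, normal, 3).

    fault(pressurizer_level, T) :-
        reading(pressurizer_level, V, T),
        expected(pressurizer_level, W, T),
        V != W.

    % Once a fault is diagnosed, a corrective action is recommended.
    recommend(close_block_valve, T) :- fault(pressurizer_level, T).

    #show recommend/2.

On this fragment, clingo derives recommend(close_block_valve,3) from the mismatch at time 3.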
-
This paper presents a framework for reasoning about trustworthiness in cyber-physical systems (CPS) that combines ontology-based reasoning and answer set programming (ASP). It introduces a formal definition of CPS and several problems related to the trustworthiness of a CPS, such as identifying the most vulnerable components of the system and computing a strategy for mitigating an issue. It then shows how a combination of ontology-based reasoning and ASP can be used to address these problems. The paper concludes with a discussion of the potential of the proposed methodologies.
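As a minimal sketch of one of the listed problems, identifying the most vulnerable component, in ASP (exposes/2 and the component names are hypothetical):

    % Each fact records that a component is exposed to a known issue.
    component(c1). component(c2).
    exposes(c1, issue_a). exposes(c1, issue_b). exposes(c2, issue_a).

    % Count exposures per component; the maximum marks the most vulnerable.
    exposure(C, N) :- component(C), N = #count { I : exposes(C, I) }.
    most_vulnerable(C) :- exposure(C, N), N = #max { M : exposure(_, M) }.

    #show most_vulnerable/1.

With these facts, clingo reports most_vulnerable(c1); in the paper's setting, the exposure facts would instead be derived from the ontology.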
-
The paper introduces the notion of an epistemic argumentation framework (EAF) as a means to integrate the beliefs of a reasoner with argumentation. Intuitively, an EAF encodes the beliefs of an agent who reasons about arguments. Formally, an EAF is a pair of an argumentation framework and an epistemic constraint. The semantics of the EAF is defined by the notion of an ω-epistemic labelling set, where ω is complete, stable, grounded, or preferred: a set of ω-labellings that collectively satisfies the epistemic constraint of the EAF. The paper shows how an EAF can represent different views of reasoners on the same argumentation framework, and how preferences and multi-agent argumentation can be represented in EAFs. Finally, the paper discusses the complexity of the problem of determining whether or not an ω-epistemic labelling set exists.