Title: Explainable Planning Using Answer Set Programming
In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal to the user. A popular approach, called model reconciliation, has the planning agent reconcile the differences between its model and the model of the user so that its plan is also feasible and optimal for the user. This problem can be viewed as an optimization problem: the goal is to find a subset-minimal explanation with which the model of the user can be modified so that the agent's plan is also feasible and optimal for the user. This paper presents an algorithm for solving such problems using answer set programming.
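To make the optimization concrete: the search is over edits that move the user's model toward the agent's, stopping at a smallest edit set under which the agent's plan checks out. The Python sketch below brute-forces that search; the names (agent_model, human_model, plan_is_valid) are illustrative, and the paper itself encodes the problem in answer set programming rather than enumerating subsets.

```python
from itertools import combinations

def minimal_explanation(agent_model, human_model, plan_is_valid):
    """Brute-force model reconciliation: find a smallest set of edits to
    human_model (a set of model statements) under which the agent's plan
    is feasible/optimal. A cardinality-minimal edit set is in particular
    subset-minimal."""
    # Candidate edits: statements the user is missing, or holds wrongly.
    additions = [("add", s) for s in agent_model - human_model]
    removals = [("del", s) for s in human_model - agent_model]
    candidates = additions + removals

    # Enumerate edit sets by increasing size; return the first that works.
    for k in range(len(candidates) + 1):
        for edits in combinations(candidates, k):
            updated = set(human_model)
            for op, s in edits:
                (updated.add if op == "add" else updated.discard)(s)
            if plan_is_valid(updated):
                return edits
    return None  # no reconciliation within the models' differences
```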
Vasileiou, Stylianos Loukas; Yeoh, William; Son, Tran Cao; Kumar, Ashwin; Cashmore, Michael; Magazzeni, Daniele
(Journal of Artificial Intelligence Research)
In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the model of the human such that the agent's plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and where KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) we formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) we introduce a number of cost functions that can be used to reflect preferences between explanations; (3) we present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) we empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
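Read as pseudocode, the framework's core loop is: check KBa ⊨ q, then search KBa for a smallest ε whose addition to KBh restores the entailment. A self-contained Python sketch follows, using forward chaining over Horn rules as a stand-in entailment test (the paper works with full planning encodings and richer cost functions); all names are illustrative.

```python
from itertools import combinations

def entails(kb, query):
    """Forward chaining over Horn rules. Each rule is (frozenset(body), head);
    a fact is a rule with an empty body."""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return query in derived

def explanation(kb_agent, kb_human, query):
    """Smallest ε ⊆ KBa with KBh ∪ ε ⊨ q (unit cost per formula here;
    the paper also considers other cost functions over explanations)."""
    assert entails(kb_agent, query), "KBa must entail q to begin with"
    candidates = [r for r in kb_agent if r not in kb_human]
    for k in range(len(candidates) + 1):
        for eps in combinations(candidates, k):
            if entails(kb_human | set(eps), query):
                return set(eps)
    return None

# Example: the user lacks the fact "a", so {a} explains q.
KBa = {(frozenset(), "a"), (frozenset({"a"}), "q")}
KBh = {(frozenset({"a"}), "q")}
print(explanation(KBa, KBh, "q"))  # {(frozenset(), 'a')}
```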
Nguyen, V.; Tran, S.
(Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems)
In explainable planning, the planning agent needs to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach is called model reconciliation, where the agent reconciles the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an instance of the following more general problem: given two knowledge bases πa and πh and a query q such that πa entails q and πh does not entail q, where the notion of entailment depends on the logical theories underlying πa and πh, how should πh be changed, given πa and the support for q in πa, so that πh does entail q? In this paper, we study this problem in the context of answer set programming. To achieve this goal, we (1) define the notion of a conditional update between two logic programs πa and πh with respect to a query q; (2) define the notion of an explanation for a query q from a program πa to a program πh using conditional updates; (3) develop algorithms for computing explanations; and (4) show how the notion of explanation based on conditional updates can be used in explainable planning.
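The entailment test underlying this loop, whether πh entails q, is under answer set semantics the membership of q in every answer set (cautious entailment). Below is a minimal sketch of that check, assuming the clingo Python package is available; the program strings are illustrative, and this is not the paper's conditional-update construction itself.

```python
import clingo

def cautiously_entails(rules, query_atom):
    """True iff query_atom belongs to every answer set of the program.
    Sketch only: an inconsistent program (no answer sets) returns False
    here, although by convention it entails everything."""
    ctl = clingo.Control(["--enum-mode=cautious", "--models=0"])
    ctl.add("base", [], "\n".join(rules))
    ctl.ground([("base", [])])
    last = []
    # In cautious mode clingo shrinks an intersection model by model;
    # the last model reported is the set of cautious consequences.
    def on_model(m):
        last[:] = [str(a) for a in m.symbols(shown=True)]
    ctl.solve(on_model=on_model)
    return query_atom in last

# Example: q holds in every answer set of {a. q :- a.}
print(cautiously_entails(["a.", "q :- a."], "q"))  # True
```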
Hanou, Issa K.; Thomas, Devin W.; Ruml, Wheeler; De Weerdt, Mathijs
(Proceedings of the International Conference on Automated Planning and Scheduling)
Train routing is sensitive to delays that occur in the network. When a train is delayed, it is imperative that a new plan be found quickly, or else other trains may need to be stopped to ensure safety, potentially causing cascading delays. In this paper, we consider this class of multi-agent planning problems, which we call Multi-Agent Execution Delay Replanning. We show that these can be solved by reducing the problem to an any-start-time safe interval planning problem. When an agent has an any-start-time plan, it can react to a delay by simply looking up the precomputed plan for the delayed start time. We identify crucial real-world problem characteristics like the agent's speed, size, and safety envelope, and extend the any-start-time planning to account for them. Experimental results on real-world train networks show that any-start-time plans are compact and can be computed in reasonable time while enabling agents to instantly recover a safe plan.
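The recovery mechanism this abstract describes is essentially a table lookup: plans are precomputed per start time, and a delayed agent indexes into that table instead of replanning. A small Python sketch of that lookup follows, with illustrative names; building the table is where the actual safe-interval planning happens.

```python
import bisect

class AnyStartTimePlans:
    """Plans precomputed offline, keyed by earliest start time. Each plan
    is assumed safe for any start time from its key up to the next key."""

    def __init__(self, entries):
        entries = sorted(entries)             # [(start_time, plan), ...]
        self.times = [t for t, _ in entries]
        self.plans = [p for _, p in entries]

    def plan_for(self, start_time):
        """O(log n) recovery after a delay: look up, don't replan."""
        i = bisect.bisect_right(self.times, start_time) - 1
        if i < 0:
            raise ValueError("start time precedes all precomputed plans")
        return self.plans[i]

# Example: a train delayed to t=7 instantly gets the plan valid from t=5.
table = AnyStartTimePlans([(0, "route-A"), (5, "route-B")])
print(table.plan_for(7))  # route-B
```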
While the development of proactive personal assistants has been a popular topic within AI research, most research in this direction tends to focus on a small subset of possible interaction settings. An important setting that is often overlooked is one where the users may have an incomplete or incorrect understanding of the task. This could lead to the user following incorrect plans with potentially disastrous consequences. Supporting such settings requires agents that are able to detect when the user's actions might be leading them to a possibly undesirable state and, if they are, intervene so the user can correct their course of actions. For the former problem, we introduce a novel planning compilation that transforms the task of estimating the likelihood of task failures into a probabilistic goal recognition problem. This allows us to leverage existing goal recognition techniques to detect the likelihood of failure. For the intervention problem, we use model search algorithms to detect the set of minimal model updates that could help users identify valid plans. These identified model updates become the basis for agent intervention. We further extend the proposed approach by developing methods for pre-emptive interventions, to prevent users from performing actions that might result in eventual plan failure. We show how we can identify such intervention points by using an efficient approximation of the true intervention problem, which is best represented as a Partially Observable Markov Decision Process (POMDP). To substantiate our claims and demonstrate the applicability of our methodology, we have conducted extensive evaluations across a diverse range of planning benchmarks. These tests have consistently shown the robustness and adaptability of our approach, further solidifying its potential utility in real-world applications.
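The failure-detection half of this pipeline reduces to goal recognition: treat "heading to an undesirable state" as one candidate goal among several and score it from observed actions. A Bayesian sketch of that reduction, with illustrative names (the paper derives the likelihoods from a planning compilation rather than taking them as given):

```python
def failure_likelihood(observations, goals, prior, likelihood):
    """P(failure | observations) by Bayes over candidate goals, where the
    designated pseudo-goal "FAIL" stands in for the undesirable states.
    likelihood(obs, g) scores how well obs fit a plan for g; the paper
    obtains such scores from a planner via compilation."""
    scores = {g: prior[g] * likelihood(observations, g) for g in goals}
    total = sum(scores.values())
    return scores["FAIL"] / total if total > 0 else 0.0
```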
Nguyen, Van, Vasileiou, Stylianos Loukas, Son, Tran Cao, and Yeoh, William. Explainable Planning Using Answer Set Programming. Retrieved from https://par.nsf.gov/biblio/10208818. Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning. Web. doi:10.24963/kr.2020/66.
Nguyen, Van, Vasileiou, Stylianos Loukas, Son, Tran Cao, & Yeoh, William. Explainable Planning Using Answer Set Programming. Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning. Retrieved from https://par.nsf.gov/biblio/10208818. https://doi.org/10.24963/kr.2020/66
Nguyen, Van, Vasileiou, Stylianos Loukas, Son, Tran Cao, and Yeoh, William. "Explainable Planning Using Answer Set Programming". Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning. https://doi.org/10.24963/kr.2020/66. https://par.nsf.gov/biblio/10208818.
@article{osti_10208818,
  title = {Explainable Planning Using Answer Set Programming},
  url = {https://par.nsf.gov/biblio/10208818},
  DOI = {10.24963/kr.2020/66},
  abstractNote = {In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach to do so is called model reconciliation, where the planning agent tries to reconcile the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an optimization problem, where the goal is to find a subset-minimal explanation that one can use to modify the model of the user such that the plan of the agent is also feasible and optimal to the user. This paper presents an algorithm for solving such problems using answer set programming.},
  journal = {Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning},
  author = {Nguyen, Van and Vasileiou, Stylianos Loukas and Son, Tran Cao and Yeoh, William}
}