Title: A Logic-Based Explanation Generation Framework for Classical and Hybrid Planning Problems
In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears infeasible or suboptimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the model of the human such that the agent's plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation: given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, where KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
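The formal statement above admits a direct, if naive, reading: enumerate subsets of KBa in increasing size and return the first one whose addition makes KBh entail q. Below is a minimal sketch of that idea for a toy propositional setting, assuming forward-chaining entailment over facts and definite rules. The names (entails, shortest_explanation, kb_a, kb_h) are illustrative, the update is modeled as pure addition, and cardinality stands in for the paper's more general cost functions; this is a sketch of the problem, not the paper's algorithm.

```python
from itertools import combinations

def entails(kb, query):
    """Naive forward chaining: kb mixes facts (strings) and definite rules,
    each rule encoded as a (frozenset_of_body_atoms, head_atom) pair."""
    facts = {x for x in kb if isinstance(x, str)}
    rules = [x for x in kb if not isinstance(x, str)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return query in facts

def shortest_explanation(kb_agent, kb_human, query):
    """Cardinality-minimal eps ⊆ kb_agent whose addition to kb_human makes it
    entail query; returns None if even kb_human ∪ kb_agent does not entail."""
    candidates = list(kb_agent - kb_human)
    for k in range(len(candidates) + 1):
        for eps in combinations(candidates, k):
            if entails(kb_human | set(eps), query):
                return set(eps)
    return None

# Toy example: the agent knows fact "a" and the rule a -> plan_valid;
# the human has the rule but is missing the fact.
kb_a = {"a", (frozenset({"a"}), "plan_valid")}
kb_h = {(frozenset({"a"}), "plan_valid")}
print(shortest_explanation(kb_a, kb_h, "plan_valid"))  # {'a'}
```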
Award ID(s):
1812628
NSF-PAR ID:
10353076
Date Published:
Journal Name:
Journal of Artificial Intelligence Research
Volume:
73
ISSN:
1076-9757
Page Range / eLocation ID:
1473 to 1534
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In explainable planning, the planning agent needs to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach is called model reconciliation, where the agent reconciles the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an instance of the following more general problem: given two knowledge bases πa and πh and a query q such that πa entails q and πh does not entail q, where the notion of entailment depends on the logical theories underlying πa and πh, how should πh be changed, given πa and the support for q in πa, so that πh does entail q? In this paper, we study this problem in the context of answer set programming. To achieve this goal, we (1) define the notion of a conditional update between two logic programs πa and πh with respect to a query q; (2) define the notion of an explanation for a query q from a program πa to a program πh using conditional updates; (3) develop algorithms for computing explanations; and (4) show how the notion of explanation based on conditional updates can be used in explainable planning.
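As a companion to the abstract above, the hedged sketch below illustrates the shape of a conditional update, which may both add rules from πa and retract rules from πh, with entailment delegated to a solver callback (for answer set programs, that role would be played by an ASP solver such as clingo). The function names and the size-ordered enumeration are illustrative assumptions, not the paper's definitions.

```python
from itertools import combinations

def conditional_update(pi_h, eps_plus, eps_minus):
    """Update the user's program: retract eps_minus, then add eps_plus."""
    return (pi_h - eps_minus) | eps_plus

def smallest_update(pi_a, pi_h, query, entails, max_size=4):
    """Enumerate candidate updates by increasing total size until the updated
    pi_h entails the query. `entails(program, query)` is a caller-supplied
    callback standing in for the underlying logical theory's entailment."""
    add_pool = list(pi_a - pi_h)   # rules the agent can offer
    del_pool = list(pi_h - pi_a)   # user rules that may block the query
    for size in range(max_size + 1):
        for i in range(size + 1):
            for eps_plus in combinations(add_pool, i):
                for eps_minus in combinations(del_pool, size - i):
                    updated = conditional_update(pi_h, set(eps_plus), set(eps_minus))
                    if entails(updated, query):
                        return set(eps_plus), set(eps_minus)
    return None
```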
  2. In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach to do so is called model reconciliation, where the planning agent tries to reconcile the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an optimization problem, where the goal is to find a subset-minimal explanation that one can use to modify the model of the user such that the plan of the agent is also feasible and optimal to the user. This paper presents an algorithm for solving such problems using answer set programming.
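One ingredient such an algorithm needs is a subset-minimality check. The sketch below shows the standard deletion-based pass, assuming a caller-supplied entails function and modeling the update as pure addition; it is a generic illustration rather than the paper's ASP encoding, and the single-pass guarantee only holds for a monotone entailment relation.

```python
def make_subset_minimal(eps, kb_human, query, entails):
    """Try dropping each element of a known explanation; keep the drop if the
    user's updated model still entails the query. For monotone entailment one
    pass yields a subset-minimal explanation; nonmonotonic settings (such as
    answer set programming) need a more careful check."""
    eps = set(eps)
    for e in sorted(eps, key=str):
        if entails(kb_human | (eps - {e}), query):
            eps.remove(e)
    return eps
```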
  3. Robust motion planning entails computing a global motion plan that is safe under all possible uncertainty realizations, be it in the system dynamics, the robot's initial position, or with respect to external disturbances. Current approaches for robust motion planning either lack theoretical guarantees or make restrictive assumptions about the system dynamics and uncertainty distributions. In this paper, we address these limitations by proposing the robust rapidly-exploring random tree (Robust-RRT) algorithm, which integrates forward reachability analysis directly into sampling-based control trajectory synthesis. We prove that Robust-RRT is probabilistically complete (PC) for nonlinear Lipschitz-continuous dynamical systems with bounded uncertainty; in other words, Robust-RRT eventually finds a robust motion plan that is feasible under all possible uncertainty realizations, assuming such a plan exists. Our analysis applies even to unstable and unstabilizable systems that admit only short-horizon feasible plans, because we explicitly consider the time evolution of reachable sets along control trajectories. To the best of our knowledge, this is the most general PC proof for robust sampling-based motion planning in terms of the types of uncertainties and dynamical systems it can handle. Considering that exact computation of reachable sets can be computationally expensive for some dynamical systems, we incorporate sampling-based reachability analysis into Robust-RRT and demonstrate our robust planner on nonlinear, underactuated, and hybrid systems.
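To make the core idea concrete, here is a deliberately simplified sketch, not the paper's algorithm: a planar single integrator with bounded disturbance, where each tree node carries a ball that over-approximates its reachable set, and an extension is accepted only if the entire ball is collision-free. All constants, the ball over-approximation, and the disk obstacles are illustrative assumptions; the paper handles general nonlinear dynamics and sampling-based reachability analysis.

```python
import math, random

# Toy robust-RRT flavor: disturbance of magnitude at most W per extension
# inflates the reachable set around each nominal state by W.
STEP, W, GOAL_R = 0.5, 0.02, 0.4

def ball_is_safe(center, radius, obstacles):
    # The entire reachable ball must clear every (center, radius) disk obstacle.
    return all(math.dist(center, oc) > orad + radius for oc, orad in obstacles)

def robust_rrt(start, goal, obstacles, iters=5000):
    nodes = [(start, 0.0, None)]  # (nominal state, uncertainty radius, parent)
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j][0], sample))
        (cx, cy), r, _ = nodes[i]
        d = math.dist((cx, cy), sample)
        if d < 1e-9:
            continue
        new_c = (cx + STEP * (sample[0] - cx) / d, cy + STEP * (sample[1] - cy) / d)
        new_r = r + W  # disturbance inflates the reachable set along the edge
        if ball_is_safe(new_c, new_r, obstacles):
            nodes.append((new_c, new_r, i))
            if math.dist(new_c, goal) + new_r <= GOAL_R:
                return nodes  # goal reached under all disturbance realizations
    return None
```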
  4. Creating a domain model, even for classical, domain-independent planning, is a notoriously hard knowledge-engineering task. A natural approach to this problem is to learn a domain model from observations. However, model-learning approaches frequently do not provide safety guarantees: the learned model may assume actions are applicable when they are not, and may incorrectly capture actions' effects. This may result in generating plans that fail when executed. In some domains such failures are not acceptable, due to the cost of failure or the inability to replan online after failure. In such settings, all learning must be done offline, based on observations collected, e.g., by other agents or a human; the task is then to generate a plan that is guaranteed to be successful. This is called the model-free planning problem. Prior work proposed an algorithm for solving the model-free planning problem in classical planning, but it was limited to learning grounded domains and thus could not scale. We generalize this prior work and propose the first safe model-free planning algorithm for lifted domains. We prove the correctness of our approach, and provide a statistical analysis showing that the number of trajectories needed to solve future problems with high probability is linear in the potential size of the domain model. We also present experiments on twelve IPC domains showing that our approach is able to learn the real action model in all cases with at most two trajectories.
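For intuition, the following sketch shows the conservative learning rule in its simplest, grounded form, which the paper generalizes to lifted (parameterized) domains: the learned precondition is the intersection of all observed pre-states (so it is never more permissive than the real precondition), and only literal changes witnessed in every observation are kept as effects. The states, action names, and triple format are illustrative assumptions, not the paper's notation.

```python
def learn_safe_model(observations):
    """Conservative grounded action model from (pre_state, action, post_state)
    triples; states are frozensets of ground atoms."""
    model = {}
    for s, a, s2 in observations:
        if a not in model:
            model[a] = {"pre": set(s), "add": set(s2 - s), "del": set(s - s2)}
        else:
            model[a]["pre"] &= s          # keep only conditions seen every time
            model[a]["add"] &= (s2 - s)   # keep only add effects seen every time
            model[a]["del"] &= (s - s2)   # keep only delete effects seen every time
    return model

# Toy usage: two observations of a "move" action.
obs = [
    (frozenset({"at_a", "fuel"}), "move", frozenset({"at_b", "fuel"})),
    (frozenset({"at_a", "fuel", "day"}), "move", frozenset({"at_b", "fuel", "day"})),
]
print(learn_safe_model(obs))
# {'move': {'pre': {'at_a', 'fuel'}, 'add': {'at_b'}, 'del': {'at_a'}}}
```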