

Title: Explainable Planning Using Answer Set Programming
In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach to do so is called model reconciliation, where the planning agent tries to reconcile the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an optimization problem, where the goal is to find a subset-minimal explanation that one can use to modify the model of the user such that the plan of the agent is also feasible and optimal to the user. This paper presents an algorithm for solving such problems using answer set programming.
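The abstract frames explanation generation as a search for a subset-minimal update to the user's model. As a rough illustration only (not the paper's ASP encoding), the Python sketch below brute-forces the smallest set of differences between a toy agent model and human model that makes the agent's plan valid; the models, facts, and the `plan_is_valid` callback are invented stand-ins for a real planner's feasibility check.

```python
from itertools import combinations

def find_minimal_explanation(agent_model, human_model, plan_is_valid):
    """Search for a smallest set of model differences that, when applied
    to the human's model, makes the agent's plan valid there.  A
    cardinality-minimal answer is in particular subset-minimal.  This is
    a brute-force illustration; the paper performs the search
    declaratively with answer set programming."""
    additions = agent_model - human_model           # facts the human lacks
    removals = human_model - agent_model            # facts the agent lacks
    diffs = [("add", f) for f in additions] + [("remove", f) for f in removals]
    for k in range(len(diffs) + 1):                 # try smaller subsets first
        for subset in combinations(diffs, k):
            updated = set(human_model)
            for op, fact in subset:
                (updated.add if op == "add" else updated.discard)(fact)
            if plan_is_valid(updated):
                return list(subset)
    return None                                     # no explanation exists

# Toy domain: the plan works only if the door is unlocked and there is no fog.
agent = {"door_unlocked", "have_key"}
human = {"fog", "have_key"}
explanation = find_minimal_explanation(
    agent, human, lambda m: "door_unlocked" in m and "fog" not in m)
```

Here the minimal explanation tells the user both that the door is unlocked and that there is no fog; neither difference alone suffices.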
Award ID(s):
1812628
NSF-PAR ID:
10286658
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
International Conference on Principles of Knowledge Representation and Reasoning
Volume:
17
Issue:
1
Page Range / eLocation ID:
662-666
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent’s model. To do so, the agent provides an explanation that can be used to update the model of the human such that the agent’s plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, such that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems.
Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication. 
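To make the KBa/KBh formulation concrete, here is a small, hypothetical Python sketch restricted to propositional definite (Horn) knowledge bases: entailment is forward chaining, and the explanation is a cheapest subset of the agent's KB whose addition lets the human's KB entail the query. The rule encoding as `(frozenset_of_body_atoms, head_atom)` pairs and the `cost=len` default are illustrative choices, not the paper's algorithms or cost functions.

```python
from itertools import combinations

def entails(kb, query):
    """Forward chaining for definite rules (body_atoms, head); a fact is
    a rule with an empty body."""
    facts = set()
    while True:
        new = {h for body, h in kb if body <= facts} - facts
        if not new:
            return query in facts
        facts |= new

def explain(kb_agent, kb_human, query, cost=len):
    """Cheapest epsilon subset of KBa such that KBh + epsilon entails the
    query; exhaustive enumeration, for illustration only."""
    candidates = [r for r in kb_agent if r not in kb_human]
    best = None
    for k in range(len(candidates) + 1):
        for eps in combinations(candidates, k):
            if entails(kb_human | set(eps), query) and (
                    best is None or cost(eps) < cost(best)):
                best = eps
    return best

# The agent knows a, a -> b, b -> q; the human only knows a.
KBa = {(frozenset(), "a"), (frozenset({"a"}), "b"), (frozenset({"b"}), "q")}
KBh = {(frozenset(), "a")}
eps = explain(KBa, KBh, "q")
```

The returned explanation is the two rules a -> b and b -> q: adding them to KBh closes the inference chain from the shared fact a to the query q.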
  2.

    In human-aware planning problems, the planning agent may need to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach to do so is called model reconciliation, where the planning agent tries to reconcile the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an optimization problem, where the goal is to find a subset-minimal explanation that one can use to modify the model of the user such that the plan of the agent is also feasible and optimal to the user. This paper presents an algorithm for solving such problems using answer set programming.

     
  3.
    In explainable planning, the planning agent needs to explain its plan to a human user, especially when the plan appears infeasible or suboptimal for the user. A popular approach is called model reconciliation, where the agent reconciles the differences between its model and the model of the user such that its plan is also feasible and optimal to the user. This problem can be viewed as an instance of the following more general problem: given two knowledge bases πa and πh and a query q such that πa entails q and πh does not entail q, where the notion of entailment depends on the logical theories underlying πa and πh, how to change πh, given πa and the support for q in πa, so that πh does entail q. In this paper, we study this problem in the context of answer set programming. To achieve this goal, we (1) define the notion of a conditional update between two logic programs πa and πh with respect to a query q; (2) define the notion of an explanation for a query q from a program πa to a program πh using conditional updates; (3) develop algorithms for computing explanations; and (4) show how the notion of explanation based on conditional updates can be used in explainable planning.
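As a loose illustration of the conditional-update idea, for negation-free propositional programs only (the paper defines conditional updates for full answer set programs), the sketch below transfers rules from πa to πh, letting a transferred rule replace any πh rule with the same head, until the updated program entails q. The rule encoding, override policy, and exhaustive search are illustrative assumptions, not the paper's definitions.

```python
from itertools import combinations

def entails(program, q):
    """Bottom-up evaluation of a negation-free program whose rules are
    (frozenset_of_body_atoms, head_atom) pairs."""
    facts = set()
    while True:
        new = {h for body, h in program if body <= facts} - facts
        if not new:
            return q in facts
        facts |= new

def explain_by_conditional_update(pi_h, pi_a, q):
    """Find a smallest set of rules from pi_a that, transferred to pi_h
    with same-head overriding, makes pi_h entail q (exhaustive toy)."""
    candidates = list(pi_a - pi_h)
    for k in range(len(candidates) + 1):
        for eps in combinations(candidates, k):
            heads = {h for _body, h in eps}
            updated = {r for r in pi_h if r[1] not in heads} | set(eps)
            if entails(updated, q):
                return set(eps)
    return None

# The human derives "open" from "magic"; the agent instead derives it from a key.
pi_a = {(frozenset(), "key"), (frozenset({"key"}), "open"), (frozenset({"open"}), "q")}
pi_h = {(frozenset({"magic"}), "open"), (frozenset({"open"}), "q")}
eps = explain_by_conditional_update(pi_h, pi_a, "q")
```

The update transfers the key fact and the key -> open rule, overriding the human's magic -> open rule for the same head, after which the shared rule open -> q yields the query.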
  4. Space mission planning and spacecraft design are tightly coupled and need to be considered together for optimal performance; however, this integrated optimization problem results in a large-scale Mixed-Integer Nonlinear Programming (MINLP) problem, which is challenging to solve. In response to this challenge, this paper proposes a new solution approach to this MINLP problem by iteratively solving a set of coupled subproblems via the augmented Lagrangian coordination approach, following the philosophy of Multidisciplinary Design Optimization (MDO). The proposed approach leverages the unique structure of the problem, which enables its decomposition into a set of coupled subproblems of different types: a Mixed-Integer Quadratic Programming (MIQP) subproblem for mission planning and one or more Nonlinear Programming (NLP) subproblem(s) for spacecraft design. Since specialized MIQP or NLP solvers can be applied to each subproblem, the proposed approach can efficiently solve the otherwise intractable integrated MINLP problem. An automatic and effective method to find an initial solution for this iterative approach is also proposed, so that the optimization can be performed without the need for a user-defined initial guess. In the demonstration case study, a human lunar exploration mission sequence is optimized with a subsystem-level parametric spacecraft design model. Compared to the state-of-the-art method, the proposed formulation obtains a better solution in a shorter computational time, even without parallelization. For larger problems, the proposed solution approach is also easily parallelizable and thus is expected to be further advantageous and scalable.
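The coordination idea can be shown on a toy problem. The sketch below is not the paper's MIQP/NLP decomposition; it applies a consensus ADMM-style augmented Lagrangian iteration to two coupled single-variable quadratics, minimize (x-1)^2 + (z-3)^2 subject to x = z (optimum x = z = 2), where each "subproblem" has a closed-form minimizer and a multiplier update coordinates the two.

```python
def coordinate(rho=1.0, iters=200):
    """Alternate closed-form subproblem solves with a scaled multiplier
    (dual) update, ADMM-style:
      subproblem 1: min_x (x - 1)^2 + (rho/2)(x - z + u)^2
      subproblem 2: min_z (z - 3)^2 + (rho/2)(x - z + u)^2
    The consensus constraint x = z couples the two 'disciplines'."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2.0 + rho * (z - u)) / (2.0 + rho)   # stationarity in x
        z = (6.0 + rho * (x + u)) / (2.0 + rho)   # stationarity in z
        u += x - z                                # multiplier update
    return x, z

x, z = coordinate()
```

Both variables converge to the coupled optimum 2 while the consensus gap x - z shrinks to zero; in the paper's setting the two closed-form solves are replaced by calls to specialized MIQP and NLP solvers.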
  5. The large number of antennas in massive MIMO systems allows the base station to communicate with multiple users on the same time and frequency resource through multi-user beamforming. However, highly correlated user channels can drastically impede the spectral efficiency that multi-user beamforming can achieve. As such, it is critical for the base station to schedule a suitable group of users in each time and frequency resource block to achieve maximum spectral efficiency while adhering to fairness constraints among the users. In this paper, we consider the resource scheduling problem for massive MIMO systems, whose optimal solution is known to be NP-hard. Inspired by recent achievements in deep reinforcement learning (DRL) for problems with large action sets, we propose SMART, a dynamic scheduler for massive MIMO based on the state-of-the-art Soft Actor-Critic (SAC) DRL model and the K-Nearest Neighbors (KNN) algorithm. Through comprehensive simulations using realistic massive MIMO channel models as well as real-world datasets from channel measurement experiments, we demonstrate the effectiveness of the proposed model under various channel conditions. Our results show that the proposed model performs very close to the optimal proportionally fair (Opt-PF) scheduler in terms of spectral efficiency and fairness, with more than an order of magnitude lower computational complexity at medium network sizes where Opt-PF is computationally feasible. Our results also show the feasibility and high performance of the proposed scheduler in networks with a large number of users and resource blocks.
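The abstract pairs SAC's continuous actor with a KNN lookup over the discrete scheduling actions. Below is a minimal, hypothetical sketch of that lookup step only (embedding vectors and group names are invented for illustration): the actor's continuous "proto-action" is mapped to its k nearest valid discrete actions, among which the critic would then choose.

```python
def knn_actions(proto_action, action_embeddings, k=2):
    """Return the k discrete actions whose embeddings are nearest (in
    squared Euclidean distance) to the actor's continuous proto-action."""
    def dist2(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, proto_action))
    ranked = sorted(action_embeddings, key=lambda item: dist2(item[1]))
    return [name for name, _vec in ranked[:k]]

# Hypothetical 2-D embeddings of three candidate user groups.
groups = [("group_a", (0.0, 0.0)), ("group_b", (1.0, 0.0)), ("group_c", (5.0, 5.0))]
shortlist = knn_actions((0.9, 0.1), groups, k=2)
```

This nearest-neighbor shortlisting is what keeps the action space tractable when the number of candidate user groups grows combinatorially.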