Low-level game environments and other simulations present a difficulty of scale for an expensive AI technique like narrative planning, which is normally constrained to environments with small state spaces. Because of this limitation, the intentional and cooperative behavior of agents guided by this technology cannot be deployed across different systems without significant additional authoring effort. I propose a process for automatically creating models of larger-scale domains so that a narrative planner can be employed in these settings. By generating an abstract domain of an environment while retaining the information needed to produce behavior appropriate to the abstract actions, this process lets agents reason in a lower-complexity space while acting in the higher-complexity one. The abstraction is accomplished through the development of extended-duration actions and the identification of their preconditions and effects. Together these components can be combined to form a narrative planning domain, and plans from this domain can be executed within the low-level environment.
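To make this concrete, the sketch below shows one way an extended-duration action with identified preconditions and effects might be represented. The STRIPS-style class and the literals are illustrative assumptions, not the representation used in the work.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AbstractAction:
    """An extended-duration action in the abstract domain (illustrative)."""
    name: str
    preconditions: frozenset  # literals that must hold to begin the action
    add_effects: frozenset    # literals made true by completing the action
    del_effects: frozenset    # literals made false by completing the action

    def applicable(self, state: frozenset) -> bool:
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        return (state - self.del_effects) | self.add_effects

# Hypothetical example: one abstract action standing in for many low-level moves.
travel = AbstractAction(
    name="travel(hero, market)",
    preconditions=frozenset({"at(hero, home)", "path(home, market)"}),
    add_effects=frozenset({"at(hero, market)"}),
    del_effects=frozenset({"at(hero, home)"}),
)

state = frozenset({"at(hero, home)", "path(home, market)"})
assert travel.applicable(state)
print(travel.apply(state))  # abstract state after the extended-duration action
```

A planner reasons only over these compact abstract states, while each action, when executed, expands into the longer sequence of low-level behaviors it summarizes.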
A Model for Automating the Abstraction of Planning Problems in a Narrative Context
Contemporary automated planning research emphasizes the use of domain knowledge abstractions like heuristics to improve search efficiency. Transformative automated abstraction techniques that decompose or otherwise reformulate the problem have a limited presence, owing to poor performance on key metrics like plan length and time efficiency. In this paper, we argue for a reexamination of these transformative techniques in the context of narrative planning, where classical metrics are less appropriate. We propose a model for automating abstraction by decomposing a planning problem into subproblems that serve as abstract features of the problem. We demonstrate the application of this approach on a low-level problem and discuss key features of the resulting abstract problem. Plans in the abstract problem are shorter, representing summaries of low-level plans, but can be directly translated into low-level plans for the original problem.
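As a rough illustration of the translation step, the sketch below assumes each abstract action records the low-level subplan found when its subproblem was solved, so an abstract plan translates back by concatenation. All names and plans here are hypothetical, not taken from the paper.

```python
# Hypothetical abstract plan, where each step names a solved subproblem.
abstract_plan = ["reach(door)", "unlock(door)", "reach(goal)"]

# Hypothetical mapping from abstract actions to the low-level subplans
# discovered when the corresponding subproblems were solved.
subplans = {
    "reach(door)":  ["move(n)", "move(n)", "move(e)"],
    "unlock(door)": ["take(key)", "use(key, door)"],
    "reach(goal)":  ["move(e)", "move(s)"],
}

def translate(abstract_plan, subplans):
    """Concatenate stored subplans to recover a plan for the original problem."""
    low_level = []
    for step in abstract_plan:
        low_level.extend(subplans[step])
    return low_level

print(translate(abstract_plan, subplans))
```

Under this scheme the abstract plan is a summary: three abstract steps expand into seven low-level actions, which is why abstract plans are shorter yet directly executable.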
- PAR ID: 10569056
- Publisher / Repository: AAAI Press
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment
- Volume: 20
- Issue: 1
- ISSN: 2326-909X
- Page Range / eLocation ID: 35 to 45
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Robots acting in human-scale environments must plan under uncertainty in large state–action spaces and face constantly changing reward functions as requirements and goals change. Planning under uncertainty in large state–action spaces requires hierarchical abstraction for efficient computation. We introduce a new hierarchical planning framework called Abstract Markov Decision Processes (AMDPs) that can plan in a fraction of the time needed for complex decision making in ordinary MDPs. AMDPs provide abstract states, actions, and transition dynamics in multiple layers above a base-level “flat” MDP. AMDPs decompose problems into a series of subtasks with both local reward and local transition functions used to create policies for subtasks. The resulting hierarchical planning method is independently optimal at each level of abstraction, and is recursively optimal when the local reward and transition functions are correct. We present empirical results showing significantly improved planning speed, while maintaining solution quality, in the Taxi domain and in a mobile-manipulation robotics problem. Furthermore, our approach allows specification of a decision-making model for a mobile-manipulation problem on a Turtlebot, spanning from low-level control actions operating on continuous variables all the way up through high-level object manipulation tasks.
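A runnable toy sketch of the two-level idea follows, assuming a one-dimensional corridor divided into rooms rather than the paper's Taxi or Turtlebot domains: an abstract plan over room transitions is executed by local base-level policies, one per subtask.

```python
# Toy AMDP-style sketch (illustrative, not the authors' implementation):
# base-level states are positions on a corridor; abstract states are rooms.

def room_of(pos):                 # state abstraction: position -> room
    return pos // 3               # rooms are 3 positions wide

def abstract_plan(start_room, goal_room):
    """Trivial abstract planner: a chain of room-transition subtasks."""
    step = 1 if goal_room > start_room else -1
    return [(r, r + step) for r in range(start_room, goal_room, step)]

def local_policy(pos, target_room):
    """Subtask policy: greedy base-level action toward the target room."""
    return 1 if room_of(pos) < target_room else -1

def execute(pos, goal_room):
    for _, target in abstract_plan(room_of(pos), goal_room):
        while room_of(pos) != target:          # subtask terminates on entry
            pos += local_policy(pos, target)   # base-level transition
    return pos

print(execute(0, goal_room=3))   # walks from room 0 into room 3
```

The abstract planner never reasons about individual positions, which is the source of the speedup: each level only searches its own, much smaller, state–action space.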
-
Most programming problems have multiple viable solutions that organize the underlying problem’s tasks in fundamentally different ways. Which organizations (a.k.a. plans) students implement and prefer depends on solutions they have seen before as well as features of their programming language. How much exposure to planning do students need before they can appreciate and produce different plans? We report on a study in which students in introductory courses at two universities were given a single lecture on planning between assessments. In the post-assessment, many students produced multiple high-level plans (including ones first introduced in the lecture) and richly discussed tradeoffs between plans. This suggests that planning can be taught with fairly low overhead once students have a decent foundation in programming.
-
In the face of difficult exploration problems in reinforcement learning, we study whether giving an agent an object-centric mapping (describing a set of items and their attributes) allows for more efficient learning. We found this problem is best solved hierarchically, by modelling items at a higher level of state abstraction than pixels, and attribute change at a higher level of temporal abstraction than primitive actions. This abstraction simplifies the transition dynamics by making specific future states easier to predict. We make use of this to propose a fully model-based algorithm that learns a discriminative world model, plans to explore efficiently with only a count-based intrinsic reward, and can subsequently plan to reach any discovered (abstract) states. We demonstrate the model's ability to (i) efficiently solve single tasks, (ii) transfer zero-shot and few-shot across item types and environments, and (iii) plan across long horizons. Across a suite of 2D crafting and MiniHack environments, we empirically show our model significantly outperforms state-of-the-art low-level methods (without abstraction), as well as performant model-free and model-based methods using the same abstraction. Finally, we show how to learn low-level object-perturbing policies via reinforcement learning, and the object mapping itself by supervised learning.
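The count-based intrinsic reward over abstract states can be sketched as follows; the particular object-centric mapping and the 1/sqrt(count) bonus are common choices assumed here for illustration, not necessarily the paper's exact formulation.

```python
# Minimal sketch: a count-based exploration bonus computed over abstract
# (object-attribute) states rather than raw pixel observations.
from collections import Counter

counts = Counter()

def abstract_state(obs):
    """Hypothetical object-centric mapping: item attributes -> abstract state."""
    return tuple(sorted(obs.items()))

def intrinsic_reward(obs):
    s = abstract_state(obs)
    counts[s] += 1
    return counts[s] ** -0.5   # bonus decays as the abstract state is revisited

print(intrinsic_reward({"key": 1, "door": "locked"}))  # 1.0 on first visit
print(intrinsic_reward({"key": 1, "door": "locked"}))  # ~0.71 on second visit
```

Because many distinct pixel observations collapse to the same abstract state, the counts stay meaningful in large environments where raw-state counting would never repeat.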