Title: Learning Rational Subgoals from Demonstrations and Instructions
We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals. At the core of our framework is a collection of rational subgoals (RSGs), which are essentially binary classifiers over the environmental states. RSGs can be learned from weakly-annotated data, in the form of unsegmented demonstration trajectories, paired with abstract task descriptions, which are composed of terms initially unknown to the agent (e.g., collect-wood then craft-boat then go-across-river). Our framework also discovers dependencies between RSGs, e.g., the task collect-wood is a helpful subgoal for the task craft-boat. Given a goal description, the learned subgoals and the derived dependencies facilitate off-the-shelf planning algorithms, such as A* and RRT, by setting helpful subgoals as waypoints to the planner, which significantly improves performance-time efficiency.
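The abstract describes three pieces: RSGs as binary classifiers over states, a dependency structure among them, and an off-the-shelf planner that treats subgoals as waypoints. As a rough illustration only, the Python sketch below shows how a chain of such classifiers could gate successive calls to a generic best-first planner; the toy grid-world state, the RationalSubgoal class, and every function name are hypothetical, not taken from the paper or its code.

```python
# Illustrative sketch only: rational subgoals (RSGs) modeled as binary classifiers
# over states, used as waypoints for an off-the-shelf planner.  The toy grid state,
# class, and function names below are hypothetical, not taken from the paper.
import heapq
from typing import Callable, List, Optional, Tuple

State = Tuple[int, int]  # toy state: a cell (x, y) in a grid world


class RationalSubgoal:
    """An RSG is, at its core, a binary classifier over environment states."""

    def __init__(self, name: str, classifier: Callable[[State], bool]):
        self.name = name
        self.classifier = classifier

    def achieved(self, state: State) -> bool:
        return self.classifier(state)


def best_first_plan(start: State,
                    goal_test: Callable[[State], bool],
                    neighbors: Callable[[State], List[State]],
                    heuristic: Callable[[State], float] = lambda s: 0.0
                    ) -> Optional[List[State]]:
    """A generic A*-style search; the RSG framework treats the planner as a black box."""
    counter = 0
    frontier = [(heuristic(start), 0.0, counter, start, [start])]
    closed = set()
    while frontier:
        _, cost, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in neighbors(state):
            if nxt not in closed:
                counter += 1
                heapq.heappush(
                    frontier,
                    (cost + 1.0 + heuristic(nxt), cost + 1.0, counter, nxt, path + [nxt]))
    return None


def plan_with_subgoals(start: State,
                       subgoal_chain: List[RationalSubgoal],
                       neighbors: Callable[[State], List[State]]
                       ) -> Optional[List[State]]:
    """Plan waypoint-to-waypoint: each RSG in the chain becomes an intermediate goal,
    so every planner call only has to search a small slice of the state space."""
    full_path, current = [start], start
    for rsg in subgoal_chain:
        segment = best_first_plan(current, rsg.achieved, neighbors)
        if segment is None:
            return None  # a real system could fall back to planning directly to the goal
        full_path.extend(segment[1:])
        current = segment[-1]
    return full_path
```

In the paper's setting, the subgoal chain would come from the learned dependency structure (e.g., collect-wood before craft-boat); in this sketch it would simply be supplied by hand.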
Award ID(s):
2214177
PAR ID:
10444337
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the AAAI Conference on Artificial Intelligence
ISSN:
2159-5399
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments is a challenging problem. We consider a setting in which the user defines every task by a linear temporal logic (LTL) formula. However, some causal dependencies in complex environments may be unknown to the user in advance, so the robot cannot solve the tasks by simply following the given instructions. In this work, we propose a hierarchical reinforcement learning (HRL) framework in which a symbolic transition model is learned to efficiently produce high-level plans that guide the agent in solving different tasks. Specifically, the symbolic transition model is learned by inductive logic programming (ILP) to capture the logic rules of state transitions. By planning over the product of the symbolic transition model and the automaton derived from the LTL formula, the agent can resolve causal dependencies and break a causally complex problem down into a sequence of simpler low-level sub-tasks. We evaluate the proposed framework on three environments in both discrete and continuous domains, showing advantages over previous representative methods. A minimal sketch of the product construction appears after this list.
  2. In this work, we theoretically investigate why large language model (LLM)-empowered agents can solve decision-making problems in the physical world. We consider a hierarchical reinforcement learning (RL) model where the LLM Planner handles high-level task planning and the Actor performs low-level execution. Within this model, the LLM Planner operates in a partially observable Markov decision process (POMDP), iteratively generating language-based subgoals through prompting. Assuming appropriate pretraining data, we prove that the pretrained LLM Planner effectively conducts Bayesian aggregated imitation learning (BAIL) via in-context learning. We also demonstrate the need for exploration beyond the subgoals produced by BAIL, showing that naively executing these subgoals results in linear regret. To address this, we propose an ε-greedy exploration strategy for BAIL, which we prove achieves sublinear regret when pretraining error is low. Finally, we extend our theoretical framework to cases where the LLM Planner acts as a world model to infer the environment’s transition model and to multi-agent settings, facilitating coordination among multiple Actors. 
  3. Great storytellers know how to take us on a journey. They direct characters to act—not necessarily in the most rational way—but rather in a way that leads to interesting situations, and ultimately creates an impactful experience for audience members looking on. If audience experience is what matters most, then can we help artists and animators directly craft such experiences, independent of the concrete character actions needed to evoke those experiences? In this paper, we offer a novel computational framework for such tools. Our key idea is to optimize animations with respect to simulated audience members’ experiences. To simulate the audience, we borrow an established principle from cognitive science: that human social intuition can be modeled as “inverse planning,” the task of inferring an agent’s (hidden) goals from its (observed) actions. Building on this model, we treat storytelling as “inverse inverse planning,” the task of choosing actions to manipulate an inverse planner’s inferences. Our framework is grounded in literary theory, naturally capturing many storytelling elements from first principles. We give a series of examples to demonstrate this, with supporting evidence from human subject studies. 
  4. This paper aims to improve the computational efficiency of motion planning for mobile robots with non-trivial dynamics through the use of learned controllers. Offline, a system-specific controller is first trained in an empty environment. Then, for the target environment, the approach constructs a data structure, a "Roadmap with Gaps," to approximately learn how to solve planning queries using the learned controller. The roadmap nodes correspond to local regions. Edges correspond to applications of the learned controller that approximately connect these regions. Gaps arise because the controller does not perfectly connect pairs of individual states along edges. Online, given a query, a tree sampling-based motion planner uses the roadmap so that the tree's expansion is informed towards the goal region. Tree expansion selects local subgoals using a wavefront computed on the roadmap that guides the search toward the goal. When the controller cannot reach a subgoal region, the planner resorts to random exploration to maintain probabilistic completeness and asymptotic optimality. The accompanying experimental evaluation shows that the approach significantly improves the computational efficiency of motion planning on various benchmarks, including physics-based vehicular models on uneven and varying-friction terrains as well as a quadrotor under air pressure effects. A sketch of this wavefront-guided subgoal selection appears after this list.
  5. Search-as-learning research has emphasized the need to better support searchers when learning about complex topics online. Prior work in the learning sciences has shown that effective self-regulated learning (SRL), in which goals are a central function, is critical to improving learning outcomes. This dissertation investigates the influence of subgoals on learning during search. Two conditions were investigated: Subgoals and NoSubgoals. In the Subgoals condition, a tool called the Subgoal Manager was used to help searchers develop specific subgoals associated with an overall learning-oriented search task. The influence of subgoals is explored along four dimensions: (1) learning outcomes; (2) searcher perceptions; (3) search behaviors; and (4) SRL processes. Learning outcomes were measured with two assessments, an established multiple-choice conceptual knowledge test and an open-ended summary of learning. Learning assessments were administered immediately after search and one week after search to capture learning retention. A qualitative analysis was conducted to identify the percentage of true statements on open-ended learning assessments. A think-aloud protocol was used to capture SRL processes. A second qualitative analysis was conducted to categorize SRL processes from think-aloud comments and behaviors during the search session. Findings from the dissertation suggest that subgoals improved learning during search and helped participants better retain what was learned one week later. Findings also suggest that SRL processes of participants in the Subgoals condition were more frequent and more diverse; both the SRL processes explicitly supported by the Subgoal Manager and those not explicitly supported appeared more frequently in the Subgoals condition.
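For the first related record above, the following is a minimal, purely illustrative sketch of the product construction it mentions, assuming a symbolic transition model has already been induced (e.g., by ILP) and the LTL formula has already been compiled to a finite automaton. The data layout, the labeling function, and all names are assumptions for illustration, not details from that paper.

```python
# Hypothetical sketch: planning over the product of a learned symbolic transition
# model and an automaton derived from an LTL task formula.  The data layout and
# names are illustrative, not taken from the cited work.
from collections import deque
from typing import Dict, FrozenSet, List, Optional, Tuple

SymState = str   # abstract symbolic state, e.g. "have_wood"
AutState = int   # state of the automaton compiled from the LTL formula


def plan_on_product(
    init_sym: SymState,
    init_aut: AutState,
    transitions: Dict[SymState, List[Tuple[str, SymState]]],    # learned symbolic model
    labels: Dict[SymState, FrozenSet[str]],                     # propositions true in each state
    aut_step: Dict[Tuple[AutState, FrozenSet[str]], AutState],  # automaton transition table
    accepting: FrozenSet[AutState],
) -> Optional[List[str]]:
    """Breadth-first search over (symbolic state, automaton state) pairs.
    The returned action sequence is the high-level plan handed to low-level policies."""
    start = (init_sym, init_aut)
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        (sym, aut), plan = queue.popleft()
        if aut in accepting:
            return plan
        for action, nxt_sym in transitions.get(sym, []):
            # Advance the automaton on the labels of the successor state;
            # if no edge is defined, stay put (self-loop assumption).
            nxt_aut = aut_step.get((aut, labels.get(nxt_sym, frozenset())), aut)
            node = (nxt_sym, nxt_aut)
            if node not in visited:
                visited.add(node)
                queue.append((node, plan + [action]))
    return None
```

Planning over this product is what lets the agent satisfy causal prerequisites that the raw instruction never mentions before attempting the instructed steps.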
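For the fourth related record, here is a similarly hedged sketch of how a wavefront over a roadmap can bias a tree planner's subgoal selection toward the goal region. The adjacency structure, edge costs, symmetric-roadmap assumption, and exploration probability are invented for illustration and are not from that paper.

```python
# Hypothetical sketch: a "wavefront" (goal-to-node distances on a roadmap) used to
# pick the next subgoal region for a sampling-based tree planner.  The roadmap is
# assumed to be a symmetric adjacency map {region: [(neighbor, cost), ...]}.
import heapq
import random
from typing import Dict, List, Optional, Tuple

Roadmap = Dict[str, List[Tuple[str, float]]]


def wavefront(roadmap: Roadmap, goal: str) -> Dict[str, float]:
    """Dijkstra from the goal region; lower values mean closer to the goal on the roadmap."""
    dist = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in roadmap.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(frontier, (nd, nbr))
    return dist


def select_subgoal(current_region: str, roadmap: Roadmap,
                   values: Dict[str, float], explore_prob: float = 0.1) -> Optional[str]:
    """Greedily follow the wavefront toward the goal; occasionally pick a random neighbor,
    mirroring the fallback to random exploration when learned-controller edges (the 'gaps')
    cannot be crossed."""
    neighbors = [nbr for nbr, _ in roadmap.get(current_region, [])]
    if not neighbors:
        return None
    if random.random() < explore_prob:
        return random.choice(neighbors)
    return min(neighbors, key=lambda n: values.get(n, float("inf")))
```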