Task and motion planning subject to linear temporal logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo tree search with hierarchical neural network policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level "option policies" that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.
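To make the search layer concrete, the sketch below shows how Monte Carlo tree search can select among a handful of task-level options while discarding branches flagged as violating the specification. It is a minimal illustration under stated assumptions: `simulate_option`, the option names, and the violation flag stand in for the learned option policies, the world model, and the LTL automaton monitor, none of which are given in the abstract.

```python
import math
import random

# Hypothetical stand-ins: each "option" is a black-box policy that advances a
# simulated world and reports whether the LTL monitor flagged a violation.
OPTIONS = ["keep_lane", "change_lane", "brake", "cross_intersection"]

def simulate_option(state, option):
    """Toy world model: returns (next_state, reward, violates_ltl)."""
    progress = {"keep_lane": 1.0, "change_lane": 0.8,
                "brake": 0.2, "cross_intersection": 2.0}[option]
    next_state = state + progress
    violates = option == "cross_intersection" and random.random() < 0.3
    return next_state, progress, violates

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}           # option -> Node
        self.visits = 0
        self.value = 0.0

def ucb_select(node, c=1.4):
    """Pick the child maximizing the UCB1 score."""
    return max(node.children.items(),
               key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)))

def rollout(state, depth=5):
    """Random playout over options; an LTL violation truncates and penalizes it."""
    total = 0.0
    for _ in range(depth):
        state, r, bad = simulate_option(state, random.choice(OPTIONS))
        if bad:
            return total - 10.0
        total += r
    return total

def mcts(root_state, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.children) == len(OPTIONS):
            _, node = ucb_select(node)
            path.append(node)
        # Expansion: try one untried option; violating branches get a penalty.
        option = random.choice([o for o in OPTIONS if o not in node.children])
        next_state, r, bad = simulate_option(node.state, option)
        child = Node(next_state)
        node.children[option] = child
        path.append(child)
        value = -10.0 if bad else r + rollout(next_state)
        # Backpropagation of the rollout value along the visited path.
        for n in path:
            n.visits += 1
            n.value += value
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("chosen option:", mcts(root_state=0.0))
```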
Instructing Goal-Conditioned Reinforcement Learning Agents with Temporal Logic Objectives
Goal-conditioned reinforcement learning (RL) is a powerful approach for learning general-purpose skills by reaching diverse goals. However, it has limitations when it comes to task-conditioned policies, where goals are specified by temporally extended instructions written in the Linear Temporal Logic (LTL) formal language. Existing approaches for finding LTL-satisfying policies rely on sampling a large set of LTL instructions during training to adapt to unseen tasks at inference time. However, these approaches do not guarantee generalization to out-of-distribution LTL objectives, which may have increased complexity. In this paper, we propose a novel approach to address this challenge. We show that simple goal-conditioned RL agents can be instructed to follow arbitrary LTL specifications without additional training over the LTL task space. Unlike existing approaches that focus on LTL specifications expressible as regular expressions, our technique is unrestricted and generalizes to ω-regular expressions. Experimental results demonstrate the effectiveness of our approach in adapting goal-conditioned RL agents to satisfy complex temporal logic task specifications zero-shot.
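A minimal way to picture the zero-shot idea: compile the LTL instruction into an automaton and, at each automaton state, hand the proposition that enables the next accepting transition to an ordinary goal-conditioned policy as its goal. The sketch below hard-codes a tiny automaton for F(a ∧ F b) and replaces the learned goal-conditioned agent with a toy 1-D controller; both are illustrative assumptions, not the paper's construction.

```python
# Propositions hold at fixed positions in a 1-D world (hypothetical example).
PROPOSITIONS = {"a": 3.0, "b": 7.0}

# Hand-written automaton for F(a & F b): q0 --a--> q1 --b--> q_acc (accepting).
# In practice this would come from an LTL-to-automaton translator.
AUTOMATON = {
    "q0": {"a": "q1"},
    "q1": {"b": "q_acc"},
    "q_acc": {},
}

def goto_goal(position, goal, step=1.0):
    """Toy stand-in for a learned goal-conditioned policy: move toward the goal."""
    if abs(goal - position) <= step:
        return goal
    return position + step if goal > position else position - step

def satisfy(automaton, start=0.0, max_steps=50):
    """Follow the LTL task by chaining goal-reaching sub-problems."""
    q, position, trace = "q0", start, []
    for _ in range(max_steps):
        if q == "q_acc":
            return True, trace
        # Treat the proposition that advances the automaton as the current goal.
        prop, next_q = next(iter(automaton[q].items()))
        position = goto_goal(position, PROPOSITIONS[prop])
        trace.append(position)
        if position == PROPOSITIONS[prop]:   # proposition now holds
            q = next_q
    return q == "q_acc", trace

done, trace = satisfy(AUTOMATON)
print("satisfied:", done, "in", len(trace), "steps")
```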
- PAR ID: 10511032
- Publisher / Repository: NeurIPS Proceedings
- Journal Name: Advances in Neural Information Processing Systems (NeurIPS)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning
Piotr Faliszewski; Viviana Mascardi (Eds.)
Recent success in reinforcement learning (RL) has brought renewed attention to the design of reward functions by which agent behavior is reinforced or deterred. Manually designing reward functions is tedious and error-prone. An alternative approach is to specify a formal, unambiguous logic requirement, which is automatically translated into a reward function to be learned from. Omega-regular languages, of which Linear Temporal Logic (LTL) is a subset, are a natural choice for specifying such requirements due to their use in verification and synthesis. However, current techniques based on omega-regular languages learn in an episodic manner whereby the environment is periodically reset to an initial state during learning. In some settings, this assumption is challenging or impossible to satisfy. Instead, in the continuing setting the agent explores the environment without resets over a single lifetime. This is a more natural setting for reasoning about omega-regular specifications defined over infinite traces of agent behavior. Optimizing the average reward instead of the usual discounted reward is more natural in this case due to the infinite-horizon objective that poses challenges to the convergence of discounted RL solutions. We restrict our attention to the omega-regular languages which correspond to absolute liveness specifications. These specifications cannot be invalidated by any finite prefix of agent behavior, in accordance with the spirit of a continuing problem. We propose a translation from absolute liveness omega-regular languages to an average reward objective for RL. Our reduction can be done on-the-fly, without full knowledge of the environment, thereby enabling the use of model-free RL algorithms. Additionally, we propose a reward structure that enables RL without episodic resetting in communicating MDPs, unlike previous approaches. We demonstrate empirically with various benchmarks that our proposed method of using average reward RL for continuing tasks defined by omega-regular specifications is more effective than competing approaches that leverage discounted RL.
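A rough picture of such a translation: run the specification's automaton alongside the environment, pay a unit reward whenever an accepting transition is taken, and optimize the average reward over a single continuing lifetime. The sketch below does this for a toy two-state MDP with a hand-written automaton for "visit the goal infinitely often" and a differential (average-reward) Q-learning update; the MDP, automaton, and step sizes are illustrative assumptions rather than the paper's reduction.

```python
import random

# Toy continuing MDP: two states; 'goal' holds in state 1.
def step(state, action):
    return 1 - state if action == "move" else state

# Hand-written automaton for GF goal: the self-loop on 'goal' is an accepting
# transition, so reward 1 is paid whenever the agent is in the goal state.
def automaton_reward(env_state):
    return 1.0 if env_state == 1 else 0.0

ACTIONS = ["stay", "move"]
Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
avg_reward = 0.0                 # running estimate of the average reward (gain)
alpha, eta, epsilon = 0.1, 0.01, 0.1

state = 0
for _ in range(20000):           # single continuing lifetime, no resets
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state = step(state, action)
    r = automaton_reward(next_state)
    # Differential Q-learning: subtract the average-reward estimate instead of
    # discounting, matching the continuing, infinite-horizon objective.
    td = r - avg_reward + max(Q[(next_state, a)] for a in ACTIONS) - Q[(state, action)]
    Q[(state, action)] += alpha * td
    avg_reward += eta * td
    state = next_state

print("estimated average reward:", round(avg_reward, 3))
print("greedy action in state 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))
```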
Integrating Symbolic Planning and Reinforcement Learning for Following Temporal Logic Specifications
Teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments is a challenging problem. We consider a setting in which the user defines every task by a linear temporal logic (LTL) formula. However, some causal dependencies in complex environments may be unknown to the user in advance. Hence, when the human user specifies instructions, the robot cannot solve the tasks by simply following the given instructions. In this work, we propose a hierarchical reinforcement learning (HRL) framework in which a symbolic transition model is learned to efficiently produce high-level plans that guide the agent in solving different tasks. Specifically, the symbolic transition model is learned by inductive logic programming (ILP) to capture the logic rules of state transitions. By planning over the product of the symbolic transition model and the automaton derived from the LTL formula, the agent can resolve causal dependencies and break a causally complex problem down into a sequence of simpler low-level sub-tasks. We evaluate the proposed framework on three environments in both discrete and continuous domains, showing advantages over previous representative methods.
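The planning layer of such a framework can be pictured as a search over the product of the symbolic transition model and the automaton for the LTL task, whose solution is a sequence of sub-tasks for the low-level policies. The sketch below uses hand-written operators with a causal dependency (a key is needed before a door can open) and a one-step DFA for "eventually reach the room"; these stand in for the ILP-learned rules and the learned sub-policies, which are not specified in the abstract.

```python
from collections import deque

# Hand-written symbolic operators (stand-ins for ILP-learned rules):
# each maps a precondition set of fluents to the fluents added after execution.
OPERATORS = {
    "get_key":    (set(),         {"has_key"}),
    "open_door":  ({"has_key"},   {"door_open"}),
    "enter_room": ({"door_open"}, {"in_room"}),
}

# DFA for the LTL task F in_room: accept once 'in_room' becomes true.
def dfa_step(q, fluents):
    return "acc" if ("in_room" in fluents or q == "acc") else "q0"

def plan(initial=frozenset(), accepting="acc"):
    """BFS over the product of symbolic states and DFA states."""
    start = (initial, "q0")
    frontier, parents = deque([start]), {start: None}
    while frontier:
        fluents, q = frontier.popleft()
        if q == accepting:
            # Reconstruct the sequence of high-level sub-tasks.
            node, ops = (fluents, q), []
            while parents[node] is not None:
                node, op = parents[node]
                ops.append(op)
            return list(reversed(ops))
        for name, (pre, post) in OPERATORS.items():
            if pre <= fluents:
                new_fluents = frozenset(fluents | post)
                succ = (new_fluents, dfa_step(q, new_fluents))
                if succ not in parents:
                    parents[succ] = ((fluents, q), name)
                    frontier.append(succ)
    return None

print("high-level plan:", plan())   # ['get_key', 'open_door', 'enter_room']
```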
Mission-time Linear Temporal Logic (MLTL) represents the most practical fragment of Metric Temporal Logic; MLTL resembles the popular logic Linear Temporal Logic (LTL) with finite closed-interval integer bounds on the temporal operators. Increasingly, many tools reason over MLTL specifications, yet these tools are useful only when system designers can validate the input specifications. We design an automated characterization of the structure of the computations that satisfy a given MLTL formula using regular expressions. We prove soundness and completeness of our structure. We also give an algorithm for automated MLTL formula validation and analyze its complexity both theoretically and experimentally. Additionally, we generate a test suite using control flow diagrams to robustly test our implementation and release an open-source tool with a user-friendly graphical interface. The results of our contributions are improvements to existing algorithms for MLTL analysis and are applicable to many other tools for automated, efficient MLTL formula validation. Our updated tool may be found at https://temporallogic.org/research/WEST.
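The finite-trace semantics of MLTL's bounded operators is what any such validation ultimately characterizes. The sketch below is a small recursive evaluator for G[a,b], F[a,b], and U[a,b] over finite traces of proposition sets; it illustrates the semantics only and is not the regular-expression construction or the WEST tool itself.

```python
# Minimal finite-trace evaluator for a fragment of MLTL.
# A trace is a list of sets of atomic propositions true at each time step.
# Formulas are nested tuples, e.g. ("G", 0, 3, ("ap", "p")).

def holds(formula, trace, i=0):
    kind = formula[0]
    if kind == "ap":                                  # atomic proposition
        return i < len(trace) and formula[1] in trace[i]
    if kind == "not":
        return not holds(formula[1], trace, i)
    if kind == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if kind == "G":                                   # G[a,b] phi
        _, a, b, phi = formula
        return all(holds(phi, trace, i + t) for t in range(a, b + 1))
    if kind == "F":                                   # F[a,b] phi
        _, a, b, phi = formula
        return any(holds(phi, trace, i + t) for t in range(a, b + 1))
    if kind == "U":                                   # phi U[a,b] psi
        _, a, b, phi, psi = formula
        return any(holds(psi, trace, i + t)
                   and all(holds(phi, trace, i + s) for s in range(a, t))
                   for t in range(a, b + 1))
    raise ValueError(f"unknown operator {kind}")

# Example: "p holds until q, within steps 0..3" on a 5-step trace.
trace = [{"p"}, {"p"}, {"q"}, set(), set()]
formula = ("U", 0, 3, ("ap", "p"), ("ap", "q"))
print(holds(formula, trace))   # True: q at step 2, p at steps 0 and 1
```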
Continuous-time Markov decision processes (CTMDPs) are canonical models to express sequential decision-making under dense-time and stochastic environments. When the stochastic evolution of the environment is only available via sampling, model-free reinforcement learning (RL) is the algorithm of choice to compute an optimal decision sequence. RL, on the other hand, requires the learning objective to be encoded as scalar reward signals. Since doing such translations manually is both tedious and error-prone, a number of techniques have been proposed to translate high-level objectives (expressed in logic or automata formalism) to scalar rewards for discrete-time Markov decision processes. Unfortunately, no automatic translation exists for CTMDPs. We consider CTMDP environments against learning objectives expressed as omega-regular languages. Omega-regular languages generalize regular languages to infinite-horizon specifications and can express properties given in the popular linear-time logic LTL. To accommodate the dense-time nature of CTMDPs, we consider two different semantics of omega-regular objectives: 1) satisfaction semantics, where the goal of the learner is to maximize the probability of spending positive time in the good states, and 2) expectation semantics, where the goal of the learner is to optimize the long-run expected average time spent in the "good states" of the automaton. We present an approach enabling correct translation to scalar reward signals that can be readily used by off-the-shelf RL algorithms for CTMDPs. We demonstrate the effectiveness of the proposed algorithms by evaluating them on some popular CTMDP benchmarks with omega-regular objectives.
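Under the expectation semantics, the quantity being optimized is the long-run expected fraction of time spent in good automaton states, which suggests rewarding each sojourn by its duration whenever the product state is good. The sketch below estimates that quantity for a fixed policy in a toy two-state CTMDP with exponential sojourn times; the rates, the good-state set, and the policy are hypothetical placeholders, not a benchmark or algorithm from the paper.

```python
import random

# Toy CTMDP with two states; exit rates depend on (state, action).
RATES = {(0, "a"): 1.0, (0, "b"): 0.5, (1, "a"): 2.0, (1, "b"): 1.0}
GOOD_STATES = {1}                     # "good" states of the product automaton

def sample_transition(state, action):
    """Sample an exponential sojourn time and the successor state."""
    sojourn = random.expovariate(RATES[(state, action)])
    return sojourn, 1 - state         # the successor is the other state here

def long_run_average(policy, horizon=100000.0):
    """Estimate the long-run fraction of time spent in good states."""
    state, elapsed, good_time = 0, 0.0, 0.0
    while elapsed < horizon:
        action = policy(state)
        sojourn, next_state = sample_transition(state, action)
        if state in GOOD_STATES:
            good_time += sojourn      # reward = time spent in a good state
        elapsed += sojourn
        state = next_state
    return good_time / elapsed

# A fixed policy; an RL algorithm would instead optimize this choice.
policy = lambda s: "b" if s == 1 else "a"
print("avg fraction of time in good states:", round(long_run_average(policy), 3))
```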