Title: Formally Verifying Deep Reinforcement Learning Controllers with Lyapunov Barrier Certificates
Deep reinforcement learning (DRL) is a powerful machine learning paradigm for generating agents that control autonomous systems. However, the “black box” nature of DRL agents limits their deployment in real-world safety-critical applications. A promising approach for providing strong guarantees on an agent's behavior is to use Neural Lyapunov Barrier (NLB) certificates, which are learned functions over the system whose properties indirectly imply that an agent behaves as desired. However, NLB-based certificates are typically difficult to learn and even more difficult to verify, especially for complex systems. In this work, we present a novel method for training and verifying NLB-based certificates for discrete-time systems. Specifically, we introduce a technique for certificate composition, which simplifies the verification of highly complex systems by strategically designing a sequence of certificates. When jointly verified with neural network verification engines, these certificates provide a formal guarantee that a DRL agent both achieves its goals and avoids unsafe behavior. Furthermore, we introduce a technique for certificate filtering, which significantly simplifies the process of producing formally verified certificates. We demonstrate the merits of our approach with a case study on providing safety and liveness guarantees for a DRL-controlled spacecraft.
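For context, the conditions such a certificate must satisfy can be stated concretely. The following is a standard discrete-time reach-while-avoid formulation from the literature, not necessarily the exact conditions verified in this paper: V is the learned certificate, f the closed-loop dynamics under the DRL agent, X_0 the initial states, X_u the unsafe states, and X_g the goal states.

```latex
% Illustrative certificate conditions (standard form; the paper's exact
% formulation may differ).
\begin{align*}
  V(x) &\le 0 && \forall x \in X_0 && \text{(initial states are certified)} \\
  V(x) &> 0  && \forall x \in X_u && \text{(unsafe states are excluded)} \\
  V(f(x)) - V(x) &\le -\epsilon && \forall x \in \{x \mid V(x) \le 0\} \setminus X_g && \text{(progress toward the goal)}
\end{align*}
```

Checking these conditions for a neural V and a neural controller is precisely the kind of query that neural network verification engines are designed to dispatch.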
Award ID(s):
2211505
PAR ID:
10584629
Author(s) / Creator(s):
Editor(s):
Narodytska, Nina; Ruemmer, Philipp
Publisher / Repository:
TU Wien Academic Press
Date Published:
Page Range / eLocation ID:
95-106
Subject(s) / Keyword(s):
formal methods
Format(s):
Medium: X
Location:
Prague, Czech Republic
Sponsoring Org:
National Science Foundation
More Like this
  1. Jara, C; Borras_Sol, J (Ed.)
    Deep Reinforcement Learning (DRL) has shown its capability to handle the high degrees of freedom in control and the complex object interactions involved in multi-finger dexterous in-hand manipulation tasks. However, current DRL approaches lack behavior constraints during the learning process, leading to aggressive and unstable policies that are insufficient for safety-critical in-hand manipulation tasks. The centralized learning strategy also limits the flexibility to fine-tune each robot finger's behavior. This work proposes the Finger-specific Multi-agent Shadow Critic Consensus (FMSC) method, which models in-hand manipulation as a multi-agent collaboration task in which each finger is an individual agent. FMSC trains the finger policies to reach a consensus across their critic networks through Information Sharing (IS) among neighboring agents, combined with finger-specific stable manipulation objectives based on the state-action occupancy measure, a general DRL utility that is approximated during the learning process. The method is evaluated in two in-hand manipulation tasks on the Shadow Hand. The results show that FMSC+IS converges faster in training, achieving a comparable success rate and much better manipulation stability than conventional DRL methods.
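A minimal sketch of what a consensus step across neighboring finger critics could look like, under the assumption of a soft parameter-averaging scheme; the names and the update rule are illustrative, not the authors' implementation:

```python
# Hypothetical consensus step for finger-specific critics (illustrative).
import torch

def consensus_update(critics, neighbors, mix=0.5):
    """Softly average each finger critic's parameters with its neighbors'.

    critics:   one critic network (torch.nn.Module) per finger
    neighbors: dict mapping agent index -> indices of neighboring fingers
    mix:       weight kept on the agent's own parameters
    """
    with torch.no_grad():
        # Snapshot all parameters first so every update uses pre-step values.
        snaps = [[p.clone() for p in c.parameters()] for c in critics]
        for i, critic in enumerate(critics):
            nbrs = neighbors[i]
            if not nbrs:
                continue  # an isolated agent keeps its own parameters
            for k, p in enumerate(critic.parameters()):
                avg = sum(snaps[j][k] for j in nbrs) / len(nbrs)
                p.copy_(mix * snaps[i][k] + (1.0 - mix) * avg)
```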
    Deep reinforcement learning (DRL) has demonstrated impressive success in solving complex control tasks by synthesizing control policies from data. However, the safety and stability of applying DRL to safety-critical systems remain a primary concern and a challenging problem. To address this problem, we propose Phy-DRL: a novel physics-model-regulated deep reinforcement learning framework. Phy-DRL is novel in two architectural designs: a physics-model-regulated reward and residual control, which integrates physics-model-based control and data-driven control. Together, these designs enable Phy-DRL to provide mathematically provable safety and stability guarantees. Finally, the effectiveness of Phy-DRL is validated on an inverted pendulum system, and the experimental results demonstrate that Phy-DRL features markedly accelerated training and increased reward.
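The residual-control design described above has a simple shape: the applied action is a physics-model-based action plus a learned correction. A minimal sketch under assumed names (the gain K and the function names are illustrative, not the paper's):

```python
# Sketch of residual control: physics-model action + learned correction.
import numpy as np

K = np.array([[-1.0, -1.5, 18.0, 3.0]])  # illustrative feedback gain, e.g. from LQR

def model_based_action(x):
    # Classical state feedback derived from the (possibly imperfect) physics model.
    return -(K @ x)

def control(x, policy):
    u_model = model_based_action(x)  # keeps the system near its stable regime
    u_residual = policy(x)           # DRL policy learned as a residual term
    return u_model + u_residual
```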
    In recent years, deep reinforcement learning (DRL) approaches have generated highly successful controllers for a myriad of complex domains. However, the opaque nature of these models limits their applicability in aerospace systems and safety-critical domains, in which a single mistake can have dire consequences. In this paper, we present novel advancements in both the training and verification of DRL controllers, which can help ensure their safe behavior. We showcase a design-for-verification approach utilizing k-induction and demonstrate its use in verifying liveness properties. In addition, we give a brief overview of neural Lyapunov Barrier certificates and summarize their capabilities on a case study. Finally, we describe several other novel reachability-based approaches which, despite failing to provide guarantees of interest, could be effective for the verification of other DRL systems and could be of further interest to the community.
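For reference, k-induction generalizes ordinary induction by strengthening the hypothesis to k consecutive steps. In standard form, for a transition system with initial predicate I, transition relation T, and property P (this is the textbook formulation, not necessarily the exact encoding used in the paper):

```latex
% Base case: P holds along every k-step path from an initial state.
% Inductive step: k consecutive P-states are always followed by a P-state.
\begin{align*}
  &I(s_0) \land \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1})
    \;\Rightarrow\; \bigwedge_{i=0}^{k-1} P(s_i) \\
  &\bigwedge_{i=0}^{k-1} \bigl( P(s_i) \land T(s_i, s_{i+1}) \bigr)
    \;\Rightarrow\; P(s_k)
\end{align*}
```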
    In recent years, Deep Reinforcement Learning (DRL) has emerged as an effective approach to solving real-world tasks. However, despite their successes, DRL-based policies suffer from poor reliability, which limits their deployment in safety-critical domains. Various methods have been put forth to address this issue by providing formal safety guarantees. Two main approaches are shielding and verification. Shielding ensures the safe behavior of a policy by employing an external online component (i.e., a “shield”) that overrides potentially dangerous actions, but it has a significant computational cost because the shield must be invoked at runtime to validate every decision. Verification, on the other hand, is an offline process that can identify unsafe policies prior to their deployment, yet it provides no alternative actions when a policy is deemed unsafe. In this work, we present verification-guided shielding, a novel approach that bridges the DRL reliability gap by integrating these two methods. Our approach combines both formal and probabilistic verification tools to partition the input domain into safe and unsafe regions. In addition, we employ clustering and symbolic representation procedures that compress the unsafe regions into a compact representation. This, in turn, allows the shield to be activated temporarily, and only in (potentially) unsafe regions, in an efficient manner. Our approach significantly reduces runtime overhead while still preserving formal safety guarantees. We extensively evaluate it on two benchmarks from the robotic navigation domain and provide an in-depth analysis of its scalability and completeness.
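The runtime logic of verification-guided shielding is easy to sketch: consult the shield only when the current state falls inside a precomputed (potentially) unsafe region. The box-based region representation and the names below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: shield invoked only in precomputed (potentially) unsafe regions.
def act(state, policy, shield, unsafe_boxes):
    """unsafe_boxes: compact output of offline verification; each box is a
    list of (low, high) bounds, one pair per input dimension."""
    action = policy(state)
    in_unsafe = any(
        all(lo <= s <= hi for s, (lo, hi) in zip(state, box))
        for box in unsafe_boxes
    )
    if in_unsafe:
        action = shield(state, action)  # override the action if it is unsafe
    return action
```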
    The Open Radio Access Network (RAN) paradigm is transforming cellular networks into a system of disaggregated, virtualized, and software-based components. These components self-optimize the network through programmable, closed-loop control, leveraging Artificial Intelligence (AI) and Machine Learning (ML) routines. In this context, Deep Reinforcement Learning (DRL) has shown great potential in addressing complex resource allocation problems. However, DRL-based solutions are inherently hard to explain, which hinders their deployment and use in practice. In this paper, we propose EXPLORA, a framework that provides explainability of DRL-based control solutions for the Open RAN ecosystem. EXPLORA synthesizes network-oriented explanations based on an attributed graph that produces a link between the actions taken by a DRL agent (i.e., the nodes of the graph) and the input state space (i.e., the attributes of each node). This novel approach allows EXPLORA to explain models by providing information on the wireless context in which the DRL agent operates. EXPLORA is also designed to be lightweight for real-time operation. We prototype EXPLORA and test it experimentally on an O-RAN-compliant near-real-time RIC deployed on the Colosseum wireless network emulator. We evaluate EXPLORA for agents trained for different purposes and showcase how it generates clear network-oriented explanations. We also show how explanations can be used to perform informative and targeted intent-based action steering and achieve median transmission bitrate improvements of 4% and tail improvements of 10%.
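A toy sketch of the attributed-graph idea described above: nodes are the actions the agent took, node attributes summarize the input states in which they were taken, and edges record observed action transitions. The structure and names are illustrative only, not EXPLORA's actual data model:

```python
# Toy attributed graph linking DRL actions to the states that triggered them.
from collections import defaultdict

def build_attributed_graph(trajectory):
    """trajectory: iterable of (state_features: dict, action) pairs."""
    node_attrs = defaultdict(list)  # action -> state-feature dicts (attributes)
    edges = defaultdict(int)        # (action_t, action_t+1) -> transition count
    prev = None
    for state_features, action in trajectory:
        node_attrs[action].append(state_features)
        if prev is not None:
            edges[(prev, action)] += 1
        prev = action
    return node_attrs, edges
```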