Title: Gaussian Control Barrier Functions: Non-Parametric Paradigm to Safety
Inspired by the success of control barrier functions (CBFs) in addressing safety, and the rise of data-driven techniques for modeling functions, we propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs). A dynamical system is defined to be safe if a subset of its states remains within a prescribed set, called the safe set. CBFs achieve safety by designing a candidate function a priori. However, designing such a function can be challenging. Consider designing a CBF in a disaster-recovery scenario where safe and navigable regions must be determined: the decision boundary for safety is unknown and cannot be designed a priori. Moreover, CBFs employ a parametric design and cannot handle arbitrary changes to the safe set in practice. In our approach, we construct the CBF online from safety samples by placing a flexible GP prior on them, and term the resulting formulation a Gaussian CBF. GPs have favorable properties such as analytical tractability and robust uncertainty estimation, which allow realizing the posterior with high safety guarantees while computing the associated partial derivatives analytically for safe control. Moreover, Gaussian CBFs can change the safe set arbitrarily based on sampled data, thus allowing non-convex safe sets. We validated our approach experimentally on a quadrotor by demonstrating safe control with 1) arbitrary safe sets, 2) collision avoidance with online safe-set synthesis, and 3) a comparison of Gaussian CBFs against conventional CBFs in the presence of noisy states. The experiment video is available at: https://youtu.be/HX6uokvCiGk.
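The core construction, a GP posterior over safety samples whose mean serves as the barrier and whose gradient is available in closed form, can be sketched in a few lines. This is a minimal illustration assuming a squared-exponential kernel and synthetic safety samples; the paper's full formulation also exploits the posterior variance for high-probability guarantees:

```python
import numpy as np

def rbf(A, B, ls=0.5, var=1.0):
    """Squared-exponential kernel k(a, b) = var * exp(-||a-b||^2 / (2 ls^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-d2 / (2.0 * ls**2))

# Synthetic safety samples: values y_i > 0 mark safe states, y_i < 0 unsafe.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 0.5, 0.5, -1.0])

ls, var, noise = 0.5, 1.0, 1e-4
K = rbf(X, X, ls, var) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)          # K^{-1} y, reused for mean and gradient

def h(x):
    """Posterior mean of the Gaussian CBF at state x."""
    return float(rbf(x[None, :], X, ls, var) @ alpha)

def grad_h(x):
    """Analytic gradient: grad_x k(x, X_i) = k(x, X_i) * (X_i - x) / ls^2."""
    k = rbf(x[None, :], X, ls, var).ravel()
    return ((X - x[None, :]) / ls**2 * k[:, None]).T @ alpha
```

With `h` and `grad_h` available analytically, the usual CBF quadratic program can be solved at each control step, and incorporating a newly sampled point only extends `X` and `y`.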
Award ID(s): 1723997, 2128419
PAR ID: 10382583
Publisher / Repository: IEEE
Journal Name: IEEE Access
Volume: 10
ISSN: 2169-3536
Page Range / eLocation ID: 99823 to 99836
Subject(s) / Keyword(s): Control barrier functions, Gaussian processes, non-parametric, safety-critical control
Sponsoring Org: National Science Foundation
More Like this
  1.
    Safety is a critical component of today's autonomous and robotic systems. Many modern controllers endowed with guaranteed safety properties rely on accurate mathematical models of these nonlinear dynamical systems. However, model uncertainty is a persistent challenge that weakens theoretical guarantees and compromises safety, and for safety-critical systems this challenge is even greater. Typically, safety is ensured by constraining the system states within a safe constraint set defined a priori from the system model. A popular approach is to use Control Barrier Functions (CBFs), which encode safety using a smooth function. However, CBFs fail in the presence of model uncertainties; moreover, an inaccurate model can lead to incorrect notions of safety or, worse, critical system failures. Addressing these drawbacks, we present a novel safety formulation that leverages properties of CBFs and positive definite kernels to design Gaussian CBFs. The underlying kernels are updated online by learning the unmodeled dynamics using Gaussian Processes (GPs). While CBFs guarantee forward invariance, the hyperparameters estimated using GPs update the kernel online and thereby adjust the relative notion of safety. We demonstrate the proposed technique in simulation on a safety-critical quadrotor on SO(3) in the presence of model uncertainty. With the kernel update performed online, safety is preserved for the system.
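For the simplest case, single-integrator dynamics with one affine CBF constraint, the safety-filter quadratic program underlying CBF methods like the one above has a closed-form solution. A minimal sketch, where the unit-disk safe set and the gain `gamma` are illustrative choices rather than anything from the paper:

```python
import numpy as np

def cbf_filter(x, u_nom, grad_h, h, gamma=1.0):
    """Minimally modify u_nom so that grad_h(x)^T u + gamma * h(x) >= 0,
    the CBF condition for single-integrator dynamics x_dot = u."""
    a, b = grad_h(x), gamma * h(x)
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom                      # nominal action is already safe
    return u_nom - (slack / (a @ a)) * a  # closest safe action (QP in closed form)

h = lambda x: 1.0 - x @ x                 # illustrative safe set: unit disk
grad_h = lambda x: -2.0 * x

x = np.array([0.9, 0.0])                  # state near the safe-set boundary
u_nom = np.array([1.0, 0.0])              # nominal control pushes outward
u = cbf_filter(x, u_nom, grad_h, h)       # filtered control keeps h >= 0
```

When the nominal input already satisfies the constraint, the filter is inactive and returns it unchanged, so safety is enforced with minimal interference.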
  2. Matni, Nikolai; Morari, Manfred; Pappas, George J. (Ed.)
    Safe reinforcement learning (RL) with assured satisfaction of hard state constraints during training has recently received a lot of attention. Safety filters, e.g., based on control barrier functions (CBFs), provide a promising way to achieve safe RL by modifying the unsafe actions of an RL agent on the fly. Existing safety-filter approaches typically involve learning the uncertain dynamics and quantifying the learned model error, which leads to conservative filters until a large amount of data has been collected, thereby preventing efficient exploration. This paper presents a method for safe and efficient RL using disturbance observers (DOBs) and CBFs. Unlike most existing safe RL methods that handle hard state constraints, our method does not involve model learning; it leverages DOBs to accurately estimate the pointwise value of the uncertainty, which is then incorporated into a robust CBF condition to generate safe actions. The DOB-based CBF can be used as a safety filter with model-free RL algorithms by minimally modifying the actions of an RL agent whenever necessary to ensure safety throughout the learning process. Simulation results on a unicycle and a 2D quadrotor demonstrate that the proposed method outperforms a state-of-the-art safe RL algorithm using CBFs and Gaussian process-based model learning in terms of safety violation rate and sample and computational efficiency.
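The robust CBF condition can be sketched for the simplest dynamics x_dot = u + d, where `d_hat` is the observer's disturbance estimate and `d_err_bound` bounds the estimation error; both are assumed given here, since the actual DOB design is the subject of the paper:

```python
import numpy as np

def robust_cbf_filter(x, u_nom, grad_h, h, d_hat, d_err_bound, gamma=1.0):
    """Safety filter for x_dot = u + d: enforce the worst-case CBF condition
    grad_h^T (u + d_hat) - ||grad_h|| * d_err_bound + gamma * h(x) >= 0."""
    a = grad_h(x)
    b = a @ d_hat - np.linalg.norm(a) * d_err_bound + gamma * h(x)
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom                      # RL action is already robustly safe
    return u_nom - (slack / (a @ a)) * a  # minimal modification (closed-form QP)

h = lambda x: 1.0 - x @ x                 # illustrative safe set: unit disk
grad_h = lambda x: -2.0 * x

x = np.array([0.9, 0.0])
u = robust_cbf_filter(x, np.array([1.0, 0.0]), grad_h, h,
                      d_hat=np.array([0.1, 0.0]), d_err_bound=0.05)
```

The `||grad_h|| * d_err_bound` term robustifies the constraint against the residual observer error, so the filtered action remains safe for any disturbance within the bound.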
  4. In this study, we address the problem of safe control in systems subject to state and input constraints by integrating Control Barrier Functions (CBFs) into the Model Predictive Control (MPC) formulation. While CBFs offer a conservative policy and traditional MPC lacks safety guarantees beyond the finite horizon, the proposed scheme takes advantage of both approaches to provide a guaranteed safe control policy with reduced conservatism and a shortened horizon. The methodology leverages the sum-of-squares (SOS) technique to construct CBFs that yield forward-invariant safe sets in the state space, which are then used as a terminal constraint on the last predicted state. The CBF invariant sets cover the state space around the system's fixed points; these islands of forward-invariant sets are connected to each other using MPC. To do this, we propose a technique for handling the MPC optimization problem subject to combinations of intersections and unions of constraints. Our approach, termed Model Predictive Control Barrier Functions (MPCBF), is validated using numerical examples that demonstrate its efficacy, showing improved performance compared to classical MPC and CBF.
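The role of the CBF terminal constraint can be illustrated with a toy shooting-based MPC for a single integrator: the last predicted state must land in a safe set {h >= 0}. This is only a sketch of the terminal-constraint idea; the paper constructs genuinely forward-invariant sets via SOS programming and solves the MPC as a proper optimization problem:

```python
import numpy as np

rng = np.random.default_rng(0)

h = lambda x: 1.0 - x @ x          # terminal safe set {h >= 0}: unit disk
goal = np.array([1.5, 0.0])        # goal lies outside the safe set
dt, N = 0.1, 5                     # step size and prediction horizon

def rollout(x0, U):
    """Single-integrator rollout x_{k+1} = x_k + dt * u_k."""
    x = x0.copy()
    for u in U:
        x = x + dt * u
    return x

def mpc_cbf(x0, n_samples=500):
    """Random-shooting MPC: keep the lowest-cost input sequence whose
    terminal state satisfies the CBF terminal constraint h(x_N) >= 0."""
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        U = rng.uniform(-1.0, 1.0, size=(N, 2))
        xN = rollout(x0, U)
        if h(xN) < 0:              # reject sequences ending outside the safe set
            continue
        cost = np.sum((xN - goal) ** 2) + 1e-3 * np.sum(U ** 2)
        if cost < best_cost:
            best, best_cost = U, cost
    return best

x0 = np.array([0.7, 0.0])
U = mpc_cbf(x0)                    # in receding horizon, apply U[0] and re-solve
```

The planner drives toward the goal but is cut off by the terminal constraint, which is exactly the effect the terminal CBF set has in the MPCBF scheme.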
  5. Control Barrier Functions (CBFs) are an effective methodology for ensuring safety and performance in real-time control applications such as power systems, resource allocation, autonomous vehicles, and robotics. The approach ensures safety independently of high-level tasks that may have been pre-planned offline; for example, CBFs can be used to guarantee that a vehicle will remain in its lane. However, when the number of agents is large, computing CBFs in the multi-agent setting can suffer from the curse of dimensionality. In this work, we present Mean-field Control Barrier Functions (MF-CBFs), which extend the CBF framework to the mean-field (or swarm control) setting. The core idea is to model a population of agents as a probability measure in the state space and build corresponding control barrier functions. As with traditional CBFs, we derive safety constraints on the (distributed) controls, but now relying on differential calculus in the space of probability measures.
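A crude finite-population sketch of the idea: represent the swarm by an empirical measure over N agents and impose one barrier condition on the population barrier H(mu) = mean of a per-agent barrier h, assuming single-integrator agents. The paper works with genuine probability measures and calculus in measure space, not this simplification:

```python
import numpy as np

h = lambda x: 1.0 - x @ x        # per-agent barrier (illustrative unit disk)
grad_h = lambda x: -2.0 * x

def swarm_filter(X, U_nom, gamma=1.0):
    """Enforce d/dt H(mu) >= -gamma * H(mu) for the population barrier
    H(mu) = (1/N) * sum_i h(x_i), with agent dynamics x_i_dot = u_i."""
    N = len(X)
    a = (np.stack([grad_h(x) for x in X]) / N).ravel()  # joint constraint vector
    b = gamma * np.mean([h(x) for x in X])
    u = U_nom.ravel()
    slack = a @ u + b
    if slack >= 0:
        return U_nom
    # Minimal joint modification of all agents' controls (closed-form QP).
    return (u - (slack / (a @ a)) * a).reshape(U_nom.shape)
```

One affine condition on the stacked controls replaces N separate per-agent constraints, which is the dimensionality saving the mean-field view is after.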