Safe and Efficient Reinforcement Learning Using Disturbance-Observer-Based Control Barrier Functions
Safe reinforcement learning (RL) with assured satisfaction of hard state constraints during
training has recently received significant attention. Safety filters, e.g., those based on control barrier functions
(CBFs), provide a promising approach to safe RL by modifying the unsafe actions of an RL agent on
the fly. Existing safety filter-based approaches typically involve learning the uncertain dynamics and
quantifying the learned model error, which leads to conservative filters until a large amount of data
has been collected to learn an accurate model, thereby hindering efficient exploration. This paper presents a
method for safe and efficient RL using disturbance observers (DOBs) and CBFs.
Unlike most existing safe RL methods that deal with hard state constraints, our method
does not involve model learning, and leverages DOBs to accurately estimate the pointwise value
of the uncertainty, which is then incorporated into a robust CBF condition to generate safe actions.
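The abstract does not state the exact condition; a representative robust CBF constraint of this kind, assuming control-affine dynamics with a matched disturbance, a DOB estimate, and an estimation-error bound (notation below is illustrative, not taken from the paper), is:

```latex
% Representative robust CBF condition (notation assumed, not the paper's exact form):
% dynamics \dot{x} = f(x) + g(x)u + d(x), DOB estimate \hat{d}, error bound \|d - \hat{d}\| \le \epsilon.
L_f h(x) + L_g h(x)\,u + \frac{\partial h}{\partial x}\,\hat{d}
  - \left\lVert \frac{\partial h}{\partial x} \right\rVert \epsilon
  \;\ge\; -\alpha\bigl(h(x)\bigr)
```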
The DOB-based CBF can be used as a safety filter with model-free RL algorithms by minimally
modifying the actions of an RL agent whenever necessary to ensure safety throughout the learning
process. Simulation results on a unicycle and a 2D quadrotor demonstrate that the proposed method
outperforms a state-of-the-art safe RL algorithm that uses CBFs and Gaussian process-based model
learning in terms of safety violation rate, sample efficiency, and computational efficiency.