With the increasing need for safe control in the domain of autonomous driving, model-based safety-critical control approaches are widely used, especially Control Barrier Function (CBF) based approaches. Among them, the Exponential CBF (eCBF) is particularly popular due to its applicability to high-relative-degree systems. However, for most optimization-based controllers utilizing CBF-based constraints, solution feasibility is a common issue arising from potential conflicts among the constraints. Moreover, how to incorporate uncertainty into eCBF-based constraints for high-relative-degree systems while accounting for safety remains an open challenge. In this paper, we present a novel approach that extends an eCBF-based safety-critical controller to a probabilistic setting to handle motion uncertainty from the system dynamics. More importantly, we leverage an optimization-based technique to provide a solution feasibility guarantee at run time while ensuring probabilistic safety. Lane changing and intersection handling are demonstrated as two use cases, and experimental results are provided to show the effectiveness of the proposed approach.
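To make the eCBF machinery concrete, here is a minimal deterministic sketch for a relative-degree-2 system (a double integrator). The probabilistic extension and the runtime feasibility guarantee are the abstract's contributions and are not reproduced here; the gains and function names are illustrative assumptions.

```python
def ecbf_input_bound(p, v, p_max, k1=3.0, k2=2.0):
    """Exponential CBF bound for the relative-degree-2 barrier h = p_max - p
    on a double integrator (p' = v, v' = u). Enforcing
        h'' + k1*h' + k2*h >= 0,  with h' = -v and h'' = -u,
    yields the linear input constraint u <= k2*(p_max - p) - k1*v.
    The gains make s^2 + k1*s + k2 Hurwitz (roots at -1 and -2 here)."""
    return k2 * (p_max - p) - k1 * v

def safe_input(u_des, p, v, p_max):
    """Minimally invasive safety filter: clip the desired acceleration to
    the eCBF bound (the closed-form solution of the scalar CBF-QP)."""
    return min(u_des, ecbf_input_bound(p, v, p_max))
```

Because the constraint is linear in the single input, the quadratic program reduces to a clip; in higher dimensions, or with the probabilistic constraints above, a numerical QP solver is needed instead.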
Control Barrier Functions for Complete and Incomplete Information Stochastic Systems
Real-time controllers must satisfy strict safety requirements. Recently, Control Barrier Functions (CBFs) have been proposed that guarantee safety by ensuring that a suitably defined barrier function remains bounded for all time. The CBF method, however, has only been developed for deterministic systems and systems with worst-case disturbances and uncertainties. In this paper, we develop a CBF framework for safety of stochastic systems. We consider complete information systems, in which the controller has access to the exact system state, as well as incomplete information systems, where the state must be reconstructed from noisy measurements. In the complete information case, we formulate a notion of barrier functions that leads to sufficient conditions for safety with probability 1. In the incomplete information case, we formulate barrier functions that take an estimate from an extended Kalman filter as input, and derive bounds on the probability of safety as a function of the asymptotic error in the filter. We show that, in both cases, the sufficient conditions for safety can be mapped to linear constraints on the control input at each time, enabling the development of tractable optimization-based controllers that guarantee safety, performance, and stability. Our approach is evaluated via a simulation case study on adaptive cruise control.
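Because the safety condition maps to a linear constraint on the input, the scalar case admits a closed-form controller. Below is a minimal deterministic sketch for the adaptive cruise control setting, using an assumed time-headway barrier h = D - T_h·v; the paper's stochastic and output-feedback conditions are not reproduced here.

```python
def acc_cbf_qp(v, v_lead, D, u_des, T_h=1.8, alpha=1.0):
    """Closed-form solution of the scalar CBF-QP
        min_u (u - u_des)^2   s.t.   h_dot >= -alpha * h,
    with barrier h = D - T_h * v, where D is the gap to the lead vehicle
    and v, v_lead are the follower/lead speeds. Since
    h_dot = (v_lead - v) - T_h * u, safety is the linear upper bound
        u <= ((v_lead - v) + alpha * h) / T_h."""
    h = D - T_h * v
    u_max = ((v_lead - v) + alpha * h) / T_h  # CBF constraint on u
    return min(u_des, u_max)
```

When the desired acceleration already satisfies the barrier constraint the filter is inactive; otherwise the vehicle brakes exactly hard enough to satisfy it.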
- Award ID(s): 1656981
- Publication Date:
- NSF-PAR ID: 10131729
- Journal Name: American Control Conference (ACC)
- Page Range or eLocation-ID: 2928-2935
- Sponsoring Org: National Science Foundation
More Like this
- This work provides a decentralized approach to safety by combining tools from control barrier functions (CBF) and nonlinear model predictive control (NMPC). It is shown how leveraging backup safety controllers allows for the robust implementation of CBFs over the NMPC computation horizon, ensuring safety in nonlinear systems with actuation constraints. A leader-follower approach to control barrier function (LFCBF) enforcement is introduced as a strategy to enable a robot leader, in multi-robot interactions, to complete its task in minimum time, hence maneuvering aggressively. An algorithmic implementation of the proposed solution is provided, and safety is verified via simulation.
- Modern nonlinear control theory seeks to develop feedback controllers that endow systems with properties such as safety and stability. The guarantees ensured by these controllers often rely on accurate estimates of the system state for determining control actions. In practice, measurement model uncertainty can lead to error in state estimates that degrades these guarantees. In this paper, we seek to unify techniques from control theory and machine learning to synthesize controllers that achieve safety in the presence of measurement model uncertainty. We define the notion of a Measurement-Robust Control Barrier Function (MR-CBF) as a tool for determining safe control inputs when facing measurement model uncertainty. Furthermore, MR-CBFs are used to inform sampling methodologies for learning-based perception systems and quantify tolerable error in the resulting learned models. We demonstrate the efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a simulated Segway system.
- Control barrier functions are mathematical constructs used to guarantee safety for robotic systems. When integrated as constraints in a quadratic programming optimization problem, instantaneous control synthesis with real-time performance demands can be achieved for robotics applications. Prevailing use has assumed full knowledge of the safety barrier functions; however, there are cases where the safe regions must be estimated online from sensor measurements. In these cases, the corresponding barrier function must be synthesized online. This paper describes a learning framework for estimating control barrier functions from sensor data. Doing so affords system operation in unknown state space regions without compromising safety. Here, a support vector machine classifier provides the barrier function specification as determined by sets of safe and unsafe states obtained from sensor measurements. Theoretical safety guarantees are provided. Experimental ROS-based simulation results for an omnidirectional robot equipped with LiDAR demonstrate safe operation.
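The SVM-based barrier construction described above can be sketched in a few lines with scikit-learn (an assumption; the paper's own pipeline, kernel, and data differ). Here, hypothetical 2-D states inside the unit disk are labeled safe, mimicking safe/unsafe sets obtained from range measurements, and the classifier's signed decision function plays the role of the learned barrier candidate.

```python
import numpy as np
from sklearn.svm import SVC  # standard SVM classifier; illustrative choice

# Hypothetical training data: states inside the unit disk are safe (+1).
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(400, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.0, 1, -1)

# The signed decision function of the trained classifier serves as the
# learned barrier-function candidate: h(x) > 0 on estimated-safe states,
# h(x) < 0 on unsafe ones, with the zero level set approximating the boundary.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)

def h(x):
    """Learned barrier candidate evaluated at a single state x."""
    return clf.decision_function(np.atleast_2d(x))[0]
```

In a full pipeline, h would then be differentiated (analytically for the RBF kernel) and enforced as a CBF constraint in the quadratic program.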
- Shared autonomy provides a framework where a human and an automated system, such as a robot, jointly control the system's behavior, enabling an effective solution for various applications, including human-robot interaction and remote operation of a semi-autonomous system. However, a challenging problem in shared autonomy is safety, because the human input may be unknown and unpredictable, which affects the robot's safety constraints. If the human input is a force applied through physical contact with the robot, it also alters the robot's behavior to maintain safety. We address the safety issue of shared autonomy in real-time applications by proposing a two-layer control framework. In the first layer, we use the history of human input measurements to infer what the human wants the robot to do and define the robot's safety constraints according to that inference. In the second layer, we formulate a rapidly-exploring random tree of barrier pairs, with each barrier pair composed of a barrier function and a controller. Using the controllers in these barrier pairs, the robot is able to maintain its safe operation under intervention from the human input. This proposed control framework allows the robot to assist the human while preventing them from encountering safety issues.