Title: Opportunistic Safety Outside the Maximal Controlled Invariant Set
Existing safety control methods for non-stochastic systems become undefined when the system operates outside the maximal robust controlled invariant set (RCIS), making those methods vulnerable to unexpected initial states or unmodeled disturbances. In this work, we propose a novel safety control framework that works both inside and outside the maximal RCIS by identifying, at each state, a worst-case disturbance that can be handled and constructing control inputs robust to that worst-case disturbance model. We show that such disturbance models and control inputs can be jointly computed by considering an invariance problem for an auxiliary system. Finally, we demonstrate the efficacy of our method both in simulation and in a drone experiment.
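As a toy illustration of the idea of handling the largest disturbance possible at each state, the following sketch (not the paper's algorithm; the scalar system, bounds, and one-step safety criterion are assumptions for illustration) picks, for a scalar system x+ = x + u + w, the input that maximizes the disturbance level that can still be tolerated in one step.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): scalar system
#   x+ = x + u + w,  |u| <= u_max,  nominal disturbance bound |w| <= w_max,
#   safety constraint |x| <= x_max.
# At each state we pick the input that leaves the most margin and report the
# largest disturbance level d <= w_max that one-step safety can still tolerate;
# d = 0 flags states where no disturbance can be tolerated.

x_max, u_max, w_max = 1.0, 0.5, 0.3

def safe_input_and_disturbance_level(x):
    # Choose u minimizing |x + u|, then the tolerable level is
    # d = max(0, x_max - |x + u|), capped at the nominal bound w_max.
    u = float(np.clip(-x, -u_max, u_max))
    slack = x_max - abs(x + u)
    d = float(np.clip(slack, 0.0, w_max))
    return u, d

for x0 in (0.2, 1.1, 1.6):   # inside, slightly outside, far outside the safe set
    u, d = safe_input_and_disturbance_level(x0)
    print(f"x = {x0:+.2f}: u = {u:+.2f}, tolerable |w| <= {d:.2f}")
```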
Award ID(s): 1931982
PAR ID: 10568116
Author(s) / Creator(s): ; ; ;
Publisher / Repository: IEEE
Date Published:
Journal Name: IEEE Control Systems Letters
Volume: 7
ISSN: 2475-1456
Page Range / eLocation ID: 3992–3997
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this paper, we first propose a method that can efficiently compute the maximal robust controlled invariant set for discrete-time linear systems with pure delay in the input. The key to this method is to construct an auxiliary linear system (without delay) with the same state-space dimension as the original system and to relate the maximal invariant set of the auxiliary system to that of the original system. When the system is subject to disturbances, guaranteeing safety is harder for systems with input delays, and the ability to incorporate any additional information about the disturbance becomes more critical. Motivated by this observation, in the second part of the paper we generalize the proposed method to take into account additional preview information on the disturbances, while maintaining computational efficiency. Compared with the naive approach of constructing a higher-dimensional system by appending the state space with the delayed inputs and previewed disturbances, the proposed approach is demonstrated to scale much better as the delay time increases.
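The abstract above relates the delayed system to a delay-free auxiliary system of the same dimension. One standard predictor-style transformation with this property is sketched below; it is an illustrative reconstruction under assumed dynamics x_{k+1} = A x_k + B u_{k-tau} + E w_k, not necessarily the construction used in the paper.

```python
import numpy as np

# Predictor-style reduction (illustrative; assumed dynamics, not necessarily
# the paper's construction).  For  x_{k+1} = A x_k + B u_{k-tau} + E w_k,
# defining  z_k = A^tau x_k + sum_{i=1}^{tau} A^{i-1} B u_{k-i}  yields the
# delay-free, same-dimensional auxiliary system
#   z_{k+1} = A z_k + B u_k + A^tau E w_k,
# so no augmentation with tau delayed inputs is needed.

n, m, tau = 2, 1, 3
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
E = np.eye(n)
A_tau = np.linalg.matrix_power(A, tau)

def predictor_state(x, u_hist):
    """z_k from x_k and the committed inputs u_hist = [u_{k-1}, ..., u_{k-tau}]."""
    z = A_tau @ x
    for i, u in enumerate(u_hist, start=1):
        z = z + np.linalg.matrix_power(A, i - 1) @ B @ u
    return z

# Simulate a few steps and check that z obeys the delay-free dynamics.
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
u_hist = [rng.standard_normal(m) for _ in range(tau)]
for _ in range(5):
    u, w = rng.standard_normal(m), 0.01 * rng.standard_normal(n)
    z = predictor_state(x, u_hist)
    x = A @ x + B @ u_hist[-1] + E @ w          # the delayed input u_{k-tau} acts now
    u_hist = [u] + u_hist[:-1]
    assert np.allclose(predictor_state(x, u_hist), A @ z + B @ u + A_tau @ E @ w)
print("delay-free auxiliary dynamics verified")
```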
  2. Real-time controllers must satisfy strict safety requirements. Recently, Control Barrier Functions (CBFs) have been proposed that guarantee safety by ensuring that a suitably defined barrier function remains bounded for all time. The CBF method, however, has only been developed for deterministic systems and systems with worst-case disturbances and uncertainties. In this paper, we develop a CBF framework for safety of stochastic systems. We consider complete information systems, in which the controller has access to the exact system state, as well as incomplete information systems, where the state must be reconstructed from noisy measurements. In the complete information case, we formulate a notion of barrier functions that leads to sufficient conditions for safety with probability 1. In the incomplete information case, we formulate barrier functions that take an estimate from an extended Kalman filter as input, and we derive bounds on the probability of safety as a function of the asymptotic error in the filter. We show that, in both cases, the sufficient conditions for safety can be mapped to linear constraints on the control input at each time, enabling the development of tractable optimization-based controllers that guarantee safety, performance, and stability. Our approach is evaluated via a simulation study of adaptive cruise control.
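For context, the deterministic starting point referenced above maps the barrier condition to a linear constraint on the input, yielding a small quadratic program. A minimal sketch follows, with a single affine constraint solved in closed form; the system, barrier, and gains are illustrative assumptions, and the stochastic and output-feedback terms developed in the paper are not reproduced here.

```python
import numpy as np

# Minimal deterministic CBF-QP sketch (illustrative assumptions throughout):
# at each step, solve  min ||u - u_nom||^2  s.t.  a^T u + b >= 0,
# where a, b come from the barrier condition. With one affine constraint the
# solution is a closed-form projection onto the constraint half-space.

def cbf_filter(u_nom, a, b):
    violation = -(a @ u_nom + b)
    if violation <= 0.0:
        return u_nom                   # nominal input already satisfies the barrier
    return u_nom + (violation / (a @ a)) * a

# Toy system (assumed): x_dot = u with barrier h(x) = 1 - x, so the condition
# h_dot + alpha*h >= 0 reads  -u + alpha*(1 - x) >= 0.
x, alpha = 0.8, 1.0
a = np.array([-1.0])
b = alpha * (1.0 - x)
u_nom = np.array([1.0])                # nominal controller pushes toward the boundary
print(cbf_filter(u_nom, a, b))         # -> [0.2], exactly on the barrier condition
```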
  3. We present a simple model-free control algorithm that is able to robustly learn and stabilize an unknown discrete-time linear system with full control and state feedback, subject to arbitrary bounded disturbance and noise sequences. The controller does not require any prior knowledge of the system dynamics, disturbances, or noise, yet it can guarantee robust stability, uniform asymptotic bounds, and uniform worst-case bounds on the state deviation. Rather than the algorithm itself, we would like to highlight the new approach taken towards robust stability analysis, which served as a key enabler for the presented stability and performance guarantees. We conclude with simulation results showing that, despite its generality and simplicity, the controller achieves good closed-loop performance.
  4. This work proposes an accelerated first-order algorithm we call the Robust Momentum Method for optimizing smooth strongly convex functions. The algorithm has a single scalar parameter that can be tuned to trade off robustness to gradient noise against worst-case convergence rate. At one extreme, the algorithm is faster than Nesterov's Fast Gradient Method by a constant factor but more fragile to noise. At the other extreme, the algorithm reduces to the Gradient Method and is very robust to noise. The algorithm design technique is inspired by methods from classical control theory, and the resulting algorithm has a simple analytical form. The algorithm's performance is verified in a series of numerical simulations, in both the noise-free and relative-gradient-noise cases.
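A minimal sketch of the momentum-iteration family referenced above is given below; the update structure (look-ahead gradient step plus momentum) is generic, and the paper's specific single-parameter tuning of the step sizes is not reproduced. Setting the momentum parameter to zero recovers plain gradient descent, which illustrates the direction of the trade-off described in the abstract.

```python
import numpy as np

# Generic momentum iteration (illustrative; NOT the paper's exact tuning):
#   y_k     = x_k + momentum * (x_k - x_{k-1})        # look-ahead point
#   x_{k+1} = x_k + momentum * (x_k - x_{k-1}) - alpha * grad_f(y_k)
# The Robust Momentum Method selects its step sizes through a single scalar
# parameter; those formulas are given in the paper and not reproduced here.
# Setting momentum = 0 recovers plain gradient descent.

def momentum_method(grad_f, x0, alpha, momentum, iters=200):
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        d = x - x_prev
        y = x + momentum * d
        x_prev, x = x, x + momentum * d - alpha * grad_f(y)
    return x

# Example: strongly convex quadratic f(x) = 0.5 * x^T Q x, minimizer at 0.
Q = np.diag([1.0, 10.0])
grad_f = lambda x: Q @ x
print(momentum_method(grad_f, np.array([5.0, 5.0]), alpha=0.1, momentum=0.6))
```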
  5. Machine-learning-driven, image-based controllers allow robotic systems to take intelligent actions based on the visual feedback from their environment. Understanding when these controllers might lead to system safety violations is important for their integration in safety-critical applications and for engineering corrective safety measures for the system. Existing methods leverage simulation-based testing (or falsification) to find the failures of vision-based controllers, i.e., the visual inputs that lead to closed-loop safety violations. However, these techniques do not scale well to scenarios involving high-dimensional and complex visual inputs, such as RGB images. In this work, we cast the problem of finding closed-loop vision failures as a Hamilton-Jacobi (HJ) reachability problem. Our approach blends simulation-based analysis with HJ reachability methods to compute an approximation of the backward reachable tube (BRT) of the system, i.e., the set of unsafe states for the system under vision-based controllers. Utilizing the BRT, we can tractably and systematically find the system states and corresponding visual inputs that lead to closed-loop failures. These visual inputs can be subsequently analyzed to find the input characteristics that might have caused the failure. Besides its scalability to high-dimensional visual inputs, an explicit computation of the BRT allows the proposed approach to capture non-trivial system failures that are difficult to expose via random simulations. We demonstrate our framework on two case studies involving an RGB-image-based neural network controller for (a) autonomous indoor navigation and (b) autonomous aircraft taxiing.
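As a rough illustration of approximating a closed-loop backward reachable tube for a black-box controller, the sketch below rolls out a placeholder policy from grid states and marks those whose trajectories reach a failure set; the dynamics, policy, and failure set are hypothetical, and this exhaustive-rollout approximation merely stands in for the HJ-reachability computation described in the abstract.

```python
import numpy as np

# Rough sketch: approximate the closed-loop backward reachable tube (BRT) of a
# black-box controller by exhaustive rollouts from grid states. Everything here
# (dynamics, goal-seeking placeholder policy, box-shaped failure set) is a
# hypothetical stand-in for the vision-based setup and the HJ computation.

GOAL = np.array([0.9, 0.9])

def controller(x):
    # Placeholder for the learned image-based policy (assumption).
    return GOAL - x

def dynamics(x, u, dt=0.1):
    return x + dt * u

def in_failure_set(x):
    # Hypothetical obstacle: axis-aligned box centered at (0.5, 0.5).
    return bool(np.all(np.abs(x - 0.5) < 0.2))

def approximate_brt(grid_pts, horizon=50):
    """Collect grid states whose closed-loop rollout enters the failure set."""
    unsafe = []
    for x0 in grid_pts:
        x = x0.copy()
        for _ in range(horizon):
            if in_failure_set(x):
                unsafe.append(x0)
                break
            x = dynamics(x, controller(x))
    return np.array(unsafe)

axis = np.linspace(-1.0, 1.0, 21)
grid = np.array([[a, b] for a in axis for b in axis])
brt = approximate_brt(grid)
print(f"{len(brt)} of {len(grid)} grid states lead to closed-loop failure")
```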