

Title: Guaranteed Safe Path and Trajectory Tracking via Reachability Analysis Using Differential Inequalities
In many automated motion planning systems, vehicles are tasked with tracking a reference path or trajectory that is safe by design. However, due to various uncertainties, real vehicles may deviate from such references, potentially leading to collisions. This paper presents rigorous reachable set bounding methods for rapidly enclosing the set of possible deviations under uncertainty, which is critical information for online safety verification. The proposed approach applies recent advances in the theory of differential inequalities that exploit redundant model equations to achieve sharp bounds using only simple interval calculations. These methods have been shown to produce very sharp bounds at low cost for nonlinear systems in other application domains, but they rely on problem-specific insights to identify appropriate redundant equations, which makes them difficult to generalize and automate. Here, we demonstrate the application of these methods to tracking problems for the first time using three representative case studies. We find that defining redundant equations in terms of Lyapunov-like functions is particularly effective. The results show that this technique can produce effective bounds with computational times that are orders of magnitude less than the planned time horizon, making this a promising approach for online safety verification. This performance, however, comes at the cost of low generalizability, specifically due to the need for problem-specific insights and advantageous problem structure, such as the existence of appropriate Lyapunov-like functions.  more » « less
Award ID(s):
1949748
PAR ID:
10431461
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Journal of Intelligent & Robotic Systems
ISSN:
1573-0409
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Vector Lyapunov functions are a multi-dimensional extension of the more familiar (scalar) Lyapunov functions, commonly used to prove stability properties in systems described by non-linear ordinary differential equations (ODEs). This paper explores an analogous vector extension for so-called barrier certificates used in safety verification. As with vector Lyapunov functions, the approach hinges on constructing appropriate comparison systems, i.e., related differential equation systems from which properties of the original system may be inferred. The paper presents an accessible development of the approach, demonstrates that most previous notions of barrier certificate are special cases of comparison systems, and discusses the potential applications of vector barrier certificates in safety verification and invariant synthesis. 
  2. This tutorial paper focuses on safe physics-informed machine learning in the context of dynamics and control, providing a comprehensive overview of how to integrate physical models and safety guarantees. As machine learning techniques enhance the modeling and control of complex dynamical systems, ensuring safety and stability remains a critical challenge, especially in safety-critical applications like autonomous vehicles, robotics, medical decision-making, and energy systems. We explore various approaches for embedding and ensuring safety constraints, including structural priors, Lyapunov and Control Barrier Functions, predictive control, projections, and robust optimization techniques. Additionally, we delve into methods for uncertainty quantification and safety verification, including reachability analysis and neural network verification tools, which help validate that control policies remain within safe operating bounds even in uncertain environments. The paper includes illustrative examples demonstrating the implementation aspects of safe learning frameworks that combine the strengths of data-driven approaches with the rigor of physical principles, offering a path toward the safe control of complex dynamical systems.
  3. Advances in deep learning have revolutionized cyber-physical applications, including the development of autonomous vehicles. However, real-world collisions involving autonomous control of vehicles have raised significant safety concerns regarding the use of deep neural networks (DNNs) in safety-critical tasks, particularly perception. The inherent unverifiability of DNNs poses a key challenge in ensuring their safe and reliable operation. In this work, we propose perception simplex, a fault-tolerant application architecture designed for obstacle detection and collision avoidance. We analyse an existing LiDAR-based classical obstacle detection algorithm to establish strict bounds on its capabilities and limitations. Such analysis and verification have not been possible for deep learning-based perception systems yet. By employing verifiable obstacle detection algorithms, perception simplex identifies obstacle existence detection faults in the output of unverifiable DNN-based object detectors. When faults with potential collision risks are detected, appropriate corrective actions are initiated. Through extensive analysis and software-in-the-loop simulations, we demonstrate that perception simplex provides deterministic fault tolerance against obstacle existence detection faults, establishing a robust safety guarantee.
  4. We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantee of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We show experiments on how the new methods obtain high-quality solutions for challenging robot control problems such as path tracking for wheeled vehicles and humanoid robot balancing. 
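The learner-falsifier interaction described above can be sketched with a hand-rolled toy loop (the paper trains neural network Lyapunov functions with a systematic falsifier; the plant x' = -x, the one-parameter candidate family V_c(x) = (x - c)^2, the tiny hypothesis grid, and the random-search falsifier below are all illustrative assumptions):

```python
import random

# Toy counterexample-guided loop in the spirit of a learner/falsifier scheme.
# Plant: x' = -x on [-1, 1]. Candidate family: V_c(x) = (x - c)**2.
# Only c = 0 yields a valid Lyapunov function on the whole region.

def vdot(c, x):
    return 2.0 * (x - c) * (-x)        # dV_c/dt along trajectories of x' = -x

def learner(witnesses):
    """Pick the first candidate that decreases at every witness state."""
    for c in [0.5, -0.5, 0.25, 0.0]:   # tiny hypothesis grid (c = 0 is valid)
        if all(vdot(c, x) < 0.0 for x in witnesses):
            return c

def falsifier(c, n=4000, seed=2):
    """Random search for a state where the decrease condition fails."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        if abs(x) > 1e-3 and vdot(c, x) >= 0.0:
            return x                   # counterexample found
    return None                        # no counterexample: candidate accepted

witnesses = [0.9]                      # initial witness set
while True:
    c = learner(witnesses)
    cex = falsifier(c)
    if cex is None:
        break                          # falsifier gives up: loop terminates
    witnesses.append(cex)              # counterexample refines the witness set

print(c)
```

Each round permanently excludes the failed candidate (its own counterexample joins the witness set), so the loop terminates at the valid candidate c = 0; the paper's contribution is making this interaction work at the scale of neural candidates with a sound falsifier rather than random search.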
  5. This paper presents a counterexample-guided iterative algorithm to compute convex, piecewise linear (polyhedral) Lyapunov functions for continuous-time piecewise linear systems. Polyhedral Lyapunov functions provide an alternative to commonly used polynomial Lyapunov functions. Our approach first characterizes intrinsic properties of a polyhedral Lyapunov function including its “eccentricity” and “robustness” to perturbations. We then derive an algorithm that either computes a polyhedral Lyapunov function proving that the system is asymptotically stable, or concludes that no polyhedral Lyapunov function exists whose eccentricity and robustness parameters satisfy some user-provided limits. Significantly, our approach places no a priori bound on the number of linear pieces that make up the desired polyhedral Lyapunov function. The algorithm alternates between a learning step and a verification step, always maintaining a finite set of witness states. The learning step solves a linear program to compute a candidate Lyapunov function compatible with a finite set of witness states. In the verification step, our approach verifies whether the candidate Lyapunov function is a valid Lyapunov function for the system. If verification fails, we obtain a new witness. We prove a theoretical bound on the maximum number of iterations needed by our algorithm. We demonstrate the applicability of the algorithm on numerical examples.
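The verification step for a polyhedral candidate can be illustrated in isolation (a sketch under assumptions: the learning LP is omitted, the system matrix is chosen by hand, the candidate is fixed to V(x) = ||x||_inf, and random sampling here is of course weaker than the exact verification the paper performs). V is a max of the four linear pieces ±x1, ±x2, and validity requires every active piece to decrease along the flow:

```python
import random

# Sketch of the verification step only. System: x' = A x with a stable A.
# Candidate polyhedral function: V(x) = ||x||_inf = max(+x1, -x1, +x2, -x2).

A = [[-1.0, 0.5],
     [0.0, -1.0]]

def decrease_holds(x, margin=1e-9):
    """Check that every active piece of the max decreases along x' = A x."""
    ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    v = max(abs(x[0]), abs(x[1]))
    ok = True
    for i in range(2):
        if abs(x[i]) >= v - margin:                    # piece i is active
            ok = ok and ((1 if x[i] >= 0 else -1) * ax[i] < 0)
    return ok

rng = random.Random(3)
samples = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(5000)]
valid = all(decrease_holds(x) for x in samples if max(map(abs, x)) > 1e-6)
print(valid)
```

A failed sample would become a new witness state for the learning LP; the paper replaces the sampling above with an exact check, which is what makes the resulting certificate sound.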