This paper presents a counterexample-guided iterative algorithm to compute convex, piecewise linear (polyhedral) Lyapunov functions for continuous-time piecewise linear systems. Polyhedral Lyapunov functions provide an alternative to commonly used polynomial Lyapunov functions. Our approach first characterizes intrinsic properties of a polyhedral Lyapunov function, including its “eccentricity” and “robustness” to perturbations. We then derive an algorithm that either computes a polyhedral Lyapunov function proving that the system is asymptotically stable, or concludes that no polyhedral Lyapunov function exists whose eccentricity and robustness parameters satisfy some user-provided limits. Significantly, our approach places no a priori bound on the number of linear pieces that make up the desired polyhedral Lyapunov function. The algorithm alternates between a learning step and a verification step, always maintaining a finite set of witness states. The learning step solves a linear program to compute a candidate Lyapunov function compatible with the current witness states. In the verification step, our approach checks whether the candidate is a valid Lyapunov function for the system. If verification fails, we obtain a new witness. We prove a theoretical bound on the maximum number of iterations needed by our algorithm. We demonstrate the applicability of the algorithm on numerical examples.
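As a rough illustration of the verification step, the sketch below uses a sampling-based falsifier as a stand-in for the paper's exact verifier. The 2-D linear dynamics, the diamond-shaped candidate, and the decrease margin are all assumptions for illustration, not the paper's benchmarks: the falsifier searches for a witness state where the candidate polyhedral function fails to strictly decrease along the flow.

```python
import random

# Hypothetical 2-D stable linear mode dx/dt = A x (an assumption; the paper
# handles piecewise linear systems with multiple modes).
A = [[-1.0, 2.0], [-2.0, -1.0]]

def f(x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def V(x, rows):
    # Polyhedral Lyapunov candidate V(x) = max_i <c_i, x>
    return max(c[0]*x[0] + c[1]*x[1] for c in rows)

def lie_derivative(x, rows, h=1e-6):
    # One-sided derivative of V along the flow, via a forward difference
    fx = f(x)
    xp = [x[0] + h*fx[0], x[1] + h*fx[1]]
    return (V(xp, rows) - V(x, rows)) / h

def falsify(rows, n_samples=2000, margin=-1e-3, seed=0):
    # Sampling-based stand-in for the exact verification step: look for a
    # witness state where V fails to decrease with the required margin.
    rng = random.Random(seed)
    for _ in range(n_samples):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        if abs(x[0]) + abs(x[1]) < 1e-2:
            continue  # skip a neighborhood of the origin
        if lie_derivative(x, rows) > margin:
            return x  # counterexample: a new witness state
    return None

# Candidate: the diamond norm |x1| + |x2|, encoded by its 4 facet normals.
# For these dynamics it is NOT a Lyapunov function, so a witness is found,
# which the learning step would then fold into its linear program.
diamond = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
witness = falsify(diamond)
```

In the full algorithm this witness would be added to the witness set, and the learning step's linear program would be re-solved to produce a new candidate.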
Estimating the Region of Attraction Using Polynomial Optimization: A Converse Lyapunov Result
In this paper, we propose an iterative method for using SOS programming to estimate the region of attraction of a polynomial vector field, the conjectured convergence of which necessitates the existence of polynomial Lyapunov functions whose sublevel sets approximate the true region of attraction arbitrarily well. The main technical result of the paper is the proof of existence of such a Lyapunov function. Specifically, we use the Hausdorff distance metric to analyze convergence and, in the main theorem, demonstrate that the existence of an n-times continuously differentiable maximal Lyapunov function implies that for any ε > 0, there exists a polynomial Lyapunov function and associated sublevel set which together prove stability of a set which is within ε Hausdorff distance of the true region of attraction. The proposed iterative method and its conjectured convergence are illustrated with a numerical example.
- Award ID(s): 1538374
- PAR ID: 10073375
- Date Published:
- Journal Name: Proceedings of the IEEE Conference on Decision & Control
- ISSN: 2576-2370
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Deep learning methods have been widely used in robotic applications, making learning-enabled control design for complex nonlinear systems a promising direction. Although deep reinforcement learning methods have demonstrated impressive empirical performance, they lack the stability guarantees that are important in safety-critical situations. One way to provide these guarantees is to learn Lyapunov certificates alongside control policies. There are three related problems: 1) verify that a given Lyapunov function candidate satisfies the conditions for a given controller on a region, 2) find a valid Lyapunov function and controller on a given region, and 3) find a valid Lyapunov function and a controller such that the region of attraction is as large as possible. Previous work has shown that if the dynamics are piecewise linear, it is possible to solve problems 1) and 2) by solving a Mixed-Integer Linear Program (MILP). In this work, we build upon this method by proposing a Lyapunov neural network that considers monotonicity over half-spaces in different directions. We 1) propose a specific choice of Lyapunov function architecture that ensures non-negativity and a unique global minimum by construction, and 2) show that this can be leveraged to find the controller and Lyapunov certificates faster and with a larger valid region by maximizing the size of a square inscribed in a given level set. We apply our method to a 2-D inverted pendulum, unicycle path following, a 3-D feedback system, and a 4-D cart pole system, and demonstrate it can shorten the training time by half compared to the baseline, as well as find a larger ROA.
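One common way to obtain non-negativity and a unique global minimum at the origin by construction is to take V(x) = |φ(x) − φ(0)|₁ + ε|x|₁ for a ReLU network φ. The sketch below is a hedged illustration of that idea only; the paper's exact architecture, layer sizes, and the value of ε here are assumptions.

```python
import random

def relu(v):
    return [max(0.0, a) for a in v]

def linear(W, b, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

class TinyReLUNet:
    """A small randomly initialized ReLU network phi (untrained, for illustration)."""
    def __init__(self, sizes, seed=0):
        rng = random.Random(seed)
        self.layers = []
        for n_in, n_out in zip(sizes, sizes[1:]):
            W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
            b = [rng.uniform(-1, 1) for _ in range(n_out)]
            self.layers.append((W, b))

    def __call__(self, x):
        for i, (W, b) in enumerate(self.layers):
            x = linear(W, b, x)
            if i < len(self.layers) - 1:
                x = relu(x)
        return x

def lyapunov(net, x, eps=0.1):
    # V(x) = |phi(x) - phi(0)|_1 + eps * |x|_1
    # By construction: V(0) = 0 exactly, and V(x) >= eps*|x|_1 > 0 for x != 0,
    # so the origin is the unique global minimum regardless of the weights.
    phi0 = net([0.0] * len(x))
    phix = net(x)
    return (sum(abs(a - b) for a, b in zip(phix, phi0))
            + eps * sum(abs(xi) for xi in x))

net = TinyReLUNet([2, 8, 8, 1])
```

The point of the construction is that positive-definiteness never has to be verified by the MILP; only the decrease condition does.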
-
We propose a new distributed learning-based framework for stability assessment of a class of networked nonlinear systems, where each subsystem is dissipative. The aim is to learn, in a distributed manner, a Lyapunov function and associated region of attraction for the networked system. We begin by using a neural network function approximation to learn a storage function for each subsystem such that the subsystem satisfies a local dissipativity property. We next use a satisfiability modulo theories (SMT) solver based falsifier that verifies the local dissipativity of each subsystem by determining an absence of counterexamples that violate the local dissipativity property, as established by the neural network approximation. Finally, we verify network-level stability by using an alternating direction method of multipliers (ADMM) approach to update the storage function of each subsystem in a distributed manner until a global stability condition for the network of dissipative subsystems is satisfied. This step also leads to a network-level Lyapunov function that we then use to estimate a region of attraction. We illustrate the proposed algorithm and its advantages on a microgrid interconnection with power electronics interfaces.
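The distributed update mechanism can be illustrated with scalar consensus ADMM, a much-simplified stand-in for the storage-function update above: each subsystem minimizes a private objective while the dual updates drive all local variables to agreement. The quadratic objectives, penalty parameter, and iteration count below are assumptions for illustration only.

```python
# Minimal scalar consensus ADMM: subsystem i holds local objective
# (1/2)(x_i - a_i)^2 and all subsystems must agree on a common value z.
# A toy stand-in for the distributed storage-function update; the local
# objectives, rho, and iteration count are assumptions.

a = [1.0, 4.0, -2.0, 5.0]   # local data held by each subsystem
rho = 1.0                    # ADMM penalty parameter (assumed)
x = list(a)                  # local primal variables
u = [0.0] * len(a)           # scaled dual variables
z = 0.0                      # consensus variable

for _ in range(60):
    # local x-update, solvable independently by each subsystem:
    # argmin (1/2)(x_i - a_i)^2 + (rho/2)(x_i - z + u_i)^2
    x = [(ai + rho * (z - ui)) / (1.0 + rho) for ai, ui in zip(a, u)]
    # consensus z-update: average the local opinions
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(a)
    # dual update: accumulate the residual consensus violation
    u = [ui + xi - z for ui, xi in zip(u, x)]
```

For these separable quadratics the consensus value converges to the mean of the local data; in the paper's setting, the analogous iteration coordinates the subsystems' storage functions until the global stability condition holds.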
-
This paper studies the local unstable manifold attached to an equilibrium solution of a system of delay differential equations (DDEs). Two main results are developed. The first is a general method for computing the formal Taylor series coefficients of a function parameterizing the unstable manifold. We derive linear systems of equations whose solutions are the Taylor coefficients, and describe explicit formulas for assembling the linear equations for DDEs with polynomial nonlinearities. We also discuss a scheme for transforming non-polynomial DDEs into polynomial ones by appending auxiliary equations. The second main result is an a-posteriori theorem which, when combined with deliberate control of rounding errors, leads to mathematically rigorous computer-assisted convergence results and error bounds for the truncated series. Our approach is based on the parameterization method for invariant manifolds and requires some mild non-resonance conditions between the unstable eigenvalues.
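The order-by-order computation in the first result can be illustrated on a planar ODE rather than a DDE (a hedged simplification: the textbook system x' = x, y' = −y + x² below is an assumption, not the paper's model). Writing the unstable manifold as a graph y = Σₙ aₙ xⁿ, invariance gives w'(x)·x = −w(x) + x², so matching powers yields one small linear ("homological") equation per order, in the spirit of the linear systems derived in the paper.

```python
from fractions import Fraction

# Toy order-by-order manifold computation for x' = x, y' = -y + x^2
# (a textbook planar example standing in for the DDE setting).
# Invariance of the graph y = w(x) = sum_n a_n x^n reads
#   w'(x) * x = -w(x) + x^2,
# so matching the coefficient of x^n gives (n + 1) a_n = [n == 2].

N = 8
a = [Fraction(0)] * (N + 1)
for n in range(2, N + 1):
    rhs = Fraction(1) if n == 2 else Fraction(0)
    a[n] = rhs / (n + 1)          # homological equation at order n

def w(x):
    return sum(an * x**n for n, an in enumerate(a))

# Numerical invariance check at a sample point on the manifold:
# y' along the flow should equal w'(x) * x'.
x0 = Fraction(1, 4)
lhs = -w(x0) + x0**2                                        # y' on the graph
rhs = sum(n * an * x0**(n - 1) for n, an in enumerate(a) if n > 0) * x0  # w'(x0) * x0
```

Here the series truncates exactly (the manifold is y = x²/3); for DDEs the same order-by-order structure persists but the linear systems involve the eigenvalue data, subject to the non-resonance conditions.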
-
The power of DNNs has been successfully demonstrated on a wide variety of high-dimensional problems that cannot be solved by conventional control design methods. These successes also uncover some fundamental and pressing challenges in understanding the representability of deep neural networks for complex and high-dimensional input–output relations. Towards the goal of understanding these fundamental questions, we applied an algebraic framework developed in our previous work to analyze ReLU neural network approximation of compositional functions. We prove that for Lipschitz continuous functions, ReLU neural networks have an approximation error upper bound that is a polynomial of the network’s complexity and the compositional features. If the compositional features do not increase exponentially with dimension, which is the case in many applications, the complexity of the DNN grows polynomially. In addition to function approximations, we also establish ReLU network approximation results for the trajectories of control systems, and for a Lyapunov function that characterizes the domain of attraction.
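For 1-D intuition behind such bounds (a hedged illustration with an assumed target f(x) = x² and grid; not the paper's compositional construction), a single hidden layer of ReLUs exactly realizes the piecewise-linear interpolant of f on a grid, and the approximation error shrinks polynomially, here quadratically, in the grid spacing.

```python
# One hidden layer of ReLUs realizes the piecewise-linear interpolant of f
# on n pieces over [x0, x1]; the target f(x) = x^2 and the grid size are
# illustrative assumptions, not the paper's construction.

def relu(t):
    return max(0.0, t)

def fit_relu_interpolant(f, x0, x1, n):
    h = (x1 - x0) / n
    knots = [x0 + i * h for i in range(n)]
    slopes = [(f(k + h) - f(k)) / h for k in knots]
    # Coefficient of each ReLU unit = change in slope at its knot
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    def g(x):
        return f(x0) + sum(c * relu(x - k) for c, k in zip(coeffs, knots))
    return g

f = lambda x: x * x
g = fit_relu_interpolant(f, 0.0, 1.0, 10)   # grid spacing h = 0.1
err = max(abs(f(i / 1000) - g(i / 1000)) for i in range(1001))
# classical interpolation bound: err <= max|f''| * h^2 / 8 = 0.0025
```

The O(h²) error for a smooth target is the simplest instance of the polynomial-in-complexity error bounds discussed in the abstract; the compositional results extend this intuition to high-dimensional structured functions.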