

Search for: All records

Award ID contains: 1836932

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Infinitesimal contraction analysis, wherein global convergence results are obtained from properties of local dynamics, is a powerful analysis tool. In this paper, we generalize infinitesimal contraction analysis to hybrid systems in which state-dependent guards trigger transitions defined by reset maps between modes that may have different norms and need not be of the same dimension. In contrast to the existing literature, we do not restrict the mode sequence or dwell time. We work in settings where the hybrid system flow is differentiable almost everywhere and its derivative is the solution to a jump-linear-time-varying differential equation whose jumps are defined by a saltation matrix determined from the guard, reset map, and vector field. Our main result shows that if the vector field is infinitesimally contracting and the saltation matrix is non-expansive, then the intrinsic distance between any two trajectories decreases exponentially in time. When bounds on dwell time are available, our approach yields a bound on the intrinsic distance between trajectories regardless of whether the dynamics are expansive or contractive. We illustrate our results with two examples: a constrained mechanical system and an electrical circuit with an ideal diode.
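     For readers unfamiliar with the term, the saltation matrix invoked in the abstract above has a standard closed form in the hybrid-systems literature. The display below is a sketch in assumed notation (g the guard function, R the reset map, F^- and F^+ the vector fields before and after the transition), not an excerpt from the paper:

        % Saltation matrix for a transition triggered when the guard g(x, t) = 0,
        % with reset x^+ = R(x^-) and vector field switching from F^- to F^+;
        % all Jacobians are evaluated at the transition point.
        \Xi = D_x R + \frac{\left( F^+ - D_x R \, F^- \right) D_x g}{D_t g + D_x g \, F^-}

     Non-expansiveness of the saltation matrix, as required by the main result, then amounts to the induced norm of \Xi (taken between the possibly different norms of the pre- and post-transition modes) being at most one.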
  2. Implicit neural networks are a general class of learning models that replace the layers in traditional feedforward models with implicit algebraic equations. Compared to traditional learning models, implicit networks offer competitive performance and reduced memory consumption. However, they can remain brittle with respect to adversarial perturbations of the input. This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks; the framework blends mixed monotone systems theory with contraction theory. First, given an implicit neural network, we introduce a related embedded network and show that, given an ℓ∞-norm box constraint on the input, the embedded network provides an ℓ∞-norm box overapproximation of the output of the original network. Second, using ℓ∞-matrix measures, we propose sufficient conditions for well-posedness of both the original and embedded systems and design an iterative algorithm to compute the ℓ∞-norm box robustness margins for reachability and classification problems. Third, and of independent value, we show that employing a suitable relative classifier variable in our analysis leads to tighter bounds on the certified adversarial robustness in classification problems. Finally, we perform numerical simulations on a Non-Euclidean Monotone Operator Network (NEMON) trained on the MNIST dataset. In these simulations, we compare the accuracy and run time of our mixed monotone contractive approach with existing robustness verification approaches in the literature for estimating certified adversarial robustness.
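     As a concrete illustration of the embedded-network idea in the abstract above, here is a minimal numerical sketch of an ℓ∞-box over-approximation for a single ReLU implicit layer z = relu(W z + U x + b). The function names and the plain fixed-point iteration are assumptions made for illustration, not the paper's NEMON implementation, and the iteration converges only under a well-posedness condition such as the ℓ∞-matrix-measure bounds the paper develops:

        import numpy as np

        def relu(v):
            return np.maximum(v, 0.0)

        def embedded_fixed_point(W, U, b, x_lo, x_hi, iters=200):
            """Hypothetical sketch: interval over-approximation of the
            implicit layer z = relu(W z + U x + b) over the input box
            [x_lo, x_hi], via a mixed-monotone embedding. Splitting W
            and U into nonnegative and nonpositive parts makes the
            coupled lower/upper iteration monotone; under a contraction
            assumption (e.g., ||W||_inf < 1) the returned box contains
            every true fixed point for inputs in the box."""
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            Up, Un = np.maximum(U, 0.0), np.minimum(U, 0.0)
            z_lo = np.zeros(W.shape[0])
            z_hi = np.zeros(W.shape[0])
            for _ in range(iters):
                z_lo = relu(Wp @ z_lo + Wn @ z_hi + Up @ x_lo + Un @ x_hi + b)
                z_hi = relu(Wp @ z_hi + Wn @ z_lo + Up @ x_hi + Un @ x_lo + b)
            return z_lo, z_hi

     As a toy check, scaling a random W so that its ℓ∞ operator norm (maximum absolute row sum) is below one makes the coupled iteration a contraction, and the returned bounds then satisfy z_lo <= z_hi elementwise for any input box with x_lo <= x_hi.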