
Title: Learning neural contracting dynamics: Extended linearization and global guarantees
Global stability and robustness guarantees in learned dynamical systems are essential to ensure well-behavedness of the systems in the face of uncertainty. We present Extended Linearized Contracting Dynamics (ELCD), the first neural network-based dynamical system with global contractivity guarantees in arbitrary metrics. The key feature of ELCD is a parametrization of the extended linearization of the nonlinear vector field. In its most basic form, ELCD is guaranteed to be (i) globally exponentially stable, (ii) equilibrium contracting, and (iii) globally contracting with respect to some metric. To allow for contraction with respect to more general metrics in the data space, we train diffeomorphisms between the data space and a latent space and enforce contractivity in the latent space, which ensures global contractivity in the data space. We demonstrate the performance of ELCD on the high-dimensional LASA, multi-link pendulum, and Rosenbrock datasets.
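To make the abstract's central idea concrete, the sketch below shows one simple way to parametrize an extended linearization f(x) = A(x)(x - x*) so that global contraction can be certified. The specific construction A(x) = S(x) - B(x)B(x)^T - lam*I (with S(x) skew-symmetric), the network sizes, and the use of the identity metric are illustrative assumptions, not necessarily the exact ELCD parametrization.

```python
# Hypothetical sketch: a globally contracting vector field via extended
# linearization, f(x) = A(x) (x - x_star).  The parametrization
# A(x) = S(x) - B(x) B(x)^T - lam * I (S skew-symmetric) is an assumption
# for illustration; it gives A(x) + A(x)^T <= -2*lam*I for every x, which
# certifies global exponential contraction in the identity metric.
import torch
import torch.nn as nn


class ContractingField(nn.Module):
    def __init__(self, dim: int, hidden: int = 64, lam: float = 0.1):
        super().__init__()
        self.dim, self.lam = dim, lam
        self.x_star = nn.Parameter(torch.zeros(dim))      # learnable equilibrium
        self.net = nn.Sequential(                         # produces raw matrices
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * dim * dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        raw = self.net(x).view(-1, 2, self.dim, self.dim)
        s_raw, b = raw[:, 0], raw[:, 1]
        s = s_raw - s_raw.transpose(-1, -2)               # skew-symmetric part
        a = s - b @ b.transpose(-1, -2) - self.lam * torch.eye(self.dim)
        return (a @ (x - self.x_star).unsqueeze(-1)).squeeze(-1)


f = ContractingField(dim=2)
print(f(torch.randn(5, 2)).shape)  # torch.Size([5, 2])
```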
Award ID(s):
2229876
PAR ID:
10662053
Author(s) / Creator(s):
Publisher / Repository:
38th Conference on Neural Information Processing Systems (NeurIPS 2024)
Date Published:
Format(s):
Medium: X
Location:
Vancouver, Canada
Sponsoring Org:
National Science Foundation
More Like this
  1. It is often of interest to know which systems will approach a periodic trajectory when given a periodic input. Results are available for certain classes of systems, such as contracting systems, showing that they always entrain to periodic inputs. In contrast to this, we demonstrate that there exist systems which are globally exponentially stable yet do not entrain to a periodic input. This could be seen as surprising, as it is known that globally exponentially stable systems are in fact contracting with respect to some Riemannian metric. The paper also addresses the broader issue of entrainment when an input is added to a contractive system. 
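The entry above contrasts global exponential stability with entrainment. As a small numerical illustration of the entrainment property of contracting systems (a standard example chosen here, not taken from the paper), the script below simulates the contracting scalar system xdot = -x + sin(t) from two initial conditions and shows the trajectories merging onto a single periodic solution.

```python
# Illustration (not from the paper): a contracting system entrains to a
# periodic input.  xdot = -x + sin(t) is contracting with rate 1, so
# trajectories from different initial conditions converge to one another
# and hence to a unique periodic steady-state response.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return -x + np.sin(t)

t_eval = np.linspace(0, 30, 3001)
sol_a = solve_ivp(f, (0, 30), [5.0], t_eval=t_eval)
sol_b = solve_ivp(f, (0, 30), [-3.0], t_eval=t_eval)

# Analytically the gap shrinks like 8 * exp(-t); numerically it is
# limited only by solver tolerance at t = 30.
print(abs(sol_a.y[0, -1] - sol_b.y[0, -1]))
```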
  2. Recent deep clustering algorithms take advantage of self-supervised learning and self-training techniques to map the original data into a latent space, where the data embedding and clustering assignment can be jointly optimized. However, because many recent datasets are enormous and noisy, obtaining a clear boundary between clusters is challenging for existing methods, which mainly focus on contracting similar samples together while overlooking samples near the cluster boundaries in the latent space. In this regard, we propose an end-to-end deep clustering algorithm, Locally Normalized Soft Contrastive Clustering (LNSCC). It takes advantage of similarities within each sample's local neighborhood, together with globally disconnected samples, to weight the positiveness and negativeness of sample pairs in a contrastive way and thereby separate clusters. Experimental results on various datasets show that the proposed approach outperforms most state-of-the-art clustering methods on both image and non-image data, even without convolution. 
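The entry above builds its objective from positive and negative sample pairs. Purely to make that mechanism concrete, here is a minimal generic soft contrastive (NT-Xent-style) loss in PyTorch; it is not the LNSCC objective, and the temperature and two-view pairing scheme are assumptions for illustration.

```python
# Generic soft contrastive loss sketch (NT-Xent style) -- not the LNSCC
# objective from the entry above; temperature and pairing are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """z1, z2: (N, d) embeddings of two views of the same N samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    # the positive of sample i is its other view: i <-> i + n (mod 2N)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
print(contrastive_loss(z1, z2).item())
```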
  3. Time-varying linear state-space models are powerful tools for obtaining mathematically interpretable representations of neural signals. For example, switching and decomposed models describe complex systems using latent variables that evolve according to simple locally linear dynamics. However, existing methods for latent variable estimation are not robust to dynamical noise and system nonlinearity due to noise-sensitive inference procedures and limited model formulations. This can lead to inconsistent results on signals with similar dynamics, limiting the model's ability to provide scientific insight. In this work, we address these limitations and propose a probabilistic approach to latent variable estimation in decomposed models that improves robustness against dynamical noise. Additionally, we introduce an extended latent dynamics model to improve robustness against system nonlinearities. We evaluate our approach on several synthetic dynamical systems, including an empirically-derived brain-computer interface experiment, and demonstrate more accurate latent variable inference in nonlinear systems with diverse noise conditions. Furthermore, we apply our method to a real-world clinical neurophysiology dataset, illustrating the ability to identify interpretable and coherent structure where previous models cannot. 
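The entry above concerns latent-variable inference in linear state-space models. As a point of reference only (this is the classical linear-Gaussian baseline, not the probabilistic method proposed in the entry), the sketch below performs one Kalman filter predict/update step; all matrices and noise covariances are placeholders.

```python
# Reference sketch: one Kalman filter step for a linear-Gaussian model
# x_t = A x_{t-1} + w,  y_t = C x_t + v.  A generic baseline for latent-state
# inference, not the approach proposed in the entry above.
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """m, P: previous posterior mean/covariance; y: new observation."""
    m_pred = A @ m                                # predict
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)           # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)         # update
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new

# Toy usage with a placeholder 2-D latent state and scalar observation.
A = np.array([[0.9, 0.1], [0.0, 0.95]]); C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
m, P = np.zeros(2), np.eye(2)
m, P = kalman_step(m, P, np.array([0.7]), A, C, Q, R)
print(m, P)
```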
  4. Interval Markov decision processes are a class of Markov models where the transition probabilities between the states belong to intervals. In this paper, we study the problem of efficient estimation of the optimal policies in Interval Markov Decision Processes (IMDPs) with continuous action-space. Given an IMDP, we show that the pessimistic (resp. the optimistic) value iterations, i.e., the value iterations under the assumption of a competitive adversary (resp. cooperative agent), are monotone dynamical systems and are contracting with respect to the infinity-norm. Inspired by this dynamical system viewpoint, we introduce another IMDP, called the action-space relaxation IMDP. We show that the action-space relaxation IMDP has two key features: (i) its optimal value is an upper bound for the optimal value of the original IMDP, and (ii) its value iterations can be efficiently solved using tools and techniques from convex optimization. We then consider the policy optimization problems at each step of the value iterations as a feedback controller of the value function. Using this system-theoretic perspective, we propose an iteration-distributed implementation of the value iterations for approximating the optimal value of the action-space relaxation IMDP. 
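The entry above notes that the pessimistic and optimistic value iterations of an IMDP are contractions in the infinity norm. The sketch below shows a single pessimistic Bellman backup for a finite-action IMDP; finite actions and the toy intervals are simplifying assumptions made here (the entry treats continuous action spaces), and the adversary's worst-case distribution is computed by the standard sorting construction.

```python
# Sketch of one pessimistic Bellman backup for a finite-action IMDP.
# Finite actions and the toy intervals are simplifying assumptions; the
# entry above concerns continuous action spaces.  With discount gamma < 1,
# this backup is a contraction in the infinity norm.
import numpy as np

def worst_case_expectation(lo, hi, v):
    """min_p sum_i p_i v_i  subject to  lo <= p <= hi, sum(p) = 1."""
    p = lo.copy()
    budget = 1.0 - lo.sum()                 # mass still to distribute
    for i in np.argsort(v):                 # extra mass goes to low-value states first
        add = min(hi[i] - lo[i], budget)
        p[i] += add
        budget -= add
        if budget <= 0:
            break
    return p @ v

def pessimistic_backup(v, reward, lo, hi, gamma=0.9):
    """v: (S,) values; reward: (S, A); lo, hi: (S, A, S) interval bounds."""
    S, A = reward.shape
    q = np.empty((S, A))
    for s in range(S):
        for a in range(A):
            q[s, a] = reward[s, a] + gamma * worst_case_expectation(lo[s, a], hi[s, a], v)
    return q.max(axis=1)                    # the agent maximizes over actions

# Toy 2-state, 1-action example with valid intervals (sum(lo) <= 1 <= sum(hi)).
lo = np.array([[[0.2, 0.3]]] * 2); hi = np.array([[[0.7, 0.8]]] * 2)
print(pessimistic_backup(np.zeros(2), np.ones((2, 1)), lo, hi))
```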
  5. The dynamic complexity of robots and mechatronic systems often pertains to the hybrid nature of their dynamics, where the governing equations consist of heterogeneous equations that are switched depending on the state of the system. Legged robots and manipulator robots experience contact-noncontact discrete transitions, causing switching of the governing equations. Analysis of these systems has been a challenge due to the lack of a global, unified model that is amenable to analysis of global behaviors. Composition operator theory has the potential to provide such a global, unified representation by converting these systems to linear dynamical systems in a lifted space. The current work presents a method for encoding nonlinear heterogeneous dynamics into a high-dimensional space of observables in the form of a Koopman operator. First, a new formula is established for representing the Koopman operator in a Hilbert space by using inner products of observable functions and their compositions with the governing state transition function. This formula, called Direct Encoding, allows a class of heterogeneous systems to be converted directly to a global, unified linear model. Unlike prevalent data-driven methods, where results can vary depending on the numerical data, the proposed method is globally valid and does not require numerical simulation of the original dynamics. A simple example validates the theoretical results, and the method is applied to a multi-cable suspension system. 
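The entry above describes building a Koopman matrix from inner products of observables and their compositions with the state transition function. The sketch below is a generic Galerkin-style reading of that idea on a chosen basis and quadrature grid; the monomial basis, domain, toy dynamics, and Riemann-sum inner products are placeholder assumptions, and the paper's Direct Encoding formula should be consulted for the exact construction.

```python
# Galerkin-style sketch: a finite Koopman matrix from inner products
# <g_i o f, g_j> and the Gram matrix <g_i, g_j>.  The basis, domain,
# quadrature, and toy map f are placeholder assumptions, not the exact
# Direct Encoding setup of the entry above.
import numpy as np

f = lambda x: 0.8 * x - 0.1 * x**3                 # toy 1-D state transition map
basis = [lambda x: np.ones_like(x),                # observables g_0 .. g_3
         lambda x: x,
         lambda x: x**2,
         lambda x: x**3]

xs = np.linspace(-1.0, 1.0, 2001)                  # quadrature grid on [-1, 1]
dx = xs[1] - xs[0]
G = np.array([g(xs) for g in basis])               # samples g_j(x),    shape (n_obs, n_grid)
Gf = np.array([g(f(xs)) for g in basis])           # samples g_i(f(x))

Q = (G @ G.T) * dx                                 # Gram matrix  <g_i, g_j>  (Riemann sum)
R = (Gf @ G.T) * dx                                # <g_i o f, g_j>
K = R @ np.linalg.inv(Q)                           # so that  g_i o f ~ sum_j K_ij g_j

print(np.round(K, 3))
```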