

Title: Learning Theory for Dynamical Systems
The task of modeling and forecasting a dynamical system is one of the oldest problems, and it remains challenging. Broadly, this task has two subtasks: extracting the full dynamical information from a partial observation, and then explicitly learning the dynamics from this information. We present a mathematical framework in which the dynamical information is represented in the form of an embedding. The framework combines the two subtasks using the language of spaces, maps, and commutations. The framework also unifies two of the most common learning paradigms: delay-coordinates and reservoir computing. We use this framework as a platform for two other investigations of the reconstructed system: its dynamical stability and the growth of error under iterations. We show that these questions are deeply tied to more fundamental properties of the underlying system, namely the behavior of matrix cocycles over the base dynamics, its nonuniform hyperbolic behavior, and its decay of correlations. Thus, our framework bridges the gap between universally observed behavior of dynamics modeling and the spectral, differential, and ergodic properties intrinsic to the dynamics.
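As an illustration of the first subtask, a delay-coordinate embedding can be sketched in a few lines. This is a generic illustration, not the paper's construction: the logistic map, the embedding dimension, and the lag below are all arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of delay-coordinate embedding: a scalar observation of the
# logistic map is stacked into delay vectors, which (by Takens-type results)
# can recover the full dynamical information from a partial observation.

def logistic_series(n, x0=0.4, r=3.9):
    """Generate n observations of the logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1 - x)
    return xs

def delay_embed(series, dim, tau=1):
    """Stack dim shifted copies; row t is (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

obs = logistic_series(1000)       # partial (scalar) observation
X = delay_embed(obs, dim=3)       # reconstructed state vectors
print(X.shape)                    # (998, 3)
```

A forecasting model would then be trained on the pairs (X[t], X[t+1]), which is the second subtask.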
Award ID(s):
1854204
PAR ID:
10471091
Author(s) / Creator(s):
Publisher / Repository:
SIAM Journal on Applied Dynamical Systems
Date Published:
Journal Name:
SIAM Journal on Applied Dynamical Systems
Volume:
22
Issue:
3
ISSN:
1536-0040
Page Range / eLocation ID:
2082 to 2122
Subject(s) / Keyword(s):
matrix cocycle, Lyapunov exponent, reservoir computing, delay-coordinates, mixing, direct forecast, iterative forecast
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed. 
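The model class discussed above can be sketched as an input-driven linear dynamical system. The snippet below only simulates such a generative model, not the paper's learning method; the matrices A, B, C, D, the noise levels, and the sinusoidal input are all hypothetical.

```python
import numpy as np

# Sketch of an input-driven linear dynamical model of neural-behavioral data:
#   x_{t+1} = A x_t + B u_t + w_t   (latent neural state with measured input u)
#   y_t     = C x_t + v_t           (recorded neural activity)
#   z_t     = D x_t                 (behavior readout)
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.05], [-0.05, 0.9]])   # intrinsic dynamics (stable)
B = np.array([[0.5], [0.1]])                 # coupling to the measured input
C = rng.standard_normal((10, 2))             # neural observation matrix
D = np.array([[1.0, 0.0]])                   # behavior readout

T = 200
u = np.sin(np.linspace(0, 8 * np.pi, T))[:, None]  # structured sensory input
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t] + 0.01 * rng.standard_normal(2)
y = x @ C.T + 0.01 * rng.standard_normal((T, 10))  # neural activity
z = x @ D.T                                        # behavior

# A model fit to (y, z) alone would absorb the input-driven component of x
# into its "intrinsic" dynamics; fitting with u allows the two to be
# dissociated, which is the failure mode the abstract describes.
print(y.shape, z.shape)   # (200, 10) (200, 1)
```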
  2.
    Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms are constantly making predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are generated from an agent’s internal model of the world or predictions are extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we examined it on a set of computational models of idealized agent-environment systems. Analysis of the systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can be different for the same task if the environmental structure changes.
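The bivariate measure in question can be sketched as the mutual information between the past and future of a discretized time series, using a simple plug-in estimator. This is only the baseline quantity the abstract critiques; the paper's decomposition of information transfer across organism-environment components goes well beyond it.

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (in bits) for two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))           # joint counts
    px, py = Counter(xs), Counter(ys)    # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts rescaled by n
        mi += (c / n) * np.log2(c * n / (px[a] * py[b]))
    return mi

# A deterministic period-4 stimulus: the past fully determines the future,
# so the predictive information equals log2(4) = 2 bits.
series = [0, 1, 2, 3] * 250
past, future = series[:-1], series[1:]
mi_val = mutual_information(past, future)
print(round(mi_val, 3))   # ≈ 2.0 bits
```

Note that this value says nothing about where the predictability comes from, which is exactly the limitation the abstract demonstrates.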
  3. Complex manipulation tasks often require non-trivial and coordinated movements of different parts of a robot. In this work, we address the challenges associated with learning and reproducing the skills required to execute such complex tasks. Specifically, we decompose a task into multiple subtasks and learn to reproduce the subtasks by learning stable policies from demonstrations. By leveraging the RMPflow framework for motion generation, our approach finds a stable global policy in the configuration space that enables simultaneous execution of various learned subtasks. The resulting global policy is a weighted combination of the learned policies such that the motions are coordinated and feasible under the robot's kinematic and environmental constraints. We demonstrate the necessity and efficacy of the proposed approach in the context of multiple constrained manipulation tasks performed by a Franka Emika robot. 
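The weighted-combination idea can be sketched with two hand-written subtask policies. RMPflow additionally uses Riemannian metrics and a tree of task maps; this toy (with an invented goal, obstacle, and weighting scheme) only shows how per-subtask policies blend into one global policy.

```python
import numpy as np

GOAL = np.array([1.0, 0.0])
OBSTACLE = np.array([0.5, 0.0])

def reach_policy(x):
    return GOAL - x                               # attract toward the goal

def avoid_policy(x):
    d = x - OBSTACLE
    return d / (np.linalg.norm(d) ** 2 + 1e-6)    # repel from the obstacle

def global_policy(x):
    # Weight the avoidance subtask more strongly near the obstacle, so the
    # combined motion stays feasible under the environmental constraint.
    w_reach = 1.0
    w_avoid = np.exp(-np.linalg.norm(x - OBSTACLE))
    f = w_reach * reach_policy(x) + w_avoid * avoid_policy(x)
    return f / (w_reach + w_avoid)

# Far from the obstacle, reaching dominates; close to it, repulsion takes over.
print(global_policy(np.array([-1.0, 0.0]))[0] > 0)   # True: moves toward goal
print(global_policy(np.array([0.45, 0.0]))[0] < 0)   # True: pushed away
```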
  4. In this paper, a data-driven neural hybrid system modeling framework based on the Maximum Entropy partitioning approach is proposed for modeling complex dynamical systems such as human motion dynamics. The sampled data collected from the system is partitioned into segmented data sets using the Maximum Entropy approach, and the mode transition logic is then defined. A collection of small-scale neural networks is then trained as the local dynamical descriptions for their corresponding partitions. Given the resulting neural hybrid system model, a set-valued reachability analysis with low computational cost, based on interval analysis and a split-and-combine process, is provided to demonstrate the benefits of our approach in computationally expensive tasks. Finally, a numerical example of a limit cycle and a human behavior modeling example are provided to demonstrate the effectiveness and efficiency of the developed methods.
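The partition-then-fit-locally idea can be sketched in miniature. Two loud simplifications: a uniform grid stands in for the Maximum Entropy partitioning, and per-partition least-squares lines stand in for the small-scale neural networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-step data (x_t, x_{t+1}) sampled from a nonlinear map x_{t+1} = sin(x_t) + noise.
x_t = rng.uniform(-2.0, 2.0, 2000)
x_next = np.sin(x_t) + 0.01 * rng.standard_normal(2000)

# Partition the state space into K cells and fit x_{t+1} ≈ a_k x_t + b_k per cell.
K = 8
edges = np.linspace(-2.0, 2.0, K + 1)
cells = np.clip(np.digitize(x_t, edges) - 1, 0, K - 1)
models = []
for k in range(K):
    mask = cells == k
    A = np.column_stack([x_t[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, x_next[mask], rcond=None)
    models.append(coef)

def predict(x):
    """Mode transition logic: dispatch to the local model owning x's cell."""
    k = int(np.clip(np.digitize(x, edges) - 1, 0, K - 1))
    a, b = models[k]
    return a * x + b

# The piecewise-linear hybrid model tracks the nonlinear map closely.
print(abs(predict(0.3) - np.sin(0.3)) < 0.05)   # True
```

Because each mode is a simple local model, set-valued reachability over the hybrid model reduces to propagating intervals through one small model per cell.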
  5. Real-world tasks often exhibit a compositional structure that contains a sequence of simpler sub-tasks. For instance, opening a door requires reaching, grasping, rotating, and pulling the door knob. Such compositional tasks require an agent to reason about the sub-task at hand while orchestrating global behavior accordingly. This can be cast as an online task inference problem, where the current task identity, represented by a context variable, is estimated from the agent’s past experiences with probabilistic inference. Previous approaches have employed simple latent distributions, e.g., Gaussian, to model a single context for the entire task. However, this formulation lacks the expressiveness to capture the composition and transition of the sub-tasks. We propose a variational inference framework OCEAN to perform online task inference for compositional tasks. OCEAN models global and local context variables in a joint latent space, where the global variables represent a mixture of subtasks required for the task, while the local variables capture the transitions between the subtasks. Our framework supports flexible latent distributions based on prior knowledge of the task structure and can be trained in an unsupervised manner. Experimental results show that OCEAN provides more effective task inference with sequential context adaptation and thus leads to a performance boost on complex, multi-stage tasks. 
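The online-task-inference setup can be sketched as maintaining a posterior over a discrete sub-task identity (a "local context") from streaming observations. OCEAN's joint global/local variational latent space is far richer; the three Gaussian observation models below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

means = np.array([-2.0, 0.0, 2.0])   # hypothetical observation mean per sub-task
sigma = 0.7

def update(posterior, obs):
    """Bayes update of P(sub-task identity) given one Gaussian observation."""
    lik = np.exp(-0.5 * ((obs - means) / sigma) ** 2)
    post = posterior * lik
    return post / post.sum()

posterior = np.ones(3) / 3           # uniform prior over the three sub-tasks
for obs in means[2] + sigma * rng.standard_normal(20):  # true sub-task: index 2
    posterior = update(posterior, obs)
print(posterior.argmax())  # → 2: the agent has inferred the current sub-task
```

A compositional-task model would additionally place a prior over transitions between sub-tasks, which is the role of OCEAN's local context variables.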