Search for: All records

Award ID contains: 2108856


  1. Abstract Constructing sparse, effective reduced-order models (ROMs) for high-dimensional dynamical data is an active area of research in applied sciences. In this work, we study an efficient approach to identifying such sparse ROMs using an information-theoretic indicator called causation entropy. Given a feature library of possible building-block terms for the sought ROMs, the causation entropy ranks the importance of each term to the dynamics conveyed by the training data before a parameter estimation procedure is performed. It thus allows for an efficient construction of a hierarchy of ROMs with varying degrees of sparsity to effectively handle different tasks. This article examines the ability of the causation entropy to identify skillful sparse ROMs when a relatively high-dimensional ROM is required to emulate the dynamics conveyed by the training dataset. We demonstrate that a Gaussian approximation of the causation entropy still performs exceptionally well even in the presence of highly non-Gaussian statistics. Such approximations provide an efficient way to access the otherwise hard-to-compute causation entropies when the selected feature library contains a large number of candidate functions. Besides recovering long-term statistics, we also demonstrate good performance of the obtained ROMs in recovering unobserved dynamics via data assimilation with partial observations, a test that has not been done before for causation-based ROMs of partial differential equations. The paradigmatic Kuramoto–Sivashinsky equation, placed in a chaotic regime with highly skewed, multimodal statistics, is utilized for these purposes.
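     The entry above describes ranking candidate library terms by causation entropy under a Gaussian approximation. The sketch below is not the paper's code; assuming a model that is linear in the library terms, the Gaussian approximation reduces the causation entropy of each candidate term to half the log-ratio of least-squares residual variances with and without that term. The function name and the thresholding step are illustrative.

        import numpy as np

        def gaussian_causation_entropy(Phi, dXdt):
            # Phi  : (n_samples, n_terms) candidate library evaluated on training data
            # dXdt : (n_samples, n_states) time derivatives estimated from the data
            # Under a Gaussian approximation, the causation entropy of term j on
            # state i, conditioned on all remaining terms, is
            #     0.5 * log( var(residual without term j) / var(residual with all terms) ),
            # so large values flag terms the dynamics cannot do without.
            n_terms = Phi.shape[1]
            scores = np.zeros((dXdt.shape[1], n_terms))
            for i in range(dXdt.shape[1]):
                y = dXdt[:, i]
                res_full = y - Phi @ np.linalg.lstsq(Phi, y, rcond=None)[0]
                var_full = np.var(res_full)
                for j in range(n_terms):
                    Phi_red = np.delete(Phi, j, axis=1)
                    res_red = y - Phi_red @ np.linalg.lstsq(Phi_red, y, rcond=None)[0]
                    scores[i, j] = 0.5 * np.log(np.var(res_red) / var_full)
            return scores

     Thresholding the scores at different levels, and only then estimating the retained coefficients, gives the hierarchy of ROMs with varying degrees of sparsity mentioned above.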
  2. Abstract This work proposes a general framework for analyzing noise-driven transitions in spatially extended non-equilibrium systems and explaining the emergence of coherent patterns beyond the instability onset. The framework relies on stochastic parameterization formulas to reduce the complexity of the original equations while preserving the essential dynamical effects of unresolved scales. The approach is flexible and operates for both Gaussian noise and non-Gaussian noise with jumps. Our stochastic parameterization formulas offer two key advantages. First, they can approximate stochastic invariant manifolds when these manifolds exist. Second, even when such manifolds break down, our formulas can be adapted through a simple optimization of their constitutive parameters. This allows us to handle scenarios with weak time-scale separation, where the system has undergone multiple transitions resulting in large-amplitude solutions not captured by invariant manifolds or other time-scale separation methods. The optimized stochastic parameterizations then capture how small-scale noise impacts larger scales through the system’s nonlinear interactions. This effect is achieved by the very fabric of our parameterizations, which incorporate non-Markovian (memory-dependent) coefficients into the reduced equation. These coefficients account for the noise’s past influence, not just its current value, using a finite memory length that is selected for optimal performance. The specific memory function, which determines how this past influence is weighted, depends on both the strength of the noise and how it interacts with the system’s nonlinearities. Remarkably, training our theory-guided reduced models on a single noise path effectively learns the optimal memory length for out-of-sample predictions. Indeed, this approach retains good accuracy in predicting noise-induced transitions, including rare events, when tested against a large ensemble of different noise paths. This success stems from our hybrid approach, which combines analytical understanding with data-driven learning. This combination avoids a key limitation of purely data-driven methods: their struggle to generalize to unseen scenarios, also known as the ‘extrapolation problem.’
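     As a purely schematic illustration of the memory-length selection described above, assuming a scalar unresolved mode, an exponential memory kernel applied to the noise path, and a simple least-squares defect; the paper's parameterizations are derived from backward–forward systems and are more involved than this sketch.

        import numpy as np

        def memory_forcing(noise_path, tau, dt):
            # Non-Markovian term: exponentially weighted history of the noise path,
            # truncated to a finite memory length of roughly 5*tau.
            n_kernel = max(1, int(5 * tau / dt))
            t = np.arange(n_kernel) * dt
            kernel = np.exp(-t / tau) * dt
            return np.convolve(noise_path, kernel)[: len(noise_path)]

        def parameterization_defect(tau, unresolved, noise_path, dt):
            # Least-squares mismatch between the memory term and the unresolved mode
            # on a single training noise path.
            return np.mean((unresolved - memory_forcing(noise_path, tau, dt)) ** 2)

        # Learn the memory length on one training path, then reuse it out of sample:
        #   taus = np.linspace(0.01, 2.0, 100)
        #   tau_star = taus[np.argmin([parameterization_defect(t, unresolved, noise, dt)
        #                              for t in taus])]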
  3. Abstract Recent years have seen a surge of interest in leveraging neural networks to parameterize small-scale or fast processes in climate and turbulence models. In this short paper, we point out two fundamental issues in this endeavor. The first concerns the difficulties neural networks may experience in capturing rare events due to limitations in how the data are sampled. The second arises from the inherent multiscale nature of these systems: they combine high-frequency components (like inertia-gravity waves) with more slowly evolving processes (geostrophic motion). This multiscale nature creates a significant hurdle for neural network closures. To illustrate these challenges, we focus on the 1980 atmospheric model of Lorenz, a simplified version of the Primitive Equations that drive climate models. This model serves as a compelling example because it captures the essence of these difficulties.
  4. Conceptual delay models have played a key role in the analysis and understanding of El Niño–Southern Oscillation (ENSO) variability. Based on such delay models, we propose in this work a novel scenario for the fabric of ENSO variability, resulting from the subtle interplay between stochastic disturbances and nonlinear invariant sets emerging from bifurcations of the unperturbed dynamics. To identify these invariant sets, we adopt an approach combining Galerkin–Koornwinder (GK) approximations of delay differential equations and center-unstable manifold reduction techniques. In that respect, GK approximation formulas are reviewed and synthesized, as are analytic approximation formulas of center-unstable manifolds. The reduced systems derived thereof enable us to conduct a thorough analysis of the bifurcations arising in a standard delay model of ENSO. We thereby identify a saddle-node bifurcation of periodic orbits co-existing with a subcritical Hopf bifurcation, and a homoclinic bifurcation for this model. We show furthermore that the computation of unstable periodic orbits (UPOs) unfolding through these bifurcations is considerably simplified from the reduced systems. These dynamical insights in turn enable us to design a stochastic model whose solutions, as the delay parameter drifts slowly through its critical values, produce a wealth of temporal patterns resembling ENSO events and also exhibiting decadal variability. Our analysis dissects the origin of this variability and shows how it is tied to certain transition paths between invariant sets of the unperturbed dynamics (for ENSO's interannual variability) or simply to the presence of UPOs close to the homoclinic orbit (for decadal variability). In short, this study points out the role of solution paths evolving through tipping "points" beyond equilibria, as possible mechanisms organizing the variability of certain climate phenomena.
  5. A general, variational approach to derive low-order reduced models from possibly non-autonomous systems is presented. The approach is based on the concept of optimal parameterizing manifold (OPM), which substitutes for more classical notions of invariant or slow manifolds when the breakdown of "slaving" occurs, i.e., when the unresolved variables can no longer be expressed as an exact functional of the resolved ones. The OPM provides, within a given class of parameterizations of the unresolved variables, the manifold that optimally averages out these variables as conditioned on the resolved ones. The class of parameterizations retained here is that of continuous deformations of parameterizations rigorously valid near the onset of instability. These deformations are produced through the integration of auxiliary backward–forward systems built from the model's equations and lead to analytic formulas for parameterizations. In this modus operandi, the backward integration time is the key parameter to select, per scale/variable to parameterize, in order to derive the relevant parameterizations, which are doomed to be no longer exact away from the instability onset due to the breakdown of slaving typically encountered, e.g., in chaotic regimes. The selection is then made through data-informed minimization of a least-squares parameterization defect. It is thus shown that, by optimizing the backward integration time per scale/variable to parameterize, skillful OPM reduced systems can be derived that accurately predict higher-order critical transitions or catastrophic tipping phenomena, even though the parameterization formulas are trained on regimes prior to these transitions.
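     A minimal sketch of the selection step described above, assuming the analytic parameterization family produced by the backward–forward systems is available as a callable Phi_tau (a placeholder name); the defect minimized here is the least-squares mismatch on training data, computed per unresolved variable.

        import numpy as np

        def optimal_backward_time(Phi_tau, taus, resolved, unresolved):
            # Phi_tau(tau, resolved) -> predicted unresolved mode (analytic formula,
            # placeholder for the backward-forward parameterization).
            # taus       : candidate backward integration times
            # resolved   : (n_samples, n_resolved) training data for resolved modes
            # unresolved : (n_samples,) training data for the mode to parameterize
            defects = [np.mean((unresolved - Phi_tau(tau, resolved)) ** 2)
                       for tau in taus]
            return taus[int(np.argmin(defects))], defects

        # Each unresolved scale/variable gets its own optimal backward time; the
        # resulting OPM then closes the reduced system for the resolved modes.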
  6. A stochastic framework is presented that systematically produces a broadband response from periodic solutions of time-delay systems.
  7. Developing suitable approximate models for analyzing and simulating complex nonlinear systems is practically important. This paper aims at exploring the skill of a rich class of nonlinear stochastic models, known as the conditional Gaussian nonlinear system (CGNS), as both a cheap surrogate model and a fast preconditioner for facilitating many computationally challenging tasks. The CGNS preserves the underlying physics to a large extent and can reproduce intermittency, extreme events, and other non-Gaussian features in many complex systems arising from practical applications. Three interrelated topics are studied. First, the closed analytic formulas for the conditional statistics provide an efficient and accurate data assimilation scheme. It is shown that the data assimilation skill of a suitable CGNS approximate forecast model outweighs that of applying an ensemble method even to the perfect model with strong nonlinearity, where the latter suffers from filter divergence. Second, the CGNS allows the development of a fast algorithm for simultaneously estimating the parameters and the unobserved variables with uncertainty quantification in the presence of only partial observations. Utilizing an appropriate CGNS as a preconditioner significantly reduces the computational cost of accurately estimating the parameters in the original complex system. Finally, the CGNS advances rapid and statistically accurate algorithms for computing the probability density function and sampling the trajectories of the unobserved state variables. These fast algorithms facilitate the development of an efficient and accurate data-driven method for predicting the linear response of the original system with respect to parameter perturbations based on a suitable CGNS preconditioner.
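     The closed analytic formulas mentioned above are the well-known conditional Gaussian filtering equations. A compact sketch for a scalar conditional Gaussian model with an Euler discretization is given below; the general formulas are matrix-valued, and the function names here are illustrative.

        import numpy as np

        def cgns_filter(u1, dt, A0, A1, a0, a1, s1, s2, mu0=0.0, R0=1.0):
            # Posterior mean and variance of the unobserved u2 given the observed
            # path of u1 for the scalar conditional Gaussian model
            #   du1 = (A0(u1) + A1(u1) u2) dt + s1 dW1   (observed)
            #   du2 = (a0(u1) + a1(u1) u2) dt + s2 dW2   (unobserved)
            n = len(u1)
            mu = np.empty(n)
            R = np.empty(n)
            mu[0], R[0] = mu0, R0
            for k in range(n - 1):
                x = u1[k]
                innov = u1[k + 1] - u1[k] - (A0(x) + A1(x) * mu[k]) * dt
                gain = R[k] * A1(x) / s1 ** 2
                mu[k + 1] = mu[k] + (a0(x) + a1(x) * mu[k]) * dt + gain * innov
                R[k + 1] = R[k] + (2 * a1(x) * R[k] + s2 ** 2
                                   - (R[k] * A1(x)) ** 2 / s1 ** 2) * dt
            return mu, R

     These conditional statistics are the building block for the data assimilation, parameter estimation, and conditional sampling tasks described in the abstract.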