Title: A causality-based learning approach for discovering the underlying dynamics of complex systems from partial observations with stochastic parameterization
Discovering the underlying dynamics of complex systems from data is an important practical topic. Constrained optimization algorithms are widely utilized and have led to many successes. Yet such purely data-driven methods may bring about incorrect physics in the presence of random noise and cannot easily handle situations with incomplete data. In this paper, a new iterative learning algorithm for complex turbulent systems with partial observations is developed that alternates between identifying model structures, recovering unobserved variables, and estimating parameters. First, a causality-based learning approach is utilized for the sparse identification of model structures, which takes into account certain physics knowledge that is pre-learned from data. It has unique advantages in coping with indirect coupling between features and is robust to stochastic noise. A practical algorithm is designed to facilitate causal inference for high-dimensional systems. Next, a systematic nonlinear stochastic parameterization is built to characterize the time evolution of the unobserved variables. Closed analytic formulas via efficient nonlinear data assimilation are exploited to sample the trajectories of the unobserved variables, which are then treated as synthetic observations to advance rapid parameter estimation. Furthermore, the localization of the state variable dependence and the physics constraints are incorporated into the learning procedure, which mitigates the curse of dimensionality and prevents the finite-time blow-up issue. Numerical experiments show that the new algorithm identifies the model structure and provides suitable stochastic parameterizations for many complex nonlinear systems with chaotic dynamics, spatiotemporal multiscale structures, intermittency, and extreme events.
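The sparse identification step relies on causation entropy rather than direct regression, which is what makes it robust to indirect coupling between features and to stochastic noise. Below is a minimal sketch of causation-entropy-based structure selection under a Gaussian approximation of the entropies; the function names, threshold, and candidate feature library are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_cond_entropy(x, z):
    """0.5*log of the conditional variance of x given z (Gaussian approximation);
    additive constants are dropped since only differences of entropies matter."""
    if z.shape[1] == 0:
        return 0.5 * np.log(np.var(x))
    cov = np.cov(np.column_stack([x, z]), rowvar=False)
    cxx, cxz, czz = cov[0, 0], cov[0, 1:], cov[1:, 1:]
    cond_var = cxx - cxz @ np.linalg.solve(czz, cxz)
    return 0.5 * np.log(cond_var)

def causation_entropy_matrix(dxdt, features):
    """C[i, j]: causation entropy from candidate feature j to the dynamics of state i."""
    n, m = dxdt.shape[1], features.shape[1]
    C = np.zeros((n, m))
    for i in range(n):
        h_full = gaussian_cond_entropy(dxdt[:, i], features)
        for j in range(m):
            rest = np.delete(features, j, axis=1)
            C[i, j] = gaussian_cond_entropy(dxdt[:, i], rest) - h_full
    return C

def sparse_identify(dxdt, features, threshold=1e-3):
    """Keep only features whose causation entropy exceeds the threshold,
    then estimate coefficients of the retained terms by least squares."""
    C = causation_entropy_matrix(dxdt, features)
    mask = C > threshold
    coeffs = np.zeros_like(C)
    for i in range(dxdt.shape[1]):
        idx = np.where(mask[i])[0]
        if idx.size:
            coeffs[i, idx], *_ = np.linalg.lstsq(features[:, idx], dxdt[:, i], rcond=None)
    return mask, coeffs
```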
Award ID(s): 2118399
NSF-PAR ID: 10477044
Author(s) / Creator(s):
Publisher / Repository: Elsevier
Date Published:
Journal Name: Physica D: Nonlinear Phenomena
Volume: 449
Issue: C
ISSN: 0167-2789
Page Range / eLocation ID: 133743
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Developing suitable approximate models for analyzing and simulating complex nonlinear systems is practically important. This paper aims at exploring the skill of a rich class of nonlinear stochastic models, known as the conditional Gaussian nonlinear system (CGNS), as both a cheap surrogate model and a fast preconditioner for facilitating many computationally challenging tasks. The CGNS preserves the underlying physics to a large extent and can reproduce intermittency, extreme events, and other non-Gaussian features in many complex systems arising from practical applications. Three interrelated topics are studied. First, the closed analytic formulas for the conditional statistics provide an efficient and accurate data assimilation scheme. It is shown that the data assimilation skill of a suitable CGNS approximate forecast model outweighs that of applying an ensemble method even to the perfect model with strong nonlinearity, where the latter suffers from filter divergence. Second, the CGNS allows the development of a fast algorithm for simultaneously estimating the parameters and the unobserved variables with uncertainty quantification in the presence of only partial observations. Utilizing an appropriate CGNS as a preconditioner significantly reduces the computational cost in accurately estimating the parameters in the original complex system. Finally, the CGNS advances rapid and statistically accurate algorithms for computing the probability density function and sampling the trajectories of the unobserved state variables. These fast algorithms facilitate the development of an efficient and accurate data-driven method for predicting the linear response of the original system with respect to parameter perturbations based on a suitable CGNS preconditioner.
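The closed analytic formulas for the conditional statistics follow the conditional Gaussian filtering framework. A schematic form is sketched below in assumed notation (u_I observed, u_II unobserved); the exact formulation in the paper may differ.

```latex
% Conditional Gaussian nonlinear system:
%   d u_I    = [ A_0(u_I,t) + A_1(u_I,t) u_{II} ] dt + \Sigma_I    dW_I
%   d u_{II} = [ a_0(u_I,t) + a_1(u_I,t) u_{II} ] dt + \Sigma_{II} dW_{II}
% Conditioned on the observed trajectory of u_I, the law of u_{II} remains Gaussian,
% N(\mu_f(t), R_f(t)), with closed evolution equations:
\begin{align}
  d\mu_f &= (a_0 + a_1 \mu_f)\,dt
           + R_f A_1^{\mathsf{T}} \bigl(\Sigma_I \Sigma_I^{\mathsf{T}}\bigr)^{-1}
             \bigl(d u_I - (A_0 + A_1 \mu_f)\,dt\bigr), \\
  dR_f   &= \bigl(a_1 R_f + R_f a_1^{\mathsf{T}} + \Sigma_{II}\Sigma_{II}^{\mathsf{T}}
           - R_f A_1^{\mathsf{T}} \bigl(\Sigma_I \Sigma_I^{\mathsf{T}}\bigr)^{-1} A_1 R_f\bigr)\,dt.
\end{align}
```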
  2. Abstract

    A hybrid data assimilation algorithm is developed for complex dynamical systems with partial observations. The method starts with applying a spectral decomposition to the entire spatiotemporal fields, followed by creating a machine learning model that builds a nonlinear map between the coefficients of observed and unobserved state variables for each spectral mode. A cheap low‐order nonlinear stochastic parameterized extended Kalman filter (SPEKF) model is employed as the forecast model in the ensemble Kalman filter to deal with each mode associated with the observed variables. The resulting ensemble members are then fed into the machine learning model to create an ensemble of the corresponding unobserved variables. In addition to the ensemble spread, the training residual in the machine learning‐induced nonlinear map is further incorporated into the state estimation, advancing the diagnostic quantification of the posterior uncertainty. The hybrid data assimilation algorithm is applied to a precipitating quasi‐geostrophic (PQG) model, which includes the effects of water vapor, clouds, and rainfall beyond the classical two‐level QG model. The complicated nonlinearities in the PQG equations prevent traditional methods from building simple and accurate reduced‐order forecast models. In contrast, the SPEKF forecast model is skillful in recovering the intermittent observed states, and the machine learning model effectively estimates the chaotic unobserved signals. Utilizing the calibrated SPEKF and machine learning models under a moderate cloud fraction, the resulting hybrid data assimilation remains reasonably accurate when applied to other geophysical scenarios with nearly clear skies or relatively heavy rainfall, implying the robustness of the algorithm for extrapolation.
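A minimal per-mode sketch of the assimilation cycle described above is given below, assuming a scalar observed coefficient and a perturbed-observation ensemble Kalman update; `ml_map` and `residual_std` are illustrative placeholders for the trained nonlinear map and its training residual, which are not specified here.

```python
import numpy as np

def hybrid_da_step(forecast_ensemble, obs, obs_std, ml_map, residual_std):
    """One assimilation cycle for a single spectral mode (illustrative).

    forecast_ensemble: (N,) SPEKF forecast ensemble of the observed coefficient
    obs, obs_std:      scalar observation and its error standard deviation
    ml_map:            learned map from the observed to the unobserved coefficient
    residual_std:      ML training residual, folded into the posterior spread
    """
    # Perturbed-observation ensemble Kalman update (observation operator H = 1)
    xf = np.asarray(forecast_ensemble, dtype=float)
    pf = np.var(xf, ddof=1)
    k = pf / (pf + obs_std**2)                      # Kalman gain
    perturbed_obs = obs + obs_std * np.random.randn(xf.size)
    xa = xf + k * (perturbed_obs - xf)              # analysis ensemble, observed mode

    # Map each analysis member to the unobserved coefficient and
    # inflate the spread with the ML training residual
    ya = np.array([ml_map(x) for x in xa])
    ya = ya + residual_std * np.random.randn(xa.size)
    return xa, ya
```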

     
  3. Abstract

    Modeling and understanding sea ice dynamics in marginal ice zones rely on measurements of sea ice. Lagrangian observations of ice floes provide insight into the dynamics of sea ice, the ocean, and the atmosphere. However, optical satellite images are susceptible to atmospheric noise, leading to gaps in the retrieved time series of floe positions. This paper presents an efficient and statistically accurate nonlinear dynamical interpolation framework for recovering missing floe observations. It exploits a balanced physics‐based and data‐driven construction to address the challenges posed by the high‐dimensional and nonlinear nature of the coupled atmosphere‐ice‐ocean system, where effective reduced‐order stochastic models, nonlinear data assimilation, and simultaneous parameter estimation are systematically integrated. The new method succeeds in recovering the locations, curvatures, angular displacements, and the associated strong non‐Gaussian distributions of the missing floes in the Beaufort Sea. It also accurately estimates floe thickness and recovers the unobserved underlying ocean field with an appropriate uncertainty quantification, advancing our understanding of Arctic climate.
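The full method couples reduced-order stochastic models of the atmosphere-ice-ocean system with nonlinear data assimilation and parameter estimation. The sketch below illustrates only the core interpolation idea on a single floe coordinate, using a linear stochastic surrogate with a Kalman forward filter and a Rauch-Tung-Striebel backward smoother; all parameter names are illustrative.

```python
import numpy as np

def kalman_smoother_interpolate(obs, obs_mask, dt, damping, sigma, obs_std):
    """Fill gaps in a 1-D floe coordinate using the linear stochastic surrogate
    dx = -damping * x dt + sigma dW, assimilating positions where available.
    Returns smoothed means and variances (illustrative sketch)."""
    T = obs.size
    F = 1.0 - damping * dt              # discrete-time transition coefficient
    Q = sigma**2 * dt                   # process noise variance
    R = obs_std**2                      # observation noise variance
    m = np.zeros(T); P = np.zeros(T)    # filtered mean / variance
    mp = np.zeros(T); Pp = np.zeros(T)  # predicted mean / variance
    x, p = obs[obs_mask][0], R          # initialize at the first available observation
    for t in range(T):
        x, p = F * x, F * p * F + Q     # forecast step
        mp[t], Pp[t] = x, p
        if obs_mask[t]:                 # update only when a position was retrieved
            k = p / (p + R)
            x, p = x + k * (obs[t] - x), (1 - k) * p
        m[t], P[t] = x, p
    # Rauch-Tung-Striebel backward pass propagates information into the gaps
    ms, Ps = m.copy(), P.copy()
    for t in range(T - 2, -1, -1):
        g = P[t] * F / Pp[t + 1]
        ms[t] = m[t] + g * (ms[t + 1] - mp[t + 1])
        Ps[t] = P[t] + g * (Ps[t + 1] - Pp[t + 1]) * g
    return ms, Ps
```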

     
  4. Modeling corrosion growth for complex systems such as oil refinery systems is a major challenge, since the corrosion process in oil and gas pipelines is inherently stochastic and depends on many factors, including exposure to environmental conditions, operating conditions, and electrochemical reactions. Moreover, the number of sensors is usually limited, and sensor data are incomplete and scattered, which hinders the ability to capture corrosion growth behaviors. Therefore, this paper proposes a Multi-sensor Corrosion Growth Model with Latent Variables to predict the corrosion growth process in oil refinery piping. The proposed model combines a hierarchical clustering algorithm with a vector autoregression (VAR) model. The clustering algorithm finds the hidden (i.e., latent) clusters of the measured time series, and the time series from the same cluster are included in the VAR model to predict the corrosion depth from multiple sensors. The model can capture the relationships among sensor time series and identify latent variables. A real case study of an oil refinery system, in which in-line inspection (ILI) data were collected, was used to validate the model. For corrosion growth prediction, the paper compared the prediction accuracy of the VAR model with three other forms of the power law model, which is widely used to estimate the time-dependent depth of corrosion: the power function (PF), the PF with corrosion initiation time (PFIT), and the PF with corrosion initiation time and covariates (PFCOV). The results showed that the VAR model has the lowest prediction error based on the mean absolute percentage error (MAPE) for the test data. The proposed model is therefore useful for dealing with complex systems that exhibit a variety of corrosion growth behaviors, such as oil refinery systems, and can also be applied in other real-time applications.
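A minimal sketch of the two-stage idea, clustering sensors and fitting one VAR model per cluster, is shown below; the correlation-based distance, cluster count, and lag order are illustrative choices rather than the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from statsmodels.tsa.api import VAR

def cluster_and_forecast(series, n_clusters=3, lags=2, steps=5):
    """series: (T, n_sensors) corrosion-depth time series.
    Groups sensors by correlation-based hierarchical clustering, then fits
    one VAR model per cluster and forecasts each sensor's depth (illustrative)."""
    # Correlation distance between sensor time series (condensed form for linkage)
    corr = np.corrcoef(series.T)
    dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]
    labels = fcluster(linkage(dist, method="average"), n_clusters, criterion="maxclust")

    forecasts = np.zeros((steps, series.shape[1]))
    for c in np.unique(labels):
        cols = np.where(labels == c)[0]
        data = series[:, cols]
        if cols.size == 1:
            # VAR needs at least two series; fall back to persistence for singletons
            forecasts[:, cols[0]] = data[-1, 0]
            continue
        model = VAR(data).fit(maxlags=lags)
        forecasts[:, cols] = model.forecast(data[-model.k_ar:], steps=steps)
    return labels, forecasts
```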
  5. Obeid, I. ; Selesnik, I. ; Picone, J. (Ed.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations. 
The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness is added to the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train quicker, but these algorithms are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a sense of how our model is performing per experiment and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data order from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
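A minimal sketch of the seeding, precision, and data-ordering controls described above is given below. The poster refers to TensorFlow 1.x (where the seed call is tf.set_random_seed); the sketch assumes TensorFlow 2.x naming, and none of it removes cuDNN's inherent non-determinism.

```python
import os
import random
import numpy as np
import tensorflow as tf

def configure_reproducibility(seed=1337):
    """Seed every RNG the training pipeline touches and pin 32-bit floats.
    This narrows run-to-run variation but does not make GPU training deterministic."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)                       # Python's built-in RNG
    np.random.seed(seed)                    # NumPy (shuffling, weight init helpers, etc.)
    tf.random.set_seed(seed)                # TensorFlow graph-level seed
    tf.keras.backend.set_floatx("float32")  # keep computations in 32-bit precision

def fixed_order_dataset(features, labels, batch_size=32):
    """Build a dataset that presents the data in the same order every run
    (no shuffling), matching the fixed-order strategy described above."""
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.batch(batch_size)
```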