Models of many engineering and natural systems are imperfect. The discrepancy between the mathematical representation of a true physical system and its imperfect model is called the model error. Model errors can lead to substantial differences between the numerical solutions of the model and the state of the system, particularly for nonlinear, multiscale phenomena. Thus, there is increasing interest in reducing model errors, particularly by leveraging the rapidly growing observational data to understand their physics and sources. Here, we introduce a framework named MEDIDA: Model Error Discovery with Interpretability and Data Assimilation. MEDIDA only requires a working numerical solver of the model and a small number of noise-free or noisy sporadic observations of the system. In MEDIDA, the model error is first estimated from differences between the observed states and model-predicted states (the latter are obtained from a number of one-time-step numerical integrations from the previous observed states). If observations are noisy, a data assimilation technique, such as the ensemble Kalman filter, is employed to provide the analysis state of the system, which is then used to estimate the model error. Finally, an equation-discovery technique, here the relevance vector machine, a sparsity-promoting Bayesian method, is used to identify an interpretable, parsimonious, and closed-form representation of the model error. Using the chaotic Kuramoto–Sivashinsky system as the test case, we demonstrate the excellent performance of MEDIDA in discovering different types of structural/parametric model errors, representing different types of missing physics, using noise-free and noisy observations.
Award ID(s): 2005123
NSF-PAR ID: 10367968
Publisher / Repository: American Institute of Physics
Journal Name: Chaos: An Interdisciplinary Journal of Nonlinear Science
Volume: 32
Issue: 6
ISSN: 1054-1500
Page Range / eLocation ID: Article No. 061105
Sponsoring Org: National Science Foundation
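The MEDIDA workflow described above can be sketched end-to-end on a toy scalar system. Everything here is illustrative: the dynamics and coefficients are invented, and ordinary thresholded least squares stands in for the relevance vector machine used in the paper. The model error is estimated from one-time-step forecasts, then regressed onto a library of candidate closed-form terms:

```python
import numpy as np

# Illustrative toy only: the "true" system has a quadratic term the model lacks.
def f_true(u):  return -u + 0.5 * u**2   # hypothetical truth
def f_model(u): return -u                # imperfect model (missing 0.5*u^2)

dt = 1e-3
rng = np.random.default_rng(0)
u_obs = rng.uniform(0.5, 2.0, size=200)        # sporadic observed states
u_next = u_obs + dt * f_true(u_obs)            # observation one time step later
u_pred = u_obs + dt * f_model(u_obs)           # one-time-step model forecast

err = (u_next - u_pred) / dt                   # pointwise model-error estimate

# Library of candidate closed-form terms; thresholded least squares is a
# simple stand-in for the sparsity-promoting relevance vector machine.
library = np.column_stack([np.ones_like(u_obs), u_obs, u_obs**2, u_obs**3])
coef, *_ = np.linalg.lstsq(library, err, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0                # hard threshold promotes sparsity

print(coef)  # recovers the missing physics: coefficient ~0.5 on u^2
```

The recovered coefficient vector is sparse and interpretable: only the u^2 term survives thresholding, matching the term that was deliberately left out of the model.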

The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from noisily and partially observed data. We compare pure data-driven learning with hybrid models which incorporate imperfect domain knowledge, referring to the discrepancy between an assumed truth model and the imperfect mechanistic model as model error. Our formulation is agnostic to the chosen machine-learning model, is presented in both continuous- and discrete-time settings, and is compatible both with model errors that exhibit substantial memory and errors that are memoryless. First, we study memoryless model error that is linear with respect to parametric dependence from a learning-theory perspective, defining excess risk and generalization error. For ergodic continuous-time systems, we prove that both excess risk and generalization error are bounded above by terms that diminish with the square root of T, the time interval over which training data are specified. Secondly, we study scenarios that benefit from modeling with memory, proving universal approximation theorems for two classes of continuous-time recurrent neural networks (RNNs): both can learn memory-dependent model error, assuming that it is governed by a finite-dimensional hidden variable and that, together, the observed and hidden variables form a continuous-time Markovian system. In addition, we connect one class of RNNs to reservoir computing, thereby relating learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features. Numerical results are presented (Lorenz '63, Lorenz '96 multiscale systems) to compare purely data-driven and hybrid approaches, finding hybrid methods less data-hungry and more parametrically efficient.
We also find that, while a continuous-time framing allows for robustness to irregular sampling and desirable domain interpretability, a discrete-time framing can provide similar or better predictive performance, especially when data are undersampled and the vector field defining the true dynamics cannot be identified. Finally, we demonstrate numerically how data assimilation can be leveraged to learn hidden dynamics from noisy, partially observed data, and illustrate challenges in representing memory by this approach, and in the training of such models.
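The hybrid, discrete-time setting above can be illustrated with a minimal sketch. The dynamics here are invented, and random tanh features stand in for the paper's random-feature and RNN constructions: only the model-error residual is learned, on top of a known mechanistic map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented discrete-time truth: known linear mechanics plus an unknown residual.
def true_step(x):  return 0.9 * x + 0.3 * np.sin(x)
def mech_step(x):  return 0.9 * x             # imperfect mechanistic model

# Observed trajectory of the true map
T = 500
x = np.empty(T + 1)
x[0] = 1.0
for k in range(T):
    x[k + 1] = true_step(x[k])

# Random tanh features: an illustrative stand-in for random-feature maps
W = rng.normal(size=50)
b = rng.normal(size=50)
def features(x):
    return np.tanh(np.outer(x, W) + b)

# Fit only the model error: residual = observed next state - mechanistic guess
residual = x[1:] - mech_step(x[:-1])
theta, *_ = np.linalg.lstsq(features(x[:-1]), residual, rcond=None)

def hybrid_step(x):
    return mech_step(x) + features(np.atleast_1d(x)) @ theta

# The hybrid one-step error is far below the purely mechanistic error
x_test = 1.5
err_mech = abs(true_step(x_test) - mech_step(x_test))
err_hybrid = abs(true_step(x_test) - hybrid_step(x_test)[0])
```

Because the mechanistic part is supplied rather than learned, the regression only has to represent the (smooth, low-dimensional) residual, which is one intuition for why hybrid methods are less data-hungry.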

Abstract Weather forecasts made with imperfect models contain state‐dependent errors. Data assimilation (DA) partially corrects these errors with new information from observations. As such, the corrections, or “analysis increments,” produced by the DA process embed information about model errors. An attempt is made here to extract that information to improve numerical weather prediction. Neural networks (NNs) are trained to predict corrections to the systematic error in the National Oceanic and Atmospheric Administration's FV3‐GFS model based on a large set of analysis increments. A simple NN focusing on an atmospheric column significantly improves the estimated model error correction relative to a linear baseline. Leveraging large‐scale horizontal flow conditions using a convolutional NN, when compared to the simple column‐oriented NN, does not improve skill in correcting model error. The sensitivity of model error correction to forecast inputs is highly localized by vertical level and by meteorological variable, and the error characteristics vary across vertical levels. Once trained, the NNs are used to apply an online correction to the forecast during model integration. Improvements are evaluated both within a cycled DA system and across a collection of 10‐day forecasts. It is found that applying state‐dependent NN‐predicted corrections to the model forecast improves the overall quality of DA and improves the 10‐day forecast skill at all lead times.
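The idea of regressing analysis increments on the forecast inputs and then applying the learned correction online during integration can be sketched in a scalar toy problem. The tendencies and bias below are invented, and a linear fit stands in for the paper's neural networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented scalar "column": the model tendency carries a systematic error.
def true_tendency(x):  return -0.5 * x
def model_tendency(x): return -0.5 * x + 0.2 * x - 0.1  # bias: 0.2*x - 0.1

dt = 0.1
x_a = rng.uniform(-2.0, 2.0, size=300)               # analysis states from cycled DA
x_f = x_a + dt * model_tendency(x_a)                 # one-cycle model forecasts
increments = (x_a + dt * true_tendency(x_a)) - x_f   # analysis increments

# Linear baseline: regress increments on the forecast-input state
A = np.column_stack([x_a, np.ones_like(x_a)])
w, *_ = np.linalg.lstsq(A, increments, rcond=None)

# Online correction: add the predicted increment at every model step
def corrected_step(x):
    return x + dt * model_tendency(x) + (w[0] * x + w[1])

x = 1.0
for _ in range(100):
    x = corrected_step(x)
# The corrected model now tracks the true decay toward zero instead of
# drifting to the biased model's spurious fixed point.
```

In this toy, the learned increment exactly cancels the state-dependent bias, so the corrected integration reproduces the true dynamics; with a real model the cancellation is only partial, which is why skill is evaluated within cycled DA and across forecasts.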

Abstract The largest obstacle to managing satellites in low Earth orbit (LEO) is accurately forecasting the neutral mass densities that appreciably impact atmospheric drag. Empirical thermospheric models are often used to estimate neutral densities, but they struggle to forecast neutral densities during geomagnetic storms, when densities are highly variable. Physics‐based models are thus increasingly turned to for their ability to describe the dynamical evolution of neutral densities. However, these models require observations to constrain dynamical state variables to be able to forecast mass densities with adequate fidelity. The LEO environment has scarce neutral state observations. Here, we demonstrate, in simulated experiments, a reduction in errors in orbits and neutral densities using a physics‐based data assimilation approach with ionospheric observations. Using a coupled thermosphere‐ionosphere model, the Thermosphere Ionosphere Electrodynamics General Circulation Model, we assimilate Constellation Observing System for Meteorology, Ionosphere, and Climate electron density profiles (EDPs) derived from radio occultation (RO) observations. We use the EDPs to directly update neutral states, improving errors for neutral temperature by 70% and neutral winds by 20%. The updated neutral temperature and neutral winds additionally improve helium composition errors by 60% and 40%, respectively. Improved neutral density estimates correspond to a reduction in orbit errors of 1.2 km over 2 days, a 70% reduction over a no‐assimilation control, and a 29 km improvement over 9 days. This study builds on the results of our earlier work to further develop and demonstrate the potential of using a vast and growing RO data source, with a physics‐based model, to overcome our limited number of neutral observations.
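The mechanism by which an observed quantity (such as electron density) can correct an unobserved, correlated one (such as neutral temperature) is the cross-covariance of the forecast ensemble. A minimal sketch with a two-variable state and a stochastic ensemble Kalman filter update, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented two-variable state: [observed proxy, unobserved neutral quantity].
# Forecast errors are correlated, so observing one informs the other.
truth = np.array([10.0, 900.0])
N = 500
base = rng.normal(size=N)                       # shared (correlated) error mode
ens = np.empty((2, N))
ens[0] = truth[0] + 1.5 + 1.0 * base + 0.2 * rng.normal(size=N)
ens[1] = truth[1] + 45.0 + 30.0 * base + 5.0 * rng.normal(size=N)

# Observe only component 0, with error variance r
r = 0.04
y = truth[0] + rng.normal(scale=np.sqrt(r))

# Ensemble Kalman gain: K = cov(x, Hx) / (var(Hx) + r)
Hx = ens[0]
K = np.cov(ens, Hx)[:2, 2] / (np.var(Hx, ddof=1) + r)

# Stochastic EnKF update: each member assimilates a perturbed observation
perturbed = y + rng.normal(scale=np.sqrt(r), size=N)
analysis = ens + np.outer(K, perturbed - Hx)

# The unobserved variable is corrected through the cross-covariance alone
err_f = abs(ens[1].mean() - truth[1])           # forecast-mean error (~45)
err_a = abs(analysis[1].mean() - truth[1])      # analysis-mean error (small)
```

The unobserved component is never measured, yet its mean error and spread both shrink, which is the same principle that lets EDPs constrain neutral temperature and winds in the coupled model.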

Obeid, I.; Selesnick, I.; Picone, J. (Eds.) The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; (3) a job should produce comparable results if the data are presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is difficult to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation.
They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, comparing two algorithms becomes a challenging problem since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when rerunning the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations. The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues.
Cross-validation is one technique used to mitigate this, but it is expensive since multiple runs over the data are needed, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, it adds an element of randomness to the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train faster, but they are nondeterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us an indication of how our model performs per experiment and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc.
To mitigate our problems with reproducibility, we first make sure that the data are processed in the same order during training. Therefore, we save the data ordering from the last experiment so that the newer experiment follows the same order. If we allow the data to be shuffled, performance can be affected by how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though, technically, it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation, we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
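The seeding and data-ordering discipline described above can be illustrated framework-agnostically. The toy "training run" below is an invented NumPy stand-in for a TensorFlow job: with every RNG seeded and the data type pinned to 32-bit floats, repeated runs match bit for bit, while changing only the shuffle seed changes the result.

```python
import numpy as np

def train_tiny_model(init_seed, shuffle_seed):
    """Toy stand-in for a training run in which every RNG is seeded explicitly
    and the data type is pinned to 32-bit floats."""
    init_rng = np.random.default_rng(init_seed)       # weight-initialization RNG
    order_rng = np.random.default_rng(shuffle_seed)   # data-ordering RNG
    X = np.linspace(-1.0, 1.0, 64, dtype=np.float32)  # fixed 32-bit data
    y = (2 * X + 1).astype(np.float32)
    idx = order_rng.permutation(64)                   # deterministic shuffle
    w = init_rng.normal(size=2).astype(np.float32)    # seeded initialization
    for i in idx:                                     # SGD over the fixed order
        pred = w[0] * X[i] + w[1]
        grad = 2 * (pred - y[i]) * np.array([X[i], 1.0], dtype=np.float32)
        w -= np.float32(0.1) * grad
    return w

w1 = train_tiny_model(0, 1)   # same seeds ...
w2 = train_tiny_model(0, 1)   # ... bit-for-bit identical result
w3 = train_tiny_model(0, 2)   # different data order -> different weights
```

This captures the two levers discussed in the poster: seeding every RNG makes a run repeatable, but the result still depends on the data order, which is why the ordering itself must also be saved and replayed.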

Abstract The Western United States is dominated by natural lands that play a critical role for carbon balance, water quality, and timber reserves. This region is also particularly vulnerable to forest mortality from drought, insect attack, and wildfires, thus requiring constant monitoring to assess ecosystem health. Carbon monitoring techniques are challenged by the complex mountainous terrain; thus, there is an opportunity for data assimilation systems that combine land surface models and satellite‐derived observations to provide improved carbon monitoring. Here, we use the Data Assimilation Research Testbed to adjust the Community Land Model (CLM5.0) with remotely sensed observations of leaf area and above‐ground biomass. The adjusted simulation significantly reduced the above‐ground biomass and leaf area, leading to a reduction in both photosynthesis and respiration fluxes. The reductions in the carbon fluxes mostly offset each other, so both the adjusted and free simulations projected a weak carbon sink to the land. This result differed from a separate observation‐constrained model (FLUXCOM) that projected strong carbon uptake to the land. Simulation diagnostics suggested that water limitation had an important influence upon the magnitude and spatial pattern of carbon uptake through photosynthesis. We recommend that additional observations important for water cycling (e.g., snow water equivalent, land surface temperature) be included to improve the veracity of the spatial pattern in carbon uptake. Furthermore, the assimilation system should be enhanced to maximize the number of simulated state variables that are adjusted, especially those related to the recommended observed quantities, including water cycling and soil carbon.