Stochastic parameterizations account for uncertainty in the representation of unresolved subgrid processes by sampling from the distribution of possible subgrid forcings. Some existing stochastic parameterizations utilize data‐driven approaches to characterize uncertainty, but these approaches require significant structural assumptions that can limit their scalability. Machine learning models, including neural networks, are able to represent a wide range of distributions and build optimized mappings between a large number of inputs and subgrid forcings. Recent research on machine learning parameterizations has focused only on deterministic parameterizations. In this study, we develop a stochastic parameterization using the generative adversarial network (GAN) machine learning framework. The GAN stochastic parameterization is trained and evaluated on output from the Lorenz '96 model, which is a common baseline model for evaluating both parameterization and data assimilation techniques. We evaluate different ways of characterizing the input noise for the model and perform model runs with the GAN parameterization at weather and climate time scales. Some of the GAN configurations perform better than a baseline bespoke parameterization at both time scales, and the networks closely reproduce the spatiotemporal correlations and regimes of the Lorenz '96 system. We also find that, in general, those models which produce skillful forecasts are also associated with the best climate simulations.
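For context, a minimal sketch (Python/NumPy) of the standard two-scale Lorenz '96 system and of a coarse forecast model with a stochastic subgrid hook is given below; the parameter values are common defaults and the `generator` hook is an illustrative placeholder, not the paper's GAN architecture or configuration.

```python
import numpy as np

def l96_two_scale_tendency(X, Y, F=20.0, h=1.0, b=10.0, c=10.0):
    """Tendencies of the K resolved variables X and the K*J unresolved variables Y (flat, cyclic)."""
    K, J = X.size, Y.size // X.size
    # Subgrid coupling term: the forcing a parameterization must represent.
    U = -(h * c / b) * Y.reshape(K, J).sum(axis=1)
    dX = -np.roll(X, 1) * (np.roll(X, 2) - np.roll(X, -1)) - X + F + U
    dY = (-c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY, U

def coarse_tendency(X, generator, rng, F=20.0):
    """Coarse forecast model closed by a stochastic (e.g., GAN-based) parameterization."""
    z = rng.standard_normal(X.size)      # input noise characterizing subgrid uncertainty
    U_sampled = generator(X, z)          # hypothetical generator: samples a subgrid forcing given X and z
    return -np.roll(X, 1) * (np.roll(X, 2) - np.roll(X, -1)) - X + F + U_sampled
```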
- Award ID(s): 2009752
- PAR ID: 10552536
- Publisher / Repository: Journal of Advances in Modeling Earth Systems
- Date Published:
- Journal Name: Journal of Advances in Modeling Earth Systems
- Volume: 15
- Issue: 10
- ISSN: 1942-2466
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Coupled climate simulations that span several hundred years cannot be run at a high-enough spatial resolution to resolve mesoscale ocean dynamics. Recently, several studies have considered Deep Learning to parameterize subgrid forcing within macroscale ocean equations using data from ocean-only simulations with idealized geometry. We present a stochastic Deep Learning parameterization that is trained on data generated by CM2.6, a high-resolution state-of-the-art coupled climate model. We train a Convolutional Neural Network for the subgrid momentum forcing using macroscale surface velocities from a few selected subdomains with different dynamical regimes. At each location of the coarse grid, rather than predicting a single number for the subgrid momentum forcing, we predict both the mean and standard deviation of a Gaussian probability distribution. This approach requires training our neural network to minimize a negative log-likelihood loss function rather than the Mean Square Error, which has been the standard in applications of Deep Learning to the problem of parameterizations. Each estimate of the conditional mean subgrid forcing is thus associated with an uncertainty estimate, the standard deviation, which forms the basis for a stochastic subgrid parameterization. Offline tests show that our parameterization generalizes well to the global oceans and to a climate with increased CO2 levels without further training. We then implement our learned stochastic parameterization in an eddy-permitting idealized shallow water model. The implementation is stable and improves some statistics of the flow. Our work demonstrates the potential of combining Deep Learning tools with a probabilistic approach in parameterizing unresolved ocean dynamics.
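As a concrete illustration of the loss described above, here is a minimal sketch assuming PyTorch; the CNN name, tensor shapes, and variable names are placeholders rather than the authors' code.

```python
import torch

def gaussian_nll(mean, log_std, target):
    """Negative log-likelihood of target under N(mean, exp(log_std)^2), constant term dropped."""
    return (log_std + 0.5 * ((target - mean) * torch.exp(-log_std)) ** 2).mean()

# Usage sketch: the CNN outputs two channels per coarse grid point, the mean and
# log standard deviation of the subgrid momentum forcing.
# mean, log_std = cnn(coarse_surface_velocities).chunk(2, dim=1)
# loss = gaussian_nll(mean, log_std, true_subgrid_forcing)
# loss.backward(); optimizer.step()
# (PyTorch also provides torch.nn.GaussianNLLLoss, which takes the variance instead.)
```

Minimizing this loss rather than the mean square error is what ties the predicted standard deviation to a meaningful uncertainty estimate.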
Subgrid processes in global climate models are represented by parameterizations, which are a major source of uncertainty in simulations of climate. In recent years, it has been suggested that machine-learning (ML) parameterizations based on high-resolution model output data could be superior to traditional parameterizations. Currently, both traditional and ML parameterizations of subgrid processes in the atmosphere are based on a single-column approach, which uses information only from individual atmospheric columns. However, single-column parameterizations might not be ideal, since certain atmospheric phenomena, such as organized convective systems, can cross multiple grid boxes and involve slantwise circulations that are not purely vertical. Here we train neural networks (NNs) using non-local inputs spanning 3 × 3 columns. We find that including the non-local inputs improves the offline prediction of a range of subgrid processes. The improvement is especially notable for subgrid momentum transport and for atmospheric conditions associated with mid-latitude fronts and convective instability. Using an interpretability method, we find that the NN improvements partly rely on using the horizontal wind divergence, and we further show that including the divergence or vertical velocity as a separate input substantially improves offline performance. However, non-local winds continue to be useful inputs for parameterizing subgrid momentum transport even when the vertical velocity is included as an input. Overall, our results imply that the use of non-local variables and the vertical velocity as inputs could improve the performance of ML parameterizations, and the use of these inputs should be tested in online simulations in future work.
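A minimal sketch of how such non-local inputs could be assembled is shown below; the array layout, feature ordering, and wrap-around boundary handling are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def build_nonlocal_inputs(fields):
    """fields: array (nlat, nlon, nfeat) of column-wise inputs (e.g., temperature,
    humidity, and winds stacked over model levels).

    Returns (nlat, nlon, 9 * nfeat): each column concatenated with its 8 horizontal
    neighbors, giving the 3 x 3 patch of columns described above.
    """
    patches = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # periodic wrap in both directions for simplicity; a real model grid
            # would need proper treatment of the latitude boundaries
            patches.append(np.roll(fields, shift=(di, dj), axis=(0, 1)))
    return np.concatenate(patches, axis=-1)
```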
Accurate representation of unknown and subgrid physical processes through parameterizations (or closures), with quantified uncertainty, is critical for solving the coarse-grained partial differential equations that govern many problems, ranging from weather and climate prediction to turbulence simulations. Recent advances have seen machine learning (ML) increasingly applied to model these subgrid processes, resulting in hybrid physics-ML models built by integrating ML components with numerical solvers. In this work, we introduce a novel framework for the joint estimation and uncertainty quantification of physical parameters and machine learning parameterizations, leveraging differentiable programming. The approach is achieved through online training and efficient Bayesian inference within a high-dimensional parameter space, both enabled by differentiable programming. This proof of concept underscores the substantial potential of differentiable programming for synergistically combining machine learning with differential equations, thereby enhancing the capabilities of hybrid physics-ML modeling.
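A toy sketch of the core mechanism, assuming PyTorch and a made-up one-variable ODE: because every solver step is differentiable, a physical parameter and an NN closure receive gradients from the same rolled-out trajectory loss and can be estimated jointly. Only joint point estimation is shown here; the Bayesian inference layer described above is omitted.

```python
import torch

torch.manual_seed(0)

# "Truth" data from a toy relaxation ODE with coefficient 1.0 plus a term that
# plays the role of unresolved physics.
def true_step(u, dt=0.05):
    return u + dt * (-1.0 * u + 0.3 * torch.sin(3.0 * u))

u0 = torch.linspace(-2.0, 2.0, 16).unsqueeze(1)
with torch.no_grad():
    reference, u = [], u0
    for _ in range(20):
        u = true_step(u)
        reference.append(u)

# Hybrid model: physical parameter nu (to be estimated) plus an NN closure.
nu = torch.tensor(0.5, requires_grad=True)
closure = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def hybrid_step(u, dt=0.05):
    return u + dt * (-nu * u + closure(u))

opt = torch.optim.Adam([nu, *closure.parameters()], lr=1e-2)
for _ in range(300):                          # "online" training through the rolled-out solver
    u, loss = u0, 0.0
    for r in reference:                       # gradients flow through every solver step
        u = hybrid_step(u)
        loss = loss + ((u - r) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"estimated physical parameter nu: {nu.item():.3f}")  # jointly fitted with the closure weights
```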
Solidification is an integral part of metal manufacturing processes, where quantifying stochastic variations and manufacturing uncertainties is critically important. Accurate molecular dynamics (MD) simulations of metal solidification and the resulting properties are computationally prohibitive for probabilistic analyses in which thousands of random realizations are necessary. The adoption of inadequate model sizes and time scales in MD simulations leads to inaccuracies in each random realization, causing a large cumulative statistical error in the probabilistic results obtained through Monte Carlo (MC) simulations. In this work, we present a machine learning (ML) approach that serves as a data-driven surrogate for MD and requires only a few MD simulations. This efficient yet high-fidelity ML approach enables MC simulations for full-scale probabilistic characterization of solidified metal properties, considering stochasticity in influencing factors such as temperature and strain rate. Unlike conventional ML models, the proposed hybrid polynomial correlated function expansion, being a Bayesian ML approach, is data-efficient. Further, it can account for uncertainty in the training data by exploiting the mean and standard deviation of the MD simulations, which in principle addresses the issue of repeatability in stochastic simulations with low variance. Stochastic numerical results for solidified aluminum are presented based on complete probabilistic uncertainty quantification of mechanical properties such as Young's modulus, yield strength, and ultimate strength, illustrating that the proposed error-inclusive data-driven framework can reasonably predict these properties with a significant level of computational efficiency.
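As a rough illustration of the surrogate-plus-Monte-Carlo workflow, and not the authors' hybrid polynomial correlated function expansion, the sketch below fits a plain quadratic polynomial surrogate to a handful of placeholder "MD" results and then draws many cheap Monte Carlo samples; all numbers, ranges, and the commented-out run_md_simulation call are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis(T, log_sr):
    """Quadratic polynomial basis in temperature and log10 strain rate."""
    return np.column_stack([np.ones_like(T), T, log_sr, T**2, log_sr**2, T * log_sr])

# 1. A few expensive MD runs at sampled inputs (replaced here by synthetic values).
T_tr = rng.uniform(300.0, 900.0, size=12)      # temperature, K (placeholder range)
ls_tr = rng.uniform(8.0, 10.0, size=12)        # log10 strain rate, 1/s (placeholder range)
# y_tr = np.array([run_md_simulation(T, ls) for T, ls in zip(T_tr, ls_tr)])  # hypothetical call
y_tr = 70.0 - 0.02 * (T_tr - 300.0) + 2.0 * (ls_tr - 8.0)   # fake Young's modulus data, GPa

# 2. Fit the cheap surrogate by least squares.
coef, *_ = np.linalg.lstsq(basis(T_tr, ls_tr), y_tr, rcond=None)

# 3. Monte Carlo over the stochastic inputs, now affordable at 10^5 realizations.
T_mc = rng.normal(600.0, 50.0, size=100_000)
ls_mc = rng.normal(9.0, 0.3, size=100_000)
y_mc = basis(T_mc, ls_mc) @ coef
print(f"Young's modulus: mean {y_mc.mean():.1f} GPa, std {y_mc.std():.1f} GPa")
```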