Title: Training Physics‐Based Machine‐Learning Parameterizations With Gradient‐Free Ensemble Kalman Methods
Abstract: Most machine learning applications in Earth system modeling currently rely on gradient‐based supervised learning. This imposes stringent constraints on the nature of the data used for training (typically, residual time tendencies are needed), and it complicates learning about the interactions between machine‐learned parameterizations and other components of an Earth system model. Approaching learning about process‐based parameterizations as an inverse problem resolves many of these issues, since it allows parameterizations to be trained with partial observations or statistics that directly relate to quantities of interest in long‐term climate projections. Here, we demonstrate the effectiveness of Kalman inversion methods in treating learning about parameterizations as an inverse problem. We consider two different algorithms: unscented and ensemble Kalman inversion. Both methods involve highly parallelizable forward model evaluations, converge exponentially fast, and do not require gradient computations. In addition, unscented Kalman inversion provides a measure of parameter uncertainty. We illustrate how training parameterizations can be posed as a regularized inverse problem and solved by ensemble Kalman methods through the calibration of an eddy‐diffusivity mass‐flux scheme for subgrid‐scale turbulence and convection, using data generated by large‐eddy simulations. We find the algorithms amenable to batching strategies, robust to noise and model failures, and efficient in the calibration of hybrid parameterizations that can include empirical closures and neural networks.
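As an illustrative aside (not drawn from the paper's implementation), the basic ensemble Kalman inversion update the abstract refers to can be sketched in a few lines of NumPy on a toy linear inverse problem. The forward map, ensemble size, and noise covariance below are arbitrary choices for illustration; note that the forward evaluations across the ensemble are independent, which is what makes the method highly parallelizable:

```python
import numpy as np

def eki_step(thetas, G, y, Gamma, rng):
    """One ensemble Kalman inversion step on a (J, d) parameter ensemble."""
    # Forward evaluations for each ensemble member (embarrassingly parallel)
    Gs = np.array([G(t) for t in thetas])              # (J, m)
    dth = thetas - thetas.mean(axis=0)                 # (J, d) parameter anomalies
    dG = Gs - Gs.mean(axis=0)                          # (J, m) data anomalies
    J = len(thetas)
    C_tG = dth.T @ dG / J                              # parameter-data cross-covariance
    C_GG = dG.T @ dG / J                               # data covariance
    # Perturbed observations regularize the update
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return thetas + (y_pert - Gs) @ np.linalg.solve(C_GG + Gamma, C_tG.T)

# Toy linear inverse problem: recover theta_true from y = A @ theta_true
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
theta_true = np.array([1.5, -0.7])
Gamma = 1e-4 * np.eye(3)                               # observation noise covariance
y = A @ theta_true
thetas = rng.normal(size=(50, 2))                      # initial ensemble (prior draws)
for _ in range(20):
    thetas = eki_step(thetas, lambda t: A @ t, y, Gamma, rng)
theta_hat = thetas.mean(axis=0)                        # ensemble mean estimate
```

No gradient of the forward map is computed anywhere; only forward evaluations and empirical covariances enter the update, which is why the same sketch applies unchanged when `G` is a black-box parameterized model.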
Award ID(s):
1835860
PAR ID:
10370696
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Journal of Advances in Modeling Earth Systems
Volume:
14
Issue:
8
ISSN:
1942-2466
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract: This work integrates machine learning into an atmospheric parameterization to target uncertain mixing processes while maintaining interpretable, predictive, and well‐established physical equations. We adopt an eddy‐diffusivity mass‐flux (EDMF) parameterization for the unified modeling of various convective and turbulent regimes. To avoid drift and instability that plague offline‐trained machine learning parameterizations that are subsequently coupled with climate models, we frame learning as an inverse problem: Data‐driven models are embedded within the EDMF parameterization and trained online in a one‐dimensional vertical global climate model (GCM) column. Training is performed against output from large‐eddy simulations (LES) forced with GCM‐simulated large‐scale conditions in the Pacific. Rather than optimizing subgrid‐scale tendencies, our framework directly targets climate variables of interest, such as the vertical profiles of entropy and liquid water path. Specifically, we use ensemble Kalman inversion to simultaneously calibrate both the EDMF parameters and the parameters governing data‐driven lateral mixing rates. The calibrated parameterization outperforms existing EDMF schemes, particularly in tropical and subtropical locations of the present climate, and maintains high fidelity in simulating shallow cumulus and stratocumulus regimes under increased sea surface temperatures from AMIP4K experiments. The results showcase the advantage of physically constraining data‐driven models and directly targeting relevant variables through online learning to build robust and stable machine learning parameterizations.
2. Abstract: Cloud microphysics is a critical aspect of the Earth's climate system, which involves processes at the nano‐ and micrometer scales of droplets and ice particles. In climate modeling, cloud microphysics is commonly represented by bulk models, which contain simplified process rates that require calibration. This study presents a framework for calibrating warm‐rain bulk schemes using high‐fidelity super‐droplet simulations that provide a more accurate and physically based representation of cloud and precipitation processes. The calibration framework employs ensemble Kalman methods including Ensemble Kalman Inversion and Unscented Kalman Inversion to calibrate bulk microphysics schemes with probabilistic super‐droplet simulations. We demonstrate the framework's effectiveness by calibrating a single‐moment bulk scheme, resulting in a reduction of data‐model mismatch by more than 75% compared to the model with initial parameters. Thus, this study demonstrates a powerful tool for enhancing the accuracy of bulk microphysics schemes in atmospheric models and improving climate modeling.
3. Abstract: We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more. Moreover, the forward model is often given as a black box or is impractical to differentiate. Therefore derivative-free algorithms are highly desirable. We propose a framework, which is built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems. The basic method is based on an approximation of the filtering distribution of a novel mean-field dynamical system, into which the inverse problem is embedded as an observation operator. Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it. This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior. Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model; and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach. The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow; and learning subgrid-scale parameters in a global climate model. Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and are compared numerically. The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
4. Abstract: There are different strategies for training neural networks (NNs) as subgrid‐scale parameterizations. Here, we use a 1D model of the quasi‐biennial oscillation (QBO) and gravity wave (GW) parameterizations as testbeds. A 12‐layer convolutional NN that predicts GW forcings for given wind profiles, when trained offline in a big‐data regime (100‐year), produces realistic QBOs once coupled to the 1D model. In contrast, offline training of this NN in a small‐data regime (18‐month) yields unrealistic QBOs. However, online re‐training of just two layers of this NN using ensemble Kalman inversion and only time‐averaged QBO statistics leads to parameterizations that yield realistic QBOs. Fourier analysis of these three NNs' kernels suggests why/how re‐training works and reveals that these NNs primarily learn low‐pass, high‐pass, and a combination of band‐pass filters, potentially related to the local and non‐local dynamics in GW propagation and dissipation. These findings/strategies generally apply to data‐driven parameterizations of other climate processes.
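The kind of Fourier analysis of convolutional kernels mentioned in this abstract can be illustrated with a minimal sketch: classify a 1D kernel as low-, high-, or band-pass by where the magnitude of its frequency response peaks. The example kernels and bin thresholds below are hypothetical and not taken from the paper:

```python
import numpy as np

def filter_type(kernel, n_fft=64):
    """Classify a 1D convolution kernel by the peak of its frequency response."""
    mag = np.abs(np.fft.rfft(kernel, n=n_fft))     # magnitude response on [0, pi]
    peak = int(np.argmax(mag))
    if peak < n_fft // 8:                          # energy concentrated near 0
        return "low-pass"
    if peak > 3 * n_fft // 8:                      # energy concentrated near pi
        return "high-pass"
    return "band-pass"                             # energy concentrated in between

smooth = np.ones(5) / 5.0                   # moving average: passes low frequencies
diff = np.array([1.0, -1.0])                # first difference: passes high frequencies
lap = np.array([1.0, 0.0, -2.0, 0.0, 1.0])  # dilated second difference: mid-band
types = [filter_type(k) for k in (smooth, diff, lap)]
```

Applied to trained NN kernels rather than these textbook stencils, the same diagnostic is what distinguishes networks that learned local smoothing from those that learned non-local, oscillatory structure.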
  5. Randomized algorithms exploit stochasticity to reduce computational complexity. One important example is random feature regression (RFR) that accelerates Gaussian process regression (GPR). RFR approximates an unknown function with a random neural network whose hidden weights and biases are sampled from a probability distribution. Only the final output layer is fit to data. In randomized algorithms like RFR, the hyperparameters that characterize the sampling distribution greatly impact performance, yet are not directly accessible from samples. This makes optimization of hyperparameters via standard (gradient-based) optimization tools inapplicable. Inspired by Bayesian ideas from GPR, this paper introduces a random objective function that is tailored for hyperparameter tuning of vector-valued random features. The objective is minimized with ensemble Kalman inversion (EKI). EKI is a gradient-free particle-based optimizer that is scalable to high-dimensions and robust to randomness in objective functions. A numerical study showcases the new black-box methodology to learn hyperparameter distributions in several problems that are sensitive to the hyperparameter selection: two global sensitivity analyses, integrating a chaotic dynamical system, and solving a Bayesian inverse problem from atmospheric dynamics. The success of the proposed EKI-based algorithm for RFR suggests its potential for automated optimization of hyperparameters arising in other randomized algorithms. 
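The random feature regression setup this abstract describes — sample the hidden weights and biases from a hyperparameter-governed distribution, then fit only the output layer by linear least squares — can be sketched as follows. The Gaussian frequency sampling, the single lengthscale hyperparameter, and the toy target are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def rfr_fit(X, y, n_features=300, lengthscale=0.1, seed=0):
    """Random feature regression: random hidden layer, fit only the output weights."""
    rng = np.random.default_rng(seed)
    # Hyperparameters govern the sampling distribution of the hidden layer:
    # frequencies ~ N(0, 1/lengthscale^2), phases ~ Uniform(0, 2*pi).
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Phi = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)   # random Fourier features
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # only the output layer is fit
    return W, b, coef

def rfr_predict(X, W, b, coef):
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b) @ coef

# Toy 1D regression problem
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
W, b, coef = rfr_fit(X, y)
train_err = np.max(np.abs(rfr_predict(X, W, b, coef) - y))
```

The point of the abstract is that `lengthscale` (and any other sampling hyperparameters) only enters through random draws, so its effect on the fit is noisy and non-differentiable — which is why a gradient-free, noise-robust optimizer such as EKI is used to tune it.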