


Award ID contains: 1835860



  1. Abstract

    Dynamical cores used to study the circulation of the atmosphere employ various numerical methods, including finite‐volume, spectral‐element, global spectral, and hybrid methods. In this work, we explore the use of Flux‐Differencing Discontinuous Galerkin (FDDG) methods to simulate a fully compressible dry atmosphere at various resolutions. We show that the method offers a judicious compromise between high‐order accuracy and stability for large‐eddy simulations and simulations of the atmospheric general circulation. In particular, filters, divergence damping, diffusion, hyperdiffusion, or sponge layers are not required to ensure stability; only the numerical dissipation naturally afforded by FDDG is necessary. We apply the method to the simulation of dry convection in an atmospheric boundary layer and in a global atmospheric dynamical core in the standard benchmark of Held and Suarez (1994, https://doi.org/10.1175/1520-0477(1994)075<1825:APFTIO>2.0.CO;2).
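
The core ingredient of flux-differencing schemes is a two-point numerical flux with built-in conservation properties. As a minimal illustration only (Burgers' equation on a periodic grid, not the compressible-flow fluxes or the discontinuous Galerkin operators used in the paper), here is the standard entropy-conservative two-point flux inside a one-step finite-volume update:

```python
import numpy as np

def ec_flux(ul, ur):
    # Entropy-conservative two-point flux for Burgers' flux f(u) = u^2 / 2;
    # consistent: ec_flux(u, u) == u**2 / 2.
    return (ul * ul + ul * ur + ur * ur) / 6.0

def fv_step(u, dt, dx):
    """One forward-Euler finite-volume step on a periodic grid built from
    two-point fluxes -- a minimal stand-in for the flux-differencing idea."""
    F = ec_flux(u, np.roll(u, -1))            # interface fluxes F_{i+1/2}
    return u - dt / dx * (F - np.roll(F, 1))  # telescoping sum => conservative

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x)
u_next = fv_step(u, dt=0.01, dx=x[1] - x[0])
print(abs(u_next.sum() - u.sum()))  # conservation: cell-average sum unchanged
```

Because the interface fluxes telescope on a periodic grid, the update conserves the cell-average sum to machine precision, mirroring the built-in conservation that lets FDDG run stably without added filters or damping.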

     
  2. Abstract

    Data required to calibrate uncertain general circulation model (GCM) parameterizations are often only available in limited regions or time periods, for example, observational data from field campaigns, or data generated in local high‐resolution simulations. This raises the question of where and when to acquire additional data to be maximally informative about parameterizations in a GCM. Here we construct a new ensemble‐based parallel algorithm to automatically target data acquisition to regions and times that maximize the uncertainty reduction, or information gain, about GCM parameters. The algorithm uses a Bayesian framework that exploits a quantified distribution of GCM parameters as a measure of uncertainty. This distribution is informed by time‐averaged climate statistics restricted to local regions and times. The algorithm is embedded in the recently developed calibrate‐emulate‐sample framework, which performs efficient model calibration and uncertainty quantification with far fewer model evaluations than are typically needed for traditional approaches to Bayesian calibration. We demonstrate the algorithm with an idealized GCM, with which we generate surrogates of local data. In this perfect‐model setting, we calibrate parameters and quantify uncertainties in a quasi‐equilibrium convection scheme in the GCM. We consider targeted data that are (a) localized in space for statistically stationary simulations, and (b) localized in space and time for seasonally varying simulations. In these proof‐of‐concept applications, the calculated information gain reflects the reduction in parametric uncertainty obtained from Bayesian inference when harnessing a targeted sample of data. The largest information gain typically, but not always, results from regions near the intertropical convergence zone.
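
As a toy illustration of measuring information gain (our simplification under a Gaussian approximation; the paper's estimator and parameter values are not reproduced here), the gain from a data batch can be quantified as the Kullback-Leibler divergence between the updated and prior parameter distributions:

```python
import numpy as np

def gaussian_kl(m1, C1, m0, C0):
    """KL( N(m1, C1) || N(m0, C0) ): information gained when the
    parameter distribution tightens from (m0, C0) to (m1, C1)."""
    d = m0.size
    C0_inv = np.linalg.inv(C0)
    diff = m0 - m1
    return 0.5 * (np.trace(C0_inv @ C1)
                  + diff @ C0_inv @ diff
                  - d
                  + np.log(np.linalg.det(C0) / np.linalg.det(C1)))

# Hypothetical prior over two GCM parameters, and a posterior that a
# batch of local data has both shifted and narrowed:
m_prior, C_prior = np.zeros(2), np.eye(2)
m_post,  C_post  = np.array([0.3, -0.1]), 0.25 * np.eye(2)

print(gaussian_kl(m_post, C_post, m_prior, C_prior))  # ≈ 0.686 nats
```

Candidate regions and times can then be ranked by this scalar: the batch whose (approximate) posterior yields the largest divergence from the prior is the most informative acquisition target.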

     
  3. Abstract

    Most machine learning applications in Earth system modeling currently rely on gradient‐based supervised learning. This imposes stringent constraints on the nature of the data used for training (typically, residual time tendencies are needed), and it complicates learning about the interactions between machine‐learned parameterizations and other components of an Earth system model. Approaching learning about process‐based parameterizations as an inverse problem resolves many of these issues, since it allows parameterizations to be trained with partial observations or statistics that directly relate to quantities of interest in long‐term climate projections. Here, we demonstrate the effectiveness of Kalman inversion methods in treating learning about parameterizations as an inverse problem. We consider two different algorithms: unscented and ensemble Kalman inversion. Both methods involve highly parallelizable forward model evaluations, converge exponentially fast, and do not require gradient computations. In addition, unscented Kalman inversion provides a measure of parameter uncertainty. We illustrate how training parameterizations can be posed as a regularized inverse problem and solved by ensemble Kalman methods through the calibration of an eddy‐diffusivity mass‐flux scheme for subgrid‐scale turbulence and convection, using data generated by large‐eddy simulations. We find the algorithms amenable to batching strategies, robust to noise and model failures, and efficient in the calibration of hybrid parameterizations that can include empirical closures and neural networks.
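
A single ensemble Kalman inversion step can be sketched as follows (our illustrative implementation on a linear toy problem; the ensemble size, iteration count, and forward map are assumptions, not the paper's setup):

```python
import numpy as np

def eki_update(U, G, y, Gamma):
    """One ensemble Kalman inversion step.
    U: (J, p) parameter ensemble; G: forward map R^p -> R^d;
    y: (d,) observed statistics; Gamma: (d, d) observation-noise covariance."""
    Gu = np.array([G(u) for u in U])           # forward evaluations (parallelizable)
    du = U - U.mean(axis=0)
    dg = Gu - Gu.mean(axis=0)
    J = U.shape[0]
    C_ug = du.T @ dg / J                       # parameter-output cross-covariance
    C_gg = dg.T @ dg / J                       # output covariance
    K = C_ug @ np.linalg.inv(C_gg + Gamma)     # Kalman-type gain
    return U + (y - Gu) @ K.T                  # derivative-free update

# Toy linear inverse problem: recover theta from y = A @ theta.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, -1.0]])
theta_true = np.array([1.0, -0.5])
y = A @ theta_true
Gamma = 1e-4 * np.eye(3)

U = rng.normal(size=(50, 2))                   # initial ensemble
for _ in range(20):
    U = eki_update(U, lambda u: A @ u, y, Gamma)
print(U.mean(axis=0))                          # ensemble mean approaches theta_true
```

Note the properties the abstract highlights: the forward evaluations inside each step are independent (highly parallelizable), and no gradients of `G` are ever required.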

     
  4. Abstract

    Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require a large number of model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only a relatively small number of model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter‐to‐data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate‐emulate‐sample (CES) approach in a perfect‐model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties.
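
The three CES stages can be sketched end to end on a one-dimensional toy problem (entirely illustrative: the scalar model, the simple design of training points standing in for the ensemble Kalman calibration stage, the hand-rolled GP emulator, and all constants are our assumptions, not the paper's GCM setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Model": maps a parameter theta to a climate statistic; data y is noisy.
def model(theta):
    return np.sin(theta) + 0.5 * theta

theta_true, noise = 1.2, 0.05
y = model(theta_true) + noise * rng.normal()

# (a) Calibrate: evaluate the model at a handful of parameter values
#     (a simple design standing in for the ensemble Kalman stage).
theta_train = np.linspace(-1.0, 3.0, 8)
g_train = model(theta_train)

# (b) Emulate: GP regression with an RBF kernel (pure-numpy sketch).
def rbf(a, b, ell=0.8):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(theta_train, theta_train) + 1e-8 * np.eye(theta_train.size)
alpha = np.linalg.solve(K, g_train)

def emulator(theta):
    return rbf(np.atleast_1d(theta), theta_train) @ alpha

# (c) Sample: random-walk Metropolis on the cheap emulated posterior.
def log_post(theta):
    return -0.5 * ((y - emulator(theta)[0]) / noise) ** 2 - 0.5 * theta**2 / 4.0

samples, theta_c, lp_c = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta_c + 0.3 * rng.normal()
    lp_p = log_post(prop)
    if np.log(rng.uniform()) < lp_p - lp_c:
        theta_c, lp_c = prop, lp_p
    samples.append(theta_c)

print(np.mean(samples[1000:]))   # posterior mean, near theta_true
```

The expensive model is only evaluated in stage (a); the MCMC in stage (c) runs entirely on the emulator, which is why CES needs orders of magnitude fewer model runs than MCMC applied to the model directly.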

     
  5. Abstract

    Advances in high‐performance computing have enabled large‐eddy simulations (LES) of turbulence, convection, and clouds. However, their potential to improve parameterizations in global climate models (GCMs) is only beginning to be harnessed, with relatively few canonical LES available so far. The purpose of this paper is to begin creating a public LES library that expands the training data available for calibrating and evaluating GCM parameterizations. To do so, we use an experimental setup in which LES are driven by large‐scale forcings from GCMs, which in principle can be used at any location, any time of year, and in any climate state. We use this setup to create a library of LES of clouds across the tropics and subtropics, in the present and in a warmer climate, with a focus on the transition from stratocumulus to shallow cumulus over the East Pacific. The LES results are relatively insensitive to the choice of host GCM driving the LES. Driven with large‐scale forcing under global warming, the LES simulate a positive but weak shortwave cloud feedback, adding to the accumulating evidence that low clouds amplify global warming.

     
  6. Abstract

    Climate models are generally calibrated manually by comparing selected climate statistics, such as the global top‐of‐atmosphere energy balance, to observations. The manual tuning only targets a limited subset of observational data and parameters. Bayesian calibration can estimate climate model parameters and their uncertainty using a larger fraction of the available data and automatically exploring the parameter space more broadly. In Bayesian learning, it is natural to exploit the seasonal cycle, which has large amplitude compared with anthropogenic climate change in many climate statistics. In this study, we develop methods for the calibration and uncertainty quantification (UQ) of model parameters exploiting the seasonal cycle, and we demonstrate a proof‐of‐concept with an idealized general circulation model (GCM). UQ is performed using the calibrate‐emulate‐sample approach, which combines stochastic optimization and machine learning emulation to speed up Bayesian learning. The methods are demonstrated in a perfect‐model setting through the calibration and UQ of a convective parameterization in an idealized GCM with a seasonal cycle. Calibration and UQ based on seasonally averaged climate statistics, compared to annually averaged, reduces the calibration error by up to an order of magnitude and narrows the spread of the non‐Gaussian posterior distributions by factors between two and five, depending on the variables used for UQ. The reduction in the spread of the parameter posterior distribution leads to a reduction in the uncertainty of climate model predictions.

     
  7. Abstract

    We propose a novel method for sampling and optimization tasks based on a stochastic interacting particle system. We explain how this method can be used for the following two goals: (i) generating approximate samples from a given target distribution and (ii) optimizing a given objective function. The approach is derivative‐free and affine invariant, and is therefore well‐suited for solving inverse problems defined by complex forward models: (i) allows generation of samples from the Bayesian posterior and (ii) allows determination of the maximum a posteriori estimator. We investigate the properties of the proposed family of methods in terms of various parameter choices, both analytically and by means of numerical simulations. The analysis and numerical simulation establish that the method has potential for general purpose optimization tasks over Euclidean space; contraction properties of the algorithm are established under suitable conditions, and computational experiments demonstrate wide basins of attraction for various specific problems. The analysis and experiments also demonstrate the potential for the sampling methodology in regimes in which the target distribution is unimodal and close to Gaussian; indeed we prove that the method recovers a Laplace approximation to the measure in certain parametric regimes and provide numerical evidence that this Laplace approximation attracts a large set of initial conditions in a number of examples.
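
The optimization use (goal ii) can be illustrated with a simplified consensus-type particle step in the spirit of such stochastic interacting particle systems (our sketch only: the drift/noise form, the Gibbs weights, and every constant are illustrative choices, not the paper's exact dynamics):

```python
import numpy as np

def consensus_step(X, f, beta=30.0, lam=1.0, sigma=0.5, dt=0.1, rng=None):
    """One step of a consensus-type interacting particle method (illustrative).
    X: (J, d) particle positions; f: objective R^d -> R.
    Particles drift toward a Gibbs-weighted mean and diffuse around it;
    no gradients of f are used (derivative-free)."""
    rng = rng or np.random.default_rng()
    fx = np.array([f(x) for x in X])
    w = np.exp(-beta * (fx - fx.min()))          # Gibbs weights (stabilized)
    m = w @ X / w.sum()                          # weighted consensus point
    drift = -lam * (X - m) * dt
    noise = sigma * np.sqrt(dt) * (X - m) * rng.normal(size=X.shape)
    return X + drift + noise

# Minimize a quadratic; particles contract toward the minimizer at (1, -2).
rng = np.random.default_rng(2)
f = lambda x: np.sum((x - np.array([1.0, -2.0])) ** 2)
X = rng.normal(scale=3.0, size=(100, 2))
for _ in range(200):
    X = consensus_step(X, f, rng=rng)
print(X.mean(axis=0))    # close to the minimizer
```

The scaling of the noise with the particle spread is what gives such schemes their self-annealing behavior: exploration is large while the ensemble disagrees and vanishes as consensus is reached.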

     
  8. Abstract

    The uncertainty in polar cloud feedbacks calls for process understanding of the cloud response to climate warming. As an initial step toward improved process understanding, we investigate the seasonal cycle of polar clouds in the current climate by adopting a novel modeling framework using large eddy simulations (LES), which explicitly resolve cloud dynamics. Resolved horizontal and vertical advection of heat and moisture from an idealized general circulation model (GCM) are prescribed as forcing in the LES. The LES are also forced with prescribed sea ice thickness, but surface temperature, atmospheric temperature, and moisture evolve freely without nudging. A semigray radiative transfer scheme without water vapor and cloud feedbacks allows the GCM and LES to achieve closed energy budgets more easily than would be possible with more complex schemes. This enables the mean states in the two models to be consistently compared, without the added complications from interaction with more comprehensive radiation. We show that the LES closely follow the GCM seasonal cycle, and the seasonal cycle of low‐level clouds in the LES resembles observations: maximum cloud liquid occurs in late summer and early autumn, and winter clouds are dominated by ice in the upper troposphere. Large‐scale advection of moisture provides the main source of water vapor for the liquid‐containing clouds in summer, while a temperature advection peak in winter makes the atmosphere relatively dry and reduces cloud condensate. The framework we develop and employ can be used broadly for studying cloud processes and the response of polar clouds to climate warming.

     
  9. Abstract

    Because of their limited spatial resolution, numerical weather prediction and climate models have to rely on parameterizations to represent atmospheric turbulence and convection. Historically, largely independent approaches have been used to represent boundary layer turbulence and convection, neglecting important interactions at the subgrid scale. Here we build on an eddy‐diffusivity mass‐flux (EDMF) scheme that represents all subgrid‐scale mixing in a unified manner, partitioning subgrid‐scale fluctuations into contributions from local diffusive mixing and coherent advective structures and allowing them to interact within a single framework. The EDMF scheme requires closures for the interaction between the turbulent environment and the plumes and for local mixing. A second‐order equation for turbulence kinetic energy (TKE) provides one ingredient for the diffusive local mixing closure, leaving a mixing length to be parameterized. Here, we propose a new mixing length formulation, based on constraints derived from the TKE balance. It expresses local mixing in terms of the same physical processes in all regimes of boundary layer flow. The formulation is tested at a range of resolutions and across a wide range of boundary layer regimes, including a stably stratified boundary layer, a stratocumulus‐topped marine boundary layer, and dry convection. Comparison with large eddy simulations (LES) shows that the EDMF scheme with this diffusive mixing parameterization accurately captures the structure of the boundary layer and clouds in all cases considered.
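
The idea of a mixing length constrained by several physical limits can be sketched with a smooth minimum over candidate lengths (an illustration of the general approach only; the candidate lengths, the blending function, and every constant below are placeholders, not the closure derived in the paper):

```python
import numpy as np

def soft_min(lengths, k=5.0):
    """Smooth minimum of candidate mixing lengths: behaves like min(...)
    but is differentiable, so the tightest physical constraint dominates
    without introducing kinks in the parameterized diffusivity."""
    lengths = np.asarray(lengths, dtype=float)
    w = np.exp(-k * lengths / lengths.min())
    return float(np.sum(w * lengths) / np.sum(w))

# Candidate constraints (illustrative values, in meters):
kappa, z = 0.4, 50.0                  # von Karman constant, height
l_wall = kappa * z                    # wall-distance limit near the surface
tke, N = 0.5, 0.02                    # TKE (m^2/s^2), buoyancy frequency (1/s)
l_strat = np.sqrt(tke) / N            # stable-stratification limit
l_tke = 30.0                          # length from a TKE-balance closure (placeholder)

l_mix = soft_min([l_wall, l_strat, l_tke])
print(l_mix)   # near the smallest candidate (here l_wall = 20 m)
```

A hard `min` would work too; the smooth blend is one common way to keep the eddy diffusivity continuous as the dominant constraint changes between boundary layer regimes.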

     
  10. Abstract

    We demonstrate that an extended eddy‐diffusivity mass‐flux (EDMF) scheme can be used as a unified parameterization of subgrid‐scale turbulence and convection across a range of dynamical regimes, from dry convective boundary layers, through shallow convection, to deep convection. Central to achieving this unified representation of subgrid‐scale motions are entrainment and detrainment closures. We model entrainment and detrainment rates as a combination of turbulent and dynamical processes. Turbulent entrainment/detrainment is represented as downgradient diffusion between plumes and their environment. Dynamical entrainment/detrainment is proportional to the ratio of a plume's relative buoyancy to a vertical velocity scale, modulated by heuristic nondimensional functions that represent their relative magnitudes and the enhanced detrainment due to evaporation from clouds in drier environments. We first evaluate the closures offline against entrainment and detrainment rates diagnosed from large‐eddy simulations (LES) in which tracers are used to identify plumes, their turbulent environment, and mass and tracer exchanges between them. The LES are of canonical test cases of a dry convective boundary layer, shallow convection, and deep convection, thus spanning a broad range of regimes. We then compare the LES with the full EDMF scheme, including the new closures, in a single column model (SCM). The results show good agreement between the SCM and LES in quantities that are key for climate models, including thermodynamic profiles, cloud liquid water profiles, and profiles of higher moments of turbulent statistics. The SCM also captures the diurnal cycle of convection and the onset of precipitation well.
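
The dynamical part of the closure described above can be sketched as follows (a schematic only: the constant `c`, the velocity floor `w_min`, and the simple positive-part split stand in for the paper's calibrated nondimensional modulation functions):

```python
def dyn_entrain_detrain(b_plume, b_env, w, c=0.12, w_min=0.1):
    """Illustrative dynamical entrainment/detrainment rates (1/m),
    proportional to the plume's relative buoyancy over a vertical-velocity
    scale, as described in the text. All constants are placeholders.
    b_plume, b_env: plume and environment buoyancy (m/s^2); w: plume
    vertical velocity (m/s)."""
    b_rel = b_plume - b_env                    # relative buoyancy
    scale = c / max(w * w, w_min * w_min)      # inverse velocity-squared scale
    entr = scale * max(b_rel, 0.0)             # positively buoyant plume entrains
    detr = scale * max(-b_rel, 0.0)            # negatively buoyant plume detrains
    return entr, detr

entr, detr = dyn_entrain_detrain(b_plume=0.02, b_env=0.0, w=1.5)
print(entr, detr)   # positive entrainment, zero detrainment
```

The `w_min` floor keeps the rates bounded as plume velocity vanishes near cloud top; in the full scheme this role is played by the calibrated closure functions rather than a hard cutoff.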

     