Title: A Bin and a Bulk Microphysics Scheme Can Be More Alike Than Two Bin Schemes
Abstract

Bin and bulk schemes are the two primary methods to parameterize cloud microphysical processes. This study attempts to reveal how their structural differences (size‐resolved vs. moment‐resolved) manifest in terms of cloud and precipitation properties. We use a bulk scheme, the Arbitrary Moment Predictor (AMP), which uses process parameterizations identical to those in a bin scheme but predicts only moments of the size distribution like a bulk scheme. As such, differences between simulations using AMP's bin scheme and simulations using AMP itself must come from their structural differences. In one‐dimensional kinematic simulations, the overall difference between AMP (bulk) and bin schemes is found to be small. Full‐microphysics AMP and bin simulations have similar mean liquid water path (mean percent difference <4%), but AMP simulates significantly lower mean precipitation rate (−35%) than the bin scheme due to slower precipitation onset. Individual processes are also tested. Condensation is represented almost perfectly with AMP, and only small AMP‐bin differences emerge due to nucleation, evaporation, and sedimentation. Collision‐coalescence is the single biggest reason for AMP‐bin divergence. Closer inspection shows that this divergence is primarily a result of autoconversion and not of accretion. In full microphysics simulations, lowering the diameter threshold separating the cloud and rain categories in AMP reduces the largest AMP‐bin difference to ∼10%, making the effect of structural differences between AMP (and perhaps triple‐moment bulk schemes generally) and bin even smaller than the parameterization differences between the two bin schemes.
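
To illustrate the moment-based view that distinguishes a bulk scheme such as AMP from a bin scheme, the sketch below computes diameter moments of a binned drop size distribution. It is a minimal example with invented bin edges and number densities, not code from the AMP scheme itself.

```python
import numpy as np

# Hypothetical liquid drop size distribution on a logarithmic bin grid.
# Bin edges in meters (diameters from 2 micron to ~6 mm).
edges = np.logspace(np.log10(2e-6), np.log10(6e-3), 36)
mids = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)
# Made-up number density per unit diameter [m^-4].
n = 1e8 * np.exp(-mids / 20e-6)

def moment(k):
    """k-th diameter moment M_k = integral of n(D) D^k dD, as a sum over bins."""
    return np.sum(n * mids**k * widths)

# A bulk scheme like AMP predicts only a few such moments, e.g.:
M0 = moment(0)   # total number concentration [m^-3]
M3 = moment(3)   # proportional to liquid water content
M6 = moment(6)   # proportional to radar reflectivity
print(M0, M3, M6)
```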

 
Award ID(s):
2025103
NSF-PAR ID:
10400070
Author(s) / Creator(s):
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Journal of Advances in Modeling Earth Systems
Volume:
15
Issue:
3
ISSN:
1942-2466
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Data required to calibrate uncertain general circulation model (GCM) parameterizations are often only available in limited regions or time periods, for example, observational data from field campaigns, or data generated in local high‐resolution simulations. This raises the question of where and when to acquire additional data to be maximally informative about parameterizations in a GCM. Here we construct a new ensemble‐based parallel algorithm to automatically target data acquisition to regions and times that maximize the uncertainty reduction, or information gain, about GCM parameters. The algorithm uses a Bayesian framework that exploits a quantified distribution of GCM parameters as a measure of uncertainty. This distribution is informed by time‐averaged climate statistics restricted to local regions and times. The algorithm is embedded in the recently developed calibrate‐emulate‐sample framework, which performs efficient model calibration and uncertainty quantification with far fewer model evaluations than are typically needed for traditional approaches to Bayesian calibration. We demonstrate the algorithm with an idealized GCM, with which we generate surrogates of local data. In this perfect‐model setting, we calibrate parameters and quantify uncertainties in a quasi‐equilibrium convection scheme in the GCM. We consider targeted data that are (a) localized in space for statistically stationary simulations, and (b) localized in space and time for seasonally varying simulations. In these proof‐of‐concept applications, the calculated information gain reflects the reduction in parametric uncertainty obtained from Bayesian inference when harnessing a targeted sample of data. The largest information gain typically, but not always, results from regions near the intertropical convergence zone.
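
    A hedged sketch of the kind of information-gain calculation described above: for Gaussian parameter distributions, the reduction in uncertainty from prior to posterior can be measured as a decrease in differential entropy. The covariance values below are invented for illustration; this is not the paper's algorithm.

    ```python
    import numpy as np

    def gaussian_entropy(cov):
        """Differential entropy of a multivariate Gaussian with covariance `cov`."""
        d = cov.shape[0]
        return 0.5 * (d * (1.0 + np.log(2.0 * np.pi)) + np.linalg.slogdet(cov)[1])

    # Made-up parameter covariances before and after assimilating a candidate
    # batch of local data (e.g., one region/season of time-averaged statistics).
    prior_cov = np.array([[0.50, 0.10],
                          [0.10, 0.30]])
    posterior_cov = np.array([[0.20, 0.05],
                              [0.05, 0.15]])

    # Information gain in nats: entropy reduction from prior to posterior.
    info_gain = gaussian_entropy(prior_cov) - gaussian_entropy(posterior_cov)
    print(f"information gain: {info_gain:.3f} nats")
    ```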

     
  2. Abstract

    A reference or “no‐feedback” radiative response to warming is fundamental to understanding how much global warming will occur for a given change in greenhouse gases or solar radiation incident on the Earth. The simplest estimate of this radiative response is given by the Stefan‐Boltzmann law as approximately −3.8 W m−2 K−1 for Earth's present climate, where the estimate uses a global effective emission temperature Te. The comparable radiative response in climate models, widely called the “Planck feedback,” averages −3.3 W m−2 K−1. This difference of 0.5 W m−2 K−1 is large compared to the uncertainty in the net climate feedback, yet it has not been studied carefully. We use radiative transfer models to analyze these two radiative feedbacks to warming, and find that the difference arises primarily from the lack of stratospheric warming assumed in calculations of the Planck feedback (traditionally justified by differing constraints on and time scales of stratospheric adjustment relative to surface and tropospheric warming). The Planck feedback is thus masked for wavelengths with non‐negligible stratospheric opacity, and this effect implicitly acts to amplify warming in current feedback analysis of climate change. Other differences between Planck and Stefan‐Boltzmann feedbacks arise from temperature‐dependent gas opacities, and several artifacts of nonlinear averaging across wavelengths, heights, and different locations; these effects partly cancel but as a whole slightly destabilize the Planck feedback. Our results point to an important role played by stratospheric opacity in Earth's climate sensitivity, and clarify a long‐overlooked but notable gap in our understanding of Earth's reference radiative response to warming.
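
    The Stefan‐Boltzmann reference response quoted above follows from differentiating the emitted flux σTe⁴ with respect to Te; the sketch below reproduces the arithmetic. The emission temperature of 255 K is a standard textbook value assumed here, not a number taken from the paper.

    ```python
    # Reference "no-feedback" radiative response from the Stefan-Boltzmann law:
    # OLR = sigma * Te^4, so d(OLR)/dTe = 4 * sigma * Te^3.
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
    Te = 255.0              # assumed global effective emission temperature [K]

    lambda_sb = -4.0 * SIGMA * Te**3   # negative: warming increases emission
    print(f"Stefan-Boltzmann feedback: {lambda_sb:.2f} W m^-2 K^-1")  # ~ -3.76
    ```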

     
  3. Abstract

    As droughts have widespread social and ecological impacts, it is critical to develop long‐term adaptation and mitigation strategies to reduce drought vulnerability. Climate models are important in quantifying drought changes. Here, we assess the ability of 285 CMIP6 historical simulations, from 17 models, to reproduce drought duration and severity in three observational data sets using the Standardized Precipitation Index (SPI). We used summary statistics beyond the mean and standard deviation, and devised a novel probabilistic framework, based on the Hellinger distance, to quantify the difference between observed and simulated drought characteristics. Results show that many simulations have low error in reproducing the observed drought summary statistics. The hypothesis that simulations and observations are described by the same distribution cannot be rejected for most of the grids based on our Hellinger‐distance framework. No single model stood out as demonstrating consistently better performance over large regions of the globe. The variance in drought statistics among the simulations is higher in the tropics compared to other latitudinal zones. Though the models capture the characteristics of dry spells well, there is considerable bias in low precipitation values. Good model performance in terms of SPI does not imply good performance in simulating low precipitation. Our study emphasizes the need to probabilistically evaluate climate model simulations in order to both pinpoint model weaknesses and identify a subset of best‐performing models that are useful for impact assessments.
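
    A minimal sketch of the Hellinger distance used in the probabilistic framework above, applied to two discrete (histogram) distributions. The histograms and binning below are invented for illustration, not the paper's data.

    ```python
    import numpy as np

    def hellinger(p, q):
        """Hellinger distance between two discrete probability distributions:
        H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, bounded in [0, 1]."""
        p = np.asarray(p, dtype=float); p /= p.sum()
        q = np.asarray(q, dtype=float); q /= q.sum()
        return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

    # Invented histograms of drought duration (events per duration bin) for
    # observations vs. one model simulation, over the same bins.
    obs = [30, 22, 14, 8, 4, 2]
    sim = [28, 20, 16, 9, 5, 2]
    print(f"Hellinger distance: {hellinger(obs, sim):.3f}")  # 0 = identical
    ```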

     
  4. Abstract

    Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require very large numbers of model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only a modest number of model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter‐to‐data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate‐emulate‐sample (CES) approach in a perfect‐model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties.
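
    As a hedged illustration of the calibration stage, below is the standard ensemble Kalman inversion (EKI) update applied to a toy forward map. The toy model, ensemble size, and noise level are all assumptions made for the example; this is not the study's GCM setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(theta):
        """Toy parameter-to-data map standing in for the GCM statistics."""
        return np.array([theta[0] + theta[1]**2, theta[0] * theta[1]])

    # Synthetic truth and noisy observations (assumed setup).
    theta_true = np.array([1.0, 2.0])
    gamma = 0.05**2 * np.eye(2)   # observation noise covariance
    y = forward(theta_true) + rng.multivariate_normal(np.zeros(2), gamma)

    # Ensemble of parameter candidates.
    J = 50
    ens = rng.normal(0.0, 2.0, size=(J, 2))

    for _ in range(10):  # EKI iterations
        g = np.array([forward(t) for t in ens])
        dtheta = ens - ens.mean(axis=0)
        dg = g - g.mean(axis=0)
        c_tg = dtheta.T @ dg / J              # param-output cross-covariance
        c_gg = dg.T @ dg / J                  # output covariance
        gain = c_tg @ np.linalg.inv(c_gg + gamma)
        ens = ens + (y - g) @ gain.T          # Kalman-type update, row-wise

    print("ensemble mean:", ens.mean(axis=0), "truth:", theta_true)
    ```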

     
  5. Abstract

    Depth‐averaged eddy buoyancy diffusivities across continental shelves and slopes are investigated using a suite of eddy‐resolving, process‐oriented simulations of prograde frontal currents characterized by isopycnals tilted in the opposite direction to the seafloor, a flow regime commonly found along continental margins under downwelling‐favorable winds or occupied by buoyant boundary currents. The diagnosed cross‐slope eddy diffusivity varies by up to three orders of magnitude, decaying from its largest values in the relatively flat‐bottomed region to its smallest values over the steep continental slope, consistent with previously reported suppression effects of steep topography on baroclinic eddy fluxes. To theoretically constrain the simulated cross‐slope eddy fluxes, we examine extant scalings for eddy buoyancy diffusivities across prograde shelf/slope fronts and in flat‐bottomed oceans. Among all tested scalings, the GEOMETRIC framework developed by D. P. Marshall et al. (2012, https://doi.org/10.1175/JPO-D-11-048.1) and a parametrically similar Eady scale‐based scaling proposed by Jansen et al. (2015, https://doi.org/10.1016/j.ocemod.2015.05.007) most accurately reproduce the diagnosed eddy diffusivities across the entire shelf‐to‐open‐ocean analysis regions in our simulations. This result relies upon the incorporation of the topographic suppression effects on eddy fluxes, quantified via analytical functions of the slope Burger number, into the scaling prefactor coefficients. The predictive skills of the GEOMETRIC and Eady scale‐based scalings are shown to be insensitive to the presence of along‐slope topographic corrugations. This work lays a foundation for parameterizing eddy buoyancy fluxes across large‐scale prograde shelf/slope fronts in coarse‐resolution ocean models.
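
    A minimal sketch of the slope Burger number that enters the topographic suppression described above. The stratification, Coriolis, and slope values are invented, and the quadratic suppression form used here is a placeholder assumption, not the paper's fitted function.

    ```python
    import numpy as np

    # Slope Burger number S = N * alpha / f, comparing the topographic slope
    # alpha to the isopycnal slope scale f/N (all values below are invented).
    N = 5e-3       # buoyancy frequency [s^-1]
    f = 1e-4       # Coriolis parameter [s^-1]
    alpha = np.array([1e-3, 1e-2, 1e-1])   # bottom slopes: shelf -> steep slope

    S = N * alpha / f
    # Placeholder suppression factor (assumed form): the unsuppressed
    # diffusivity kappa0 is damped as the slope Burger number grows.
    kappa0 = 1000.0   # flat-bottom eddy diffusivity [m^2 s^-1], invented
    kappa = kappa0 / (1.0 + S**2)
    for s, k in zip(S, kappa):
        print(f"S = {s:6.3f}  ->  kappa ~ {k:8.1f} m^2/s")
    ```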

     