Title: Training Warm‐Rain Bulk Microphysics Schemes Using Super‐Droplet Simulations
Abstract Cloud microphysics is a critical aspect of Earth's climate system, involving processes at the nanometer and micrometer scales of droplets and ice particles. In climate modeling, cloud microphysics is commonly represented by bulk models, which contain simplified process rates that require calibration. This study presents a framework for calibrating warm‐rain bulk schemes using high‐fidelity super‐droplet simulations, which provide a more accurate and physically based representation of cloud and precipitation processes. The calibration framework employs ensemble Kalman methods, including Ensemble Kalman Inversion and Unscented Kalman Inversion, to calibrate bulk microphysics schemes against probabilistic super‐droplet simulations. We demonstrate the framework's effectiveness by calibrating a single‐moment bulk scheme, reducing the data‐model mismatch by more than 75% relative to the model with initial parameters. This study thus demonstrates a powerful tool for enhancing the accuracy of bulk microphysics schemes in atmospheric models and improving climate modeling.
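To make concrete what a single‐moment warm‐rain bulk scheme with tunable process rates looks like, the sketch below implements Kessler‐type autoconversion and accretion. The functional forms follow the classic Kessler shapes; the function names, parameter names, and default values are illustrative assumptions, not the scheme or calibrated values used in the study.

```python
# Minimal sketch of Kessler-type warm-rain process rates in a
# single-moment bulk scheme. Parameters (q_threshold, k_auto, k_acc)
# are the kind of tunables a calibration framework would target;
# names and default values here are illustrative only.

def autoconversion(q_c, q_threshold=5e-4, k_auto=1e-3):
    """Cloud-to-rain conversion rate [kg kg^-1 s^-1] above a threshold."""
    return max(0.0, k_auto * (q_c - q_threshold))

def accretion(q_c, q_r, k_acc=2.2):
    """Collection of cloud water by falling rain (Kessler power-law form)."""
    return k_acc * q_c * q_r**0.875

def warm_rain_tendencies(q_c, q_r):
    """Source/sink pair: cloud water lost equals rain water gained."""
    rate = autoconversion(q_c) + accretion(q_c, q_r)
    return -rate, rate
```

The paired tendencies conserve total liquid water by construction, which is the property a calibration must preserve while adjusting the rate parameters.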
Award ID(s):
1835860
PAR ID:
10547948
Author(s) / Creator(s):
Publisher / Repository:
Wiley Periodicals LLC
Date Published:
Journal Name:
Journal of Advances in Modeling Earth Systems
Volume:
16
Issue:
7
ISSN:
1942-2466
Page Range / eLocation ID:
e2023MS004028
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract In the atmosphere, microphysics refers to the microscale processes that affect cloud and precipitation particles and is a key linkage among the various components of Earth's atmospheric water and energy cycles. The representation of microphysical processes in models continues to pose a major challenge leading to uncertainty in numerical weather forecasts and climate simulations. In this paper, the problem of treating microphysics in models is divided into two parts: (i) how to represent the population of cloud and precipitation particles, given the impossibility of simulating all particles individually within a cloud, and (ii) uncertainties in the microphysical process rates owing to fundamental gaps in knowledge of cloud physics. The recently developed Lagrangian particle‐based method is advocated as a way to address several conceptual and practical challenges of representing particle populations using traditional bulk and bin microphysics parameterization schemes. For addressing critical gaps in cloud physics knowledge, sustained investment for observational advances from laboratory experiments, new probe development, and next‐generation instruments in space is needed. Greater emphasis on laboratory work, which has apparently declined over the past several decades relative to other areas of cloud physics research, is argued to be an essential ingredient for improving process‐level understanding. More systematic use of natural cloud and precipitation observations to constrain microphysics schemes is also advocated. Because it is generally difficult to quantify individual microphysical process rates from these observations directly, this presents an inverse problem that can be viewed from the standpoint of Bayesian statistics. Following this idea, a probabilistic framework is proposed that combines elements from statistical and physical modeling.
Besides providing rigorous constraint of schemes, there is an added benefit of quantifying uncertainty systematically. Finally, a broader hierarchical approach is proposed to accelerate improvements in microphysics schemes, leveraging the advances described in this paper related to process modeling (using Lagrangian particle‐based schemes), laboratory experimentation, cloud and precipitation observations, and statistical methods. 
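The Bayesian inverse‐problem framing described above can be illustrated with a toy example: infer a scalar process‐rate parameter from a noisy indirect observation by evaluating a gridded posterior. The forward map, prior, and noise level below are synthetic stand‐ins, not quantities from the paper.

```python
import numpy as np

# Toy Bayesian inversion: infer theta from one noisy observation
# y = G(theta) + noise. G is a stand-in for a microphysics scheme.

def G(theta):
    return theta**2  # illustrative forward model

rng = np.random.default_rng(0)
theta_true, noise_sigma = 1.5, 0.1
y_obs = G(theta_true) + noise_sigma * rng.normal()

# Gridded posterior: Gaussian prior N(1, 1) times Gaussian likelihood.
grid = np.linspace(0.0, 3.0, 601)
log_post = -0.5 * (grid - 1.0) ** 2 - 0.5 * ((y_obs - G(grid)) / noise_sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()                 # normalize over the grid
theta_map = grid[np.argmax(post)]  # posterior mode
```

The full posterior, not just a point estimate, is what systematically quantifies uncertainty in the inferred process rate; grid evaluation works only in low dimensions, which is why ensemble methods are used in practice.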
  2. Abstract Bin microphysics schemes are useful tools for cloud simulations and are often considered to provide a benchmark for model intercomparison. However, they may experience issues with numerical diffusion, which are not well quantified, and the transport of hydrometeors depends on the choice of advection scheme, which can also change cloud simulation results. Here, an atmospheric large‐eddy simulation model is adapted to simulate a statistically steady‐state cloud in a convection cloud chamber under well‐constrained conditions. Two bin microphysics schemes, a spectral bin method and the method of moments, as well as several advection methods for the transport of the microphysical variables are employed for model intercomparison. Results show that different combinations of microphysics and advection schemes can lead to considerable differences in simulated cloud properties, such as cloud droplet number concentration. We find that simulations using the advection scheme that suffers more from numerical diffusion tend to have a smaller droplet number concentration and liquid water content, while simulations with the microphysics scheme that suffers more from numerical diffusion tend to have a broader size distribution and thus larger mean droplet sizes. Sensitivities of simulations to bin resolution, spatial resolution, and temporal resolution are also tested. We find that refining the microphysical bin resolution leads to a broader cloud droplet size distribution due to the advection of hydrometeors. Our results provide insight for using different advection and microphysics schemes in cloud chamber simulations, which might also help understand the uncertainties of the schemes used in atmospheric cloud simulations.
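The numerical diffusion at issue can be seen in a minimal 1‐D example: first‐order upwind advection smears an initially sharp tracer pulse while conserving its total, analogous to the artificial broadening of transported hydrometeor fields. Grid size, Courant number, and step count are arbitrary illustrative choices, not settings from the study.

```python
import numpy as np

# Demonstration of numerical diffusion in first-order upwind advection
# of a sharp pulse on a periodic 1-D grid (constant velocity u > 0).

n, courant, steps = 100, 0.5, 40
q = np.zeros(n)
q[10:20] = 1.0                            # sharp top-hat pulse
exact = np.roll(q, int(courant * steps))  # exact solution: pure shift

for _ in range(steps):
    q = q - courant * (q - np.roll(q, 1))  # upwind difference

peak_loss = exact.max() - q.max()  # diffusion flattens the peak
```

Less diffusive (higher-order, flux-limited) schemes reduce this smearing, which is why the choice of advection method changes simulated droplet number and size distributions.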
  3. Abstract Recent in situ observations show that haze particles exist in a convection cloud chamber. The microphysics schemes previously used for large‐eddy simulations of the cloud chamber could not fully resolve haze particles and the associated processes, including their activation and deactivation. Specifically, cloud droplet activation was modeled based on Twomey‐type parameterizations, wherein cloud droplets were formed when a critical supersaturation for the available cloud condensation nuclei (CCN) was exceeded and haze particles were not explicitly resolved. Here, we develop and adapt haze‐capable bin and Lagrangian microphysics schemes to properly resolve the activation and deactivation processes. Results are compared with the Twomey‐type CCN‐based bin microphysics scheme in which haze particles are not fully resolved. We find that results from the haze‐capable bin microphysics scheme agree well with those from the Lagrangian microphysics scheme. However, results from both schemes differ significantly from those of the CCN‐based bin microphysics scheme unless CCN recycling is considered. Haze particles from the recycling of deactivated cloud droplets can strongly enhance cloud droplet number concentration due to a positive feedback in haze‐cloud interactions in the cloud chamber. Haze particle size distributions are more realistic when solute and curvature effects are considered, enabling a complete representation of the physics of the activation process. Our study suggests that haze particles and their interactions with cloud droplets may have a strong impact on cloud properties when supersaturation fluctuations are comparable to the mean supersaturation, as is the case in the cloud chamber and likely in the atmosphere, especially in polluted conditions.
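The solute and curvature effects mentioned above are captured by Köhler theory, which separates stable haze from activated droplets. A minimal sketch, assuming the standard form s(r) = A/r − B/r³ for the equilibrium supersaturation; the coefficient values are rough order‐of‐magnitude assumptions for a small soluble aerosol, not values from the paper.

```python
import numpy as np

# Koehler-curve sketch: curvature (Kelvin) term A/r raises, and
# solute (Raoult) term B/r**3 lowers, the equilibrium supersaturation
# of a solution droplet of radius r. Coefficients are illustrative.

A = 1.2e-9    # m, curvature (Kelvin) coefficient
B = 1.0e-23   # m^3, solute (Raoult) coefficient

def equilibrium_supersaturation(r):
    return A / r - B / r**3

# Critical radius separating stable haze (r < r_crit, equilibrium at
# subsaturation) from runaway growth, i.e., activation (r > r_crit).
r_crit = np.sqrt(3.0 * B / A)
s_crit = equilibrium_supersaturation(r_crit)
```

A scheme that resolves the full curve (rather than only the critical supersaturation threshold) can represent haze particles explicitly and their deactivation back from cloud droplets.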
  4. Abstract Most machine learning applications in Earth system modeling currently rely on gradient‐based supervised learning. This imposes stringent constraints on the nature of the data used for training (typically, residual time tendencies are needed), and it complicates learning about the interactions between machine‐learned parameterizations and other components of an Earth system model. Approaching learning about process‐based parameterizations as an inverse problem resolves many of these issues, since it allows parameterizations to be trained with partial observations or statistics that directly relate to quantities of interest in long‐term climate projections. Here, we demonstrate the effectiveness of Kalman inversion methods in treating learning about parameterizations as an inverse problem. We consider two different algorithms: unscented and ensemble Kalman inversion. Both methods involve highly parallelizable forward model evaluations, converge exponentially fast, and do not require gradient computations. In addition, unscented Kalman inversion provides a measure of parameter uncertainty. We illustrate how training parameterizations can be posed as a regularized inverse problem and solved by ensemble Kalman methods through the calibration of an eddy‐diffusivity mass‐flux scheme for subgrid‐scale turbulence and convection, using data generated by large‐eddy simulations. We find the algorithms amenable to batching strategies, robust to noise and model failures, and efficient in the calibration of hybrid parameterizations that can include empirical closures and neural networks. 
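The ensemble Kalman inversion algorithm referred to above can be sketched on a linear toy problem using the basic perturbed‐observation update. The forward map, prior, ensemble size, and noise covariance are illustrative stand‐ins; the paper's applications use far more complex forward models, and the unscented variant is not shown.

```python
import numpy as np

# Sketch of ensemble Kalman inversion (EKI) with perturbed
# observations on a linear toy problem. All settings are illustrative.

A = np.array([[1.0, 2.0], [3.0, 4.0]])

def forward(theta):
    """Toy forward map from parameters to observables."""
    return A @ theta

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
gamma = 0.01 * np.eye(2)             # observation-noise covariance
y = forward(theta_true)              # synthetic data (noise-free here)

J = 200                              # ensemble size
ensemble = rng.normal(0.0, 2.0, size=(J, 2))  # prior ensemble

for _ in range(20):                  # EKI iterations (no gradients needed)
    g = np.array([forward(th) for th in ensemble])
    dtheta = ensemble - ensemble.mean(axis=0)
    dg = g - g.mean(axis=0)
    c_tg = dtheta.T @ dg / J         # parameter-data cross-covariance
    c_gg = dg.T @ dg / J             # data covariance
    gain = c_tg @ np.linalg.inv(c_gg + gamma)
    y_pert = y + rng.multivariate_normal(np.zeros(2), gamma, size=J)
    ensemble = ensemble + (y_pert - g) @ gain.T

theta_est = ensemble.mean(axis=0)    # converges toward theta_true
```

The forward evaluations inside each iteration are independent, which is what makes the method highly parallelizable, and no derivatives of the forward model are required.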
  5. Abstract This work integrates machine learning into an atmospheric parameterization to target uncertain mixing processes while maintaining interpretable, predictive, and well‐established physical equations. We adopt an eddy‐diffusivity mass‐flux (EDMF) parameterization for the unified modeling of various convective and turbulent regimes. To avoid drift and instability that plague offline‐trained machine learning parameterizations that are subsequently coupled with climate models, we frame learning as an inverse problem: Data‐driven models are embedded within the EDMF parameterization and trained online in a one‐dimensional vertical global climate model (GCM) column. Training is performed against output from large‐eddy simulations (LES) forced with GCM‐simulated large‐scale conditions in the Pacific. Rather than optimizing subgrid‐scale tendencies, our framework directly targets climate variables of interest, such as the vertical profiles of entropy and liquid water path. Specifically, we use ensemble Kalman inversion to simultaneously calibrate both the EDMF parameters and the parameters governing data‐driven lateral mixing rates. The calibrated parameterization outperforms existing EDMF schemes, particularly in tropical and subtropical locations of the present climate, and maintains high fidelity in simulating shallow cumulus and stratocumulus regimes under increased sea surface temperatures from AMIP4K experiments. The results showcase the advantage of physically constraining data‐driven models and directly targeting relevant variables through online learning to build robust and stable machine learning parameterizations. 
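Targeting climate variables of interest rather than subgrid tendencies amounts to defining the calibration misfit over quantities like vertical profiles and the liquid water path. A minimal sketch of such an objective; the function name, weights, and variables are illustrative assumptions, not the paper's loss.

```python
import numpy as np

# Sketch of a calibration objective defined on climate variables of
# interest (an entropy profile and the liquid water path) rather than
# on instantaneous subgrid tendencies. Weights are illustrative.

def calibration_misfit(entropy_model, entropy_les, lwp_model, lwp_les,
                       sigma_entropy=1.0, sigma_lwp=10.0):
    """Weighted sum-of-squares mismatch between model and LES targets."""
    profile_term = np.sum(((entropy_model - entropy_les) / sigma_entropy) ** 2)
    scalar_term = ((lwp_model - lwp_les) / sigma_lwp) ** 2
    return profile_term + scalar_term
```

An ensemble Kalman calibration then minimizes such a misfit over both physical parameters and the data‐driven closure parameters, without needing tendency‐level training data.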