Title: Implicit learning of convective organization explains precipitation stochasticity
Accurate prediction of precipitation intensity is crucial for both human and natural systems, especially in a warming climate more prone to extreme precipitation. Yet, climate models fail to accurately predict precipitation intensity, particularly extremes. One missing piece of information in traditional climate model parameterizations is subgrid-scale cloud structure and organization, which affects precipitation intensity and stochasticity at coarse resolution. Here, using global storm-resolving simulations and machine learning, we show that, by implicitly learning subgrid organization, we can accurately predict precipitation variability and stochasticity with a low-dimensional set of latent variables. Using a neural network to parameterize coarse-grained precipitation, we find that the overall behavior of precipitation is reasonably predictable using large-scale quantities only; however, the neural network cannot predict the variability of precipitation (R² ∼ 0.45) and underestimates precipitation extremes. The performance is significantly improved when the network is informed by our organization metric, correctly predicting precipitation extremes and spatial variability (R² ∼ 0.9). The organization metric is implicitly learned by training the algorithm on a high-resolution precipitable water field, encoding the degree of subgrid organization. The organization metric shows large hysteresis, emphasizing the role of memory created by subgrid-scale structures. We demonstrate that this organization metric can be predicted as a simple memory process from information available at the previous time steps. These findings stress the role of organization and memory in accurate prediction of precipitation intensity and extremes, and the necessity of parameterizing subgrid-scale convective organization in climate models to better project future changes in the water cycle and extremes.
Award ID(s): 2019625
PAR ID: 10430170
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the National Academy of Sciences
Volume: 120
Issue: 20
ISSN: 0027-8424
Format(s): Medium: X
Sponsoring Org: National Science Foundation
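To make the abstract's approach concrete, here is a minimal sketch (not the authors' code; all layer sizes, variable names, and the latent dimension are illustrative assumptions) of the idea: a small convolutional encoder compresses a high-resolution precipitable-water patch into a low-dimensional "organization" latent, which is concatenated with coarse large-scale predictors to estimate coarse-grained precipitation.

```python
# Hedged sketch, assuming a 64x64 high-res patch and a 4-dimensional latent.
import torch
import torch.nn as nn

LATENT_DIM = 4       # low-dimensional organization metric (assumption)
N_LARGE_SCALE = 16   # number of coarse-grid predictors, e.g. humidity/temperature

class OrgEncoder(nn.Module):
    """Encode a 64x64 high-res precipitable-water patch into LATENT_DIM numbers."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, LATENT_DIM),
        )
    def forward(self, pw_patch):
        return self.net(pw_patch)

class PrecipNet(nn.Module):
    """Predict coarse-grained precipitation from large-scale inputs + latent."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_LARGE_SCALE + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # precipitation is non-negative
        )
    def forward(self, large_scale, org_latent):
        return self.mlp(torch.cat([large_scale, org_latent], dim=-1))

# Toy forward pass on random data.
enc, net = OrgEncoder(), PrecipNet()
pw = torch.randn(8, 1, 64, 64)      # high-res precipitable water patches
ls = torch.randn(8, N_LARGE_SCALE)  # coarse large-scale predictors
precip = net(ls, enc(pw))
print(precip.shape)  # torch.Size([8, 1])
```

The design choice mirrors the abstract's finding: the same network with only `large_scale` as input would miss subgrid organization, whereas the added low-dimensional latent carries that missing information.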
More Like this
  1. Physical parameterizations (or closures) are used as representations of unresolved subgrid processes within weather and global climate models or coarse-scale turbulent models, whose resolutions are too coarse to resolve small-scale processes. These parameterizations are typically grounded in physically based, yet empirical, representations of the underlying small-scale processes. Machine learning-based parameterizations have recently been proposed as an alternative and have shown great promise for reducing uncertainties associated with the parameterization of small-scale processes. Yet, these approaches still show important mismatches that are often attributed to the stochasticity of the considered process. This stochasticity can be due to coarse temporal resolution, unresolved variables, or simply the inherent chaotic nature of the process. To address these issues, we propose a new type of parameterization (closure), built using memory-based neural networks, that accounts for the non-instantaneous response of the closure and enhances its stability and prediction accuracy. We apply the proposed memory-based parameterization, with a differentiable solver, to the Lorenz '96 model at coarse temporal resolution and show its capacity to produce skillful forecasts of the resolved variables over a long time horizon, compared to instantaneous parameterizations. This approach paves the way for the use of memory-based parameterizations in closure problems.
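A minimal sketch of the memory-based closure idea on Lorenz '96 (architecture, hyperparameters, and the Euler integrator are illustrative assumptions, not the paper's setup): a GRU maps a short history of the resolved variables to the unresolved subgrid forcing inside a coarse time step. A real setup would train the closure through a differentiable solver, as the abstract describes.

```python
# Hedged sketch: GRU closure for the coarse Lorenz '96 model.
import torch
import torch.nn as nn

K, F, DT = 8, 10.0, 0.01  # resolved variables, forcing, coarse time step

def resolved_tendency(x):
    """Coarse Lorenz '96 tendency without the subgrid term (periodic in k):
    dX_k/dt = X_{k-1}(X_{k+1} - X_{k-2}) - X_k + F."""
    return (torch.roll(x, 1, -1) * (torch.roll(x, -1, -1) - torch.roll(x, 2, -1))
            - x + F)

class MemoryClosure(nn.Module):
    """GRU over the last few states; outputs one subgrid forcing per grid point."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=K, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, K)
    def forward(self, history):          # history: (batch, time, K)
        out, _ = self.gru(history)
        return self.head(out[:, -1])     # closure term at the current time

def coarse_step(x, history, closure):
    """One Euler step of the closed coarse model."""
    return x + DT * (resolved_tendency(x) + closure(history))

# Toy rollout with an untrained closure and a 4-step memory buffer.
closure = MemoryClosure()
x = torch.randn(1, K)
hist = x.unsqueeze(1).repeat(1, 4, 1)
for _ in range(10):
    x = coarse_step(x, hist, closure)
    hist = torch.cat([hist[:, 1:], x.unsqueeze(1)], dim=1)
print(x.shape)  # torch.Size([1, 8])
```

The memory buffer is what distinguishes this from an instantaneous closure: the GRU can represent the lagged, non-instantaneous response that the abstract identifies as the source of apparent stochasticity.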
  2. Subgrid parameterizations, which represent physical processes occurring below the resolution of current climate models, are an important component in producing accurate, long-term predictions for the climate. A variety of approaches have been tested to design these components, including deep learning methods. In this work, we evaluate a proof of concept illustrating a multiscale approach to this prediction problem. We train neural networks to predict subgrid forcing values on a testbed model and examine improvements in prediction accuracy that can be obtained by using additional information in both fine-to-coarse and coarse-to-fine directions.
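The training target in this kind of study is typically the coarse-graining diagnostic: the subgrid forcing is the part of the fine-grid tendency that the same operator evaluated on the coarse grid misses. A minimal sketch (the tendency function and grid sizes are illustrative, not the paper's testbed):

```python
# Hedged sketch: subgrid forcing S = coarsen(f(u_fine)) - f(coarsen(u_fine)).
import numpy as np

def coarsen(u, factor=4):
    """Block-average a periodic 1D field onto a grid `factor` times coarser."""
    return u.reshape(-1, factor).mean(axis=1)

def tendency(u):
    """Toy nonlinear tendency (advection-like term) on a periodic grid."""
    return -u * (np.roll(u, -1) - np.roll(u, 1))

u_fine = np.random.randn(256)
subgrid_forcing = coarsen(tendency(u_fine)) - tendency(coarsen(u_fine))
print(subgrid_forcing.shape)  # (64,) -- this residual is the NN's training target
```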
  3. Abstract A promising approach to improve climate‐model simulations is to replace traditional subgrid parameterizations, based on simplified physical models, with data‐driven machine learning algorithms. However, neural networks (NNs) often lead to instabilities and climate drift when coupled to an atmospheric model. Here, we learn an NN parameterization from a high‐resolution atmospheric simulation in an idealized domain by accurately calculating subgrid terms through coarse graining. The NN parameterization has a structure that ensures physical constraints are respected, such as by predicting subgrid fluxes instead of tendencies. The NN parameterization leads to stable simulations that replicate the climate of the high‐resolution simulation with similar accuracy to a successful random‐forest parameterization while needing far less memory. We find that the simulations are stable for different horizontal resolutions and a variety of NN architectures, and that an NN with substantially reduced numerical precision could decrease computational costs without affecting the quality of simulations.
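The flux-form constraint mentioned here has a simple mechanical interpretation: if the network predicts fluxes at cell interfaces and the tendency is their divergence, the column sum of the tendency telescopes to the boundary fluxes alone, so conservation holds by construction. A minimal sketch (layer sizes and the number of levels are illustrative assumptions):

```python
# Hedged sketch: tendency as the divergence of NN-predicted interface fluxes.
import torch
import torch.nn as nn

N = 30  # vertical levels (assumption)

class FluxNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N, 64), nn.ReLU(),
            nn.Linear(64, N + 1),  # one flux per cell interface
        )
    def forward(self, column):
        flux = self.mlp(column).clone()
        flux[..., 0] = 0.0    # zero flux through the lower boundary
        flux[..., -1] = 0.0   # zero flux through the upper boundary
        # Tendency = flux divergence; its column sum telescopes to zero.
        return flux[..., 1:] - flux[..., :-1]

net = FluxNet()
tend = net(torch.randn(5, N))
print(tend.sum(dim=-1))  # ~0 for every column: conserved by construction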
  4. Abstract Extreme winds associated with tropical cyclones (TCs) can cause significant loss of life and economic damage globally, highlighting the need for accurate, high‐resolution modeling and forecasting of wind. However, due to their coarse horizontal resolution, most global climate and weather models suffer from chronic underprediction of TC wind speeds, limiting their use for impact analysis and energy modeling. In this study, we introduce a cascading deep learning framework designed to downscale high‐resolution TC wind fields given low‐resolution data. Our approach maps 85 TC events from ERA5 data (0.25° resolution) to high‐resolution (0.05° resolution) observations at 6‐hr intervals. The initial component is a debiasing neural network designed to model accurate wind speed observations using ERA5 data. The second component employs a generative super‐resolution strategy based on a conditional denoising diffusion probabilistic model (DDPM) to enhance the spatial resolution and to produce ensemble estimates. The framework accurately models intensity and produces realistic radial profiles and fine‐scale spatial structures of wind fields, with a percentage mean bias of −3.74% compared to the high‐resolution observations. Our downscaling framework enables the prediction of high‐resolution wind fields using widely available low‐resolution and intensity wind data, allowing for the modeling of past events and the assessment of future TC risks.
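Structurally, the cascade is: debias the coarse field, then condition a standard DDPM reverse loop on the upsampled debiased field. A minimal sketch of that wiring (the networks, noise schedule, shapes, and the omission of a timestep embedding are all toy assumptions, not the paper's models):

```python
# Hedged sketch: debiasing stage followed by a conditional DDPM sampling loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 50  # diffusion steps (toy)
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)

debias = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the debiasing network

class EpsNet(nn.Module):
    """Toy noise predictor conditioned on the upsampled low-res field
    (timestep embedding omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 1, 3, padding=1)
    def forward(self, x_t, cond, t):
        return self.net(torch.cat([x_t, cond], dim=1))

@torch.no_grad()
def sample(lowres, eps_net, hi=64):
    cond = F.interpolate(debias(lowres), size=(hi, hi), mode="bilinear",
                         align_corners=False)
    x = torch.randn(lowres.shape[0], 1, hi, hi)  # start from pure noise
    for t in reversed(range(T)):                 # standard DDPM reverse update
        eps = eps_net(x, cond, t)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = (mean + betas[t].sqrt() * torch.randn_like(x)) if t > 0 else mean
    return x

hires = sample(torch.randn(2, 1, 16, 16), EpsNet())
print(hires.shape)  # torch.Size([2, 1, 64, 64])
```

Because the reverse loop is stochastic, repeated calls to `sample` yield the ensemble estimates the abstract mentions.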
  5. Abstract Global climate models (GCMs) and Earth system models (ESMs) exhibit biases, with resolutions too coarse to capture local variability for fine-scale, reliable drought and climate impact assessment. However, conventional bias correction approaches may cause implausible climate change signals due to unrealistic representations of spatial and intervariable dependences. While purely data-driven deep learning has achieved significant progress in improving climate and earth system simulations and predictions, it cannot reliably learn circumstances (e.g., extremes) that are largely unseen in the historical climate but likely to become more frequent in the future climate (i.e., climate non-stationarity). This study presents an integrated trend-preserving deep learning approach that addresses the spatial and intervariable dependences and climate non-stationarity issues for downscaling and bias correcting GCMs/ESMs. Here we combine the super-resolution deep residual network (SRDRN) with the trend-preserving quantile delta mapping (QDM) to downscale and bias correct six primary climate variables at once (daily precipitation, maximum temperature, minimum temperature, relative humidity, solar radiation, and wind speed) from five state-of-the-art GCMs/ESMs in the Coupled Model Intercomparison Project Phase 6 (CMIP6). We found that the SRDRN-QDM approach greatly reduced GCMs/ESMs biases in spatial and intervariable dependences, while reducing biases in extremes significantly better than purely data-driven deep learning. The estimated drought based on the six bias-corrected and downscaled variables captured the observed drought intensity and frequency, outperforming state-of-the-art multivariate bias correction approaches and demonstrating its capability for correcting GCMs/ESMs biases in spatial and multivariable dependences and extremes.
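The trend-preserving step named here, quantile delta mapping (QDM, Cannon et al. 2015), has a compact form for ratio variables like precipitation: find each future value's quantile in the future model climate, apply the model-projected relative change at that quantile to the corresponding observed quantile. A minimal sketch (array names are placeholders, and the empirical quantiles are a simplification of operational QDM):

```python
# Hedged sketch: multiplicative quantile delta mapping for precipitation.
import numpy as np

def qdm_multiplicative(obs_hist, mod_hist, mod_fut):
    """Bias-correct model future values while preserving their relative change."""
    # Non-exceedance probability of each future value in the future model climate.
    tau = np.searchsorted(np.sort(mod_fut), mod_fut, side="right") / len(mod_fut)
    tau = np.clip(tau, 0.01, 0.99)                # avoid degenerate tail quantiles
    # Relative change signal at the same quantile, applied to the observed value.
    delta = mod_fut / np.quantile(mod_hist, tau)  # model-projected ratio
    return np.quantile(obs_hist, tau) * delta     # obs quantile scaled by change

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)   # observed historical precipitation (toy)
hist = rng.gamma(2.0, 2.0, 5000)  # biased model, historical period
fut = rng.gamma(2.0, 2.6, 5000)   # biased model, future period (wetter)
corrected = qdm_multiplicative(obs, hist, fut)
print(corrected.mean() / obs.mean())  # retains the model's relative wetting signal
```

This is what "trend-preserving" means operationally: the quantile-wise ratio `delta` survives the correction, so the climate change signal is not distorted the way plain quantile mapping can distort it.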