Title: Memory-based parameterization with differentiable solver: Application to Lorenz ’96

Physical parameterizations (or closures) represent unresolved subgrid processes within weather and global climate models or coarse-scale turbulence models, whose resolutions are too coarse to resolve small-scale processes. These parameterizations are typically grounded in physically based, yet empirical, representations of the underlying small-scale processes. Machine-learning-based parameterizations have recently been proposed as an alternative and have shown great promise in reducing the uncertainties associated with parameterizing small-scale processes. Yet these approaches still exhibit important mismatches, often attributed to the stochasticity of the considered process. This stochasticity can arise from coarse temporal resolution, from unresolved variables, or simply from the inherent chaotic nature of the process. To address these issues, we propose a new type of parameterization (closure), built using memory-based neural networks, that accounts for the non-instantaneous response of the closure and enhances its stability and prediction accuracy. We apply the proposed memory-based parameterization, with a differentiable solver, to the Lorenz '96 model at coarse temporal resolution and show its capacity to produce skillful forecasts of the resolved variables over a long time horizon compared with instantaneous parameterizations. This approach paves the way for the use of memory-based parameterizations in closure problems.
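For orientation, the Lorenz '96 testbed named in the title reduces, in its single-scale form, to a simple cyclic ODE system that a differentiable solver can step through. The sketch below is a minimal NumPy illustration of such a solver (function and variable names are illustrative, not the authors' code); in the paper's setting, a learned memory-based closure term would be added to the tendency and gradients propagated through the time-stepping.

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Single-scale Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    """One classical fourth-order Runge-Kutta step of the Lorenz '96 system."""
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin up a short trajectory from a slightly perturbed uniform state.
x = 8.0 * np.ones(40)
x[0] += 0.01            # small perturbation triggers the chaotic dynamics
for _ in range(2000):
    x = rk4_step(x, 0.01)
```

Written in an autodiff framework (e.g., JAX or PyTorch) instead of plain NumPy, the same stepping loop becomes end-to-end differentiable, which is what allows training a closure through the solver.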

 
Award ID(s):
2218197
PAR ID:
10521155
Author(s) / Creator(s):
;
Publisher / Repository:
AIP Publishing
Date Published:
Journal Name:
Chaos: An Interdisciplinary Journal of Nonlinear Science
Volume:
33
Issue:
7
ISSN:
1054-1500
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Accurate prediction of precipitation intensity is crucial for both human and natural systems, especially in a warming climate more prone to extreme precipitation. Yet, climate models fail to accurately predict precipitation intensity, particularly extremes. One missing piece of information in traditional climate model parameterizations is subgrid-scale cloud structure and organization, which affects precipitation intensity and stochasticity at coarse resolution. Here, using global storm-resolving simulations and machine learning, we show that, by implicitly learning subgrid organization, we can accurately predict precipitation variability and stochasticity with a low-dimensional set of latent variables. Using a neural network to parameterize coarse-grained precipitation, we find that the overall behavior of precipitation is reasonably predictable using large-scale quantities only; however, the neural network cannot predict the variability of precipitation (R² ∼ 0.45) and underestimates precipitation extremes. The performance is significantly improved when the network is informed by our organization metric, correctly predicting precipitation extremes and spatial variability (R² ∼ 0.9). The organization metric is implicitly learned by training the algorithm on a high-resolution precipitable water field, encoding the degree of subgrid organization. The organization metric shows large hysteresis, emphasizing the role of memory created by subgrid-scale structures. We demonstrate that this organization metric can be predicted as a simple memory process from information available at the previous time steps. These findings stress the role of organization and memory in accurate prediction of precipitation intensity and extremes and the necessity of parameterizing subgrid-scale convective organization in climate models to better project future changes of water cycle and extremes.
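The abstract states that the organization metric "can be predicted as a simple memory process from information available at the previous time steps." A hedged sketch of one such process is a relaxation (AR(1)-style) update toward the instantaneous forcing; the names and the relaxation timescale below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def update_memory_metric(org_prev, forcing, tau=10.0, dt=1.0):
    """Exponential-relaxation (AR(1)-style) memory update: the organization
    metric decays toward the current forcing with timescale tau, so its value
    encodes a weighted history of past subgrid-scale structure."""
    alpha = np.exp(-dt / tau)                      # persistence factor
    return alpha * org_prev + (1.0 - alpha) * forcing

# Under constant forcing the metric relaxes toward the forcing value,
# but lags it -- the lag is the "memory" the abstract refers to.
org = 0.0
for _ in range(100):
    org = update_memory_metric(org, 1.0)
```

A process with this structure also exhibits hysteresis under time-varying forcing, consistent with the behavior the abstract describes.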
  2. Abstract. Accelerated progress in climate modeling is urgently needed for proactive and effective climate change adaptation. The central challenge lies in accurately representing processes that are small in scale yet climatically important, such as turbulence and cloud formation. These processes will not be explicitly resolvable for the foreseeable future, necessitating the use of parameterizations. We propose a balanced approach that leverages the strengths of traditional process-based parameterizations and contemporary artificial intelligence (AI)-based methods to model subgrid-scale processes. This strategy employs AI to derive data-driven closure functions from both observational and simulated data, integrated within parameterizations that encode system knowledge and conservation laws. In addition, increasing the resolution to resolve a larger fraction of small-scale processes can aid progress toward improved and interpretable climate predictions outside the observed climate distribution. However, currently feasible horizontal resolutions are limited to O(10 km) because higher resolutions would impede the creation of the ensembles that are needed for model calibration and uncertainty quantification, for sampling atmospheric and oceanic internal variability, and for broadly exploring and quantifying climate risks. By synergizing decades of scientific development with advanced AI techniques, our approach aims to significantly boost the accuracy, interpretability, and trustworthiness of climate predictions.

     
  3. A general, variational approach to derive low-order reduced models from possibly non-autonomous systems is presented. The approach is based on the concept of optimal parameterizing manifold (OPM), which substitutes for more classical notions of invariant or slow manifolds when the breakdown of "slaving" occurs, i.e., when the unresolved variables can no longer be expressed as an exact functional of the resolved ones. The OPM provides, within a given class of parameterizations of the unresolved variables, the manifold that optimally averages out these variables as conditioned on the resolved ones. The class of parameterizations retained here is that of continuous deformations of parameterizations rigorously valid near the onset of instability. These deformations are produced through the integration of auxiliary backward-forward systems built from the model's equations and lead to analytic formulas for parameterizations. In this modus operandi, the backward integration time is the key parameter, selected per scale or variable to parameterize, for deriving the relevant parameterizations, which are bound to lose exactness away from instability onset due to the breakdown of slaving typically encountered, e.g., in chaotic regimes. The selection is then made through data-informed minimization of a least-square parameterization defect. It is thus shown, through optimization of the backward integration time per scale or variable to parameterize, that skillful OPM reduced systems can be derived for accurately predicting higher-order critical transitions or catastrophic tipping phenomena, even though the parameterization formulas are trained on regimes prior to these transitions.
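The data-informed selection step described above, choosing the backward integration time that minimizes a least-square parameterization defect, can be sketched generically as follows. The defect functional and the candidate parameterization family here are placeholders, not the paper's analytic formulas.

```python
import numpy as np

def parameterization_defect(tau, x, y, phi):
    """Least-square parameterization defect: mean squared mismatch between
    the unresolved variables y and their parameterization phi(x, tau)."""
    return np.mean((y - phi(x, tau)) ** 2)

def select_backward_time(taus, x, y, phi):
    """Data-informed selection: return the backward integration time tau
    (from a candidate list) minimizing the parameterization defect."""
    defects = [parameterization_defect(t, x, y, phi) for t in taus]
    return taus[int(np.argmin(defects))]

# Toy usage with a placeholder linear family phi(x, tau) = tau * x,
# where the "true" closure is y = 2 x, so tau = 2 minimizes the defect.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x
best_tau = select_backward_time([0.5, 1.0, 2.0, 3.0], x, y,
                                lambda x, tau: tau * x)
```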

     
  4. Abstract

    This study utilizes Deep Neural Networks (DNN) to improve the K‐Profile Parameterization (KPP) for the vertical mixing effects in the ocean's surface boundary layer turbulence. The deep neural networks were trained using 11‐year turbulence‐resolving solutions, obtained by running a large eddy simulation model for Ocean Station Papa, to predict the turbulence velocity scale coefficient and unresolved shear coefficient in the KPP. The DNN‐augmented KPP schemes (KPP_DNN) have been implemented in the General Ocean Turbulence Model (GOTM). The KPP_DNN is stable for long‐term integration and more efficient than existing variants of KPP schemes with wave effects. Three different KPP_DNN schemes, each differing in their input and output variables, have been developed and trained. The performance of models utilizing the KPP_DNN schemes is compared to those employing traditional deterministic first‐order and second‐moment closure turbulent mixing parameterizations. Solution comparisons indicate that the simulated mixed layer becomes cooler and deeper when wave effects are included in parameterizations, aligning closer with observations. In the KPP framework, the velocity scale of unresolved shear, which is used to calculate ocean surface boundary layer depth, has a greater impact on the simulated mixed layer than the magnitude of diffusivity does. In the KPP_DNN, unresolved shear depends not only on wave forcing, but also on the mixed layer depth and buoyancy forcing.
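For context, a standard KPP scheme builds the eddy diffusivity from a turbulence velocity scale and a polynomial shape function. The sketch below shows that structure in a common textbook form (not GOTM's implementation); in a KPP_DNN-style scheme, the velocity scale and unresolved-shear coefficients would instead be supplied by a trained network given wind, wave, and buoyancy forcing.

```python
import numpy as np

def kpp_diffusivity(z, h, w_s):
    """K-profile diffusivity K(z) = h * w_s * G(sigma), with sigma = z / h
    and the cubic shape function G(sigma) = sigma * (1 - sigma)**2, which
    vanishes at the surface (sigma = 0) and boundary layer base (sigma = 1)."""
    sigma = np.clip(z / h, 0.0, 1.0)          # nondimensional depth
    return h * w_s * sigma * (1.0 - sigma) ** 2
```

The boundary layer depth h, in turn, is diagnosed from a bulk Richardson number that involves the velocity scale of unresolved shear, which is why the abstract finds that quantity to have a larger influence on the simulated mixed layer than the diffusivity magnitude itself.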

     
  5. Effective and timely monitoring of croplands is critical for managing food supply. While remote sensing data from earth-observing satellites can be used to monitor croplands over large regions, this task is challenging for small-scale croplands as they cannot be captured precisely using coarse-resolution data. On the other hand, higher-resolution remote sensing data are collected less frequently and contain missing or disturbed observations. Hence, traditional sequential models cannot be directly applied to high-resolution data to extract the temporal patterns that are essential to identify crops. In this work, we propose a generative model to combine multi-scale remote sensing data to detect croplands at high resolution. During the learning process, we leverage the temporal patterns learned from coarse-resolution data to generate missing high-resolution data. Additionally, the proposed model can track classification confidence in real time and potentially enable early detection. The evaluation in an intensively cultivated region demonstrates the effectiveness of the proposed method in cropland detection.
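A crude stand-in for the gap-filling idea described above is to use the co-located coarse-resolution series as a temporal-pattern prior for missing high-resolution values. The simple rescaling heuristic below only illustrates the data flow; the paper's actual approach is a learned generative model.

```python
import numpy as np

def fill_missing_highres(high, coarse):
    """Fill NaN gaps in a high-resolution series using the co-located
    coarse-resolution series: missing entries are replaced by the coarse
    value rescaled to match the mean observed high/coarse ratio.
    (A naive stand-in for a learned generator, for illustration only.)"""
    high = np.asarray(high, dtype=float)
    coarse = np.asarray(coarse, dtype=float)
    observed = ~np.isnan(high)
    scale = np.mean(high[observed] / coarse[observed])  # crude pattern alignment
    filled = high.copy()
    filled[~observed] = scale * coarse[~observed]
    return filled

# Toy usage: the middle high-resolution observation is missing.
filled = fill_missing_highres([2.0, np.nan, 6.0], [1.0, 2.0, 3.0])
```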