Abstract Prediction of the spatial‐temporal dynamics of the fluid flow in complex subsurface systems, such as geologic storage, is typically performed using advanced numerical simulation methods that solve the underlying governing physical equations. However, numerical simulation is computationally demanding and can limit the implementation of standard field management workflows, such as model calibration and optimization. Standard deep learning models, such as RUNET, have recently been proposed to alleviate the computational burden of physics‐based simulation models. Despite their powerful learning capabilities and computational appeal, deep learning models have important limitations, including lack of interpretability, extensive data needs, weak extrapolation capacity, and physical inconsistency that can affect their adoption in practical applications. We develop a Fluid Flow‐based Deep Learning (FFDL) architecture for spatial‐temporal prediction of important state variables in subsurface flow systems. The new architecture consists of a physics‐based encoder to construct physically meaningful latent variables, and a residual‐based processor to predict the evolution of the state variables. It uses physical operators that serve as nonlinear activation functions and imposes the general structure of the fluid flow equations to facilitate its training with data pertaining to the specific subsurface flow application of interest. A comprehensive investigation of FFDL, based on a field‐scale geologic storage model, is used to demonstrate the superior performance of FFDL compared to RUNET as a standard deep learning model. The results show that FFDL outperforms RUNET in terms of prediction accuracy, extrapolation power, and training data needs.
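To make the FFDL description concrete, here is a minimal sketch of the encoder/processor pattern the abstract describes, assuming a single pressure state evolving on a 2-D grid. The module names, layer sizes, and the softplus stand-in for a physical operator are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical FFDL-style sketch: a physics-based encoder builds positive
# latent coefficient fields, and a residual processor advances the state.
import torch
import torch.nn as nn

class PhysicsEncoder(nn.Module):
    """Maps static inputs (e.g., permeability) to latent coefficient fields."""
    def __init__(self, in_ch: int, latent_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, latent_ch, kernel_size=3, padding=1)

    def forward(self, static_props):
        # Softplus keeps the latent "transmissibility-like" fields positive,
        # mimicking a physical constraint rather than a generic activation.
        return nn.functional.softplus(self.conv(static_props))

class ResidualProcessor(nn.Module):
    """Advances the state with a residual update, p_{t+1} = p_t + dt * F(p_t)."""
    def __init__(self, latent_ch: int):
        super().__init__()
        self.flux = nn.Conv2d(latent_ch + 1, 1, kernel_size=3, padding=1)

    def forward(self, state, latent, dt=1.0):
        update = self.flux(torch.cat([state, latent], dim=1))
        return state + dt * update  # residual step echoes the flow-equation structure

# Usage: one rollout step on synthetic inputs
enc, proc = PhysicsEncoder(1, 8), ResidualProcessor(8)
perm = torch.rand(4, 1, 64, 64)   # static permeability maps
p0 = torch.zeros(4, 1, 64, 64)    # initial pressure
p1 = proc(p0, enc(perm))
```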
From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling
Abstract The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially-varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples that learn from soil moisture and streamflow data, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
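As a concrete illustration of the dPL idea, the following hedged sketch trains a small network to map static attributes to the parameter of a toy differentiable "process model" (a one-parameter linear reservoir standing in for a real hydrologic model). The model, data, and all names are illustrative assumptions, not the paper's implementation.

```python
# dPL sketch: a network g learns attributes -> model parameters, trained
# end-to-end by backpropagating through a differentiable process model.
import torch
import torch.nn as nn

def toy_bucket(forcing, params):
    """Differentiable 'process model': a one-parameter linear reservoir."""
    k = params[:, 0:1]                 # outflow coefficient in (0, 1)
    storage = torch.zeros_like(k)
    flows = []
    for t in range(forcing.shape[1]):
        storage = storage + forcing[:, t:t+1]
        q = k * storage                # outflow proportional to storage
        storage = storage - q
        flows.append(q)
    return torch.cat(flows, dim=1)

g = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(g.parameters(), lr=1e-2)

attrs = torch.rand(32, 5)                          # static attributes per site
rain = torch.rand(32, 50)                          # forcing time series
obs = toy_bucket(rain, torch.full((32, 1), 0.3))   # synthetic "observations"

for _ in range(200):
    opt.zero_grad()
    sim = toy_bucket(rain, g(attrs))   # gradients flow through the process model
    loss = ((sim - obs) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the parameters come from one global mapping rather than per-site optimization, adding more training sites constrains the same network, which is the source of the scaling behavior the abstract describes.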
- PAR ID: 10305836
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Nature Communications
- Volume: 12
- Issue: 1
- ISSN: 2041-1723
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract Hydrogeologic models generally require gridded subsurface properties; however, these inputs are often difficult to obtain and highly uncertain. Parametrizing computationally expensive models where extensive calibration is computationally infeasible is a long-standing challenge in hydrogeology. Here we present a machine learning framework to address this challenge. We train an inversion model to learn the relationship between water table depth and hydraulic conductivity (K) using a small number of physical simulations. For a 31-million-grid-cell model of the US, we demonstrate that the inversion model can produce a reliable K field using only 30 simulations for training. Furthermore, we show that the inversion model captures physically realistic relationships between variables, even for relationships it was not directly trained on. While there are still limitations for out-of-sample parameters, the general framework presented here provides a promising approach for parametrizing expensive models.
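The following sketch illustrates the inversion idea under stated assumptions: a stand-in "simulator" generates water table depth (WTD) from K fields, and a regressor is trained to invert that relationship from roughly 30 simulations. The random-forest choice and the per-cell feature layout are illustrative, not the paper's exact setup.

```python
# Inversion sketch: learn K from simulated water table depth using only a
# handful of forward simulations; all data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sims, n_cells = 30, 1000   # ~30 training simulations, as in the paper

log_k = rng.normal(0.0, 1.0, (n_sims, n_cells))            # input K fields
wtd = 2.0 - 0.5 * log_k + rng.normal(0, 0.1, log_k.shape)  # stand-in simulator output

# Invert per grid cell: features are WTD (plus any static covariates), target is K
X = wtd.reshape(-1, 1)
y = log_k.reshape(-1)
inv = RandomForestRegressor(n_estimators=100).fit(X, y)

k_hat = inv.predict(wtd[:1].reshape(-1, 1))   # recover a K field from a new WTD map
```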
- Abstract Climate models are generally calibrated manually by comparing selected climate statistics, such as the global top-of-atmosphere energy balance, to observations. The manual tuning targets only a limited subset of observational data and parameters. Bayesian calibration can estimate climate model parameters and their uncertainty using a larger fraction of the available data, and it can explore the parameter space more broadly and automatically. In Bayesian learning, it is natural to exploit the seasonal cycle, which has a large amplitude compared with anthropogenic climate change in many climate statistics. In this study, we develop methods for the calibration and uncertainty quantification (UQ) of model parameters that exploit the seasonal cycle, and we demonstrate a proof of concept with an idealized general circulation model (GCM). UQ is performed using the calibrate-emulate-sample approach, which combines stochastic optimization and machine learning emulation to speed up Bayesian learning. The methods are demonstrated in a perfect-model setting through the calibration and UQ of a convective parameterization in an idealized GCM with a seasonal cycle. Calibration and UQ based on seasonally averaged climate statistics, compared to annually averaged statistics, reduce the calibration error by up to an order of magnitude and narrow the spread of the non-Gaussian posterior distributions by factors of two to five, depending on the variables used for UQ. The reduction in the spread of the parameter posterior distribution leads to a reduction in the uncertainty of climate model predictions.
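A minimal sketch of the calibrate-emulate-sample pattern described above, with a cheap toy function standing in for the GCM. The grid-search calibration, GP emulator settings, and Metropolis sampler are illustrative assumptions, not the study's configuration.

```python
# Calibrate-emulate-sample (CES) on a toy "GCM": calibrate a parameter against
# an observed statistic, emulate the model with a GP, then sample the
# posterior from the cheap emulator instead of the expensive model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def gcm_stat(theta):
    """Expensive model stood in by a cheap function of the parameter."""
    return np.sin(theta) + 0.3 * theta

obs, noise = gcm_stat(1.2) + 0.05, 0.1

# 1) Calibrate: coarse search over candidate parameters
grid = np.linspace(0, 3, 30)
theta_star = grid[np.argmin((np.array([gcm_stat(t) for t in grid]) - obs) ** 2)]

# 2) Emulate: fit a GP near the calibrated value from a few model runs
train_t = theta_star + rng.uniform(-0.5, 0.5, 20)
gp = GaussianProcessRegressor().fit(train_t[:, None],
                                    np.array([gcm_stat(t) for t in train_t]))

# 3) Sample: Metropolis on the emulator, which is orders of magnitude cheaper
def log_post(t):
    return -0.5 * ((gp.predict(np.array([[t]]))[0] - obs) / noise) ** 2

samples, t = [], theta_star
for _ in range(2000):
    prop = t + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(t):
        t = prop
    samples.append(t)
```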
- Pollard, Tom J. (Ed.) Modern predictive models require large amounts of data for training and evaluation, the absence of which may result in models that are specific to certain locations, the populations in them, and clinical practices. Yet best practices for clinical risk prediction models have not yet considered such challenges to generalizability. Here we ask whether population- and group-level performance of mortality prediction models varies significantly when the models are applied to hospitals or geographies different from the ones in which they were developed. Further, what characteristics of the datasets explain the performance variation? In this multi-center cross-sectional study, we analyzed electronic health records from 179 hospitals across the US, covering 70,126 hospitalizations from 2014 to 2015. The generalization gap, defined as the difference in model performance metrics across hospitals, is computed for the area under the receiver operating characteristic curve (AUC) and the calibration slope. To assess model performance by the race variable, we report differences in false negative rates across groups. Data were also analyzed using a causal discovery algorithm, "Fast Causal Inference," that infers paths of causal influence while identifying potential influences associated with unmeasured variables. When transferring models across hospitals, AUC at the test hospital ranged from 0.777 to 0.832 (1st-3rd quartile, or IQR; median 0.801); calibration slope from 0.725 to 0.983 (IQR; median 0.853); and disparity in false negative rates from 0.046 to 0.168 (IQR; median 0.092). Distributions of all variable types (demography, vitals, and labs) differed significantly across hospitals and regions. The race variable also mediated differences in the relationship between clinical variables and mortality, by hospital/region. In conclusion, group-level performance should be assessed during generalizability checks to identify potential harms to the groups. Moreover, to develop methods that improve model performance in new environments, a better understanding and documentation of the provenance of data and health processes are needed to identify and mitigate sources of variation.
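A hedged sketch of the transfer checks described above: develop a model at one (synthetic) hospital, then evaluate AUC, calibration slope, and a subgroup false-negative rate at another. All data, features, and thresholds are made up for illustration.

```python
# Generalization-gap sketch: AUC, calibration slope, and false-negative-rate
# disparity when a mortality model is moved to a new hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X_a, X_b = rng.normal(0, 1, (500, 5)), rng.normal(0.3, 1.2, (500, 5))  # shifted covariates
y_a = (X_a[:, 0] + rng.normal(0, 1, 500) > 0).astype(int)
y_b = (X_b[:, 0] + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X_a, y_a)         # develop at hospital A
p_b = model.predict_proba(X_b)[:, 1]               # transfer to hospital B

auc_gap = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]) - roc_auc_score(y_b, p_b)

# Calibration slope: logistic regression of outcomes on the logit of predicted risk
logit = np.log(p_b / (1 - p_b))
cal_slope = LogisticRegression().fit(logit[:, None], y_b).coef_[0, 0]

# False-negative rate for one (hypothetical) subgroup at a 0.5 threshold
group = rng.integers(0, 2, 500).astype(bool)
fnr = ((p_b < 0.5) & (y_b == 1) & group).sum() / ((y_b == 1) & group).sum()
```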
- Thenkabail, Prasad S. (Ed.) Physically based hydrologic models require significant effort and extensive information for development, calibration, and validation. This study explored the use of random forest regression (RFR), a supervised machine learning (ML) model, as an alternative to the physically based Soil and Water Assessment Tool (SWAT) for predicting streamflow in the Rio Grande Headwaters near Del Norte, a snowmelt-dominated mountainous watershed of the Upper Rio Grande Basin. Remotely sensed data were used for the random forest machine learning analysis (RFML), with RStudio used for data processing and synthesis. The RFML model outperformed the SWAT model in accuracy and demonstrated its capability for predicting streamflow in this region. We implemented a customized approach to the RFR model to assess its performance over three training periods, 1991–2010, 1996–2010, and 2001–2010; the results indicated that the model's accuracy improved with longer training periods, implying that a model trained on a more extended period is better able to capture the parameters' variability and reproduce streamflow data more accurately. The variable importance (i.e., IncNodePurity) measure of the RFML model revealed that snow depth and minimum temperature were consistently the top two predictors across all training periods. The paper also evaluated how well the SWAT model reproduces streamflow data for the watershed with a conventional approach. The SWAT model required more time and data to set up and calibrate, delivering acceptable performance in annual mean streamflow simulation, with satisfactory index of agreement (d), coefficient of determination (R2), and percent bias (PBIAS) values, but monthly simulation warrants further exploration and model adjustments. The study recommends exploring snowmelt runoff hydrologic processes, dust-driven sublimation effects, and more detailed topographic input parameters to update the SWAT snowmelt routine for better monthly flow estimation. The results provide a critical analysis for enhancing streamflow prediction, which is valuable for further research and water resource management, including in snowmelt-driven semi-arid regions.
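The study's analysis was done in R (IncNodePurity comes from R's randomForest package); the sketch below reproduces the same workflow in Python on synthetic data, using scikit-learn's impurity-based importances as a rough analogue of IncNodePurity. Predictor names and data are illustrative.

```python
# RFR streamflow sketch: train a random forest on remote-sensing-style
# predictors and inspect variable importance, echoing the workflow above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 600   # synthetic records; real inputs would be remote-sensing products
snow_depth = rng.gamma(2.0, 1.0, n)
t_min = rng.normal(-5, 8, n)
precip = rng.exponential(2.0, n)
flow = 3.0 * snow_depth - 0.5 * t_min + 0.2 * precip + rng.normal(0, 1, n)

X = np.column_stack([snow_depth, t_min, precip])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, flow)

# Impurity-based importances: analogous role to IncNodePurity in R
for name, imp in zip(["snow_depth", "t_min", "precip"], rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```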