

Search for: All records

Award ID contains: 1940190


  1. Abstract

    Predictions of hydrologic variables across the entire water cycle have significant value for water resources management as well as downstream applications such as ecosystem and water quality modeling. Recently, purely data‐driven deep learning models like long short‐term memory (LSTM) showed seemingly insurmountable performance in modeling rainfall runoff and other geoscientific variables, yet they cannot predict untrained physical variables and remain challenging to interpret. Here, we show that differentiable, learnable, process‐based models (called δ models here) can approach the performance level of LSTM for the intensively observed variable (streamflow) with regionalized parameterization. We use a simple hydrologic model HBV as the backbone and use embedded neural networks, which can only be trained in a differentiable programming framework, to parameterize, enhance, or replace the process‐based model's modules. Without using an ensemble or post‐processor, δ models can obtain a median Nash‐Sutcliffe efficiency of 0.732 for 671 basins across the USA for the Daymet forcing data set, compared to 0.748 from a state‐of‐the‐art LSTM model with the same setup. For another forcing data set, the difference is even smaller: 0.715 versus 0.722. Meanwhile, the resulting learnable process‐based models can output a full set of untrained variables, for example, soil and groundwater storage, snowpack, evapotranspiration, and baseflow, and can later be constrained by their observations. Both simulated evapotranspiration and fraction of discharge from baseflow agreed decently with alternative estimates. The general framework can work with models with various process complexity and opens up the path for learning physics from big data.
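
    For reference, the Nash‐Sutcliffe efficiency (NSE) used in this comparison has a standard definition; the following is a minimal sketch of computing it per basin and taking the median, assuming NumPy and a hypothetical simulations mapping (this is not the authors' code):

        import numpy as np

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency: 1 minus the ratio of the squared model error
            # to the variance of the observations about their own mean.
            sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # The abstract's comparison is the median NSE across the 671 basins.
        # `simulations` is a hypothetical dict: basin ID -> (simulated, observed) arrays.
        median_nse = np.median([nse(s, o) for s, o in simulations.values()])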

     
  2. Abstract

    Hydroelectric power (hydropower) is unique in that it can function both as a conventional source of electricity and as backup storage (pumped hydroelectric storage and large reservoir storage) for providing energy in times of high demand on the grid (S. Rehman, L. M. Al-Hadhrami, and M. M. Alam, 2015, Renewable and Sustainable Energy Reviews, 44, 586–98). This study examines the impact of hydropower on system electricity price and price volatility in the region served by the New England Independent System Operator (ISONE) from 2014 to 2020 (ISONE, “ISO New England Web Services API v1.1,” https://webservices.iso-ne.com/docs/v1.1/, 2021. Accessed: 2021-01-10). We perform a robust holistic analysis of the mean and quantile effects, as well as the marginal contributing effects of hydropower in the presence of solar and wind resources. First, the price data is adjusted for deterministic temporal trends, correcting for seasonal, weekend, and diurnal effects that may obscure actual representative trends in the data. Using multiple linear regression and quantile regression, we observe that hydropower contributes to a reduction in the system electricity price and price volatility. While hydropower has a weak impact on decreasing price and volatility at the mean, it has a greater impact at extreme quantiles (>70th percentile). At these higher percentiles, we find that hydropower provides a stabilizing effect on price volatility in the presence of volatile resources such as wind. We conclude with a discussion of the observed relationship between hydropower and system electricity price and volatility.
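
    A minimal sketch of the kind of mean and quantile analysis described here, on synthetic stand-in data; the column names, coefficients, and the choice of the statsmodels API are assumptions, not the study's actual specification:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for the detrended hourly data set (the real study uses
        # ISONE price and generation records; names and magnitudes here are made up).
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "hydro": rng.uniform(0, 1, 5000),
            "wind": rng.uniform(0, 1, 5000),
            "solar": rng.uniform(0, 1, 5000),
        })
        df["price"] = 50 - 8 * df["hydro"] - 5 * df["wind"] + rng.normal(0, 10, 5000)

        # Mean effect via multiple linear regression, and the tail behaviour via
        # quantile regression at the 70th percentile of price.
        fit_mean = smf.ols("price ~ hydro + wind + solar", df).fit()
        fit_q70 = smf.quantreg("price ~ hydro + wind + solar", df).fit(q=0.70)
        print(fit_mean.params["hydro"], fit_q70.params["hydro"])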

     
  3. Abstract

    Deep learning (DL) models trained on hydrologic observations can perform extraordinarily well, but they can inherit deficiencies of the training data, such as limited coverage of in situ data or low resolution/accuracy of satellite data. Here we propose a novel multiscale DL scheme learning simultaneously from satellite and in situ data to predict 9 km daily soil moisture (5 cm depth). Based on spatial cross‐validation over sites in the conterminous United States, the multiscale scheme obtained a median correlation of 0.901 and a root‐mean‐square error of 0.034 m³/m³. It outperformed the Soil Moisture Active Passive satellite mission's 9 km product, DL models trained on in situ data alone, and land surface models. Our 9 km product showed better accuracy than previous 1 km satellite downscaling products, highlighting the limited impact of improving resolution. Not only is our product useful for planning against floods, droughts, and pests, but our scheme is also generically applicable to geoscientific domains with data on multiple scales, breaking the confines of individual data sets.
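
    The core multiscale idea is a loss that penalizes the same prediction against data at two scales; the sketch below is a hypothetical PyTorch illustration (shapes, names, pooling factor, and equal weighting are assumptions, not the paper's implementation):

        import torch
        import torch.nn.functional as F

        def multiscale_loss(pred_9km, sat_coarse, insitu_obs, insitu_mask,
                            w_sat=1.0, w_insitu=1.0):
            # pred_9km: (N, 1, H, W) predicted 9 km soil moisture grid.
            # sat_coarse: (N, 1, H//4, W//4) satellite retrieval on a coarser grid.
            pred_coarse = F.avg_pool2d(pred_9km, kernel_size=4)
            loss_sat = torch.mean((pred_coarse - sat_coarse) ** 2)
            # insitu_mask: boolean tensor, same shape as pred_9km, marking grid cells
            # containing a probe; insitu_obs: the matching observed values.
            loss_insitu = torch.mean((pred_9km[insitu_mask] - insitu_obs) ** 2)
            return w_sat * loss_sat + w_insitu * loss_insitu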

     
  4. Abstract

    The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially-varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples that learned from soil moisture and streamflow, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
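
    Conceptually, dPL replaces per-basin calibration with a single network g that maps basin attributes (and optionally responses) to process-model parameters, with gradients flowing through a differentiable process-based model back into g. The following is a rough PyTorch-style sketch under stated assumptions; process_model, loader, and the layer sizes are placeholders, not the paper's implementation:

        import torch

        class ParameterNetwork(torch.nn.Module):
            # g: basin attributes -> physical parameters of the process-based model.
            def __init__(self, n_attributes, n_params):
                super().__init__()
                self.net = torch.nn.Sequential(
                    torch.nn.Linear(n_attributes, 64),
                    torch.nn.ReLU(),
                    torch.nn.Linear(64, n_params),
                    torch.nn.Sigmoid(),   # keep parameters in a bounded, physical range
                )

            def forward(self, attributes):
                return self.net(attributes)

        g = ParameterNetwork(n_attributes=27, n_params=13)
        optimizer = torch.optim.Adam(g.parameters(), lr=1e-3)
        for forcing, attributes, observed in loader:          # placeholder data loader
            params = g(attributes)
            simulated = process_model(forcing, params)        # differentiable process model
            loss = torch.mean((simulated - observed) ** 2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()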

     
  5. Abstract. As a genre of physics-informed machine learning, differentiable process-based hydrologic models (abbreviated as δ or delta models) with regionalized deep-network-based parameterization pipelines were recently shown to provide daily streamflow prediction performance closely approaching that of state-of-the-art long short-term memory (LSTM) deep networks. Meanwhile, δ models provide a full suite of diagnostic physical variables and guaranteed mass conservation. Here, we ran experiments to test (1) their ability to extrapolate to regions far from streamflow gauges and (2) their ability to make credible predictions of long-term (decadal-scale) change trends. We evaluated the models based on daily hydrograph metrics (Nash–Sutcliffe model efficiency coefficient, etc.) and predicted decadal streamflow trends. For prediction in ungauged basins (PUB; randomly sampled ungauged basins representing spatial interpolation), δ models either approached or surpassed the performance of LSTM in daily hydrograph metrics, depending on the meteorological forcing data used. They presented a comparable trend performance to LSTM for annual mean flow and high flow but worse trends for low flow. For prediction in ungauged regions (PUR; regional holdout test representing spatial extrapolation in a highly data-sparse scenario), δ models surpassed LSTM in daily hydrograph metrics, and their advantages in mean and high flow trends became prominent. In addition, an untrained variable, evapotranspiration, retained good seasonality even for extrapolated cases. The δ models' deep-network-based parameterization pipeline produced parameter fields that maintain remarkably stable spatial patterns even in highly data-scarce scenarios, which explains their robustness. Combined with their interpretability and ability to assimilate multi-source observations, the δ models are strong candidates for regional and global-scale hydrologic simulations and climate change impact assessment. 
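
    The PUB and PUR tests amount to holding out whole basins or whole regions at training time. A minimal, assumed sketch of the two splits follows; the basin count, region labels, fold numbers, and the scikit-learn choice are placeholders, not taken from the paper:

        import numpy as np
        from sklearn.model_selection import KFold, GroupKFold

        basin_ids = np.arange(600)            # hypothetical basin list
        region_labels = basin_ids % 7         # hypothetical region assignment

        # PUB: random basin holdout (spatial interpolation to unseen basins).
        for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(basin_ids):
            pass  # train on basin_ids[train_idx], evaluate on basin_ids[test_idx]

        # PUR: regional holdout (spatial extrapolation; an entire region is withheld).
        for train_idx, test_idx in GroupKFold(n_splits=7).split(basin_ids, groups=region_labels):
            pass  # train on basin_ids[train_idx], evaluate on the held-out region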
  6. Abstract. Climate change threatens our ability to grow food for an ever-increasing population. There is a need for high-quality soil moisture predictions in under-monitored regions like Africa. However, it is unclear if soil moisture processes are globally similar enough to allow our models trained on available in situ data to maintain accuracy in unmonitored regions. We present a multitask long short-term memory (LSTM) model that learns simultaneously from global satellite-based data and in situ soil moisture data. This model is evaluated in both random spatial holdout mode and continental holdout mode (trained on some continents, tested on a different one). The model compared favorably to current land surface models, satellite products, and a candidate machine learning model, reaching a global median correlation of 0.792 for the random spatial holdout test. It behaved surprisingly well in Africa and Australia, showing high correlation even when we excluded their sites from the training set, but it performed relatively poorly in Alaska where rapid changes are occurring. In all but one continent (Asia), the multitask model in the worst-case scenario test performed better than the Soil Moisture Active Passive (SMAP) 9 km product. Factorial analysis has shown that the LSTM model's accuracy varies with terrain aspect, resulting in lower performance for dry and south-facing slopes or wet and north-facing slopes. This knowledge helps us apply the model while understanding its limitations. This model is being integrated into an operational agricultural assistance application which currently provides information to 13 million African farmers.
  7. Abstract
    Basin-centric long short-term memory (LSTM) network models have recently been shown to be an exceptionally powerful tool for stream temperature (Ts) temporal prediction (training in one period and making predictions for another period at the same sites). However, spatial extrapolation is a well-known challenge to modeling Ts and it is uncertain how an LSTM-based daily Ts model will perform in unmonitored or dammed basins. Here we compiled a new benchmark dataset consisting of >400 basins across the contiguous United States in different data availability groups (DAG, meaning the daily sampling frequency) with or without major dams and studied how to assemble suitable training datasets for predictions in basins with or without temperature monitoring. For prediction in unmonitored basins (PUB), LSTM produced an RMSE of 1.129 °C and R² of 0.983. While these metrics declined from LSTM's temporal prediction performance, they far surpassed traditional models' PUB values and were competitive with traditional models' temporal prediction on calibrated sites. Even for unmonitored basins with major reservoirs, we obtained a median RMSE of 1.202 °C and an R² of 0.984. For temporal prediction, the most suitable training set was the matching DAG that the basin could be grouped into, e.g., the 60% DAG for a basin with 61% data availability. However, for PUB, a training dataset including all basins with data is consistently preferred. An input-selection ensemble moderately mitigated attribute overfitting. Our results indicate there are influential latent processes not sufficiently described by the inputs (e.g., geology, wetland covers), but temporal fluctuations are well predictable, and LSTM appears to be a highly accurate Ts modeling tool even for spatial extrapolation.
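
    For context, the RMSE and R² figures quoted above follow standard definitions; a small assumed NumPy sketch is given below (not the authors' evaluation code, and note that some studies report the squared Pearson correlation rather than the coefficient of determination shown here):

        import numpy as np

        def rmse(pred, obs):
            pred, obs = np.asarray(pred, dtype=float), np.asarray(obs, dtype=float)
            return float(np.sqrt(np.mean((pred - obs) ** 2)))

        def r_squared(pred, obs):
            # Coefficient of determination relative to the mean of the observations.
            pred, obs = np.asarray(pred, dtype=float), np.asarray(obs, dtype=float)
            return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)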
  8. Abstract
    The electric power grid is a critical societal resource connecting multiple infrastructural domains such as agriculture, transportation, and manufacturing. The electrical grid as an infrastructure is shaped by human activity and public policy in terms of demand and supply requirements. Further, the grid is subject to changes and stresses due to diverse factors including solar weather, climate, hydrology, and ecology. The emerging interconnected and complex network dependencies make such interactions increasingly dynamic, posing novel risks and presenting new challenges in managing the coupled human–natural system. This paper provides a survey of models and methods that seek to explore the significant interconnected impacts between the electric power grid and interdependent domains. We also provide relevant critical risk indicators (CRIs) across diverse domains that may be used to assess risks to electric grid reliability, including climate, ecology, hydrology, finance, space weather, and agriculture. We discuss the convergence of indicators from individual domains to explore possible systemic risk, i.e., holistic risk arising from cross-domain interconnections. Further, we propose a compositional approach to risk assessment that incorporates diverse domain expertise and information, data science, and computer science to identify domain-specific CRIs and their union in systemic risk indicators. Our study provides an important first step towards data-driven analysis and predictive modeling of risks in interconnected human–natural systems.