

Title: A Deterministic Approach for Approximating the Diurnal Cycle of Precipitation for Use in Large-Scale Hydrological Modeling

Accurate characterization of precipitation P at subdaily temporal resolution is important for a wide range of hydrological applications, yet large-scale gridded observational datasets primarily contain daily total P. Unfortunately, a widely used deterministic approach that disaggregates P uniformly over the day grossly mischaracterizes the diurnal cycle of P, leading to potential biases in simulated runoff Q. Here we present Precipitation Isosceles Triangle (PITRI), a two-parameter deterministic approach in which the hourly hyetograph is modeled with an isosceles triangle with prescribed duration and time of peak intensity. Monthly duration and peak time were derived from meteorological observations at U.S. Climate Reference Network (USCRN) stations and extended across the United States, Mexico, and southern Canada at 6-km resolution via linear regression against historical climate statistics. Across the USCRN network (years 2000–13), simulations using the Variable Infiltration Capacity (VIC) model, driven by P disaggregated via PITRI, yielded nearly unbiased estimates of annual Q relative to simulations driven by observed P. In contrast, simulations using the uniform method had a Q bias of −11%, owing to overestimated canopy evaporation and underestimated throughfall. One limitation of the PITRI approach is a potential bias in snow accumulation when a high proportion of P falls on days with a mix of temperatures above and below freezing, for which the partitioning of P into rain and snow is sensitive to event timing within the diurnal cycle. Nevertheless, the good overall performance of PITRI suggests that a deterministic approach may be sufficiently accurate for large-scale hydrologic applications.
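The triangle construction described in the abstract can be sketched in a few lines of Python. This is a simplified illustration of the PITRI idea only; the published method's exact discretization, unit conventions, and handling of triangles that overlap midnight may differ.

```python
import numpy as np

def pitri_disaggregate(p_daily, duration_h, peak_h, dt_min=6):
    """Spread a daily precipitation total (mm) over 24 h as an isosceles
    triangle of the given duration (h) centred on peak_h (local hour).
    Returns 24 hourly totals (mm). Sketch of the PITRI idea; triangles
    spilling past the day's edges would lose mass in this simple version."""
    t = np.arange(0, 24, dt_min / 60.0)          # sub-hourly time axis (h)
    half = duration_h / 2.0
    peak_intensity = 2.0 * p_daily / duration_h  # triangle area = p_daily
    # triangular intensity, zero outside [peak_h - half, peak_h + half]
    intensity = peak_intensity * np.clip(1.0 - np.abs(t - peak_h) / half, 0.0, None)
    # average intensity (mm/h) over each hour equals the hourly total (mm)
    hourly = intensity.reshape(24, -1).mean(axis=1)
    return hourly
```

For example, a 10-mm day with a 6-h event peaking at 1500 LT yields hourly totals that sum back to the daily total by construction, with the maximum in the 1500 LT hour.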

 
NSF-PAR ID: 10086579
Publisher / Repository: American Meteorological Society
Journal Name: Journal of Hydrometeorology
Volume: 20
Issue: 2
ISSN: 1525-755X
Page Range / eLocation ID: p. 297-317
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. SUMMARY

    We have simulated 0–5 Hz deterministic wave propagation for a suite of 17 models of the 2014 Mw 5.1 La Habra, CA, earthquake with the Southern California Earthquake Center Community Velocity Model Version S4.26-M01 using a finite-fault source. Strong motion data at 259 sites within a 148 km × 140 km area are used to validate our simulations. Our simulations quantify the effects of statistical distributions of small-scale crustal heterogeneities (SSHs), frequency-dependent attenuation Q(f), surface topography and near-surface low-velocity material (via a 1-D approximation) on the resulting ground motion synthetics. The shear wave quality factor QS(f) is parametrized as QS,0 for frequencies below 1 Hz and QS,0 f^γ for frequencies at or above 1 Hz. We find the most favourable fit to data for models using ratios of QS,0 to shear wave velocity VS of 0.075–1.0 and γ values less than 0.6, with the best-fitting amplitude drop-off for the higher frequencies obtained for γ values of 0.2–0.4. Models including topography and a realistic near-surface weathering layer tend to increase peak velocities at mountain peaks and ridges, with a corresponding decrease behind the peaks and ridges in the direction of wave propagation. We find a clear negative correlation between the effects on peak ground velocity amplification and duration lengthening, suggesting that topography redistributes seismic energy from the large-amplitude first arrivals to the adjacent coda waves. A weathering layer with realistic near-surface low velocities is found to enhance the amplification at mountain peaks and ridges, and may partly explain the underprediction of the effects of topography on ground motions found in models. Our models including topography tend to improve the fit to data, as compared to models with a flat free surface, while our distributions of SSHs with constraints from borehole data fail to significantly improve the fit.
Accuracy of the velocity model, particularly the near-surface low velocities, as well as the source description, controls the resolution with which the anelastic attenuation can be determined. Our results demonstrate that it is feasible to use fully deterministic physics-based simulations to estimate ground motions for seismic hazard analysis up to 5 Hz. Here, the effects of, and trade-offs with, near-surface low-velocity material, topography, SSHs and Q(f) become increasingly important as frequencies increase towards 5 Hz, and should be included in the calculations. Future improvement in community velocity models, wider access to computational resources, more efficient numerical codes and guidance from this study are bound to further constrain the ground motion models, leading to more accurate seismic hazard analysis.
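The piecewise QS(f) parametrization quoted above can be written down directly. The numbers used in the example call below are illustrative placeholders, not the study's preferred parameters.

```python
def qs_of_f(f, qs0, gamma):
    """Piecewise frequency-dependent shear-wave quality factor as stated
    in the summary: constant Q_S,0 below 1 Hz, power-law growth
    Q_S,0 * f**gamma at and above 1 Hz."""
    return qs0 if f < 1.0 else qs0 * f ** gamma
```

For a hypothetical qs0 = 150, qs_of_f(5.0, 150.0, 0.3) gives the quality factor at the 5-Hz upper end of the simulated band.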

     
  2. Abstract

    We explore the potential of feed‐forward deep neural networks (DNNs) for emulating cloud superparameterization in realistic geography, using offline fits to data from the Superparameterized Community Atmosphere Model. To identify the network architecture of greatest skill, we formally optimize hyperparameters using ∼250 trials. Our DNN explains over 70% of the temporal variance at the 15‐min sampling scale throughout the mid‐to‐upper troposphere. Autocorrelation timescale analysis compared against DNN skill suggests the poorer fit in the tropical marine boundary layer is driven by neural network difficulty emulating fast, stochastic signals in convection. However, spectral analysis in the temporal domain indicates skillful emulation of signals on diurnal to synoptic scales. A closer look at the diurnal cycle reveals correct emulation of land‐sea contrasts and vertical structure in the heating and moistening fields, but some distortion of precipitation. Sensitivity tests targeting precipitation skill reveal complementary effects of adding positive constraints versus hyperparameter tuning, motivating the use of both in the future. A first attempt to force an offline land model with DNN emulated atmospheric fields produces reassuring results further supporting neural network emulation viability in real‐geography settings. Overall, the fit skill is competitive with recent attempts by sophisticated Residual and Convolutional Neural Network architectures trained on added information, including memory of past states. Our results confirm the parameterizability of superparameterized convection with continents through machine learning and we highlight the advantages of casting this problem locally in space and time for accurate emulation and hopefully quick implementation of hybrid climate models.
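Structurally, the emulator described here is just a feed-forward network mapping a coarse-grid column state to convective heating and moistening tendencies. A toy NumPy version follows; the layer widths, input/output sizes, and random weights are placeholders, not the tuned architecture from the hyperparameter search.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MLPEmulator:
    """Minimal feed-forward net of the kind described: a coarse-grid
    column state vector in, per-level tendencies out. One hidden layer
    for brevity; the study's network is deeper and formally tuned."""
    def __init__(self, n_in=64, n_hidden=128, n_out=60):
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def __call__(self, x):
        # single hidden layer with ReLU, linear output layer
        return relu(x @ self.w1 + self.b1) @ self.w2 + self.b2
```

Because the mapping is local in space and time (one column, one time step), the same network applies at every grid column, which is part of what the abstract argues makes the problem tractable.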

     
  3. Abstract. Advances in ambient environmental monitoring technologies are enabling concerned communities and citizens to collect data to better understand their local environment and potential exposures. These mobile, low-cost tools make it possible to collect data with increased temporal and spatial resolution, providing data on a large scale with unprecedented levels of detail. This type of data has the potential to empower people to make personal decisions about their exposure and support the development of local strategies for reducing pollution and improving health outcomes. However, calibration of these low-cost instruments has been a challenge. Often, a sensor package is calibrated via field calibration. This involves colocating the sensor package with a high-quality reference instrument for an extended period and then applying machine learning or another model-fitting technique, such as multiple linear regression, to develop a calibration model for converting raw sensor signals to pollutant concentrations. Although this method helps to correct for the effects of ambient conditions (e.g., temperature) and cross-sensitivities with nontarget pollutants, there is a growing body of evidence that calibration models can overfit to a given location or set of environmental conditions on account of the incidental correlation between pollutant levels and environmental conditions, including diurnal cycles. As a result, a sensor package trained at a field site may provide less reliable data when moved, or transferred, to a different location. This is a potential concern for applications seeking to perform monitoring away from regulatory monitoring sites, such as personal mobile monitoring or high-resolution monitoring of a neighborhood.
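The field-calibration step described above amounts to regressing reference concentrations on raw sensor signals and meteorological covariates. A minimal ordinary-least-squares sketch follows; the choice and number of covariates (raw signal, temperature, relative humidity) are illustrative, and real models often add cross-sensitivity terms.

```python
import numpy as np

def fit_field_calibration(raw, temp, rh, ref):
    """Multiple linear regression for a colocated field calibration:
    reference concentration ~ intercept + raw signal + temperature + RH.
    Returns the fitted coefficients."""
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    coef, *_ = np.linalg.lstsq(X, ref, rcond=None)
    return coef

def apply_calibration(coef, raw, temp, rh):
    """Convert raw sensor signals to concentration estimates."""
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    return X @ coef
```

Refitting the same model on data pooled from several colocation sites is the multisite variant that the experiments below find transfers better to new locations.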
We performed experiments confirming that transferability is indeed a problem and show that it can be improved by collecting data from multiple regulatory sites and building a calibration model that leverages data from a more diverse data set. We deployed three sensor packages to each of three sites with reference monitors (nine packages total) and then rotated the sensor packages through the sites over time. Two sites were in San Diego, CA, with a third outside of Bakersfield, CA, offering varying environmental conditions, general air quality composition, and pollutant concentrations. When compared to prior single-site calibration, the multisite approach exhibits better model transferability for a range of modeling approaches. Our experiments also reveal that random forest is especially prone to overfitting and confirm prior results that transfer is a significant source of both bias and standard error. Linear regression, on the other hand, although it exhibits relatively high error, does not degrade much in transfer. Bias dominated in our experiments, suggesting that transferability might be easily increased by detecting and correcting for bias. Also, given that many monitoring applications involve the deployment of many sensor packages based on the same sensing technology, there is an opportunity to leverage the availability of multiple sensors at multiple sites during calibration to lower the cost of training and better tolerate transfer. We contribute a new neural network architecture model termed split-NN that splits the model into two stages, in which the first stage corrects for sensor-to-sensor variation and the second stage uses the combined data of all the sensors to build a model for a single sensor package. The split-NN modeling approach outperforms multiple linear regression, traditional two- and four-layer neural networks, and random forest models. 
Depending on the training configuration, the split-NN method reduced error by 0%–11% for NO2 and 6%–13% for O3 compared to random forest.
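The split-NN idea, a per-sensor first stage that removes sensor-to-sensor variation feeding a single shared second stage, can be illustrated with a deliberately simplified linear analogue. The published model uses neural-network stages; everything below is a stand-in to show the two-stage structure.

```python
import numpy as np

class SplitModel:
    """Linear sketch of the split-NN structure: stage 1 fits a
    per-sensor affine correction; stage 2 fits one shared model
    on the pooled, corrected data from all sensors."""
    def __init__(self, n_sensors):
        self.gain = np.ones(n_sensors)
        self.offset = np.zeros(n_sensors)
        self.coef = None

    def fit(self, sensor_id, raw, ref):
        # stage 1: per-sensor affine map from raw signal to reference
        for s in range(len(self.gain)):
            m = sensor_id == s
            a, b = np.polyfit(raw[m], ref[m], 1)
            self.gain[s], self.offset[s] = a, b
        corrected = self.gain[sensor_id] * raw + self.offset[sensor_id]
        # stage 2: one shared model trained on all corrected data
        A = np.column_stack([np.ones_like(corrected), corrected])
        self.coef, *_ = np.linalg.lstsq(A, ref, rcond=None)

    def predict(self, sensor_id, raw):
        c = self.gain[sensor_id] * raw + self.offset[sensor_id]
        return self.coef[0] + self.coef[1] * c
```

The design point is that stage 2 sees the combined data of every sensor at every site, which is what lets a fleet of identical sensor packages lower training cost and tolerate transfer better than per-sensor models.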
  4. Abstract. This study examines the diurnal variation in precipitation over Hainan Island in the South China Sea using gauge observations from 1951 to 2012 and Climate Prediction Center MORPHing technique (CMORPH) satellite estimates from 2006 to 2015, as well as numerical simulations. The simulations are the first to use climatological mean initial and lateral boundary conditions to study the dynamic and thermodynamic processes (and the impacts of land–sea breeze circulations) that control the rainfall distribution and climatology. Precipitation is most significant from April to October and exhibits a strong diurnal cycle resulting from land–sea breeze circulations. More than 60% of the total annual precipitation over the island is attributable to the diurnal cycle, with significant monthly variability. The CMORPH and gauge datasets agree well, except that the CMORPH data underestimate precipitation and show a 1-h peak delay. The diurnal cycle of the rainfall and the related land–sea breeze circulations during May and June were well captured by convection-permitting numerical simulations with the Weather Research and Forecasting (WRF) model, which were initiated from a 10-year average ERA-Interim reanalysis. The simulations slightly overestimate rainfall amounts and show a 1-h delay in peak rainfall time. The diurnal cycle of precipitation is driven by the occurrence of moist convection around noontime owing to low-level convergence associated with the sea-breeze circulations. The precipitation intensifies rapidly thereafter and peaks in the afternoon with the collisions of sea-breeze fronts from different sides of the island. Cold pools of the convective storms contribute to the inland propagation of the sea breeze. Generally, precipitation dissipates quickly in the evening due to the cooling and stabilization of the lower troposphere and decrease of boundary layer moisture.
Interestingly, the rather high island orography is not a dominant factor in the diurnal variation in precipitation over the island.
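Diurnal-cycle statistics like those above are commonly quantified by fitting the first (24-h) harmonic to the mean diurnal cycle. The sketch below shows that standard fit; the paper's exact attribution metric for the "more than 60%" figure may differ.

```python
import numpy as np

def diurnal_harmonic(hourly_mean):
    """Fit the 24-h (first) harmonic to a mean diurnal cycle of hourly
    precipitation and return its amplitude and local hour of peak."""
    h = np.asarray(hourly_mean, dtype=float)
    t = np.arange(24)
    omega = 2.0 * np.pi / 24.0
    # Fourier coefficients of the first harmonic
    a = 2.0 * np.mean(h * np.cos(omega * t))
    b = 2.0 * np.mean(h * np.sin(omega * t))
    amplitude = np.hypot(a, b)
    peak_hour = (np.arctan2(b, a) / omega) % 24.0
    return amplitude, peak_hour
```

Dividing the harmonic amplitude by the daily mean gives a normalized measure of how strongly the diurnal cycle dominates the precipitation climatology.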

     
  5. Accurate prediction of precipitation intensity is crucial for both human and natural systems, especially in a warming climate more prone to extreme precipitation. Yet, climate models fail to accurately predict precipitation intensity, particularly extremes. One missing piece of information in traditional climate model parameterizations is subgrid-scale cloud structure and organization, which affects precipitation intensity and stochasticity at coarse resolution. Here, using global storm-resolving simulations and machine learning, we show that, by implicitly learning subgrid organization, we can accurately predict precipitation variability and stochasticity with a low-dimensional set of latent variables. Using a neural network to parameterize coarse-grained precipitation, we find that the overall behavior of precipitation is reasonably predictable using large-scale quantities only; however, the neural network cannot predict the variability of precipitation (R² ≈ 0.45) and underestimates precipitation extremes. The performance is significantly improved when the network is informed by our organization metric, correctly predicting precipitation extremes and spatial variability (R² ≈ 0.9). The organization metric is implicitly learned by training the algorithm on a high-resolution precipitable water field, encoding the degree of subgrid organization. The organization metric shows large hysteresis, emphasizing the role of memory created by subgrid-scale structures. We demonstrate that this organization metric can be predicted as a simple memory process from information available at the previous time steps. These findings stress the role of organization and memory in accurate prediction of precipitation intensity and extremes and the necessity of parameterizing subgrid-scale convective organization in climate models to better project future changes in the water cycle and extremes.
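The "simple memory process" invoked for the organization metric can be pictured as a first-order autoregressive update. The coefficients below are illustrative placeholders, not fitted values from the study.

```python
def ar1_memory(org_prev, forcing, a=0.9, b=0.1):
    """One step of an AR(1)-style memory process for the organization
    metric: the new state blends the previous state (persistence, a)
    with the current large-scale forcing (weight b)."""
    return a * org_prev + b * forcing
```

Iterating this update with zero forcing decays the metric on an e-folding timescale set by a, which is one way to picture the hysteresis and memory the abstract reports.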