
Title: A Nonstationary Standardized Precipitation Index (NSPI) Using Bayesian Splines

The standardized precipitation index (SPI) measures meteorological drought relative to historical climatology by normalizing accumulated precipitation. Longer record lengths improve parameter estimates, but these longer records may include signals of anthropogenic climate change and multidecadal natural climate fluctuations. Historically, climate nonstationarity has either been ignored or incorporated into the SPI using a quasi-stationary reference period, such as the WMO 30-yr period. This study introduces and evaluates a novel nonstationary SPI model based on Bayesian splines, designed both to improve parameter estimates for stationary climates and to explicitly incorporate nonstationarity. Using synthetically generated precipitation, this study directly compares the proposed Bayesian SPI model with existing SPI approaches based on maximum likelihood estimation for stationary and nonstationary climates. The proposed model not only reproduced the performance of existing SPI models but improved upon them in several key areas: reducing parameter uncertainty and noise, simultaneously modeling the likelihood of zero and positive precipitation, and capturing nonlinear trends and seasonal shifts across all parameters. Further, the fully Bayesian approach ensures that all parameters have uncertainty estimates, including the likelihood of zero precipitation. The study notes that the zero precipitation parameter is too sensitive and could be improved in future iterations. The study concludes with an application of the proposed Bayesian nonstationary SPI model for nine gauges across a range of hydroclimate zones in the United States. Results of this experiment show that the model is stable and reproduces nonstationary patterns identified in prior studies, while also indicating new findings, particularly for the shape and zero precipitation parameters.
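The classical stationary SPI construction that the study builds on can be sketched as follows: fit a gamma distribution to positive accumulated precipitation by maximum likelihood, mix it with a point mass at zero, and pass the mixture CDF through the standard normal quantile function. This is a minimal illustrative sketch, not the paper's Bayesian spline model; the function and variable names are assumptions for illustration.

```python
# Minimal sketch of a stationary SPI: gamma fit to positive precipitation
# plus a point mass at zero, mapped to a standard normal deviate.
import numpy as np
from scipy import stats

def spi(accum_precip):
    """Standardized Precipitation Index via a gamma/zero mixture (sketch)."""
    x = np.asarray(accum_precip, dtype=float)
    p_zero = np.mean(x == 0)                          # probability of zero precip
    pos = x[x > 0]
    shape, _, scale = stats.gamma.fit(pos, floc=0)    # MLE for the positive part
    # Mixture CDF: H(x) = p_zero + (1 - p_zero) * G(x) for x > 0, else p_zero
    cdf = np.where(
        x > 0,
        p_zero + (1 - p_zero) * stats.gamma.cdf(x, shape, scale=scale),
        p_zero,
    )
    return stats.norm.ppf(cdf)                        # map to standard normal

# Synthetic monthly accumulations: a few dry (zero) periods plus gamma rainfall
rng = np.random.default_rng(0)
sample = np.concatenate([np.zeros(5), rng.gamma(2.0, 30.0, size=355)])
z = spi(sample)
```

A fully Bayesian treatment, as in the paper, would instead place priors (here, splines over time) on the shape, scale, and zero-probability parameters and propagate their posterior uncertainty into the index.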

Significance Statement

We typically measure how bad a drought is by comparing it with the historical record. With long-term changes in climate or other factors, however, a typical drought today may not have been typical in the recent past. The purpose of this study is to build a model that measures drought relative to a changing climate. Our results confirm that the model is accurate and captures previously noted climate change patterns—a drier western United States, a wetter eastern United States, earlier summer weather, and more extreme wet seasons. This is significant because this model can improve drought measurement and identify recent changes in drought.

Publication Date:
Journal Name:
Journal of Applied Meteorology and Climatology
Page Range or eLocation-ID:
p. 761-779
American Meteorological Society
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Satellite precipitation products, as all quantitative estimates, come with some inherent degree of uncertainty. To associate a quantitative value of the uncertainty to each individual estimate, error modeling is necessary. Most of the error models proposed so far compute the uncertainty as a function of precipitation intensity only, and only at one specific spatiotemporal scale. We propose a spectral error model that accounts for the neighboring space–time dynamics of precipitation in the uncertainty quantification. Systematic distortions of the precipitation signal and random errors are characterized distinctly in every frequency–wavenumber band in the Fourier domain, to accurately characterize error across scales. The systematic distortions are represented as a deterministic space–time linear filtering term. The random errors are represented as a nonstationary additive noise. The spectral error model is applied to the IMERG multisatellite precipitation product, and its parameters are estimated empirically through a system identification approach using the GV-MRMS gauge–radar measurements as reference (“truth”) over the eastern United States. The filtering term is found to be essentially low-pass (attenuating the fine-scale variability). While traditional error models attribute most of the error variance to random errors, it is found here that the systematic filtering term explains 48% of the error variance at the native resolution of IMERG. This fact confirms that, at high resolution, filtering effects in satellite precipitation products cannot be ignored, and that the error cannot be represented as a purely random additive or multiplicative term. An important consequence is that precipitation estimates derived from different sources should not be expected to automatically have statistically independent errors.
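The system-identification idea can be illustrated in one dimension: estimate the deterministic filter empirically as the cross-spectrum between reference and estimate divided by the reference power spectrum, then check that it is low-pass. The signals below are synthetic stand-ins, not IMERG or GV-MRMS data.

```python
# 1-D sketch of empirical transfer-function estimation for an
# "estimate = filtered truth + additive noise" error model.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
truth = rng.standard_normal(n)
# Assumed true system: a low-pass (moving-average) filter plus weak noise
kernel = np.array([0.25, 0.5, 0.25])
estimate = np.convolve(truth, kernel, mode="same") + 0.1 * rng.standard_normal(n)

T = np.fft.rfft(truth)
E = np.fft.rfft(estimate)
# Empirical transfer function: cross-spectrum over input power spectrum
H = (E * np.conj(T)) / (T * np.conj(T))

# Low frequencies should pass nearly unattenuated; high frequencies should not
low_gain = np.abs(H[:50]).mean()
high_gain = np.abs(H[-50:]).mean()
```

In the paper's setting the same estimate is formed per frequency–wavenumber band in two spatial dimensions plus time, and the residual spectrum characterizes the nonstationary additive noise.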

    Significance Statement

    Satellite precipitation products are nowadays widely used for climate and environmental research, water management, risk analysis, and decision support at the local, regional, and global scales. For all these applications, knowledge about the accuracy of the products is critical for their usability. However, products are not systematically provided with a quantitative measure of the uncertainty associated with each individual estimate. Various parametric error models have been proposed for uncertainty quantification, mostly assuming that the uncertainty is only a function of the precipitation intensity at the pixel and time of interest. By projecting satellite precipitation fields and their retrieval errors into the Fourier frequency–wavenumber domain, we show that we can explicitly take into account the neighboring space–time multiscale dynamics of precipitation and compute a scale-dependent uncertainty.

  2. Abstract

    Snowpack provides the majority of predictive information for water supply forecasts (WSFs) in snow-dominated basins across the western United States. Drought conditions typically accompany decreased snowpack and lowered runoff efficiency, negatively impacting WSFs. Here, we investigate the relationship between snow water equivalent (SWE) and April–July streamflow volume (AMJJ-V) during drought in small headwater catchments, using observations from 31 USGS streamflow gauges and 54 SNOTEL stations. A linear regression approach is used to evaluate forecast skill under different historical climatologies used for model fitting, as well as with different forecast dates. Experiments are constructed in which extreme hydrological drought years are withheld from model training, that is, years with AMJJ-V below the 15th percentile. Subsets of the remaining years are used for model fitting to understand how the climatology of different training subsets impacts forecasts of extreme drought years. We generally report overprediction in drought years. However, training the forecast model on drier years, that is, below-median years (P15, P57.5], reduces residuals by an average of 10% in drought year forecasts, relative to a baseline case, with the highest median skill obtained in mid- to late April for colder regions. We report similar findings using a modified Natural Resources Conservation Service (NRCS) procedure in nine large basins of the Upper Colorado River basin (UCRB), highlighting the importance of the snowpack–streamflow relationship in streamflow predictability. We propose an “adaptive sampling” approach of dynamically selecting training years based on antecedent SWE conditions, showing error reductions of up to 20% in historical drought years relative to the period of record. These alternate training protocols provide opportunities for addressing the challenges of future drought risk to water supply planning.
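The "adaptive sampling" idea can be sketched as a regression fit only on training years whose snowpack resembles the forecast year's antecedent conditions. The data, neighborhood size, and function names below are synthetic illustrations, not the study's USGS/SNOTEL records or its exact procedure.

```python
# Toy sketch of adaptive training-year selection for a SWE -> volume
# linear-regression water supply forecast.
import numpy as np

rng = np.random.default_rng(2)
swe = rng.gamma(4.0, 5.0, size=40)                  # peak SWE per training year
volume = 2.0 * swe + rng.normal(0.0, 3.0, size=40)  # AMJJ volume (toy relation)

def forecast(swe_now, k=15):
    """Predict volume from the k training years with SWE closest to swe_now."""
    idx = np.argsort(np.abs(swe - swe_now))[:k]     # adaptive training subset
    slope, intercept = np.polyfit(swe[idx], volume[idx], 1)
    return slope * swe_now + intercept

pred = forecast(10.0)   # forecast for a low-snowpack (drought-like) year
```

Fitting on conditionally similar (here, drier) years is what reduces the systematic overprediction the abstract reports for drought years.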

    Significance Statement

    Seasonal water supply forecasts based on the relationship between peak snowpack and water supply exhibit unique errors in drought years due to low snow and streamflow variability, presenting a major challenge for water supply prediction. Here, we assess the reliability of snow-based streamflow predictability in drought years using a fixed forecast date or fixed model training period. We critically evaluate different training protocols to assess predictive performance and identify sources of error during historical drought years. We also propose and test an “adaptive sampling” application that dynamically selects training years based on antecedent SWE conditions to overcome persistent errors, providing new insights and strategies for snow-guided forecasts.

  3. Abstract

    Maximum likelihood estimation in phylogenetics requires a means of handling unknown ancestral states. Classical maximum likelihood averages over these unknown intermediate states, leading to provably consistent estimation of the topology and continuous model parameters. Recently, a computationally efficient approach has been proposed to jointly maximize over these unknown states and phylogenetic parameters. Although this method of joint maximum likelihood estimation can obtain estimates more quickly, its properties as an estimator are not yet clear. In this article, we show that this method of jointly estimating phylogenetic parameters along with ancestral states is not consistent in general. We find a sizeable region of parameter space that generates data on a four-taxon tree for which this joint method estimates the internal branch length to be exactly zero, even in the limit of infinite-length sequences. More generally, we show that this joint method only estimates branch lengths correctly on a set of measure zero. We show empirically that branch length estimates are systematically biased downward, even for short branches.

  4. Abstract

    Tree die-off, driven by extreme drought and exacerbated by a warming climate, is occurring rapidly across every wooded continent—threatening carbon sinks and other ecosystem services provided by forests and woodlands. Forecasting the spatial patterns of tree die-off in response to drought is a priority for the management and conservation of forested ecosystems under projected future hotter and drier climates. Several thresholds derived from drought metrics have been proposed to predict mortality of Pinus edulis, a model tree species in many studies of drought-induced tree die-off. To improve future capacity to forecast tree mortality, we used a severe drought as a natural experiment. We compared the ability of existing mortality thresholds derived from four drought metrics (the Forest Drought Severity Index (FDSI), the Standardized Precipitation Evapotranspiration Index, and raw values of precipitation (PPT) and vapor pressure deficit, calculated using 4 km PRISM data) to predict areas of P. edulis die-off following an extreme drought in 2018 across the southwestern US. Using aerial detection surveys of tree mortality in combination with gridded climate data, we calculated the agreement between these four proposed thresholds and the presence and absence of regional-scale tree die-off using sensitivity, specificity, and the area under the curve (AUC). Overall, existing mortality thresholds tended to overpredict the spatial extent of tree die-off across the landscape, yet some retain moderate skill in discriminating between areas that experienced and did not experience tree die-off. The simple PPT threshold had the highest AUC score (71%) as well as fair sensitivity and specificity, but the FDSI had the greatest sensitivity to die-off (85.9%). We highlight that empirically derived climate thresholds may be useful forecasting tools for identifying areas vulnerable to drought-induced die-off, allowing for targeted responses to future droughts and improved management of at-risk areas.
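The agreement analysis described above can be sketched by scoring a single threshold classifier against observed presence/absence with sensitivity, specificity, and a rank-based AUC. The values below are synthetic, not the PRISM or aerial-survey data.

```python
# Hedged sketch: evaluate a drought-metric threshold against observed
# die-off using sensitivity, specificity, and AUC.
import numpy as np

rng = np.random.default_rng(3)
n = 500
die_off = rng.random(n) < 0.3                        # observed presence/absence
# Synthetic metric: lower precipitation where die-off occurred
metric = np.where(die_off, rng.normal(200, 40, n), rng.normal(300, 40, n))

threshold = 250.0
predicted = metric < threshold                       # below threshold => predict die-off
sensitivity = predicted[die_off].mean()              # true positive rate
specificity = (~predicted[~die_off]).mean()          # true negative rate

# Threshold-free skill: AUC via the rank-sum (Mann-Whitney) identity,
# scoring the negated metric so drier cells rank as more at-risk
scores = -metric
ranks = scores.argsort().argsort() + 1
n_pos = die_off.sum()
n_neg = (~die_off).sum()
auc = (ranks[die_off].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Overprediction of die-off extent, as reported in the abstract, would show up here as high sensitivity paired with low specificity.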

  5. In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and are concerned both with finding causal relations and with forecasting the values of variables of interest, tasks that are particularly challenging in such nonstationary environments. In this paper, we study causal discovery and forecasting for nonstationary time series. By exploiting a particular type of state-space model to represent the processes, we show that nonstationarity helps to identify the causal structure, and that forecasting naturally benefits from learned causal knowledge. Specifically, we allow changes in both causal strengths and noise variances in the nonlinear state-space models, which, interestingly, renders both the causal structure and model parameters identifiable. Given the causal model, we treat forecasting as a problem of Bayesian inference in the causal model, which exploits the time-varying property of the data and adapts to new observations in a principled manner. Experimental results on synthetic and real-world data sets demonstrate the efficacy of the proposed methods.