Estimating and predicting the state of the atmosphere is a probabilistic problem for which an ensemble modeling approach is often taken to represent uncertainty in the system. Common methods for examining uncertainty and assessing performance for ensembles emphasize pointwise statistics or marginal distributions, but these methods lose specific information about individual ensemble members. This paper explores contour band depth (cBD), a method of analyzing uncertainty in terms of contours of scalar fields. cBD is fully nonparametric and induces an ordering on ensemble members that leads to box-and-whisker-plot-type visualizations of uncertainty for two-dimensional data. By applying cBD to synthetic ensembles, we demonstrate that it provides enhanced information about the spatial structure of ensemble uncertainty. We also find that the usefulness of the cBD analysis depends on the presence of multiple modes and multiple scales in the ensemble of contours. Finally, we apply cBD to compare convection-permitting forecasts from different ensemble prediction systems and find that, relative to standard analysis methods, the value it provides in real-world applications has clear limitations. In some cases, contour boxplots can provide deeper insight into differences in spatial characteristics between the different ensemble forecasts. Nevertheless, identification of outliers using cBD is not always intuitive, and the method can be especially challenging to implement for flow that exhibits multiple spatial scales (e.g., discrete convective cells embedded within a mesoscale weather system).
Predictions of Earth’s atmosphere inherently come with some degree of uncertainty owing to incomplete observations and the chaotic nature of the system. Understanding that uncertainty is critical when drawing scientific conclusions or making policy decisions from model predictions. In this study, we explore a method for describing model uncertainty when the quantities of interest are well represented by contours. The method yields a quantitative visualization of uncertainty in both the location and the shape of contours to an extent that is not possible with standard uncertainty quantification methods and may eventually prove useful for the development of more robust techniques for evaluating and validating numerical weather models.
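To make the idea concrete, the band test at the heart of cBD can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: contours are assumed to be given as binary masks of the regions they enclose, and a member's depth is the fraction of member pairs whose band (from the pair's intersection to its union) contains that member's region.

```python
import numpy as np
from itertools import combinations

def contour_band_depth(masks):
    """Simplified contour band depth for an ensemble of contours.

    masks: list of 2-D boolean arrays, each True inside a member's
    contour (assumes at least 3 members). A member lies in the band of
    a pair (j, k) if intersection(R_j, R_k) <= R_i <= union(R_j, R_k);
    its depth is the fraction of pairs for which this holds.
    """
    n = len(masks)
    depths = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        pairs = list(combinations(others, 2))
        count = 0
        for j, k in pairs:
            inter = masks[j] & masks[k]
            union = masks[j] | masks[k]
            # subset tests via elementwise boolean comparison
            if np.all(inter <= masks[i]) and np.all(masks[i] <= union):
                count += 1
        depths[i] = count / len(pairs)
    return depths
```

Sorting members by descending depth gives the ordering behind contour boxplots: the deepest member plays the role of the median contour, and the lowest-depth members are outlier candidates.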
- Publisher / Repository: American Meteorological Society
- Journal Name: Monthly Weather Review
- Page Range / eLocation ID: p. 2097-2113
- Sponsoring Org: National Science Foundation
More Like this
Abstract. Most available verification metrics for ensemble forecasts focus on univariate quantities. That is, they assess whether the ensemble provides an adequate representation of the forecast uncertainty about the quantity of interest at a particular location and time. For spatially indexed ensemble forecasts, however, it is also important that forecast fields reproduce the spatial structure of the observed field and represent the uncertainty about spatial properties such as the size of the area for which heavy precipitation, high winds, critical fire weather conditions, etc., are expected. In this article we study the properties of the fraction of threshold exceedance (FTE) histogram, a new diagnostic tool designed for spatially indexed ensemble forecast fields. Defined as the fraction of grid points where a prescribed threshold is exceeded, the FTE is calculated for the verification field and separately for each ensemble member. It yields a projection of a (possibly high-dimensional) multivariate quantity onto a univariate quantity that can be studied with standard tools like verification rank histograms. This projection is appealing since it reflects a spatial property that is intuitive and directly relevant in applications, though it is not obvious whether the FTE is sufficiently sensitive to misrepresentation of spatial structure in the ensemble. In a comprehensive simulation study we find that departures from uniformity of the FTE histograms can indeed be related to forecast ensembles with biased spatial variability and that these histograms detect shortcomings in the spatial structure of ensemble forecast fields that are not obvious by eye. For demonstration, FTE histograms are applied in the context of spatially downscaled ensemble precipitation forecast fields from NOAA's Global Ensemble Forecast System.
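The FTE diagnostic described above is straightforward to sketch. The following is an illustrative simplification, not the authors' code: `fte` computes the exceedance fraction for one field, and `fte_rank` returns the verification's rank among the members, which would then be histogrammed over many forecast cases.

```python
import numpy as np

def fte(field, threshold):
    """Fraction of threshold exceedance: share of grid points above threshold."""
    return np.mean(field > threshold)

def fte_rank(verif, members, threshold, rng=None):
    """Rank of the verification's FTE within the ensemble's FTE values.

    Ties are broken at random so that a calibrated ensemble yields
    uniformly distributed ranks (0 .. n_members).
    """
    rng = np.random.default_rng() if rng is None else rng
    v = fte(verif, threshold)
    m = np.array([fte(f, threshold) for f in members])
    below = int(np.sum(m < v))
    ties = int(np.sum(m == v))
    return below + rng.integers(0, ties + 1)
```

Departures from a flat histogram of these ranks (U- or dome-shaped) would then flag biased spatial variability in the ensemble, as the study describes.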
Abstract Weather prediction models currently operate within a probabilistic framework for generating forecasts conditioned on recent measurements of Earth’s atmosphere. This framework can be conceptualized as one that approximates parts of a Bayesian posterior density estimated under assumptions of Gaussian errors. Gaussian error approximations are appropriate for synoptic-scale atmospheric flow, which experiences quasi-linear error evolution over time scales depicted by measurements, but are often hypothesized to be inappropriate for highly nonlinear, sparsely observed mesoscale processes. The current study adopts an experimental regional modeling system to examine the impact of Gaussian prior error approximations, which are adopted by ensemble Kalman filters (EnKFs) to generate probabilistic predictions. The analysis is aided by results obtained using recently introduced particle filter (PF) methodology that relies on an implicit non-parametric representation of prior probability densities—but with added computational expense. The investigation focuses on EnKF and PF comparisons over month-long experiments performed using an extensive domain, which features the development and passage of numerous extratropical and tropical cyclones. The experiments reveal spurious small-scale corrections in EnKF members, which come about from inappropriate Gaussian approximations for priors dominated by alignment uncertainty in mesoscale weather systems. Similar behavior is found in PF members, owing to the use of a localization operator, but to a much lesser extent. This result is reproduced and studied using a low-dimensional model, which permits the use of large sample estimates of the Bayesian posterior distribution. Findings from this study motivate the use of data assimilation techniques that provide a more appropriate specification of multivariate non-Gaussian prior densities or a multi-scale treatment of alignment errors during data assimilation.
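For reference, the Gaussian prior approximation discussed above enters the EnKF through the sample covariance used to build the Kalman gain. Below is a minimal sketch of a stochastic (perturbed-observation) EnKF analysis step, assuming a linear observation operator and small problem dimensions; it is not the experimental system used in the study.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : (n_state, n_ens) prior ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    The Kalman gain is built from the ensemble sample covariance,
    which is exactly where the Gaussian prior approximation enters.
    """
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)           # ensemble perturbations
    Pf = Xp @ Xp.T / (n_ens - 1)                     # sample prior covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturb the observations so the posterior spread is statistically correct
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)
```

In practice this update pulls the ensemble mean toward the observation and shrinks the spread; the spurious small-scale corrections the study documents arise when the linear/Gaussian structure assumed by the gain does not match priors dominated by feature-alignment errors.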
Numerical weather prediction models and high-performance computing have significantly improved our ability to model near-surface variables, but their uncertainty quantification still remains a challenging task. Ensembles are usually produced to depict a series of possible future states of the atmosphere, as a means to quantify the prediction uncertainty, but this requires multiple instantiations of the model, leading to an increased computational cost. Weather analogs, alternatively, can be used to generate ensembles without repeated model runs. The analog ensemble (AnEn) is a technique to identify similar weather patterns for near-surface variables and quantify forecast uncertainty. Analogs are chosen based on a similarity metric that calculates a weighted multivariate Euclidean distance. However, identifying optimal weights for the similarity metric becomes a bottleneck because it involves performing a constrained exhaustive search. As a result, only a few predictors were selected and optimized in previous AnEn studies. A new machine learning similarity metric is proposed to improve the theoretical framework for how weather analogs are identified. First, a deep learning network is trained to generate latent features from all the temporal multivariate input predictors. Analogs are then selected in this latent space, rather than the original predictor space. The proposed method does not require prior predictor selection or an exhaustive search, thus presenting a significant computational benefit and improved scalability. It is tested for surface wind speed and solar irradiance forecasts in Pennsylvania from 2017 to 2019. Results show that the proposed method is capable of handling a large number of predictors and that it outperforms the original similarity metric in RMSE, bias, and CRPS. Since the data-driven transformation network is trained on the historical record, the proposed method has also been found to be more flexible when searching through a longer record.
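The weighted multivariate Euclidean distance that AnEn uses to rank candidate analogs can be sketched as follows. This is a simplified illustration, not the study's code: the per-predictor normalization of the standard AnEn metric is assumed to be absorbed into the weights, and the function names are our own.

```python
import numpy as np

def analog_distances(target, history, weights):
    """Weighted multivariate Euclidean distance for ranking weather analogs.

    target  : (n_vars, n_times) predictor window of the current forecast
    history : (n_cases, n_vars, n_times) candidate windows from the archive
    weights : (n_vars,) per-predictor weights -- the quantity whose tuning
              is the exhaustive-search bottleneck described above
    """
    diff = history - target[None, :, :]
    sq = np.sum(diff ** 2, axis=2)        # sum over the time window -> (n_cases, n_vars)
    return np.sqrt(sq @ weights)          # weight each predictor, combine

def best_analogs(target, history, weights, k=5):
    """Indices of the k most similar historical cases."""
    return np.argsort(analog_distances(target, history, weights))[:k]
```

The proposed method replaces this hand-weighted distance with distances computed in a learned latent space, so the same ranking step applies but no per-predictor weights need to be searched.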
Space weather indices are commonly used to drive forecasts of thermosphere density, which affects objects in low-Earth orbit (LEO) through atmospheric drag. One commonly used space weather proxy, F10.7 cm, correlates well with solar extreme ultraviolet (EUV) energy deposition into the thermosphere. Currently, the USAF contracts Space Environment Technologies (SET), which uses a linear algorithm to forecast F10.7 cm. In this work, we introduce methods using neural network ensembles with multi-layer perceptrons (MLPs) and long short-term memory networks (LSTMs) to improve on the SET predictions. We make predictions only from historical F10.7 cm values. We investigate data manipulation methods (backwards averaging and lookback) as well as multi-step and dynamic forecasting. This work shows an improvement over the popular persistence model and the operational SET model when using ensemble methods. The best models found in this work are ensemble approaches using multi-step or a combination of multi-step and dynamic predictions. Nearly all approaches offer an improvement, with the best models improving relative MSE with respect to persistence by between 48% and 59%. Other relative error metrics also improve greatly when ensemble methods are used. We were also able to leverage the ensemble approach to provide a distribution of predicted values, allowing an investigation into forecast uncertainty. Our work found models that produced less biased predictions at elevated and high solar activity levels. Uncertainty was also investigated through the use of a calibration error score (CES) metric; our best ensemble reached a CES similar to that of other work.
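As a point of reference for the baselines and metrics mentioned above, persistence forecasting, relative improvement over it, and an ensemble summary are simple to state in code. This is an illustrative sketch with hypothetical function names, not the paper's models.

```python
import numpy as np

def persistence_forecast(series, horizon):
    """Persistence baseline: carry the last observed value forward."""
    return np.full(horizon, series[-1])

def relative_mse_improvement(y_true, y_model, y_baseline):
    """Fractional MSE improvement of a model over a baseline.

    1.0 means the model is perfect; 0.0 means no better than the baseline.
    """
    mse_model = np.mean((y_true - y_model) ** 2)
    mse_base = np.mean((y_true - y_baseline) ** 2)
    return 1.0 - mse_model / mse_base

def ensemble_summary(member_preds):
    """Mean and spread of an ensemble of model predictions.

    member_preds: (n_members, horizon) array; the member-to-member spread
    is the simple handle on forecast uncertainty an ensemble provides.
    """
    return member_preds.mean(axis=0), member_preds.std(axis=0)
```

A 48-59% improvement in relative MSE, in these terms, means `relative_mse_improvement` values of roughly 0.48 to 0.59 when the baseline is persistence.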
For data assimilation to provide faithful state estimates for dynamical models, specifications of observation uncertainty need to be as accurate as possible. Innovation-based methods built on the Desroziers diagnostics are commonly used to estimate observation uncertainty, but such methods can depend greatly on the prescribed background uncertainty. For ensemble data assimilation, this uncertainty comes from statistics calculated from ensemble forecasts, which require inflation and localization to address undersampling. In this work, we use an ensemble Kalman filter (EnKF) with a low-dimensional Lorenz model to investigate the interplay between the Desroziers method and inflation. Two inflation techniques are used for this purpose: 1) a rigorously tuned fixed multiplicative scheme and 2) an adaptive state-space scheme. We document how inaccuracies in observation uncertainty affect errors in EnKF posteriors and study the combined impacts of misspecified initial observation uncertainty, sampling error, and model error on Desroziers estimates. We find that whether observation uncertainty is over- or underestimated greatly affects the stability of data assimilation and the accuracy of Desroziers estimates, and that preference should be given to initial overestimates. Inline Desroziers estimates tend to remove the dependence of the ensemble spread-skill relationship on the initially prescribed observation error. In addition, we find that the inclusion of model error introduces spurious correlations in observation uncertainty estimates. Further, we note that the adaptive inflation scheme is less robust than fixed inflation at mitigating multiple sources of error. Last, sampling error strongly exacerbates existing sources of error and greatly degrades EnKF estimates, which translates into biased Desroziers estimates of observation error covariance.
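The Desroziers diagnostic referenced above estimates the observation-error covariance from paired background innovations and analysis residuals, R ≈ E[d_oa d_ob^T]. A minimal sketch of that estimator (ours, not the study's code) follows; the estimate recovers R only when the prescribed background and observation statistics are consistent, which is precisely the sensitivity the study probes.

```python
import numpy as np

def desroziers_R(d_ob, d_oa):
    """Desroziers estimate of the observation-error covariance.

    d_ob : (n_cases, n_obs) background innovations, y - H(x_b)
    d_oa : (n_cases, n_obs) analysis residuals,     y - H(x_a)
    Returns the sample estimate of E[d_oa d_ob^T], which equals R when
    the prescribed error statistics match the true ones.
    """
    n = d_ob.shape[0]
    return d_oa.T @ d_ob / n
```

In a scalar example with background variance B and observation variance R, the analysis residual is (1 - K) times the innovation with gain K = B / (B + R), so the expected product is (1 - K)(B + R) = R, which the test below checks numerically.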
To generate accurate predictions of various components of the Earth system, numerical models require an accurate specification of state variables at the current time. This step weighs, probabilistically, our current state estimate against information provided by environmental measurements of the true state. Various strategies exist for estimating uncertainty in observations within this framework, but they are sensitive to a host of assumptions, which are investigated in this study.