Title: Enhancing Probabilistic Solar PV Forecasting: Integrating the NB-DST Method with Deterministic Models
Accurate quantification of uncertainty in solar photovoltaic (PV) generation forecasts is imperative for the efficient and reliable operation of the power grid. In this paper, a data-driven non-parametric probabilistic method based on the Naïve Bayes (NB) classification algorithm and the Dempster–Shafer theory (DST) of evidence is proposed for day-ahead probabilistic PV power forecasting. This NB-DST method extends traditional deterministic solar PV forecasting methods by quantifying the uncertainty of their forecasts: it estimates the cumulative distribution functions (CDFs) of their forecast errors and forecast variables. The statistical performance of this method is compared with that of the analog ensemble method and the persistence ensemble method under three different weather conditions using real-world data. The study results reveal that the proposed NB-DST method, coupled with an artificial neural network model, outperforms the other methods in that its estimated CDFs have lower spread, higher reliability, and sharper probabilistic forecasts with better accuracy.
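The evidence-fusion step at the heart of DST is Dempster's rule of combination. The sketch below is not the paper's implementation; the weather-class frame and mass values are hypothetical illustrations of how two mass functions over a discrete frame of discernment are fused:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, multiply masses,
    and renormalize by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are irreconcilable")
    norm = 1.0 - conflict
    return {focal: mass / norm for focal, mass in combined.items()}

# Hypothetical example: two evidence sources over weather classes
SUNNY, CLOUDY = frozenset({"sunny"}), frozenset({"cloudy"})
EITHER = frozenset({"sunny", "cloudy"})
m1 = {SUNNY: 0.6, EITHER: 0.4}                 # e.g., one classifier's output
m2 = {SUNNY: 0.5, CLOUDY: 0.3, EITHER: 0.2}    # a second evidence source
fused = dempster_combine(m1, m2)
```

Note how mass assigned to the whole frame (`EITHER`) encodes ignorance rather than probability, which is what distinguishes DST from a plain Bayesian update.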
Award ID(s):
1845523
PAR ID:
10601069
Author(s) / Creator(s):
; ; ;
Editor(s):
Barambones, Oscar
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Energies
Volume:
17
Issue:
10
ISSN:
1996-1073
Page Range / eLocation ID:
2392
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Power grid operators rely on solar irradiance forecasts to manage the uncertainty and variability associated with solar power. Meteorological factors such as cloud cover, wind direction, and wind speed affect irradiance and exhibit a high degree of variability and uncertainty. Statistical models fail to accurately capture the dependence between these factors and irradiance. In this paper, we introduce the idea of applying multivariate Gated Recurrent Units (GRU) to forecast Direct Normal Irradiance (DNI) hourly. The proposed GRU-based forecasting method is evaluated against traditional Long Short-Term Memory (LSTM) networks using historical irradiance data (i.e., weather variables that include cloud cover, wind direction, and wind speed) to forecast irradiance over intra-hour and inter-hour intervals. Our evaluation on one of the sites from the Measurement and Instrumentation Data Center indicates that both GRU and LSTM improved DNI forecasting performance when evaluated under different conditions. Moreover, including wind direction and wind speed yields a substantial improvement in the accuracy of DNI forecasts. In addition, the forecasting model can accurately forecast irradiance values over multiple forecasting horizons.
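The GRU update that underlies such a forecaster can be written out directly. Below is a minimal scalar-state sketch of one GRU step in plain Python, following Cho et al.'s gating convention; the weights and the input series are arbitrary illustrative values, not fitted parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input x and scalar hidden state h.
    p holds the weights; z is the update gate, r the reset gate."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])   # how much to update
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])   # how much past to keep
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde                 # blended new state

# Toy run over an hourly irradiance-like series (weights are arbitrary)
params = {"wz": 0.5, "uz": 0.1, "bz": 0.0,
          "wr": 0.4, "ur": 0.2, "br": 0.0,
          "wh": 0.9, "uh": 0.3, "bh": 0.0}
h = 0.0
for x in [0.2, 0.5, 0.8, 0.6]:
    h = gru_step(x, h, params)
```

The two gates are what give the GRU its edge over a plain recurrent cell for series like irradiance: the reset gate can discard stale state when conditions shift (e.g., a cloud front), while the update gate preserves it when conditions persist.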
    more » « less
  2. To guide the selection of probabilistic solar power forecasting methods for day-ahead power grid operations, the performance of four methods, i.e., Bayesian model averaging (BMA), analog ensemble (AnEn), the ensemble learning method (ELM), and the persistence ensemble (PerEn), is compared in this paper. A real-world hourly solar generation dataset from a rooftop solar plant is used to train and validate the methods under clear, partially cloudy, and overcast weather conditions. Comparisons have been made on a one-year testing set using popular performance metrics for probabilistic forecasts. It is found that the ELM method outperforms the other methods by offering better reliability, higher resolution, and narrower prediction interval width under all weather conditions, with a slight compromise in accuracy. The BMA method performs well under overcast and partially cloudy weather conditions, although it is outperformed by the ELM method under clear conditions.
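The persistence ensemble (PerEn) baseline mentioned above is simple enough to sketch: the forecast distribution for a given hour is the set of recent observations at that same hour. The function and data layout below are illustrative assumptions, not the paper's code:

```python
def persistence_ensemble(history, hour, n_days):
    """Day-ahead persistence ensemble: members are the observations at
    `hour` over the most recent n_days. `history` is a list of 24-value
    daily profiles, newest last (layout is illustrative)."""
    members = [day[hour] for day in history[-n_days:]]
    members.sort()
    return members  # empirical ensemble; its quantiles approximate the CDF

# Toy example: 5 days of hourly PV output, forecasting noon (hour 12)
days = [[0.0] * 12 + [p] + [0.0] * 11 for p in (3.1, 2.8, 3.5, 1.2, 3.0)]
ens = persistence_ensemble(days, hour=12, n_days=3)
```

Despite its simplicity, PerEn is a standard reference point: a probabilistic method that cannot beat it under a given weather condition adds little operational value.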
  3. Abstract An ensemble postprocessing method is developed for the probabilistic prediction of severe weather (tornadoes, hail, and wind gusts) over the conterminous United States (CONUS). The method combines conditional generative adversarial networks (CGANs), a type of deep generative model, with a convolutional neural network (CNN) to postprocess convection-allowing model (CAM) forecasts. The CGANs are designed to create synthetic ensemble members from deterministic CAM forecasts, and their outputs are processed by the CNN to estimate the probability of severe weather. The method is tested using High-Resolution Rapid Refresh (HRRR) 1–24-h forecasts as inputs and Storm Prediction Center (SPC) severe weather reports as targets. The method produced skillful predictions with up to 20% Brier skill score (BSS) increases compared to other neural-network-based reference methods using a testing dataset of HRRR forecasts in 2021. For the evaluation of uncertainty quantification, the method is overconfident but produces meaningful ensemble spreads that can distinguish good and bad forecasts. The quality of CGAN outputs is also evaluated. Results show that the CGAN outputs behave similarly to a numerical ensemble; they preserved the intervariable correlations and the contribution of influential predictors as in the original HRRR forecasts. This work provides a novel approach to postprocess CAM output using neural networks that can be applied to severe weather prediction. Significance Statement: We use a new machine learning (ML) technique to generate probabilistic forecasts of convective weather hazards, such as tornadoes and hailstorms, with the output from high-resolution numerical weather model forecasts. The new ML system generates an ensemble of synthetic forecast fields from a single forecast, which are then used to train ML models for convective hazard prediction.
Using this ML-generated ensemble for training leads to improvements of 10%–20% in severe weather forecast skills compared to using other ML algorithms that use only output from the single forecast. This work is unique in that it explores the use of ML methods for producing synthetic forecasts of convective storm events and using these to train ML systems for high-impact convective weather prediction. 
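The Brier skill score used in this comparison measures improvement over a reference forecast: BSS = 1 − BS/BS_ref, where BS is the mean squared error of the event probabilities. A minimal sketch with hypothetical probabilities and outcomes (not data from the study):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probabilistic forecasts of a binary event."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, ref_probs, outcomes):
    """BSS = 1 - BS / BS_ref; positive values mean skill over the reference."""
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)

# Hypothetical severe-weather probabilities vs. a cruder reference forecast
outcomes = [1, 0, 0, 1, 0]
model    = [0.8, 0.1, 0.2, 0.7, 0.1]
ref      = [0.5, 0.4, 0.4, 0.5, 0.4]
bss = brier_skill_score(model, ref, outcomes)
```

A BSS of 0 means no improvement over the reference and 1 means a perfect forecast, so the 10%–20% gains quoted above are expressed on this scale.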
  4. Abstract Synthetic ensemble forecasts are an important tool for testing the robustness of forecast-informed reservoir operations (FIRO). These forecasts are statistically generated to mimic the skill of hindcasts derived from operational ensemble forecasting systems, but they can be created for time periods when hindcast data are unavailable, allowing for a more comprehensive evaluation of FIRO policies. Nevertheless, it remains unclear how to determine whether a candidate synthetic ensemble forecasting approach is sufficiently representative of its real-world counterpart to support FIRO policy evaluation. This highlights a need for formal fit-for-purpose validation frameworks to advance synthetic forecasting as a generalizable risk analysis strategy. We address this research gap by first introducing a novel operations-based validation framework, where reservoir storage and release simulations under a FIRO policy are compared when forced with a single ensemble hindcast and many different synthetic ensembles. We evaluate the suitability of synthetic forecasts based on formal probabilistic verification of the operational outcomes. Second, we develop a new synthetic ensemble forecasting algorithm and compare it to a previous algorithm using this validation framework across a set of stylized, hydrologically diverse reservoir systems in California. Results reveal clear differences in operational suitability, with the new method consistently outperforming the previous one. These findings demonstrate the promise of the newer synthetic forecasting approach as a generalizable tool for FIRO policy evaluation and robustness testing. They also underscore the value of the proposed validation framework in benchmarking and guiding future improvements in synthetic forecast development.
  5.
    Abstract Background Ensemble modeling aims to boost forecasting performance by systematically integrating the predictive accuracy of individual models. Here we introduce a simple yet powerful ensemble methodology for forecasting the trajectory of dynamic growth processes that are defined by a system of non-linear differential equations, with applications to infectious disease spread. Methods We propose and assess the performance of two ensemble modeling schemes with different parametric bootstrapping procedures for trajectory forecasting and uncertainty quantification. Specifically, we conduct sequential probabilistic forecasts to evaluate their forecasting performance using simple dynamical growth models with good track records, including the Richards model, the generalized-logistic growth model, and the Gompertz model. We first test and verify the functionality of the method using simulated data from phenomenological models and a mechanistic transmission model. Next, the performance of the method is demonstrated using a diversity of epidemic datasets, including scenario outbreak data from the Ebola Forecasting Challenge and real-world epidemic data from outbreaks of influenza, plague, Zika, and COVID-19. Results We found that the ensemble method that randomly selects a model from the set of individual models for each time point of the epidemic trajectory frequently outcompeted the individual models, as well as an alternative ensemble method based on a weighted combination of the individual models, and yields broader and more realistic uncertainty bounds for the trajectory envelope, achieving not only a better coverage rate of the 95% prediction interval but also improved mean interval scores across a diversity of epidemic datasets.
Conclusion Our new ensemble forecasting methodology outcompetes the component models and an alternative ensemble model that differs in how the variance is evaluated when generating the prediction intervals of the forecasts.
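The winning scheme described above, drawing a component model at random at each time point of the trajectory, can be sketched as follows. This is a simplified reading of the idea; the parametric bootstrapping around each fitted growth model is omitted, and all names and data are illustrative:

```python
import random

def random_selection_ensemble(model_trajectories, n_samples, seed=0):
    """Build ensemble trajectories by drawing, independently at each
    time point, the prediction of one randomly chosen component model.
    `model_trajectories` is a list of equal-length forecast curves."""
    rng = random.Random(seed)
    horizon = len(model_trajectories[0])
    samples = []
    for _ in range(n_samples):
        # pick a model independently at every time step of the horizon
        traj = [rng.choice(model_trajectories)[t] for t in range(horizon)]
        samples.append(traj)
    return samples

# Toy example: three growth-model forecasts over a 4-step horizon
models = [[10, 20, 40, 80], [12, 22, 38, 70], [9, 18, 45, 90]]
ens = random_selection_ensemble(models, n_samples=200)
```

Because each time point can come from a different model, the resulting envelope spans the union of the component forecasts, which is consistent with the broader, better-covering prediction intervals reported above.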