Title: Just What is “Good”? Musings on Hail Forecast Verification Through Evaluation of FV3-HAILCAST Hail Forecasts
Abstract Hail forecasts produced by the CAM-HAILCAST pseudo-Lagrangian hail size forecasting model were evaluated during the 2019, 2020, and 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFEs). As part of this evaluation, HWT SFE participants were polled about their definition of a "good" hail forecast. Participants were presented with two different verification methods conducted over three different spatiotemporal scales, and were then asked to subjectively evaluate the hail forecasts as well as the verification methods themselves. Results recommended the use of multiple verification methods tailored to the type of forecast expected by the end-user interpreting and applying the forecast. The hail forecasts evaluated during this period included an implementation of CAM-HAILCAST in the Limited Area Model of the Unified Forecast System with the Finite Volume 3 (FV3) dynamical core. Evaluation of FV3-HAILCAST over both 1-h and 24-h periods found continued improvement from 2019 to 2021. The improvement was largely a result of wide variability among FV3 ensemble members with different microphysics parameterizations in 2019 lessening significantly during 2020 and 2021. Overprediction throughout the diurnal cycle also lessened by 2021. A combination of both upscaling neighborhood verification and an object-based technique that retained only matched convective objects was necessary to understand the improvement, agreeing with the HWT SFE participants' recommendations for multiple verification methods.
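For readers unfamiliar with upscaling neighborhood verification, the sketch below illustrates the general idea using the fractions skill score, one common neighborhood metric: binary exceedance grids are averaged over progressively coarser blocks before scoring. The grid shape, the 25.4 mm (1 in.) severe-hail threshold, the block sizes, and the synthetic fields are illustrative assumptions, not values or methods taken from the paper.

```python
# Minimal sketch of upscaling neighborhood verification for hail size
# forecasts, assuming 2D numpy arrays of forecast and observed maximum
# hail size on a common grid. All thresholds and sizes are illustrative.
import numpy as np

def upscale(field, block):
    """Average a 2D field over non-overlapping block x block windows."""
    ny, nx = field.shape
    ny, nx = ny - ny % block, nx - nx % block  # trim to a multiple of block
    trimmed = field[:ny, :nx]
    return trimmed.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

def fractions_skill_score(fcst, obs, threshold, block):
    """FSS on event fractions after upscaling binary exceedance grids."""
    pf = upscale((fcst >= threshold).astype(float), block)
    po = upscale((obs >= threshold).astype(float), block)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
fcst = rng.gamma(2.0, 10.0, size=(120, 120))   # synthetic hail sizes, mm
obs = rng.gamma(2.0, 10.0, size=(120, 120))
for block in (1, 4, 16):                       # finer -> coarser scales
    print(block, fractions_skill_score(fcst, obs, threshold=25.4, block=block))
```

Scores computed at several block sizes show how apparent skill changes with spatial scale, which is the point of evaluating hail forecasts over multiple spatiotemporal scales.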
Award ID(s):
1855050
PAR ID:
10393840
Date Published:
Journal Name:
Weather and Forecasting
ISSN:
0882-8156
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract As an increasing number of machine learning (ML) products enter the research-to-operations (R2O) pipeline, researchers have anecdotally noted a perceived hesitancy by operational forecasters to adopt this relatively new technology. One explanation often cited in the literature is that this perceived hesitancy derives from the complex and opaque nature of ML methods. Because modern ML models are trained to solve tasks by optimizing a potentially complex combination of mathematical weights, thresholds, and nonlinear cost functions, it can be difficult to determine how these models reach a solution from their given input. However, it remains unclear to what degree a model's transparency may influence a forecaster's decision to use that model or if that impact differs between ML and more traditional (i.e., non-ML) methods. To address this question, a survey was offered to forecaster and researcher participants attending the 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiment (SFE) with questions about how participants subjectively perceive and compare machine learning products to more traditionally derived products. Results from this study revealed few differences in how participants evaluated machine learning products compared to other types of guidance. However, comparing the responses between operational forecasters, researchers, and academics exposed notable differences in what factors the three groups considered to be most important for determining the operational success of a new forecast product. These results support the need for increased collaboration between the operational and research communities. Significance Statement: Participants of the 2021 Hazardous Weather Testbed Spring Forecasting Experiment were surveyed to assess how machine learning products are perceived and evaluated in operational settings. The results revealed little difference in how machine learning products are evaluated compared to more traditional methods but emphasized the need for explainable product behavior and comprehensive end-user training.
  2. Abstract An ensemble postprocessing method is developed for the probabilistic prediction of severe weather (tornadoes, hail, and wind gusts) over the conterminous United States (CONUS). The method combines conditional generative adversarial networks (CGANs), a type of deep generative model, with a convolutional neural network (CNN) to postprocess convection-allowing model (CAM) forecasts. The CGANs are designed to create synthetic ensemble members from deterministic CAM forecasts, and their outputs are processed by the CNN to estimate the probability of severe weather. The method is tested using High-Resolution Rapid Refresh (HRRR) 1–24-h forecasts as inputs and Storm Prediction Center (SPC) severe weather reports as targets. The method produced skillful predictions with up to 20% Brier skill score (BSS) increases compared to other neural-network-based reference methods using a testing dataset of HRRR forecasts in 2021. For the evaluation of uncertainty quantification, the method is overconfident but produces meaningful ensemble spreads that can distinguish good and bad forecasts. The quality of CGAN outputs is also evaluated. Results show that the CGAN outputs behave similarly to a numerical ensemble; they preserved the intervariable correlations and the contribution of influential predictors as in the original HRRR forecasts. This work provides a novel approach to postprocess CAM output using neural networks that can be applied to severe weather prediction. Significance Statement: We use a new machine learning (ML) technique to generate probabilistic forecasts of convective weather hazards, such as tornadoes and hailstorms, with the output from high-resolution numerical weather model forecasts. The new ML system generates an ensemble of synthetic forecast fields from a single forecast, which are then used to train ML models for convective hazard prediction. Using this ML-generated ensemble for training leads to improvements of 10%–20% in severe weather forecast skills compared to using other ML algorithms that use only output from the single forecast. This work is unique in that it explores the use of ML methods for producing synthetic forecasts of convective storm events and using these to train ML systems for high-impact convective weather prediction.
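A minimal sketch of the two-stage design described above, assuming PyTorch: a conditional generator turns one deterministic forecast plus noise into synthetic ensemble members, and a CNN maps each member to a severe-weather probability grid. All layer sizes, channel counts, and field dimensions are invented for illustration; the paper's actual CGAN and CNN architectures are more elaborate.

```python
# Conceptual sketch of a CGAN-ensemble -> CNN-probability pipeline.
# Architectures and shapes are toy-sized assumptions, not the paper's.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """CGAN generator: deterministic CAM fields + noise -> synthetic member."""
    def __init__(self, n_fields=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_fields + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_fields, 3, padding=1),
        )
    def forward(self, cam_fields, noise):
        return self.net(torch.cat([cam_fields, noise], dim=1))

class HazardCNN(nn.Module):
    """CNN mapping one (synthetic) member to a severe-weather probability grid."""
    def __init__(self, n_fields=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_fields, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, member):
        return self.net(member)

gen, cnn = Generator(), HazardCNN()
cam = torch.randn(1, 4, 64, 64)  # one deterministic HRRR-like forecast
members = [gen(cam, torch.randn(1, 1, 64, 64)) for _ in range(10)]
probs = torch.stack([cnn(m) for m in members]).mean(dim=0)  # ensemble-mean probability
print(probs.shape)  # torch.Size([1, 1, 64, 64])
```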
  3. Power grid operators rely on solar irradiance forecasts to manage the uncertainty and variability associated with solar power. Meteorological factors such as cloud cover, wind direction, and wind speed affect irradiance and are associated with a high degree of variability and uncertainty. Statistical models fail to accurately capture the dependence between these factors and irradiance. In this paper, we introduce the idea of applying multivariate Gated Recurrent Units (GRUs) to forecast Direct Normal Irradiance (DNI) hourly. The proposed GRU-based forecasting method is evaluated against the traditional Long Short-Term Memory (LSTM) approach using historical irradiance data (i.e., weather variables that include cloud cover, wind direction, and wind speed) to forecast irradiance over intra-hour and inter-hour intervals. Our evaluation on one of the sites from the Measurement and Instrumentation Data Center indicates that both GRU and LSTM improved DNI forecasting performance when evaluated under different conditions. Moreover, including wind direction and wind speed substantially improves the accuracy of DNI forecasts. In addition, the forecasting model can accurately forecast irradiance values over multiple forecasting horizons.
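The sketch below shows a minimal version of the multivariate recurrent setup described above, assuming PyTorch. The feature set (DNI plus cloud cover, wind direction, and wind speed), the 24-h input window, and the layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Minimal GRU-vs-LSTM regressor sketch for hourly DNI forecasting.
# Features, window length, and hidden size are illustrative assumptions.
import torch
import torch.nn as nn

class SeqForecaster(nn.Module):
    """Recurrent regressor: a window of weather features -> next-hour DNI."""
    def __init__(self, cell="gru", n_features=4, hidden=64):
        super().__init__()
        rnn = nn.GRU if cell == "gru" else nn.LSTM
        self.rnn = rnn(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])  # predict from the last time step

x = torch.randn(32, 24, 4)               # 32 samples, 24-h history, 4 features
for cell in ("gru", "lstm"):
    model = SeqForecaster(cell)
    print(cell, model(x).shape)           # torch.Size([32, 1]) for both
```

Swapping the recurrent cell while holding the rest of the pipeline fixed is the natural way to run the GRU/LSTM comparison the abstract describes.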
  4. Larremore, Daniel B (Ed.)
    During the COVID-19 pandemic, forecasting COVID-19 trends to support planning and response was a priority for scientists and decision makers alike. In the United States, COVID-19 forecasting was coordinated by a large group of universities, companies, and government entities led by the Centers for Disease Control and Prevention and the US COVID-19 Forecast Hub (https://covid19forecasthub.org). We evaluated approximately 9.7 million forecasts of weekly state-level COVID-19 cases for predictions 1–4 weeks into the future submitted by 24 teams from August 2020 to December 2021. We assessed coverage of central prediction intervals and weighted interval scores (WIS), adjusting for missing forecasts relative to a baseline forecast, and used a Gaussian generalized estimating equation (GEE) model to evaluate differences in skill across epidemic phases that were defined by the effective reproduction number. Overall, we found high variation in skill across individual models, with ensemble-based forecasts outperforming other approaches. Forecast skill relative to the baseline was generally higher for larger jurisdictions (e.g., states compared to counties). Over time, forecasts generally performed worst in periods of rapid changes in reported cases (in either increasing or decreasing epidemic phases), with 95% prediction interval coverage dropping below 50% during the growth phases of the winter 2020, Delta, and Omicron waves. Ideally, case forecasts could serve as a leading indicator of changes in transmission dynamics. However, while most COVID-19 case forecasts outperformed a naïve baseline model, even the most accurate case forecasts were unreliable in key phases. Further research could improve forecasts of leading indicators, like COVID-19 cases, by leveraging additional real-time data, addressing performance across phases, improving the characterization of forecast confidence, and ensuring that forecasts are coherent across spatial scales. In the meantime, it is critical for forecast users to appreciate current limitations and use a broad set of indicators to inform pandemic-related decision making.
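The weighted interval score used in this evaluation is well defined in the literature (Bracher et al. 2021); the sketch below computes it for a single quantile forecast. Only the scoring formula is taken from the literature; the forecast numbers are invented for illustration.

```python
# Weighted interval score (WIS) for a quantile forecast, following
# Bracher et al. (2021). Example values below are hypothetical.
import numpy as np

def interval_score(y, lower, upper, alpha):
    """Score for a central (1 - alpha) prediction interval (lower is better)."""
    return (upper - lower) \
        + (2.0 / alpha) * max(lower - y, 0.0) \
        + (2.0 / alpha) * max(y - upper, 0.0)

def weighted_interval_score(y, median, intervals):
    """intervals: dict alpha -> (lower, upper) for central intervals.
    WIS = (w0*|y - median| + sum_k (alpha_k/2)*IS_alpha_k) / (K + 1/2), w0 = 1/2."""
    k = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, (lo, hi) in intervals.items():
        total += (alpha / 2.0) * interval_score(y, lo, hi, alpha)
    return total / (k + 0.5)

# One week's state-level case forecast (hypothetical numbers):
# a median plus 50% and 90% central prediction intervals.
print(weighted_interval_score(
    y=1200.0, median=1000.0,
    intervals={0.5: (800.0, 1300.0), 0.1: (600.0, 1800.0)},
))
```

Because wide intervals are penalized even when they cover the observation, the WIS rewards the combination of calibration and sharpness that the evaluation describes.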
  5. Abstract Background: West Nile virus (WNV) is the leading cause of mosquito-borne illness in the continental USA. WNV occurrence has high spatiotemporal variation, and current approaches to targeted control of the virus are limited, making forecasting a public health priority. However, little research has been done to compare strengths and weaknesses of WNV disease forecasting approaches on the national scale. We used forecasts submitted to the 2020 WNV Forecasting Challenge, an open challenge organized by the Centers for Disease Control and Prevention, to assess the status of WNV neuroinvasive disease (WNND) prediction and identify avenues for improvement. Methods: We performed a multi-model comparative assessment of probabilistic forecasts submitted by 15 teams for annual WNND cases in US counties for 2020 and assessed forecast accuracy, calibration, and discriminatory power. In the evaluation, we included forecasts produced by comparison models of varying complexity as benchmarks of forecast performance. We also used regression analysis to identify modeling approaches and contextual factors that were associated with forecast skill. Results: Simple models based on historical WNND cases generally scored better than more complex models and combined higher discriminatory power with better calibration of uncertainty. Forecast skill improved across updated forecast submissions submitted during the 2020 season. Among models using additional data, inclusion of climate or human demographic data was associated with higher skill, while inclusion of mosquito or land use data was associated with lower skill. We also identified population size, extreme minimum winter temperature, and interannual variation in WNND cases as county-level characteristics associated with variation in forecast skill. Conclusions: Historical WNND cases were strong predictors of future cases with minimal increase in skill achieved by models that included other factors. Although opportunities might exist to specifically improve predictions for areas with large populations and low or high winter temperatures, areas with high case-count variability are intrinsically more difficult to predict. Also, the prediction of outbreaks, which are outliers relative to typical case numbers, remains difficult. Further improvements to prediction could be obtained with improved calibration of forecast uncertainty and access to real-time data streams (e.g., current weather and preliminary human cases).
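The kind of simple historical baseline that proved hard to beat can be sketched as a moment-matched negative binomial distribution over a county's past annual case counts, scored by the log probability assigned to the observed count. The counts below are invented; this illustrates the general approach, not the challenge's actual baseline model.

```python
# Sketch of a simple historical baseline for annual county WNND counts:
# fit a negative binomial by moment matching, score with the log of the
# predicted probability of the observed count. Data below are invented.
import numpy as np
from scipy import stats

def historical_nbinom(counts):
    """Moment-matched negative binomial over past annual case counts.
    Assumes a nonzero mean and clamps variance above the mean, since the
    negative binomial requires overdispersion (variance > mean)."""
    m, v = np.mean(counts), np.var(counts, ddof=1)
    v = max(v, m + 1e-6)
    p = m / v
    n = m * m / (v - m)
    return stats.nbinom(n, p)

history = [3, 0, 7, 2, 12, 4, 1, 6]  # hypothetical county WNND counts by year
dist = historical_nbinom(history)
observed = 5
print("P(observed):", dist.pmf(observed))
print("log score:", np.log(dist.pmf(observed)))
```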