

Search for: All records

Award ID contains: 2425735


  1. Abstract: It is widely agreed that subseasonal-to-seasonal (S2S) predictability arises from the atmospheric initial state at early lead times and from the land and ocean at long lead times. We test this hypothesis for the large-scale mid-latitude atmosphere by training numerous XGBoost models to predict weather regimes (WRs) over North America at 1-to-8-week lead times. Each model uses a different predictor from one Earth system component (atmosphere, ocean, or land) sourced from reanalysis. According to the models, the atmosphere provides more predictability during the first two forecast weeks, and the three components perform similarly afterward. However, the skill and sources of predictability depend strongly on the season and target WR. Our results show greater WR predictability in fall and winter, particularly for the Pacific Trough and Pacific Ridge regimes, driven primarily by the ocean (e.g., El Niño-Southern Oscillation and sea ice). For the Pacific Ridge in winter, the stratosphere also contributes significantly to predictability across most S2S lead times. Additionally, the initial large-scale tropospheric structure (encompassing the tropics and extra-tropics, e.g., the Madden-Julian Oscillation) and soil conditions play a relevant role, most notably for the Greenland High regime in winter. This study highlights previously identified sources of predictability for the large-scale atmosphere and gives insight into new sources for future study. Given how closely WRs are linked to surface precipitation and temperature anomalies, storm tracks, and extreme events, these results contribute to improving S2S prediction of surface weather.
    Free, publicly-accessible full text available June 19, 2026
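The study's design of training one model per predictor and per lead time, then comparing skill across leads, can be sketched as follows. This is a minimal, dependency-free illustration with synthetic data: the nearest-centroid classifier is a stand-in for the paper's XGBoost models, and all arrays, sizes, and the train/test split are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: one flattened predictor field (e.g., an SST anomaly
# map) and a categorical weather-regime label. The paper uses reanalysis
# predictors and four North American WRs; these numbers are illustrative.
n_samples, n_features, n_regimes = 200, 16, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_regimes, size=n_samples)

def fit_centroids(X, y, n_classes):
    """Class centroids in feature space (stand-in for training XGBoost)."""
    return np.stack([X[y == k].mean(axis=0) for k in range(n_classes)])

def predict(centroids, X):
    """Assign each sample to the nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# One model per lead time: lag the predictor relative to the target label,
# train on the first half of the lagged pairs, and score on the second half.
skill_by_lead = {}
for lead in range(1, 9):  # 1-to-8-week leads, as in the abstract
    Xl, yl = X[:-lead], y[lead:]
    split = len(Xl) // 2
    cent = fit_centroids(Xl[:split], yl[:split], n_regimes)
    acc = (predict(cent, Xl[split:]) == yl[split:]).mean()
    skill_by_lead[lead] = acc
```

Repeating this loop once per predictor (atmosphere, ocean, land) yields the skill-versus-lead-time comparison the abstract describes.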
  2. Abstract: AI-based algorithms are emerging in many meteorological applications that produce imagery as output, including global weather forecasting models. However, the imagery produced by AI algorithms, especially by convolutional neural networks (CNNs), is often described as too blurry to look realistic, partly because CNNs tend to represent uncertainty as blurriness. This blurriness can be undesirable since it might obscure important meteorological features. More complex AI models, such as generative AI models, produce images that appear sharper. However, improved sharpness may come at the expense of a decline in other performance criteria, such as standard forecast verification metrics. To navigate any trade-off between sharpness and other performance metrics, it is important to quantitatively assess those other metrics alongside sharpness. While there is a rich set of forecast verification metrics available for meteorological images, none of them focus on sharpness. This paper seeks to fill this gap by 1) exploring a variety of sharpness metrics from other fields, 2) evaluating properties of these metrics, 3) proposing the new concept of Gaussian Blur Equivalence as a tool for their uniform interpretation, and 4) demonstrating their use for sample meteorological applications, including a CNN that emulates radar imagery from satellite imagery (GREMLIN) and an AI-based global weather forecasting model (GraphCast).
    Free, publicly-accessible full text available June 9, 2026
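The Gaussian Blur Equivalence idea, expressing a sharpness score as "the blur level sigma that would give a reference image the same sharpness," can be sketched as below. This is an illustrative interpretation, not the paper's implementation: the gradient-magnitude sharpness metric, the grid search over sigma, and the random test images are all assumptions of this sketch.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with plain NumPy."""
    if sigma == 0:
        return img.copy()
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return out

def grad_sharpness(img):
    """Mean gradient magnitude -- one simple sharpness metric."""
    gy, gx = np.gradient(img)
    return float(np.mean(np.hypot(gx, gy)))

def gaussian_blur_equivalent(img, reference, sigmas):
    """Smallest trial sigma whose blur of `reference` is no sharper than `img`."""
    s_img = grad_sharpness(img)
    for s in sigmas:
        if grad_sharpness(gaussian_blur(reference, s)) <= s_img:
            return s
    return sigmas[-1]

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))      # stand-in "truth" image
blurry = gaussian_blur(ref, 2.0)     # stand-in model output
sigma_eq = gaussian_blur_equivalent(blurry, ref,
                                    sigmas=[0.5 * i for i in range(9)])
```

Because the stand-in output is literally the reference blurred with sigma = 2.0, the recovered equivalent blur is 2.0, which is the intuition behind expressing any sharpness metric on a common sigma scale.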
  3. Estimating and disentangling epistemic uncertainty (uncertainty that is reducible with more training data) and aleatoric uncertainty (uncertainty that is inherent to the task at hand) is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models’ breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: one need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods such as Monte Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, our proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates compared to existing methods. We validate our method on two distinct real-world tasks: X-ray computed tomography reconstruction and weather temperature forecasting. Source code is publicly available at https://github.com/matthewachan/hyperdm.
    Free, publicly-accessible full text available December 13, 2025
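The epistemic/aleatoric split that a multi-model ensemble of generative samplers provides can be sketched with the law of total variance. This toy mimics the naive M-model ensemble that HyperDM is designed to replace with a single hyper-network; the ensemble size, sample counts, and Gaussian data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: M "models", each drawing S posterior samples per pixel.
M, S, n_pix = 8, 64, 32
member_means = 1.0 + 0.1 * rng.normal(size=(M, 1, n_pix))      # model-to-model spread
samples = member_means + 0.5 * rng.normal(size=(M, S, n_pix))  # per-model posterior draws

# Law-of-total-variance decomposition of the pooled predictive variance:
aleatoric = samples.var(axis=1).mean(axis=0)   # mean within-model variance
epistemic = samples.mean(axis=1).var(axis=0)   # variance of per-model means
total = samples.reshape(M * S, n_pix).var(axis=0)
```

With equal sample counts per model and population variances (ddof=0), the decomposition is exact: total variance equals aleatoric plus epistemic, pixel by pixel. More training data shrinks the epistemic term; the aleatoric term reflects irreducible noise in the task.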
  4. Free, publicly-accessible full text available December 1, 2025