The intensity and frequency of wildfires in California (CA) have increased in recent years, causing significant damage to human health and property. In October 2007, a number of small fire events, collectively referred to as the Witch Creek Fire or Witch Fire, started in Southern CA and intensified under strong Santa Ana winds. As a test of current mesoscale modeling capabilities, we use the Weather Research and Forecasting (WRF) model to simulate the meteorological conditions of the 2007 wildfire event. The main objectives of the present study are to investigate the impact of horizontal grid resolution and planetary boundary layer (PBL) scheme on the model simulation of meteorological conditions associated with a megafire. We evaluate the predictive capability of the WRF model for key meteorological and fire-weather forecast parameters such as wind, moisture, and temperature. Results of this study suggest that more accurate predictions of the temperature and wind speed relevant to wildfire spread can be achieved by downscaling regional numerical weather prediction products to 1 km resolution. Furthermore, accurate prediction of near-surface conditions depends on the choice of PBL parameterization: the MYNN scheme yields more accurate predictions than the YSU scheme. WRF simulations at 1 km resolution predict temperature and wind speed better than relative humidity during the 2007 Witch Fire. In summary, the MYNN PBL scheme combined with finer grid resolution improves the prediction of near-surface meteorological conditions during a wildfire event.
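For readers who want to reproduce this kind of PBL sensitivity experiment, the sketch below shows one way to switch the scheme in a WRF namelist from Python. It is a minimal sketch, assuming the third-party f90nml package and a local namelist.input; the option values follow the standard WRF physics table (bl_pbl_physics = 1 for YSU, 5 for the MYNN 2.5-level scheme), while the three-domain setup and file names are placeholders, not taken from the study.

```python
# Sketch: switching the WRF PBL scheme between YSU and MYNN in namelist.input.
# Assumes the f90nml package; in WRF, bl_pbl_physics = 1 selects YSU and
# 5 selects the MYNN 2.5-level scheme.
import f90nml

nml = f90nml.read("namelist.input")

# One entry per domain; here a hypothetical three-domain nest down to 1 km.
nml["physics"]["bl_pbl_physics"] = [5, 5, 5]    # MYNN on all domains
# MYNN is normally paired with its matching surface-layer option.
nml["physics"]["sf_sfclay_physics"] = [5, 5, 5]

nml.write("namelist.input.mynn", force=True)    # overwrite if it exists
```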
Downscaling numerical weather models with conditional generative adversarial networks.
Abstract—Numerical simulation of weather is resolution-constrained due to the high computational cost of integrating the coupled PDEs that govern atmospheric motion. For example, the most highly resolved numerical weather prediction models are limited to a grid spacing of approximately 3 km. However, many weather and climate impacts occur over much finer scales, especially in urban areas and in regions of high topographic complexity such as mountains and coastlines. Thus, several statistical methods have been developed in the climate community to downscale numerical model output to finer resolutions. This is conceptually similar to image super-resolution (SR) [1], and in this work we report the results of applying SR methods to the downscaling problem. In particular, we test the extent to which an SR method based on a Generative Adversarial Network (GAN) can recover a grid of wind speed from an artificially downsampled version, compared against a standard bicubic upsampling approach and another machine-learning-based approach, SR-CNN [1]. We use ESRGAN [2] to learn to downscale wind speeds by a factor of 4 from a coarse grid. We find that we can recover spatial details with higher fidelity than bicubic upsampling or SR-CNN. The bicubic and SR-CNN methods perform better than ESRGAN on coarse metrics such as MSE. However, the high-frequency power spectrum is captured remarkably well by ESRGAN, virtually identical to that of the real data, while the bicubic and SR-CNN fidelity drops significantly at high frequency. This indicates that the GAN-based SR is considerably better at matching the higher-order statistics of the dataset, consistent with the observation that the generated images are of superior visual quality compared with SR-CNN.
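The power-spectrum comparison described above can be reproduced with a standard radially averaged spectrum. The sketch below is a minimal version for a square 2-D wind-speed field; the field and variable names are hypothetical, and the binning is the usual integer-wavenumber shortcut rather than anything specific to the paper.

```python
# Sketch: radially averaged power spectrum of a square 2-D field, the kind
# of diagnostic used to compare ESRGAN, SR-CNN, and bicubic outputs at
# high wavenumbers.
import numpy as np

def radial_power_spectrum(field: np.ndarray) -> np.ndarray:
    """Return mean spectral power binned by integer radial wavenumber."""
    n = field.shape[0]                     # assumes a square grid
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky, kx = np.indices(power.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)       # radial wavenumber per cell
    counts = np.bincount(k.ravel())
    total = np.bincount(k.ravel(), weights=power.ravel())
    return total / np.maximum(counts, 1)   # avoid divide-by-zero in empty bins

# Usage: compare a super-resolved field against the reference at high k, e.g.
# ps_sr, ps_ref = radial_power_spectrum(wind_sr), radial_power_spectrum(wind_ref)
```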
- Award ID(s): 1843103
- PAR ID: 10137368
- Date Published:
- Journal Name: CLI info
- ISSN: 1623-2666
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Climate and weather data such as precipitation derived from Global Climate Models (GCMs) and satellite observations are essential for global and local hydrological assessment. However, most popular climatic precipitation products (with spatial resolutions coarser than 10 km) are too coarse for local impact studies and require "downscaling" to obtain higher resolutions. Traditional precipitation downscaling methods such as statistical and dynamical downscaling require additional meteorological variables as input, and very few are applicable to downscaling hourly precipitation to higher spatial resolution. Based on dynamic dictionary learning, we propose a new downscaling method, PreciPatch, to address this challenge by producing spatially distributed higher-resolution precipitation fields with only precipitation input from GCMs, at hourly temporal resolution and over a large geographical extent. Using aggregated Integrated Multi-satellitE Retrievals for GPM (IMERG) data, an experiment was conducted to evaluate the performance of PreciPatch in comparison with bicubic interpolation, RainFARM (a stochastic downscaling method), and DeepSD (a Super-Resolution Convolutional Neural Network (SRCNN)-based downscaling method). PreciPatch demonstrates better performance than the other methods for downscaling short-duration precipitation events (using historical data from 2014 to 2017 as the training set to estimate high-resolution hourly events in 2018). A sketch of the bicubic baseline appears after this list.
- Extreme winds associated with tropical cyclones (TCs) can cause significant loss of life and economic damage globally, highlighting the need for accurate, high-resolution modeling and forecasting of wind. However, due to their coarse horizontal resolution, most global climate and weather models suffer from chronic underprediction of TC wind speeds, limiting their use for impact analysis and energy modeling. In this study, we introduce a cascading deep learning framework designed to downscale high-resolution TC wind fields from low-resolution data. Our approach maps 85 TC events from ERA5 data (0.25° resolution) to high-resolution (0.05° resolution) observations at 6-hr intervals. The first component is a debiasing neural network designed to model accurate wind speed observations from ERA5 data. The second component employs a generative super-resolution strategy based on a conditional denoising diffusion probabilistic model (DDPM) to enhance the spatial resolution and to produce ensemble estimates. The model accurately captures intensity and produces realistic radial profiles and fine-scale spatial structures of wind fields, with a percentage mean bias of -3.74% compared to the high-resolution observations. Our downscaling framework enables the prediction of high-resolution wind fields from widely available low-resolution wind and intensity data, allowing for the modeling of past events and the assessment of future TC risks. A sketch of the bias metric appears after this list.
- Recent supervised point cloud upsampling methods are restricted by the size of training data and are limited in terms of covering all object shapes. Besides the challenges posed by data acquisition, the networks also struggle to generalize to unseen records. In this paper, we present an internal point cloud upsampling approach at a holistic level, referred to as "Zero-Shot" Point Cloud Upsampling (ZSPU). Our approach is data agnostic and relies solely on the internal information provided by a particular point cloud, without patching, in both the self-training and testing phases. This single-stream design significantly reduces training time by learning the relation between low-resolution (LR) point clouds and their high (original) resolution (HR) counterparts. This association then provides super-resolution (SR) outputs when original point clouds are loaded as input. ZSPU achieves competitive or superior quantitative and qualitative performance on benchmark datasets compared with other upsampling methods. A sketch of the LR/HR pairing idea appears after this list.
- We propose a novel end-to-end deep scene flow model, called PointPWC-Net, that directly processes 3D point cloud scenes with large motions in a coarse-to-fine fashion. Flow computed at the coarse level is upsampled and warped to a finer level, enabling the algorithm to accommodate large motion without a prohibitive search space. We introduce novel cost volume, upsampling, and warping layers to efficiently handle 3D point cloud data. Unlike traditional cost volumes that require exhaustively computing all cost values on a high-dimensional grid, our point-based formulation discretizes the cost volume onto the input 3D points, and a PointConv operation efficiently computes convolutions on the cost volume. Experimental results on FlyingThings3D and KITTI outperform the state-of-the-art by a large margin. We further explore novel self-supervised losses to train our model and achieve results comparable to the state-of-the-art trained with supervised loss. Without any fine-tuning, our method also shows great generalization ability on the KITTI Scene Flow 2015 dataset, outperforming all previous methods. The code is released at https://github.com/DylanWusee/PointPWC. A sketch of the upsample-and-warp step appears after this list.
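Referring to the PreciPatch item above, the sketch below shows the bicubic-interpolation baseline that learned downscaling methods are typically compared against. The 4x factor, grid size, and array names are illustrative, not taken from the paper.

```python
# Sketch: bicubic upsampling baseline for a coarse precipitation field.
import numpy as np
from scipy.ndimage import zoom

coarse = np.random.rand(18, 18)        # stand-in for a coarse GCM field
fine = zoom(coarse, 4, order=3)        # 4x upsampling with cubic splines
fine = np.clip(fine, 0.0, None)        # precipitation cannot be negative
print(coarse.shape, "->", fine.shape)  # (18, 18) -> (72, 72)
```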
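For the tropical-cyclone downscaling item, the snippet below computes one common definition of percentage mean bias, the metric quoted there as -3.74%. The paper may define it slightly differently, and the array names are hypothetical.

```python
# Sketch: percentage mean bias between downscaled and observed wind speeds.
import numpy as np

def percent_mean_bias(pred: np.ndarray, obs: np.ndarray) -> float:
    """100 * (mean(pred) - mean(obs)) / mean(obs); negative = underprediction."""
    return 100.0 * (pred.mean() - obs.mean()) / obs.mean()
```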
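For the ZSPU item, the sketch below illustrates the zero-shot idea of building LR/HR training pairs from a single input cloud. Random subsampling stands in for whatever downsampling the paper actually uses, so treat this as a conceptual sketch only.

```python
# Sketch: LR/HR pair construction from one point cloud, the self-supervised
# signal a zero-shot upsampler learns from.
import numpy as np

def make_lr_hr_pair(points: np.ndarray, ratio: float = 0.25):
    """points: (N, 3) array; returns (LR subset, full-resolution cloud)."""
    n_lr = max(1, int(len(points) * ratio))
    idx = np.random.choice(len(points), n_lr, replace=False)
    return points[idx], points   # the network learns LR -> HR on this pair
```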
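For the PointPWC-Net item, the sketch below mimics the coarse-to-fine upsample-and-warp step using a simple nearest-neighbor flow transfer; the paper's learned upsampling and warping layers are replaced here by a non-learned stand-in, and all names are hypothetical.

```python
# Sketch: upsample coarse-level scene flow to fine-level points, then warp.
import numpy as np
from scipy.spatial import cKDTree

def upsample_and_warp(fine_pts, coarse_pts, coarse_flow):
    """Copy each fine point's flow from its nearest coarse point, then warp."""
    _, nn = cKDTree(coarse_pts).query(fine_pts)   # nearest coarse index
    fine_flow = coarse_flow[nn]                   # (N_fine, 3) upsampled flow
    return fine_pts + fine_flow                   # warped fine-level points
```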