-
Accurate precipitation retrieval using satellite sensors is still challenging due to the limited spatio-temporal sampling of the sensors and the uncertainty of the applied parametric retrieval algorithms. In this research, we propose a deep learning framework for precipitation retrieval using observations from the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM) on the GOES-R satellite series. In particular, two deep convolutional neural network (CNN) models are designed to detect and estimate precipitation using the cloud-top brightness temperature from the ABI and the lightning flash rate from the GLM. Precipitation estimates from the ground-based Multi-Radar/Multi-Sensor (MRMS) system are used as the target labels in the training phase. The experimental results show that in the testing phase, the proposed framework offers more accurate precipitation estimates than the current operational Rainfall Rate Quantitative Precipitation Estimate (RRQPE) product from GOES-R.
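The two-stage detect-then-quantify design can be illustrated with a toy numpy sketch. The fixed brightness-temperature threshold and the auto-estimator-style power law below stand in for the two trained CNNs and are illustrative only; all array values are made up.

```python
import numpy as np

def detect_precip(tb, flash_rate, tb_thresh=235.0):
    # Toy detection stage: flag a pixel as raining when the cloud-top
    # brightness temperature (K) is cold or lightning is present.
    # In the paper, a trained CNN learns this mapping instead.
    return (tb < tb_thresh) | (flash_rate > 0)

def estimate_rate(tb, rain_mask):
    # Toy quantification stage: a classic auto-estimator-style power law
    # relating cold cloud tops to rain rate (mm/h); the second CNN
    # replaces this fixed parametric relation.
    rate = 1.1183e11 * np.exp(-3.6382e-2 * tb ** 1.2)
    return np.where(rain_mask, rate, 0.0)

tb = np.array([[210.0, 250.0], [230.0, 280.0]])   # ABI-like brightness temperature (K)
flashes = np.array([[2.0, 0.0], [0.0, 0.0]])      # GLM-like flash rate
mask = detect_precip(tb, flashes)
rates = estimate_rate(tb, mask)
```

Colder cloud tops yield higher toy rain rates, and pixels outside the detection mask are set to zero, mirroring the detection-then-quantification split.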
-
Abstract Atmospheric processes involve both space and time. Thus, humans looking at atmospheric imagery can often spot important signals in an animated loop of an image sequence that are not apparent in an individual (static) image. Utilizing such signals with automated algorithms requires the ability to identify complex spatiotemporal patterns in image sequences. That is a very challenging task due to the endless possibilities of patterns in both space and time. Here, we review different concepts and techniques that are useful to extract spatiotemporal signals from meteorological image sequences to expand the effectiveness of AI algorithms for classification and prediction tasks. We first present two applications that motivate the need for these approaches in meteorology, namely the detection of convection from satellite imagery and solar forecasting. Then we provide an overview of concepts and techniques that are helpful for the interpretation of meteorological image sequences, such as (a) feature engineering methods using (i) meteorological knowledge, (ii) classic image processing, (iii) harmonic analysis, and (iv) topological data analysis; (b) ways to use convolutional neural networks for this purpose with emphasis on discussing different convolution filters (2D/3D/LSTM-convolution); and (c) a brief survey of several other concepts, including the concept of “attention” in neural networks and its utility for the interpretation of image sequences and strategies from self-supervised and transfer learning to reduce the need for large labeled datasets. We hope that presenting an overview of these tools—many of which are not new but underutilized in this context—will accelerate progress in this area.
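As a minimal illustration of the feature-engineering route (a), hand-engineered temporal features from an image sequence can be as simple as per-pixel statistics across frames. This numpy sketch is illustrative and not drawn from the review itself:

```python
import numpy as np

def temporal_features(seq):
    # Per-pixel features over a (T, H, W) image sequence: temporal mean,
    # last frame-to-frame difference, and overall trend (last minus first).
    # Such channels can be fed to a 2D network in place of learned
    # 3D/LSTM convolutions over the raw sequence.
    mean = seq.mean(axis=0)
    last_diff = seq[-1] - seq[-2]
    trend = seq[-1] - seq[0]
    return np.stack([mean, last_diff, trend])

seq = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 4.0)])  # 3-frame toy loop
feats = temporal_features(seq)
```

A brightening trend like this (1 → 2 → 4) is exactly the kind of signal visible in an animated loop but invisible in any single static frame.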
-
Satellite sensors have been widely used for precipitation retrieval, and a number of precipitation retrieval algorithms have been developed using observations from various satellite sensors. The current operational rainfall rate quantitative precipitation estimate (RRQPE) product from the geostationary operational environmental satellite (GOES) offers full-disk rainfall rate estimates based on observations from the advanced baseline imager (ABI) aboard the GOES-R series. However, accurate precipitation retrieval using satellite sensors is still challenging due to the limitations on spatio-temporal sampling of the satellite sensors and/or the uncertainty associated with the applied parametric retrieval algorithms. In this article, we propose a deep learning framework for precipitation retrieval using the combined observations from the ABI and geostationary lightning mapper (GLM) on the GOES-R series to improve the current operational RRQPE product. In particular, the proposed deep learning framework is composed of two deep convolutional neural networks (CNNs) that are designed for precipitation detection and quantification. The cloud-top brightness temperature from multiple ABI channels and the lightning flash rate from the GLM measurement are used as inputs to the deep learning framework. To train the designed CNNs, the precipitation product from the Multi-Radar/Multi-Sensor (MRMS) system of the National Oceanic and Atmospheric Administration (NOAA) is used as the target label to optimize the network parameters. The experimental results show that the precipitation retrieval performance of the proposed framework is superior to the currently operational GOES RRQPE product in the selected study domain, and the performance is dramatically enhanced after incorporating the lightning data into the deep learning model.
Using the independent MRMS product as a reference, the deep learning model can reduce the retrieval uncertainty in the operational RRQPE product by at least 31% in terms of the mean squared error and normalized mean absolute error, and the improvement is more significant in moderate to heavy rain regions. Therefore, the proposed deep learning framework can potentially serve as an alternative approach for GOES precipitation retrievals.
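The two error measures cited above can be computed as in this numpy sketch; the rain-rate values are made up for illustration, and the paper's exact normalization of the MAE may differ from the one assumed here:

```python
import numpy as np

def mse(pred, ref):
    # Mean squared error between retrieved and reference rain rates.
    return float(np.mean((pred - ref) ** 2))

def nmae(pred, ref):
    # Normalized mean absolute error: MAE divided by the mean reference
    # rate (one common normalization; the paper's definition may differ).
    return float(np.mean(np.abs(pred - ref)) / np.mean(ref))

ref = np.array([1.0, 2.0, 4.0, 8.0])      # toy MRMS reference (mm/h)
rrqpe = np.array([2.0, 1.0, 6.0, 5.0])    # toy operational estimates
dl = np.array([1.2, 1.8, 4.5, 7.0])       # toy deep learning estimates
reduction = 1.0 - mse(dl, ref) / mse(rrqpe, ref)  # fractional MSE reduction
```

An "at least 31% reduction" claim corresponds to `reduction >= 0.31` under this definition.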
-
Abstract Increases in wildfire activity and the resulting impacts have prompted the development of high-resolution wildfire behavior models for forecasting fire spread. Recent progress in using satellites to detect fire locations further provides the opportunity to use measurements toward improving fire spread forecasts from numerical models through data assimilation. This work develops a physics-informed approach for inferring the history of a wildfire from satellite measurements, providing the necessary information to initialize coupled atmosphere–wildfire models from a measured wildfire state. The fire arrival time, which is the time the fire reaches a given spatial location, acts as a succinct representation of the history of a wildfire. In this work, a conditional Wasserstein generative adversarial network (cWGAN), trained with WRF–SFIRE simulations, is used to infer the fire arrival time from satellite active fire data. The cWGAN is used to produce samples of likely fire arrival times from the conditional distribution of arrival times given satellite active fire detections. Samples produced by the cWGAN are further used to assess the uncertainty of predictions. The cWGAN is tested on four California wildfires occurring between 2020 and 2022, and predictions for fire extent are compared against high-resolution airborne infrared measurements. Further, the predicted ignition times are compared with reported ignition times. An average Sørensen’s coefficient of 0.81 for the fire perimeters and an average ignition time difference of 32 min suggest that the method is highly accurate.
Significance Statement To initialize coupled atmosphere–wildfire simulations in a physically consistent way based on satellite measurements of active fire locations, it is critical to ensure the state of the fire and atmosphere aligns at the start of the forecast. If known, the history of a wildfire may be used to develop an atmospheric state matching the wildfire state determined from satellite data in a process known as spinup. In this paper, we present a novel method for inferring the early stage history of a wildfire based on satellite active fire measurements. Here, inference of the fire history is performed in a probabilistic sense and physics is further incorporated through the use of training data derived from a coupled atmosphere–wildfire model.
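Sørensen's coefficient used above to score the predicted fire perimeters is the Dice overlap between two binary burn masks; a minimal numpy sketch with toy masks:

```python
import numpy as np

def sorensen(a, b):
    # Sørensen's (Dice) coefficient between two binary burn masks:
    # 2|A ∩ B| / (|A| + |B|); 1.0 means identical extents.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0], [1, 0, 0]])  # toy predicted fire extent
obs = np.array([[1, 1, 0], [0, 0, 0]])   # toy observed (IR) extent
score = sorensen(pred, obs)
```

On this toy pair the overlap of 2 pixels against mask sizes 3 and 2 gives a coefficient of 0.8, comparable in scale to the reported 0.81 average.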
-
Producing high-resolution near-real-time forecasts of fire behavior and smoke impact that are useful for fire and air quality management requires accurate initialization of the fire location. One common representation of the fire progression is the fire arrival time, which defines the time the fire arrives at a given location. Estimating the fire arrival time is critical for initializing the fire location within coupled fire-atmosphere models. We present a new method that utilizes machine learning to estimate the fire arrival time from satellite data in the form of burning/not burning/no data rasters. The proposed method, based on a support vector machine (SVM), is tested on the 10 largest California wildfires of the 2020 fire season and evaluated using independent observed data from airborne infrared (IR) fire perimeters. The SVM results indicate good agreement with airborne fire observations in terms of fire growth and the spatial representation of the fire extent. An average burned-area absolute percentage error of 12%, a mean total-burned-area percentage error of 5%, an average False Alarm Ratio of 0.21, an average Probability of Detection of 0.86, and an average Sørensen's coefficient of 0.82 suggest that this method can be used to monitor wildfires in near-real-time and provide accurate fire arrival times for improving fire modeling even in the absence of IR fire perimeters.
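The Probability of Detection and False Alarm Ratio reported above come from a standard 2x2 contingency table over binary burned/not-burned rasters; a minimal sketch with toy masks:

```python
import numpy as np

def contingency_scores(pred, obs):
    # POD = hits / (hits + misses);
    # FAR = false alarms / (hits + false alarms),
    # computed over binary burned/not-burned rasters.
    pred, obs = pred.astype(bool), obs.astype(bool)
    hits = np.logical_and(pred, obs).sum()
    misses = np.logical_and(~pred, obs).sum()
    false_alarms = np.logical_and(pred, ~obs).sum()
    return float(hits / (hits + misses)), float(false_alarms / (hits + false_alarms))

pred = np.array([1, 1, 1, 0, 0])  # toy predicted burn raster
obs = np.array([1, 1, 0, 1, 0])   # toy observed burn raster
pod, far = contingency_scores(pred, obs)
```

With 2 hits, 1 miss, and 1 false alarm, POD is 2/3 and FAR is 1/3; a perfect prediction gives POD = 1 and FAR = 0.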
-
Abstract The method of neural networks (aka deep learning) has opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image-to-image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks for working with meteorological images, such as best practices for evaluation, tuning, and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative meteorologist-driven discovery process that builds on experimental design and hypothesis generation and testing. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image-to-image translation.
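The receptive-field concept highlighted above follows a simple recurrence over a stack of convolution/pooling layers; a short sketch (the example layer stack is hypothetical):

```python
def receptive_field(layers):
    # Receptive field of a stack of conv/pool layers given as
    # (kernel_size, stride) pairs: each layer grows the field by
    # (k - 1) times the cumulative stride (jump) of the layers before it.
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# hypothetical stack: three 3x3 stride-1 convs followed by a 2x2 stride-2 pool
rf = receptive_field([(3, 1), (3, 1), (3, 1), (2, 2)])
```

If the receptive field (here 8 pixels) is smaller than the meteorological features of interest, no amount of training lets the network see them, which is why the concept matters for network design.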
-
We present an interactive HPC framework for coupled fire and weather simulations. The system is suitable for urgent simulations and forecast of wildfire propagation and smoke. It does not require expert knowledge to set up and run the forecasts. The core of the system is a coupled weather, wildland fire, fuel moisture, and smoke model, running in an interactive workflow and data management system. The system automates job setup, data acquisition, preprocessing, and simulation on an HPC cluster. It provides animated visualization of the results on a dedicated mapping portal in the cloud, and as GIS files or Google Earth KML files. The system also serves as an extensible framework for further research, including data assimilation and applications of machine learning to initialize the simulations from satellite data. Index Terms—WRF-SFIRE, coupled atmosphere-fire model, MODIS, VIIRS, satellite data, fire arrival time, data assimilation, machine learning