
Title: Daytime Rainy Cloud Detection and Convective Precipitation Delineation Based on a Deep Neural Network Method Using GOES-16 ABI Images
Precipitation, especially convective precipitation, is highly associated with hydrological disasters (e.g., floods and droughts) that have negative impacts on agricultural productivity, society, and the environment. To mitigate these negative impacts, it is crucial to monitor the precipitation status in real time. The new Advanced Baseline Imager (ABI) onboard the GOES-16 satellite provides such a precipitation product at higher spatiotemporal and spectral resolutions, especially during the daytime. This research proposes a deep neural network (DNN) method to classify rainy and non-rainy clouds based on brightness temperature differences (BTDs) and reflectances (Ref) derived from ABI. Convective and stratiform rain clouds are also separated using similar spectral parameters that express the characteristics of cloud properties. The precipitation events used for training and validation are obtained from the IMERG V05B data, covering the southeastern coast of the U.S. during the 2018 rainy season. The performance of the proposed method is compared with that of traditional machine learning methods, including support vector machines (SVMs) and random forest (RF). For rainy-area detection, the DNN method outperformed the other methods, with a critical success index (CSI) of 0.71 and a probability of detection (POD) of 0.86. For convective precipitation delineation, the DNN models also show better performance, with a CSI of 0.58 and a POD of 0.72. This automatic cloud classification system could be deployed for extreme rainfall event detection, real-time forecasting, and decision-making support in rainfall-related disasters.
Journal Name:
Remote Sensing
Sponsoring Org:
National Science Foundation
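The CSI and POD scores reported above come from a standard 2×2 contingency table of predicted versus observed rainy pixels. A minimal sketch of how such scores are computed (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def contingency_scores(pred_rain, obs_rain):
    """Compute POD and CSI from binary rain / no-rain masks.

    pred_rain, obs_rain: boolean arrays of the same shape
    (True = rainy pixel). Returns (POD, CSI).
    """
    hits = np.sum(pred_rain & obs_rain)           # predicted and observed
    misses = np.sum(~pred_rain & obs_rain)        # observed but missed
    false_alarms = np.sum(pred_rain & ~obs_rain)  # predicted but not observed
    pod = hits / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return pod, csi
```

POD measures the fraction of observed rainy pixels that were detected, while CSI additionally penalizes false alarms, which is why it is the stricter of the two scores.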
More Like this
  1. Abstract. An ability to accurately detect convective regions is essential for initializing models for short-term precipitation forecasts. Radar data are commonly used to detect convection, but radars that provide high-temporal-resolution data are mostly available over land, and the quality of the data tends to degrade over mountainous regions. On the other hand, geostationary satellite data are available nearly anywhere and in near-real time. Current operational geostationary satellites, the Geostationary Operational Environmental Satellite-16 (GOES-16) and Satellite-17, provide high-spatial- and high-temporal-resolution data but only of cloud top properties; 1 min data, however, allow us to observe convection from visible and infrared data even without vertical information of the convective system. Existing detection algorithms using visible and infrared data look for static features of convective clouds, such as overshooting tops or lumpy cloud top surfaces, or for cloud growth that occurs over periods of 30 min to an hour. This study represents a proof of concept that artificial intelligence (AI) is able, when given high-spatial- and high-temporal-resolution data from GOES-16, to learn physical properties of convective clouds and automate the detection process. A neural network model with convolutional layers is proposed to identify convection from the high-temporal-resolution GOES-16 data. The model takes five temporal images from channels 2 (0.65 µm) and 14 (11.2 µm) as inputs and produces a map of convective regions. In order to provide products comparable to the radar products, it is trained against Multi-Radar Multi-Sensor (MRMS), a radar-based product that uses a rather sophisticated method to classify precipitation types. The two channels from GOES-16, related to cloud optical depth (channel 2) and cloud top height (channel 14), are expected to best represent features of convective clouds: high reflectance, lumpy cloud top surface, and low cloud top temperature.
The model has correctly learned those features of convective clouds and resulted in a reasonably low false alarm ratio (FAR) and high probability of detection (POD). However, FAR and POD can vary depending on the threshold, and a proper threshold needs to be chosen based on the purpose.
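The threshold dependence of FAR and POD noted above can be made concrete with a small sweep over candidate detection thresholds. This sketch assumes a per-pixel convection-probability map from such a model and a binary radar-derived mask; the names and layout are illustrative, not the authors' code:

```python
import numpy as np

def far_pod_curve(prob_map, obs_mask, thresholds):
    """For each threshold, binarize the probability map and report
    (threshold, FAR, POD) so a user can pick an operating point."""
    results = []
    for t in thresholds:
        pred = prob_map >= t
        hits = np.sum(pred & obs_mask)
        false_alarms = np.sum(pred & ~obs_mask)
        misses = np.sum(~pred & obs_mask)
        far = false_alarms / max(hits + false_alarms, 1)  # fraction of detections that are wrong
        pod = hits / max(hits + misses, 1)                # fraction of observed convection detected
        results.append((t, far, pod))
    return results
```

Raising the threshold typically lowers both FAR and POD, so the choice is a trade-off driven by the application (e.g., tolerating more false alarms when missed convection is costly).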
  2. Abstract

    Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite having high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The performance of the proposed model is evaluated during the summers of 2012 and 2013 over the central CONUS at a spatial resolution of 0.08° and at an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates than the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the detection skill of the model. Furthermore, the root-mean-square error (RMSE) of the rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data for PERSIANN-CNN was lower than that of PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), demonstrating the estimation accuracy of the proposed model.
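The RMSE comparison against the Stage IV gauge-radar reference is the usual root-mean-square error over collocated grid cells; a minimal sketch (function name and inputs illustrative):

```python
import numpy as np

def rmse(estimates, reference):
    """Root-mean-square error of rainfall estimates against a
    reference field (e.g., gauge-radar analysis), both flattened
    to matching 1-D arrays of rain rates."""
    est = np.asarray(estimates, dtype=float)
    ref = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((est - ref) ** 2)))
```

Because RMSE squares the errors, it weights large misses on heavy-rain cells more strongly than many small errors on light-rain cells.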

  3. Abstract

    There is a need for long-term observations of cloud and precipitation fall speeds in validating and improving rainfall forecasts from climate models. To this end, the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) user facility Southern Great Plains (SGP) site at Lamont, Oklahoma, hosts five ARM Doppler lidars that can measure cloud and aerosol properties. In particular, the ARM Doppler lidars record Doppler spectra that contain information about the fall speeds of cloud and precipitation particles. However, due to bandwidth and storage constraints, the Doppler spectra are not routinely stored. This calls for the automation of cloud and rain detection in ARM Doppler lidar data so that the spectral data in clouds can be selectively saved and further analyzed. During the ARMing the Edge field experiment, a Waggle node capable of performing machine learning applications in situ was deployed at the ARM SGP site for this purpose. In this paper, we develop and test four algorithms for the Waggle node to automatically classify ARM Doppler lidar data. We demonstrate that supervised learning using a ResNet50-based classifier will classify 97.6% of the clear-air images and 94.7% of cloudy images correctly, outperforming traditional peak detection methods. We also show that a convolutional autoencoder paired with k-means clustering identifies 10 clusters in the ARM Doppler lidar data. Three clusters correspond to mostly clear conditions with scattered high clouds, and seven others correspond to cloudy conditions with varying cloud-base heights.
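The unsupervised step pairs the autoencoder's latent vectors with k-means clustering. As a hedged illustration (not the authors' code), a plain NumPy k-means over embedding vectors looks like this:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means on embedding vectors (e.g., autoencoder latents
    of lidar images). X: float array of shape (n_samples, n_features).
    Returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```

In practice one would run k-means on the encoder outputs for many lidar scans and then inspect each cluster's members, which is how the clear versus cloudy cluster interpretations above would be assigned.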

  4. Abstract

    Spatial aggregation of deep convection and its possible role in larger-scale atmospheric behavior have received growing attention. Here we seek aggregation-correlated statistical properties of convective events in 5° × 5° boxes over the tropical Indian Ocean. Events are identified by box-averaged rainfall exceeding 5 mm day⁻¹ at the center of a 4-day time window, and aggregation is estimated by an index [simple convective aggregation index (SCAI)] based on contiguous cold cloud areas and their geometrical distances in infrared imagery. A physical framework using gross moist stability (GMS) helps to interpret relationships between aggregation, box-scale ascent profiles, moist static energy budgets, and time evolution both within composite events and on longer time scales. For a given precipitation rate, more-aggregated events (with fewer and larger cloud objects on average) exhibit a drier area mean, greater horizontal gradient of moisture, more bottom-heavy ascent profile, and a greater prevalence of low-altitude cloud tops, especially for lower rain rates. In the GMS budget, this bottom-heavy ascent implies net energy import into the atmospheric column during the 4-day event composite. Consistently, net energy variations filtered to reveal longer time scales do indeed exhibit more-aggregated rain events in their growth phase than in their flat and decaying phases. More-aggregated scenes also have more drying by analysis than less-aggregated scenes in MERRA-2's assimilation budgets. This suggests that parameterized convection (lacking any organization effect) is raining out less water than nature's real, aggregated convection in such scenes.
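The SCAI combines the number of cloud objects in the box with the geometric mean of their pairwise centroid distances (in Tobin et al.'s formulation, SCAI = (N / N_max) · (D0 / L) · 1000, with lower values indicating stronger aggregation). A sketch under the assumption that object centroids and the box length are already known, with illustrative names:

```python
from itertools import combinations
import numpy as np

def scai(centroids, n_max, box_length_km):
    """Simple convective aggregation index for one scene.

    centroids: list of (x, y) cloud-object centroids in km
    n_max: maximum possible number of objects in the box
    box_length_km: characteristic box length L
    """
    n = len(centroids)
    if n < 2:
        return 0.0  # no pair distances to average (degenerate case, by assumption)
    # D0: geometric mean of all pairwise centroid distances.
    dists = [np.hypot(a[0] - b[0], a[1] - b[1])
             for a, b in combinations(centroids, 2)]
    d0 = float(np.exp(np.mean(np.log(dists))))
    return (n / n_max) * (d0 / box_length_km) * 1000.0
```

Fewer, larger, closer cloud objects shrink both factors, so a more-aggregated scene as described above yields a smaller SCAI.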

  5. Abstract

    Stochastic parameterizations are increasingly becoming skillful in representing unresolved atmospheric processes for global climate models. The stochastic multicloud model, used to simulate the life cycle of the three most common cloud types (cumulus congestus, deep convective, and stratiform) in tropical convective systems, is one example. In this model, these clouds interact with each other and with their environment according to intuitive‐probabilistic rules determined by a set of predictors, depending on the large‐scale atmospheric state and a set of transition time scale parameters. Here we use a Bayesian statistical method to infer these parameters from radar data. The Bayesian approach is applied to precipitation data collected by the Shared Mobile Atmospheric Research and Teaching Radar truck‐mounted C‐band radar located in the Maldives archipelago, while the corresponding large‐scale predictors were derived from meteorological soundings taken during the Dynamics of the Madden‐Julian Oscillation field campaign. The transition time scales were inferred from three different phases of the Madden‐Julian Oscillation (suppressed, initiation, and active) and compared with previous studies. The performance of the stochastic multicloud model is also assessed, in a stand‐alone mode, where the cloud model is forced directly by the observed predictors without feedback into the environmental variables. The results showed a wide spread in the inferred parameter values due in part to the lack of the desired sensitivity of the model to the predictors and the shortness of the training periods that did not include both active and suppressed convection phases simultaneously. Nonetheless, the resemblance of the stand‐alone simulated cloud fraction time series to the radar data is encouraging.
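Bayesian inference of a transition time scale can be illustrated, in a much-simplified form, by a random-walk Metropolis sampler for the rate of an exponential waiting-time model. This is a toy sketch, not the paper's actual method; all names and the flat prior on the log-rate are assumptions:

```python
import numpy as np

def metropolis_rate(waiting_times, n_samples=2000, step=0.1, seed=0):
    """Sample the posterior of the rate 1/tau of an exponential
    waiting-time model for cloud-type transitions, using a
    random-walk Metropolis chain on the log-rate (flat prior there)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(waiting_times, dtype=float)

    def log_post(log_rate):
        rate = np.exp(log_rate)
        # Exponential log-likelihood: n*log(rate) - rate * sum(t).
        return len(t) * np.log(rate) - rate * t.sum()

    cur = 0.0  # start at rate = 1
    samples = []
    for _ in range(n_samples):
        prop = cur + step * rng.standard_normal()
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < log_post(prop) - log_post(cur):
            cur = prop
        samples.append(np.exp(cur))
    return np.array(samples)
```

With enough data the posterior concentrates near the maximum-likelihood rate n / Σt; with short training periods, as the abstract notes, the spread of the inferred parameters stays wide.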
