

Title: Superresolution of GOES-16 ABI Bands to a Common High Resolution with a Convolutional Neural Network
Abstract

Superresolution is the general task of artificially increasing the spatial resolution of an image. The recent surge in machine learning (ML) research has yielded many promising ML-based approaches for performing single-image superresolution, including applications to satellite remote sensing. We develop a convolutional neural network (CNN) to superresolve the 1- and 2-km bands on the GOES-R series Advanced Baseline Imager (ABI) to a common high resolution of 0.5 km. Access to 0.5-km imagery from ABI band 2 enables the CNN to realistically sharpen lower-resolution bands without significant blurring. We first train the CNN on a proxy task that requires only ABI imagery: degrading the resolution of ABI bands and training the CNN to restore the original imagery. Comparisons at reduced resolution and at full resolution with Landsat-8/Landsat-9 observations illustrate that the CNN produces images with realistic high-frequency detail that is not present in a bicubic interpolation baseline. Estimating all ABI bands at 0.5-km resolution makes it easier to combine information across bands without reconciling differences in spatial resolution. However, more analysis is needed to determine impacts on derived products or multispectral imagery that use superresolved bands. This approach is extensible to other remote sensing instruments that have bands with different spatial resolutions and requires only a small amount of data and knowledge of each channel's modulation transfer function.
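To make the proxy task concrete, here is a minimal, hypothetical sketch of the idea: blur and downsample a high-resolution band (a Gaussian kernel standing in for the channel's modulation transfer function), then train a small residual CNN to restore the original. All names, layer sizes, and constants below are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of the proxy task: degrade resolution, learn to restore.
# A Gaussian blur stands in for the channel's modulation transfer function (MTF).
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(x, factor=2, mtf_sigma=1.0):
    """Blur with a Gaussian proxy for the band's MTF, then downsample."""
    k = int(4 * mtf_sigma) | 1                      # small odd kernel size
    ax = (torch.arange(k) - k // 2).float()
    g1d = torch.exp(-ax ** 2 / (2 * mtf_sigma ** 2))
    kernel = g1d[:, None] * g1d[None, :]
    kernel = (kernel / kernel.sum()).expand(x.shape[1], 1, k, k).contiguous()
    x = F.conv2d(x, kernel, padding=k // 2, groups=x.shape[1])  # depthwise blur
    return x[..., ::factor, ::factor]               # decimate to lower resolution

class SRNet(nn.Module):
    """Tiny residual CNN: bicubic upsample plus learned high-frequency detail."""
    def __init__(self, channels=1, factor=2):
        super().__init__()
        self.factor = factor
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1))
    def forward(self, x):
        up = F.interpolate(x, scale_factor=self.factor, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)                   # residual over the baseline

model = SRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
hi_res = torch.rand(8, 1, 64, 64)                   # stand-in ABI patches
loss = F.l1_loss(model(degrade(hi_res)), hi_res)    # restore the original imagery
loss.backward()
opt.step()
```

Learning a residual on top of a bicubic upsample means the network only has to supply the high-frequency detail that interpolation lacks, which is exactly the quantity the paper's bicubic-baseline comparison isolates.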

Significance Statement

Satellite remote sensing instruments often have bands with different spatial resolutions. This work shows that we can artificially increase the resolution of some lower-resolution bands by taking advantage of the texture of higher-resolution bands on the GOES-16 ABI instrument using a convolutional neural network. This may help reconcile differences in spatial resolution when combining information across bands, but future analysis is needed to precisely determine impacts on derived products that might use superresolved bands.

 
Award ID(s):
2019758
PAR ID:
10499816
Author(s) / Creator(s):
Publisher / Repository:
American Meteorological Society
Date Published:
Journal Name:
Artificial Intelligence for the Earth Systems
Volume:
3
Issue:
2
ISSN:
2769-7525
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This study investigates high-frequency mapping of downward shortwave radiation (DSR) at the Earth's surface using the Advanced Baseline Imager (ABI) instrument on board the Geostationary Operational Environmental Satellite-R Series (GOES-R). The existing GOES-R DSR product (DSRABI) offers hourly temporal resolution and spatial resolution of 0.25°. To enhance these resolutions, we explore machine learning (ML) for DSR estimation at the native temporal resolution of the GOES-R Level-2 cloud and moisture imagery product (5 min) and its native spatial resolution of 2 km at nadir. We compared four common ML regression models through the leave-one-out cross-validation algorithm for robust model assessment against ground measurements from the AmeriFlux and SURFRAD networks. Results show that gradient boosting regression (GBR) achieves the best performance (R² = 0.916, RMSE = 88.05 W·m⁻²) with more efficient computation compared to long short-term memory, which exhibited similar performance. DSR estimates from the GBR model through the ABI live imaging of vegetated ecosystems workflow (DSRALIVE) outperform DSRABI across various temporal resolutions and sky conditions. DSRALIVE agreement with ground measurements at SURFRAD networks exhibits high accuracy at high temporal resolutions (5-min intervals), with R² exceeding 0.85 and RMSE = 122 W·m⁻². We conclude that GBR offers a promising approach for high-frequency DSR mapping from GOES-R, enabling improved applications for near-real-time monitoring of terrestrial carbon and water fluxes.
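As an illustration of the evaluation protocol described above, the sketch below scores a gradient boosting regressor with leave-one-group-out cross-validation, holding out one ground station per fold. The feature matrix, station grouping, and every name here are placeholders rather than the study's pipeline.

```python
# Hypothetical sketch: gradient boosting regression for DSR, evaluated with
# leave-one-station-out cross-validation against ground measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.random((500, 6))            # stand-in ABI-derived predictors per 5-min scene
y = rng.random(500) * 1000          # stand-in ground-measured DSR (W/m^2)
groups = rng.integers(0, 10, 500)   # station ID; each fold holds out one station

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    gbr = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    preds[test_idx] = gbr.predict(X[test_idx])

rmse = mean_squared_error(y, preds) ** 0.5
print(f"R^2 = {r2_score(y, preds):.3f}, RMSE = {rmse:.1f} W/m^2")
```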
  2. State-of-the-art deep learning technology has been successfully applied to relatively small selected areas of very high spatial resolution (0.15 and 0.25 m) optical aerial imagery acquired by a fixed-wing aircraft to automatically characterize ice-wedge polygons (IWPs) in the Arctic tundra. However, any mapping of IWPs at regional to continental scales requires images acquired on different sensor platforms (particularly satellite) and a refined understanding of the performance stability of the method across sensor platforms through reliable evaluation assessments. In this study, we examined the transferability of a deep learning Mask Region-Based Convolutional Neural Network (R-CNN) model for mapping IWPs in satellite remote sensing imagery (~0.5 m) covering 272 km² and unmanned aerial vehicle (UAV) (0.02 m) imagery covering 0.32 km². Multispectral images were obtained from the WorldView-2 satellite sensor (pan-sharpened to ~0.5 m) and from a 20 MP CMOS camera onboard a UAV, respectively. The training dataset included 25,489 and 6,022 manually delineated IWPs from satellite and fixed-wing aircraft aerial imagery near the Arctic Coastal Plain, northern Alaska. Quantitative assessments showed that individual IWPs were correctly detected at up to 72% and 70%, and delineated at up to 73% and 68% F1 score accuracy levels for satellite and UAV images, respectively. Expert-based qualitative assessments showed that IWPs were correctly detected at good (40–60%) and excellent (80–100%) accuracy levels for satellite and UAV images, respectively, and delineated at the excellent (80–100%) level for both. We found that (1) regardless of spatial resolution and spectral bands, the deep learning Mask R-CNN model effectively mapped IWPs in both satellite and UAV images; (2) the model achieved better accuracy in detection with finer image resolution, such as UAV imagery, yet better accuracy in delineation with coarser image resolution, such as satellite imagery; (3) increasing the number of training data with different resolutions between the training and actual application imagery does not necessarily result in better performance of the Mask R-CNN in IWP mapping; and (4) overall, the model underestimates the total number of IWPs, particularly disjoint/incomplete IWPs.
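Adapting Mask R-CNN to a single ice-wedge-polygon class follows the standard torchvision transfer-learning pattern; below is a hypothetical sketch of that setup (the study's own training configuration and data are not reproduced here).

```python
# Hypothetical sketch: adapt torchvision's Mask R-CNN to one IWP class.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2                                    # background + ice-wedge polygon
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box and mask heads for the single IWP class before fine-tuning.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)

model.eval()
with torch.no_grad():
    patch = torch.rand(3, 512, 512)                # stand-in image tile
    out = model([patch])[0]                        # boxes, labels, scores, masks
print(out["masks"].shape)                          # (N, 1, 512, 512) instance masks
```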
  3. Abstract

    The terrestrial carbon cycle varies dynamically on hourly to weekly scales, making it difficult to observe. Geostationary ("weather") satellites like the Geostationary Operational Environmental Satellite-R Series (GOES-R) deliver near-hemispheric imagery at a ten-minute cadence. The Advanced Baseline Imager (ABI) aboard GOES-R measures visible and near-infrared spectral bands that can be used to estimate land surface properties and carbon dioxide flux. However, GOES-R data are designed for real-time dissemination and are difficult to link with eddy covariance time series of land-atmosphere carbon dioxide exchange. We compiled three-year time series of GOES-R land surface attributes, including visible and near-infrared reflectances, land surface temperature (LST), and downwelling shortwave radiation (DSR), at 314 ABI fixed grid pixels containing eddy covariance towers. We demonstrate how to best combine satellite and in-situ datasets and show how ABI attributes useful for ecosystem monitoring vary across space and time. By connecting observation networks that infer rapid changes to the carbon cycle, we can gain a richer understanding of the processes that control it.
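One way to picture the satellite/in-situ pairing described above is a nearest-timestamp join between an ABI pixel time series and a tower's flux record; the column names and tolerance below are hypothetical, not the compiled dataset's schema.

```python
# Hypothetical sketch: pair ABI pixel observations with tower flux records.
import pandas as pd

abi = pd.DataFrame({
    "time": pd.date_range("2020-07-01", periods=6, freq="10min"),
    "nir_reflectance": [0.31, 0.30, 0.33, 0.35, 0.34, 0.32],
    "lst_k": [295.1, 295.4, 296.0, 296.8, 297.2, 297.5],
})
towers = pd.DataFrame({
    "time": pd.date_range("2020-07-01", periods=2, freq="30min"),
    "nee_umol_m2_s": [-4.2, -5.1],                 # stand-in CO2 flux values
})

# Match each 30-min flux average to the nearest ABI observation within 5 min.
paired = pd.merge_asof(towers.sort_values("time"), abi.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("5min"))
print(paired)
```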

     
  4. Accurate precipitation retrieval using satellite sensors is still challenging due to the limitations on spatio-temporal sampling of the applied parametric retrieval algorithms. In this research, we propose a deep learning framework for precipitation retrieval using observations from the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM) on the GOES-R satellite series. In particular, two deep convolutional neural network (CNN) models are designed to detect and estimate precipitation using the cloud-top brightness temperature from ABI and the lightning flash rate from GLM. Precipitation estimates from the ground-based Multi-Radar/Multi-Sensor (MRMS) system are used as the target labels in the training phase. The experimental results show that, in the testing phase, the proposed framework offers more accurate precipitation estimates than the current operational Rainfall Rate Quantitative Precipitation Estimate (RRQPE) product from GOES-R.
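The two-model design, one CNN to detect precipitation and a second to estimate its rate from stacked ABI brightness temperature and GLM flash-rate fields, can be sketched as follows. Architectures and names are illustrative only; MRMS labels enter only through the losses indicated in the comments.

```python
# Hypothetical sketch: detection CNN + estimation CNN for precipitation.
import torch
import torch.nn as nn

def small_cnn(out_channels):
    return nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # ch 0: ABI Tb, ch 1: GLM rate
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_channels, 1))

detector = small_cnn(1)     # per-pixel rain / no-rain logit
estimator = small_cnn(1)    # per-pixel rain rate (mm/h)

x = torch.rand(4, 2, 128, 128)                       # stand-in input fields
rain_mask = torch.sigmoid(detector(x)) > 0.5
rate = estimator(x).clamp(min=0) * rain_mask         # rate only where raining
# Training would use MRMS precipitation as labels: binary cross-entropy for
# the detector, and a regression loss (e.g., MSE) for the estimator on
# raining pixels.
```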
  5. Cloud detection is an indispensable pre-processing step in remote sensing image analysis workflows. Most traditional rule-based and machine-learning-based algorithms utilize low-level features of clouds and classify individual cloud pixels based on their spectral signatures. Cloud detection using such approaches can be challenging due to a multitude of factors, including harsh lighting conditions, the presence of thin clouds, the context of surrounding pixels, and complex spatial patterns. In recent studies, deep convolutional neural networks (CNNs) have shown outstanding results in the computer vision domain, better capturing the texture, shape, and context of images. In this study, we propose a deep learning CNN approach to detect cloud pixels from medium-resolution satellite imagery. The proposed CNN accounts for both low-level features, such as color and texture information, and high-level features extracted from successive convolutions of the input image. We prepared a cloud-pixel dataset of approximately 7,273 randomly sampled 320 × 320 pixel image patches taken from a total of 121 Landsat-8 (30 m) and Sentinel-2 (20 m) image scenes, all of which come with cloud masks. From the available data channels, only the blue, green, red, and NIR bands are fed into the model. The CNN model was trained on 5,300 image patches and validated on 1,973 independent image patches. As the final output from our model, we extract a binary mask of cloud pixels and non-cloud pixels. The results are benchmarked against established cloud detection methods using standard accuracy metrics.
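A minimal, hypothetical sketch of this patch-based setup: a small fully convolutional network maps 4-band (blue, green, red, NIR) 320 × 320 patches to a per-pixel cloud mask. The layer sizes are illustrative, not the study's architecture.

```python
# Hypothetical sketch: per-pixel cloud classification from 4-band patches.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                 # widen spatial context
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 1, 1))                             # per-pixel cloud logit

patches = torch.rand(8, 4, 320, 320)                 # B, G, R, NIR channels
target = torch.randint(0, 2, (8, 1, 320, 320)).float()  # stand-in reference mask
logits = model(patches)
loss = nn.BCEWithLogitsLoss()(logits, target)        # train against cloud masks
cloud_mask = torch.sigmoid(logits) > 0.5             # binary cloud / non-cloud
```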

     