Title: Leveraging spatial textures, through machine learning, to identify aerosols and distinct cloud types from multispectral observations
Abstract. Current cloud and aerosol identification methods for multispectral radiometers, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS), employ multichannel spectral tests on individual pixels (i.e., fields of view). Spatial information has entered cloud and aerosol algorithms primarily through statistical parameters, such as nonuniformity tests of surrounding pixels, with cloud classification provided by multispectral microphysical retrievals such as phase and cloud top height. With these methodologies there is uncertainty in identifying optically thick aerosols, since aerosols and clouds have similar spectral properties in coarse-spectral-resolution measurements. Furthermore, identifying cloud regimes (e.g., stratiform, cumuliform) from spectral measurements alone is difficult, since low-altitude cloud regimes have similar spectral properties. Recent advances in computer vision using deep neural networks provide a new opportunity to better leverage the coherent spatial information in multispectral imagery. Using machine learning techniques combined with a new methodology to create the necessary training data, we demonstrate improvements in the discrimination between clouds and severe aerosols and an expanded capability to classify cloud types. The labeled training dataset was created with an adapted NASA Worldview platform that provides an efficient user interface for assembling a human-labeled database of cloud and aerosol types. The convolutional neural network (CNN) labeling accuracy for aerosols and cloud types was quantified using independent Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and MODIS cloud and aerosol products. By harnessing CNNs with this unique labeled dataset, we demonstrate improved identification of aerosols and distinct cloud types from MODIS and VIIRS images compared to a per-pixel spectral and standard deviation thresholding method. The paper concludes with case studies that compare the CNN methodology results with the MODIS cloud and aerosol products.
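As an illustration of the patch-based classification idea described above, the sketch below shows a small convolutional network that maps a multispectral image patch to cloud/aerosol-type logits. The band count, patch size, and class list are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (not the authors' code): a small CNN that classifies
# multispectral image patches into aerosol / cloud-type categories.
# Band count, patch size, and the class list are illustrative assumptions.
import torch
import torch.nn as nn

N_BANDS = 8        # assumed number of MODIS/VIIRS channels used
N_CLASSES = 6      # e.g., clear, dust, smoke, stratiform, cumuliform, cirrus
PATCH = 64         # assumed patch edge length in pixels

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_BANDS, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # spatial texture -> global descriptor
        )
        self.classifier = nn.Linear(128, N_CLASSES)

    def forward(self, x):              # x: (batch, N_BANDS, PATCH, PATCH)
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
logits = model(torch.randn(4, N_BANDS, PATCH, PATCH))   # dummy batch
print(logits.shape)                                      # torch.Size([4, 6])
```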
Award ID(s):
2023109 1934637 1930049
PAR ID:
10284831
Date Published:
Journal Name:
Atmospheric Measurement Techniques
Volume:
13
Issue:
10
ISSN:
1867-8548
Page Range / eLocation ID:
5459 to 5480
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract. We trained two Random Forest (RF) machine learning models for cloud mask and cloud thermodynamic-phase detection using spectral observations from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP). Observations from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) were carefully selected to provide reference labels. The two RF models were trained for all-day and daytime-only conditions using a 4-year collocated VIIRS and CALIOP dataset from 2013 to 2016. Due to the orbit difference, the collocated CALIOP and SNPP VIIRS training samples cover a broad viewing zenith angle range, which is a great benefit to overall model performance. The all-day model uses three VIIRS infrared (IR) bands (8.6, 11, and 12 µm), and the daytime model uses five near-IR (NIR) and shortwave-IR (SWIR) bands (0.86, 1.24, 1.38, 1.64, and 2.25 µm) together with the three IR bands to detect clear, liquid water, and ice cloud pixels. Up to seven surface types, i.e., ocean water, forest, cropland, grassland, snow and ice, barren desert, and shrubland, were considered separately to enhance performance for both models. Detection of cloudy pixels and thermodynamic phase with the two RF models was compared against collocated CALIOP products from 2017. It is shown that, when using a conservative screening process that excludes the most challenging cloudy pixels for passive remote sensing, the two RF models have high accuracy rates in comparison to the CALIOP reference for both cloud detection and thermodynamic phase. Other existing SNPP VIIRS and Aqua MODIS cloud mask and phase products are also evaluated, with results showing that the two RF models and the MODIS MYD06 optical property phase product are the top three algorithms with respect to lidar observations during the daytime. During the nighttime, the RF all-day model works best for both cloud detection and phase, particularly for pixels over snow and ice surfaces. The present RF models can be extended to other similar passive instruments if training samples can be collected from CALIOP or other lidars. However, the quality of reference labels and potential sampling issues that may impact model performance would need further attention.
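    A minimal sketch of the Random Forest setup described above, using scikit-learn with synthetic stand-ins for the collocated VIIRS/CALIOP data; the feature set mirrors the three IR bands of the all-day model, but the labels, sample counts, and hyperparameters are assumptions.

```python
# Illustrative sketch, not the operational code: a Random Forest trained on
# VIIRS-like features with CALIOP-style labels (clear / liquid / ice).
# The data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 10_000
# All-day model: three IR bands (8.6, 11, 12 um) as features (stand-in values)
X = rng.normal(size=(n, 3))
y = rng.integers(0, 3, size=n)   # 0 = clear, 1 = liquid, 2 = ice (stand-in labels)

rf = RandomForestClassifier(n_estimators=200, max_depth=12,
                            n_jobs=-1, random_state=0)
rf.fit(X[:8000], y[:8000])
print(classification_report(y[8000:], rf.predict(X[8000:])))
```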
  2.
    Identifying dust aerosols from passive satellite images is of great interest for many applications. In this study, we developed five different machine-learning (ML) based algorithms, including Logistic Regression, K Nearest Neighbor, Random Forest (RF), Feed Forward Neural Network (FFNN), and Convolutional Neural Network (CNN), to identify dust aerosols in daytime satellite images from the Visible Infrared Imaging Radiometer Suite (VIIRS) under cloud-free conditions on a global scale. In order to train the ML algorithms, we collocated the state-of-the-art dust detection product from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) with the VIIRS observations along the CALIOP track. The 16 VIIRS M-band observations, with center wavelengths ranging from deep blue to thermal infrared, together with solar-viewing geometries and pixel times and locations, are used as the predictor variables. Four different sets of training input data are constructed based on different combinations of VIIRS pixels and predictor variables. The validation and comparison results based on the collocated CALIOP data indicate that the FFNN method using all available predictor variables performs best among all methods, with an average dust detection accuracy of about 81%, 89%, and 85% over land, ocean, and the whole globe, respectively, compared with collocated CALIOP. When applied to off-track VIIRS pixels, the FFNN method retrieves geographical distributions of dust that are in good agreement with on-track results as well as CALIOP statistics. For further evaluation, we compared our results based on the ML algorithms to NOAA's Aerosol Detection Product (ADP), which classifies dust, smoke, and ash using physics-based methods. The comparison reveals both similarities and differences. Overall, this study demonstrates the great potential of ML methods for dust detection and shows that these methods can be trained on the CALIOP track and then applied to whole VIIRS granules.
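    The feed-forward network idea can be sketched as below; the layer widths and the exact predictor layout (16 M-bands plus geometry, time, and location) are assumptions rather than the paper's configuration.

```python
# Hedged sketch of the per-pixel FFNN dust classifier: VIIRS M-band
# observations plus geometry/time/location as predictors. Layer sizes are
# illustrative, not the paper's values.
import torch
import torch.nn as nn

N_FEATURES = 16 + 4 + 3   # 16 M-bands + solar/viewing geometry + time/lat/lon (assumed split)

ffnn = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),          # dust vs. non-dust logits
)
pixels = torch.randn(128, N_FEATURES)   # dummy batch of VIIRS pixels
probs = torch.softmax(ffnn(pixels), dim=1)
print(probs[:3])                        # per-pixel class probabilities
```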
  3. Satellite remote sensing of aerosol optical depth (AOD) is essential for detection, characterization, and forecasting of wildfire smoke. In this work, we evaluate the AOD (550 nm) retrievals during the extreme wildfire events over the western U.S. in September 2020. Three products are analyzed, including the Moderate Resolution Imaging Spectroradiometer (MODIS) Multi-Angle Implementation of Atmospheric Correction (MAIAC) product collections C6.0 and C6.1, and the NOAA-20 Visible Infrared Imaging Radiometer Suite (VIIRS) AOD from the NOAA Enterprise Processing System (EPS) algorithm. Compared with Aerosol Robotic Network (AERONET) data, all three products show strong linear correlations, with MAIAC C6.1 and VIIRS presenting overall low bias (<0.06). The accuracy of MAIAC C6.1 is substantially improved relative to MAIAC C6.0, which drastically underestimated AOD over thick smoke; this validates the effectiveness of the updates made in MAIAC C6.1 to better represent smoke aerosol optical properties. VIIRS AOD exhibits uncertainty comparable to MAIAC C6.1, with a slight tendency toward increased positive bias over the AERONET AOD range of 0.5–3.0. Averaging coincident retrievals from MAIAC C6.1 and VIIRS yields a lower root mean square error and higher correlation than either individual product, demonstrating the benefit of blending these datasets. MAIAC C6.1 and VIIRS are further compared to provide insights into their retrieval strategies. When gridded at 0.1° resolution, MAIAC C6.1 and VIIRS provide similar monthly AOD distribution patterns, with the latter exhibiting a slightly higher domain average. On a daily scale, over thick plumes near fire sources, MAIAC C6.1 reports more valid retrievals where VIIRS tends to have retrievals designated as low or medium quality, largely due to internal quality checks. Over transported smoke near scattered clouds, VIIRS provides better retrieval coverage than MAIAC C6.1 owing to its higher spatial resolution, pixel-level processing, and less strict cloud masking. These results can serve as a guide for applications of satellite AOD retrievals during wildfire events and provide insights for future improvement of retrieval algorithms under heavy smoke conditions.
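    The evaluation workflow (bias, RMSE, and correlation against AERONET, plus the coincident-average blend) can be illustrated with synthetic numbers; none of the statistics below reproduce the paper's results.

```python
# Toy illustration of the evaluation described above: compare two AOD
# retrievals against AERONET "truth" and their coincident average.
# All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
aeronet = rng.uniform(0.05, 3.0, 500)             # reference AOD
maiac = aeronet + rng.normal(0.00, 0.15, 500)     # retrieval 1 (stand-in)
viirs = aeronet + rng.normal(0.03, 0.15, 500)     # retrieval 2, small + bias (stand-in)
blend = 0.5 * (maiac + viirs)                     # average of coincident retrievals

for name, aod in [("MAIAC", maiac), ("VIIRS", viirs), ("Blend", blend)]:
    bias = np.mean(aod - aeronet)
    rmse = np.sqrt(np.mean((aod - aeronet) ** 2))
    r = np.corrcoef(aod, aeronet)[0, 1]
    print(f"{name}: bias={bias:+.3f}  RMSE={rmse:.3f}  r={r:.3f}")
```

    Because the two synthetic error terms are independent, the blend's RMSE drops by roughly a factor of sqrt(2), which mirrors why averaging coincident retrievals improves the statistics.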
  4. The CloudPatch-7 Hyperspectral Dataset comprises a manually curated collection of hyperspectral images focused on pixel classification of atmospheric cloud classes. This labeled dataset features 380 patches, each a 50x50 pixel grid, derived from 28 larger, unlabeled parent images approximately 5000x1500 pixels in size. Captured using the Resonon PIKA XC2 camera, these images span 462 spectral bands from 400 to 1000 nm. Each patch is extracted from a parent image so that all of its pixels fall within one of seven atmospheric conditions: Dense Dark Cumuliform Cloud, Dense Bright Cumuliform Cloud, Semi-transparent Cumuliform Cloud, Dense Cirroform Cloud, Semi-transparent Cirroform Cloud, Clear Sky - Low Aerosol Scattering (dark), and Clear Sky - Moderate to High Aerosol Scattering (bright). Incorporating contextual information from surrounding pixels enhances pixel classification into these 7 classes, making this dataset a valuable resource for spectral analysis, environmental monitoring, atmospheric science research, and testing machine learning applications that require contextual data. Parent images are very large but can be made available upon request.
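    A minimal sketch of how 50x50 patches might be sliced from a parent hyperspectral cube with the stated band count; the array layout and helper function are hypothetical, and only the patch geometry follows the dataset description.

```python
# Hypothetical patch-extraction helper for a (bands, rows, cols) cube.
# Only the 50x50 patch geometry and 462-band count follow the dataset
# description; everything else is a stand-in.
import numpy as np

def extract_patch(cube: np.ndarray, row: int, col: int, size: int = 50) -> np.ndarray:
    """Return a (bands, size, size) patch whose top-left corner is (row, col)."""
    bands, h, w = cube.shape
    if row + size > h or col + size > w:
        raise ValueError("patch would fall outside the parent image")
    return cube[:, row:row + size, col:col + size]

# Small stand-in cube; real parents are approximately 5000x1500 pixels.
parent = np.zeros((462, 200, 400), dtype=np.float32)
patch = extract_patch(parent, row=100, col=250)
print(patch.shape)   # (462, 50, 50)
```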
  5. Cloud detection is an essential pre-processing step in remote sensing image analysis workflows. Most traditional rule-based and machine-learning-based algorithms utilize low-level features of the clouds and classify individual cloud pixels based on their spectral signatures. Cloud detection with such approaches can be challenging due to a multitude of factors, including harsh lighting conditions, the presence of thin clouds, the context of surrounding pixels, and complex spatial patterns. In recent studies, deep convolutional neural networks (CNNs) have shown outstanding results in the computer vision domain, better capturing the texture, shape, and context of images. In this study, we propose a deep learning CNN approach to detect cloud pixels from medium-resolution satellite imagery. The proposed CNN accounts for both low-level features, such as color and texture information, and high-level features extracted from successive convolutions of the input image. We prepared a cloud-pixel dataset of 7273 randomly sampled 320x320 pixel image patches taken from a total of 121 Landsat-8 (30 m) and Sentinel-2 (20 m) image scenes, which come with cloud masks. From the available data channels, only the blue, green, red, and NIR bands are fed into the model. The CNN model was trained on 5300 image patches and validated on 1973 independent image patches. As the final output, the model produces a binary mask of cloud and non-cloud pixels. The results are benchmarked against established cloud detection methods using standard accuracy metrics.
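    A toy fully convolutional sketch of the binary cloud-masking task described above, consuming the four stated bands (blue, green, red, NIR); the depth and channel counts are illustrative, not the proposed architecture.

```python
# Hedged sketch, not the authors' architecture: a tiny fully convolutional
# network mapping a 4-band (B, G, R, NIR) patch to a per-pixel cloud
# probability, thresholded into a binary mask.
import torch
import torch.nn as nn

seg = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                  # 1x1 conv -> per-pixel logit
)
patch = torch.randn(1, 4, 320, 320)       # blue, green, red, NIR
mask = torch.sigmoid(seg(patch)) > 0.5    # binary cloud / non-cloud mask
print(mask.shape, mask.dtype)             # torch.Size([1, 1, 320, 320]) torch.bool
```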