- Award ID(s):
- 1726023
- NSF-PAR ID:
- 10200872
- Date Published:
- Journal Name:
- Atmospheric Measurement Techniques
- Volume:
- 13
- Issue:
- 5
- ISSN:
- 1867-8548
- Page Range / eLocation ID:
- 2257 to 2277
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Identifying dust aerosols from passive satellite images is of great interest for many applications. In this study, we developed five different machine-learning (ML) algorithms, including Logistic Regression, K Nearest Neighbor, Random Forest (RF), Feed Forward Neural Network (FFNN), and Convolutional Neural Network (CNN), to identify dust aerosols in daytime satellite images from the Visible Infrared Imaging Radiometer Suite (VIIRS) under cloud-free conditions on a global scale. To train the ML algorithms, we collocated the state-of-the-art dust detection product from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) with the VIIRS observations along the CALIOP track. The 16 VIIRS M-band observations, with center wavelengths ranging from deep blue to thermal infrared, together with solar-viewing geometries and pixel times and locations, are used as the predictor variables. Four different sets of training input data are constructed from different combinations of VIIRS pixels and predictor variables. Validation and comparison against the collocated CALIOP data indicate that the FFNN method based on all available predictor variables performs best among all methods: it has an average dust detection accuracy of about 81 %, 89 %, and 85 % over land, ocean, and the whole globe, respectively, relative to collocated CALIOP. When applied to off-track VIIRS pixels, the FFNN method retrieves geographical distributions of dust that are in good agreement with on-track results as well as CALIOP statistics. For further evaluation, we compared our ML results to NOAA's Aerosol Detection Product (ADP), which classifies dust, smoke, and ash using physics-based methods. The comparison reveals both similarities and differences.
Overall, this study demonstrates the great potential of ML methods for dust detection and shows that these methods can be trained along the CALIOP track and then applied to the whole VIIRS granule.
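As a rough, hedged sketch (not the authors' code), the FFNN step above can be illustrated with a tiny one-hidden-layer network trained by gradient descent on synthetic stand-ins for the 16 M-band predictors; the data, labels, layer sizes, and learning rate here are all hypothetical.

```python
import numpy as np

# Toy stand-in for the paper's setup: 16 band-like predictor features per
# pixel and a binary dust / no-dust label. Everything here is synthetic.
rng = np.random.default_rng(0)
n, d, h = 400, 16, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # toy "dust" label

# One hidden layer (tanh) + sigmoid output, full-batch gradient descent
# on binary cross-entropy.
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, 1)); b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2).ravel() - b2))
    return H, p

def bce(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

_, p0 = forward(X)
loss_init = bce(p0, y)
lr = 0.5
for _ in range(500):
    H, p = forward(X)
    g = (p - y) / n                        # dLoss/dlogit for sigmoid + BCE
    dW2 = H.T @ g[:, None]; db2 = g.sum()
    dH = (g[:, None] @ W2.T) * (1.0 - H**2)
    dW1 = X.T @ dH;         db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
loss_final = bce(p, y)
accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

In the study itself the labels come from collocated CALIOP dust detections rather than a synthetic rule, and the trained network is then applied to off-track VIIRS pixels.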
-
Abstract. In this study, we developed a novel algorithm based on collocated Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared (TIR) observations and dust vertical profiles from the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) to simultaneously retrieve dust aerosol optical depth at 10 µm (DAOD10 µm) and the coarse-mode dust effective diameter (Deff) over global oceans. The accuracy of the Deff retrieval is assessed by comparing the dust lognormal volume particle size distribution (PSD) corresponding to the retrieved Deff with the in situ-measured dust PSDs from the AERosol Properties – Dust (AER-D), Saharan Mineral Dust Experiment (SAMUM-2), and Saharan Aerosol Long-Range Transport and Aerosol–Cloud-Interaction Experiment (SALTRACE) field campaigns through case studies. The new DAOD10 µm retrievals were evaluated first through comparisons with the collocated DAOD10.6 µm retrieved from the combined Imaging Infrared Radiometer (IIR) and CALIOP observations from our previous study (Zheng et al., 2022). The pixel-to-pixel comparison of the two DAOD retrievals indicates good agreement (R ∼ 0.7) and a significant (∼50 %) reduction in retrieval uncertainties, largely thanks to the better constraint on dust size. In a climatological comparison, the seasonal and regional (2° × 5°) mean DAOD10 µm retrievals based on our combined MODIS and CALIOP method are in good agreement with the two independent Infrared Atmospheric Sounding Interferometer (IASI) products over three dust transport regions (i.e., North Atlantic (NA; R = 0.9), Indian Ocean (IO; R = 0.8), and North Pacific (NP; R = 0.7)). Using the new retrievals from 2013 to 2017, we performed a climatological analysis of coarse-mode dust Deff over global oceans.
We found that dust Deff over IO and NP is up to 20 % smaller than that over NA. Over NA in summer, we found a ∼50 % reduction in the number of retrievals with Deff > 5 µm from 15 to 35° W and a stable trend of the Deff average at 4.4 µm from 35° W throughout the Caribbean Sea (90° W). Over NP in spring, only ∼5 % of retrieved pixels have Deff > 5 µm from 150 to 180° E, while the mean Deff remains stable at 4.0 µm throughout the eastern NP. To the best of our knowledge, this study is the first to retrieve both DAOD and coarse-mode dust particle size over global oceans for multiple years. This retrieval dataset provides insightful information for evaluating dust longwave radiative effects and coarse-mode dust particle size in models.
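The link between a retrieved Deff and the corresponding lognormal volume PSD can be sketched numerically. For a lognormal distribution, Deff = Dv · exp(−s²/2), where Dv is the volume median diameter and s = ln(σg). The geometric standard deviation below is an illustrative assumption, not a value from the paper; 4.4 µm is the mean Deff quoted above.

```python
import numpy as np

sigma_g = 2.0   # assumed geometric standard deviation (illustrative only)
deff = 4.4      # coarse-mode effective diameter in µm (mean value quoted above)

s = np.log(sigma_g)
dv = deff * np.exp(0.5 * s**2)    # volume median diameter: Deff = Dv * exp(-s^2/2)
dn = dv * np.exp(-3.0 * s**2)     # number median diameter of the same lognormal

# Numerical check: effective diameter = 3rd moment / 2nd moment of the
# number PSD, evaluated on a uniform ln(D) grid (spacing cancels in the ratio).
lnD = np.linspace(np.log(1e-3), np.log(1e3), 8000)
D = np.exp(lnD)
dndlnD = np.exp(-(lnD - np.log(dn))**2 / (2.0 * s**2))  # unnormalized dN/dlnD
deff_numeric = np.sum(D**3 * dndlnD) / np.sum(D**2 * dndlnD)
```

With the assumed width, the numerically integrated effective diameter recovers the input Deff, which is the kind of consistency used when comparing retrieved PSDs against the in situ field-campaign PSDs.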
-
Domain adaptation techniques using deep neural networks have mainly been used to solve the distribution-shift problem in homogeneous domains, where data usually share similar feature spaces and have the same dimensionality. Nevertheless, real-world applications often deal with heterogeneous domains that come from completely different feature spaces with different dimensionalities. In our remote sensing application, two datasets collected by an active sensor and a passive one are heterogeneous. In particular, CALIOP actively measures each atmospheric column; in this study, 25 measured variables/features that are sensitive to cloud phase are used, and they are fully labeled. VIIRS is an imaging radiometer that collects radiometric measurements of the surface and atmosphere in the visible and infrared bands. Recent studies have shown that passive sensors may have difficulty predicting cloud/aerosol types in complicated atmospheres (e.g., overlapping cloud and aerosol layers, cloud over snow/ice surfaces, etc.). To overcome the challenge of cloud property retrieval from passive sensors, we develop a novel VAE-based approach that learns domain-invariant representations capturing the spatial pattern from multiple satellite remote sensing datasets (VDAM), in order to build a domain-invariant cloud property retrieval method that accurately classifies different cloud types (labels) in the passive sensing dataset. We further exploit a weight-based alignment method on the label space to learn a powerful domain adaptation technique pertinent to the remote sensing application. Experiments demonstrate that our method outperforms other state-of-the-art machine learning methods and achieves higher accuracy in cloud property retrieval on the passive satellite dataset.
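The VAE core that such an approach builds on — an encoder emitting a latent mean and log-variance, the reparameterization trick, and a KL penalty toward a shared prior — can be sketched in a few lines. The encoder weights below are random placeholders, and VDAM's spatial encoders and weight-based label alignment are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "CALIOP-like" input: 25 features per sample, matching the abstract's
# fully labeled active-sensor variables. Encoder weights are random
# placeholders, not a trained model.
n, d_in, d_z = 32, 25, 4
x = rng.normal(size=(n, d_in))
W_mu = rng.normal(scale=0.1, size=(d_in, d_z))
W_lv = rng.normal(scale=0.1, size=(d_in, d_z))

# Encoder outputs a mean and log-variance per latent dimension.
mu = x @ W_mu
logvar = x @ W_lv

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Closed-form KL(q(z|x) || N(0, I)), averaged over the batch; this is the
# regularizer that pulls both domains toward one shared latent prior.
kl = -0.5 * np.mean(np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1))
```

Domain invariance comes from training encoders for both sensors against the same prior (plus the alignment losses the abstract describes), so that a classifier fit on the labeled active-sensor latents transfers to the passive-sensor latents.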
-
Abstract. Current cloud and aerosol identification methods for multispectral radiometers, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS), employ multichannel spectral tests on individual pixels (i.e., fields of view). Spatial information has been used in cloud and aerosol algorithms primarily through statistical parameters, such as nonuniformity tests of surrounding pixels, with cloud classification provided by multispectral microphysical retrievals such as phase and cloud-top height. With these methodologies there is uncertainty in identifying optically thick aerosols, since aerosols and clouds have similar spectral properties in coarse-spectral-resolution measurements. Furthermore, identifying cloud regimes (e.g., stratiform, cumuliform) from spectral measurements alone is difficult, since low-altitude cloud regimes have similar spectral properties. Recent advances in computer vision using deep neural networks provide a new opportunity to better leverage the coherent spatial information in multispectral imagery. Using machine learning techniques combined with a new methodology to create the necessary training data, we demonstrate improvements in the discrimination between clouds and severe aerosol events and an expanded capability to classify cloud types. The labeled training dataset was created from an adapted NASA Worldview platform that provides an efficient user interface for assembling a human-labeled database of cloud and aerosol types. The convolutional neural network (CNN) labeling accuracy for aerosols and cloud types was quantified using independent Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and MODIS cloud and aerosol products.
By harnessing CNNs with a unique labeled dataset, we demonstrate improved identification of aerosols and distinct cloud types from MODIS and VIIRS images compared with a per-pixel spectral and standard-deviation thresholding method. The paper concludes with case studies that compare the CNN methodology results with the MODIS cloud and aerosol products.
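The per-pixel standard-deviation thresholding baseline mentioned above — a spatial nonuniformity test over a small neighborhood — can be sketched as follows; the window size and threshold are illustrative choices, not values from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, k=3):
    """Standard deviation over a k x k neighborhood of each pixel
    (edge-padded so the output matches the input shape)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    win = sliding_window_view(padded, (k, k))
    return win.std(axis=(-1, -2))

# Synthetic radiance field: uniform background with a brighter block,
# mimicking a cloud edge. Threshold value is illustrative only.
img = np.zeros((10, 10))
img[4:7, 4:7] = 1.0
nonuniform = local_std(img, k=3) > 0.1  # flags spatially heterogeneous pixels
```

A test like this flags boundaries between uniform regions but carries no semantic information about what the regions are, which is the gap the CNN's learned spatial features are meant to close.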
-
Abstract Land surface phenology (LSP) products currently have large uncertainties, due to cloud contamination and other impacts on temporal satellite observations, and they have been poorly validated because of the lack of spatially comparable ground measurements. This study provides a reference dataset of gap-free time series and phenological dates produced by fusing Harmonized Landsat 8 and Sentinel-2 (HLS) observations with near-surface PhenoCam time series for 78 regions of 10 × 10 km² across ecosystems in North America during 2019 and 2020. The HLS-PhenoCam LSP (HP-LSP) reference dataset at 30 m pixels is composed of (1) 3-day synthetic gap-free EVI2 (two-band Enhanced Vegetation Index) time series that are physically meaningful for monitoring vegetation development across levels of heterogeneity, training models (e.g., machine learning) for land surface mapping, and extracting phenometrics with various methods; and (2) four key phenological dates (accuracy ≤5 days) that are spatially continuous and scalable, which are applicable for validating various satellite-based phenology products (e.g., global MODIS/VIIRS LSP), developing phenological models, and analyzing climate impacts on terrestrial ecosystems.
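The EVI2 that underlies the HP-LSP time series is commonly computed as 2.5 (NIR − Red) / (NIR + 2.4 Red + 1); the reflectance values below are illustrative and not drawn from the dataset.

```python
def evi2(nir, red):
    """Two-band Enhanced Vegetation Index from near-infrared and red
    surface reflectances (common formulation; reflectances in 0-1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Illustrative reflectances (not from the HP-LSP dataset): EVI2 rises
# with canopy density because NIR reflectance grows while red shrinks.
dense_canopy = evi2(nir=0.50, red=0.05)
sparse_cover = evi2(nir=0.20, red=0.15)
```

Unlike NDVI, EVI2 avoids a blue band, which is what makes it computable from two-band sensors and less prone to saturation over dense vegetation.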