Title: CloudRadianceHSI
This dataset includes 30 hyperspectral cloud images captured during the summer and fall of 2022 at Auburn University at Montgomery, Alabama, USA (Latitude N, Longitude W) using a Resonon Pika XC2 hyperspectral imaging camera. Using the Spectronon software, the images were recorded with integration times between 9.0 and 12.0 ms, a frame rate of approximately 45 Hz, and a scan rate of 0.93 degrees per second. The images are calibrated to give spectral radiance in microflicks at 462 spectral bands in the 400–1000 nm wavelength region with a spectral resolution of 1.9 nm. A 17 mm focal length objective lens was used, giving a field of view of 30.8 degrees and an instantaneous field of view of 0.71 mrad. These settings enable detailed spectral analysis of both dynamic cloud formations and clear-sky conditions. Funded by NSF grant 2003740, this dataset is designed to advance understanding of diffuse solar radiation as influenced by cloud coverage.

The dataset is organized into 30 folders, each containing a hyperspectral image file (.bip), a header file (.hdr) with metadata, and an RGB render for visual inspection. Additional metadata, including date, time, central pixel azimuth, and altitude, are cataloged in an accompanying MS Excel file. A custom Python program is provided to facilitate reading and displaying the HSI files. The images can also be read and analyzed using the free version of the Spectronon software available at https://resonon.com/software.

To enrich this dataset, we have added a supplementary ZIP file containing multispectral (4-channel) image versions of the original hyperspectral scenes, together with the corresponding per-pixel photon flux and spectral radiance values computed from the full spectrum. These additions extend the dataset's utility for machine learning and data fusion research by enabling comparative analysis between reduced-band multispectral imagery and full-spectrum hyperspectral data. The ExpandAI Challenge task is to develop models capable of predicting photon flux and radiance, derived from all 462 hyperspectral bands, using only the four multispectral channels. This benchmark aims to stimulate innovation in spectral information recovery, spectral-spatial inference, and physically informed deep learning for atmospheric imaging applications.
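For orientation, the sketch below shows one way to load a scene and recompute the supplementary photon flux and radiance quantities. It is a minimal sketch, not the custom Python program shipped with the dataset: it assumes the freely available Python `spectral` package for the ENVI-style .bip/.hdr pair, hypothetical file names, and the standard unit conversion 1 microflick = 1e-5 W m^-2 sr^-1 nm^-1.

```python
# A minimal sketch, not the dataset's bundled program: load one scene's
# ENVI-style .bip/.hdr pair and recompute per-pixel photon flux and
# band-integrated radiance. File names are hypothetical placeholders.
import numpy as np
import spectral.io.envi as envi

H = 6.62607e-34   # Planck constant, J s
C = 2.99792e8     # speed of light, m/s

img = envi.open("scene_01.hdr", "scene_01.bip")
cube = np.asarray(img.load(), dtype=np.float64)   # (rows, cols, 462)
wl_nm = np.asarray(img.bands.centers)             # band centers from header, nm

# Convert microflicks to SI spectral radiance:
# 1 microflick = 1e-5 W m^-2 sr^-1 nm^-1
L = cube * 1e-5

# Photon flux: integrate L(lambda) * lambda / (h c) over wavelength.
# (np.trapz is renamed np.trapezoid in NumPy >= 2.0.)
wl_m = wl_nm * 1e-9
photon_flux = np.trapz(L * wl_m / (H * C), x=wl_nm, axis=-1)  # photons s^-1 m^-2 sr^-1
radiance = np.trapz(L, x=wl_nm, axis=-1)                      # W m^-2 sr^-1
```

Whether the supplementary ZIP uses exactly this integration scheme (e.g., trapezoidal versus band-width weighting) is not specified here, so treat such recomputed values as a cross-check rather than ground truth.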
Award ID(s):
2003740 2435093
PAR ID:
10527258
Author(s) / Creator(s):
Publisher / Repository:
IEEE DataPort
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The CloudPatch-7 Hyperspectral Dataset comprises a manually curated collection of hyperspectral images focused on pixel classification of atmospheric cloud classes. This labeled dataset features 380 patches, each a 50x50 pixel grid, derived from 28 larger, unlabeled parent images approximately 5000x1500 pixels in size. Captured using the Resonon PIKA XC2 camera, these images span 462 spectral bands from 400 to 1000 nm. Each patch is extracted from a parent image such that all of its pixels fall within one of seven atmospheric conditions: Dense Dark Cumuliform Cloud, Dense Bright Cumuliform Cloud, Semi-transparent Cumuliform Cloud, Dense Cirroform Cloud, Semi-transparent Cirroform Cloud, Clear Sky - Low Aerosol Scattering (dark), and Clear Sky - Moderate to High Aerosol Scattering (bright). Incorporating contextual information from surrounding pixels enhances pixel classification into these 7 classes, making this dataset a valuable resource for spectral analysis, environmental monitoring, atmospheric science research, and testing machine learning applications that require contextual data. Parent images are very large but can be made available upon request; a loading sketch follows this item.
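A hedged sketch of how the 380 labeled patches could be stacked into arrays for classifier training; the folder-per-class layout, class-name ordering, and .bip/.hdr naming below are assumptions about the packaging, not documented facts.

```python
# Stack CloudPatch-7 patches into (N, 50, 50, 462) cubes with integer labels.
# Directory layout and file naming are hypothetical.
import numpy as np
import spectral.io.envi as envi
from pathlib import Path

CLASSES = [  # the seven atmospheric conditions, in an assumed order
    "dense_dark_cumuliform", "dense_bright_cumuliform",
    "semitransparent_cumuliform", "dense_cirroform",
    "semitransparent_cirroform", "clear_low_aerosol", "clear_bright_aerosol",
]

def load_patches(root="CloudPatch-7"):
    """Return X of shape (N, 50, 50, 462) and labels y of shape (N,)."""
    X, y = [], []
    for label, name in enumerate(CLASSES):
        for hdr in sorted(Path(root, name).glob("*.hdr")):
            img = envi.open(str(hdr), str(hdr.with_suffix(".bip")))
            X.append(np.asarray(img.load()))
            y.append(label)
    return np.stack(X), np.asarray(y)
```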
  2. Orthorectified flight line hyperspectral cubes retiled for publication. Collectively, the tiled hyperspectral cubes cover the footprint of the flux tower and established long-term study plots at Thompson Farm Observatory, Durham, NH. Data were acquired using a Headwall Photonics, Inc. Nano VNIR hyperspectral line scanning imager with 273 bands from 400-1000 nm. The sensor was flown aboard a DJI M600 hexacopter at an altitude of ~80 m above the forest canopy, yielding ~6 cm GSD. Flight lines were converted from raw sensor observations to upwelling radiance using a vendor-supplied radiometric calibration file for the sensor, then converted to reflectance using a calibration tarp with known reflectance. Finally, cubes were orthorectified using a 1 m DSM in Headwall's SpectralView software, mosaicked into individual flight line cubes, then subsequently tiled for publication.
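The tarp-based conversion is essentially an empirical per-band scaling; a minimal sketch of that idea, where variable names and the single-tarp simplification are ours and not Headwall SpectralView's implementation:

```python
# Empirical single-tarp reflectance conversion reduced to a per-band gain.
import numpy as np

def radiance_to_reflectance(cube, tarp_radiance, tarp_reflectance):
    """cube: (rows, cols, bands) upwelling radiance.
    tarp_radiance: (bands,) mean radiance over the tarp pixels.
    tarp_reflectance: (bands,) lab-measured tarp reflectance."""
    gain = tarp_reflectance / tarp_radiance   # per-band scale factor
    return np.clip(cube * gain, 0.0, 1.5)     # loosely bounded reflectance
```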
  3. Current cloud and aerosol identification methods for multispectral radiometers, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS), employ multichannel spectral tests on individual pixels (i.e., fields of view). The use of spatial information in cloud and aerosol algorithms has been primarily through statistical parameters such as nonuniformity tests of surrounding pixels, with cloud classification provided by multispectral microphysical retrievals such as phase and cloud top height. With these methodologies there is uncertainty in identifying optically thick aerosols, since aerosols and clouds have similar spectral properties in coarse-spectral-resolution measurements. Furthermore, identifying cloud regimes (e.g., stratiform, cumuliform) from spectral measurements alone is difficult, since low-altitude cloud regimes have similar spectral properties. Recent advances in computer vision using deep neural networks provide a new opportunity to better leverage the coherent spatial information in multispectral imagery. Using machine learning techniques combined with a new methodology to create the necessary training data, we demonstrate improvements in the discrimination between clouds and severe aerosols and an expanded capability to classify cloud types. The labeled training dataset was created from an adapted NASA Worldview platform that provides an efficient user interface for assembling a human-labeled database of cloud and aerosol types. The convolutional neural network (CNN) labeling accuracy for aerosols and cloud types was quantified using independent Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and MODIS cloud and aerosol products. By harnessing CNNs with a unique labeled dataset, we demonstrate improved identification of aerosols and distinct cloud types from MODIS and VIIRS images compared to a per-pixel spectral and standard deviation thresholding method. The paper concludes with case studies that compare the CNN methodology results with the MODIS cloud and aerosol products.
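As a schematic of this kind of approach (not the authors' architecture), a patch-based CNN classifier can be written in a few lines; the band count, layer widths, and class count below are illustrative assumptions.

```python
# Schematic patch classifier: multispectral patches in, cloud/aerosol class out.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_bands=16, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # pool coherent spatial context to 1x1
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, bands, H, W)
        return self.head(self.features(x).flatten(1))
```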
  4. Pixel-level fusion of satellite images coming from multiple sensors allows for an improvement in the quality of the acquired data both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. In the literature there are several approaches for this task; nonetheless, those techniques still present a loss of relevant spatial information during the fusion process. This work presents a multiscale deep learning model to fuse multispectral and hyperspectral data, with high spatial and low spectral resolution (HSaLS) and low spatial and high spectral resolution (LSaHS), respectively. As a result of the fusion scheme, a high-spatial-and-spectral-resolution image (HSaHS) can be obtained. To accomplish this, we have developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level and finally to the high spatial-spectral resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
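A minimal sketch of the step-wise idea, assuming PyTorch and illustrative band counts: rather than jumping directly to the target resolution, the hyperspectral cube is upsampled to an intermediate grid, fused with the multispectral guide, then refined again. This is our reading of the scheme, not the paper's exact model.

```python
# Two-stage fusion of a LSaHS cube with a HSaLS guide via an intermediate grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageFusion(nn.Module):
    def __init__(self, hs_bands=462, ms_bands=4):
        super().__init__()
        self.fuse_mid = nn.Conv2d(hs_bands + ms_bands, hs_bands, 3, padding=1)
        self.fuse_hi = nn.Conv2d(hs_bands + ms_bands, hs_bands, 3, padding=1)

    def _step(self, hs, guide, conv):
        # Upsample the spectral cube to the guide's grid, then fuse.
        hs_up = F.interpolate(hs, size=guide.shape[-2:], mode="bilinear",
                              align_corners=False)
        return torch.relu(conv(torch.cat([hs_up, guide], dim=1)))

    def forward(self, hs_lr, ms_mid, ms_hi):
        # hs_lr: low-res hyperspectral; ms_mid/ms_hi: guides at growing resolutions
        x = self._step(hs_lr, ms_mid, self.fuse_mid)
        return self._step(x, ms_hi, self.fuse_hi)
```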
  5. Hyperspectral imaging systems are becoming widely used due to their increasing accessibility and their ability to provide detailed spectral responses based on hundreds of spectral bands. However, the resulting hyperspectral images (HSIs) come at the cost of increased storage requirements, increased computational time to process, and highly redundant data. Thus, dimensionality reduction techniques are necessary to decrease the number of spectral bands while retaining the most useful information. Our contribution is two-fold: First, we propose a filter-based method called interband redundancy analysis (IBRA) based on a collinearity analysis between a band and its neighbors. This analysis helps to remove redundant bands and dramatically reduces the search space. Second, we apply a wrapper-based approach called greedy spectral selection (GSS) to the results of IBRA to select bands based on their information entropy values and train a compact convolutional neural network to evaluate the performance of the current selection. We also propose a feature extraction framework that consists of two main steps: first, it reduces the total number of bands using IBRA; then, it can use any feature extraction method to obtain the desired number of feature channels. We present classification results obtained from our methods and compare them to other dimensionality reduction methods on three hyperspectral image datasets. Additionally, we used the original hyperspectral data cube to simulate the process of using actual filters in a multispectral imager. 
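For concreteness, the collinearity test at the heart of IBRA can be expressed with the variance inflation factor (VIF) between a band and a neighbor; the sketch below is our reading of the idea, with an illustrative threshold rather than the paper's exact procedure.

```python
# VIF-based interband redundancy check between a band and its neighbors.
import numpy as np

def vif(band_a, band_b):
    """VIF = 1/(1 - R^2) for band_a regressed on band_b (flattened pixels)."""
    r = np.corrcoef(band_a.ravel(), band_b.ravel())[0, 1]
    return 1.0 / max(1.0 - r**2, 1e-12)

def distance_to_nonredundant(cube, band_idx, vif_threshold=12.0):
    """Step outward from band_idx until the VIF drops below the threshold."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    d = 1
    while band_idx + d < cube.shape[-1]:
        if vif(flat[:, band_idx], flat[:, band_idx + d]) < vif_threshold:
            break
        d += 1
    return d  # small d means the band's neighborhood is quickly non-redundant
```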