Data fusion techniques have gained special interest in remote sensing due to the available capabilities to obtain measurements from the same scene using different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) image fusion is used to generate high spatial- and spectral-resolution images (HSEI). Deep learning data fusion models based on Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) have been developed to achieve this task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN), based on an LSTM model, that can be trained with variable data sizes in order to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image and a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in the SSIM is still obtained while reducing the number of training samples needed to train the MLPLN model.
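Fused-image quality above is scored with SSIM, which compares luminance, contrast, and structure between the fused result and a reference. As a rough illustration of what the metric measures (not the windowed implementation typically used in practice, e.g. scikit-image's `structural_similarity`), here is a minimal global SSIM in NumPy; the function name and image sizes are hypothetical.

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Simplified global SSIM for images scaled to [0, L] (no sliding window)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2      # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.rand(32, 32)
print(round(ssim(img, img), 4))  # identical images score exactly 1.0
```

A value of 1.0 indicates identical images; lower values indicate structural degradation, which is why a rising SSIM with fewer training samples is the favorable outcome reported above.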
An Integrated Approach to Registration and Fusion of Hyperspectral and Multispectral Images
Combining a hyperspectral (HS) image and a multi-spectral (MS) image---an example of image fusion---can result in a spatially and spectrally high-resolution image. Despite the plethora of fusion algorithms in remote sensing, a necessary prerequisite, namely registration, is mostly ignored. This limits their application to well-registered images from the same source. In this article, we propose and validate an integrated registration and fusion approach (code available at https://github.com/zhouyuanzxcv/Hyperspectral). The registration algorithm minimizes a least-squares (LSQ) objective function with the point spread function (PSF) incorporated together with a nonrigid freeform transformation applied to the HS image and a rigid transformation applied to the MS image. It can handle images with significant scale differences and spatial distortion. The fusion algorithm takes the full high-resolution HS image as an unknown in the objective function. Assuming that the pixels lie on a low-dimensional manifold invariant to local linear transformations from spectral degradation, the fusion optimization problem leads to a closed-form solution. The method was validated on the Pavia University, Salton Sea, and the Mississippi Gulfport datasets. When the proposed registration algorithm is compared to its rigid variant and two mutual information-based methods, it has the best accuracy for both the nonrigid simulated dataset and the real dataset, with an average error less than 0.15 pixels for nonrigid distortion of maximum 1 HS pixel. When the fusion algorithm is compared with current state-of-the-art algorithms, it has the best performance on images with registration errors as well as on simulations that do not consider registration effects.
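The least-squares objective described above couples two degradation models: the high-resolution HS image, spatially blurred by the PSF and decimated, should reproduce the observed HS image, and, spectrally degraded, should reproduce the observed MS image. The following NumPy fragment is a sketch of that objective only, not the paper's registration or closed-form solver; all shapes are illustrative assumptions and the PSF is simplified to a uniform box filter.

```python
import numpy as np

# Illustrative shapes only (not from the paper): 50 HS bands, 4 MS bands,
# a 16x16 high-resolution grid, and a 4x spatial downsampling factor.
bands_hs, bands_ms, n_hi, scale = 50, 4, 16, 4
n_lo = n_hi // scale

rng = np.random.default_rng(0)
Z = rng.random((bands_hs, n_hi * n_hi))   # unknown high-res HS image (pixels flattened)
R = rng.random((bands_ms, bands_hs))      # spectral response: HS bands -> MS bands

# Spatial degradation: box-filter PSF plus decimation, written as a dense matrix
# for clarity (a real implementation would use convolution and subsampling).
S = np.zeros((n_hi * n_hi, n_lo * n_lo))
for i in range(n_hi):
    for j in range(n_hi):
        S[i * n_hi + j, (i // scale) * n_lo + (j // scale)] = 1.0 / scale**2

Y_hs = Z @ S   # observed low-spatial-resolution HS image
Y_ms = R @ Z   # observed high-spatial-resolution MS image

# The LSQ fusion objective is minimized over Z; at the generating Z it is zero.
obj = np.linalg.norm(Y_hs - Z @ S) ** 2 + np.linalg.norm(Y_ms - R @ Z) ** 2
print(obj == 0.0)  # True
```

The paper's contribution is to optimize this kind of objective jointly with the registration transformations and a manifold assumption on the pixels, which is what yields the closed-form fusion solution.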
- Award ID(s): 1743050
- PAR ID: 10140579
- Date Published:
- Journal Name: IEEE Transactions on Geoscience and Remote Sensing
- ISSN: 0196-2892
- Page Range / eLocation ID: 1 to 14
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Hyperspectral cameras collect detailed spectral information at each image pixel, contributing to the identification of image features. The rich spectral content of hyperspectral imagery has led to its application in diverse fields of study. This study focused on cloud classification using a dataset of hyperspectral sky images captured by a Resonon PIKA XC2 camera. The camera records images using 462 spectral bands, ranging from 400 to 1000 nm, with a spectral resolution of 1.9 nm. Our preliminary, unlabeled dataset comprised 33 parent hyperspectral images (HSI), each a substantial image measuring 4402-by-1600 pixels. Drawing on the meteorological expertise within our team, we manually labeled pixels by extracting 10 to 20 sample patches from each parent image, each patch consisting of a 50-by-50 pixel field. This process yielded a collection of 444 patches, each categorically labeled into one of seven cloud and sky condition categories. To embed the inherent data structure while classifying individual pixels, we introduced a technique to boost classification accuracy by incorporating patch-specific information into each pixel’s feature vector: patch-level classifiers are trained, and the posterior probabilities they generate, which capture the unique attributes of each patch, are concatenated with the pixel’s original spectral data to form an augmented feature vector. A final classifier then maps the augmented vectors to the seven cloud/sky categories. The results compared favorably to a baseline model without patch-origin embedding, showing that incorporating spatial context along with the spectral information inherent in hyperspectral images enhances classification accuracy in hyperspectral cloud classification. The dataset is available on IEEE DataPort.
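The feature-augmentation step described above reduces to a simple concatenation: every pixel in a patch receives the patch-level class-posterior vector alongside its own spectrum. A minimal sketch, with hypothetical sizes (462 bands, 7 categories) and a randomly fabricated posterior standing in for a trained patch classifier's output:

```python
import numpy as np

# Hypothetical sizes: one 50x50 patch flattened to 2500 pixels, 462 bands, 7 classes.
n_pixels, n_bands, n_classes = 2500, 462, 7
X = np.random.rand(n_pixels, n_bands)          # per-pixel spectra for one patch

patch_posterior = np.random.rand(n_classes)    # stand-in for a patch classifier's output
patch_posterior /= patch_posterior.sum()       # normalize to a probability vector

# Augment every pixel in the patch with its patch's posterior vector.
X_aug = np.hstack([X, np.tile(patch_posterior, (n_pixels, 1))])
print(X_aug.shape)  # (2500, 469)
```

The final classifier is then trained on the 469-dimensional augmented vectors rather than the raw 462-band spectra, which is how the patch context enters each per-pixel decision.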
Messinger, David W.; Velez-Reyes, Miguel (Ed.)Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with a high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with increased spatial resolution from 8 m to 1 m. The obtained fused images using the two models are evaluated in terms of classification accuracy on several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and CNN classification. The classification results show better performance in both overall and average accuracy for the images generated with the multi-scale LSTM fusion over the CNN fusion.
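The two accuracy measures used above are commonly defined as follows: overall accuracy (OA) is the fraction of all pixels classified correctly, while average accuracy (AA) is the mean of the per-class recalls, which prevents large classes from dominating the score. A small self-contained sketch (the labels are fabricated for illustration):

```python
import numpy as np

def overall_and_average_accuracy(y_true, y_pred, n_classes):
    """OA: fraction of samples correct; AA: mean of per-class recalls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    oa = float((y_true == y_pred).mean())
    per_class = [float((y_pred[y_true == c] == c).mean())
                 for c in range(n_classes) if (y_true == c).any()]
    return oa, float(np.mean(per_class))

# Toy example: 6 pixels, 3 classes, two errors concentrated in classes 1 and 2.
oa, aa = overall_and_average_accuracy([0, 0, 1, 1, 2, 2], [0, 0, 1, 0, 2, 1], 3)
print(round(oa, 3), round(aa, 3))  # 0.667 0.667
```

When class sizes are unequal, OA and AA diverge, which is why fusion comparisons like the one above report both.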
This dataset includes 30 hyperspectral cloud images captured during the Summer and Fall of 2022 at Auburn University at Montgomery, Alabama, USA (Latitude N, Longitude W) using a Resonon Pika XC2 Hyperspectral Imaging Camera. Utilizing the Spectronon software, the images were recorded with integration times between 9.0-12.0 ms, a frame rate of approximately 45 Hz, and a scan rate of 0.93 degrees per second. The images are calibrated to give spectral radiance in microflicks at 462 spectral bands in the 400 – 1000 nm wavelength region with a spectral resolution of 1.9 nm. A 17 mm focal length objective lens was used, giving a field of view equal to 30.8 degrees and an instantaneous field of view of 0.71 mrad. These settings enable detailed spectral analysis of both dynamic cloud formations and clear sky conditions. Funded by NSF grant 2003740, this dataset is designed to advance understanding of diffuse solar radiation as influenced by cloud coverage. The dataset is organized into 30 folders, each containing a hyperspectral image file (.bip), a header file (.hdr) with metadata, and an RGB render for visual inspection. Additional metadata, including date, time, central pixel azimuth, and altitude, are cataloged in an accompanying MS Excel file. A custom Python program is also provided to facilitate the reading and display of the HSI files. The images can also be read and analyzed using the free version of the Spectronon software available at https://resonon.com/software. To enrich this dataset, we have added a supplementary ZIP file containing multispectral (4-channel) image versions of the original hyperspectral scenes, together with the corresponding per-pixel photon flux and spectral radiance values computed from the full spectrum. These additions extend the dataset’s utility for machine learning and data fusion research by enabling comparative analysis between reduced-band multispectral imagery and full-spectrum hyperspectral data.
The ExpandAI Challenge task is to develop models capable of predicting photon flux and radiance—derived from all 462 hyperspectral bands—using only the four multispectral channels. This benchmark aims to stimulate innovation in spectral information recovery, spectral-spatial inference, and physically informed deep learning for atmospheric imaging applications.
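The dataset ships its cubes as band-interleaved-by-pixel (.bip) files with an ENVI-style .hdr describing the dimensions. As a generic sketch of reading BIP data with NumPy (the real reader is the custom Python program mentioned above; here a tiny cube is fabricated and the dimensions are hard-coded rather than parsed from a header):

```python
import numpy as np
import os, tempfile

# Fabricated tiny cube; the dataset's real images are 4402-by-1600 pixels x 462 bands.
lines, samples, bands = 8, 10, 462
cube = np.random.rand(lines, samples, bands).astype(np.float32)

path = os.path.join(tempfile.gettempdir(), "tiny.bip")
cube.tofile(path)                  # BIP order: the band index varies fastest per pixel

# Reading back: a flat BIP file reshapes directly to (lines, samples, bands).
data = np.fromfile(path, dtype=np.float32).reshape(lines, samples, bands)
spectrum = data[0, 0, :]           # the full 462-band spectrum of a single pixel
print(data.shape == cube.shape and bool(np.allclose(data, cube)))  # True
```

For the full-size images, `np.memmap` with the same shape avoids loading the whole cube into memory at once.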
Hyperspectral images taken from aircraft or satellites contain information from hundreds of spectral bands, within which lie latent lower-dimensional structures that can be exploited for classifying vegetation and other materials. A disadvantage of working with hyperspectral images is that, due to an inherent trade-off between spectral and spatial resolution, they have a relatively coarse spatial scale, meaning that single pixels may correspond to spatial regions containing multiple materials. This article introduces the Diffusion and Volume maximization-based Image Clustering (D-VIC) algorithm for unsupervised material clustering to address this problem. By directly incorporating pixel purity into its labeling procedure, D-VIC gives greater weight to pixels corresponding to a spatial region containing just a single material. D-VIC is shown to outperform comparable state-of-the-art methods in extensive experiments on a range of hyperspectral images, including land-use maps and highly mixed forest health surveys (in the context of ash dieback disease), implying that it is well-equipped for unsupervised material clustering of spectrally-mixed hyperspectral datasets.
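The pixel-purity notion underlying D-VIC can be pictured with the standard linear mixing model: each observed spectrum is a convex combination of material spectra (endmembers), and a pure pixel has one abundance near 1. The snippet below is background intuition only, not the D-VIC algorithm; the endmember spectra and abundance vectors are fabricated, and maximum abundance is used as one simple purity proxy.

```python
import numpy as np

rng = np.random.default_rng(0)
endmembers = rng.random((3, 50))              # 3 fabricated material spectra, 50 bands

abund_pure  = np.array([1.0, 0.0, 0.0])       # pure pixel: a single material
abund_mixed = np.array([0.4, 0.35, 0.25])     # mixed pixel: convex combination

pixel_pure  = abund_pure  @ endmembers        # observed spectra under linear mixing
pixel_mixed = abund_mixed @ endmembers

purity = lambda a: float(a.max())             # one simple proxy for pixel purity
print(purity(abund_pure), purity(abund_mixed))  # 1.0 0.4
```

Weighting cluster exemplars toward high-purity pixels, as D-VIC's labeling procedure does, keeps cluster centers anchored on single materials rather than on mixtures.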