Title: A Pixel Level Scaled Fusion Model to Provide High Spatial-Spectral Resolution for Satellite Images Using LSTM Networks
Pixel-level fusion of satellite images from multiple sensors improves the quality of the acquired data both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. Several approaches to this task exist in the literature; nonetheless, those techniques still lose relevant spatial information during the fusion process. This work presents a multi-scale deep learning model to fuse multispectral and hyperspectral data, with high-spatial-and-low-spectral resolution (HSaLS) and low-spatial-and-high-spectral resolution (LSaHS), respectively. As a result of the fusion scheme, a high-spatial-and-spectral-resolution image (HSaHS) can be obtained. To accomplish this, we have developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level and finally to the high spatial-spectral resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
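The stepwise low-to-intermediate-to-high transition described above can be sketched in a few lines. This is a toy illustration with made-up array shapes; plain nearest-neighbor upsampling and a fixed blend stand in for the learned LSTM refinement applied at each scale in the actual model:

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbor 2x spatial upsampling of an (H, W, B) cube."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def stepwise_fuse(hs_low, ms_high):
    """Transition the low-resolution HS cube to the MS image's resolution
    one scale at a time, injecting MS spatial detail at each step.
    The 0.5/0.5 blend is a placeholder for the learned refinement."""
    x = hs_low
    while x.shape[0] < ms_high.shape[0]:
        x = upsample2x(x)
        step = ms_high.shape[0] // x.shape[0]
        # Spatial detail from the first MS band at the current scale.
        detail = ms_high[::step, ::step, :1]
        x = 0.5 * x + 0.5 * detail  # broadcasts over all spectral bands
    return x

hs_low = np.random.rand(4, 4, 32)    # low-spatial, high-spectral input
ms_high = np.random.rand(16, 16, 4)  # high-spatial, low-spectral input
fused = stepwise_fuse(hs_low, ms_high)
print(fused.shape)  # high-spatial, high-spectral result
```

The intermediate scale (8x8 here) is the key element: spatial detail is introduced gradually rather than in a single jump from 4x4 to 16x16.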
Award ID(s):
1750970
PAR ID:
10155759
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Messinger, David W.; Velez-Reyes, Miguel (Ed.)
    Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with a high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution through an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy with several classifiers: minimum distance, support vector machines, class-dependent sparse representation, and CNN classification. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
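The two accuracy measures used to compare the fused images are easy to state from a confusion matrix: overall accuracy is the fraction of all pixels classified correctly, while average accuracy is the mean of the per-class accuracies. A minimal sketch with made-up counts (not results from the paper):

```python
import numpy as np

# Illustrative 3-class confusion matrix: rows are true classes,
# columns are predicted classes. Values are invented for the example.
conf = np.array([[50,  2,  0],
                 [ 5, 40,  5],
                 [ 0,  3, 45]])

# Overall accuracy: correct pixels over all pixels.
overall_acc = np.trace(conf) / conf.sum()

# Average accuracy: mean of per-class recall, so small classes
# weigh as much as large ones.
average_acc = np.mean(np.diag(conf) / conf.sum(axis=1))

print(overall_acc, average_acc)
```

Reporting both matters because a classifier can score a high overall accuracy while performing poorly on a rare class, which average accuracy exposes.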
  2. Data fusion techniques have gained special interest in remote sensing due to the available capability to obtain measurements of the same scene using different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) image fusion is used to generate high spatial and spectral resolution images (HSEI). Deep learning data fusion models based on long short-term memory (LSTM) and convolutional neural networks (CNN) have been developed to achieve this task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN) based on an LSTM model that can be trained with variable data sizes in order to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image and a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in the SSIM is still obtained while reducing the number of samples needed to train the MLPLN model.
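The SSIM score used above compares luminance, contrast, and structure between two images. A minimal global (whole-image) variant, following the standard formula with the usual C1, C2 constants for a dynamic range of 1.0; published evaluations typically use a windowed version, so this is a simplified stand-in:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Global SSIM between two single-band images with dynamic range L."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

a = np.random.rand(32, 32)
print(ssim_global(a, a))  # identical images score exactly 1.0
```

For a fused multiband image, the score is normally computed per band (or per window) and averaged, so a higher mean SSIM indicates the fused result preserves more of the reference image's structure.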
  3. Combining a hyperspectral (HS) image and a multi-spectral (MS) image---an example of image fusion---can result in a spatially and spectrally high-resolution image. Despite the plethora of fusion algorithms in remote sensing, a necessary prerequisite, namely registration, is mostly ignored. This limits their application to well-registered images from the same source. In this article, we propose and validate an integrated registration and fusion approach (code available at https://github.com/zhouyuanzxcv/Hyperspectral). The registration algorithm minimizes a least-squares (LSQ) objective function with the point spread function (PSF) incorporated together with a nonrigid freeform transformation applied to the HS image and a rigid transformation applied to the MS image. It can handle images with significant scale differences and spatial distortion. The fusion algorithm takes the full high-resolution HS image as an unknown in the objective function. Assuming that the pixels lie on a low-dimensional manifold invariant to local linear transformations from spectral degradation, the fusion optimization problem leads to a closed-form solution. The method was validated on the Pavia University, Salton Sea, and the Mississippi Gulfport datasets. When the proposed registration algorithm is compared to its rigid variant and two mutual information-based methods, it has the best accuracy for both the nonrigid simulated dataset and the real dataset, with an average error less than 0.15 pixels for nonrigid distortion of maximum 1 HS pixel. When the fusion algorithm is compared with current state-of-the-art algorithms, it has the best performance on images with registration errors as well as on simulations that do not consider registration effects. 
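The fusion formulation above relates the unknown high-resolution HS image to both observations through two degradation operators: a spatial one (PSF blur plus downsampling) producing the HS observation, and a spectral one (band aggregation) producing the MS observation. A sketch of that forward model, where the shapes, the box PSF, and the uniform band-averaging matrix are all illustrative assumptions rather than the article's actual operators:

```python
import numpy as np

def spatial_degrade(Z, factor=4):
    """Crude box-PSF blur and downsample: average factor x factor blocks."""
    H, W, B = Z.shape
    return Z.reshape(H // factor, factor, W // factor, factor, B).mean(axis=(1, 3))

def spectral_degrade(Z, R):
    """Apply a spectral response matrix R of shape (ms_bands, hs_bands)."""
    return Z @ R.T

Z = np.random.rand(16, 16, 32)         # unknown high-resolution HS image
# Hypothetical response: each of 4 MS bands averages 8 contiguous HS bands.
R = np.eye(4).repeat(8, axis=1) / 8.0
hs_obs = spatial_degrade(Z)            # what the HS sensor would record
ms_obs = spectral_degrade(Z, R)        # what the MS sensor would record
print(hs_obs.shape, ms_obs.shape)
```

Registration errors enter this model as an unknown geometric transform applied before the spatial degradation, which is why the article estimates the transforms and the PSF jointly rather than assuming pre-registered inputs.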
  4. This dataset includes 30 hyperspectral cloud images captured during the summer and fall of 2022 at Auburn University at Montgomery, Alabama, USA (Latitude N, Longitude W) using a Resonon Pika XC2 hyperspectral imaging camera. Using the Spectronon software, the images were recorded with integration times between 9.0 and 12.0 ms, a frame rate of approximately 45 Hz, and a scan rate of 0.93 degrees per second. The images are calibrated to give spectral radiance in microflicks at 462 spectral bands in the 400–1000 nm wavelength region with a spectral resolution of 1.9 nm. A 17 mm focal-length objective lens was used, giving a field of view of 30.8 degrees and an instantaneous field of view of 0.71 mrad. These settings enable detailed spectral analysis of both dynamic cloud formations and clear-sky conditions. Funded by NSF grant 2003740, this dataset is designed to advance understanding of diffuse solar radiation as influenced by cloud coverage. The dataset is organized into 30 folders, each containing a hyperspectral image file (.bip), a header file (.hdr) with metadata, and an RGB render for visual inspection. Additional metadata, including date, time, central pixel azimuth, and altitude, are cataloged in an accompanying MS Excel file. A custom Python program is also provided to facilitate reading and displaying the HSI files. The images can also be read and analyzed using the free version of the Spectronon software available at https://resonon.com/software. To enrich this dataset, we have added a supplementary ZIP file containing multispectral (4-channel) image versions of the original hyperspectral scenes, together with the corresponding per-pixel photon flux and spectral radiance values computed from the full spectrum. These additions extend the dataset's utility for machine learning and data fusion research by enabling comparative analysis between reduced-band multispectral imagery and full-spectrum hyperspectral data.
The ExpandAI Challenge task is to develop models capable of predicting photon flux and radiance—derived from all 462 hyperspectral bands—using only the four multispectral channels. This benchmark aims to stimulate innovation in spectral information recovery, spectral-spatial inference, and physically informed deep learning for atmospheric imaging applications. 
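Reading a band-interleaved-by-pixel (.bip) cube needs nothing beyond numpy, since BIP stores all bands of each pixel contiguously. A hedged sketch: the shape, dtype, and byte order must come from the accompanying .hdr metadata, and the tiny synthetic file below (8x8 pixels rather than a full scene) is only for illustration:

```python
import numpy as np

# Dimensions would normally be read from the .hdr file; the 462 bands
# match the dataset description, the 8x8 spatial size is made up.
lines, samples, bands = 8, 8, 462

# Write a synthetic little-endian float32 cube as an example .bip file.
cube = np.random.rand(lines, samples, bands).astype("<f4")
cube.tofile("scene.bip")

# BIP layout: for each pixel, all bands are stored together, so a
# single reshape to (lines, samples, bands) recovers the cube.
raw = np.fromfile("scene.bip", dtype="<f4")
img = raw.reshape(lines, samples, bands)

spectrum = img[0, 0, :]  # radiance spectrum of one pixel, 462 values
print(img.shape, spectrum.shape)
```

For the ExpandAI challenge, the same reshape yields the 462-band target spectra; the 4-channel multispectral inputs come from the supplementary ZIP described above.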
  5. Hyperspectral super-resolution refers to the task of fusing a hyperspectral image (HSI) and a multispectral image (MSI) in order to produce a super-resolution image (SRI) that has high spatial and spectral resolution. Popular methods leverage matrix factorization that models each spectral pixel as a convex combination of spectral signatures belonging to a few endmembers. These methods are considered state-of-the-art, but several challenges remain. First, multiband images are naturally three-dimensional (3-D) signals, while matrix methods usually ignore the 3-D structure, which is prone to information loss. Second, these methods do not provide identifiability guarantees under which the reconstruction task is feasible. Third, a tacit assumption is that the degradation operators from SRI to MSI and HSI are known, which is hardly the case in practice. Recently, [1], [2] proposed a coupled tensor factorization approach to handle these issues. In this work we propose a hybrid model that combines the benefits of tensor and matrix factorization approaches. We also develop a new algorithm that is mathematically simple, enjoys identifiability under relaxed conditions, and is completely agnostic of the spatial degradation operator. Experimental results with real hyperspectral data showcase the effectiveness of the proposed approach.
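The linear mixing model behind these matrix-factorization methods is compact: each observed spectrum is a convex combination of a small number of endmember signatures. A toy illustration with random stand-in data (real methods estimate both factors from the HSI and MSI jointly):

```python
import numpy as np

bands, endmembers, pixels = 50, 3, 100

# Endmember signatures: one spectrum per column (illustrative values).
E = np.random.rand(bands, endmembers)

# Abundances: nonnegative weights normalized to sum to 1 per pixel,
# making each mixed spectrum a convex combination of the endmembers.
A = np.random.rand(endmembers, pixels)
A /= A.sum(axis=0, keepdims=True)

# Observed spectra: each column of X is one pixel's mixed spectrum.
X = E @ A
print(X.shape)
```

The rank of X is at most the number of endmembers, which is what makes the factorization low-dimensional; the tensor approaches cited above keep the cube's 3-D structure instead of flattening pixels into the columns of X.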