Search for: All records

Award ID contains: 1750970


  1. Data fusion techniques have gained special interest in remote sensing due to the available capability to obtain measurements of the same scene from different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) image fusion is used to generate high spatial and spectral resolution images (HSEI). Deep learning data fusion models based on Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNN) have been developed to achieve this task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN), based on an LSTM model, that can be trained with variable data sizes to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image with a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in SSIM is still obtained while reducing the number of training samples needed to train the MLPLN model.
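The SSIM comparison used above to score fused images can be illustrated with a minimal sketch. This is a simplified single-window SSIM in plain NumPy (practical implementations use a sliding window); the `reference` and `fused` arrays are hypothetical stand-ins for image bands:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified single-window SSIM between two images scaled to [0, data_range]."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))      # stand-in for a reference image band
fused = np.clip(reference + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
score = ssim_global(reference, fused) # 1.0 only for structurally identical images
```

A score close to 1 indicates that the fused image preserves the luminance, contrast, and structure of the reference.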
  2. Properties of material composition and crystal structures have been explored through density functional theory (DFT) calculations, using databases such as the Open Quantum Materials Database (OQMD). Databases like these are currently used to train advanced machine learning and deep neural network models, the latter providing higher performance when predicting properties of materials. However, current alternatives have shown a deterioration in accuracy when the number of layers in their architecture is increased (the over-fitting problem). As an alternative method to address this problem, we have implemented residual neural network architectures based on Merge-and-Run Networks, IRNet, and UNet to improve performance while relaxing the observed network depth limitation. The evaluation of the proposed architectures includes a 9:1 train/test split as well as 10-fold cross-validation. In the experiments we found that our proposed architectures based on IRNet and UNet obtain a lower Mean Absolute Error (MAE) than current strategies. The full implementation (Python, TensorFlow, and Keras) and the trained networks will be available online for community validation and for advancing the state of the art from our findings.
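The identity-shortcut idea that lets residual architectures grow deeper without the accuracy deterioration described above can be sketched as follows. This is a minimal NumPy illustration, not the actual IRNet/UNet implementation; the layer widths and weight names are hypothetical:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """Dense residual block: the input skips past two layers and is added back,
    so extra depth can fall back to the identity instead of hurting accuracy."""
    h = relu(x @ w1)
    return relu(x + h @ w2)  # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))           # a batch of 4 feature vectors
w1 = 0.01 * rng.standard_normal((32, 64))  # hypothetical layer weights
w2 = 0.01 * rng.standard_normal((64, 32))
y = residual_block(x, w1, w2)
```

With all-zero weights the block reduces to `relu(x)`, i.e., a near-identity map, which is what makes stacking many such blocks safe.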
  3. Messinger, David W. ; Velez-Reyes, Miguel (Ed.)
    Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical spatial shapes, like houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values present a greater challenge for obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Class-Dependent Sparse Representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the class probabilities produced for each pixel by each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
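The shape of the decision-fusion step described above (per-classifier class probabilities in, a single softmax decision per pixel out) can be sketched with a toy example. Here the trained CNN is replaced by a simple weighted average of log-probabilities purely to show the data flow; the classifier count, tile size, and weights are hypothetical:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy stand-in: 3 classifiers (e.g. SVM, DNN, sparse representation),
# a 10x10 pixel tile, 5 classes.
rng = np.random.default_rng(0)
probs = softmax(rng.standard_normal((3, 10, 10, 5)))  # per-classifier class probabilities

weights = np.array([0.3, 0.4, 0.3])  # hypothetical stand-in for the learned fusion
fused = softmax(np.tensordot(weights, np.log(probs + 1e-9), axes=1))
labels = fused.argmax(axis=-1)       # final per-pixel decision
```

In the paper's scheme a CNN learns this combination instead of using fixed weights, but the input and output tensors have the same roles.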
  4. Messinger, David W. ; Velez-Reyes, Miguel (Ed.)
    Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution, using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images from four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with increased spatial resolution, from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy using several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and CNN classification. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for those generated with the CNN fusion.
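The two accuracy figures reported above measure different things: overall accuracy is the fraction of all pixels classified correctly, while average accuracy is the mean of the per-class recalls, so it is not dominated by large classes. A minimal sketch, using a hypothetical 3-class confusion matrix:

```python
import numpy as np

def overall_and_average_accuracy(cm):
    """cm[i, j] = number of pixels of true class i predicted as class j."""
    overall = np.trace(cm) / cm.sum()         # fraction of all pixels correct
    per_class = np.diag(cm) / cm.sum(axis=1)  # recall of each class
    return overall, per_class.mean()

# Hypothetical confusion matrix with one small class (20 pixels)
cm = np.array([[90,  5,  5],
               [10, 80, 10],
               [ 5,  5, 10]])
oa, aa = overall_and_average_accuracy(cm)
```

Here the small third class (50% recall) pulls the average accuracy well below the overall accuracy, which is why both are reported.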
  5. The Compressive Sensing (CS) framework has demonstrated improved acquisition efficiency in a variety of clinical applications. Of interest to this work is Reflectance Confocal Microscopy (RCM), where CS can enable a drastic reduction in instrumentation complexity and image acquisition times. However, CS introduces the disadvantage of requiring a time-consuming and computationally intensive process for image recovery. To mitigate this, the current document details our preliminary work on expanding a deep-learning architecture for the acquisition and fast recovery of RCM images using CS. We show preliminary recoveries of RCM images of both a synthetic target and heterogeneous skin tissue, using a state-of-the-art network architecture applied to compressive measurements at various undersampling rates. In addition, we propose an application-specific addition to an established network architecture and evaluate its ability to further increase the accuracy of recovered CS RCM images and remove visual artifacts. Our initial results show that it is possible to recover compressively sampled images at near-real-time rates with quality comparable to the established computationally intensive and time-consuming optimization-based methods common in CS applications.
  6. Pixel-level fusion of satellite images coming from multiple sensors allows for an improvement in the quality of the acquired data, both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. Several approaches for this task exist in the literature; nonetheless, those techniques still lose relevant spatial information during the fusion process. This work presents a multi-scale deep learning model to fuse multispectral and hyperspectral data with high-spatial-and-low-spectral resolution (HSaLS) and low-spatial-and-high-spectral resolution (LSaHS), respectively. As a result of the fusion scheme, a high-spatial-and-spectral-resolution image (HSaHS) can be obtained. To accomplish this, we have developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level, and finally to the high spatial-spectral resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
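The signal-to-noise ratio used above as a fusion quality metric compares the energy of the reference image to the energy of the fusion error, in decibels. A minimal sketch with hypothetical stand-in data:

```python
import numpy as np

def snr_db(reference, estimate):
    """Signal-to-noise ratio of a fused band against a reference, in dB."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

x = np.ones(100)   # stand-in reference band
y = x + 0.1        # estimate with a constant 0.1 error per sample
# snr_db(x, y) -> 20.0 dB: signal power 1.0 vs noise power 0.01 per sample
```

Higher SNR means the fused image deviates less from the reference, complementing SSIM, which measures structural agreement.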
  7. Compressive Sensing enables improved acquisition of a variety of signals in various applications with little to no discernible loss in recovered image quality. The current work proposes a signal processing framework for the acquisition and fast reconstruction of compressively sampled hyperspectral images using an artificial neural network (ANN) architecture. This ANN-based approach is capable of performing a fast reconstruction by avoiding the requirement to solve a computationally intensive, image-specific optimization problem. The proposed framework contributes to advancing single-pixel hyperspectral imaging device methodologies, which enable a significant reduction in device mechanical complexity, imaging rate, and cost. Our experiments demonstrate that a hyperspectral image can be reconstructed using only 10% of the samples without compromising classification performance. Specifically, the results show that the classification performance of the compressively sampled hyperspectral image recovered using artificial neural networks is equal to or higher than that obtained using current scanning hyperspectral imaging platforms.
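What "only 10% of the samples" means for a single-pixel hyperspectral setup can be sketched as one random sensing matrix applied, band by band, to the vectorized scene. The cube dimensions here are hypothetical, chosen only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, bands = 32, 32, 100  # hypothetical hyperspectral cube dimensions
n = h * w                  # spatial samples per band
m = n // 10                # keep 10% of the spatial samples

cube = rng.random((h, w, bands))                 # stand-in hyperspectral cube
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix shared by all bands

# Each band is vectorized and measured with the same matrix, mimicking a
# single-pixel imager sweeping through its spectral channels.
measurements = phi @ cube.reshape(n, bands)      # shape (m, bands)
```

The reconstruction network in the paper then maps these (m, bands) measurements back to the full (h, w, bands) cube without a per-image optimization.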