
Title: Hyperspectral Dimensionality Reduction Based on Inter-Band Redundancy Analysis and Greedy Spectral Selection
Hyperspectral imaging systems are becoming widely used due to their increasing accessibility and their ability to provide detailed spectral responses based on hundreds of spectral bands. However, the resulting hyperspectral images (HSIs) come at the cost of increased storage requirements, increased computational time to process, and highly redundant data. Thus, dimensionality reduction techniques are necessary to decrease the number of spectral bands while retaining the most useful information. Our contribution is two-fold: First, we propose a filter-based method called inter-band redundancy analysis (IBRA) based on a collinearity analysis between a band and its neighbors. This analysis helps to remove redundant bands and dramatically reduces the search space. Second, we apply a wrapper-based approach called greedy spectral selection (GSS) to the results of IBRA to select bands based on their information entropy values and train a compact convolutional neural network to evaluate the performance of the current selection. We also propose a feature extraction framework that consists of two main steps: first, it reduces the total number of bands using IBRA; then, it can use any feature extraction method to obtain the desired number of feature channels. We present classification results obtained from our methods and compare them to other dimensionality reduction methods on three hyperspectral image datasets. Additionally, we used the original hyperspectral data cube to simulate the process of using actual filters in a multispectral imager.
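As a rough illustration of the collinearity test at the heart of IBRA (this is our own simplification, not the authors' implementation: the one-pass neighbor scan, the variance inflation factor threshold of 10, and all function names are illustrative), the sketch below drops bands that are nearly collinear with the most recently kept neighbor:

```python
import numpy as np

def band_vif(x, y):
    """Variance inflation factor between two band vectors, via their
    simple-regression R^2: VIF = 1 / (1 - R^2)."""
    r = np.corrcoef(x, y)[0, 1]
    return 1.0 / max(1.0 - r ** 2, 1e-12)

def ibra_filter(cube, vif_threshold=10.0):
    """One-pass neighbor scan over a (pixels, bands) matrix: keep a band
    only if its VIF against the last kept band is below the threshold."""
    kept = [0]
    for b in range(1, cube.shape[1]):
        if band_vif(cube[:, kept[-1]], cube[:, b]) < vif_threshold:
            kept.append(b)
    return kept
```

A wrapper step such as GSS would then rank the surviving bands (e.g., by information entropy) and evaluate candidate subsets with a classifier.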
Journal Name: Remote Sensing
Sponsoring Org: National Science Foundation
More Like this
  1. Performing a direct match between images from different spectra (i.e., passive infrared and visible) is challenging because each spectrum contains different information pertaining to the subject’s face. In this work, we investigate the benefits and limitations of using synthesized visible face images from thermal ones, and vice versa, in cross-spectral face recognition systems. For this purpose, we propose utilizing canonical correlation analysis (CCA) and manifold learning dimensionality reduction (LLE). There are four primary contributions of this work. First, we formulate the cross-spectral heterogeneous face matching problem (visible to passive IR) using an image synthesis framework. Second, a new processed database composed of two datasets consisting of separate controlled frontal-face subsets (VIS-MWIR and VIS-LWIR) is generated from the original, raw face datasets collected in three different bands (visible, MWIR, and LWIR). This multi-band database is constructed using three different methods for preprocessing face images before feature extraction is applied. These are: (1) face detection, (2) CSU’s geometric normalization, and (3) our recommended geometric normalization method. Third, a post-synthesis image denoising methodology is applied, which helps alleviate the different noise patterns present in synthesized images and improves baseline FR accuracy (i.e., accuracy before image synthesis and denoising are applied) in practical heterogeneous FR scenarios. Finally, an extensive experimental study is performed to demonstrate the feasibility and benefits of cross-spectral matching when using our image synthesis and denoising approach. Our results are also compared to a baseline commercial matcher and various academic matchers provided by CSU’s Face Identification Evaluation System.
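The CCA component of such a cross-spectral pipeline can be sketched in a few lines. This is the textbook whitened cross-covariance formulation with a small ridge term, not the paper's code; all names and the ridge parameter are illustrative:

```python
import numpy as np

def cca_projections(X, Y, k=2, reg=1e-3):
    """Textbook CCA: whiten each view, SVD the cross-covariance, and return
    the top-k projection matrices for X and Y. A small ridge `reg` keeps the
    covariance inverses stable."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / len(Y) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / len(X)

    def inv_sqrt(S):                       # S^(-1/2) via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, _, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k]
```

Matching would then happen in the shared space: project features from each spectrum and compare them, e.g., by cosine similarity.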
  2. Capturing fine spatial, spectral, and temporal information of a scene is highly desirable in many applications. However, recording data of such high dimensionality requires significant transmission bandwidth. Current computational imaging methods can partially address this challenge but are still limited in reducing input data throughput. In this paper, we report a video-rate hyperspectral imager based on a single-pixel photodetector that achieves high-throughput hyperspectral video recording at a low bandwidth. We leverage the insight that 4-dimensional (4D) hyperspectral videos are considerably more compressible than 2D grayscale images. We propose a joint spatial-spectral capturing scheme that encodes the scene into highly compressed measurements while obtaining temporal correlation at the same time. Furthermore, we propose a reconstruction method relying on a signal sparsity model in 4D space, together with a deep learning approach that greatly accelerates reconstruction. We demonstrate reconstruction of 128 × 128 hyperspectral images with 64 spectral bands at more than 4 frames per second, offering a 900× data-throughput advantage compared to conventional imaging, which we believe is a first-of-its-kind single-pixel-based hyperspectral imager.
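The compressive measurement model behind a single-pixel imager, together with a basic sparse reconstruction, can be sketched as follows. This uses plain ISTA on a 1-D sparse signal as a stand-in for the paper's 4D sparsity model and learned reconstruction, so the dimensions, parameters, and names are illustrative only:

```python
import numpy as np

def single_pixel_measure(scene, masks):
    """Each single-pixel reading is the inner product of the scene with one
    modulation mask: y_i = <mask_i, scene>."""
    return masks @ scene

def ista(y, A, lam=0.05, iters=500):
    """Iterative soft-thresholding for min_x ||A x - y||^2 / 2 + lam * ||x||_1,
    a basic sparse-recovery baseline."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than scene elements, a sufficiently sparse scene is still recoverable, which is the source of the bandwidth savings.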

  3. State aggregation is a popular model reduction method rooted in optimal control. It reduces the complexity of engineering systems by mapping the system’s states into a small number of meta-states. The choice of aggregation map often depends on the data analysts’ knowledge and is largely ad hoc. In this paper, we propose a tractable algorithm that estimates the probabilistic aggregation map from the system’s trajectory. We adopt a soft-aggregation model, where each meta-state has a signature raw state, called an anchor state. This model includes several common state aggregation models as special cases. Our proposed method is a simple two-step algorithm: the first step is spectral decomposition of the empirical transition matrix, and the second step conducts a linear transformation of the singular vectors to find their approximate convex hull. It outputs the aggregation and disaggregation distributions for each meta-state in explicit form, which is not obtainable by classical spectral methods. On the theoretical side, we prove sharp error bounds for estimating the aggregation and disaggregation distributions and for identifying anchor states. The analysis relies on a new entry-wise deviation bound for the singular vectors of the empirical transition matrix of a Markov process, which is of independent interest and cannot be deduced from existing literature. The application of our method to Manhattan traffic data successfully generates a data-driven state aggregation map with a clear interpretation.
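The two-step recipe (spectral decomposition of the empirical transition matrix, then locating extreme points among the singular-vector rows) can be caricatured with a successive-projection heuristic. This is a rough stand-in for the paper's convex-hull step, not the authors' algorithm; the anchor-finding rule below is our own simplification:

```python
import numpy as np

def empirical_transition(traj, n):
    """Row-normalized transition counts from a state trajectory."""
    P = np.zeros((n, n))
    for a, b in zip(traj[:-1], traj[1:]):
        P[a, b] += 1
    return P / np.maximum(P.sum(1, keepdims=True), 1)

def find_anchor_states(P, r):
    """Successive-projection heuristic on the top-r left singular subspace:
    repeatedly pick the state whose residual row has the largest norm as the
    next anchor, then project that direction out."""
    U, s, _ = np.linalg.svd(P)
    R = U[:, :r] * s[:r]                   # rows = states in the top-r subspace
    anchors = []
    for _ in range(r):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        anchors.append(i)
        u = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ u, u)         # remove the chosen direction
    return anchors
```

On a lumpable chain whose rows fall into a few distinct groups, the heuristic picks one representative state per group, which is the role anchor states play in the soft-aggregation model.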
  4. Brain-computer interface (BCI) systems are proposed as a means of communication for locked-in patients. One common BCI paradigm is motor imagery, in which the user controls a BCI by imagining movements of different body parts. It is known that imagining different body parts results in event-related desynchronization (ERD) in various frequency bands. Existing methods such as common spatial patterns (CSP) and its refinement, filter-bank common spatial patterns (FB-CSP), aim at finding features that are informative for classification of the motor imagery class. Our proposed method is a temporally adaptive implementation of the commonly used filter-bank common spatial patterns method using convolutional neural networks; hence, it is called TA-CSPNN. With this method we aim to: (1) make the feature extraction and classification end-to-end, (2) base it on the way CSP/FB-CSP extracts relevant features, and (3) reduce the number of trainable parameters compared to existing deep learning methods to improve generalizability on noisy data such as EEG. More importantly, we show that this reduction in parameters does not hurt performance, and in fact the trained network generalizes better for data from some participants. We show our results on two datasets: dataset 2a of BCI Competition IV, which is publicly available, and an in-house motor imagery dataset.
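The CSP building block that FB-CSP (and, in turn, TA-CSPNN) refines can be sketched as a generalized eigenvalue problem on class-conditional covariances. This is standard CSP, not the proposed network, and the simulation-style API below is our own; FB-CSP would apply the same step once per frequency band:

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns: solve the generalized eigenproblem on the
    class-average covariances and keep filters from both ends of the
    eigenvalue spectrum. X1, X2: (trials, channels, samples) per class."""
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    w, V = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
    order = np.argsort(w.real)
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]   # both spectrum ends
    return V.real[:, idx]
```

Log-variances of the spatially filtered signals are the classic CSP features; the extreme filters maximize the variance ratio between the two motor imagery classes.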
  5. Deep neural network clustering is superior to conventional clustering methods due to deep feature extraction and nonlinear dimensionality reduction. Nevertheless, deep neural networks produce a rough representation of the inherent relationships among the data points. Therefore, it is still difficult for a deep neural network to exploit the effective structure of the data for direct clustering. To address this issue, we propose a robust embedded deep K-means clustering (RED-KC) method. The proposed RED-KC approach utilizes the δ-norm metric to constrain the feature mapping process of the auto-encoder network, so that data are mapped to a latent feature space that is more conducive to robust clustering. Compared to existing auto-encoder networks with a fixed prior, the proposed RED-KC is adaptive during the process of feature mapping. More importantly, the proposed RED-KC embeds the clustering process within the auto-encoder network, such that deep feature extraction and clustering can be performed simultaneously. Accordingly, direct and efficient clustering is obtained in a single step, avoiding the inconvenience of multiple separate stages, namely the loss of pivotal information and correlation. Finally, extensive experiments are provided to validate the effectiveness of the proposed approach.
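The idea of embedding clustering into the reconstruction objective can be shown in a toy linear analogue. RED-KC itself uses a deep auto-encoder and a δ-norm metric; the sketch below swaps in a linear encoder and squared error, so it only illustrates the alternating joint-objective structure, and every name and parameter is illustrative:

```python
import numpy as np

def embedded_kmeans(X, k, dim=2, gamma=1.0, outer=15):
    """Toy linear analogue of joint embedding + clustering: minimize
    ||X - Z W^T||^2 + gamma * ||Z - C[a]||^2 by alternating K-means
    assignments with closed-form updates of the latent codes Z and
    the linear decoder W."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z, W = U[:, :dim] * s[:dim], Vt[:dim].T          # PCA-style initialization
    # deterministic farthest-point seeding of the k centroids
    idx = [int(np.argmax(np.linalg.norm(Z, axis=1)))]
    while len(idx) < k:
        d = np.min(((Z[:, None] - Z[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    C = Z[idx]
    for _ in range(outer):
        a = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)   # assign
        C = np.array([Z[a == j].mean(0) if np.any(a == j) else C[j]
                      for j in range(k)])                              # centroids
        Z = (X @ W + gamma * C[a]) @ np.linalg.inv(W.T @ W + gamma * np.eye(dim))
        W = np.linalg.lstsq(Z, X, rcond=None)[0].T                     # decoder refit
    return a
```

Because the clustering term enters the same objective as reconstruction, the latent codes are pulled toward their centroids while still explaining the data, which is the single-step behavior the abstract describes.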