Recent studies have shown that fringe-adjusted joint transform correlation (FJTC) can be effectively applied to single-class and even multiclass object detection in hyperspectral imagery (HSI). However, directly applying FJTC-based techniques to HSI processing may be inefficient because HSI often contains a large amount of redundant data. Therefore, incorporating dimensionality reduction (DR) methods prior to the object detection procedure is suggested. In this paper, we combine several DR methods individually with class-associative spectral FJTC (CSFJTC) and compare their performance on single-class and multiclass object detection tasks using a real-world hyperspectral data set. Test results show that CSFJTC with a denoising autoencoder provides superior performance over the alternative methods for detecting a few dissimilar patterns in the scene.
Dynamical System Autoencoders
Autoencoders represent a significant category of deep learning models and are widely utilized for dimensionality reduction. However, standard autoencoders are complicated architectures that normally have several layers and many hyper-parameters that require tuning. In this paper, we introduce a new type of autoencoder that we call the dynamical system autoencoder (DSAE). Similar to classic autoencoders, DSAEs can effectively handle dimensionality reduction and denoising tasks, and they demonstrate strong performance on several benchmark tasks. However, DSAEs, in some sense, have a more flexible architecture than standard AEs; in particular, in this paper we study simple DSAEs that have only a single layer. In addition, DSAEs provide several theoretical and practical advantages arising from their implementation as iterative maps, which have been well studied over several decades. Beyond the inherent simplicity of DSAEs, we also demonstrate how to use sparse matrices to reduce the number of parameters of DSAEs without sacrificing performance. Our simulation studies indicate that DSAEs achieved better performance than classic autoencoders when the encoding dimension or training sample size was small. Additionally, we illustrate how to use DSAEs, and denoising autoencoders in general, to perform supervised learning tasks.
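The single-layer, tied-weight idea behind the abstract can be sketched as a plain denoising autoencoder in NumPy. This is a minimal illustration of the general recipe (corrupt the input, reconstruct the clean input through a low-dimensional code), not the paper's DSAE iterative map; the toy data, dimensions, and learning rate are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 20 dimensions lying near a 5-dim subspace.
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 20))

def recon_mse(W):
    """Reconstruction error of the tied-weight single-layer autoencoder."""
    H = np.tanh(X @ W)                    # encode to 8 dimensions
    return np.mean((H @ W.T - X) ** 2)    # decode with the tied (transposed) weights

W = rng.normal(scale=0.1, size=(20, 8))   # single layer, tied weights
W0 = W.copy()
lr = 1e-3

for _ in range(500):
    Xn = X + rng.normal(scale=0.1, size=X.shape)   # corrupt the input
    H = np.tanh(Xn @ W)
    err = H @ W.T - X                 # error against the *clean* input
    dPre = (err @ W) * (1 - H**2)     # backprop through tanh
    grad = Xn.T @ dPre + err.T @ H    # both tied-weight gradient paths
    W -= lr * grad / len(X)
```

After training, `recon_mse(W)` is lower than `recon_mse(W0)`: the network has learned to map noisy inputs back toward the clean subspace.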
- Award ID(s): 2021871
- PAR ID: 10620861
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-7488-9
- Page Range / eLocation ID: 1496 to 1503
- Format(s): Medium: X
- Location: Miami, FL, USA
- Sponsoring Org: National Science Foundation
More Like this
-
-
Given the complex geometry of white matter streamlines, autoencoders have been proposed as a dimension-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space. However, despite these recent successes, most encoder architectures perform dimension reduction only on single streamlines rather than on a full bundle of streamlines. This is a severe limitation: the encoder disregards the global geometric structure of the bundle in favor of individual fibers. Moreover, the latent space may not be well structured, which casts doubt on its interpretability. In this paper we propose a novel Differentiable Vector Quantized Variational Autoencoder, engineered to ingest an entire bundle of streamlines as a single data point and to provide reliable, trustworthy encodings that can later be used to analyze streamlines in the latent space. Comparisons with several state-of-the-art autoencoders demonstrate superior performance in both encoding and synthesis.
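The core step a vector-quantized autoencoder adds over a plain one is snapping each continuous latent to its nearest entry in a learned codebook. The sketch below shows only that quantization step with an assumed codebook size and latent dimension; it is not the paper's differentiable bundle architecture.

```python
import numpy as np

def quantize(z, codebook):
    """Map each latent vector to its nearest codebook entry (the VQ step).

    z:        (n, d) continuous encoder outputs
    codebook: (k, d) learned embedding vectors
    Returns the quantized latents and the chosen code indices.
    """
    # Squared Euclidean distance from every latent to every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # 16 codes, 4-dim latents (illustrative)
z = rng.normal(size=(10, 4))
zq, idx = quantize(z, codebook)       # zq rows are exact codebook entries
```

In a full VQ-VAE the argmin is non-differentiable, which is why straight-through gradient estimators (or, as in this paper, a differentiable relaxation) are needed during training.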
-
We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task: predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
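The cross-channel prediction idea can be illustrated with two minimal "sub-networks", here reduced to linear least-squares maps rather than the deep networks the method actually uses; the toy data and channel split are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: 500 samples with 6 correlated channels (driven by 3 latent
# factors plus noise), split into two disjoint channel subsets.
Z = rng.normal(size=(500, 3))
X = Z @ rng.normal(size=(3, 6)) + 0.05 * rng.normal(size=(500, 6))
A, B = X[:, :3], X[:, 3:]    # the two disjoint channel groups

# Each "sub-network" solves a cross-channel prediction task; here each is
# just a linear map fit by least squares.
W_ab, *_ = np.linalg.lstsq(A, B, rcond=None)   # predict B's channels from A
W_ba, *_ = np.linalg.lstsq(B, A, rcond=None)   # predict A's channels from B

# The representation of an input concatenates both sub-networks' outputs,
# so the features cover the entire input signal.
feats = np.hstack([A @ W_ab, B @ W_ba])
```

Because the channels share latent structure, each half predicts the other far better than chance, which is exactly the signal the split-brain objective exploits.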
-
Nie, Qing (Ed.) The analysis of single-cell genomics data presents several statistical challenges, and extensive efforts have been made to produce methods for the analysis of this data that impute missing values, address sampling issues, and quantify and correct for noise. In spite of such efforts, no consensus on best practices has been established and all current approaches vary substantially based on the available data and empirical tests. The k-Nearest Neighbor Graph (kNN-G) is often used to infer the identities of, and relationships between, cells and is the basis of many widely used dimensionality-reduction and projection methods. The kNN-G has also been the basis for imputation methods using, e.g., neighbor averaging and graph diffusion. However, due to the lack of an agreed-upon optimal objective function for choosing hyperparameters, these methods tend to oversmooth data, thereby resulting in a loss of information with regard to cell identity and the specific gene-to-gene patterns underlying regulatory mechanisms. In this paper, we investigate the tuning of kNN- and diffusion-based denoising methods with a novel non-stochastic method for optimally preserving biologically relevant informative variance in single-cell data. The framework, Denoising Expression data with a Weighted Affinity Kernel and Self-Supervision (DEWÄKSS), uses a self-supervised technique to tune its parameters. We demonstrate that denoising with optimal parameters selected by our objective function (i) is robust to preprocessing methods using data from established benchmarks, (ii) disentangles cellular identity and maintains robust clusters over dimension-reduction methods, (iii) maintains variance along several expression dimensions, unlike previous heuristic-based methods that tend to oversmooth data variance, and (iv) rarely involves diffusion but rather uses a fixed weighted kNN graph for denoising. Together, these findings provide a new understanding of kNN- and diffusion-based denoising methods. Code and example data for DEWÄKSS are available at https://gitlab.com/Xparx/dewakss/-/tree/Tjarnberg2020branch.
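Neighbor averaging on a fixed kNN graph, the baseline the abstract contrasts with diffusion, can be sketched in a few lines. This uses an unweighted graph and brute-force distances on toy "cell type" clusters; the real framework uses a weighted affinity kernel and self-supervised parameter tuning.

```python
import numpy as np

def knn_denoise(X, k=5):
    """Denoise rows of X by averaging each sample with its k nearest
    neighbors on a fixed (here unweighted) kNN graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self from the search
    nbrs = np.argsort(d2, axis=1)[:, :k]     # indices of k nearest neighbors
    return (X + X[nbrs].sum(axis=1)) / (k + 1)

rng = np.random.default_rng(0)
# Four "cell types", 25 cells each, 10 genes, plus measurement noise.
clean = np.repeat(rng.normal(size=(4, 10)), 25, axis=0)
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = knn_denoise(noisy, k=10)
```

With well-separated clusters the neighbors share a cell identity, so averaging shrinks the noise; the oversmoothing risk the abstract describes appears when k grows large enough that neighbors cross cluster boundaries.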
-
Deep learning leverages multi-layer neural network architectures and demonstrates superb power in many machine learning applications. The deep denoising autoencoder technique extracts more coherent features from seismic data. It allows us to automatically extract low-dimensional features from a high-dimensional feature space in a non-linear, data-driven, and unsupervised way. A properly trained denoising autoencoder takes a partially corrupted input and recovers the original undistorted input. In this paper, a novel autoencoder built upon a deep residual network is proposed to perform noise attenuation on seismic data. We evaluate the proposed method on synthetic datasets, and the results confirm the effective denoising performance of the proposed approach.
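The residual formulation behind such denoisers, letting the model learn a correction added back to the input rather than the signal itself, can be shown with a deliberately tiny stand-in: synthetic "traces" and a linear map fit by least squares instead of the paper's deep residual network. Everything below is an illustrative assumption, not the proposed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "seismic traces": smooth sinusoids of varying frequency, plus noise.
t = np.linspace(0, 1, 64)
freqs = rng.uniform(1, 3, size=(300, 1))
clean = np.sin(2 * np.pi * freqs * t)            # (300, 64) clean traces
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Residual formulation: train a map F so that  noisy + noisy @ F ≈ clean,
# i.e. the model learns the correction (the negative noise), not the signal.
F, *_ = np.linalg.lstsq(noisy, clean - noisy, rcond=None)
denoised = noisy + noisy @ F                      # input plus learned residual
```

The residual parameterization is attractive because the identity map (F = 0) is the trivial starting point, so the model only has to learn the usually small difference between corrupted and clean traces.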