
Title: Clustering on Sparse Data in Non-overlapping Feature Space with Applications to Cancer Subtyping
This paper presents a new algorithm, Reinforced and Informed Network-based Clustering (RINC), for finding unknown groups of similar data objects in a sparse and largely non-overlapping feature space where a network structure among features can be observed. Sparse, non-overlapping, unlabeled data are increasingly common and available, especially in text mining and biomedical data mining. RINC inserts a domain-informed model into an otherwise model-free neural network. In particular, our approach integrates physically meaningful feature dependencies into the neural network architecture and into a soft computational constraint. Our learning algorithm efficiently clusters sparse data through integrated smoothing and sparse auto-encoder learning. The informed design requires fewer samples for training, and at least part of the model becomes explainable. The reinforced network layers smooth sparse data over the network dependencies in the feature space. Most importantly, through back-propagation, the weights of the reinforced smoothing layers are simultaneously constrained by the remaining sparse auto-encoder layers, which set the target values equal to the raw inputs. Empirical results demonstrate that RINC achieves improved accuracy and renders physically meaningful clustering results.
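As a rough illustration of the smoothing idea (not the authors' implementation), spreading a sparse sample over a feature-dependency network can be written as multiplication with a row-normalized adjacency matrix; the smoothed representation would then feed an auto-encoder whose reconstruction target is the raw input. The network, feature count, and values below are all hypothetical:

```python
import numpy as np

# Hypothetical dependency network over 4 features (a simple chain).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-normalized smoothing operator with self-loops.
A_hat = A + np.eye(4)
S = A_hat / A_hat.sum(axis=1, keepdims=True)

# A sparse sample: only feature 0 is observed.
x = np.array([1.0, 0.0, 0.0, 0.0])

# One smoothing step spreads mass to network neighbors of feature 0,
# densifying the representation before auto-encoder learning.
x_smooth = S @ x
```

In RINC the weights of such smoothing layers are not fixed as here but learned, constrained jointly by the downstream sparse auto-encoder via back-propagation.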
Journal Name:
2018 IEEE International Conference on Data Mining (ICDM)
Page Range / eLocation ID:
1079 - 1084
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep neural network clustering is superior to conventional clustering methods due to deep feature extraction and nonlinear dimensionality reduction. Nevertheless, a deep neural network yields only a rough representation of the inherent relationships among the data points, so it remains difficult for deep neural networks to exploit an effective structure for direct clustering. To address this issue, we propose a robust embedded deep K-means clustering (RED-KC) method. The proposed RED-KC approach utilizes the δ-norm metric to constrain the feature-mapping process of the auto-encoder network, so that data are mapped to a latent feature space that is more conducive to robust clustering. Compared to existing auto-encoder networks with a fixed prior, the proposed RED-KC is adaptive during the process of feature mapping. More importantly, the proposed RED-KC embeds the clustering process in the auto-encoder network, so that deep feature extraction and clustering can be performed simultaneously. Accordingly, a direct and efficient clustering is obtained in a single step, avoiding the inconvenience of multiple separate stages, namely the loss of pivotal information and correlation. Finally, extensive experiments are provided to validate the effectiveness of the proposed approach.
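The "embedded clustering" idea above can be sketched as alternating a K-means-style step on latent codes with the auto-encoder's reconstruction objective. A minimal sketch, assuming a toy 2-D latent space standing in for the encoder output (the data, centroids, and loss weighting are illustrative, not RED-KC's actual formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "latent codes" Z = f(X): two tight blobs in 2-D.
Z = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])

# One K-means assignment/update step on the latent codes.
C = np.array([[0.0, 0.0], [4.0, 4.0]])            # initial centroids
d = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
labels = d.argmin(axis=1)                          # cluster assignments
C = np.array([Z[labels == k].mean(axis=0) for k in range(2)])

# In a joint scheme, this clustering loss would be added to the
# auto-encoder reconstruction loss and both minimized together.
cluster_loss = ((Z - C[labels]) ** 2).sum()
```

The point of the joint objective is that the encoder is pushed toward a latent space where such K-means steps succeed, rather than clustering being a separate post-processing stage.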
  2. Abstract

    Tissue dynamics play critical roles in many physiological functions and provide important metrics for clinical diagnosis. Capturing real-time, high-resolution 3D images of tissue dynamics, however, remains a challenge. This study presents a hybrid physics-informed neural network algorithm that infers 3D flow-induced tissue dynamics and other physical quantities from sparse 2D images. The algorithm combines a recurrent neural network model of soft tissue with a differentiable fluid solver, leveraging prior knowledge in solid mechanics to project the governing equation onto a discrete eigenspace. The algorithm uses a long short-term memory (LSTM)-based recurrent encoder-decoder connected with a fully connected neural network to capture the temporal dependence of the flow-structure interaction. The effectiveness and merit of the proposed algorithm are demonstrated on synthetic data from a canine vocal fold model and experimental data from excised pigeon syringes. The results show that the algorithm accurately reconstructs 3D vocal dynamics, aerodynamics, and acoustics from sparse 2D vibration profiles.

  3. This work proposes a domain-informed neural network architecture for experimental particle physics, using particle interaction localization with the time-projection chamber (TPC) technology for dark matter research as an example application. A key feature of the signals generated within the TPC is that they allow localization of particle interactions through a process called reconstruction (i.e., inverse-problem regression). While multilayer perceptrons (MLPs) have emerged as a leading contender for reconstruction in TPCs, such a black-box approach does not reflect prior knowledge of the underlying scientific processes. This paper looks anew at neural network-based interaction localization and encodes prior detector knowledge, in terms of both signal characteristics and detector geometry, into the feature encoding and the output layers of a multilayer (deep) neural network. The resulting neural network, termed Domain-informed Neural Network (DiNN), limits the receptive fields of the neurons in the initial feature-encoding layers in order to account for the spatially localized nature of the signals produced within the TPC. This aspect of the DiNN, which has similarities with the emerging area of graph neural networks in that the neurons in the initial layers connect only to a handful of neurons in the succeeding layer, significantly reduces the number of parameters in the network in comparison to an MLP. In addition, to account for the detector geometry, the output layers of the network are modified using two geometric transformations to ensure the DiNN produces localizations within the interior of the detector. The end result is a neural network architecture that has 60% fewer parameters than an MLP but still achieves similar localization performance, and that provides a path to future architectural developments with improved performance owing to its ability to encode additional domain knowledge into the architecture.
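The receptive-field limitation described above can be illustrated with a masked linear layer: each hidden neuron is connected only to a small window of spatially neighboring inputs, which directly cuts the parameter count relative to a dense layer. A minimal sketch under assumed sizes (6 inputs on a line, window of 3); the geometry and numbers are hypothetical, not the DiNN's actual detector layout:

```python
import numpy as np

# 6 input channels arranged on a line; each hidden neuron "sees" only a
# window of 3 neighboring channels (a spatially restricted receptive field).
n_in, n_hid, width = 6, 6, 3
mask = np.zeros((n_hid, n_in))
for i in range(n_hid):
    lo = max(0, i - width // 2)
    mask[i, lo:lo + width] = 1.0

rng = np.random.default_rng(0)
W = rng.normal(size=(n_hid, n_in)) * mask   # masked weights stay sparse

# Parameter count drops versus a fully connected layer.
dense_params = n_in * n_hid                 # 36
masked_params = int(mask.sum())             # 17 here
```

This is also the sense in which such layers resemble graph neural networks: connectivity follows a fixed neighborhood structure rather than being all-to-all.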
  4. The authors consider a bistatic configuration with a stationary transmitter transmitting unknown waveforms of opportunity and a single moving receiver, and present a deep learning (DL) framework for passive synthetic aperture radar (SAR) imaging. They approach DL from an optimisation-based perspective and formulate image reconstruction as a machine learning task. By unfolding the iterations of a proximal gradient descent algorithm, they construct a deep recurrent neural network (RNN) that is parameterised by the transmitted waveforms. They cascade the RNN structure with a decoder stage to form a recurrent auto-encoder architecture. They then use backpropagation to learn transmitted waveforms by training the network in an unsupervised manner using SAR measurements. The highly non-convex problem of backpropagation is guided to a feasible solution over the parameter space by initialising the network with the known components of the SAR forward model. Moreover, prior information regarding the waveform structure is incorporated during initialisation and backpropagation. They demonstrate the effectiveness of the DL-based approach through numerical simulations that show focused, high-contrast imagery using a single receiver antenna at realistic signal-to-noise ratio levels.
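Unfolding a proximal gradient algorithm, as described above, means treating each iteration as one network layer. A minimal sketch of the underlying iteration (ISTA for a toy sparse inverse problem y = Ax; the forward matrix, step size, and threshold are illustrative stand-ins, not the paper's SAR forward model):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10)) / np.sqrt(20)   # toy forward model
x_true = np.zeros(10)
x_true[[2, 7]] = 1.0                          # sparse ground truth
y = A @ x_true                                # noiseless measurements

# Each loop iteration corresponds to one "layer" of the unrolled network;
# in learned unrolling, A (or the waveform parameters) would be trainable.
x = np.zeros(10)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / spectral-norm^2
for _ in range(200):
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * 0.05)
```

In the unrolled-network view, the gradient step and threshold become parameterised layer operations, and backpropagation through the stack adjusts them from measurement data alone.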
  5. Karlapalem, Kamal; Cheng, Hong; Ramakrishnan, Naren; Reddy, P. Krishna; Srivastava, Jaideep; Chakraborty, Tanmoy (Eds.)
    Constrained learning, a weakly supervised learning task, aims to incorporate domain constraints to learn models without requiring labels for each instance. Because weak-supervision knowledge is useful and easy to obtain, constrained learning outperforms unsupervised learning in performance and is preferable to supervised learning in terms of labeling costs. To date, constrained learning, especially constrained clustering, has been extensively studied, but has primarily focused on data in Euclidean space. In this paper, we propose a weak-supervision network embedding (WSNE) method for constrained learning on graphs. Because no label is available for individual nodes, we propose a new loss function to quantify the constraint-based loss, and integrate this loss into a combined framework of a graph convolutional network (GCN) and a variational graph auto-encoder (VGAE) to jointly model graph structure and node attributes. The joint optimization allows WSNE to learn embeddings that not only preserve network topology and content but also satisfy the constraints. Experiments show that WSNE outperforms baselines on constrained graph learning tasks, including constrained graph clustering and constrained graph classification.
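A common form for such a constraint-based loss (not necessarily WSNE's exact one) uses must-link and cannot-link pairs: constrained pairs are pulled together in the embedding space, while cannot-link pairs are pushed apart with a margin hinge. A minimal sketch with hypothetical embeddings and constraints:

```python
import numpy as np

# Toy node embeddings (4 nodes in 2-D) and weak-supervision constraints.
Z = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [3.0, 3.0],
              [3.1, 3.0]])
must_link = [(0, 1), (2, 3)]
cannot_link = [(0, 2)]
margin = 1.0

def constraint_loss(Z):
    loss = 0.0
    for i, j in must_link:          # pull constrained pairs together
        loss += np.sum((Z[i] - Z[j]) ** 2)
    for i, j in cannot_link:        # push apart, hinged on a margin
        d = np.linalg.norm(Z[i] - Z[j])
        loss += max(0.0, margin - d) ** 2
    return loss
```

In a joint framework, this term would be added to the GCN/VGAE reconstruction objective so that gradient descent shapes embeddings satisfying both the graph structure and the constraints.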