

Search for: All records

Award ID contains: 1719932

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Sparse representation based classification (SRC) methods have achieved remarkable results, but SRC still suffers from the need for sufficiently many training samples, insufficient use of test samples, and instability of the representation. In this paper, a stable inverse projection representation based classification (IPRC) is presented to tackle these problems by using test samples effectively. An inverse projection representation (IPR) is first proposed and its feasibility and stability are analyzed. A classification criterion named the category contribution rate is constructed to match the IPR and complete the classification. Moreover, a statistical measure is introduced to quantify the stability of representation-based classification methods. Based on the IPRC technique, a robust tumor recognition framework is presented for interpreting microarray gene expression data, where a two-stage hybrid gene selection method is introduced to select informative genes. Finally, a functional analysis of candidate pathogenicity-related genes is given. Extensive experiments on six public tumor microarray gene expression datasets demonstrate that the proposed technique is competitive with state-of-the-art methods.
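The SRC baseline that IPRC improves on can be illustrated with a minimal residual-based classifier. The sketch below is hypothetical: it substitutes a per-class least-squares fit for the l1-regularized coding of full SRC, and classifies a test sample by the class whose training samples reconstruct it with the smallest residual.

```python
import numpy as np

def src_like_classify(X_train, y_train, x_test):
    """Assign x_test to the class whose training samples best
    reconstruct it (smallest least-squares residual per class).
    A simplified stand-in for the l1-based coding of true SRC."""
    residuals = {}
    for c in np.unique(y_train):
        D = X_train[y_train == c].T          # dictionary of class-c samples
        coef, *_ = np.linalg.lstsq(D, x_test, rcond=None)
        residuals[c] = np.linalg.norm(x_test - D @ coef)
    return min(residuals, key=residuals.get)
```

The per-class dictionary `D` plays the role of the training-sample basis; IPRC's inverse projection additionally exploits the test samples themselves, which this sketch omits.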
  2. We develop a unified level-bundle method, called the accelerated constrained level-bundle (ACLB) algorithm, for solving constrained convex optimization problems where the objective and constraint functions can be nonsmooth, weakly smooth, and/or smooth. ACLB employs Nesterov's accelerated gradient technique and hence retains the same iteration complexity as existing bundle-type methods when the objective or one of the constraint functions is nonsmooth. More importantly, ACLB can significantly reduce iteration complexity when the objective and all constraints are (weakly) smooth. In addition, if the objective contains a nonsmooth component that can be written as a specific form of maximum, we show that the iteration complexity for this component can be much lower than that for a general nonsmooth objective function. Numerical results demonstrate the effectiveness of the proposed algorithm.
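The acceleration device ACLB borrows is Nesterov's momentum scheme for smooth convex minimization. The sketch below shows only that accelerated gradient iteration on its own (it is not ACLB, and omits the bundle/level machinery entirely); the step size 1/L and the momentum sequence are the standard textbook choices.

```python
import numpy as np

def nesterov_agd(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient descent for a convex function
    with L-Lipschitz gradient: gradient step at the extrapolated
    point y, then momentum extrapolation with weights t_k."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L           # gradient step at y
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum
        x, t = x_next, t_next
    return x
```

On smooth problems this iteration attains the O(1/k^2) rate, which is the source of the complexity reduction ACLB achieves in the (weakly) smooth regime.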
  3. Since the cost of labeling data keeps rising, we aim to make full use of the large amount of unlabeled data and improve image classification by adding unlabeled samples to the training set. In addition, we seek to handle two tasks in a unified way: clustering the unlabeled data and recognizing the query image. We achieve this by designing a novel sparse model based on the manifold assumption, which has been proved to work well in many tasks. Based on the assumption that images of the same class lie on a sub-manifold, and that an image can be approximately represented as a linear combination of its neighboring data due to the locally linear property of manifolds, we propose a sparse representation model on the manifold. Specifically, there are two regularizations: a variant of the trace Lasso norm and a manifold Laplacian regularization. The first regularization term makes the representation coefficients sparse between groups and dense within a group. The second term is a manifold Laplacian regularization by which labels can be accurately propagated from labeled to unlabeled data. An Augmented Lagrange Multiplier (ALM) scheme and a Gauss-Seidel Alternating Direction Method of Multipliers (GS-ADMM) are given to solve the problem numerically. We conduct experiments on three human face databases and compare the proposed method with several state-of-the-art methods. For each subject, some labeled face images are randomly chosen for training the supervised methods, and a small number of unlabeled images are added to form the training set of the proposed approach. All experiments show that our method achieves better classification results due to the addition of unlabeled samples.
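The Laplacian-regularization idea behind propagating labels along the manifold can be isolated in a small harmonic-function sketch. This is hypothetical and covers only the graph-Laplacian step, not the trace-Lasso coding or the GS-ADMM solver: labels on unlabeled nodes are chosen so the label function is harmonic with respect to the graph.

```python
import numpy as np

def propagate_labels(W, labels, labeled_mask):
    """Harmonic label propagation on a similarity graph W:
    with graph Laplacian Lap = D - W, solve
    Lap_uu f_u = -Lap_ul f_l for the unlabeled nodes."""
    Lap = np.diag(W.sum(axis=1)) - W
    u = ~labeled_mask
    l = labeled_mask
    f_u = np.linalg.solve(Lap[np.ix_(u, u)], -Lap[np.ix_(u, l)] @ labels[l])
    return f_u
```

An unlabeled node connected equally to a 0-labeled and a 1-labeled neighbor receives the average label 0.5, which is exactly the smoothness the manifold Laplacian term enforces.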
  4. We propose a novel family of connectionist models based on kernel machines and consider the problem of learning layer by layer a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to “kernelize” (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons. In terms of learning, when learning a feedforward deep architecture in a supervised setting, one needs to train all the components simultaneously using backpropagation (BP) since there are no explicit targets for the hidden layers (Rumelhart, Hinton, & Williams, 1986). We consider without loss of generality the two-layer case and present a general framework that explicitly characterizes a target for the hidden layer that is optimal for minimizing the objective function of the network. This characterization then makes possible a purely greedy training scheme that learns one layer at a time, starting from the input layer. We provide instantiations of the abstract framework under certain architectures and objective functions. Based on these instantiations, we present a layer-wise training algorithm for an l-layer feedforward network for classification, where l≥2 can be arbitrary. This algorithm can be given an intuitive geometric interpretation that makes the learning dynamics transparent. Empirical results are provided to complement our theory. We show that the kernelized networks, trained layer-wise, compare favorably with classical kernel machines as well as other connectionist models trained by BP. We also visualize the inner workings of the greedy kernelized models to validate our claim on the transparency of the layer-wise algorithm. 
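The basic "kernelized neuron" with an explicit hidden-layer target can be sketched as kernel ridge regression fit to that target. This is a hypothetical simplification of the paper's framework: the function name `fit_kernel_unit`, the RBF kernel, and the regularization weight are all illustrative choices, not the paper's exact instantiation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_unit(X, targets, lam=1e-3, gamma=1.0):
    """One kernel-machine 'unit': kernel ridge regression fit to an
    explicit target for this layer, the building block of a greedy
    layer-wise training scheme."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), targets)
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha
```

Given a characterized hidden-layer target, each layer can be fit this way in sequence, input layer first, with no backpropagation through the stack.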
  5. Dynamic positron emission tomography (dPET) is a nuclear medical imaging technology that shows the changes in radioactivity over time. In this article, we propose a structure and tracer kinetics-constrained reconstruction framework for dPET imaging. Given the Poisson nature of PET imaging, we integrate the sparse penalty on a dual dictionary into a Poisson-likelihood estimator. Explicit anatomical constraints with a structural dictionary constructed from magnetic resonance or computed tomography images are employed to take advantage of the anatomical imaging modalities. In the kinetic dictionary, we treat tracer kinetics as random variables in a physiologically plausible range based on a compartmental model. We demonstrate the performance of our proposed framework with a direct simulated data set and real patient data. 
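The Poisson likelihood underlying PET reconstruction is classically maximized with the MLEM multiplicative update; the sketch below shows only that baseline update, while the paper's framework adds dual-dictionary sparsity and kinetic constraints on top, which are omitted here.

```python
import numpy as np

def mlem(A, y, iters=500):
    """Classical MLEM iteration for Poisson data y with system
    matrix A:  x <- x / (A^T 1) * A^T (y / (A x)).
    Preserves nonnegativity through the multiplicative form."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(iters):
        x *= (A.T @ (y / (A @ x))) / sens
    return x
```

Penalized variants replace this likelihood-only update with one that also descends the dictionary-sparsity penalties, which is where the structural and kinetic dictionaries enter.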
  6. Image-based breast tumor classification is an active and challenging problem. In this paper, a robust breast tumor classification framework is presented based on deep feature representation learning and exploiting available information in existing samples. Feature representation learning of mammograms is fulfilled by a modified nonnegative matrix factorization model called LPML-LRNMF, which is motivated by hierarchical learning and layer-wise pre-training (LP) strategy in deep learning. Low-rank (LR) constraint is integrated into the feature representation learning model by considering 
  7. We propose a new two-stage joint image reconstruction method by recovering edges directly from observed data and then assembling an image using the recovered edges. More specifically, we reformulate joint image reconstruction with vectorial total-variation regularization as an l1 minimization problem of the Jacobian of the underlying multimodality or multicontrast images. We provide a detailed derivation of data fidelity for the Jacobian in the Radon and Fourier transform domains. The new minimization problem yields an optimal convergence rate higher than that of existing primal-dual based reconstruction algorithms, and the per-iteration cost remains low by using closed-form matrix-valued shrinkages. We conducted numerical tests on a number of multicontrast CT and MR image datasets, which demonstrate that the proposed method significantly improves reconstruction efficiency and accuracy compared to state-of-the-art joint image reconstruction methods.
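In its simplest vectorial form, the closed-form shrinkage used inside such primal-dual TV solvers is row-wise soft-thresholding of a gradient/Jacobian field. The sketch below illustrates that generic operator, not the paper's exact matrix-valued version: each row is pulled toward zero by tau in Euclidean norm and zeroed out if its norm is below tau.

```python
import numpy as np

def row_shrink(V, tau):
    """Row-wise (vectorial) soft-thresholding:
    each row v -> max(1 - tau/||v||, 0) * v."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)
    return V * scale
```

Because the operator is elementwise over rows and available in closed form, each iteration stays cheap, which is the source of the low per-iteration cost noted above.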
  8. In this study, we explore the use of low-rank and sparse constraints for the noninvasive estimation of epicardial and endocardial extracellular potentials from body-surface electrocardiographic data to locate the focus of premature ventricular contractions (PVCs). The proposed strategy formulates the dynamic spatiotemporal distribution of cardiac potentials by means of a low-rank and sparse decomposition, where the low-rank term represents the smooth background and the anomalous potentials are extracted in the sparse matrix. Compared to most previous potential-based approaches, the proposed low-rank and sparse constraints are batch spatiotemporal constraints that capture the underlying relationship of the dynamic potentials. The resulting optimization problem is solved using the alternating direction method of multipliers (ADMM). Three sets of simulation experiments with eight different ventricular pacing sites demonstrate that the proposed model outperforms the existing Tikhonov regularization (zero-order, second-order) and L1-norm based methods at accurately reconstructing the potentials and locating the ventricular pacing sites. Experiments on a total of 39 cases of real PVC data also validate the ability of the proposed method to correctly locate ectopic pacing sites.
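The low-rank-plus-sparse splitting can be illustrated with a generic robust-PCA decomposition solved by ADMM. The sketch below is a standard inexact augmented-Lagrangian scheme, not the paper's exact solver for cardiac potentials: it separates a data matrix D into a smooth low-rank background L (singular-value thresholding) and a sparse anomaly matrix S (elementwise shrinkage).

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def shrink(M, tau):
    """Elementwise soft-thresholding toward zero."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca_admm(D, lam=None, mu=1.0, iters=500):
    """Decompose D ~ L + S (L low-rank, S sparse) by ADMM on
    min ||L||_* + lam ||S||_1  s.t.  L + S = D."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))     # common default weight
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)      # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)   # sparse update
        Y += mu * (D - L - S)                  # dual ascent
    return L, S
```

In the dynamic-potential setting, columns of D are potential snapshots over time, so L captures the slowly varying background and S isolates the ectopic anomalies.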