

Search for: All records

Award ID contains: 1808159


  1. Generalized canonical correlation analysis (GCCA) aims to learn common low-dimensional representations from multiple "views" of the data (e.g., audio and video of the same event). In the era of big data, GCCA computation encounters many new challenges. In particular, distributed optimization for GCCA, which is well-motivated in applications like the Internet of Things and parallel computing, may incur prohibitively high communication costs. To address this challenge, this work proposes a communication-efficient distributed GCCA algorithm under the popular MAX-VAR GCCA paradigm. A quantization strategy for information exchange among the computing agents is employed in the proposed algorithm. It is observed that our design, leveraging the idea of error feedback-based quantization, can reduce communication cost by at least 90% while maintaining essentially the same GCCA performance as the unquantized version. Furthermore, the proposed method is guaranteed to converge to a neighborhood of the optimal solution at a geometric rate, even under aggressive quantization. The effectiveness of our method is demonstrated using both synthetic and real data experiments.
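The error-feedback quantization idea mentioned in the abstract can be illustrated with a minimal, hypothetical sketch (this is not the paper's algorithm; the uniform quantizer, message size, and level count are our illustrative choices): each agent quantizes the message it transmits, remembers the quantization error, and adds it back into the next round's message so that errors do not accumulate.

```python
import numpy as np

def quantize(x, levels=4):
    """Uniform quantizer: snap each entry to one of `levels` values
    spanning [x.min(), x.max()]. Deliberately coarse."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

def send_with_error_feedback(message, residual):
    """Quantize (message + carried residual); carry the new
    quantization error into the next round."""
    corrected = message + residual
    q = quantize(corrected)
    new_residual = corrected - q
    return q, new_residual

# Toy check: over many rounds, the accumulated transmitted signal
# tracks the accumulated true signal despite coarse quantization,
# because the sum telescopes down to the final residual.
rng = np.random.default_rng(0)
residual = np.zeros(8)
sent_sum = np.zeros(8)
true_sum = np.zeros(8)
for _ in range(200):
    msg = rng.standard_normal(8)
    q, residual = send_with_error_feedback(msg, residual)
    sent_sum += q
    true_sum += msg
# Gap equals the last residual (bounded), not a growing drift.
print(np.abs(sent_sum - true_sum).max())
```

The design point this illustrates: without the carried residual, each round's quantization error would accumulate across iterations; with error feedback, the accumulated error stays bounded by a single round's quantization error.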
  2. Canonical correlation analysis (CCA) has been essential in unsupervised multimodal/multiview latent representation learning and data fusion. Classic CCA extracts shared information from multiple modalities of data using linear transformations. In recent years, deep neural network-based nonlinear feature extractors were combined with CCA, giving rise to new variants known as the "DeepCCA" line of work. These approaches were shown to have enhanced performance in many applications. However, theoretical support for DeepCCA is often lacking. To address this challenge, the recent work of Lyu and Fu (2020) showed that, under a reasonable post-nonlinear generative model, a carefully designed DeepCCA criterion provably removes unknown distortions in data generation and identifies the shared information across modalities. Nonetheless, a critical assumption used by Lyu and Fu (2020) for identifiability analysis was that unlimited data is available, which is unrealistic. This brief paper puts forth a finite-sample analysis of the DeepCCA method of Lyu and Fu (2020). The main result is that the finite-sample version of the method can still estimate the shared information with guaranteed accuracy when the number of samples is sufficiently large. Our analytical approach is a nontrivial integration of statistical learning, numerical differentiation, and robust system identification, which may be of interest beyond the scope of DeepCCA and benefit other unsupervised learning paradigms.
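As a point of reference for the classic linear CCA that DeepCCA generalizes, here is a minimal two-view sketch using the standard whitened-cross-covariance SVD formulation (the function name, toy data, and regularizer are our illustrative choices, not from the paper):

```python
import numpy as np

def linear_cca(X, Y, k=1, reg=1e-6):
    """Classic linear CCA: find projections of the two views whose
    images are maximally correlated. X: (n, dx), Y: (n, dy)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten each view via Cholesky, then take the SVD of the
    # whitened cross-covariance; its singular values are the
    # canonical correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U[:, :k]      # projection for view X
    B = Wy @ Vt.T[:, :k]   # projection for view Y
    return A, B, s[:k]

# Two views sharing a 1-D latent signal plus independent noise.
rng = np.random.default_rng(1)
z = rng.standard_normal((500, 1))
X = z @ rng.standard_normal((1, 5)) + 0.1 * rng.standard_normal((500, 5))
Y = z @ rng.standard_normal((1, 4)) + 0.1 * rng.standard_normal((500, 4))
A, B, corr = linear_cca(X, Y)
print(corr)  # top canonical correlation should be near 1 for this high-SNR toy
```

DeepCCA-style methods replace the linear projections `A` and `B` with neural feature extractors; the identifiability question the abstract discusses is when such nonlinear extractors provably recover the shared latent `z`.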
  3.
  4.
    Canonical polyadic decomposition (CPD) has been a workhorse for multimodal data analytics. This work puts forth a stochastic algorithmic framework for CPD under β-divergence, which is well-motivated in statistical learning—where the Euclidean distance is typically not preferred. Despite the existence of a series of prior works addressing this topic, pressing computational and theoretical challenges, e.g., scalability and convergence issues, still remain. In this paper, a unified stochastic mirror descent framework is developed for large-scale β-divergence CPD. Our key contribution is the integrated design of a tensor fiber sampling strategy and a flexible stochastic Bregman divergence-based mirror descent iterative procedure, which significantly reduces the computation and memory costs per iteration for various β. Leveraging the fiber sampling scheme and the multilinear algebraic structure of low-rank tensors, the proposed lightweight algorithm also ensures global convergence to a stationary point under mild conditions. Numerical results on synthetic and real data show that our framework attains significant computational savings compared with state-of-the-art methods.
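The β-divergence family the abstract refers to interpolates between several classic fitting criteria: β = 2 gives the squared Euclidean distance, β = 1 the generalized Kullback–Leibler divergence, and β = 0 the Itakura–Saito divergence. A minimal elementwise implementation (a standard textbook formula, not the paper's algorithm) makes the family concrete:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x || y), summed over entries.
    beta = 2: squared Euclidean / 2; beta = 1: generalized KL;
    beta = 0: Itakura-Saito. Entries must be positive for beta <= 1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if beta == 1:   # generalized KL (limit case of the general formula)
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:   # Itakura-Saito (the other limit case)
        return np.sum(x / y - np.log(x / y) - 1)
    return np.sum(
        (x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
        / (beta * (beta - 1))
    )

x = np.array([1.0, 2.0, 3.0])
print(beta_divergence(x, x, 0.5))      # 0.0: any beta-divergence of x from itself
print(beta_divergence(x, x + 1, 2.0))  # 1.5: 0.5 * ||x - y||^2
```

Mirror descent with a Bregman divergence matched to the chosen β is what lets the framework keep cheap multiplicative-style updates even when the loss is not Euclidean.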
  5.
  6.
    This work studies the model identification problem of a class of post-nonlinear mixture models in the presence of dependent latent components. Particularly, our interest lies in latent components that are nonnegative and sum-to-one. This problem is motivated by applications such as hyperspectral unmixing under nonlinear distortion effects. Many prior works tackled nonlinear mixture analysis using statistical independence among the latent components, which is not applicable in our case. A recent work by Yang et al. put forth a solution for this problem leveraging functional equations. However, the identifiability conditions derived there are somewhat restrictive. The associated implementation also has difficulties: the function approximator used in their work may not be able to represent general nonlinear distortions, and the formulated constrained neural network optimization problem may be challenging to handle. In this work, we advance both the theoretical and practical aspects of the problem of interest. On the theory side, we offer a new identifiability condition that circumvents a series of stringent assumptions in Yang et al.'s work. On the algorithm side, we propose an easy-to-implement unconstrained neural network-based algorithm, without sacrificing function approximation capabilities. Numerical experiments are employed to support our design.
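The post-nonlinear generative model described above can be made concrete with a small data-generation sketch: latents on the probability simplex (nonnegative, sum-to-one, hence dependent) are mixed linearly and then pushed through unknown per-channel nonlinearities. The dimensions and the specific distortion functions below are our illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_latent, n_obs = 1000, 3, 5

# Latent components: nonnegative, sum-to-one (points on the simplex),
# hence statistically dependent -- classic ICA independence does not hold.
s = rng.dirichlet(np.ones(n_latent), size=n_samples)   # (1000, 3)

A = rng.uniform(0.1, 1.0, size=(n_obs, n_latent))      # mixing matrix
linear_mix = s @ A.T                                   # (1000, 5)

# Unknown componentwise (post-) nonlinear distortions, one per
# observed channel; these particular functions are hypothetical.
distortions = [np.tanh, np.sqrt, lambda t: t**2,
               lambda t: np.log1p(t), lambda t: 1 - np.exp(-t)]
x = np.stack(
    [g(linear_mix[:, i]) for i, g in enumerate(distortions)],
    axis=1,
)
print(x.shape)  # (1000, 5): observed nonlinearly distorted mixtures
```

The identification task the abstract addresses is the inverse of this sketch: given only `x`, recover the effect of the unknown distortions (and ultimately the latent `s`) despite the latents being dependent.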