-
Free, publicly-accessible full text available January 1, 2024
-
Unsupervised mixture learning (UML) aims to identify linearly or nonlinearly mixed latent components in a blind manner. UML is known to be challenging: even learning linear mixtures requires highly nontrivial analytical tools, e.g., independent component analysis or nonnegative matrix factorization. In this work, the post-nonlinear (PNL) mixture model, where unknown element-wise nonlinear functions are imposed on a linear mixture, is revisited. The PNL model is widely employed in fields ranging from brain signal classification, speech separation, and remote sensing to causal discovery. To identify and remove the unknown nonlinear functions, existing works often assume different properties of the latent components (e.g., statistical independence or probability-simplex structures). This work shows that, under a carefully designed UML criterion, the existence of a nontrivial null space associated with the underlying mixing system suffices to guarantee identification/removal of the unknown nonlinearity. Compared to prior works, our finding largely relaxes the conditions for attaining PNL identifiability, and thus may benefit applications where no strong structural information on the latent components is known. A finite-sample analysis is offered to characterize the performance of the proposed approach under realistic settings. To implement the proposed learning criterion, a block coordinate descent algorithm is proposed. A series of numerical …
Free, publicly-accessible full text available December 1, 2023
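The PNL generative process the abstract describes (unknown element-wise nonlinearities applied on top of an unknown linear mixture) can be sketched as synthetic data generation. The dimensions, the mixing matrix, and the specific nonlinear functions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 3, 4, 1000                 # latents, observed channels, samples (illustrative)

S = rng.standard_normal((K, N))      # latent components (no special structure assumed)
A = rng.standard_normal((M, K))      # unknown linear mixing matrix
Z = A @ S                            # linear mixture

# Unknown element-wise invertible nonlinearities g_m, one per observed channel
g = [np.tanh, np.arcsinh, lambda v: v + v**3, np.cbrt]
X = np.vstack([g[m](Z[m]) for m in range(M)])

print(X.shape)                       # each column of X is one observation x = g(A s)
```

UML under this model amounts to recovering S (up to inconsequential ambiguities) from X alone, with both A and the functions g_m unknown.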
-
This paper focuses on downlink channel state information (CSI) acquisition in a frequency division duplex (FDD) massive MIMO system. In such systems, the base station (BS) obtains the downlink CSI from the mobile users' feedback. A key consideration is to reduce the feedback overhead while ensuring that the BS accurately recovers the downlink CSI. Existing approaches often resort to dictionary-based or tensor/matrix decomposition techniques, which either exhibit unsatisfactory accuracy or induce a heavy computational load at the mobile end. To circumvent these challenges, this work formulates the limited channel feedback problem as a quantized and compressed matrix recovery problem. The formulation presents a computationally challenging maximum likelihood estimation (MLE) problem. An ADMM algorithm leveraging existing harmonic retrieval tools is proposed to effectively tackle the optimization problem. Simulations show that the proposed method attains promising channel estimation accuracy using a much smaller number of feedback bits relative to existing methods.
Free, publicly-accessible full text available October 31, 2023
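The "quantized and compressed" feedback idea can be illustrated with a toy mobile-side pipeline: compress a low-rank channel matrix with a random matrix, then feed back a few bits per measurement. All sizes, the compression matrix, and the uniform quantizer below are assumptions for illustration; the paper's MLE/ADMM recovery at the BS is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (not the paper's): a rank-r downlink channel matrix
Nt, Nc, r = 8, 16, 2
H = rng.standard_normal((Nt, r)) @ rng.standard_normal((r, Nc))

# Mobile side: compress with a random matrix (fewer measurements than Nt*Nc entries)
M = 2 * r * (Nt + Nc)                        # 96 measurements vs. 128 entries
Phi = rng.standard_normal((M, Nt * Nc)) / np.sqrt(M)
y = Phi @ H.ravel()

B = 4                                        # feedback bits per measurement
lo, hi = y.min(), y.max()
step = (hi - lo) / (2**B - 1)
idx = np.round((y - lo) / step)              # integer indices actually fed back
y_hat = idx * step + lo                      # BS-side dequantization

# Quantization error is bounded by half a step; the BS would then run an
# MLE-based recovery (e.g., the paper's ADMM) on y_hat to estimate H.
print(bool(np.max(np.abs(y - y_hat)) <= step / 2 + 1e-9))
```

The feedback cost here is M*B bits rather than Nt*Nc full-precision values, which is the overhead reduction the abstract targets.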
-
Nonlinear independent component analysis (nICA) aims to recover statistically independent latent components that are mixed by unknown nonlinear functions. Central to nICA is the identifiability of the latent components, which had been elusive until very recently. Specifically, Hyvärinen et al. have shown that the nonlinearly mixed latent components are identifiable (up to often inconsequential ambiguities) under a generalized contrastive learning (GCL) formulation, given that the latent components are independent conditioned on a certain auxiliary variable. The GCL-based identifiability of nICA is elegant, and establishes interesting connections between nICA and popular unsupervised/self-supervised learning paradigms in representation learning, causal learning, and factor disentanglement. However, existing identifiability analyses of nICA all build upon an unlimited-sample assumption and the use of ideal universal function learners, which creates a non-negligible gap between theory and practice. Closing the gap is a nontrivial challenge, as there is a lack of established "textbook" routine for finite-sample analysis of such unsupervised problems. This work puts forth a finite-sample identifiability analysis of GCL-based nICA. Our analytical framework judiciously combines the properties of the GCL loss function, statistical generalization analysis, and numerical differentiation. Our framework also takes the learning function's approximation error into consideration, and reveals an intuitive trade-off between …
Free, publicly-accessible full text available July 1, 2023
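The key assumption, latent components that are independent conditioned on an auxiliary variable, can be sketched as synthetic data generation. The dimensions, the label-dependent scaling, and the random two-layer map used as the mixing function are illustrative assumptions; in particular, this random map is not guaranteed invertible, as the identifiability theory requires:

```python
import numpy as np

rng = np.random.default_rng(2)
K, N, C = 2, 2000, 5                 # latents, samples, auxiliary classes (illustrative)

# Latents are independent *conditioned on* the auxiliary variable u: here each
# class u sets per-component scales (a nonstationary-variance style setup)
u = rng.integers(0, C, size=N)
scales = rng.uniform(0.5, 2.0, size=(C, K))
S = scales[u] * rng.standard_normal((N, K))

# Unknown nonlinear mixing f, modeled as a random two-layer map for illustration
W1, W2 = rng.standard_normal((K, K)), rng.standard_normal((K, K))
X = np.tanh(S @ W1) @ W2

# GCL would train a classifier to distinguish true (x, u) pairs from pairs with
# shuffled u; its learned features then identify S up to element-wise ambiguities.
print(X.shape, u.shape)
```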
-
Communication-Efficient Distributed MAX-VAR Generalized CCA via Error Feedback-Assisted Quantization

Generalized canonical correlation analysis (GCCA) aims to learn common low-dimensional representations from multiple "views" of the data (e.g., audio and video of the same event). In the era of big data, GCCA computation encounters many new challenges. In particular, distributed optimization for GCCA, which is well-motivated in applications like the internet of things and parallel computing, may incur prohibitively high communication costs. To address this challenge, this work proposes a communication-efficient distributed GCCA algorithm under the popular MAX-VAR GCCA paradigm. A quantization strategy for information exchange among the computing agents is employed in the proposed algorithm. It is observed that our design, leveraging the idea of error feedback-based quantization, can reduce communication cost by at least 90% while maintaining essentially the same GCCA performance as the unquantized version. Furthermore, the proposed method is guaranteed to converge to a neighborhood of the optimal solution at a geometric rate, even under aggressive quantization. The effectiveness of our method is demonstrated using both synthetic and real data experiments.
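The error-feedback idea (quantize coarsely, but carry the quantization residual forward and add it back before the next quantization) can be sketched on a toy quantized gradient update. The quantizer, step size, and quadratic objective below are assumptions for illustration, not the paper's MAX-VAR GCCA updates:

```python
import numpy as np

rng = np.random.default_rng(3)

def ef_quantize(g, mem, bits=2):
    """Quantize g plus the carried-over residual; return the new residual (error feedback)."""
    v = g + mem
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2**bits - 1
    q = np.round(v / scale * levels) / levels * scale    # coarse uniform quantizer
    return q, v - q                                      # transmitted message, new memory

# Toy quadratic objective standing in for an agent's subproblem
x_star = rng.standard_normal(10)
x, mem = np.zeros(10), np.zeros(10)
for _ in range(200):
    grad = 2 * (x - x_star)
    msg, mem = ef_quantize(grad, mem, bits=2)            # only msg is communicated
    x -= 0.1 * msg

print(bool(np.linalg.norm(x - x_star) < 1e-2))           # converges despite 2-bit messages
```

Without the memory term, the same coarse quantizer stalls at a quantization-limited error floor; the accumulated residual is what lets aggressive quantization coexist with geometric convergence.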
-
Canonical correlation analysis (CCA) has been essential in unsupervised multimodal/multiview latent representation learning and data fusion. Classic CCA extracts shared information from multiple modalities of data using linear transformations. In recent years, deep neural network-based nonlinear feature extractors were combined with CCA to come up with new variants, namely the "DeepCCA" line of work. These approaches were shown to have enhanced performance in many applications. However, theoretical support of DeepCCA is often lacking. To address this challenge, the recent work of Lyu and Fu (2020) showed that, under a reasonable post-nonlinear generative model, a carefully designed DeepCCA criterion provably removes unknown distortions in data generation and identifies the shared information across modalities. Nonetheless, a critical assumption used by Lyu and Fu (2020) for identifiability analysis was that unlimited data is available, which is unrealistic. This brief paper puts forth a finite-sample analysis of the DeepCCA method by Lyu and Fu (2020). The main result is that the finite-sample version of the method can still estimate the shared information with a guaranteed accuracy when the number of samples is sufficiently large. Our analytical approach is a nontrivial integration of statistical learning, numerical differentiation, and robust system identification, which may be of …
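For reference, the classic linear CCA that DeepCCA generalizes can be computed by whitening each view and taking an SVD of the whitened cross-covariance. The toy two-view data below, generated from a shared latent Z, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, k = 5000, 6, 2                       # samples, view dimension, shared components

Z = rng.standard_normal((N, k))            # shared information across the two views
X = Z @ rng.standard_normal((k, d)) + 0.1 * rng.standard_normal((N, d))
Y = Z @ rng.standard_normal((k, d)) + 0.1 * rng.standard_normal((N, d))

def canonical_correlations(X, Y):
    X, Y = X - X.mean(0), Y - Y.mean(0)
    Lx = np.linalg.cholesky(X.T @ X / len(X))
    Ly = np.linalg.cholesky(Y.T @ Y / len(Y))
    # Whitened cross-covariance; its singular values are the canonical correlations
    T = np.linalg.solve(Lx, X.T @ Y / len(X)) @ np.linalg.inv(Ly).T
    return np.linalg.svd(T, compute_uv=False)

corr = canonical_correlations(X, Y)
print(bool(all(corr[:k] > 0.9)))           # top-k correlations reflect the shared Z
```

DeepCCA-style methods replace the linear maps above with learned nonlinear feature extractors; the finite-sample question studied here is how well the shared information is estimated when N is finite rather than unlimited.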