Clustering is a fundamental primitive in unsupervised learning that gives rise to a rich class of computationally challenging inference tasks. In this work, we focus on the canonical task of clustering d-dimensional Gaussian mixtures with unknown (and possibly degenerate) covariance. Recent works (Ghosh et al. '20; Mao, Wein '21; Davis, Diaz, Wang '21) have established lower bounds against the class of low-degree polynomial methods and the sum-of-squares (SoS) hierarchy for recovering certain hidden structures planted in Gaussian clustering instances. For many similar inference tasks, such lower bounds have been taken as strong evidence of an inherent statistical-to-computational gap, that is, a parameter regime where the clustering task is statistically possible but no polynomial-time algorithm succeeds. One special case of the clustering task we consider is equivalent to the problem of finding a planted hypercube vector in an otherwise random subspace. We show that, perhaps surprisingly, this particular clustering model does not exhibit a statistical-to-computational gap, despite the aforementioned low-degree and SoS lower bounds. To achieve this, we give an algorithm based on Lenstra–Lenstra–Lovász lattice basis reduction that achieves the statistically optimal sample complexity of d + 1 samples. This result extends the class of problems whose conjectured statistical-to-computational …
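The algorithmic core of this result is lattice basis reduction. As a self-contained illustration (not the paper's recovery algorithm itself), here is a minimal textbook Lenstra–Lenstra–Lovász (LLL) reduction in NumPy; the `delta` parameter and the small integer test lattice are standard illustrative choices, not taken from the paper.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Minimal Lenstra-Lenstra-Lovasz (LLL) lattice basis reduction.

    Rows of `basis` are lattice basis vectors. Returns a delta-reduced basis
    spanning the same lattice. Gram-Schmidt data is recomputed naively at
    each step for clarity rather than efficiency.
    """
    B = np.array(basis, dtype=float)
    n = len(B)

    def gram_schmidt():
        Bs = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bs[i] = B[i]
            for j in range(i):
                mu[i, j] = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    Bs, mu = gram_schmidt()
    k = 1
    while k < n:
        # Size reduction: make |mu[k, j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt()
        # Lovasz condition: accept row k, or swap it down and backtrack.
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            Bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return B

# Classic worked example: reduce a 3-dimensional integer lattice basis.
reduced = lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

LLL runs in polynomial time and guarantees that the first reduced vector is within a factor 2^((n-1)/2) of the shortest nonzero lattice vector; it is this ability to recover short discrete vectors exactly that lets lattice methods find planted hypercube-type structure once d + 1 samples pin down the subspace.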
No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors
We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high-dimensional principal component analysis (PCA), community detection, and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information-theoretically optimal reconstruction error that can be achieved with unbounded computational resources and the sub-optimal performance of currently known polynomial-time algorithms. These gaps are believed to be fundamental, as in the emblematic case of sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies on the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive Gaussian neural network and show that it can recover in polynomial time an estimate of the underlying spike with a rate-optimal sample complexity and dependence on the noise level.
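As a stripped-down illustration of the recovery principle (without the generative-network prior the paper analyzes), the sketch below runs plain gradient descent on the nonlinear least-squares objective ||Y - xx^T||_F^2 for a spiked Wigner matrix. The signal-to-noise ratio, step size, and iteration count are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Spiked Wigner model: rank-one signal of strength 5 plus symmetric noise.
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)   # unit-norm planted spike
G = rng.normal(size=(n, n)) / np.sqrt(n)
Y = 5.0 * np.outer(v, v) + (G + G.T) / np.sqrt(2)

# Gradient descent on f(x) = ||Y - x x^T||_F^2 / 4, whose gradient is
# (x.x) x - Y x. Fixed points are (scaled) eigenvectors of Y.
x = rng.normal(size=n) / np.sqrt(n)                # random initialization
for _ in range(500):
    x -= 0.05 * ((x @ x) * x - Y @ x)

# Alignment of the estimate with the planted spike (up to global sign).
overlap = abs(x @ v) / (np.linalg.norm(x) * np.linalg.norm(v))
```

Above the BBP threshold (spike strength well over 1 in this normalization), the iterate aligns closely with the planted direction; the paper's contribution is showing that an analogous descent succeeds with rate-optimal sample complexity even when constrained to the range of an expansive generative network.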
- Award ID(s): 1848087
- Publication Date:
- NSF-PAR ID: 10252108
- Journal Name: Entropy
- Volume: 23
- Issue: 1
- Page Range or eLocation-ID: 115
- ISSN: 1099-4300
- Sponsoring Org: National Science Foundation
More Like this
Given a random n × n symmetric matrix 𝑾 drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form 𝒙^⊤ 𝑾 𝒙 over all vectors 𝒙 in a constraint set 𝒮 ⊂ ℝⁿ. For a certain class of normalized constraint sets we show that, conditional on a certain complexity-theoretic conjecture, no polynomial-time algorithm can certify a better upper bound than the largest eigenvalue of 𝑾. A notable special case included in our results is the hypercube 𝒮 = {±1/√n}ⁿ, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics. Our results suggest a striking gap between optimization and certification for this problem. Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for predicting computational thresholds was proposed in a sequence of recent works on the sum-of-squares …
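To make the certification side concrete: the largest eigenvalue of 𝑾 is itself a simple polynomial-time certificate, since every unit-norm vector, and in particular every hypercube vector in {±1/√n}ⁿ, satisfies 𝒙^⊤𝑾𝒙 ≤ λ_max(𝑾). A minimal NumPy sketch (dimension, seed, and number of sampled vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Sample W from the GOE, normalized so the spectrum fills [-2, 2] as n grows.
G = rng.normal(size=(n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)

# The spectral certificate: the maximum of x^T W x over all unit vectors.
lam_max = np.linalg.eigvalsh(W)[-1]

# Every hypercube vector x in {+-1/sqrt(n)}^n has unit norm, so the
# certificate upper-bounds its quadratic form (checked here empirically).
values = []
for _ in range(50):
    x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
    values.append(x @ W @ x)
```

The abstract's point is that, conditionally, no polynomial-time method can certify anything better than this λ_max ≈ 2, even though the true maximum over the hypercube concentrates near a strictly smaller value (about 1.53, the Parisi constant in this normalization); that is the optimization-certification gap.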
Abernethy, Jacob ; Agarwal, Shivani (Ed.) We study a variant of the sparse PCA (principal component analysis) problem in the "hard" regime, where the inference task is possible yet no polynomial-time algorithm is known to exist. Prior work, based on the low-degree likelihood ratio, has conjectured a precise expression for the best possible (sub-exponential) runtime throughout the hard regime. Following instead a statistical-physics-inspired point of view, we show bounds on the depth of free energy wells for various Gibbs measures naturally associated to the problem. These free energy wells imply hitting time lower bounds that corroborate the low-degree conjecture: we show that a class of natural MCMC (Markov chain Monte Carlo) methods (with worst-case initialization) cannot solve sparse PCA with less than the conjectured runtime. These lower bounds apply to a wide range of values for two tuning parameters: temperature and sparsity misparametrization. Finally, we prove that the Overlap Gap Property (OGP), a structural property that implies failure of certain local search algorithms, holds in a significant part of the hard regime.
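For intuition about the class of MCMC methods in question, here is a toy Metropolis chain over k-sparse supports under a Gibbs measure for a planted sparse spike. The dimensions, SNR, inverse temperature, and chain length are arbitrary toy choices, and this easy instance is deliberately far from the hard regime the paper studies.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, beta, steps = 60, 8, 8.0, 5000

# Planted k-sparse unit spike plus GOE-style noise.
support = rng.choice(n, size=k, replace=False)
v = np.zeros(n)
v[support] = 1.0 / np.sqrt(k)
G = rng.normal(size=(n, n)) / np.sqrt(n)
W = 6.0 * np.outer(v, v) + (G + G.T) / np.sqrt(2)

def energy(S):
    """Quadratic form of the uniform unit vector supported on S."""
    x = np.zeros(n)
    x[list(S)] = 1.0 / np.sqrt(k)
    return x @ W @ x

# Metropolis chain on k-subsets under the Gibbs measure exp(beta * energy):
# propose swapping one coordinate out of the support and one in.
S = set(rng.choice(n, size=k, replace=False))
e = energy(S)
e0, best = e, e
for _ in range(steps):
    swap_out = rng.choice(sorted(S))
    swap_in = rng.choice(sorted(set(range(n)) - S))
    S_new = (S - {swap_out}) | {swap_in}
    e_new = energy(S_new)
    if rng.random() < np.exp(beta * (e_new - e)):  # uphill moves always accepted
        S, e = S_new, e_new
        best = max(best, e)
```

The paper's lower bounds say that in the hard regime such chains, started from worst-case initializations, must cross deep free energy wells and therefore need more than the conjectured runtime; in a benign instance like the toy one above, the landscape poses no such barrier.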
Table of Contents:
- Foreword by the CI 2016 Workshop Chairs (p. vi)
- Foreword by the CI 2016 Steering Committee (p. viii)
- List of Organizing Committee (p. x)
- List of Registered Participants (p. xi)
- Acknowledgement of Sponsors (p. xiv)
- Hackathon and Workshop Agenda (p. xv)
- Hackathon Summary (p. xviii)
- Invited Talks: Abstracts and Links to Presentations (p. xxi)
- Proceedings: 34 short research papers (pp. 1-135)

Papers:
1. Bayesian Models for Climate Reconstruction from Pollen Records (p. 1). Lasse Holmström, Liisa Ilvonen, Heikki Seppä, Siim Veski
2. On Information Criteria for Dynamic Spatio-Temporal Clustering (p. 5). Ethan D. Schaeffer, Jeremy M. Testa, Yulia R. Gel, Vyacheslav Lyubchich
3. Detecting Multivariate Biosphere Extremes (p. 9). Yanira Guanche García, Erik Rodner, Milan Flach, Sebastian Sippel, Miguel Mahecha, Joachim Denzler
4. Spatio-Temporal Generative Models for Rainfall over India (p. 13). Adway Mitra
5. A Nonparametric Copula Based Bias Correction Method for Statistical Downscaling (p. 17). Yi Li, Adam Ding, Jennifer Dy
6. Detecting and Predicting Beautiful Sunsets Using Social Media Data (p. 21). Emma Pierson
7. OceanTEA: Exploring Ocean-Derived Climate Data Using Microservices (p. 25). Arne N. Johanson, Sascha Flögel, Wolf-Christian Dullo, Wilhelm Hasselbring
8. Improved Analysis of Earth System Models and Observations Using Simple Climate Models (p. 29). Balu Nadiga, Nathan …
Raynal, Ann M. ; Ranney, Kenneth I. (Ed.) Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture-recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise privacy concerns and are ineffective in the dark. In contrast, RF sensors are non-contact and non-invasive, and do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor which transmits a frequency-modulated continuous wave (FMCW) waveform. RF sensors can acquire a unique source of information that is …