Continuous LWE

Abstract
We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al. FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al. ICML 2019).
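To make the object of the abstract concrete: a CLWE sample pairs an isotropic Gaussian vector with a noisy, scaled projection onto a hidden unit direction, reduced modulo 1. The following is a minimal sketch under assumed notation (`gamma` for the scaling and `beta` for the noise width are illustrative parameter names, not taken from this page):

```python
import numpy as np

# Hedged sketch of CLWE sample generation: (y, z) where y is an isotropic
# Gaussian vector and z is a noisy inner product with a hidden unit
# direction w, reduced mod 1. Parameter names are assumptions.

rng = np.random.default_rng(0)

n = 32          # ambient dimension
gamma = 2.0     # scaling of the inner product (assumed notation)
beta = 0.01     # width of the Gaussian noise (assumed notation)

# hidden secret: a uniformly random unit vector
w = rng.standard_normal(n)
w /= np.linalg.norm(w)

def clwe_sample():
    """Return one CLWE sample (y, z) with z in [0, 1)."""
    y = rng.standard_normal(n)          # y ~ N(0, I_n)
    e = beta * rng.standard_normal()    # small Gaussian noise
    z = (gamma * (y @ w) + e) % 1.0     # noisy projection mod 1
    return y, z

y, z = clwe_sample()
print(y.shape, 0.0 <= z < 1.0)
```

The decision version asks to distinguish such samples from ones where z is uniform on [0, 1); the reduction described in the abstract ties this to worst-case lattice problems.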
 Award ID(s):
 1845360
 Publication Date:
 NSFPAR ID:
 10233988
 Journal Name:
 Proceedings of the Annual ACM Symposium on Theory of Computing
 ISSN:
 0737-8017
 Sponsoring Org:
 National Science Foundation
More Like this


We show a simple reduction which demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise. More precisely, our reduction shows that any polynomial-time algorithm (not necessarily gradient-based) for learning such functions under small noise implies a polynomial-time quantum algorithm for solving worst-case lattice problems, whose hardness forms the foundation of lattice-based cryptography. Our core hard family of functions, which are well-approximated by one-layer neural networks, takes the general form of a univariate periodic function applied to an affine projection of the data. These functions have appeared in previous seminal …
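The hard family described in this related abstract, a univariate periodic function applied to a one-dimensional projection of Gaussian data, observed under small label noise, can be sketched as follows. The choice of cosine and the parameter names are illustrative assumptions, not taken from this page:

```python
import numpy as np

# Hedged sketch of the hard function family: a periodic function (here cos,
# an illustrative choice) of a scaled projection onto a hidden direction w,
# with labels observed under small Gaussian noise.

rng = np.random.default_rng(1)

n = 16
gamma = 3.0      # frequency parameter (assumed)
noise = 0.01     # label noise level (assumed)

w = rng.standard_normal(n)
w /= np.linalg.norm(w)

def periodic_neuron(x):
    """One periodic 'neuron': cos of a scaled 1-D projection of x."""
    return np.cos(2 * np.pi * gamma * (x @ w))

# one noisy training sample drawn from an isotropic Gaussian
x = rng.standard_normal(n)
label = periodic_neuron(x) + noise * rng.standard_normal()
print(x.shape, abs(label) <= 1.05)
```

Learning such a function from noisy samples is exactly the task whose hardness the abstract's reduction establishes.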

Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity. This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, …

In this work, we conduct a comprehensive study on establishing hardness reductions for (Module) Learning with Rounding over rings (RLWR). Towards this, we present an algebraic framework of LWR, inspired by a recent work of Peikert and Pepin (TCC ’19). Then we show a search-to-decision reduction for Ring-LWR, generalizing a result in the plain LWR setting by Bogdanov et al. (TCC ’15). Finally, we show a reduction from Ring-LWE to Module Ring-LWR (even for leaky secrets), generalizing the plain LWE to LWR reduction by Alwen et al. (Crypto ’13). One of our central techniques is a new ring leftover hash …
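For context on this related abstract: Learning with Rounding replaces LWE's random noise with a deterministic rounding of the mod-q inner product down to a smaller modulus p. A minimal sketch of the plain (non-ring) version, with assumed parameters q and p and an illustrative helper name `round_qp`:

```python
import random

# Hedged sketch of plain LWR: b = round((p/q) * <a, s>) mod p, i.e. the
# mod-q inner product deterministically rounded to Z_p, replacing LWE's
# random error term. Parameters q, p and the helper name are assumptions.

q = 257    # larger modulus (assumed)
p = 16     # smaller rounding modulus (assumed)
n = 8

random.seed(0)
s = [random.randrange(q) for _ in range(n)]   # secret vector in Z_q^n

def round_qp(x):
    """Round x in Z_q to the nearest element of Z_p."""
    return ((p * x + q // 2) // q) % p

def lwr_sample():
    a = [random.randrange(q) for _ in range(n)]
    b = round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    return a, b

a, b = lwr_sample()
print(len(a), 0 <= b < p)
```

The ring and module variants studied in the abstract apply the same rounding map coefficient-wise to elements of a polynomial ring rather than to a single integer inner product.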