We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al. FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al. ICML 2019).
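As a concrete illustration of the problem being defined, here is a minimal sketch of sampling from a CLWE-style distribution. The parametrization (frequency γ, noise width β, hidden unit secret w) follows the standard description of CLWE, but the exact Gaussian normalization varies across papers, and the helper name is ours:

```python
import numpy as np

def clwe_samples(n, d, gamma, beta, rng=None):
    """Draw n CLWE samples (y, z) for a random unit secret w.

    Sketch of the distribution described in the abstract:
      y ~ N(0, I_d)   (isotropic Gaussian; normalizations vary by paper)
      z = gamma * <y, w> + e  (mod 1),  with noise e ~ N(0, beta^2).
    gamma, beta and the hidden direction w are parameters of this sketch.
    """
    if rng is None:
        rng = np.random.default_rng()
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)             # secret direction on the unit sphere
    y = rng.standard_normal((n, d))    # public isotropic Gaussian samples
    e = beta * rng.standard_normal(n)  # small Gaussian noise
    z = (gamma * y @ w + e) % 1.0      # labels reduced mod 1
    return y, z, w
```

The detection problem is then to distinguish such (y, z) pairs from pairs where z is uniform on [0, 1), independent of y.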
In this work, we conduct a comprehensive study on establishing hardness reductions for (Module) Learning with Rounding over rings (RLWR). Towards this, we present an algebraic framework for LWR, inspired by a recent work of Peikert and Pepin (TCC '19). Then we show a search-to-decision reduction for Ring-LWR, generalizing a result in the plain LWR setting by Bogdanov et al. (TCC '15). Finally, we show a reduction from Ring-LWE to Module Ring-LWR (even for leaky secrets), generalizing the plain LWE to LWR reduction by Alwen et al. (Crypto '13). One of our central techniques is a new ring leftover hash lemma, which might be of independent interest.
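For readers less familiar with Learning with Rounding, a minimal sketch of a plain-LWR sample may help; the moduli q > p and the secret s are assumptions of this illustration, and the paper's actual setting is the ring/module analogue:

```python
import numpy as np

def lwr_sample(s, q, p, rng=None):
    """One plain-LWR sample: deterministic rounding replaces LWE's added noise.

    Outputs (a, b) with b = round((p/q) * (<a, s> mod q)) mod p.
    This is the plain setting over Z_q^n; Ring/Module-LWR replaces the
    vector inner product with multiplication in a polynomial ring.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(s)
    a = rng.integers(0, q, size=n)                    # uniform public vector in Z_q^n
    b = int(np.rint((p / q) * (int(a @ s) % q))) % p  # rounded inner product in Z_p
    return a, b
```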
Min Jae Song; Ilias Zadik; Joan Bruna (Advances in Neural Information Processing Systems)
We show a simple reduction which demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise. More precisely, our reduction shows that any polynomial-time algorithm (not necessarily gradient-based) for learning such functions under small noise implies a polynomial-time quantum algorithm for solving worst-case lattice problems, whose hardness forms the foundation of lattice-based cryptography. The functions in our core hard family, which are well-approximated by one-layer neural networks, take the general form of a univariate periodic function applied to an affine projection of the data. These functions have appeared in previous seminal works which demonstrate their hardness against gradient-based algorithms (Shamir '18) and Statistical Query (SQ) algorithms (Song et al. '17). We show that if (polynomially) small noise is added to the labels, the intractability of learning these functions applies to all polynomial-time algorithms, beyond gradient-based and SQ algorithms, under the aforementioned cryptographic assumptions.
Moreover, we demonstrate the necessity of noise in the hardness result by designing a polynomial-time algorithm for learning certain families of such functions under exponentially small adversarial noise. Our proposed algorithm is not a gradient-based or an SQ algorithm, but is instead based on the celebrated Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. Furthermore, in the absence of noise, this algorithm can be directly applied to solve CLWE detection (Bruna et al. '21) and phase retrieval with an optimal sample complexity of d + 1 samples. In the former case, this improves upon the quadratic-in-d sample complexity required in (Bruna et al. '21).
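To make the hard family concrete, the following is a minimal sketch of generating labeled data for a single periodic neuron; the cosine activation and the frequency parameter are an illustrative choice consistent with the "univariate periodic function applied to an affine projection" described above:

```python
import numpy as np

def periodic_neuron_data(n, d, gamma, noise_level, rng=None):
    """Labeled samples (x, y) for a single periodic neuron over Gaussian inputs.

    x ~ N(0, I_d) and y = phi(gamma * <w, x>) + xi, where phi is a univariate
    periodic function (cosine chosen here for illustration) and xi is label
    noise. Per the abstract, hardness kicks in once noise_level is polynomially
    small, while with exponentially small noise the LLL-based algorithm succeeds.
    """
    if rng is None:
        rng = np.random.default_rng()
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                     # hidden unit-norm direction
    x = rng.standard_normal((n, d))            # isotropic Gaussian inputs
    y = np.cos(2 * np.pi * gamma * x @ w)      # periodic function of the projection
    y += noise_level * rng.standard_normal(n)  # additive label noise
    return x, y
```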
Goel, S.; Gollakota, A.; Klivans, A. (Advances in Neural Information Processing Systems)
We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign). For the specific problem of ReLU regression (equivalently, agnostically learning a ReLU), we show that any statistical-query algorithm with tolerance n^{-(1/ϵ)^b} must use at least 2^{n^c ϵ} queries for some constants b, c > 0, where n is the dimension and ϵ is the accuracy parameter. Our results rule out general (as opposed to correlational) SQ learning algorithms, which is unusual for real-valued learning problems. Our techniques involve a gradient boosting procedure for "amplifying" recent lower bounds due to Diakonikolas et al. (COLT 2020) and Goel et al. (ICML 2020) on the SQ dimension of functions computed by two-layer neural networks. The crucial new ingredient is the use of a nonstandard convex functional during the boosting procedure. This also yields a best-possible reduction between two commonly studied models of learning: agnostic learning and probabilistic concepts.
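For context, here is a minimal sketch of the statistical-query access model that these lower bounds rule out: the learner sees only expectations of bounded query functions, answered up to an additive tolerance. The Monte Carlo simulation and the adversarial perturbation within the tolerance band are illustrative choices, not part of the paper:

```python
import numpy as np

def sq_oracle(query, data_sampler, tolerance, n_mc=10_000, rng=None):
    """Statistical-query oracle: returns E[query(x, y)] up to +/- tolerance.

    An SQ algorithm interacts with the data distribution only through such
    calls; the lower bound above says that with tolerance n^{-(1/eps)^b},
    exponentially many queries are needed. Here the expectation is simulated
    by an empirical average, and any answer within the tolerance band is legal.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, y = data_sampler(n_mc)
    estimate = float(np.mean(query(x, y)))                # empirical expectation
    return estimate + rng.uniform(-tolerance, tolerance)  # perturbed within the band
```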
Barthe, Gilles; Fan, Xiong; Gancher, Joshua; Grégoire, Benjamin; Jacomme, Charlie; Shi, Elaine (ACM Conf. on Computer and Communications Security)
Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity.
This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, Grégoire and Schmidt, CCS 2015), our approach combines a computational logic and deducibility problems, a standard tool for representing the adversary's knowledge in the Dolev-Yao model. The computational logic is used to capture (indistinguishability-based) security notions and drive the security proofs, whereas deducibility problems are used as side-conditions to check that rules of the logic are applied correctly. We then use AutoLWE, an implementation of the logic, to deliver very short or even automatic proofs of several emblematic constructions, including CPA-PKE (Gentry et al., STOC 2008), (Hierarchical) Identity-Based Encryption (Agrawal et al., Eurocrypt 2010), Inner Product Encryption (Agrawal et al., Asiacrypt 2011), and CCA-PKE (Micciancio et al., Eurocrypt 2012). The main technical novelty beyond AutoLWE is a set of (semi-)decision procedures for deducibility problems, using extensions of Gröbner basis computations for subalgebras in the (non-)commutative setting (instead of ideals in the commutative setting). Our procedures cover the theory of matrices, which is required for lattice-based assumptions, as well as the theory of non-commutative rings, fields, and Diffie-Hellman exponentiation, in its standard, bilinear and multilinear forms. Additionally, AutoLWE supports oracle-relative assumptions, which are used specifically to apply (advanced forms of) the Leftover Hash Lemma, an information-theoretic tool widely used in lattice-based proofs.
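For context on the kind of construction AutoLWE reasons about, here is a minimal textbook-style sketch of Regev-style LWE public-key encryption; the parameter values are illustrative toy choices, and this is not the tool's input language:

```python
import numpy as np

rng = np.random.default_rng()

def keygen(n=256, m=512, q=3329, noise=1):
    """Regev-style key generation: pk = (A, b = A s + e mod q), sk = s."""
    s = rng.integers(0, q, size=n)               # secret vector in Z_q^n
    A = rng.integers(0, q, size=(m, n))          # uniform public matrix
    e = rng.integers(-noise, noise + 1, size=m)  # small error vector
    b = (A @ s + e) % q
    return (A, b, q), s

def encrypt(pk, bit):
    """Encrypt one bit: combine a random subset of LWE samples, shift by bit*q/2."""
    A, b, q = pk
    r = rng.integers(0, 2, size=A.shape[0])      # random 0/1 combination
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q
    return c1, c2

def decrypt(sk, ct, q=3329):
    """Decrypt: subtract <c1, s>; the residue is near 0 for bit 0, near q/2 for bit 1."""
    c1, c2 = ct
    centered = (c2 - c1 @ sk) % q
    return int(q // 4 <= centered < 3 * q // 4)
```

With these toy parameters the accumulated error r·e stays below q/4, so decryption is always correct; a symbolic tool like AutoLWE instead proves indistinguishability-based security of such schemes from the LWE assumption.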
Bruna, J., Regev, O., Song, M., & Tang, Y. "Continuous LWE." Proceedings of the Annual ACM Symposium on Theory of Computing. Retrieved from https://par.nsf.gov/biblio/10233988.