Symbolic Proofs for Lattice-Based Cryptography
Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity.
This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, Grégoire and Schmidt, CCS 2015), our approach combines a computational logic and deducibility problems, a standard tool for representing the adversary's knowledge in the Dolev-Yao model. The computational logic is used to capture (indistinguishability-based) security notions and drive the security proofs, whereas deducibility problems are used as side-conditions to check that rules of the logic are applied correctly. We then use AutoLWE, an implementation of the logic, to deliver very short or even automatic proofs of several emblematic constructions, including CPA-secure PKE (Gentry et al., STOC 2008), (Hierarchical) Identity-Based Encryption (Agrawal et al., Eurocrypt 2010), Inner Product Encryption (Agrawal et al., Asiacrypt 2011), …
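As background for the Learning With Errors assumption the abstract builds on, the following is a minimal toy sketch of decision-LWE: samples (A, b = A·s + e mod q) with small error e, versus uniform samples. All parameters here are illustrative toy values (not secure ones, and not the paper's tooling); real schemes draw e from a discrete Gaussian.

```python
# Hedged sketch of decision-LWE samples (Regev, STOC 2005).
# Toy parameters n, q, err_bound chosen for readability only.
import random

def lwe_samples(n=8, q=97, m=16, err_bound=2, seed=0):
    """Return (A, b, s) with b = A*s + e mod q for a secret s and small e."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.randrange(-err_bound, err_bound + 1) for _ in range(m)]
    b = [(sum(a * x for a, x in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b, s

def uniform_samples(n=8, q=97, m=16, seed=0):
    """The 'random' case the assumption says is indistinguishable."""
    rng = random.Random(seed + 1)
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [rng.randrange(q) for _ in range(m)]
    return A, b

A, b, s = lwe_samples()
# Someone who knows s can verify that b - A*s mod q is small:
residuals = [(bi - sum(a * x for a, x in zip(row, s))) % 97
             for row, bi in zip(A, b)]
print(max(min(r, 97 - r) for r in residuals))  # <= err_bound, i.e. <= 2
```

Without s, distinguishing the two sample types is conjectured hard; this gap is exactly what LWE-based constructions reduce their security to.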
Award ID(s): 1704788
Publication Date:
NSF-PAR ID: 10094523
Journal Name: ACM Conf. on Computer and Communications Security
Sponsoring Org: National Science Foundation
More Like this


The random-permutation model (RPM) and the ideal-cipher model (ICM) are idealized models that offer a simple and intuitive way to assess the conjectured standard-model security of many important symmetric-key and hash-function constructions. Similarly, the generic-group model (GGM) captures generic algorithms against assumptions in cyclic groups by modeling encodings of group elements as random injections, and allows one to derive simple bounds on the advantage of such algorithms. Unfortunately, both well-known attacks, e.g., based on rainbow tables (Hellman, IEEE Transactions on Information Theory '80), and more recent ones, e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan, EUROCRYPT '18), suggest that the concrete security bounds one obtains from such idealized proofs are often completely inaccurate if one considers non-uniform or preprocessing attacks in the standard model. To remedy this situation, this work defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform and preprocessing attacks by allowing an attacker to leak an arbitrary (bounded-output) function of the oracle's function table; derives the first non-uniform bounds for a number of important practical applications in the AI-RPM/ICM, including constructions based on the Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards, and for AI-RPM/ICM applications with computational security; and, using simpler proofs, recovers the AI-GGM …
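To make "modeling encodings of group elements as random injections" concrete, here is a hedged toy sketch of a generic-group oracle: exponents in Z_p are visible only through opaque random labels, and the only available operation is an oracle combining two labels. Class and parameter names are illustrative, not from the work above.

```python
# Hedged toy sketch of a generic-group oracle (GGM-style).
# Elements of Z_p are hidden behind a random injective encoding;
# a generic algorithm may only use encode() on known exponents and op().
import random

class GenericGroup:
    def __init__(self, p=101, seed=0):
        self.p = p
        rng = random.Random(seed)
        labels = list(range(10**6, 10**6 + p))
        rng.shuffle(labels)                  # random injection Z_p -> labels
        self._enc = {x: labels[x] for x in range(p)}
        self._dec = {v: k for k, v in self._enc.items()}

    def encode(self, x):
        """Opaque handle for g^x."""
        return self._enc[x % self.p]

    def op(self, h1, h2):
        """Group-operation oracle: handle for the product of two elements."""
        return self._enc[(self._dec[h1] + self._dec[h2]) % self.p]

G = GenericGroup()
g = G.encode(1)
h = g
for _ in range(4):          # a generic algorithm computing g^5 via op()
    h = G.op(h, g)
print(h == G.encode(5))     # True
```

Because the labels carry no algebraic structure, any advantage of a generic algorithm can be bounded by counting oracle queries; the AI variants above additionally hand the attacker a bounded-length leakage of the whole encoding table before the game starts.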

Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, and in particular on minimizing proof size. Currently, there is a 1000x gap in proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3x shorter than previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42x reduction in proof size and a 60x reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is 131x longer, but both the prover and the verifier are faster (by 1.2x and 2.8x, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) with …

We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-Döttling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-Döttling-Garg-Malavolta (Eprint 2020), and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the 2-circular shielded randomness leakage assumption w.r.t. the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and to the homomorphic pseudorandom LWE samples conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying unlevelled fully homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.

We study entropy flattening: given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X altogether Theta(n^2) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Omega(n^2) queries. Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Vadhan, …]
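The "evaluate C_X many times and concatenate" construction can be illustrated numerically: for Y consisting of t independent copies of X, the asymptotic equipartition property says the per-copy surprisal -log2(Pr[Y])/t concentrates around H(X) as t grows, so Y is nearly flat on its typical set. A hedged toy demo, using a biased-bit source with illustrative parameters:

```python
# Hedged toy demo of flattening by repetition: the spread of per-copy
# surprisal across samples of Y = X^t shrinks as t grows (AEP intuition).
# The biased-bit source and parameters are illustrative only.
import math
import random

p = 0.8  # Pr[X = 1]; H(X) = -(0.8 log2 0.8 + 0.2 log2 0.2) ~= 0.722 bits
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def surprisal_per_copy(t, rng):
    """Draw y ~ X^t and return -log2(Pr[y]) / t."""
    y = [rng.random() < p for _ in range(t)]
    logp = sum(math.log2(p if bit else 1 - p) for bit in y)
    return -logp / t

rng = random.Random(0)
for t in (10, 1000):
    samples = [surprisal_per_copy(t, rng) for _ in range(200)]
    spread = max(samples) - min(samples)
    print(t, round(spread, 3))  # spread concentrates around H as t grows
```

The paper's lower bound says this quadratic number of repetitions (t = Theta(n^2) for an n-bit source, to drive the deviation below a fixed fraction of a bit) cannot be beaten by any black-box construction.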