

Title: Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification.
In this work, we study the question of what set of simple-to-state assumptions suffices for constructing functional encryption and indistinguishability obfuscation (IO) supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions.

New Assumption: Prior to our work, all constructions of IO from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the errors of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). (Goldreich and follow-up works study Boolean pseudorandom generators with constant locality, which can be computed by constant-degree polynomials.) We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators of sufficient stretch, combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs had been subject to extensive cryptanalysis well before our work, and we therefore back the plausibility of our assumption by arguing security against the algorithms studied in the context of cryptanalysis of these objects.

New Techniques: We introduce a number of new techniques:
– We show how to build partially-hiding public-key functional encryption, supporting degree-2 functions over the secret part of the message and arithmetic NC1 functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups.
– We construct single-ciphertext secret-key functional encryption for all circuits with linear key generation, assuming only the LWE assumption.

Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-size circuits directly (without invoking any bootstrapping theorem, nor any transformation from secret-key to public-key FE), and based only on the polynomial hardness of the underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency: the size of the ciphertext grows only sublinearly in the output length of the circuit, not in its size. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve IO.
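To make the flavor of the new assumption concrete, below is a minimal toy sketch (not taken from the paper) of its two ingredients: a Goldreich-style constant-locality Boolean PRG, each output bit of which is a constant-degree polynomial in a few seed bits, and LWE samples with binary error. The predicate, parameter sizes, and function names are illustrative assumptions only, and no security is claimed at these toy sizes.

```python
import random

def goldreich_prg(seed, m):
    """Toy Goldreich-style local PRG: each output bit applies the fixed
    locality-5, constant-degree predicate x1 ^ x2 ^ x3 ^ (x4 & x5)
    to a fixed (pseudo)random subset of 5 seed bits."""
    n = len(seed)
    rng = random.Random(0)  # fixed choice of input subsets (the "hypergraph")
    subsets = [rng.sample(range(n), 5) for _ in range(m)]
    out = []
    for sub in subsets:
        x1, x2, x3, x4, x5 = (seed[i] for i in sub)
        out.append(x1 ^ x2 ^ x3 ^ (x4 & x5))
    return out

def binary_error_lwe_samples(n, m, q, secret):
    """Toy LWE samples (a, <a, s> + e mod q) with binary error e in {0, 1}."""
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randrange(2)  # binary error
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
        samples.append((a, b))
    return samples

seed = [random.randrange(2) for _ in range(128)]
print(goldreich_prg(seed, 256)[:16])            # first 16 of 256 output bits
s = [random.randrange(2) for _ in range(64)]
print(binary_error_lwe_samples(64, 4, 3329, s)[0][1])
```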
Award ID(s):
1936825
NSF-PAR ID:
10255455
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
EUROCRYPT
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Indistinguishability obfuscation, introduced by [Barak et al., Crypto 2001], aims to compile programs into unintelligible ones while preserving functionality. It is a fascinating and powerful object that has been shown to enable a host of new cryptographic goals and beyond. However, constructions of indistinguishability obfuscation have remained elusive, with all previous proposals relying on heuristics or newly conjectured hardness assumptions. In this work, we show how to construct indistinguishability obfuscation from the subexponential hardness of four well-founded assumptions. We prove:
    Informal Theorem: Let 𝜏 ∈ (0,∞), 𝛿 ∈ (0,1), 𝜖 ∈ (0,1) be arbitrary constants. Assume sub-exponential security of the following assumptions:
    - the Learning With Errors (LWE) assumption with subexponential modulus-to-noise ratio 2^{𝑘^𝜖} and noise of magnitude polynomial in 𝑘, where 𝑘 is the dimension of the LWE secret,
    - the Learning Parity with Noise (LPN) assumption over general prime fields Z_𝑝 with polynomially many LPN samples and error rate 1/ℓ^𝛿, where ℓ is the dimension of the LPN secret,
    - the existence of a Boolean Pseudo-Random Generator (PRG) in NC0 with stretch 𝑛^{1+𝜏}, where 𝑛 is the length of the PRG seed,
    - the Decision Linear (DLIN) assumption on symmetric bilinear groups of prime order.
    Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion-resistant public-key functional encryption for all polynomial-size circuits.
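    As a toy illustration of the LPN-over-prime-field ingredient listed above, the sketch below samples LPN instances (a, <a, s> + e mod p) in which each error coordinate is nonzero only with probability 1/ℓ^𝛿. The prime, the dimension, and 𝛿 are illustrative values chosen for the sketch, not parameters from the paper.

```python
import random

def lpn_samples_over_Zp(ell, m, p, delta):
    """Toy LPN samples over Z_p with sparse errors of rate 1/ell**delta."""
    s = [random.randrange(p) for _ in range(ell)]      # LPN secret
    rate = ell ** (-delta)                             # error rate 1/ell^delta
    samples = []
    for _ in range(m):
        a = [random.randrange(p) for _ in range(ell)]
        e = random.randrange(1, p) if random.random() < rate else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % p
        samples.append((a, b))
    return s, samples

# Illustrative parameters: dimension 256, 512 samples, a Mersenne prime modulus,
# and delta = 0.5, i.e. roughly a 1/16 chance of a noisy coordinate per sample.
secret, samples = lpn_samples_over_Zp(ell=256, m=512, p=2**61 - 1, delta=0.5)
print(samples[0][0][:4], samples[0][1])
```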
    In a traitor tracing (TT) system for n users, every user has his/her own secret key. Content providers can encrypt messages using a public key, and each user can decrypt the ciphertext using his/her secret key. Suppose some of the n users collude to construct a pirate decoding box. Then the tracing scheme has a special algorithm, called 𝖳𝗋𝖺𝖼𝖾, which can identify at least one of the secret keys used to construct the pirate decoding box. Traditionally, the trace algorithm outputs only the ‘index’ associated with the traitors. As a result, to use such systems, either a central master authority must map the indices to actual identities, or there should be a public mapping of indices to identities. Both these options are problematic, especially if we need public tracing with anonymity of users. Nishimaki, Wichs, and Zhandry (NWZ) [Eurocrypt 2016] addressed this problem by constructing a traitor tracing scheme where the identities of users are embedded in the secret keys, and the trace algorithm, given a decoding box D, can recover the entire identities of the traitors. We call such schemes ‘Embedded Identity Traitor Tracing’ schemes. NWZ constructed such schemes based on adaptively secure functional encryption (FE). Currently, the only known constructions of such FE schemes are based on nonstandard assumptions such as multilinear maps and iO. In this work, we study the problem of embedded-identity TT based on standard assumptions. We provide a range of constructions based on different assumptions such as public-key encryption (PKE), bilinear maps, and the Learning with Errors (LWE) assumption. The different constructions have different efficiency trade-offs. In our PKE-based construction, the ciphertext size grows linearly with the number of users; the bilinear-map-based construction has sublinear (√𝑛)-sized ciphertexts. Both these schemes have public tracing. The LWE-based scheme is a private tracing scheme with optimal-size ciphertexts (i.e., log(𝑛)). Finally, we also present other notions of traitor tracing, and discuss how they can be built in a generic manner from our base embedded-identity TT scheme.
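    The interface below is an illustrative sketch of the syntax of an embedded-identity traitor tracing scheme as described above. The method names and type signatures are expository assumptions rather than the paper's formal definitions, and no actual cryptography is implemented.

```python
from typing import Callable, Protocol, Set, Tuple

class EmbeddedIdentityTT(Protocol):
    """Syntax sketch of an embedded-identity traitor tracing scheme."""

    def setup(self, security_param: int) -> Tuple[bytes, bytes]:
        """Return (public_key, master_secret_key)."""

    def keygen(self, msk: bytes, identity: str) -> bytes:
        """Return a secret key with the user's full identity embedded in it."""

    def encrypt(self, pk: bytes, message: bytes) -> bytes:
        """Anyone can encrypt under the public key."""

    def decrypt(self, sk: bytes, ciphertext: bytes) -> bytes:
        """Each user decrypts with his/her own secret key."""

    def trace(self, tracing_key: bytes, decoder: Callable[[bytes], bytes]) -> Set[str]:
        """Given oracle access to a pirate decoder D, output the full identities of
        at least one traitor whose key was used to build D. In a public-tracing
        scheme the tracing key is just pk; in a private-tracing scheme it is secret."""
```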
  3.
    We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-Döttling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-Döttling-Garg-Malavolta (Eprint 2020), and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the 2-circular shielded randomness leakage assumption w.r.t. the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and to the homomorphic pseudorandom LWE samples conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying unlevelled fully homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
  4. Boldyreva, Alexandra ; Kolesnikov, Vladimir (Ed.)
    Updatable Encryption (UE) and Proxy Re-encryption (PRE) allow re-encrypting a ciphertext from one key to another in the symmetric-key and public-key settings, respectively, without decryption. A longstanding open question has been the following: do unidirectional UE and PRE schemes (where ciphertext re-encryption is permitted in only one direction) necessarily require stronger/more structured assumptions as compared to their bidirectional counterparts? Known constructions of UE and PRE seem to exemplify this “gap” – while bidirectional schemes can be realized as relatively simple extensions of public-key encryption from standard assumptions such as DDH or LWE, unidirectional schemes typically rely on stronger assumptions such as FHE or indistinguishability obfuscation (iO), or highly structured cryptographic tools such as bilinear maps or lattice trapdoors. In this paper, we bridge this gap by showing the first feasibility results for realizing unidirectional UE and PRE from a new generic primitive that we call Key and Plaintext Homomorphic Encryption (KPHE) – a public-key encryption scheme that supports additive homomorphisms on its plaintext and key spaces simultaneously. We show that KPHE can be instantiated from DDH. This yields the first constructions of unidirectional UE and PRE from DDH. Our constructions achieve the strongest notions of post-compromise security in the standard model. Our UE schemes also achieve “backwards-leak directionality” of key updates (a notion which, as we discuss, is equivalent, from a security perspective, to that of unidirectionality with no-key updates). Our results establish (somewhat surprisingly) that unidirectional UE and PRE schemes satisfying such strong security notions do not, in fact, require stronger/more structured cryptographic assumptions as compared to bidirectional schemes.
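    To gesture at the flavor of homomorphism that KPHE asks for, the toy sketch below uses "exponential" ElGamal over a small prime-order group: multiplying ciphertexts adds plaintexts, and multiplying public keys yields a public key whose matching secret key is the sum of the original secret keys. This only illustrates additive plaintext and key homomorphism in a DDH-style group; it is not the paper's KPHE construction, and the group is toy-sized and insecure.

```python
import random

# Toy group: p = 2q + 1 with q prime; g is a quadratic residue, hence of order q.
q = 1019
p = 2 * q + 1          # 2039, prime
g = pow(2, 2, p)

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                                   # (sk, pk = g^sk)

def enc(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p    # (g^r, g^m * pk^r)

def dec(sk, ct, max_m=200):
    c1, c2 = ct
    gm = (c2 * pow(c1, (q - sk) % q, p)) % p    # g^m = c2 / c1^sk
    for m in range(max_m):                      # brute-force discrete log (toy only)
        if pow(g, m, p) == gm:
            return m
    raise ValueError("message out of range")

sk1, pk1 = keygen()
sk2, pk2 = keygen()

# Plaintext homomorphism: the componentwise product encrypts the sum of messages.
c_a, c_b = enc(pk1, 3), enc(pk1, 4)
c_sum = ((c_a[0] * c_b[0]) % p, (c_a[1] * c_b[1]) % p)
assert dec(sk1, c_sum) == 3 + 4

# Key homomorphism: pk1 * pk2 is a public key for the secret key sk1 + sk2.
pk12 = (pk1 * pk2) % p
assert dec((sk1 + sk2) % q, enc(pk12, 5)) == 5
print("toy plaintext and key homomorphisms verified")
```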
  5.
    Recently, Quach, Wee and Wichs (FOCS 2018) proposed a new powerful cryptographic primitive called laconic function evaluation (LFE). Using an LFE scheme, Alice can compress a large circuit f into a small digest. Bob can encrypt some data x under this digest in a way that enables Alice to recover f(x) without learning anything else about Bob's data. The laconic property requires that the size of the digest, the run-time of the encryption algorithm, and the size of the ciphertext should all be much smaller than the circuit size of f. This new tool is motivated by an interesting application of “Bob-optimized” two-round secure two-party computation (2PC), in which Alice gets the final result and the workload of Bob is minimized. In this paper, we consider a “client-optimized” two-round secure multiparty computation, in which multiple clients provide inputs and enable a server to obtain the final outputs while protecting the privacy of each individual input; more importantly, we also want to minimize the cost of each client. For this purpose, we propose multi-input laconic function evaluation (MI-LFE) and give a systematic study of it. It turns out that MI-LFE for general circuits is not easy. Specifically, we first show that the directly generalized version, i.e., public-key MI-LFE, implies virtual black-box obfuscation; hence public-key MI-LFE (for general circuits) is infeasible. This forces us to turn to the secret-key version of MI-LFE, in which encryption now needs to take a secret key. Next, we show that secret-key MI-LFE also implies heavy cryptographic primitives, including witness encryption for NP languages and indistinguishability obfuscation. On the positive side, we show that secret-key MI-LFE can be constructed assuming indistinguishability obfuscation and the learning with errors assumption. Our theoretical results suggest that we may have to explore relaxed versions of MI-LFE for meaningful new applications of “client-optimized” MPC and others.
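    The sketch below is an interface-level rendering of the LFE syntax described above (compress a circuit to a digest, encrypt under the digest, decrypt with knowledge of the circuit). The names and types are assumptions made for exposition, no cryptography is implemented, and the laconic size constraints are only recorded as comments.

```python
from typing import Callable, Protocol

class LFE(Protocol):
    """Syntax sketch of laconic function evaluation (LFE)."""

    def setup(self, security_param: int) -> bytes:
        """Return a common reference string crs."""

    def compress(self, crs: bytes, f: Callable[[bytes], bytes]) -> bytes:
        """Alice compresses the circuit f into a digest much shorter than |f|."""

    def encrypt(self, crs: bytes, digest: bytes, x: bytes) -> bytes:
        """Bob encrypts his input x under the digest; the ciphertext size and
        the encryption time must be much smaller than the circuit size of f."""

    def decrypt(self, crs: bytes, f: Callable[[bytes], bytes], ct: bytes) -> bytes:
        """Alice, who knows f, recovers f(x) while learning nothing else about x."""
```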