In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding must be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.
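To make the syntax concrete, the following is a minimal, hypothetical Python sketch of the interface such a scheme would expose; the function names (`setup_crs`, `encode`, `decode`), the toy size bound, and all constants are illustrative assumptions, not taken from the paper.

```python
import os

# Hypothetical interface for an output-compressing randomized encoding in the
# shared randomness model. Names and bounds are illustrative only.

SECURITY = 128  # security parameter (lambda), in bits

def setup_crs(output_length_bound: int) -> bytes:
    """Shared random string; its length MAY grow with the output length."""
    return os.urandom(SECURITY // 8 + output_length_bound)

def encode(machine: bytes, inp: bytes, crs: bytes) -> bytes:
    """Placeholder encoder: the real encoding would hide M and x while allowing
    recovery of M(x). The efficiency requirement is that len(result) depends
    only on len(machine), len(inp), and the security parameter -- NOT on the
    running time or output length of M on x."""
    raise NotImplementedError("cryptographic construction omitted")

def decode(encoding: bytes, crs: bytes) -> bytes:
    """Recovers M(x) using the encoding together with the shared randomness."""
    raise NotImplementedError("cryptographic construction omitted")

def respects_compression(encoding_len: int, machine_len: int, input_len: int) -> bool:
    """Toy check of the size contract: a fixed polynomial in the security
    parameter and description lengths, independent of runtime and output length."""
    return encoding_len <= (SECURITY + machine_len + input_len) ** 2
```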
We then show interesting implications of the above result for basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):
1. Compact MPC for Turing Machines in the Random Oracle Model. In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubáček and Wichs [HW15] showed via an incompressibility argument that, even for the restricted setting of circuits, it is impossible to construct a malicious-secure two-party computation protocol in the plain model whose communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model into a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model (see the sketch after this list).
2. Succinct iO for Turing Machines in the Shared Randomness Model. In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.
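The sketch below illustrates, at a very high level, one plausible way the compiler from item 1 could put the pieces together: the random oracle supplies the shared randomness, and the underlying MPC only ever communicates a small output-compressing encoding. All names are hypothetical and this is not the paper's actual protocol, only an illustration of why the assumptions fit together.

```python
import hashlib

def random_oracle(seed: bytes, num_bytes: int) -> bytes:
    """Models a random oracle by expanding a short seed (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < num_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:num_bytes]

def compact_mpc(parties_inputs, machine, run_noncompact_mpc, encode, decode,
                output_length_bound):
    # 1. Derive the shared random string from the random oracle, so no party
    #    has to communicate it (its length may grow with the output length).
    crs = random_oracle(b"session-id", output_length_bound)
    # 2. Run the underlying (non-compact) MPC on a SMALL functionality: instead
    #    of outputting M(x1,...,xn) directly, it outputs an output-compressing
    #    randomized encoding of that computation. The encoding -- and hence the
    #    communication -- is independent of M's running time and output length.
    small_encoding = run_noncompact_mpc(parties_inputs, machine, crs, encode)
    # 3. Each party decodes locally, at no communication cost.
    return decode(small_encoding, crs)
```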
Compact Adaptively Secure ABE from k-Lin: Beyond NC1 and Towards NL
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt ’19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs.
Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL), based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE schemes for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
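The stated size trade-off can be pictured with a toy calculation; the unit "group elements", the constants, and the multiplicative way the factors are combined below are illustrative assumptions, not the paper's exact formula.

```python
# Toy illustration of the stated asymptotics for the logspace-Turing-machine
# ABE: secret keys scale with |M|, while ciphertexts scale linearly with the
# input length n and time bound T, and exponentially with the space bound S.

def key_size(machine_description_size: int) -> int:
    return machine_description_size  # O(|M|) group elements

def ciphertext_size(n: int, time_bound: int, space_bound: int) -> int:
    # Linear in n, linear in T, exponential in S; a product is used here purely
    # for illustration of how the dependencies interact.
    return n * time_bound * (2 ** space_bound)

# Example: for a logspace machine, 2^S stays polynomial in n, so the ciphertext
# remains polynomial even though the dependence on space is exponential.
print(ciphertext_size(n=1024, time_bound=10_000, space_bound=10))
```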
- Award ID(s): 1936825
- PAR ID: 10163639
- Journal Name: EUROCRYPT
- Sponsoring Org: National Science Foundation
More Like this
-
One of the primary research challenges in Attribute-Based Encryption (ABE) is constructing and proving cryptosystems that are adaptively secure. To date, the main paradigm for achieving adaptive security in ABE is dual system encryption. However, almost all such solutions in bilinear groups rely on (variants of) either the subgroup decision problem over composite order groups or the decision linear assumption. Both of these assumptions are decisional rather than search assumptions, and the target of the assumption is a source or bilinear group element. This is in contrast to earlier selectively secure ABE systems, which can be proven secure from either the decisional or search Bilinear Diffie-Hellman assumption. In this work we make progress on closing this gap by giving a new ABE construction for the subset functionality and proving security under the Search Bilinear Diffie-Hellman assumption. We first provide a framework for proving adaptive security in Attribute-Based Encryption systems. We introduce a concept of ABE with deletable attributes, where any party can take a ciphertext encrypted under an attribute string x and modify it into a ciphertext encrypted under any string x' derived by replacing any bits of x with ⊥ symbols (i.e., "deleting" attributes of x). The semantics of the system are that any private key for a circuit C can be used to decrypt a ciphertext associated with x' if none of the input bits read by circuit C are ⊥ symbols and C(x') = 1. We show a pathway for combining ABE with deletable attributes with constrained pseudorandom functions to obtain adaptively secure ABE, building upon the recent work of Tsabary. Our new ABE system is adaptively secure and is a ciphertext-policy ABE that supports the same functionality as the underlying constrained PRF, as long as the PRF is "deletion conforming". Here we also provide a simple constrained PRF construction that gives subset functionality. Our approach enables us to access a broader array of Attribute-Based Encryption schemes that support deletion of attributes. For example, we show that both the Goyal et al. (GPSW) and Boyen ABE schemes can trivially handle a deletion operation. And, by using a hardcore-bit variant of the GPSW scheme, we obtain an adaptively secure ABE scheme under the Search Bilinear Diffie-Hellman assumption in addition to pseudorandom functions in NC1. This gives the first adaptively secure ABE from a search assumption, as all prior work relied on decision assumptions over source group elements.
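The deletion operation itself is simple to state. The following Python sketch illustrates the intended semantics, with '*' standing in for the ⊥ symbol and all names being hypothetical, not the paper's notation.

```python
# Illustrative sketch of the "deletable attributes" semantics described above.

DELETED = "*"  # stands in for the deletion symbol (written as ⊥ above)

def delete_attributes(x: str, positions: set[int]) -> str:
    """Any party can publicly turn a ciphertext for x into one for x',
    where x' replaces the chosen bits of x with the deletion symbol."""
    return "".join(DELETED if i in positions else b for i, b in enumerate(x))

def key_can_decrypt(bits_read_by_circuit: set[int], circuit, x_prime: str) -> bool:
    """A key for a circuit decrypts a ciphertext for x' only if the circuit
    never reads a deleted position and the circuit accepts x'."""
    if any(x_prime[i] == DELETED for i in bits_read_by_circuit):
        return False
    return circuit(x_prime)

# Example: a circuit that only reads position 0 still works after deleting bit 2.
x_prime = delete_attributes("101", {2})                       # -> "10*"
print(key_can_decrypt({0}, lambda s: s[0] == "1", x_prime))   # True
```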
-
Attribute-based encryption (ABE) generalizes public-key encryption and enables fine-grained control over encrypted data. However, ABE upends the traditional trust model of public-key encryption by requiring a single trusted authority to issue decryption keys. If an adversary compromises the central authority and exfiltrates its secret key, then the adversary can decrypt every ciphertext in the system. This work introduces registered ABE, a primitive that allows users to generate secret keys on their own and then register the associated public key with a "key curator" along with their attributes. The key curator aggregates the public keys from the different users into a single compact master public key. To decrypt, users occasionally need to obtain helper decryption keys from the key curator, which they combine with their own secret keys. We require the size of the aggregated public key, the helper decryption keys, and the ciphertexts, as well as the encryption/decryption times, to be polylogarithmic in the number of registered users. Moreover, the key curator is entirely transparent and maintains no secrets. Registered ABE generalizes the notion of registration-based encryption (RBE) introduced by Garg et al. (TCC 2018), who focused on the simpler setting of identity-based encryption. We construct a registered ABE scheme that supports an a priori bounded number of users and policies that can be described by a linear secret sharing scheme (e.g., monotone Boolean formulas) from assumptions on composite-order pairing groups. Our approach deviates sharply from previous techniques for constructing RBE and only makes black-box use of cryptography. All existing RBE constructions (a weaker notion than registered ABE) rely on heavy non-black-box techniques. The encryption and decryption costs of our construction are comparable to those of vanilla pairing-based ABE. Two limitations of our scheme are that it requires a structured reference string whose size scales quadratically with the number of users (and linearly with the size of the attribute universe), and that the running time of registration scales linearly with the number of users. Finally, as a feasibility result, we construct a registered ABE scheme that supports general policies and an arbitrary number of users from indistinguishability obfuscation and somewhere statistically binding hash functions.
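At an interface level, the key-curator workflow can be pictured as below; the class and method names are assumptions made for illustration, not the paper's API, and the cryptographic core is deliberately left abstract.

```python
# Hypothetical interface sketch for registered ABE. The curator keeps NO
# secrets: it only aggregates user-supplied public keys into one short master
# public key and hands out helper decryption keys. The size requirements noted
# in the comments restate the polylogarithmic bounds described above.

class KeyCurator:
    def __init__(self):
        self.registered = []  # (public_key, attributes) pairs, public data only

    def register(self, public_key, attributes):
        """A user generates (pk, sk) locally and registers only pk and its attributes."""
        self.registered.append((public_key, attributes))

    def master_public_key(self):
        """Aggregated key used by encryptors; required to be polylog-sized
        in the number of registered users."""
        raise NotImplementedError("aggregation is the cryptographic core of the scheme")

    def helper_key(self, public_key):
        """Occasional helper decryption key a user combines with its own secret key."""
        raise NotImplementedError("depends on the concrete construction")
```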
-
We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result for showing that BPL=L “on average”, and another similar result for showing that BPSPACE[O(n)]=DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with minimal memory footprint (i.e., showing BPSPACE[S]⊆ DSPACE[c · S] for a small constant c), by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines, when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties that are specific to space-bounded machines.
-
Many models of self-assembly have been shown to be capable of performing computation. Tile Automata was recently introduced, combining features of both Cellular Automata and the 2-Handed Model of self-assembly, both of which are capable of universal computation. In this work we study the complexity of Tile Automata utilizing features inherited from the two models mentioned above. We first present a construction for simulating Turing Machines that performs both covert and fuel-efficient computation. We then explore the capabilities of limited Tile Automata systems, such as 1-dimensional systems (all assemblies are of height 1) and freezing systems (tiles may not repeat states). Using these results, we provide a connection between the problem of finding the largest uniquely producible assembly using n states and the busy beaver problem for non-freezing systems, and we provide a freezing system capable of uniquely assembling an assembly whose length is exponential in the number of states of the system. We finish by exploring the complexity of the Unique Assembly Verification problem in Tile Automata under different limitations, such as freezing and systems without the power of detachment.
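The "freezing" restriction mentioned above is easy to state operationally: no tile may ever return to a state it has already held. A small, hypothetical check of that condition on a transition history (the history format is an illustrative assumption):

```python
# Minimal sketch of the freezing condition in Tile Automata: along any
# execution, an individual tile never repeats a state it previously held.

def is_freezing(history_per_tile: dict[int, list[str]]) -> bool:
    """Returns True iff no tile position ever revisits a state."""
    return all(len(states) == len(set(states)) for states in history_per_tile.values())

# Example: tile 0 cycles back to state "a", so the system is not freezing.
print(is_freezing({0: ["a", "b", "a"], 1: ["a", "b", "c"]}))  # False
print(is_freezing({0: ["a", "b", "c"], 1: ["a", "c"]}))       # True
```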