
Title: Unifying Computational Entropies via Kullback–Leibler Divergence
We introduce KL-hardness, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, KL-hardness unifies the latter two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
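The divergence in the title is the standard Kullback–Leibler divergence; a brief reminder of its definition is given below, together with a rough schematic of the kind of inversion condition the abstract's KL-hardness alludes to. The schematic is only our hedged reading of the abstract, not the paper's exact definition.

```latex
% Kullback--Leibler divergence between discrete distributions P and Q
% (standard textbook definition, not specific to this paper):
\[
  \mathrm{D}_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}.
\]
% Rough schematic only (an assumption about the flavor of KL-hardness): for a
% one-way function f and every efficient adversary A that, given Y = f(X) for
% uniform X, tries to sample a preimage, the sampled joint distribution must
% stay KL-far from the real one:
\[
  \mathrm{D}_{\mathrm{KL}}\bigl((X, f(X)) \;\big\|\; (A(f(X)), f(X))\bigr) \;\ge\; \delta
  \qquad \text{for some noticeable } \delta > 0.
\]
```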
Award ID(s):
1763299
PAR ID:
10137367
Author(s) / Creator(s):
Date Published:
Journal Name:
CRYPTO 2019, Lecture Notes in Computer Science
Volume:
11693
Page Range / eLocation ID:
831-858
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We prove the computational intractability of rotating and placing n square tiles into a 1 × n array such that adjacent tiles are compatible—either equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes, as in jigsaw puzzles. Beyond basic NP-hardness, we prove that it is NP-hard even to approximately maximize the number of placed tiles (allowing blanks), while satisfying the compatibility constraint between nonblank tiles, within a factor of 0.9999999702. (On the other hand, there is an easy (1/2)-approximation.) This is the first (correct) proof of inapproximability for edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of distinguishing, for a directed graph on n nodes, between having a Hamiltonian path (length n − 1) and having at most 0.999999284 (n − 1) edges that form a vertex-disjoint union of paths. We use this gap hardness and gap-preserving reductions to establish similar gap hardness for 1 × n jigsaw and edge-matching puzzles. 
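The easy (1/2)-approximation mentioned above is not spelled out in the abstract; one natural candidate, sketched here in Python under that assumption, simply fills every other cell of the 1 × n array, so no two nonblank tiles are adjacent and the compatibility constraint holds vacuously. Since the optimum places at most n tiles and this places ⌈n/2⌉, the factor 1/2 follows. The tile encoding and helper names below are illustrative, not taken from the paper.

```python
from math import ceil

def half_approx_placement(tiles, n):
    """Fill only even-indexed cells of a 1 x n row with tiles (None = blank).

    Nonblank tiles are never adjacent, so any edge-compatibility constraint
    between nonblank neighbors holds vacuously.  The optimum places at most
    n tiles while this places ceil(n / 2), giving a 1/2-approximation.
    (Illustrative sketch; the tile format is an assumption, not from the paper.)
    """
    placement = [None] * n
    available = iter(tiles)
    for i in range(0, n, 2):          # cells 0, 2, 4, ... are pairwise non-adjacent
        tile = next(available, None)
        if tile is None:              # ran out of tiles
            break
        placement[i] = tile
    return placement

def row_is_valid(placement, compatible):
    """Every pair of adjacent nonblank tiles must satisfy the compatibility predicate."""
    return all(
        compatible(left, right)
        for left, right in zip(placement, placement[1:])
        if left is not None and right is not None
    )

if __name__ == "__main__":
    # Tiles encoded as (top, right, bottom, left) edge colors; edge-matching
    # compatibility compares a tile's right edge with its neighbor's left edge.
    tiles = [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 0, 1), (2, 2, 2, 2), (3, 3, 3, 3)]
    row = half_approx_placement(tiles, n=5)
    assert row_is_valid(row, lambda a, b: a[1] == b[3])
    placed = sum(tile is not None for tile in row)
    print(f"placed {placed} of {len(tiles)} tiles; guarantee is ceil(n/2) = {ceil(5 / 2)}")
```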
  2. Consider the following two fundamental open problems in complexity theory: • Does a hard-on-average language in NP imply the existence of one-way functions? • Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes. Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems—namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true. This result follows from a more general study of interactive puzzles—a generalization of average-case hardness in NP—and in particular, a novel round-collapse theorem for computationally sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of O(1)-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.
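Since this abstract leans on the notions of distributional and promise-true NP search problems, a standard formalization (textbook-style, not quoted from the paper) may be useful:

```latex
% A distributional NP search problem is a pair (R, D): R an NP relation and
% D = {D_n} an efficiently samplable family of instance distributions.  A solver
% S succeeds if it finds witnesses for sampled instances with good probability:
\[
  \Pr_{x \sim D_n}\bigl[(x, S(x)) \in R\bigr] \;\ge\; 1 - \varepsilon(n).
\]
% "Promise-true" restricts attention to samplers that only output true statements,
% i.e., instances that actually have witnesses:
\[
  \Pr_{x \sim D_n}\bigl[\exists\, w : (x, w) \in R\bigr] \;=\; 1.
\]
```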
  3. Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if P = NP, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland. A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity. Ball, Rosen, Sabin, and Vasudevan [BRSV’17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function. Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems. The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC). Our main contribution is a novel construction of a cryptographic key exchange, together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-k-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions. Where previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in O(n) time and secure against O(n^2)-time adversaries (see [Merkle’78] and [BGI’08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest party and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
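Zero-k-Clique is only named above; the usual worst-case formulation from fine-grained complexity, recalled below as background (the paper works with a specific average-case version whose weight distribution is not given in the abstract), is:

```latex
% Background (standard fine-grained formulation; the paper's average-case variant
% may differ in its choice of weight distribution and modulus): given a k-partite
% graph with parts of size n and integer edge weights w, decide whether there are
% vertices v_1, ..., v_k, one per part, that form a clique with
\[
  \sum_{1 \le i < j \le k} w(v_i, v_j) \;\equiv\; 0 \pmod{R}.
\]
% The associated conjecture is that no algorithm improves polynomially over the
% trivial O(n^k)-time enumeration, i.e., no O(n^{k-\varepsilon})-time algorithm exists.
```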
  4. One of the earliest models of weak randomness is the Chor-Goldreich (CG) source. A (t,n,k)-CG source is a sequence of random variables X = (X_1, …, X_t) ∼ ({0,1}^n)^t, where each X_i has min-entropy k conditioned on any fixing of X_1, …, X_{i−1}. Chor and Goldreich proved that there is no deterministic way to extract randomness from such a source. Nevertheless, Doron, Moshkovitz, Oh, and Zuckerman showed that there is a deterministic way to condense a CG source into a string with small entropy gap. They gave applications of such a condenser to simulating randomized algorithms with small error and to certain cryptographic tasks. They studied the case where the block length n and entropy rate k/n are both constant. We study the much more general setting where the block length can be arbitrarily large, and the entropy rate can be arbitrarily small. We construct the first explicit condenser for CG sources in this setting, and it can be instantiated in a number of different ways. When the entropy rate of the CG source is constant, our condenser requires just a constant number of blocks t to produce an output with entropy rate 0.9, say. In the low-entropy regime, using t = poly(n) blocks, our condenser can achieve output entropy rate 0.9 even if each block has just 1 bit of min-entropy. Moreover, these condensers have exponentially small error. Finally, we provide strong existential and impossibility results. For our existential result, we show that a random function is a seedless condenser (with surprisingly strong parameters) for any small family of sources. As a corollary, we get new existential results for seeded condensers and condensers for CG sources. For our impossibility result, we show the latter result is nearly tight, by giving a simple proof that the output of any condenser for CG sources must inherit the entropy gap of (one block of) its input.
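For readers unfamiliar with the notation, the conditional min-entropy requirement in the CG-source definition can be written out explicitly (standard definitions, matching the description above):

```latex
% Min-entropy of a random variable X (standard definition):
\[
  H_\infty(X) \;=\; -\log \max_{x} \Pr[X = x].
\]
% A (t, n, k)-CG source X = (X_1, \ldots, X_t) over (\{0,1\}^n)^t requires that,
% for every i and every prefix x_1, \ldots, x_{i-1} in the support,
\[
  H_\infty\bigl(X_i \mid X_1 = x_1, \ldots, X_{i-1} = x_{i-1}\bigr) \;\ge\; k.
\]
```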
  5. Achieving quantum computational advantage requires solving a classically intractable problem on a quantum device. Natural proposals rely upon the intrinsic hardness of classically simulating quantum mechanics; however, verifying the output is itself classically intractable. On the other hand, certain quantum algorithms (e.g. prime factorization via Shor's algorithm) are efficiently verifiable, but require more resources than what is available on near-term devices. One way to bridge the gap between verifiability and implementation is to use "interactions" between a prover and a verifier. By leveraging cryptographic functions, such protocols enable the classical verifier to enforce consistency in a quantum prover's responses across multiple rounds of interaction. In this work, we demonstrate the first implementation of an interactive quantum advantage protocol, using an ion trap quantum computer. We execute two complementary protocols -- one based upon the learning with errors problem and another where the cryptographic construction implements a computational Bell test. To perform multiple rounds of interaction, we implement mid-circuit measurements on a subset of trapped ion qubits, with subsequent coherent evolution. For both protocols, the performance exceeds the asymptotic bound for classical behavior; maintaining this fidelity at scale would conclusively demonstrate verifiable quantum advantage. 
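The first protocol above is built on the learning with errors (LWE) problem; a standard statement of search-LWE, included only as background rather than as the experiment's specific parameterization, is:

```latex
% Search-LWE (standard form; the concrete parameters used in the experiment are
% not given in the abstract): for a secret s in Z_q^n, the adversary receives samples
\[
  \bigl(a_i,\; b_i = \langle a_i, s\rangle + e_i \bmod q\bigr), \qquad
  a_i \leftarrow \mathbb{Z}_q^n \text{ uniform},\quad e_i \text{ drawn from a small-noise distribution},
\]
% and must recover s.  The decision version asks to distinguish such pairs from
% uniformly random ones; this style of assumption is what lets a classical verifier
% enforce consistency in a quantum prover's responses across rounds of interaction.
```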