

Title: On the Security of Proofs of Sequential Work in a Post-Quantum World
A Proof of Sequential Work (PoSW) allows a prover to convince a resource-bounded verifier that the prover invested a substantial amount of sequential time to perform some underlying computation. PoSWs have many applications, including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Mahmoody, Moran, and Vadhan (ITCS 2013) gave the first construction of a PoSW in the random oracle model, though the construction relied on expensive depth-robust graphs. In a recent breakthrough, Cohen and Pietrzak (EUROCRYPT 2018) gave an efficient PoSW construction that does not require expensive depth-robust graphs. In the classical parallel random oracle model, it is straightforward to argue that any successful PoSW attacker must produce a long ℋ-sequence and that any malicious party running in sequential time T-1 will fail to produce an ℋ-sequence of length T except with negligible probability. In this paper, we prove that any quantum attacker running in sequential time T-1 will fail to produce an ℋ-sequence of length T except with negligible probability, even if the attacker submits a large batch of quantum queries in each round. The proof is substantially more challenging and highlights the power of Zhandry’s recent compressed oracle technique (CRYPTO 2019). We further extend this result to establish post-quantum security of a non-interactive PoSW obtained by applying the Fiat-Shamir transform to Cohen and Pietrzak’s efficient construction (EUROCRYPT 2018).
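The result hinges on the sequentiality of ℋ-sequences: each element can only be computed once the previous one is known. As a hedged illustration (not the paper's actual PoSW construction), the sketch below shows the simplest kind of ℋ-sequence, plain iterated hashing, with SHA-256 standing in for the random oracle ℋ; the function names are ours.

```python
import hashlib

# Minimal sketch (not the paper's construction): the simplest form of an
# H-sequence is iterated hashing x_0, x_1 = H(x_0), ..., x_T = H(x_{T-1}).
# SHA-256 stands in for the random oracle H; all names are illustrative.

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def h_sequence(x0: bytes, T: int) -> list[bytes]:
    """Compute a length-T H-sequence; each step depends on the previous
    output, so an honest prover needs T sequential queries to H."""
    xs = [x0]
    for _ in range(T):
        xs.append(H(xs[-1]))
    return xs

def verify_h_sequence(xs: list[bytes]) -> bool:
    return all(xs[i + 1] == H(xs[i]) for i in range(len(xs) - 1))

if __name__ == "__main__":
    seq = h_sequence(b"statement", T=1000)
    assert verify_h_sequence(seq)
```

An honest prover here makes T sequential hash evaluations; the paper's result is that even a quantum attacker limited to T-1 sequential rounds cannot produce such a sequence, no matter how many queries it batches per round.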
Award ID(s):
1755708 2047272 1910659
NSF-PAR ID:
10297291
Author(s) / Creator(s):
; ;
Editor(s):
Tessaro, Stefano
Date Published:
Journal Name:
2nd Conference on Information-Theoretic Cryptography (ITC 2021)
Volume:
199
Page Range / eLocation ID:
22:1--22:27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Dodis, Y. (Ed.)
    Memory-hard functions (MHFs) are a useful cryptographic primitive which can be used to design egalitarian proof-of-work puzzles and to protect low-entropy secrets like passwords against brute-force attackers. Intuitively, a memory-hard function is a function whose evaluation costs are dominated by memory costs even if the attacker uses specialized hardware (FPGAs/ASICs), and several cost metrics have been proposed to quantify this intuition. For example, space-time cost looks at the product of running time and the maximum space usage over the entire execution of an algorithm. Alwen and Serbinenko (STOC 2015) observed that the space-time cost of evaluating a function multiple times may not scale linearly in the number of instances being evaluated and introduced the stricter requirement that a memory-hard function has high cumulative memory complexity (CMC) to ensure that an attacker’s amortized space-time costs remain large even if the attacker evaluates the function on multiple different inputs in parallel. Alwen et al. (EUROCRYPT 2018) observed that the notion of CMC still gives the attacker undesirable flexibility in selecting space-time tradeoffs, e.g., while the MHF Scrypt has maximal CMC Ω(N^2), an attacker could evaluate the function with constant O(1) memory in time O(N^2). Alwen et al. introduced the even stricter notion of sustained space complexity and designed an MHF with sustained space complexity s=Ω(N/log N) for t=Ω(N) steps, i.e., any algorithm evaluating the function in the parallel random oracle model must have at least t=Ω(N) steps during which the memory usage is at least Ω(N/log N). In this work, we use dynamic pebbling games and dynamic graphs to explore tradeoffs between sustained space complexity and cumulative memory complexity for data-dependent memory-hard functions such as Argon2id and Scrypt. We design our own dynamic graph (dMHF) with the property that any dynamic pebbling strategy either (1) has Ω(N) rounds with Ω(N) space, or (2) has CMC Ω(N^{3−ϵ}), substantially larger than N^2. For Argon2id we show that any dynamic pebbling strategy either (1) has Ω(N) rounds with Ω(N^{1−ϵ}) space, or (2) has CMC ω(N^2). We also present a dynamic version of DRSample (Alwen et al. 2017) for which any dynamic pebbling strategy either (1) has Ω(N) rounds with Ω(N/log N) space, or (2) has CMC Ω(N^3/log N).
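To make the two cost measures concrete, the following sketch (illustrative only; the memory trace and numbers are hypothetical, not taken from the paper) computes space-time cost and cumulative memory complexity for a toy per-round memory trace and shows how a strategy can keep its CMC far below its space-time cost.

```python
# Illustrative sketch: compare the space-time cost and cumulative memory
# complexity (CMC) of a toy memory-usage trace. The trace lists the
# evaluator's memory usage (in blocks) at each parallel round.

def space_time_cost(trace: list[int]) -> int:
    # product of running time and peak memory usage
    return len(trace) * max(trace)

def cmc(trace: list[int]) -> int:
    # sum of memory usage over all rounds
    return sum(trace)

# A trace that keeps memory high only briefly: its space-time cost is large,
# but its CMC is much smaller. This is the kind of flexibility that CMC-based
# and sustained-space lower bounds aim to rule out.
trace = [1000] * 10 + [1] * 990
print(space_time_cost(trace))  # 1000 rounds * 1000 peak = 1_000_000
print(cmc(trace))              # 10*1000 + 990*1 = 10_990
```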
  2. We revisit the problem of finding B-block-long collisions in Merkle-Damgård hash functions in the auxiliary-input random oracle model, in which an attacker gets a piece of S-bit advice about the random oracle and makes T oracle queries. Akshima, Cash, Drucker and Wee (CRYPTO 2020), based on the work of Coretti, Dodis, Guo and Steinberger (EUROCRYPT 2018), showed a simple attack for 2 ≤ B ≤ T (with respect to a random salt). The attack achieves advantage Ω̃(STB/2^n + T^2/2^n), where n is the output length of the random oracle. They conjectured that this attack is optimal. However, this so-called STB conjecture was only proved for B ≈ T and B = 2. Very recently, Ghoshal and Komargodski (CRYPTO 2022) confirmed the STB conjecture for all constant values of B, and provided an Õ(S^4TB^2/2^n + T^2/2^n) bound for all choices of B. In this work, we prove an Õ((STB/2^n) · max{1, ST^2/2^n} + T^2/2^n) bound for every 2 < B < T. Our bound confirms the STB conjecture for ST^2 ≤ 2^n, and is optimal up to a factor of S for ST^2 > 2^n (note that T^2 is always at most 2^n, since otherwise finding a collision is trivial by the birthday attack). Our result subsumes all previous upper bounds for all ranges of parameters except for B = Õ(1) and ST^2 > 2^n. We obtain our results by adopting and refining the technique of Chung, Guo, Liu, and Qian (FOCS 2020). Our approach yields more modular proofs and sheds light on how to bypass the limitations of prior techniques. Along the way, we obtain a considerably simpler and more illuminating proof for B = 2, recovering the main result of Akshima, Cash, Drucker and Wee.
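For context, the sketch below gives a minimal salted Merkle-Damgård iteration; the block size and the use of SHA-256 as the compression function are our own illustrative choices, not the model from the paper. A B-block collision in this setting is a pair of distinct messages, each at most B blocks long, that hash to the same digest under the same random salt.

```python
import hashlib

# Illustrative sketch (block size and compression function are ours): salted
# Merkle-Damgard iteration, with SHA-256 standing in for the compression
# function h. The salt plays the role of the initial chaining value.

BLOCK = 64  # bytes per message block (illustrative)

def compress(chaining: bytes, block: bytes) -> bytes:
    return hashlib.sha256(chaining + block).digest()

def md_hash(salt: bytes, message: bytes) -> bytes:
    # pad to a whole number of blocks (simplified; no length strengthening)
    if len(message) % BLOCK:
        message += b"\x00" * (BLOCK - len(message) % BLOCK)
    state = salt
    for i in range(0, len(message), BLOCK):
        state = compress(state, message[i:i + BLOCK])
    return state

digest = md_hash(salt=b"\x00" * 32, message=b"hello world")
```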
  3. Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. In this work, we focus on post-quantum zkSNARKs, with an emphasis on minimizing proof size. Currently, there is a 1000x gap in proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3x shorter than previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42x reduction in proof size and a 60x reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is 131x longer, but both the prover and the verifier are faster (by 1.2x and 2.8x, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) with a linear-only vector encryption scheme. We develop a concretely efficient lattice-based instantiation of this compiler by considering quadratic extension fields of moderate characteristic and using linear-only vector encryption over rank-2 module lattices.
  4. We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument of knowledge with the following three efficiency properties for computing and proving a (non-deterministic, polynomial time) parallel RAM computation that can be computed in parallel time T with at most p processors: (1) the prover's (parallel) running time is T + polylog(T·p) (in other words, the prover's running time is essentially T for large computation times!); (2) the prover uses at most p · polylog(T·p) processors; (3) the communication and verifier complexity are both polylog(T·p). The combination of all three is desirable, as it gives a way to leverage a moderate increase in parallelism in favor of near-optimal running time. We emphasize that even a factor-two overhead in the prover's parallel running time is not allowed. Our main contribution is a generic construction of SPARKs from any succinct argument of knowledge where the prover's parallel running time is T · polylog(T·p) when using p processors, assuming collision-resistant hash functions. When suitably instantiating our construction, we achieve a four-round SPARK for any parallel RAM computation assuming only collision resistance. Additionally assuming the existence of a succinct non-interactive argument of knowledge (SNARK), we construct a non-interactive SPARK that also preserves the space complexity of the underlying computation up to polylog(T·p) factors. We also show the following applications of non-interactive SPARKs. First, they immediately imply delegation protocols with near-optimal prover (parallel) running time. This, in turn, gives a way to construct verifiable delay functions (VDFs) from any sequential function. When the sequential function is also memory-hard, this yields the first construction of a memory-hard VDF.
  5. The goal of this paper is to improve the efficiency and applicability of straightline extraction techniques in the random oracle model. Straightline extraction in the random oracle model refers to the existence of an extractor which, given the random oracle queries made by a prover on some statement, is able to produce a witness for that statement with roughly the same probability that the prover produces a verifying proof. This notion applies to both zero-knowledge protocols and verifiable computation, where the goal is compressing a proof. Pass (CRYPTO '03) first showed how to achieve this property for NP using a cut-and-choose technique, which incurred a communication overhead that grows with the security parameter. Fischlin (CRYPTO '05) presented a more efficient technique based on "proofs of work" that sheds this cost, but only applies to a limited class of Sigma protocols with a "quasi-unique response" property, which, for example, does not necessarily include the standard OR composition of Sigma protocols. With Schnorr/EdDSA signature aggregation as a motivating application, we develop new techniques to improve the computation cost of straight-line extractable proofs. Our improvements over the state of the art range from 70x to 200x for the best compression parameters. This is due to a uniquely suited polynomial evaluation algorithm, and the insight that a proof-of-work that relies on multicollisions and the birthday paradox is faster to solve than inverting a fixed target. Our collision-based proof-of-work more generally improves the prover's random oracle query complexity when applied in the NIZK setting as well. In addition to reducing the query complexity of Fischlin's prover, for a special class of Sigma protocols we can for the first time closely match a new lower bound we present. Finally, we extend Fischlin's technique so that it applies to a more general class of strongly-sound Sigma protocols, which includes the OR composition. We achieve this by carefully randomizing Fischlin's technique: we show that its current deterministic nature prevents its application to certain multi-witness languages.
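The birthday-paradox observation can be illustrated with a toy proof-of-work (a hedged sketch, not the paper's protocol; all names and parameters are ours): finding two nonces whose hash values collide on a k-bit prefix takes roughly 2^(k/2) queries, whereas hitting a fixed k-bit target takes roughly 2^k.

```python
import hashlib
from itertools import count

# Toy comparison: a proof-of-work asking for a collision on a k-bit hash
# prefix is solvable in about 2^(k/2) queries by the birthday paradox,
# whereas a fixed k-bit target needs about 2^k queries.

def prefix(data: bytes, k: int) -> int:
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - k)

def solve_fixed_target(msg: bytes, k: int) -> int:
    # find a nonce whose hash prefix is all zeros (~2^k expected queries)
    for nonce in count():
        if prefix(msg + nonce.to_bytes(8, "big"), k) == 0:
            return nonce

def solve_collision(msg: bytes, k: int) -> tuple[int, int]:
    # find two nonces whose hash prefixes collide (~2^(k/2) expected queries)
    seen = {}
    for nonce in count():
        p = prefix(msg + nonce.to_bytes(8, "big"), k)
        if p in seen:
            return seen[p], nonce
        seen[p] = nonce

if __name__ == "__main__":
    print(solve_collision(b"proof-statement", k=24))
```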