Title: On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i
Argon2i is a data-independent memory-hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that (1) an Argon2i DAG is $(e, O(n^3/e^3))$-reducible. (2) The cumulative pebbling cost for Argon2i is at most $O(n^{1.768})$. This improves upon the previous best upper bound of $O(n^{1.8})$ [AB17]. (3) An Argon2i DAG is $(e, \tilde{\Omega}(n^3/e^3))$-depth robust. By contrast, the analysis of [ABP17a] only established that Argon2i was $(e, \tilde{\Omega}(n^2/e^2))$-depth robust. (4) The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}(n^{1.75})$. This improves on the previous best bound of $\Omega(n^{1.66})$ [ABP17a] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. We also show that Argon2i has high fractional depth-robustness, which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
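To make the $(e, d)$-reducibility claim concrete, here is a small Python sketch (illustrative only: it uses a simplified Argon2i-A-style DAG with uniform back-edges, not Argon2i-B's actual non-uniform index distribution) that builds such a DAG and measures the depth remaining after a node set $S$ is deleted. The paper's attack picks $S$ carefully to force depth $O(n^3/e^3)$; the random $S$ below is far weaker but shows the mechanics.

```python
import random

def argon2i_dag(n, seed=0):
    """Build a simplified Argon2i-A-style DAG on n nodes: node i has an
    edge from i-1 and from one uniformly random earlier node. (Argon2i-B
    uses a non-uniform index distribution, so this is an approximation.)"""
    rng = random.Random(seed)
    parents = {0: []}
    for i in range(1, n):
        parents[i] = [i - 1]
        if i > 1:
            parents[i].append(rng.randrange(i - 1))
    return parents

def depth_after_removal(parents, removed):
    """Longest path (in nodes) avoiding the removed set. A DAG is
    (e, d)-reducible iff some e-node set leaves depth at most d."""
    depth, best = {}, 0
    for v in sorted(parents):  # node ids are already a topological order
        if v in removed:
            continue
        depth[v] = 1 + max((depth[p] for p in parents[v] if p in depth),
                           default=0)
        best = max(best, depth[v])
    return best

if __name__ == "__main__":
    n = 1 << 12
    dag = argon2i_dag(n)
    rng = random.Random(1)
    for e in (n // 16, n // 8, n // 4):
        removed = set(rng.sample(range(n), e))
        print(f"|S| = {e:4d} -> remaining depth {depth_after_removal(dag, removed)}")
```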
Award ID(s):
1704587
NSF-PAR ID:
10068523
Author(s) / Creator(s):
Date Published:
Journal Name:
Theory of Cryptography. TCC 2017
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years several increasingly stringent goals for an MHF have been proposed, including the requirement that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity and sustained space complexity. Data-Independent Memory Hard Functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. iMHFs can be specified using a directed acyclic graph (DAG) $G$ with $N=2^n$ nodes and low indegree, and the complexity of the iMHF can be analyzed using a pebbling game. Recently, Alwen et al. [CCS'17] constructed a DAG called DRSample which has aAT complexity at least $\Omega\left( N^2/\log N\right)$. Asymptotically DRSample outperformed all prior iMHF constructions including Argon2i, winner of the password hashing competition (aAT cost $\mathcal{O}\left(N^{1.767}\right)$), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [ASIACRYPT'16] is particularly effective against DRSample, e.g., the aAT cost is $\mathcal{O}\left( N^2/\log N\right)$. In fact, our empirical analysis {\em reverses} the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $N \leq 2^{24}$. We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to {\em known} pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $\omega(N^2)$ or requires at least $\Omega(N)$ steps with $\Omega(N/\log N)$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $\Omega(N^2/\log N)$. We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $\Omega\left( N^2\right)$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $\Omega(N^2 \log \log N/\log N)$ --- the best possible bound for an iMHF. We implement our new iMHF and demonstrate that it is just as fast as Argon2. Along the way we propose a simple modification to the Argon2 round function which increases an attacker's aAT cost by nearly an order of magnitude without increasing running time on a CPU. Finally, we give a pebbling reduction which proves that in the parallel random oracle model (PROM) the cost of evaluating an iMHF like Argon2i or DRSample+BRG is given by the pebbling cost of the underlying DAG. Prior pebbling reductions assumed that the iMHF round function concatenates input labels before hashing and did not apply to practical iMHFs such as Argon2i, DRSample or DRSample+BRG where input labels are instead XORed together.
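As a side note, the bit-reversal graph (BRG) used here to extend DRSample is simple to generate. The sketch below shows one common formulation (a path plus bit-reversal edges into the second half of the nodes); the exact node counts and the composition with DRSample used in the paper may differ.

```python
def bit_reverse(x, k):
    """Reverse the k low-order bits of x."""
    r = 0
    for _ in range(k):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def bit_reversal_graph(k):
    """Bit-reversal graph on 2 * 2^k nodes: a path 0 -> 1 -> ... -> 2N-1
    plus an edge from each first-half node i to node N + rev_k(i).
    (One common formulation; the paper's composition with DRSample
    may use different conventions.)"""
    n = 1 << k
    edges = [(i, i + 1) for i in range(2 * n - 1)]           # path edges
    edges += [(i, n + bit_reverse(i, k)) for i in range(n)]  # reversal edges
    return edges

print(bit_reversal_graph(2))  # 8-node example
```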
  2. Memory Hard Functions (MHFs) have been proposed as an answer to the growing inequality between the computational speed of general purpose CPUs and Application Specific Integrated Circuits (ASICs). MHFs have seen widespread applications including password hashing, key stretching and proofs of work. Several metrics have been proposed to quantify the “memory hardness” of a function. Cumulative memory complexity (CMC) [8] (or amortized Area × Time complexity [4]) attempts to quantify the cost to acquire/build the hardware to evaluate the function — after normalizing the time it takes to evaluate the function. By contrast, bandwidth hardness [30] attempts to quantify the amortized energy costs of evaluating this function on hardware — which in turn is largely dominated by the number of cache misses. Ideally, a good MHF would be both bandwidth hard and have high cumulative memory complexity. While the cumulative memory complexity of leading MHF candidates is well understood, little is known about the bandwidth hardness of many prominent MHF candidates. Our contributions are as follows: First, we provide the first reduction proving that, in the parallel random oracle model, the bandwidth hardness of a Data-Independent Memory Hard Function (iMHF) is described by the red-blue pebbling cost of the directed acyclic graph (DAG) associated with that iMHF. Second, we show that the goals of designing an MHF with high CMC/bandwidth hardness are well aligned. In particular, we prove that any function with high CMC also has relatively high energy costs. This result leads to the first unconditional lower bound on the energy cost of scrypt in the parallel random oracle model. Third, we analyze the bandwidth hardness of several prominent iMHF candidates such as Argon2i [11], winner of the password hashing competition, aATSample and DRSample [4] — the first practical iMHF with essentially asymptotically optimal CMC. We show that Argon2i, aATSample and DRSample are maximally bandwidth hard for appropriate cache sizes. Finally, we show that the problem of finding a red-blue pebbling with minimum energy cost is NP-hard.
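For intuition, a red-blue pebbling charges energy only for traffic between cache (red pebbles) and main memory (blue pebbles). The following minimal sketch, under simplified move and cost rules that are our own assumptions rather than the paper's formal definitions, checks a candidate pebbling and totals its energy cost.

```python
def energy_cost(moves, parents, m, c_r=1, c_w=1):
    """Check legality of a red-blue pebbling with cache budget m and
    return its energy cost. Red pebbles model cache, blue pebbles model
    main memory; only blue<->red transfers cost energy here."""
    red, blue, cost = set(), set(), 0
    for op, v in moves:
        if op == "compute":   # place a red pebble; all parents must be red
            assert all(p in red for p in parents[v])
            red.add(v)
        elif op == "write":   # red -> blue (write back to main memory)
            assert v in red
            blue.add(v)
            cost += c_w
        elif op == "read":    # blue -> red (cache miss)
            assert v in blue
            red.add(v)
            cost += c_r
        elif op == "delete":  # discard a red pebble
            red.discard(v)
        assert len(red) <= m, "cache budget exceeded"
    return cost

# Toy usage: a 4-node line graph pebbled with only m = 2 red pebbles.
line = {0: [], 1: [0], 2: [1], 3: [2]}
moves = [("compute", 0), ("compute", 1), ("delete", 0),
         ("compute", 2), ("delete", 1), ("compute", 3)]
print(energy_cost(moves, line, m=2))  # 0: a line graph needs no spills
```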
  3. Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains. Alwen and Serbinenko [STOC’15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs; as memory cost doesn’t scale linearly, functions with the same cmc could still have very different actual hardware cost. In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using $n$ steps and $O(n/\log n)$ memory, in each step making one query to the (fixed-input length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs $\Omega(n/\log n)$ memory for $\Omega(n)$ steps. As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on $n$ nodes with constant indegree and high “sustained-space complexity”, meaning that any parallel black-pebbling strategy requires $\Omega(n/\log n)$ pebbles for at least $\Omega(n)$ steps. Along the way we construct a family of maximally “depth-robust” DAGs with maximum indegree $O(\log n)$, improving upon the construction of Mahmoody et al. [ITCS’13] which had maximum indegree $O(\log^2 n \cdot \operatorname{polylog}(\log n))$.
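The gap between cumulative and sustained memory is easy to see in code. This hypothetical helper takes a parallel pebbling, recorded as the set of pebbled nodes at each step, and reports both the cumulative complexity and the number of steps holding at least $s$ pebbles (the quantity sustained-space complexity asks to keep large).

```python
def pebbling_profile(steps, s):
    """steps: list of pebble sets P_1, ..., P_t (one per time step).
    Returns (cmc, sustained): the cumulative complexity sum_t |P_t|,
    and the number of steps holding at least s pebbles."""
    cmc = sum(len(p) for p in steps)
    sustained = sum(1 for p in steps if len(p) >= s)
    return cmc, sustained

# Two pebblings with identical cmc = 8 but very different sustained space:
flat  = [{1, 2}] * 4                      # 2 pebbles held for 4 steps
spiky = [{1}, {1}, {1}, {1, 2, 3, 4, 5}]  # a single 5-pebble spike
print(pebbling_profile(flat, s=2))   # (8, 4)
print(pebbling_profile(spiky, s=2))  # (8, 1)
```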
  4. Kiltz, E. (Ed.)
    The classical (parallel) black pebbling game is a useful abstraction which allows us to analyze the resources (space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$. Of particular interest in the field of cryptography are data-independent memory-hard functions $f_{G,H}$ which are defined by a directed acyclic graph (DAG) $G$ and a cryptographic hash function $H$. The pebbling complexity of the graph $G$ characterizes the amortized cost of evaluating $f_{G,H}$ multiple times as well as the total cost to run a brute-force preimage attack over a fixed domain $X$, i.e., given $y \in \{0,1\}^*$ find $x \in X$ such that $f_{G,H}(x)=y$. While a classical attacker will need to evaluate the function $f_{G,H}$ at least $m=|X|$ times, a quantum attacker running Grover’s algorithm only requires $O(\sqrt{m})$ blackbox calls to a quantum circuit $C_{G,H}$ evaluating the function $f_{G,H}$. Thus, to analyze the cost of a quantum attack it is crucial to understand the space-time cost (equivalently width times depth) of the quantum circuit $C_{G,H}$. We first observe that a legal black pebbling strategy for the graph $G$ does not necessarily imply the existence of a quantum circuit with comparable complexity—in contrast to the classical setting where any efficient pebbling strategy for $G$ corresponds to an algorithm with comparable complexity for evaluating $f_{G,H}$. Motivated by this observation we introduce a new parallel reversible pebbling game which captures additional restrictions imposed by the No-Deletion Theorem in Quantum Computing. We apply our new reversible pebbling game to analyze the reversible space-time complexity of several important graphs: Line Graphs, Argon2i-A, Argon2i-B, and DRSample. Specifically, (1) we show that a line graph of size $N$ has reversible space-time complexity at most $O(N^{1+2/\sqrt{\log N}})$. (2) We show that any $(e, d)$-reducible DAG has reversible space-time complexity at most $O(Ne + dN2^d)$. In particular, this implies that the reversible space-time complexities of Argon2i-A and Argon2i-B are at most $O(N^2 \log\log N/\sqrt{\log N})$ and $O(N^2/(\log N)^{1/3})$, respectively. (3) We show that the reversible space-time complexity of DRSample is at most $O(N^2 \log\log N/\log N)$. We also study the cumulative pebbling cost of reversible pebblings, extending a (non-reversible) pebbling attack of Alwen and Blocki on depth-reducible graphs.
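To illustrate the reversibility constraint, here is a Bennett-style recursive reversible pebbling of a line graph (a classical textbook construction, not the paper's strategy; node 0 is treated as a free input). Removing a pebble obeys the same predecessor rule as placing one, which is precisely the restriction the No-Deletion Theorem imposes.

```python
def reversible_pebble_line(k):
    """Bennett-style reversible pebbling of the line 0 -> 1 -> ... -> 2^k.
    Placing *or removing* a pebble on v requires a pebble on v-1; node 0
    is assumed to be a free input. Uses O(k) pebbles but 3^k moves,
    versus 2^k placements non-reversibly -- the kind of gap reversible
    pebbling complexity measures."""
    moves = []
    def go(lo, hi, place):
        # (un)pebble node hi, given a pebble already resting on node lo
        if hi - lo == 1:
            moves.append((hi, "place" if place else "remove"))
            return
        mid = (lo + hi) // 2
        go(lo, mid, True)     # set a pebble on the midpoint
        go(mid, hi, place)    # recurse on the right half
        go(lo, mid, False)    # reversibly clear the midpoint again
    go(0, 1 << k, True)
    return moves

print(len(reversible_pebble_line(10)))  # 59049 = 3^10 moves to pebble node 1024
```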
  5. Cumulative memory – the sum of space used per step over the duration of a computation – is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when many instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least $1/\mathrm{poly}(n)$ requires cumulative memory $\Omega(n^2)$, any classical matrix multiplication algorithm requires cumulative memory $\Omega(n^6/T)$, any quantum sorting circuit requires cumulative memory $\Omega(n^3/T)$, and any quantum circuit that finds $k$ disjoint collisions in a random function requires cumulative memory $\Omega(k^3 n/T^2)$. (Full version of ICALP 2023 paper.)
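As a toy numeric illustration (ours, not from the paper): cumulative memory is just the sum of per-step space, so an algorithm with one brief memory spike can be far cheaper under this measure than its time × max-space product suggests.

```python
def cumulative_memory(space_per_step):
    """Cumulative memory of a computation given as a list of per-step
    space usages: the sum over all steps (the measure defined above)."""
    return sum(space_per_step)

n = 1000
steady = [n] * n                   # n cells held for n steps
spiky = [1] * (n - 1) + [n * n]    # tiny footprint with one huge spike
for trace in (steady, spiky):
    # cumulative memory vs. the coarser time * max-space product
    print(cumulative_memory(trace), len(trace) * max(trace))
# Both traces have cumulative memory ~10^6, but the spiky trace's
# time * max-space product is ~10^9 -- overestimating its cost 1000-fold.
```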