

Title: There Are 10 Types of Vectors (and Polynomials): Efficient Zero-Knowledge Proofs of "One-Hotness" via Polynomials with One Zero
We present a new 4-move special honest-verifier zero-knowledge proof of knowledge system for proving that a vector of Pedersen commitments opens to a so-called "one-hot" vector (i.e., to a vector from the standard orthonormal basis) from $\mathbb{Z}_p^n$. The need for such proofs arises in the contexts of symmetric private information retrieval (SPIR), end-to-end verifiable voting (E2E), and privacy-preserving data aggregation and analytics, among others. The key insight underlying the new protocol is a simple observation regarding the paucity of roots of polynomials of bounded degree over a finite field. The new protocol is fast and yields succinct proofs: For vectors of length $n$, the prover performs $\Theta(\lg{n})$ group operations plus $\Theta(n)$ field operations and sends just $\Theta(\lg{n})$ group and field elements, while the verifier performs one $n$-base multiexponentiation plus $\Theta(\lg{n})$ additional group operations and sends just $2(\lambda+\lg{n})$ bits to obtain a soundness error less than $2^{-\lambda}$. (A 5-move variant of the protocol reduces prover upload to just $2\lambda+\lg{n}$ bits for the same soundness error.) We have implemented both our new protocol and its closest competitors from the literature; in accordance with our analytic results, experiments confirm that the new protocols handily outperform existing protocols for all but the shortest of vectors (roughly, for vectors with more than 16-32 elements).
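The abstract does not spell out the encoding, but the root-counting fact it alludes to is easy to illustrate in isolation. Below is a minimal Python sketch (plaintext only: no Pedersen commitments, no zero-knowledge, and not the paper's actual protocol) of a randomized one-hotness test that relies on exactly that fact; the prime P, the helper names, and the batching-by-powers-of-r trick are illustrative assumptions rather than details taken from the paper.

    import random

    # Illustrative parameters only: a stand-in prime field Z_P with vector
    # length n < P.  The paper works with Pedersen commitments over a
    # prime-order group; none of that machinery appears in this sketch.
    P = 2**61 - 1

    def is_one_hot_direct(v):
        # Plain definition: exactly one entry equals 1, all others equal 0.
        return all(x in (0, 1) for x in v) and sum(v) == 1

    def one_hot_identity_test(v, p=P):
        # For n < p, v is one-hot iff (a) sum(v) = 1 mod p and
        # (b) v_i * (v_i - 1) = 0 mod p for every i.  Condition (b) is
        # batched into one equation by evaluating the polynomial
        # sum_i c_i * X^i (with c_i = v_i*(v_i-1)) at a random point r.
        # A nonzero polynomial of degree <= n has at most n roots in Z_p,
        # so a vector that is not one-hot slips through with probability
        # at most n/p: the "paucity of roots" observation.
        n = len(v)
        assert n < p
        if sum(v) % p != 1:
            return False
        r = random.randrange(p)
        acc, rpow = 0, 1
        for vi in v:
            rpow = (rpow * r) % p
            acc = (acc + (vi % p) * ((vi - 1) % p) * rpow) % p
        return acc == 0

    if __name__ == "__main__":
        e4 = [0, 0, 0, 1, 0]           # one-hot
        bad = [0, 2, P - 1, 0, 0]      # sums to 1 mod P but is not one-hot
        print(is_one_hot_direct(e4), one_hot_identity_test(e4))    # True True
        print(is_one_hot_direct(bad), one_hot_identity_test(bad))  # False False (w.h.p.)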
Award ID(s):
1718475 1565375
NSF-PAR ID:
10112771
Journal Name:
2019 ACM Workshop on Privacy in the Electronic Society (WPES 2019)
Sponsoring Org:
National Science Foundation
More Like this
  1. Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, and in particular on minimizing proof size. Currently, there is a 1000x gap in proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3x shorter than those of previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42x reduction in proof size and a 60x reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proofs in our lattice-based construction are 131x longer, but both the prover and the verifier are faster (by 1.2x and 2.8x, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) with a linear-only vector encryption scheme. We develop a concretely efficient lattice-based instantiation of this compiler by considering quadratic extension fields of moderate characteristic and using linear-only vector encryption over rank-2 module lattices.
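    The compiler blueprint mentioned above (linear PCP plus linear-only encryption, designated verifier) is easiest to see through its core mechanic: the verifier encrypts its query vector coordinate-wise, and the prover answers by computing an encryption of the inner product with its proof vector using only linear homomorphic operations. The Python sketch below shows that flow with a toy exponential-ElGamal scheme over a tiny safe-prime group; this stand-in is neither post-quantum, nor lattice-based, nor actually "linear-only", and every parameter and name in it is ours rather than the paper's.

        import random

        # Toy parameters chosen for readability only.  The paper uses
        # linear-only vector encryption over rank-2 module lattices; here a
        # small exponential-ElGamal scheme stands in, just to show the
        # designated-verifier query/answer flow of an LPCP-based compiler.
        P = 10007   # safe prime: P = 2*Q + 1
        Q = 5003    # prime order of the subgroup of squares mod P
        G = 4       # generator of that subgroup

        def keygen():
            x = random.randrange(1, Q)          # designated verifier's secret key
            return x, pow(G, x, P)              # (sk, pk)

        def enc(pk, m):
            # Additively homomorphic encryption of m in Z_Q ("message in the exponent").
            r = random.randrange(1, Q)
            return pow(G, r, P), (pow(G, m % Q, P) * pow(pk, r, P)) % P

        def dec(sk, ct):
            # Decrypt by brute-forcing the small discrete log (fine at this toy size).
            c1, c2 = ct
            gm = (c2 * pow(pow(c1, sk, P), P - 2, P)) % P
            acc = 1
            for m in range(Q):
                if acc == gm:
                    return m
                acc = (acc * G) % P
            raise ValueError("decryption failed")

        def linear_answer(cts, pi):
            # Prover side: from Enc(q[0]), ..., Enc(q[n-1]) and its own proof
            # vector pi, compute Enc(<pi, q> mod Q) using only linear
            # (here: exponent-wise) operations on the ciphertexts.
            c1, c2 = 1, 1
            for (a1, a2), w in zip(cts, pi):
                c1 = (c1 * pow(a1, w % Q, P)) % P
                c2 = (c2 * pow(a2, w % Q, P)) % P
            return c1, c2

        if __name__ == "__main__":
            sk, pk = keygen()
            q_vec = [random.randrange(Q) for _ in range(8)]   # verifier's LPCP query
            pi = [random.randrange(Q) for _ in range(8)]      # prover's LPCP proof vector
            answer = dec(sk, linear_answer([enc(pk, qi) for qi in q_vec], pi))
            assert answer == sum(a * b for a, b in zip(pi, q_vec)) % Q
            print("homomorphically evaluated <pi, q> mod Q:", answer)

    In the blueprint this flow instantiates, the decrypted answers feed the linear PCP's decision procedure, and the "linear-only" property of the encryption is what restricts a malicious prover to answering with some fixed linear function of the queries.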
  2. Amir Hashemi (Ed.)
    The proof-of-work interactive protocol by Shafi Goldwasser, Yael T. Kalai and Guy N. Rothblum (GKR) [STOC 2008, JACM 2015] certifies the execution of an algorithm via the evaluation of a corresponding boolean or arithmetic circuit whose structure is known to the verifier through circuit wiring algorithms that define the uniformity of the circuit. Here we study protocols whose prover time- and space-complexities are within a poly-logarithmic factor of the time- and space-complexity of the algorithm; we call those protocols `prover efficient.' We show that the uniformity assumptions can be relaxed from LOGSPACE to polynomial time in the bit-lengths of the labels that enumerate the nodes in the circuit. Our protocol applies GKR recursively to the arising sumcheck problems on each level of the circuit whose values are verified, and deploys any of the prover-efficient versions of GKR on the constructed sorting/prefix circuits with log-depth wiring functions. The verifier time-complexity of GKR grows linearly in the depth of the circuit. For deep circuits, such as the one for the Miller-Rabin integer primality test of an n-bit integer, the large number of rounds may interfere with soundness guarantees after the application of the Fiat-Shamir heuristic. We rearrange the circuit evaluation problem by the baby-steps/giant-steps method to achieve a depth of n^(1/2+o(1)), at a prover cost of n^(2+o(1)) bit operations and with communication and verifier cost n^(3/2+o(1)).
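    Since the construction leans on the sumcheck problems that GKR generates at each circuit layer, a bare-bones version of that subprotocol may help fix ideas. The sketch below is the textbook sumcheck for a multilinear polynomial given by its table of values on the Boolean cube; it is not the prover-efficient variant studied here, and the modulus and function names are illustrative choices of ours.

        import random

        P = 2**61 - 1   # illustrative prime modulus

        def mle_eval(table, point, p=P):
            # Evaluate the multilinear extension of `table` (length 2^m; the
            # most significant index bit is variable 1) at `point` in Z_p^m,
            # by fixing one variable at a time.
            t = [x % p for x in table]
            for r in point:
                half = len(t) // 2
                t = [((1 - r) * t[i] + r * t[half + i]) % p for i in range(half)]
            return t[0]

        def sumcheck(table, p=P):
            # Prove and verify the claim H = sum of the multilinear extension
            # of `table` over {0,1}^m.  Returns True iff the verifier accepts.
            t = [x % p for x in table]
            m = len(t).bit_length() - 1
            assert len(t) == 1 << m
            claim = sum(t) % p                    # prover's claimed sum H
            challenges = []
            for _ in range(m):
                half = len(t) // 2
                s0 = sum(t[:half]) % p            # round polynomial at 0
                s1 = sum(t[half:]) % p            # round polynomial at 1
                if (s0 + s1) % p != claim:        # verifier's consistency check
                    return False
                r = random.randrange(p)           # verifier's random challenge
                challenges.append(r)
                claim = ((1 - r) * s0 + r * s1) % p   # new claim: round poly at r
                t = [((1 - r) * t[i] + r * t[half + i]) % p for i in range(half)]
            # Final check: a single evaluation of the polynomial at the random point.
            return claim == mle_eval(table, challenges, p)

        if __name__ == "__main__":
            table = [random.randrange(P) for _ in range(8)]   # m = 3 variables
            print(sumcheck(table))                            # True for an honest prover

    In GKR the final evaluation is not performed directly but is itself reduced to claims about the next circuit layer; that recursion is where the circuit wiring algorithms, and hence the uniformity assumptions discussed above, come into play.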
  3. Cas Cremers and Engin Kirda (Eds.)
    Vector Oblivious Linear Evaluation (VOLE) supports fast and scalable interactive Zero-Knowledge (ZK) proofs. Despite recent improvements to VOLE-based ZK, compiling proof statements to a control-flow-oblivious form (e.g., a circuit) continues to lead to expensive proofs. One useful setting where this inefficiency stands out is when the statement is a disjunction of clauses $\mathcal{L}_1 \lor \cdots \lor \mathcal{L}_B$. Typically, ZK requires paying the price to handle all $B$ branches. Prior works have shown how to avoid this price in communication, but not in computation. Our main result, $\mathsf{Batchman}$, is asymptotically and concretely efficient VOLE-based ZK for batched disjunctions, i.e., statements containing $R$ repetitions of the same disjunction. This is crucial for, e.g., emulating CPU steps in ZK. Our prover and verifier complexity is only $O(RB + R|\mathcal{C}| + B|\mathcal{C}|)$, where $|\mathcal{C}|$ is the maximum circuit size of the $B$ branches. Prior works' computation scales as $RB|\mathcal{C}|$. For non-batched disjunctions, we also construct a VOLE-based ZK protocol, $\mathsf{Robin}$, which is (only) communication efficient. For small fields and statistical security parameter $\lambda$, this protocol's communication improves over the previous state of the art ($\mathsf{Mac'n'Cheese}$, Baum et al., CRYPTO'21) by up to a factor of $\lambda$. Our implementation outperforms the prior state of the art: e.g., we achieve up to a $6\times$ improvement over $\mathsf{Mac'n'Cheese}$ (Boolean, single disjunction), and for arithmetic batched disjunctions our experiments show we improve over $\mathsf{QuickSilver}$ (Yang et al., CCS'21) by up to $70\times$ and over $\mathsf{AntMan}$ (Weng et al., CCS'22) by up to $36\times$.
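    To make the asymptotic gap concrete, the short calculation below plugs illustrative (made-up) parameter values into the two cost expressions quoted above; only the formulas come from the abstract.

        # Illustrative parameters (our own, not from the paper):
        R = 1_000       # batched repetitions of the disjunction
        B = 64          # number of branches
        C = 100_000     # |C|: maximum circuit size over the B branches

        batched = R * B + R * C + B * C   # Batchman-style cost: O(RB + R|C| + B|C|)
        naive = R * B * C                 # prior works: computation scales as RB|C|

        print(f"batched-disjunction cost ~ {batched:,}")
        print(f"branch-by-branch cost    ~ {naive:,}")
        print(f"ratio                    ~ {naive / batched:.0f}x")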
  4. Interactive proof systems allow a resource-bounded verifier to decide an intractable language (or compute a hard function) by communicating with a powerful but untrusted prover. Such systems guarantee that the prover can only convince the verifier of true statements. In the context of centralized computation, a celebrated result shows that interactive proofs are extremely powerful, allowing polynomial-time verifiers to decide any language in PSPACE. In this work we initiate the study of interactive distributed proofs: a network of nodes interacts with a single untrusted prover, who sees the entire network graph, to decide whether the graph satisfies some property. We focus on the communication cost of the protocol: the number of bits the nodes must exchange with the prover and with each other. Our model can also be viewed as a generalization of the various models of “distributed NP” (proof labeling schemes, etc.), which have received significant attention recently: while these models only allow the prover to present each network node with a string of advice, our model allows for back-and-forth interaction. We prove both upper and lower bounds for the new model. We show that for some problems, interaction can exponentially decrease the communication cost compared to a non-interactive prover, but on the other hand, some problems retain non-trivial cost even with interaction.
  5. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of $n$ integers has no 3-SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

    Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k\ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.

    Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

    Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot\textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot\textrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3}\cdot\textrm{poly}(n)$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3}\cdot\textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017], which took $2^{0.49991n}\cdot\textrm{poly}(n)$ time.
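    None of the protocols above are reproduced here, but the Merlin–Arthur cost measure (proof length plus Arthur's verification time) is easy to make concrete with a classical example: Freivalds' matrix-product check, in which Merlin's proof is the claimed product matrix and Arthur verifies it against a random vector in $O(n^2)$ time. The Python sketch below is only that textbook illustration, not one of this paper's protocols; the field size and names are our choices.

        import random

        P = 2**61 - 1   # illustrative prime field

        def freivalds_verify(A, B, C, p=P, reps=1):
            # Arthur's check that C = A*B over Z_p.  Merlin's "proof" is C itself
            # (length n^2); each repetition costs O(n^2) field operations and a
            # wrong C is caught with probability at least 1 - 1/p per repetition.
            n = len(A)
            for _ in range(reps):
                x = [random.randrange(p) for _ in range(n)]
                Bx = [sum(B[i][j] * x[j] for j in range(n)) % p for i in range(n)]
                ABx = [sum(A[i][j] * Bx[j] for j in range(n)) % p for i in range(n)]
                Cx = [sum(C[i][j] * x[j] for j in range(n)) % p for i in range(n)]
                if ABx != Cx:
                    return False
            return True

        if __name__ == "__main__":
            n = 16
            A = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
            B = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
            C = [[sum(A[i][k] * B[k][j] for k in range(n)) % P for j in range(n)]
                 for i in range(n)]                      # honest Merlin
            print(freivalds_verify(A, B, C))             # True
            C[0][0] = (C[0][0] + 1) % P                  # cheating Merlin
            print(freivalds_verify(A, B, C))             # False (w.h.p.)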

     