

Title: On Black-Box Verifiable Outsourcing
We study verifiable outsourcing of computation in a model where the verifier has black-box access to the function being computed. We introduce the problem of oracle-aided batch verification of computation (OBVC) for a function class $$\mathcal{F}$$. This allows a verifier to efficiently verify the correctness of any $$f \in \mathcal{F}$$ evaluated on a batch of $$n$$ instances $$x_1, \ldots, x_n$$, while only making $$\lambda$$ calls to an oracle for $$f$$ (along with $$O(n \lambda)$$ calls to low-complexity helper oracles), for security parameter $$\lambda$$. We obtain the following positive and negative results:
- We build OBVC protocols for the class of all functions that admit random self-reductions. Some of our protocols rely on homomorphic encryption schemes.
- We show that there cannot exist OBVC schemes for the class of all functions mapping $$\lambda$$-bit inputs to $$\lambda$$-bit outputs, for any $$n = \mathsf{poly}(\lambda)$$.
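The positive results hinge on random self-reducibility: a function is random-self-reducible if its value at any worst-case point can be recovered from its values at uniformly random correlated points, which lets a verifier confine oracle queries to random instances. A minimal sketch of this primitive, assuming a toy linear function $$f(x) = a \cdot x \bmod p$$ (the modulus, coefficient, and function names here are illustrative, not from the paper):

```python
import random

P = 2_147_483_647  # a prime modulus (illustrative parameter)
A = 123_456_789    # coefficient of the outsourced linear function

def f_oracle(x):
    """Oracle for the linear function f(x) = A*x mod P."""
    return (A * x) % P

def f_via_self_reduction(x):
    """Recover f(x) from f evaluated at two uniformly random points,
    using the random self-reduction f(x) = f(x + r) - f(r) mod P.
    Each query point (x + r) mod P and r is uniform in Z_P, so the
    oracle only ever sees random instances."""
    r = random.randrange(P)
    return (f_oracle((x + r) % P) - f_oracle(r)) % P

assert all(f_via_self_reduction(x) == f_oracle(x) for x in [0, 1, 42, P - 1])
```

The full OBVC protocols batch such reductions so that only $$\lambda$$ oracle calls cover all $$n$$ instances; the sketch above shows only the single-instance self-reduction step.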
Award ID(s):
2238718
PAR ID:
10489542
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
TCC 2023
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Amir Hashemi (Ed.)
    The proof-of-work interactive protocol by Shafi Goldwasser, Yael T. Kalai and Guy N. Rothblum (GKR) [STOC 2008, JACM 2015] certifies the execution of an algorithm via the evaluation of a corresponding boolean or arithmetic circuit whose structure is known to the verifier through circuit wiring algorithms that define the uniformity of the circuit. Here we study protocols whose prover time- and space-complexities are within a poly-logarithmic factor of the time- and space-complexity of the algorithm; we call such protocols `prover efficient'. We show that the uniformity assumptions can be relaxed from LOGSPACE to polynomial-time in the bit-lengths of the labels which enumerate the nodes in the circuit. Our protocol applies GKR recursively to the arising sumcheck problems on each level of the circuit whose values are verified, and deploys any of the prover efficient versions of GKR on the constructed sorting/prefix circuits with log-depth wiring functions. The verifier time-complexity of GKR grows linearly in the depth of the circuit. For deep circuits such as the Miller-Rabin integer primality test of an n-bit integer, the large number of rounds may interfere with soundness guarantees after the application of the Fiat-Shamir heuristic. We re-arrange the circuit evaluation problem by the baby-steps/giant-steps method to achieve a depth of n^(1/2+o(1)), at prover cost n^(2+o(1)) bit complexity and communication, and verifier cost n^(3/2+o(1)).
  2. Abstract: Consider averages along the prime integers $$\mathbb{P}$$ given by $$\mathcal{A}_N f(x) = N^{-1} \sum_{p \in \mathbb{P} : p \le N} (\log p)\, f(x - p).$$ These averages satisfy a uniform scale-free $$\ell^p$$-improving estimate: for all $$1 < p < 2$$, there is a constant $$C_p$$ so that for all integers $$N$$ and functions $$f$$ supported on $$[0, N]$$, there holds $$N^{-1/p'} \left\| \mathcal{A}_N f \right\|_{\ell^{p'}} \le C_p N^{-1/p} \left\| f \right\|_{\ell^p}.$$ The maximal function $$\mathcal{A}^* f = \sup_N |\mathcal{A}_N f|$$ satisfies $$(p, p)$$ sparse bounds for all $$1 < p < 2$$; these are the natural variants of the scale-free bounds. As a corollary, $$\mathcal{A}^*$$ is bounded on $$\ell^p(w)$$ for all weights $$w$$ in the Muckenhoupt $$A_p$$ class. No prior weighted inequalities for $$\mathcal{A}^*$$ were known.
  3. The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted NS_{delta}[f]. This quantity is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores the following two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all the functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]). Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^{C} for constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O((min(1,sqrt{n} delta log^{1.5} n))/(NS_{delta}[f]) poly (1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon).
Given the same constraints on f and delta, we also prove a lower bound of Omega((min(1,sqrt{n} delta))/(NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n. We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias. 
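The "standard sampling approach" that the result above shows to be essentially optimal for general Boolean functions is plain Monte Carlo estimation of Pr[f(x) != f(z)]. A minimal sketch; the test function `majority`, the input length, and the sample count are illustrative choices, not from the paper:

```python
import random

def estimate_noise_sensitivity(f, n, delta, samples=20000):
    """Monte Carlo estimate of NS_delta[f] = Pr[f(x) != f(z)]:
    draw x uniformly from {0,1}^n, obtain z by flipping each bit of x
    independently with probability delta, and count disagreements."""
    disagree = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        z = [b ^ (random.random() < delta) for b in x]
        disagree += f(x) != f(z)
    return disagree / samples

def majority(bits):
    """A simple monotone Boolean function for demonstration."""
    return 2 * sum(bits) > len(bits)

ns = estimate_noise_sensitivity(majority, n=11, delta=0.1)
```

By a standard Chernoff bound, Theta(1/(NS_delta[f] * epsilon^2)) samples suffice for a (1 +/- epsilon) multiplicative estimate, which is exactly the query cost the monotone-function algorithm improves on.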
  4. In general, the generator matrix sparsity is a critical factor in determining the encoding complexity of a linear code. Further, certain applications, e.g., distributed crowdsourcing schemes utilizing linear codes, require most or even all the columns of the generator matrix to have some degree of sparsity. In this paper, we leverage polar codes and the well-established channel polarization to design capacity-achieving codes with a certain constraint on the weights of all the columns in the generator matrix (GM) while having a low-complexity decoding algorithm. We first show that given a binary-input memoryless symmetric (BMS) channel $$W$$ and a constant $$s \in (0, 1]$$ , there exists a polarization kernel such that the corresponding polar code is capacity-achieving with the rate of polarization $$s/2$$ , and the GM column weights being bounded from above by $$N^{s}$$ . To improve the sparsity versus error rate trade-off, we devise a column-splitting algorithm and two coding schemes for BEC and then for general BMS channels. The polar-based codes generated by the two schemes inherit several fundamental properties of polar codes with the original $$2 \times 2$$ kernel, including the decay in error probability, decoding complexity, and the capacity-achieving property. Furthermore, they demonstrate the additional property that their GM column weights are bounded from above sublinearly in $$N$$ , while the original polar codes have some column weights that are linear in $$N$$ . In particular, for any BEC and $$\beta < 0.5$$ , the existence of a sequence of capacity-achieving polar-based codes where all the GM column weights are bounded from above by $$N^{\lambda} $$ with $$\lambda \approx 0.585$$ , and with the error probability bounded by $${\mathcal {O}}(2^{-N^{\beta }})$$ under a decoder with complexity $${\mathcal {O}}(N\log N)$$ , is shown.
The existence of similar capacity-achieving polar-based codes with the same decoding complexity is shown for any BMS channel and $$\beta < 0.5$$ with $$\lambda \approx 0.631$$ . 
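The claim that the original $$2 \times 2$$ kernel produces some GM column weights linear in $$N$$ can be checked directly: the column weights of $$F^{\otimes n}$$ are products of per-level column weights of $$F$$, so the all-ones column has weight exactly $$N = 2^n$$. A small pure-Python sketch (the helper `kron` and the choice $$N = 32$$ are illustrative):

```python
def kron(A, B):
    """Kronecker product of two 0/1 matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

F = [[1, 0], [1, 1]]  # the standard 2x2 polar kernel

G = F
for _ in range(4):  # build F^{\otimes 5}, so N = 32
    G = kron(G, F)

col_weights = [sum(row[j] for row in G) for j in range(len(G))]
# Column j has weight 2^(number of zero bits in the n-bit expansion of j);
# in particular, column 0 has weight N, i.e., linear in N, while
# column N-1 has weight 1.
```

The column-splitting schemes in the paper trade this linear-weight column for weights bounded by $$N^{\lambda}$$ with $$\lambda < 1$$.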
  5. We present a new 4-move special honest-verifier zero-knowledge proof of knowledge system for proving that a vector of Pedersen commitments opens to a so-called ``one-hot'' vector (i.e., to a vector from the standard orthonormal basis) from $$\mathbb{Z}_p^n$$. The need for such proofs arises in the contexts of symmetric private information retrieval (SPIR), end-to-end verifiable voting (E2E), and privacy-preserving data aggregation and analytics, among others. The key insight underlying the new protocol is a simple observation regarding the paucity of roots of polynomials of bounded degree over a finite field. The new protocol is fast and yields succinct proofs: For vectors of length $$n$$, the prover evaluates $$\Theta(\lg{n})$$ group operations plus $$\Theta(n)$$ field operations and sends just $$\Theta(\lg{n})$$ group and field elements, while the verifier evaluates one $$n$$-base multiexponentiation plus $$\Theta(\lg{n})$$ additional group operations and sends just $$2(\lambda+\lg{n})$$ bits to obtain a soundness error less than $$2^{-\lambda}$$. (A 5-move variant of the protocol reduces prover upload to just $$2\lambda+\lg{n}$$ bits for the same soundness error.) We have implemented both our new protocol and its closest competitors from the literature; in accordance with our analytic results, experiments confirm that the new protocols handily outperform existing protocols for all but the shortest of vectors (roughly, for vectors with more than 16-32 elements). 
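The "paucity of roots" observation can be illustrated outside the commitment setting. A vector $$e \in \mathbb{Z}_p^n$$ is one-hot iff $$\sum_i e_i = 1$$ and $$e_i(e_i - 1) = 0$$ for every $$i$$; batching the per-coordinate constraints with random weights yields a probabilistic test that a bad vector passes with probability at most $$1/p$$. This is a plaintext, non-zero-knowledge sketch of the underlying algebraic idea only; the actual protocol works over Pedersen commitments, and the modulus and function name here are illustrative:

```python
import random

P = 2_147_483_647  # prime field modulus (illustrative choice)

def one_hot_test(e, trials=1):
    """Probabilistic plaintext check that e is a one-hot vector over Z_P.
    e is one-hot iff sum(e) == 1 mod P and e_i*(e_i - 1) == 0 for every i.
    The per-coordinate constraints are batched with uniformly random
    weights r_i; if some e_i*(e_i - 1) != 0, the batched sum is a nonzero
    linear polynomial in r_i and vanishes with probability at most 1/P."""
    if sum(e) % P != 1:
        return False
    for _ in range(trials):
        r = [random.randrange(P) for _ in e]
        if sum(ri * ei * (ei - 1) for ri, ei in zip(r, e)) % P != 0:
            return False
    return True

assert one_hot_test([0, 0, 1, 0])
assert not one_hot_test([0, 2, P - 1, 0])  # sums to 1 mod P, but not one-hot
```

The protocol in the paper achieves the same kind of soundness over committed values, with logarithmic communication, by exploiting the bounded number of roots of the corresponding low-degree polynomials.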