Title: Near-Optimal Derandomization of Medium-Width Branching Programs
We give a deterministic white-box algorithm to estimate the expectation of a read-once branching program of length n and width w in space Õ(log n + √(log n) · log w). In particular, we obtain an almost-optimal space-Õ(log n) derandomization of programs up to width w = 2^{√(log n)}. Previously, the best known space complexity for this problem was O(min{log n · log w, log^{3/2} n + √(log n) · log w}), via the classic algorithms of Savitch (JCSS 1970) and Saks and Zhou (JCSS 1999), which achieve space Õ(log n) only for w = polylog(n). We prove this result by showing that a variant of the Saks-Zhou algorithm developed by Cohen, Doron, and Sberlo (ECCC 2022) still works without executing one of its steps, the so-called random shift step. This allows us to extend their algorithm from computing the n-th power of a single w × w stochastic matrix to multiplying n distinct w × w stochastic matrices with no degradation in space consumption. In the regime where w ≥ n, we also show that our approach can achieve parameters matching those of the original Saks-Zhou algorithm (with no log log factors). Finally, we show that for w ≤ 2^{√(log n)}, an algorithm even simpler than ours and that of Saks and Zhou achieves space O(log^{3/2} n).
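The quantity being estimated here is exactly an iterated product of stochastic transition matrices. The following is a minimal, space-inefficient sketch of that correspondence only; it is not the paper's space-bounded algorithm, and all names (robp_acceptance_probability, layers) are illustrative.

```python
# Minimal sketch (not the paper's algorithm): the acceptance probability of a
# read-once branching program on a uniform input equals a product of n
# row-stochastic w x w layer matrices applied to the start distribution.
import numpy as np

def robp_acceptance_probability(layers, start_state=0, accept_states=(0,)):
    """layers: list of n row-stochastic (w x w) matrices; entry [i, j] is the
    probability of moving from state i in one layer to state j in the next
    when that layer's input bit is uniformly random (the average of the two
    0/1 transition matrices). Returns Pr[program accepts]."""
    w = layers[0].shape[0]
    dist = np.zeros(w)
    dist[start_state] = 1.0          # start in the designated state
    for M in layers:                 # one matrix-vector product per layer,
        dist = dist @ M              # streaming the n layers one at a time
    return sum(dist[s] for s in accept_states)

# Example: a width-2 program whose every layer flips the state with
# probability 1/2; the acceptance probability is 1/2 regardless of length.
flip = np.array([[0.5, 0.5], [0.5, 0.5]])
print(robp_acceptance_probability([flip] * 8))  # ~0.5
```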
Award ID(s):
2310818
PAR ID:
10494250
Author(s) / Creator(s):
Publisher / Repository:
55th Annual ACM Symposium on Theory of Computing, STOC 2023
Date Published:
Journal Name:
55th Annual ACM Symposium on Theory of Computing, STOC 2023
Page Range / eLocation ID:
23-34
Format(s):
Medium: X
Location:
Orlando, Florida
Sponsoring Org:
National Science Foundation
More Like this
  1. Given an undirected weighted graph with n vertices and m edges, we give the first deterministic m^{1+o(1)}-time algorithm for constructing the cactus representation of all global minimum cuts. This improves on the current state-of-the-art n^{2+o(1)}-time deterministic algorithm, which can be obtained by combining ideas implicit in three papers [22, 27, 12]. The best explicitly stated deterministic algorithm has a runtime of Õ(mn) [9, 34]. Using our technique, we can even speed up the fastest randomized algorithm of [23], whose running time is at least Ω(m log^4 n), to O(m log^3 n).
  2. Kiltz, E. (Ed.)
    The classical (parallel) black pebbling game is a useful abstraction which allows us to analyze the resources (space, space-time, cumulative space) necessary to evaluate a function f with a static data-dependency graph G. Of particular interest in the field of cryptography are data-independent memory-hard functions f_{G,H}, which are defined by a directed acyclic graph (DAG) G and a cryptographic hash function H. The pebbling complexity of the graph G characterizes the amortized cost of evaluating f_{G,H} multiple times, as well as the total cost to run a brute-force preimage attack over a fixed domain X, i.e., given y ∈ {0,1}*, find x ∈ X such that f_{G,H}(x) = y. While a classical attacker will need to evaluate the function f_{G,H} at least m = |X| times, a quantum attacker running Grover's algorithm only requires O(√m) black-box calls to a quantum circuit C_{G,H} evaluating the function f_{G,H}. Thus, to analyze the cost of a quantum attack it is crucial to understand the space-time cost (equivalently, width times depth) of the quantum circuit C_{G,H}. We first observe that a legal black pebbling strategy for the graph G does not necessarily imply the existence of a quantum circuit with comparable complexity, in contrast to the classical setting, where any efficient pebbling strategy for G corresponds to an algorithm with comparable complexity for evaluating f_{G,H}. Motivated by this observation, we introduce a new parallel reversible pebbling game which captures additional restrictions imposed by the No-Deletion Theorem in quantum computing. We apply our new reversible pebbling game to analyze the reversible space-time complexity of several important graphs: line graphs, Argon2i-A, Argon2i-B, and DRSample. Specifically, (1) we show that a line graph of size N has reversible space-time complexity at most O(N^{1+2/√(log N)}); (2) we show that any (e, d)-reducible DAG has reversible space-time complexity at most O(Ne + dN·2^d), which in particular implies that the reversible space-time complexities of Argon2i-A and Argon2i-B are at most O(N^2 log log N/√(log N)) and O(N^2/(log N)^{1/3}), respectively; (3) we show that the reversible space-time complexity of DRSample is at most O((N^2 log log N)/log N). We also study the cumulative pebbling cost of reversible pebblings, extending a (non-reversible) pebbling attack of Alwen and Blocki on depth-reducible graphs.
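To make the distinction between the black and reversible pebbling rules in the item above concrete, here is a minimal Python sketch. The legality conditions follow the standard formulations of these games (a pebble may be placed on v only when all predecessors of v are pebbled; the reversible game imposes the same condition on removals); the toy line graph and all names are illustrative and not taken from the paper, which studies a parallel variant.

```python
# Minimal sketch of move legality in black vs. reversible pebbling on a DAG.
def is_legal(preds, moves, reversible):
    """preds: dict v -> list of predecessors; moves: list of ('place'|'remove', v)."""
    pebbled = set()
    for op, v in moves:
        preds_ok = all(p in pebbled for p in preds.get(v, []))
        if op == 'place':
            if not preds_ok:
                return False
            pebbled.add(v)
        else:  # remove
            if reversible and not preds_ok:
                return False       # extra constraint modeling reversibility
            pebbled.discard(v)
    return True

# Line graph 1 -> 2 -> 3 with the classic 2-pebble strategy.
preds = {2: [1], 3: [2]}
moves = [('place', 1), ('place', 2), ('remove', 1), ('place', 3), ('remove', 2)]
print(is_legal(preds, moves, reversible=False))  # True: fine in the black game
print(is_legal(preds, moves, reversible=True))   # False: cannot remove 2 once 1 is gone
```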
  3. We give an n^{O(log log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over {±1}^n. Even in the realizable setting, the previous fastest runtime was n^{O(log n)}, a consequence of a classic algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be "pruned" so that every variable in the resulting tree is influential.
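As a concrete illustration of "influential variable" in the item above: under the uniform distribution, the influence of x_i on f is Pr_x[f(x) ≠ f(x with bit i flipped)]. The brute-force sketch below (over {0,1}^n for simplicity) is purely illustrative and is not the paper's pruning procedure or learning algorithm.

```python
# Illustrative only: compute Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)]
# for a small Boolean function under the uniform distribution.
from itertools import product

def influences(f, n):
    infl = [0.0] * n
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x); y[i] ^= 1
            if f(tuple(y)) != fx:
                infl[i] += 1.0 / 2 ** n
    return infl

# A wasteful tree on (x0, x1): it queries x0, but both branches just return x1,
# so x0 has influence 0 and can be pruned away without changing the function.
wasteful = lambda x: x[1] if x[0] else x[1]
pruned   = lambda x: x[1]
print(influences(wasteful, 2))  # [0.0, 1.0]
print(influences(pruned, 2))    # [0.0, 1.0]
```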
  4. Polynomial approximations for e^{-x} and e^{x} have applications to the design of algorithms for many problems, and our degree bounds show both the power and limitations of these algorithms. We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^{-Θ(1)}. We show that the running time one can achieve depends on the square of the diameter of the point set, B, with a transition at B = Θ(log n) mirroring the corresponding transition in d_{B;δ}(e^{-x}): when B = o(log n), we give the first algorithm running in time n^{1+o(1)}; when B = κ·log n for a small constant κ > 0, we give an algorithm running in time n^{1+O(log log(1/κ)/log(1/κ))}, where the log log(1/κ)/log(1/κ) term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_{B;δ}(e^{-x}); and when B = ω(log n), we show that time n^{2-o(1)} is necessary assuming SETH.
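For context, the naive quadratic-time baseline for the Batch Gaussian Kernel Density Estimation problem in the item above is a single dense kernel-sum computation; the sketch below shows that baseline, which is what the polynomial-approximation approach accelerates when the squared diameter B is small. The names and sizes are illustrative, and this is not the paper's algorithm.

```python
# Naive batch Gaussian KDE baseline: for each query q, output
# (1/n) * sum_i exp(-||x_i - q||^2). Runs in roughly n * m * d time.
import numpy as np

def batch_gaussian_kde(points, queries):
    """points: (n, d) array of samples; queries: (m, d) array; returns (m,) KDE values."""
    # Pairwise squared distances via ||x - q||^2 = ||x||^2 - 2<x, q> + ||q||^2.
    sq = (points ** 2).sum(1)[None, :] - 2 * queries @ points.T + (queries ** 2).sum(1)[:, None]
    return np.exp(-sq).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))   # n = 1000 sample points in d = 10 dimensions
Q = rng.normal(size=(5, 10))      # m = 5 query points
print(batch_gaussian_kde(X, Q))
```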
  5. We present O(log log n)-round algorithms in the Massively Parallel Computation (MPC) model, with Õ(n) memory per machine, that compute a maximal independent set, a (1 + ε)-approximation of maximum matching, and a (2 + ε)-approximation of minimum vertex cover, for any n-vertex graph and any constant ε > 0. These improve the state of the art as follows: • Our MIS algorithm leads to a simple O(log log Δ)-round MIS algorithm in the CONGESTED-CLIQUE model of distributed computing, which improves on the Õ(√(log Δ))-round algorithm of Ghaffari [PODC'17]. • Our O(log log n)-round (1 + ε)-approximate maximum matching algorithm simplifies or improves on the following prior work: the O(log^2 log n)-round (1 + ε)-approximation algorithm of Czumaj et al. [STOC'18] and the O(log log n)-round (1 + ε)-approximation algorithm of Assadi et al. [arXiv'17]. • Our O(log log n)-round (2 + ε)-approximate minimum vertex cover algorithm improves on an O(log log n)-round O(1)-approximation of Assadi et al. [arXiv'17].