A graph spanner is a fundamental graph structure that faithfully preserves the pairwise distances of the input graph up to a small multiplicative stretch. The common objective in spanner computation is to achieve the best-known existential size/stretch tradeoff efficiently. Classical models and algorithmic analyses of graph spanners essentially assume that the algorithm can read the input graph, construct the desired spanner, and write the answer to the output tape. However, for massive graphs containing millions or even billions of nodes, not only the input graph but also the output spanner might be too large for a single processor to store. To tackle this challenge, we initiate the study of local computation algorithms (LCAs) for graph spanners in general graphs, where the algorithm must locally decide whether a given edge (u,v) ∈ E belongs to the output spanner. Such LCAs give the user the 'illusion' that a specific sparse spanner for the graph is maintained, without ever fully computing it. We present the following results: For general n-vertex graphs and r ∈ {2,3}, there exists an LCA for (2r−1)-spanners with Õ(n^{1+1/r}) edges and sublinear probe complexity Õ(n^{1−1/2r}). These size/stretch tradeoffs are best possible (up to polylogarithmic factors). For every k ≥ 1 and …
Optimal Vertex Fault-Tolerant Spanners in Polynomial Time
Recent work has pinned down the existentially optimal size bounds for vertex fault-tolerant spanners: for any positive integer k, every n-node graph has a (2k−1)-spanner on O(f^{1−1/k} n^{1+1/k}) edges that is resilient to f vertex faults, and there are input graphs on which this bound cannot be improved. However, these proofs work by analyzing the output spanner of a certain exponential-time greedy algorithm. In this work, we give the first algorithm that produces vertex fault-tolerant spanners of optimal size and runs in polynomial time. Specifically, we give a randomized algorithm which takes Õ(f^{1−1/k} n^{2+1/k} + mf²) time. We also derandomize our algorithm to give a deterministic algorithm with similar bounds. This is an exponential improvement in runtime over [Bodwin-Patel PODC '19], the only previously known algorithm for constructing optimal vertex fault-tolerant spanners.
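For intuition, the non-fault-tolerant analogue of the greedy algorithm analyzed in the abstract above is the classic greedy (2k−1)-spanner construction (due to Althöfer et al.): scan the edges and keep an edge only if its endpoints are currently more than 2k−1 hops apart in the spanner built so far. A minimal sketch for unweighted graphs is below; the function names are illustrative, not from the paper, and the fault-tolerant variant additionally guards against every fault set of size f, which is what makes the naive version exponential-time.

```python
from collections import deque

def _bounded_dist(adj, s, t, limit):
    """BFS from s truncated at depth `limit`; returns limit+1 if t is
    not reachable within `limit` hops in the current spanner."""
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if x == t:
            return dist[x]
        if dist[x] == limit:
            continue  # do not expand beyond the depth bound
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return limit + 1

def greedy_spanner(n, edges, k):
    """Greedy (2k-1)-spanner of an unweighted n-vertex graph:
    add (u, v) only if u and v are > 2k-1 hops apart in the spanner so far.
    The output has stretch 2k-1 and O(n^{1+1/k}) edges (girth argument)."""
    adj = [[] for _ in range(n)]
    spanner = []
    for u, v in edges:
        if _bounded_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].append(v)
            adj[v].append(u)
            spanner.append((u, v))
    return spanner
```

For example, on the 4-cycle with k = 2, the last cycle edge is skipped because its endpoints are already within 3 hops, so the spanner keeps 3 of the 4 edges.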
Award ID(s): 1909111
Publication Date:
NSF-PAR ID: 10295389
Journal Name: Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA)
Page Range or eLocation-ID: 2924-2938
Sponsoring Org: National Science Foundation
More Like this


We consider the problem of fault-tolerant parallel search on an infinite line by n robots. Starting from the origin, the robots are required to find a target at an unknown location. The robots can move with maximum speed 1 and can communicate wirelessly among themselves. However, among the n robots, there are f robots that exhibit byzantine faults. A faulty robot can fail to report the target even after reaching it, or it can make malicious claims about having found the target when in fact it has not. Given the presence of such faulty robots, the search for the target can only be concluded when the non-faulty robots have sufficient evidence that the target has been found. We aim to design algorithms that minimize the value of S_d(n, f), the time to find a target at an (unknown) distance d from the origin by n robots among which f are faulty. We give several different algorithms whose running time depends on the ratio f/n, the density of faulty robots, and we also prove lower bounds. Our algorithms are optimal for some densities of faulty robots.

Over the past decade, there has been increasing interest in distributed/parallel algorithms for processing large-scale graphs. By now, we have quite fast algorithms, usually sublogarithmic-time and often poly(log log n)-time or even faster, for a number of fundamental graph problems in the massively parallel computation (MPC) model. This model is a widely adopted theoretical abstraction of MapReduce-style settings, where a number of machines communicate in an all-to-all manner to process large-scale data. Contributing to this line of work on MPC graph algorithms, we present poly(log k) · poly(log log n)-round MPC algorithms for computing O(k^{1+o(1)})-spanners in the strongly sublinear regime of local memory. To the best of our knowledge, these are the first sublogarithmic-time MPC algorithms for spanner construction. As primary applications of our spanners, we get two important implications, as follows: For the MPC setting, we get an O(log² log n)-round algorithm for O(log^{1+o(1)} n)-approximation of all-pairs shortest paths (APSP) in the near-linear regime of local memory. To the best of our knowledge, this is the first sublogarithmic-time MPC algorithm for distance approximation. Our result above also extends to the Congested Clique model of distributed computing, with the same round complexity and approximation guarantee. This gives the first sublogarithmic algorithm …

The problem of sparsifying a graph or a hypergraph while approximately preserving its cut structure has been extensively studied and has many applications. In a seminal work, Benczúr and Karger (1996) showed that given any n-vertex undirected weighted graph G and a parameter ε ∈ (0,1), there is a near-linear time algorithm that outputs a weighted subgraph G' of G of size Õ(n/ε²) such that the weight of every cut in G is preserved to within a (1 ± ε) factor in G'. The graph G' is referred to as a (1 ± ε)-approximate cut sparsifier of G. Subsequent recent work has obtained a similar result for the more general problem of hypergraph cut sparsifiers. However, all known sparsification algorithms require Ω(n + m) time, where n denotes the number of vertices and m denotes the number of hyperedges in the hypergraph. Since m can be exponentially large in n, a natural question is whether it is possible to create a hypergraph cut sparsifier in time polynomial in n, independent of the number of edges. We resolve this question in the affirmative, giving the first sublinear-time algorithm for this problem, given appropriate query access to the hypergraph. Specifically, we design …

We study the problem of certification: given queries to a function f : {0,1}^n → {0,1} with certificate complexity ≤ k and an input x⋆, output a size-k certificate for f's value on x⋆. For monotone functions, a classic local search algorithm of Angluin accomplishes this task with n queries, which we show is optimal for local search algorithms. Our main result is a new algorithm for certifying monotone functions with O(k⁸ log n) queries, which comes close to matching the information-theoretic lower bound of Ω(k log n). The design and analysis of our algorithm are based on a new connection to threshold phenomena in monotone functions. We further prove exponential-in-k lower bounds when f is non-monotone, and when f is monotone but the algorithm is only given random examples of f. These lower bounds show that assumptions on the structure of f and query access to it are both necessary for the polynomial dependence on k that we achieve.
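Angluin's n-query local search mentioned above admits a compact sketch for the case f(x⋆) = 1: greedily zero out 1-coordinates of x⋆ while f stays 1. By monotonicity, the surviving 1-coordinates are a minterm and hence a 1-certificate of size at most the certificate complexity. The helper name below is illustrative, and only the 1-input case is shown; the 0-input case is the symmetric pass that flips 0-coordinates to 1.

```python
def certify_monotone(f, x):
    """Angluin-style local search for a monotone f with f(x) == 1.
    One query per coordinate: drop each 1-coordinate if f stays 1,
    restore it otherwise. Returns the indices of a 1-certificate."""
    x = list(x)  # work on a copy; f is queried on candidate inputs
    for i in range(len(x)):
        if x[i] == 1:
            x[i] = 0
            if f(x) != 1:
                x[i] = 1  # coordinate i is necessary; restore it
    return [i for i, b in enumerate(x) if b == 1]
```

For instance, with f(x) = (x₀ AND x₁) OR x₂ and x⋆ = (1,1,1), the search drops the first two coordinates and certifies with {2} alone; on x⋆ = (1,1,0) it must keep {0, 1}.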