Title: Parallel Minimum Cuts in $O(m \log^2 n)$ Work and Low Depth
We present a randomized $O(m \log^2 n)$ work, $O(\operatorname{polylog} n)$ depth parallel algorithm for minimum cut. This algorithm matches the work bound of a recent sequential algorithm by Gawrychowski, Mozes, and Weimann [ICALP'20], and improves on the previously best parallel algorithm by Geissmann and Gianinazzi [SPAA'18], which performs $O(m \log^4 n)$ work in $O(\operatorname{polylog} n)$ depth. Our algorithm makes use of three components that might be of independent interest. First, we design a parallel data structure that efficiently supports batched mixed queries and updates on trees. It generalizes and improves the work bounds of a previous data structure of Geissmann and Gianinazzi and is work-efficient with respect to the best sequential algorithm. Second, we design a parallel algorithm for approximate minimum cut that improves on previous results by Karger and Motwani. We use this algorithm to give a work-efficient procedure to produce a tree packing, as in Karger's sequential algorithm for minimum cuts. Last, we design an efficient parallel algorithm for solving the minimum 2-respecting cut problem.
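
To make the third component concrete: a cut 2-respects a spanning tree $T$ if at most two edges of $T$ cross it, and Karger's tree-packing approach reduces minimum cut to finding, for each of $O(\log n)$ sampled trees, the minimum cut that 2-respects that tree. The following is a minimal sequential brute-force sketch of that subproblem (the function names and the $O(n^2 \cdot m)$ strategy are ours for illustration only; the paper's contribution is solving it work-efficiently in parallel):

```python
import itertools
from collections import defaultdict

def min_2_respecting_cut(n, edges, tree_edges, root=0):
    """Brute-force minimum cut of G that crosses at most two edges of a
    given spanning tree T.  O(n^2 * m) time -- purely illustrative."""
    # Root T to get parent pointers (BFS from the root).
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order = {root: None}, [root]
    for x in order:
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                order.append(y)

    # subtree[v] = vertices hanging below v; children precede parents
    # in reversed BFS order, so one bottom-up pass suffices.
    subtree = {v: {v} for v in range(n)}
    for v in reversed(order):
        if parent[v] is not None:
            subtree[parent[v]] |= subtree[v]

    def cut_weight(S):
        return sum(w for u, v, w in edges if (u in S) != (v in S))

    non_root = [v for v in range(n) if v != root]
    # 1-respecting cuts: one side is a single subtree.
    best = min(cut_weight(subtree[v]) for v in non_root)
    # 2-respecting cuts: each pair of tree edges (identified by their
    # lower endpoints u, v) determines exactly one candidate side S.
    for u, v in itertools.combinations(non_root, 2):
        if v in subtree[u]:
            S = subtree[u] - subtree[v]    # nested: the "middle" piece
        elif u in subtree[v]:
            S = subtree[v] - subtree[u]
        else:
            S = subtree[u] | subtree[v]    # disjoint: both subtrees
        best = min(best, cut_weight(S))
    return best
```

Each non-root vertex stands for the tree edge to its parent, so pairs of non-root vertices enumerate all pairs of tree edges; once the tree packing guarantees that some minimum cut 2-respects $T$, the minimum over these candidates is the minimum cut.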
Award ID(s):
1919223
PAR ID:
10568742
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM Transactions on Parallel Computing
Volume:
10
Issue:
4
ISSN:
2329-4949
Page Range / eLocation ID:
1 to 28
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper we provide an $O(m \log\log^{O(1)} n \log(1/\epsilon))$ expected-time algorithm for solving Laplacian systems on $n$-node, $m$-edge graphs, improving upon the previous best expected runtime of $O(m \sqrt{\log n} \log\log^{O(1)} n \log(1/\epsilon))$ achieved by (Cohen, Kyng, Miller, Pachocki, Peng, Rao, Xu 2014). To obtain this result we provide efficient constructions of low spectral stretch graph approximations with improved stretch and sparsity bounds. As motivation for this work, we show that for every set of vectors in $\mathbb{R}^d$ (not just those induced by graphs) and every integer $k > 1$ there exists an ultrasparsifier with $d - 1 + O(d/k)$ reweighted vectors of relative condition number at most $k^2$. For small $k$, this improves upon the previous best known multiplicative factor of $k \cdot \tilde{O}(\log d)$, which is only known for the graph case. Additionally, in the graph case we employ our low-stretch subgraph construction to obtain $(n - 1 + O(n/k))$-edge ultrasparsifiers of relative condition number $k^{1+o(1)}$ for $k = \omega(\log^\delta n)$ for any $\delta > 0$; this improves upon the previous work for $k = o(\exp(\log^{1/2-\delta} n))$.
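
For readers who want the problem pinned down: a Laplacian system is $Lx = b$ with $L = D - A$ the weighted graph Laplacian; on a connected graph $L$ is singular with kernel spanned by the all-ones vector, so $b$ must sum to zero. A tiny dense sketch of what "solving" means (the $O(n^3)$ pseudoinverse here is exactly the cost that near-linear-time solvers like the one above avoid; the example graph is made up):

```python
import numpy as np

def graph_laplacian(n, weighted_edges):
    """L = D - A for a weighted undirected graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# A weighted triangle; b sums to zero, as solvability requires.
L = graph_laplacian(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.0)])
b = np.array([1.0, 0.0, -1.0])
x = np.linalg.pinv(L) @ b        # O(n^3) -- the cost fast solvers beat
assert np.allclose(L @ x, b)
```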
  2. On hypergraphs with $m$ hyperedges and $n$ vertices, where $p$ denotes the total size of the hyperedges, we provide the following results. First, we give an algorithm that runs in $\widetilde{O}(mn^{2k-2})$ time for finding a minimum $k$-cut in hypergraphs of arbitrary rank; this algorithm betters the previous best running time for the minimum $k$-cut problem for $k > 2$. Second, we give an algorithm that runs in $\widetilde{O}(n^{\max\{r, 2k-2\}})$ time for finding a minimum $k$-cut in hypergraphs of constant rank $r$; this algorithm betters the previous best running times for both the minimum cut and minimum $k$-cut problems for dense hypergraphs. Both of our algorithms are Monte Carlo, i.e., they return a minimum $k$-cut (or minimum cut) with high probability. These algorithms are obtained as instantiations of a generic branching randomized contraction technique on hypergraphs, which extends the celebrated work of Karger and Stein on recursive contractions in graphs. Our techniques and results also extend to the problems of minimum hedge-cut and minimum hedge-$k$-cut on hedgegraphs, which generalize hypergraphs.
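
The "celebrated work of Karger and Stein" is the recursive contraction algorithm on graphs. A compact sketch of that graph baseline (unweighted, connected multigraph; helper names ours), which the paper's branching contractions generalize to hyperedges of arbitrary rank:

```python
import math
import random

def contract_to(vertices, edges, t):
    """Contract uniformly random edges (self-loops skipped) until only
    t super-vertices remain; returns the contracted multigraph."""
    parent = {v: v for v in vertices}
    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    alive, pool = len(vertices), edges[:]
    random.shuffle(pool)                  # shuffle order = uniform picks
    for u, v in pool:
        if alive == t:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            alive -= 1
    new_edges = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    return list({find(v) for v in vertices}), new_edges

def karger_stein(vertices, edges):
    """Candidate min-cut value; correct with probability Omega(1/log n),
    so repeat O(log^2 n) times for high probability."""
    n = len(vertices)
    if n <= 6:                            # brute-force small instances
        best = float('inf')
        for mask in range(1, 1 << (n - 1)):    # fix vertices[-1]'s side
            S = {vertices[i] for i in range(n - 1) if mask >> i & 1}
            best = min(best, sum((u in S) != (v in S) for u, v in edges))
        return best
    t = math.ceil(1 + n / math.sqrt(2))   # contraction survival threshold
    return min(karger_stein(*contract_to(vertices, edges, t))
               for _ in range(2))         # two independent recursive trials
```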
  3. Existing parallel algorithms for wavelet tree construction have a work complexity of $O(n \log \sigma)$. This paper presents parallel algorithms for the problem with improved work complexity. Our first algorithm is based on parallel integer sorting and has either $O(n \log\log n \lceil \log \sigma / \sqrt{\log n \log\log n} \rceil)$ work and polylogarithmic depth, or $O(n \lceil \log \sigma / \sqrt{\log n} \rceil)$ work and sub-linear depth. We also describe another algorithm that has $O(n \lceil \log \sigma / \sqrt{\log n} \rceil)$ work and $O(\sigma + \log n)$ depth. We then show how to use similar ideas to construct variants of wavelet trees (arbitrary-shaped binary trees and multiary trees) as well as wavelet matrices in parallel with lower work complexity than prior algorithms. Finally, we show that the rank and select structures on binary sequences and multiary sequences, which are stored on wavelet tree nodes, can be constructed in parallel with improved work bounds, matching those of the best existing sequential algorithms for constructing rank and select structures.
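
For reference, here is the standard sequential $O(n \log \sigma)$ wavelet tree construction and the rank query it supports, in a plain pointer-based sketch of our own (real implementations back each node with an $O(1)$-time rank/select bitvector, the other structure the paper constructs in parallel):

```python
def build(seq, lo, hi):
    """Wavelet tree over integer alphabet [lo, hi]: each node splits the
    alphabet at mid and records, per position, which half its symbol
    fell into.  Sequential O(n log sigma) work."""
    if lo == hi or not seq:
        return None                       # leaf or empty subsequence
    mid = (lo + hi) // 2
    return {
        'bits':  [0 if c <= mid else 1 for c in seq],
        'left':  build([c for c in seq if c <= mid], lo, mid),
        'right': build([c for c in seq if c > mid], mid + 1, hi),
    }

def rank(node, c, i, lo, hi):
    """Occurrences of symbol c in seq[:i]; O(log sigma) rank steps (the
    bits[:i] popcount is O(1) with a real rank structure)."""
    while lo < hi and i > 0:
        mid = (lo + hi) // 2
        zeros = i - sum(node['bits'][:i])          # rank0(bits, i)
        if c <= mid:
            node, i, hi = node['left'], zeros, mid
        else:
            node, i, lo = node['right'], i - zeros, mid + 1
    return i

seq = [3, 1, 4, 1, 5, 2, 4]
wt = build(seq, 1, 5)
assert rank(wt, 4, 7, 1, 5) == 2          # two 4s in the whole sequence
```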
  4. Megow, Nicole; Smith, Adam (Ed.)
    The celebrated IP = PSPACE Theorem gives an efficient interactive proof for any bounded-space algorithm. In this work we study interactive proofs for nondeterministic bounded-space computations. While Savitch's Theorem shows that nondeterministic bounded-space algorithms can be simulated by deterministic bounded-space algorithms, this simulation has a quadratic overhead. We give interactive protocols for nondeterministic algorithms directly to get faster verifiers. More specifically, for any nondeterministic space-$S$ algorithm, we construct an interactive proof in which the verifier runs in time $\tilde{O}(n + S^2)$. This improves on the best previous bound of $\tilde{O}(n + S^3)$ and matches the result for deterministic space-bounded algorithms, up to $\operatorname{polylog}(S)$ factors. We further generalize to alternating bounded-space algorithms. For any language $L$ decided by a time-$T$, space-$S$ algorithm that uses $d$ alternations, we construct an interactive proof in which the verifier runs in time $\tilde{O}(n + S \log T + S d)$ and the prover runs in time $2^{O(S)}$. For $d = O(\log T)$, this matches the best known interactive proofs for deterministic algorithms, up to $\operatorname{polylog}(S)$ factors, and improves on the previous best verifier time for nondeterministic algorithms by a factor of $\log T$. We also improve the best prior verifier time for unbounded alternations by a factor of $S$. Using known connections of bounded-alternation algorithms to bounded-depth circuits, we also obtain faster verifiers for bounded-depth circuits with unbounded fan-in.
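
Savitch's simulation mentioned above is midpoint recursion on the machine's configuration graph: a space-$S$ machine has $2^{O(S)}$ configurations, so a recursion of depth $O(S)$ with $O(S)$ bits per frame decides reachability, giving the quadratic overhead the interactive proofs avoid. A toy sketch on an explicit graph (the function name and example graph are ours):

```python
def reach(adj, u, v, steps):
    """Is there a path from u to v of at most `steps` edges?  Savitch's
    recursion: guess a midpoint and solve both halves, reusing space.
    Depth is log2(steps); each frame stores O(1) vertex names."""
    if steps <= 1:
        return u == v or v in adj[u]
    half = (steps + 1) // 2
    return any(reach(adj, u, w, half) and reach(adj, w, v, steps - half)
               for w in adj)              # try every possible midpoint

adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}  # a path 0 -> 1 -> 2 -> 3
assert reach(adj, 0, 3, len(adj))         # any simple path uses < n edges
assert not reach(adj, 3, 0, len(adj))
```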
  5. A $(1\pm\epsilon)$-sparsifier of a hypergraph $G(V, E)$ is a (weighted) subgraph that preserves the value of every cut to within a $(1\pm\epsilon)$ factor. It is known that every hypergraph with $n$ vertices admits a $(1\pm\epsilon)$-sparsifier with $\tilde{O}(n/\epsilon^2)$ hyperedges. In this work, we explore the task of building such a sparsifier by using only linear measurements (a linear sketch) over the hyperedges of $G$, and provide nearly-matching upper and lower bounds for this task.

    Specifically, we show that there is a randomized linear sketch of size $\tilde{O}(nr \log(m)/\epsilon^2)$ bits which with high probability contains sufficient information to recover a $(1\pm\epsilon)$ cut-sparsifier with $\tilde{O}(n/\epsilon^2)$ hyperedges for any hypergraph with at most $m$ edges, each of which has arity bounded by $r$. This immediately gives a dynamic streaming algorithm for hypergraph cut sparsification with an identical space complexity, improving on the previous best known bound of $\tilde{O}(nr^2 \log^4(m)/\epsilon^2)$ bits of space (Guha, McGregor, and Tench, PODS 2015). We complement our algorithmic result with a nearly-matching lower bound: for every $\epsilon \in (0,1)$, one needs $\Omega(nr \log(m/n)/\log(n))$ bits to construct a $(1\pm\epsilon)$-sparsifier via linear sketching, showing that our linear sketch achieves an optimal dependence on both $r$ and $\log(m)$.

    The starting point for our improved algorithm is importance sampling of hyperedges based on the new notion of $k$-cut strength introduced in the recent work of Quanrud (SODA 2024). The natural algorithm based on this concept leads to $\log m$ levels of sampling where errors can potentially accumulate, which accounts for the $\operatorname{polylog}(m)$ losses in the sketch size of the natural algorithm. We develop a more intricate analysis of the error accumulation to show that most levels do not contribute to the error and the actual loss is only $\operatorname{polylog}(n)$. Combined with careful preprocessing (and analysis), this lets us remove all extraneous $\log m$ factors from the sketch size, but the quadratic dependence on $r$ remains. That dependence originates from the use of correlated $\ell_0$-samplers to recover a large number of low-strength edges in a hypergraph simultaneously by looking at neighborhoods of individual vertices. In graphs, this leads to the discovery of $\Omega(n)$ edges in a single shot, whereas in hypergraphs it may reveal only $O(n/r)$ new edges, thus requiring $\Omega(r)$ rounds of recovery. To remedy this we introduce a new technique of random fingerprinting of hyperedges, which effectively eliminates the correlations created by large-arity hyperedges and leads to a scheme for recovering low-strength hyperedges with an optimal dependence on $r$. Putting all these ingredients together yields our linear sketching algorithm. Our lower bound is established by a reduction from the universal relation problem in the one-way communication setting.
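
A minimal sketch of the sampling-and-reweighting principle behind all such sparsifiers. To be clear about the hedge: uniform sampling, used here for simplicity, preserves every cut only in expectation; the paper instead importance-samples each hyperedge at a rate governed by its $k$-cut strength to get the $(1\pm\epsilon)$ guarantee, which this toy version does not attempt:

```python
import random

def cut_weight(hyperedges, S):
    """A hyperedge is cut by S iff it has vertices on both sides."""
    return sum(w for e, w in hyperedges
               if any(v in S for v in e) and any(v not in S for v in e))

def uniform_sparsify(hyperedges, p):
    """Keep each hyperedge with probability p and reweight survivors by
    1/p, so E[w/p * kept] = w and every cut's weight is preserved in
    expectation.  Concentration for *all* cuts simultaneously is what
    strength-based sampling rates buy."""
    return [(e, w / p) for e, w in hyperedges if random.random() < p]

# Toy hypergraph on vertices 0..3 with one arity-3 hyperedge.
H = [(frozenset({0, 1, 2}), 1.0), (frozenset({2, 3}), 2.0),
     (frozenset({0, 3}), 1.0)]
print(cut_weight(H, {0, 1}))   # 2.0: the arity-3 edge and {0,3} cross
```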