We present O(log log n)-round algorithms in the Massively Parallel
Computation (MPC) model, with Õ(n) memory per machine, that
compute a maximal independent set, a (1 + ε)-approximation of
maximum matching, and a (2 + ε)-approximation of minimum vertex
cover, for any n-vertex graph and any constant ε > 0. These improve
the state of the art as follows:
• Our MIS algorithm leads to a simple O(log log Δ)-round
MIS algorithm in the CONGESTED CLIQUE model of distributed
computing, which improves on the Õ(√(log Δ))-round
algorithm of Ghaffari [PODC'17].
• Our O(log log n)-round (1 + ε)-approximate maximum matching
algorithm simplifies or improves on the following prior
work: the O(log² log n)-round (1 + ε)-approximation algorithm
of Czumaj et al. [STOC'18] and the O(log log n)-round (1 + ε)-
approximation algorithm of Assadi et al. [arXiv'17].
• Our O(log log n)-round (2 + ε)-approximate minimum vertex
cover algorithm improves on an O(log log n)-round O(1)-
approximation of Assadi et al. [arXiv'17].
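As a point of reference for the MIS problem these bounds concern, the following is a minimal sequential simulation of the classical Luby-style random-priority rule, not the MPC algorithm of the abstract; the function name `luby_mis` and the toy `graph` are illustrative choices:

```python
import random

def luby_mis(adj):
    """Luby-style MIS (sequential simulation): repeatedly assign random
    priorities; vertices that beat all live neighbors join the MIS, and
    they and their neighbors are removed."""
    alive = set(adj)
    mis = set()
    while alive:
        r = {v: random.random() for v in alive}
        # local minima among live vertices join the MIS this round
        winners = {v for v in alive
                   if all(r[v] < r[u] for u in adj[v] if u in alive)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
ind = luby_mis(graph)
# ind is maximal and independent: no two members are adjacent, and
# every vertex outside ind has a neighbor inside it.
```

Each round eliminates at least the globally minimum-priority live vertex and its neighborhood, so the loop always terminates; the MPC algorithms above achieve their round bounds through much more aggressive per-round progress.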
Approximating Red-Blue Set Cover and Minimum Monotone Satisfying Assignment
We provide new approximation algorithms for the Red-Blue Set Cover and Circuit Minimum Monotone Satisfying Assignment (MMSA) problems. Our algorithm for Red-Blue Set Cover achieves an Õ(m^{1/3})-approximation, improving on the Õ(m^{1/2})-approximation due to Elkin and Peleg (where m is the number of sets). Our approximation algorithm for MMSA_t (for circuits of depth t) gives an Õ(N^{1−δ})-approximation for δ = (1/3)·2^{3−⌈t/2⌉}, where N is the number of gates and variables. No non-trivial approximation algorithms for MMSA_t with t ≥ 4 were previously known.
We complement these results with lower bounds for these problems: for Red-Blue Set Cover, we provide a nearly approximation-preserving reduction from Min k-Union that gives an Ω(m^{1/4−ε}) hardness under the Dense-vs-Random conjecture, while for MMSA we sketch a proof that an SDP relaxation strengthened by Sherali-Adams has an integrality gap of N^{1−ε}, where ε → 0 as the circuit depth t → ∞.
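To make the Red-Blue Set Cover objective concrete: given sets mixing "blue" and "red" elements, one must cover every blue element while touching as few red elements as possible. Below is a brute-force sketch on a toy instance; the function name and example sets are hypothetical, and exhaustive search is feasible only on tiny inputs (the approximation algorithms above exist precisely because the problem is hard in general):

```python
from itertools import combinations

def red_blue_cover(sets, blue):
    """Exhaustive Red-Blue Set Cover: choose a subcollection covering
    every blue element while minimizing the red elements touched
    (reds = any element not in `blue`)."""
    best = None
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            covered = set().union(*combo)
            if blue <= covered:  # all blue elements covered?
                reds = covered - blue
                if best is None or len(reds) < len(best[1]):
                    best = (combo, reds)
    return best

sets = [frozenset({"b1", "r1"}), frozenset({"b2", "r1"}),
        frozenset({"b1", "b2", "r2", "r3"})]
picked, reds = red_blue_cover(sets, blue={"b1", "b2"})
# Picking the first two sets covers b1 and b2 while touching only r1;
# the single big set would touch two reds (r2, r3).
```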
 NSFPAR ID:
 10518037
 Editor(s):
 Megow, Nicole; Smith, Adam
 Publisher / Repository:
 Schloss Dagstuhl – Leibniz-Zentrum für Informatik
 Date Published:
 Journal Name:
 Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)
 Subject(s) / Keyword(s):
 Red-Blue Set Cover Problem; Circuit Minimum Monotone Satisfying Assignment (MMSA) Problem; LP Rounding
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this


We study the classic set cover problem from the perspective of sublinear algorithms. Given access to a collection of m sets over n elements in the query model, we show that sublinear algorithms derived from existing techniques have almost tight query complexities.

On one hand, we first show an adaptation of the streaming algorithm presented in [17] to the sublinear query model that returns an α-approximate cover using Õ(m(n/k)^{1/(α−1)} + nk) queries to the input, where k denotes the value of a minimum set cover. We then complement this upper bound by proving a corresponding lower bound on the required number of queries for lower values of k, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require Ω(nk) queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter k. On the other hand, we show that this bound is not optimal for larger values of the parameter k, as there exists a (1 + ε)-approximation algorithm with Õ(mn/kε²) queries. We show that this bound is essentially tight for sufficiently small constant ε, by establishing a matching lower bound on the query complexity.

Our lower-bound results follow by carefully designing two distributions of instances that are hard to distinguish. In particular, our first lower bound involves a probabilistic construction of a certain set system with a minimum set cover of size αk, with the key property that a small number of "almost uniformly distributed" modifications can reduce the minimum set cover size down to k. Thus, these modifications are not detectable unless a large number of queries are asked. We believe that our probabilistic construction technique might find applications to lower bounds for other combinatorial optimization problems.
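For orientation, here is the classic ln(n)-approximate greedy set cover instrumented with an explicit counter of element-membership probes, to illustrate what "query complexity" measures. This is a sketch only: the sublinear algorithms in the abstract operate in a more restricted query model and never read whole sets, and all names here are illustrative:

```python
def greedy_cover_with_queries(sets, universe):
    """Greedy set cover: repeatedly take the set covering the most
    still-uncovered elements, counting every membership probe."""
    queries = 0
    uncovered = set(universe)
    cover = []
    while uncovered:
        best_i, best_gain = None, -1
        for i, s in enumerate(sets):
            gain = 0
            for e in s:           # each probe of an element counts
                queries += 1
                if e in uncovered:
                    gain += 1
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_gain == 0:
            break                 # remaining elements are uncoverable
        cover.append(best_i)
        uncovered -= sets[best_i]
    return cover, queries

sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]
cover, q = greedy_cover_with_queries(sets, universe={1, 2, 3, 4, 5})
# cover == [0, 2]: two sets suffice; q counts all probes made.
```

Greedy reads every set in every round, so its query cost scales with the total input size; the point of the abstract's results is characterizing when one can do asymptotically better.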

We study the communication cost (or message complexity) of fundamental distributed symmetry-breaking problems, namely coloring and MIS. While significant progress has been made in understanding and improving the running time of such problems, much less is known about their message complexity. In fact, all known algorithms need at least Ω(m) communication for these problems, where m is the number of edges in the graph. We address the following question in this paper: can we solve problems such as coloring and MIS using sublinear, i.e., o(m), communication, and if so, under what conditions?

In a classical result, Awerbuch, Goldreich, Peleg, and Vainish [JACM 1990] showed that fundamental global problems such as broadcast and spanning tree construction require at least Ω(m) messages in the KT1 Congest model (i.e., the Congest model in which nodes have initial knowledge of their neighbors' IDs) when algorithms are restricted to be comparison-based (i.e., algorithms in which node IDs can only be compared). Twenty-five years after this result, King, Kutten, and Thorup [PODC 2015] showed that one can solve the above problems using Õ(n) messages (n is the number of nodes in the graph) in Õ(n) rounds in the KT1 Congest model if non-comparison-based algorithms are permitted. An important implication of this result is that one can use the synchronous nature of the KT1 Congest model, using silence to convey information, and solve any graph problem using non-comparison-based algorithms with Õ(n) messages, but this takes an exponential number of rounds. In the asynchronous model, even this is not possible. In contrast, much less is known about the message complexity of local symmetry-breaking problems such as coloring and MIS. Our paper fills this gap by presenting the following results.
Lower bounds: In the KT1 Congest model, we show that any comparison-based algorithm, even a randomized Monte Carlo algorithm with constant success probability, requires Ω(n²) messages in the worst case to solve either (Δ + 1)-coloring or MIS, regardless of the number of rounds. We also show that Ω(n) is a lower bound on the number of messages for any (Δ + 1)-coloring or MIS algorithm, even non-comparison-based, and even with nodes having initial knowledge of up to a constant radius.

Upper bounds: In the KT1 Congest model, we present the following randomized non-comparison-based algorithms for coloring that, with high probability, use o(m) messages and run in polynomially many rounds. (a) A (Δ + 1)-coloring algorithm that uses Õ(n^{1.5}) messages, while running in Õ(D + √n) rounds, where D is the graph diameter. Our result also implies an asynchronous algorithm for (Δ + 1)-coloring with the same message bound but running in Õ(n) rounds. (b) For any constant ε > 0, a (1 + ε)Δ-coloring algorithm that uses Õ(n/ε²) messages, while running in Õ(n) rounds. If we increase our input knowledge slightly to radius 2, i.e., in the KT2 Congest model, we obtain: (c) a randomized comparison-based MIS algorithm that uses Õ(n^{1.5}) messages, while running in Õ(√n) rounds.

While our lower-bound results can be viewed as counterparts, for local problems, to the classical result of Awerbuch, Goldreich, Peleg, and Vainish [JACM 1990], our algorithms are the first known algorithms for coloring and MIS that take o(m) messages and run in polynomially many rounds.
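The existence of a (Δ + 1)-coloring, which the distributed bounds above take for granted, follows from a simple sequential greedy argument: each vertex has at most Δ neighbors, so among colors 0..Δ one is always free. A minimal sketch (this says nothing about message complexity; names are illustrative):

```python
def greedy_coloring(adj):
    """Sequential greedy (Δ+1)-coloring: every vertex has at most Δ
    neighbors, so a free color in {0, ..., Δ} always exists."""
    delta = max(len(nbrs) for nbrs in adj.values())
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # smallest color not used by an already-colored neighbor
        color[v] = next(c for c in range(delta + 1) if c not in used)
    return color

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
col = greedy_coloring(graph)
# a proper coloring: adjacent vertices receive distinct colors,
# all drawn from {0, ..., Δ}
```

The distributed difficulty is entirely in coordinating these choices with few messages, which is what the bounds above quantify.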

We consider the maximum matching problem in the semi-streaming model formalized by Feigenbaum, Kannan, McGregor, Suri, and Zhang, which is inspired by today's giant graphs. As our main result, we give a two-pass (1/2 + 1/16)-approximation algorithm for triangle-free graphs and a two-pass (1/2 + 1/32)-approximation algorithm for general graphs; these improve the approximation ratios of 1/2 + 1/52 for bipartite graphs and 1/2 + 1/140 for general graphs by Konrad, Magniez, and Mathieu. In three passes, we are able to achieve approximation ratios of 1/2 + 1/10 for triangle-free graphs and 1/2 + 1/19.753 for general graphs. We also give a multi-pass algorithm where we bound the number of passes precisely: a (2/3 − ε)-approximation algorithm that uses 2/(3ε) passes for triangle-free graphs and 4/(3ε) passes for general graphs. Our algorithms are simple and combinatorial, use O(n log n) space, and (can be implemented to) have O(1) update time per edge.

For general graphs, our multi-pass algorithm improves on the best known deterministic algorithms in terms of the number of passes:
* Ahn and Guha give a (2/3 − ε)-approximation algorithm that uses O(log(1/ε)/ε²) passes, whereas our (2/3 − ε)-approximation algorithm uses 4/(3ε) passes;
* they also give a (1 − ε)-approximation algorithm that uses O(log n · poly(1/ε)) passes, where n is the number of vertices of the input graph; although our algorithm is a (2/3 − ε)-approximation, our number of passes does not depend on n.
Earlier multi-pass algorithms either have a large constant inside the big-O notation for the number of passes or the constant cannot be determined due to the involved analysis, so our multi-pass algorithm should use far fewer passes for approximation ratios slightly below 2/3.
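The baseline all these ratios improve on is the one-pass greedy matching, which keeps an edge iff both endpoints are still free and guarantees exactly 1/2 of the optimum. A sketch with an adversarial stream showing why 1/2 is tight (illustrative only; the multi-pass algorithms above are more involved):

```python
def greedy_matching(edge_stream):
    """One-pass greedy matching: take an edge iff both endpoints are
    still unmatched. Classic 1/2-approximation baseline."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching

# Path a-b-c-d streamed middle-edge first: greedy takes (b, c) and is
# stuck at size 1, while the optimum {(a, b), (c, d)} has size 2.
stream = [("b", "c"), ("a", "b"), ("c", "d")]
m = greedy_matching(stream)
# m == [("b", "c")]: exactly half the optimum on this ordering
```

Later passes can repair such mistakes by finding short augmenting paths around the greedy matching, which is the broad intuition behind beating 1/2 with a second pass.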

Bringmann, Karl; Grohe, Martin; Puppis, Gabriele; Svensson, Ola (Eds.) We consider the problem of approximate counting of triangles and longer fixed-length cycles in directed graphs. For triangles, Tětek [ICALP'22] gave an algorithm that returns a (1 ± ε)-approximation in Õ(n^ω/t^{ω−2}) time, where t is the unknown number of triangles in the given n-node graph and ω < 2.372 is the matrix multiplication exponent. We obtain an improved algorithm whose running time is, within polylogarithmic factors, the same as that for multiplying an n × n/t matrix by an n/t × n matrix. We then extend our framework to obtain the first non-trivial (1 ± ε)-approximation algorithms for the number of h-cycles in a graph, for any constant h ≥ 3. Our running time is Õ(MM(n, n/t^{1/(h−2)}, n)), the time to multiply n × n/t^{1/(h−2)} by n/t^{1/(h−2)} × n matrices. Finally, we show that under popular fine-grained hypotheses, this running time is optimal.
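The matrix-multiplication connection these running times exploit starts from an exact identity: in an undirected graph with adjacency matrix A, each triangle contributes six closed walks of length 3, so the triangle count is trace(A³)/6. A self-contained sketch (pure-Python cubic-time multiply for clarity; the abstract's algorithms gain speed by multiplying rectangular matrices instead, and concern the directed variant):

```python
def count_triangles(adj_matrix):
    """Exact undirected triangle count via trace(A^3)/6."""
    n = len(adj_matrix)

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    a3 = matmul(matmul(adj_matrix, adj_matrix), adj_matrix)
    # each triangle is counted once per vertex and per direction: 3 * 2
    return sum(a3[i][i] for i in range(n)) // 6

# the complete graph K4 contains exactly C(4, 3) = 4 triangles
a = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
t = count_triangles(a)
# t == 4
```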