This content will become publicly available on June 23, 2026

Title: Stochastic Matching via In-n-Out Local Computation Algorithms
Consider the following stochastic matching problem. We are given a known graph G = (V, E). An unknown subgraph G_p = (V, E_p) is realized, where E_p includes every edge of E independently with some probability p ∈ (0, 1]. The goal is to query a sparse subgraph H of G such that the realized edges in H include an approximate maximum matching of G_p. This problem has been studied extensively over the last decade due to its applications in kidney exchange, online dating, and online labor markets. For any fixed Ρ > 0, [BDH STOC'20] showed that any graph G has a subgraph H with quasipoly(1/p) = (1/p)^{poly(log(1/p))} maximum degree achieving a (1 − Ρ)-approximation. A major open question is the best approximation achievable with poly(1/p)-degree subgraphs. A long line of work has progressively improved the approximation in the poly(1/p)-degree regime from .5 [BDH+ EC'15] to .501 [AKL EC'17], .656 [BHFR SODA'19], .666 [AB SOSA'19], .731 [BBD SODA'22] (bipartite graphs), and most recently to .68 [DS '24]. In this work, we show that a poly(1/p)-degree subgraph can obtain a (1 − Ρ)-approximation for any desirably small fixed Ρ > 0, achieving the best of both worlds. Beyond its quantitative improvement, a key conceptual contribution of our work is to connect local computation algorithms (LCAs) to the stochastic matching problem for the first time. While prior work on LCAs mainly focuses on their out-queries (the number of vertices probed to produce the output of a given vertex), our analysis also bounds the in-queries (the number of vertices that probe a given vertex). We prove that the outputs of LCAs with bounded in- and out-queries (in-n-out LCAs for short) have limited correlation, a property that our analysis crucially relies on and might find applications beyond stochastic matchings.
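For intuition, the realization model described above can be sketched in a few lines of Python. This is an illustrative toy under our own naming (`realize`, `greedy_matching` are not from the paper); the greedy matcher is only the textbook 1/2-approximation, far from the (1 − Ρ) guarantee the paper achieves:

```python
import random

def realize(edges, p, rng):
    """G_p: keep each edge of G independently with probability p."""
    return [e for e in edges if rng.random() < p]

def greedy_matching(edges):
    """Greedy maximal matching: at least half the size of a maximum matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Toy instance: G is a 4-cycle; the queried sparse subgraph H keeps two disjoint edges.
G = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = [(0, 1), (2, 3)]
realized = realize(H, p=0.9, rng=random.Random(0))
approx = greedy_matching(realized)  # matching among the realized edges of H
```

The quantity of interest in the paper is how large `approx` is, in expectation over the realization, relative to the maximum matching of the realization of all of G.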
Award ID(s):
2022448
PAR ID:
10631027
Publisher / Repository:
57th Annual ACM Symposium on Theory of Computing (STOC 2025)
Date Published:
Format(s):
Medium: X
Location:
Prague, Czech Republic
Sponsoring Org:
National Science Foundation
More Like this
  1. Meka, Raghu (Ed.)
    We study the stochastic minimum vertex cover problem for general graphs. In this problem, we are given a graph G = (V, E) and an existence probability p_e for each edge e ∈ E. Edges of G are realized (or exist) independently with these probabilities, forming the realized subgraph H. The existence of an edge in H can only be verified using edge queries. The goal of this problem is to find a near-optimal vertex cover of H using a small number of queries. Previous work by Derakhshan, Durvasula, and Haghtalab [STOC 2023] established the existence of (1.5 + Ρ)-approximation algorithms for this problem with O(n/Ρ) queries. They also show that, under mild correlation among edge realizations, beating this approximation ratio requires querying a subgraph of size Ω(n ⋅ RS(n)). Here, RS(n) refers to Ruzsa-Szemerédi graphs and represents the largest number of induced edge-disjoint matchings of size Θ(n) in an n-vertex graph. In this work, we design a simple algorithm for finding a (1 + Ρ)-approximate vertex cover by querying a subgraph of size O(n ⋅ RS(n)) for any absolute constant Ρ > 0. Our algorithm can tolerate up to O(n ⋅ RS(n)) correlated edges, hence effectively completing our understanding of the problem under mild correlation.
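The query model above can be made concrete with a toy sketch (our own naming; the greedy routine is the classic 2-approximation, shown only to pin down the objective, not the (1 + Ρ) algorithm of this work):

```python
import random

def query_edges(edges, p, rng):
    """Simulate edge queries: each edge e of G exists independently w.p. p[e]."""
    return [e for e in edges if rng.random() < p[e]]

def greedy_vertex_cover(realized_edges):
    """Classic 2-approximation: take both endpoints of any uncovered edge."""
    cover = set()
    for u, v in realized_edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path on 3 vertices; the middle edge always exists.
edges = [(0, 1), (1, 2)]
p = {(0, 1): 0.5, (1, 2): 1.0}
realized = query_edges(edges, p, random.Random(1))
cover = greedy_vertex_cover(realized)
```

The challenge the abstract addresses is achieving a near-optimal cover while querying only a small subgraph, rather than all of E as this toy does.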
  2. We study the stochastic vertex cover problem. In this problem, G = (V, E) is an arbitrary known graph, and G⋆ is an unknown random subgraph of G where each edge e is realized independently with probability p. Edges of G⋆ can only be verified using edge queries. The goal in this problem is to find a minimum vertex cover of G⋆ using a small number of queries. Our main result is an algorithm that returns a vertex cover of G⋆ of size at most (3/2 + Ρ) times the expected size of the minimum vertex cover, using only O(n/(Ρp)) non-adaptive queries. This improves over the best-known 2-approximation algorithm by Behnezhad, Blum, and Derakhshan [SODA'22], who also show that Ω(n/p) queries are necessary to achieve any constant approximation. Our guarantees also extend to instances where edge realizations are not fully independent. We complement this upper bound with a tight 3/2-approximation lower bound for stochastic graphs whose edge realizations exhibit mild correlations.
  3. In this paper, we generalize the recently studied stochastic matching problem to more accurately model a significant medical process, kidney exchange, and several other applications. The stochastic matching problem studied up to now is as follows: given a graph G = (V, E), each edge is included in the realized subgraph of G independently with probability p_e, and the goal is to find a degree-bounded subgraph Q of G whose expected maximum matching approximates the expected maximum matching of G. This model does not account for the possibility of vertex dropouts, which arise in several applications, e.g., in kidney exchange when donors or patients opt out of the exchange process, as well as in online freelancing and online dating when online profiles are found to be fake. Thus, we study a more general model of stochastic matching in which vertices and edges are both realized independently with probabilities p_v and p_e, respectively, which fits important applications more accurately than the previously studied model. We give the first algorithms and analysis for this generalization of the stochastic matching model and prove that they achieve good approximation ratios. In particular, we show that the approximation factor of a natural algorithm for this problem is at least 0.6568 in unweighted graphs, and 1/2 + Ρ in weighted graphs for some constant Ρ > 0. We further improve our result for unweighted graphs to 2/3 using edge degree constrained subgraphs (EDCS).
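The generalized realization process with both vertex and edge dropouts can be sketched as follows (a minimal illustration with our own naming, not the paper's algorithm):

```python
import random

def realize_with_dropouts(vertices, edges, p_v, p_e, rng):
    """Generalized realization: each vertex survives independently w.p. p_v,
    each edge w.p. p_e; an edge is realized only if it survives AND both of
    its endpoints survive (modeling donor/patient or profile dropouts)."""
    alive = {v for v in vertices if rng.random() < p_v}
    realized = [(u, v) for (u, v) in edges
                if u in alive and v in alive and rng.random() < p_e]
    return alive, realized

alive, realized = realize_with_dropouts(
    range(4), [(0, 1), (1, 2), (2, 3)], p_v=0.8, p_e=0.9, rng=random.Random(0))
```

Setting p_v = 1 recovers the edge-only model of the earlier abstracts, which is why this is a strict generalization.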
  4. We present a weighted approach to compute a maximum cardinality matching in an arbitrary bipartite graph. Our main result is a new algorithm that takes as input a weighted bipartite graph G(A ∪ B, E) with edge weights of 0 or 1. Let w ≤ n be an upper bound on the weight of any matching in G. Consider the subgraph induced by all the edges of G with weight 0. Suppose every connected component in this subgraph has O(r) vertices and O(mr/n) edges. We present an algorithm to compute a maximum cardinality matching in G in Õ(m(√w + √r + wr/n)) time. When all the edge weights are 1 (symmetrically, when all weights are 0), our algorithm is identical to the well-known Hopcroft-Karp (HK) algorithm, which runs in O(m√n) time. However, if we can carefully assign weights of 0 and 1 to the edges such that both w and r are sub-linear in n and wr = O(n^γ) for γ < 3/2, then we can compute a maximum cardinality matching in G in o(m√n) time. Using our algorithm, we obtain a new Õ(n^{4/3}/Ρ^4)-time algorithm to compute an Ρ-approximate bottleneck matching of A, B ⊂ R^2, and a (1/Ρ^{O(d)}) n^{1+(d−1)/(2d−1)} polylog n time algorithm for computing an Ρ-approximate bottleneck matching in d dimensions. All previous algorithms take Ω(n^{3/2}) time. Given any graph G(A ∪ B, E) that has an easily computable balanced vertex separator of size |V'|^δ for every subgraph G'(V', E'), for δ ∈ [1/2, 1), we can apply our algorithm to compute a maximum matching in Õ(m n^{δ/(1+δ)}) time, improving upon the O(m√n) time taken by the HK algorithm.
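For readers unfamiliar with augmenting-path matching, here is the simple O(V·E) predecessor of Hopcroft-Karp (Kuhn's algorithm); HK improves it to O(m√n) by finding many shortest augmenting paths per BFS phase. This is background only, not the weighted algorithm of the abstract:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm. adj[u] lists the right-side
    neighbours of left vertex u; returns the maximum matching size."""
    match_r = [-1] * n_right  # match_r[v] = left vertex currently matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere.
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```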
  5. Given a graph and an integer k, Densest k-Subgraph is the algorithmic task of finding the subgraph on k vertices with the maximum number of edges. This is a fundamental problem that has been subject to intense study for decades, with applications spanning a wide variety of fields. The state-of-the-art algorithm is an O(n^{1/4+Ρ})-factor approximation (for any Ρ > 0) due to Bhaskara et al. [STOC '10]. Moreover, the so-called log-density framework predicts that this is optimal, i.e., it is impossible for an efficient algorithm to achieve an O(n^{1/4−Ρ})-factor approximation. In the average case, Densest k-Subgraph is a prototypical noisy inference task which is conjectured to exhibit a statistical-computational gap. In this work, we provide the strongest evidence yet of hardness for Densest k-Subgraph by showing matching lower bounds against the powerful Sum-of-Squares (SoS) algorithm, a meta-algorithm based on convex programming that achieves state-of-the-art algorithmic guarantees for many optimization and inference problems. For k ≤ n^{1/2}, we obtain a degree-n^δ SoS lower bound for the hard regime as predicted by the log-density framework. To show this, we utilize the modern framework for proving SoS lower bounds on average-case problems pioneered by Barak et al. [FOCS '16]. A key issue is that small denser-than-average subgraphs in the input will greatly affect the value of the candidate pseudo-expectation operator around the subgraph. To handle this challenge, we devise a novel matrix factorization scheme based on the positive minimum vertex separator. We then prove an intersection tradeoff lemma to show that the error terms when using this separator are indeed small.
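To pin down the objective that the approximation algorithms above relax, here is an exact brute-force solver (exponential in n, purely illustrative, names are ours):

```python
from itertools import combinations

def densest_k_subgraph_bruteforce(n, edges, k):
    """Exact Densest k-Subgraph by exhaustive search over all C(n, k)
    vertex subsets; returns the best subset and its induced edge count."""
    def edges_inside(subset):
        s = set(subset)
        return sum(u in s and v in s for (u, v) in edges)

    best = max(combinations(range(n), k), key=edges_inside)
    return set(best), edges_inside(best)

# Triangle {0,1,2} plus a pendant edge (2,3): the densest 3-subgraph is the triangle.
subset, count = densest_k_subgraph_bruteforce(4, [(0, 1), (1, 2), (0, 2), (2, 3)], 3)
```

The hardness results discussed in the abstract concern how well polynomial-time algorithms can approximate this count.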