This content will become publicly available on August 11, 2026

Title: On the Spectral Expansion of Monotone Subsets of the Hypercube
We study the spectral gap of subgraphs of the hypercube induced by monotone subsets of vertices. For a monotone subset A ⊆ {0, 1}ⁿ of density μ(A), the previous best lower bound on the spectral gap, due to Cohen [Coh16], was γ ≳ μ(A)/n², improving upon the earlier bound γ ≳ μ(A)²/n² established by Ding and Mossel [DM14]. In this paper, we prove the optimal lower bound γ ≳ μ(A)/n. As a corollary, we improve the mixing time upper bound of the random walk on constant-density monotone sets from O(n³), as shown by Ding and Mossel, to O(n²). Along the way, we develop two new inequalities that may be of independent interest: (1) a directed L²-Poincaré inequality on the hypercube, and (2) an “approximate” FKG inequality for monotone sets.
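For small n, the bound γ ≳ μ(A)/n can be sanity-checked numerically. The sketch below (Python with NumPy) builds a lazy random walk on the subgraph of {0, 1}ⁿ induced by a small monotone set A and compares its spectral gap with μ(A)/n; the 1/(2n)-per-neighbor lazy walk and the particular choice of A are illustrative assumptions and need not match the paper's exact normalization.

# Numerical sketch: spectral gap of a lazy random walk on the subgraph of
# {0,1}^n induced by a monotone set A, compared against mu(A)/n.
import itertools
import numpy as np

n = 4
vertices = list(itertools.product([0, 1], repeat=n))

# An illustrative monotone set: all points of Hamming weight at least 2.
A = [v for v in vertices if sum(v) >= 2]
index = {v: i for i, v in enumerate(A)}
m = len(A)

# Lazy walk on the induced subgraph: from x, pick one of the n coordinates
# uniformly and flip it with probability 1/2 if the result stays in A,
# otherwise stay put. The uniform distribution on A is stationary and P is symmetric.
P = np.zeros((m, m))
for v in A:
    i = index[v]
    for k in range(n):
        w = list(v)
        w[k] ^= 1
        w = tuple(w)
        if w in index:
            P[i, index[w]] = 1.0 / (2 * n)
    P[i, i] = 1.0 - P[i].sum()

eigenvalues = np.sort(np.linalg.eigvalsh(P))[::-1]
gamma = 1.0 - eigenvalues[1]      # spectral gap of the lazy walk
mu = m / 2 ** n                   # density mu(A)
print(f"spectral gap = {gamma:.4f}   mu(A)/n = {mu / n:.4f}")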
Award ID(s):
2022448
PAR ID:
10631078
Author(s) / Creator(s):
Publisher / Repository:
International Conference on Randomization and Computation (RANDOM)
Date Published:
Format(s):
Medium: X
Location:
Berkeley, California
Sponsoring Org:
National Science Foundation
More Like this
  1. Recently, a number of variants of the notion of cut-preserving hypergraph sparsification have been studied in the literature. These variants include directed hypergraph sparsification, submodular hypergraph sparsification, general notions of approximation including spectral approximations, and more general notions like sketching that can answer cut queries using more general data structures than just sparsifiers. In this work, we provide reductions between these different variants of hypergraph sparsification and establish new upper and lower bounds on the space complexity of preserving their cuts. Specifically, we show that:
    1) (1 ± ε) directed hypergraph spectral (respectively cut) sparsification on n vertices efficiently reduces to (1 ± ε) undirected hypergraph spectral (respectively cut) sparsification on n² + 1 vertices. Using the work of Lee and of Jambulapati, Liu, and Sidford (STOC 2023), this gives us directed hypergraph spectral sparsifiers with O(n² log²(n) / ε²) hyperedges, and directed hypergraph cut sparsifiers with O(n² log(n) / ε²) hyperedges by using the work of Chen, Khanna, and Nagda (FOCS 2020), both of which improve upon the work of Oko, Sakaue, and Tanigawa (ICALP 2023).
    2) Any cut sketching scheme which preserves all cuts in any directed hypergraph on n vertices to a (1 ± ε) factor (for ε = 1/(2^{O(√{log(n)})})) must have worst-case bit complexity n^{3 - o(1)}. Because directed hypergraphs are a subclass of submodular hypergraphs, this also shows a worst-case sketching lower bound of n^{3 - o(1)} bits for sketching cuts in general submodular hypergraphs.
    3) (1 ± ε) monotone submodular hypergraph cut sparsification on n vertices efficiently reduces to (1 ± ε) symmetric submodular hypergraph sparsification on n + 1 vertices. Using the work of Jambulapati et al. (FOCS 2023), this gives us monotone submodular hypergraph sparsifiers with Õ(n / ε²) hyperedges, improving on the O(n³ / ε²) hyperedge bound of Kenneth and Krauthgamer (arXiv 2023).
    At a high level, our results use the same general principle, namely, by showing that cuts in one class of hypergraphs can be simulated by cuts in a simpler class of hypergraphs, we can leverage sparsification results for the simpler class of hypergraphs.
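To make the objects in the abstract above concrete, the short sketch below evaluates the cut function of a toy directed hypergraph by brute force, assuming the common convention that a directed hyperedge (T, H) is cut by a vertex set S when some tail vertex lies in S and some head vertex lies outside S; the specific hypergraph and weights are made up for illustration.

# Brute-force cut values of a tiny weighted directed hypergraph.
# Each hyperedge is (tail set T, head set H, weight w).
from itertools import chain, combinations

V = [0, 1, 2, 3]
edges = [({0}, {1, 2}, 2.0), ({1, 3}, {0}, 1.0), ({2}, {3}, 3.0)]

def cut_value(S):
    """Total weight of hyperedges with a tail vertex in S and a head vertex outside S."""
    S = set(S)
    return sum(w for T, H, w in edges if (T & S) and (H - S))

def all_subsets(vs):
    return chain.from_iterable(combinations(vs, r) for r in range(len(vs) + 1))

# A (1 ± ε) cut sparsifier must approximate every one of these 2^n values.
for S in all_subsets(V):
    print(sorted(S), cut_value(S))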
  2. Tauman Kalai, Yael (Ed.)
    We show improved monotonicity testers for the Boolean hypercube under the p-biased measure, as well as over the hypergrid [m]ⁿ. Our results are:
    1) For any p ∈ (0,1), for the p-biased hypercube we show a non-adaptive tester that makes Õ(√n/ε²) queries, accepts monotone functions with probability 1, and rejects functions that are ε-far from monotone with probability at least 2/3.
    2) For all m ∈ ℕ, we show an Õ(√nm³/ε²) query monotonicity tester over [m]ⁿ.
    We also establish corresponding directed isoperimetric inequalities in these domains, analogous to the isoperimetric inequality in [Subhash Khot et al., 2018]. Previously, the best known tester, due to Black, Chakrabarty, and Seshadhri [Hadley Black et al., 2018], had Ω(n^{5/6}) query complexity. Our results are optimal up to poly-logarithmic factors and the dependency on m. Our proof uses a notion of monotone embeddings of measures into the Boolean hypercube that can be used to reduce the problem of monotonicity testing over an arbitrary product domain to the Boolean cube. The embedding maps a function over a product domain of dimension n into a function over a Boolean cube of a larger dimension n', while preserving its distance from being monotone; an embedding is considered efficient if n' is not much larger than n, and we show how to construct efficient embeddings in the above-mentioned settings.
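For context on the query model in the abstract above, here is a sketch of the classical uniform-measure edge tester (not the Õ(√n/ε²) p-biased tester described there): it samples random hypercube edges non-adaptively, always accepts monotone functions, and rejects when it sees a violated pair. The sampling constant and the example functions are illustrative choices.

import random

def edge_tester(f, n, eps, trials=None):
    """Classical non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}
    under the uniform measure (NOT the p-biased tester from the abstract above)."""
    # Number of sampled hypercube edges; the constant 10 is an illustrative choice.
    trials = trials if trials is not None else int(10 * n / eps)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):    # a violated edge certifies non-monotonicity
            return False                   # reject
    return True                            # accept (monotone f always passes)

n = 10
majority = lambda x: int(sum(x) > n // 2)   # monotone
parity = lambda x: sum(x) % 2               # far from monotone
print(edge_tester(majority, n, eps=0.1), edge_tester(parity, n, eps=0.1))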
  3. We study local symmetry breaking problems in the Congest model, focusing on ruling set problems, which generalize the fundamental Maximal Independent Set (MIS) problem. The time (round) complexity of MIS (and ruling sets) has attracted much attention in the Local model. Indeed, recent results (Barenboim et al., FOCS 2012; Ghaffari, SODA 2016) for the MIS problem have tried to break the long-standing O(log n)-round “barrier” achieved by Luby’s algorithm, but these yield o(log n)-round complexity only when the maximum degree Δ is somewhat small relative to n. More importantly, these results apply only in the Local model. In fact, the best known time bound in the Congest model is still O(log n) (via Luby’s algorithm) even for moderately small Δ (i.e., for Δ = Ω(log n) and Δ = o(n)). Furthermore, message complexity has been largely ignored in the context of local symmetry breaking. Luby’s algorithm takes O(m) messages on m-edge graphs, and this is the best known bound with respect to messages. Our work is motivated by the following central question: can we break the Θ(log n) time complexity barrier and the Θ(m) message complexity barrier in the Congest model for MIS or closely related symmetry breaking problems? This paper presents progress towards this question for the distributed ruling set problem in the Congest model. A β-ruling set is an independent set such that every node in the graph is at most β hops from a node in the independent set. We present the following results:
    Time Complexity: We show that we can break the O(log n) “barrier” for 2- and 3-ruling sets. We compute 3-ruling sets in O(log n / log log n) rounds with high probability (whp). More generally, we show that 2-ruling sets can be computed in O(log Δ · (log n)^{1/2+ε} + log n / log log n) rounds for any ε > 0, which is o(log n) for a wide range of Δ values (e.g., Δ = 2^{(log n)^{1/2−ε}}). These are the first 2- and 3-ruling set algorithms to improve over the O(log n)-round complexity of Luby’s algorithm in the Congest model.
    Message Complexity: We show an Ω(n²) lower bound on the message complexity of computing an MIS (i.e., a 1-ruling set), which holds also for randomized algorithms, and present a contrast to this by showing a randomized algorithm for 2-ruling sets that, whp, uses only O(n log² n) messages and runs in O(Δ log n) rounds. This is the first message-efficient algorithm known for ruling sets, which has message complexity nearly linear in n (optimal up to a polylogarithmic factor).
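The baseline that the abstract above repeatedly compares against is Luby’s algorithm. Below is a minimal centralized simulation of one standard random-priority variant of Luby’s MIS algorithm, included only to make the O(log n)-round, O(m)-message baseline concrete; the example graph and the per-round priority scheme are illustrative simplifications, and the paper's ruling-set algorithms themselves are more involved.

# Centralized simulation of a standard random-priority variant of Luby's MIS
# algorithm: each round, active nodes draw random priorities, local minima
# join the MIS, and winners plus their neighbors drop out.
import random

def luby_mis(adj):
    """adj: dict node -> set of neighbors. Returns (MIS, number of simulated rounds)."""
    active = set(adj)
    mis, rounds = set(), 0
    while active:
        rounds += 1
        r = {v: random.random() for v in active}
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active
        active -= removed
    return mis, rounds

# Example: a random graph on 100 nodes with edge probability 0.05.
n, p = 100, 0.05
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

mis, rounds = luby_mis(adj)
print(f"MIS size {len(mis)} found in {rounds} simulated rounds")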