
This content will become publicly available on December 6, 2024

Title: Improved Distributed Algorithms for Random Colorings
Markov Chain Monte Carlo (MCMC) algorithms are a widely used algorithmic tool for sampling from high-dimensional distributions; a notable example is the equilibrium distribution of graphical models. The Glauber dynamics, also known as the Gibbs sampler, is the simplest example of an MCMC algorithm: at each step, the chain updates the configuration at a randomly chosen coordinate. Several works have studied distributed versions of the Glauber dynamics, and we extend these efforts to a more general family of Markov chains. An important combinatorial problem in the study of MCMC algorithms is random colorings. Given a graph G of maximum degree Δ and an integer k ≥ Δ+1, the goal is to generate a random proper vertex k-coloring of G. Jerrum (1995) proved that the Glauber dynamics has O(n log n) mixing time when k > 2Δ. Fischer and Ghaffari (2018), and independently Feng, Hayes, and Yin (2018), presented a parallel and distributed version of the Glauber dynamics which converges in O(log n) rounds for k > (2+ε)Δ for any ε > 0. We improve this result to k > (11/6−δ)Δ for a fixed δ > 0, matching the state of the art for randomly sampling colorings of general graphs in the sequential setting. Whereas previous works focused on distributed variants of the Glauber dynamics, our work presents a parallel and distributed version of the more general flip dynamics introduced by Vigoda (2000) (and refined by Chen, Delcourt, Moitra, Perarnau, and Postle (2019)), which recolors local maximal two-colored components in each step.
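The single-site Glauber dynamics described above is simple enough to sketch directly. The following is a minimal illustration of that baseline chain (not the paper's distributed flip dynamics), assuming the graph is given as an adjacency dictionary; all names are illustrative:

```python
import random

def glauber_step(graph, coloring, k, rng=random):
    """One step of the Glauber dynamics for proper k-colorings.

    graph: dict mapping each vertex to a list of its neighbors.
    coloring: dict mapping each vertex to a color in range(k); mutated in place.
    """
    v = rng.choice(list(graph))
    # Colors not used by any neighbor of v are the legal choices.
    blocked = {coloring[u] for u in graph[v]}
    allowed = [c for c in range(k) if c not in blocked]
    coloring[v] = rng.choice(allowed)

def sample_coloring(graph, k, steps, seed=0):
    """Run the chain from a greedy initial coloring; needs k >= max degree + 1."""
    rng = random.Random(seed)
    coloring = {}
    for v in graph:
        used = {coloring[u] for u in graph[v] if u in coloring}
        coloring[v] = next(c for c in range(k) if c not in used)
    for _ in range(steps):
        glauber_step(graph, coloring, k, rng)
    return coloring
```

Because every update resamples a vertex's color from the legal set given its neighbors, the chain preserves properness at every step; the mixing-time bounds above concern how quickly it approaches the uniform distribution.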
Award ID(s):
2147094 2205743
Author(s) / Creator(s):
Bessani, Alysson; Défago, Xavier; Nakamura, Junya; Wada, Koichi; Yamauchi, Yukiko
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Journal Name:
27th International Conference on Principles of Distributed Systems (OPODIS 2023)
Subject(s) / Keyword(s):
Distributed Graph Algorithms Local Algorithms Coloring Glauber Dynamics Sampling Markov Chains Theory of computation → Distributed algorithms Theory of computation → Graph algorithms analysis Theory of computation → Random walks and Markov chains
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. For general spin systems, we prove that a contractive coupling for an arbitrary local Markov chain implies optimal bounds on the mixing time and the modified log-Sobolev constant for a large class of Markov chains including the Glauber dynamics, arbitrary heat-bath block dynamics, and the Swendsen-Wang dynamics. This reveals a novel connection between probabilistic techniques for bounding the convergence to stationarity and analytic tools for analyzing the decay of relative entropy. As a corollary of our general results, we obtain O(n log n) mixing time and Ω(1/n) modified log-Sobolev constant of the Glauber dynamics for sampling random q-colorings of an n-vertex graph with constant maximum degree Δ when q > (11/6−ε₀)Δ for some fixed ε₀ > 0. We also obtain O(log n) mixing time and Ω(1) modified log-Sobolev constant of the Swendsen-Wang dynamics for the ferromagnetic Ising model on an n-vertex graph of constant maximum degree when the parameters of the system lie in the tree uniqueness region. At the heart of our results are new techniques for establishing spectral independence of the spin system and block factorization of the relative entropy. On one hand we prove that a contractive coupling of any local Markov chain implies spectral independence of the Gibbs distribution. On the other hand we show that spectral independence implies factorization of entropy for arbitrary blocks, establishing optimal bounds on the modified log-Sobolev constant of the corresponding block dynamics.
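One of the chains covered above, the Swendsen-Wang dynamics for the ferromagnetic Ising model, admits a compact sketch. The following is one illustrative update, assuming integer vertex labels and an adjacency-dict graph; it is a standard textbook formulation, not code from the paper:

```python
import math
import random

def swendsen_wang_step(graph, spins, beta, rng=random):
    """One Swendsen-Wang update for the ferromagnetic Ising model.

    graph: dict vertex -> list of neighbors (undirected; both directions listed).
    spins: dict vertex -> +1 or -1; mutated in place.
    beta: inverse temperature > 0.
    """
    p = 1.0 - math.exp(-2.0 * beta)  # edge-retention probability
    # Union-find over vertices; join endpoints of retained same-spin edges.
    parent = {v: v for v in graph}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u in graph:
        for v in graph[u]:
            # u < v visits each undirected edge once (assumes comparable labels).
            if u < v and spins[u] == spins[v] and rng.random() < p:
                parent[find(u)] = find(v)
    # Assign every percolation cluster a fresh uniform spin.
    new_spin = {}
    for v in graph:
        r = find(v)
        if r not in new_spin:
            new_spin[r] = rng.choice([-1, 1])
        spins[v] = new_spin[r]
```

Each step is a global (non-local) move: whole clusters flip together, which is why its analysis needs the block-factorization machinery rather than single-site arguments.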
  2. Markov Chain Monte Carlo (MCMC) has been the de facto technique for sampling and inference on large graphs such as online social networks. At the heart of MCMC lies the ability to construct an ergodic Markov chain that attains any given stationary distribution π, often in the form of random walks or crawling agents on the graph. Most of the work around MCMC, however, presumes that the graph is undirected or has reciprocal edges, and becomes inapplicable when the graph is directed and non-reciprocal. Here we develop a similar framework for directed graphs, called Non-Markovian Monte Carlo (NMMC), by establishing a mapping to convert π into the quasi-stationary distribution of a carefully constructed transient Markov chain on an extended state space. As applications, we demonstrate how to achieve any given distribution π on a directed graph and estimate the eigenvector centrality using a set of non-Markovian, history-dependent random walks on the same graph in a distributed manner. We also provide numerical results on various real-world directed graphs to confirm our theoretical findings, and present several practical enhancements to make our NMMC method ready for practical use in most directed graphs. To the best of our knowledge, the proposed NMMC framework for directed graphs is the first of its kind, unlocking all the limitations set by the standard MCMC methods for undirected graphs.
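For contrast with the directed setting, the standard undirected-graph construction alluded to above is a Metropolis-Hastings random walk: propose a uniform neighbor, then correct for the degree asymmetry of the proposal. A minimal sketch with illustrative names and unnormalized target weights π (this is the baseline NMMC generalizes, not NMMC itself):

```python
import random

def metropolis_walk(graph, pi, start, steps, seed=0):
    """Metropolis-Hastings random walk on an undirected graph targeting pi.

    graph: dict vertex -> list of neighbors (must be symmetric/undirected).
    pi: dict vertex -> unnormalized target weight.
    Returns the list of visited vertices (the chain's trajectory).
    """
    rng = random.Random(seed)
    u = start
    path = [u]
    for _ in range(steps):
        v = rng.choice(graph[u])  # propose a uniform neighbor of u
        # Hastings correction: proposal probability is 1/deg at each endpoint.
        accept = min(1.0, (pi[v] * len(graph[u])) / (pi[u] * len(graph[v])))
        if rng.random() < accept:
            u = v
        path.append(u)
    return path
```

The acceptance ratio relies on the reverse move v → u being available with known probability, which is exactly what fails on a non-reciprocal directed graph and motivates the quasi-stationary construction above.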
  3. Let Ω_q = Ω_q(H) denote the set of proper [q]-colorings of the hypergraph H. Let Γ_q be the graph with vertex set Ω_q where two colorings σ, τ are adjacent iff they differ in exactly one vertex. We show that if H = H_{n,m;k}, k ≥ 2, is the random k-uniform hypergraph with V = [n] and m = dn/k hyperedges, then w.h.p. Γ_q is connected if d is sufficiently large and . This is optimal up to the first order in d. Furthermore, with a few more colors, we find that the diameter of Γ_q is O(n) w.h.p., where the hidden constant depends on d. So, with this choice of d, q, the natural Glauber dynamics Markov chain on Ω_q is ergodic w.h.p.

  4. In this paper we consider the problem of sampling from the low-temperature exponential random graph model (ERGM). The usual approach is via Markov chain Monte Carlo, but Bhamidi et al. showed that any local Markov chain suffers from an exponentially large mixing time due to metastable states. We instead consider metastable mixing, a notion of approximate mixing relative to the stationary distribution, for which it suffices to mix only within a collection of metastable states. We show that the Glauber dynamics for the ERGM at any temperature (except at a lower-dimensional critical set of parameters), when initialized at G(n,p) for the right choice of p, has a metastable mixing time of O(n^2 log n) to within total variation distance exp(−Ω(n)).
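A single Glauber update for an ERGM can be sketched concretely in the edge-triangle special case, where toggling one potential edge changes the Hamiltonian by a closed-form amount. This is an illustrative sketch under those assumptions (the sufficient statistics, parameter names, and helpers are mine, not the paper's):

```python
import itertools
import math
import random

def gnp_init(n, p, rng):
    """Erdos-Renyi G(n, p) starting state, stored as vertex -> set of neighbors."""
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def ergm_glauber_step(n, adj, theta_edge, theta_tri, rng=random):
    """One Glauber update for the edge-triangle ERGM:
    P(G) proportional to exp(theta_edge * #edges + theta_tri * #triangles)."""
    u, v = rng.sample(range(n), 2)
    # Toggling {u,v} changes the Hamiltonian by theta_edge, plus theta_tri
    # per triangle it would close, i.e., per common neighbor of u and v.
    delta = theta_edge + theta_tri * len(adj[u] & adj[v])
    p_present = 1.0 / (1.0 + math.exp(-delta))  # conditional edge probability
    if rng.random() < p_present:
        adj[u].add(v)
        adj[v].add(u)
    else:
        adj[u].discard(v)
        adj[v].discard(u)
```

The metastability phenomenon above concerns exactly this kind of chain: the update is correct locally, but at low temperature the chain can take exponentially long to cross between high-probability regions unless one settles for metastable mixing from a good G(n,p) start.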
  5.
    The cumulative pebbling complexity of a directed acyclic graph G is defined as cc(G) = min_P ∑_i |P_i|, where the minimum is taken over all legal (parallel) black pebblings of G and |P_i| denotes the number of pebbles on the graph during round i. Intuitively, cc(G) captures the amortized Space-Time complexity of pebbling m copies of G in parallel. The cumulative pebbling complexity of a graph G is of particular interest in the field of cryptography, as cc(G) is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) f_{G,H} [Joël Alwen and Vladimir Serbinenko, 2015] defined using a constant indegree directed acyclic graph (DAG) G and a random oracle H(⋅). A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find x such that f_{G,H}(x) = h. Thus, to analyze the (in)security of a candidate iMHF f_{G,H}, it is crucial to estimate the value cc(G), but currently upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is NP-Hard to compute cc(G), but their techniques do not even rule out an efficient (1+ε)-approximation algorithm for any constant ε>0. We show that for any constant c > 0, it is Unique Games hard to approximate cc(G) to within a factor of c. Along the way, we show the hardness of approximation of the DAG Vertex Deletion problem on DAGs of constant indegree. Namely, we show that for any k,ε > 0 and given a DAG G with N nodes and constant indegree, it is Unique Games hard to distinguish between the case that G is (e_1, d_1)-reducible with e_1=N^{1/(1+2 ε)}/k and d_1=k N^{2 ε/(1+2 ε)}, and the case that G is (e_2, d_2)-depth-robust with e_2 = (1-ε)k e_1 and d_2= 0.9 N^{(1+ε)/(1+2 ε)}, which may be of independent interest. Our result generalizes a result of Svensson, who proved an analogous result for DAGs with indegree 𝒪(N).
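The definition of cc(G) can be made concrete by checking legality of a given black pebbling and summing the pebbles per round; the hard part, as the hardness result above shows, is the minimum over all pebblings. A small sketch, assuming each round is given as the set of currently pebbled nodes and the DAG as a parent map (names are illustrative):

```python
def is_legal_pebbling(parents, rounds):
    """Black-pebbling legality: a pebble may appear on node v at round i only
    if every parent of v held a pebble at round i-1 (source nodes need no
    support). Removing pebbles is always free."""
    prev = set()
    for conf in rounds:
        for v in conf - prev:  # nodes newly pebbled this round
            if not set(parents.get(v, ())) <= prev:
                return False
        prev = conf
    return True

def cumulative_cost(rounds):
    """cc-style accounting for one pebbling: total pebbles summed over rounds."""
    return sum(len(conf) for conf in rounds)
```

For the path DAG 0 → 1 → 2, for example, the pebbling {0}, {0,1}, {1,2} is legal and has cumulative cost 1 + 2 + 2 = 5; cc(G) then minimizes this quantity over every legal pebbling, which is what the approximation-hardness result concerns.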