For general spin systems, we prove that a contractive coupling for an arbitrary local Markov chain implies optimal bounds on the mixing time and the modified log-Sobolev constant for a large class of Markov chains including the Glauber dynamics, arbitrary heat-bath block dynamics, and the Swendsen-Wang dynamics. This reveals a novel connection between probabilistic techniques for bounding the convergence to stationarity and analytic tools for analyzing the decay of relative entropy. As a corollary of our general results, we obtain O(n log n) mixing time and Ω(1/n) modified log-Sobolev constant of the Glauber dynamics for sampling random q-colorings of an n-vertex graph with constant maximum degree Δ when q > (11/6 − ε0)Δ for some fixed ε0 > 0. We also obtain O(log n) mixing time and Ω(1) modified log-Sobolev constant of the Swendsen-Wang dynamics for the ferromagnetic Ising model on an n-vertex graph of constant maximum degree when the parameters of the system lie in the tree uniqueness region. At the heart of our results are new techniques for establishing spectral independence of the spin system and block factorization of the relative entropy. On one hand, we prove that a contractive coupling of any local Markov chain implies spectral independence of the Gibbs distribution. On the other hand, we show that spectral independence implies factorization of entropy for arbitrary blocks, establishing optimal bounds on the modified log-Sobolev constant of the corresponding block dynamics.
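One step of the Swendsen-Wang dynamics discussed above can be sketched concretely. The following is a minimal illustrative implementation for the ferromagnetic Ising model, not the paper's analysis: the function and variable names are our own, and the edge-keep probability 1 − e^(−2β) assumes the convention E(σ) = −∑_{uv∈E} σ_u σ_v.

```python
import math
import random

def swendsen_wang_step(n, edges, spins, beta, rng):
    """One Swendsen-Wang update for the ferromagnetic Ising model.

    n:     number of vertices, labeled 0..n-1
    edges: list of (u, v) pairs
    spins: list of +1/-1 spins
    beta:  inverse temperature, beta >= 0
    """
    p = 1.0 - math.exp(-2.0 * beta)   # probability of keeping an agreeing edge
    parent = list(range(n))           # union-find forest over vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Percolation step: keep each edge with agreeing endpoints w.p. p.
    for u, v in edges:
        if spins[u] == spins[v] and rng.random() < p:
            parent[find(u)] = find(v)

    # Coloring step: assign a fresh uniform spin to each component.
    comp_spin, new_spins = {}, []
    for v in range(n):
        r = find(v)
        if r not in comp_spin:
            comp_spin[r] = rng.choice((-1, 1))
        new_spins.append(comp_spin[r])
    return new_spins
```

At very large β almost all agreeing edges are kept, so a monochromatic configuration stays in a single component and remains monochromatic after the update.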
Improved Distributed Algorithms for Random Colorings
Markov Chain Monte Carlo (MCMC) algorithms are a widely used algorithmic tool for sampling from high-dimensional distributions; a notable example is the equilibrium distribution of graphical models. The Glauber dynamics, also known as the Gibbs sampler, is the simplest example of an MCMC algorithm: at each step, the chain updates the configuration at a randomly chosen coordinate. Several works have studied distributed versions of the Glauber dynamics, and we extend these efforts to a more general family of Markov chains. An important combinatorial problem in the study of MCMC algorithms is random colorings. Given a graph G of maximum degree Δ and an integer k ≥ Δ+1, the goal is to generate a random proper vertex k-coloring of G.
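The single-site update just described is easy to state concretely. Below is a minimal sketch of one heat-bath Glauber step for proper k-colorings; the function and data-structure names are our own, not from the paper.

```python
import random

def glauber_step(adj, coloring, k, rng):
    """One heat-bath Glauber update for proper k-colorings.

    adj:      adjacency list, vertex -> list of neighbors
    coloring: dict vertex -> color in range(k), assumed proper
    """
    v = rng.choice(sorted(adj))                  # uniform random vertex
    blocked = {coloring[u] for u in adj[v]}      # colors on the neighborhood
    allowed = [c for c in range(k) if c not in blocked]
    coloring[v] = rng.choice(allowed)            # nonempty since k >= deg(v)+1
    return coloring
```

Since a vertex of degree d blocks at most d colors, the allowed set is nonempty whenever k ≥ Δ+1, and each step maps proper colorings to proper colorings.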
Jerrum (1995) proved that the Glauber dynamics has O(n log n) mixing time when k > 2Δ. Fischer and Ghaffari (2018), and independently Feng, Hayes, and Yin (2018), presented a parallel and distributed version of the Glauber dynamics which converges in O(log n) rounds for k > (2+ε)Δ for any ε > 0. We improve this result to k > (11/6 − δ)Δ for a fixed δ > 0. This matches the state of the art for randomly sampling colorings of general graphs in the sequential setting. Whereas previous works focused on distributed variants of the Glauber dynamics, our work presents a parallel and distributed version of the more general flip dynamics presented by Vigoda (2000) (and refined by Chen, Delcourt, Moitra, Perarnau, and Postle (2019)), which recolors local maximal two-colored components in each step.
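A single flip move can be illustrated as follows. This sketch (our own naming) finds the maximal two-colored component containing a vertex v under its current color and an alternate color c2, and swaps the two colors on it; the actual flip dynamics of Vigoda (2000) selects moves and accepts them with probabilities depending on the component size, which we omit here.

```python
from collections import deque

def flip(adj, coloring, v, c2):
    """Swap colors c1 = coloring[v] and c2 on the maximal (c1, c2)-colored
    component containing v. Preserves properness of the coloring."""
    c1 = coloring[v]
    if c1 == c2:
        return coloring
    comp, frontier = {v}, deque([v])
    while frontier:                          # BFS inside the two color classes
        u = frontier.popleft()
        for w in adj[u]:
            if w not in comp and coloring[w] in (c1, c2):
                comp.add(w)
                frontier.append(w)
    for u in comp:                           # flip the Kempe component
        coloring[u] = c2 if coloring[u] == c1 else c1
    return coloring
```

Because the component is maximal, every edge leaving it sees a color outside {c1, c2}, so swapping the two colors inside it cannot create a monochromatic edge.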
NSF-PAR ID: 10502733
Editor(s): Bessani, Alysson; Défago, Xavier; Nakamura, Junya; Wada, Koichi; Yamauchi, Yukiko
Publisher / Repository: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Journal Name: 27th International Conference on Principles of Distributed Systems (OPODIS 2023)
Subject(s) / Keyword(s): Distributed Graph Algorithms; Local Algorithms; Coloring; Glauber Dynamics; Sampling; Markov Chains; Theory of computation → Distributed algorithms; Theory of computation → Graph algorithms analysis; Theory of computation → Random walks and Markov chains
Sponsoring Org: National Science Foundation
More Like this


Markov Chain Monte Carlo (MCMC) has been the de facto technique for sampling and inference on large graphs such as online social networks. At the heart of MCMC lies the ability to construct an ergodic Markov chain that attains any given stationary distribution π, often in the form of random walks or crawling agents on the graph. Most of the works around MCMC, however, presume that the graph is undirected or has reciprocal edges, and become inapplicable when the graph is directed and non-reciprocal. Here we develop a similar framework for directed graphs called Non-Markovian Monte Carlo (NMMC) by establishing a mapping to convert π into the quasi-stationary distribution of a carefully constructed transient Markov chain on an extended state space. As applications, we demonstrate how to achieve any given distribution π on a directed graph and estimate the eigenvector centrality using a set of non-Markovian, history-dependent random walks on the same graph in a distributed manner. We also provide numerical results on various real-world directed graphs to confirm our theoretical findings, and present several practical enhancements to make our NMMC method ready for practical use in most directed graphs. To the best of our knowledge, the proposed NMMC framework for directed graphs is the first of its kind, lifting the limitations set by the standard MCMC methods for undirected graphs.
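For undirected graphs, the standard construction alluded to above is a Metropolis-Hastings random walk. The sketch below (our own toy example, not the NMMC construction) builds the transition matrix that makes the uniform distribution stationary and verifies convergence by powering the matrix.

```python
def metropolis_uniform(adj):
    """Transition matrix of a Metropolis walk with uniform stationary law:
    from v, propose a uniform neighbor u, accept w.p. min(1, deg(v)/deg(u));
    rejected proposals stay at v (lazy self-loop)."""
    n = len(adj)
    P = [[0.0] * n for _ in range(n)]
    for v in range(n):
        for u in adj[v]:
            P[v][u] = min(1.0, len(adj[v]) / len(adj[u])) / len(adj[v])
        P[v][v] = 1.0 - sum(P[v])   # leftover mass from rejections
    return P

def matmul(A, B):
    """Plain dense matrix product for small examples."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

On the 3-vertex path, repeated multiplication drives every row of the matrix power toward the uniform vector, confirming that the walk attains π = uniform despite the unequal degrees.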

Let Ω_q = Ω_q(H) denote the set of proper [q]-colorings of the hypergraph H. Let Γ_q be the graph with vertex set Ω_q in which two colorings σ, τ are adjacent iff they differ in exactly one vertex. We show that if H = H_{n,m;k}, k ≥ 2, is the random k-uniform hypergraph with V = [n] and m = dn/k hyperedges, then w.h.p. Γ_q is connected if d is sufficiently large and q is sufficiently large as a function of d. This is optimal up to the first order in d. Furthermore, with a few more colors, we find that the diameter of Γ_q is O(n) w.h.p., where the hidden constant depends on d. So, with this choice of d, q, the natural Glauber dynamics Markov chain on Ω_q is ergodic w.h.p.
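For tiny instances, Γ_q and its connectivity can be checked by brute force. The sketch below is our own code (not from the paper); "proper" here means no hyperedge is monochromatic. It enumerates Ω_q and runs a BFS over single-vertex recolorings.

```python
from itertools import product
from collections import deque

def is_proper(coloring, hyperedges):
    """A coloring is proper if no hyperedge is monochromatic."""
    return all(len({coloring[v] for v in e}) > 1 for e in hyperedges)

def coloring_graph_connected(n, hyperedges, q):
    """Enumerate Omega_q and BFS-check connectivity of Gamma_q."""
    nodes = [c for c in product(range(q), repeat=n) if is_proper(c, hyperedges)]
    if not nodes:
        return nodes, True
    node_set = set(nodes)
    seen, frontier = {nodes[0]}, deque([nodes[0]])
    while frontier:
        cur = frontier.popleft()
        for v in range(n):                      # try recoloring one vertex
            for col in range(q):
                if col != cur[v]:
                    nxt = cur[:v] + (col,) + cur[v + 1:]
                    if nxt in node_set and nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return nodes, len(seen) == len(nodes)
```

With a single hyperedge {0, 1, 2} and q = 2, the proper colorings are the 2³ − 2 = 6 non-monochromatic ones, and Γ_q is a 6-cycle, hence connected.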
In this paper we consider the problem of sampling from the low-temperature exponential random graph model (ERGM). The usual approach is via Markov chain Monte Carlo, but Bhamidi et al. showed that any local Markov chain suffers from an exponentially large mixing time due to metastable states. We instead consider metastable mixing, a notion of approximate mixing relative to the stationary distribution, for which it suffices to mix only within a collection of metastable states. We show that the Glauber dynamics for the ERGM at any temperature, except on a lower-dimensional critical set of parameters, when initialized at G(n,p) for the right choice of p, has a metastable mixing time of O(n² log n) to within total variation distance exp(−Ω(n)).

The cumulative pebbling complexity of a directed acyclic graph G is defined as cc(G) = min_P ∑_i P_i, where the minimum is taken over all legal (parallel) black pebblings of G and P_i denotes the number of pebbles on the graph during round i. Intuitively, cc(G) captures the amortized Space-Time complexity of pebbling m copies of G in parallel. The cumulative pebbling complexity of a graph G is of particular interest in the field of cryptography, as cc(G) is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) f_{G,H} [Joël Alwen and Vladimir Serbinenko, 2015] defined using a constant-indegree directed acyclic graph (DAG) G and a random oracle H(⋅). A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find x such that f_{G,H}(x) = h. Thus, to analyze the (in)security of a candidate iMHF f_{G,H}, it is crucial to estimate the value cc(G), but currently upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is NP-hard to compute cc(G), but their techniques do not even rule out an efficient (1+ε)-approximation algorithm for any constant ε > 0. We show that for any constant c > 0, it is Unique Games hard to approximate cc(G) to within a factor of c. Along the way, we show the hardness of approximation of the DAG Vertex Deletion problem on DAGs of constant indegree. Namely, we show that for any k, ε > 0 and given a DAG G with N nodes and constant indegree, it is Unique Games hard to distinguish between the case that G is (e_1, d_1)-reducible with e_1 = N^{1/(1+2ε)}/k and d_1 = k N^{2ε/(1+2ε)}, and the case that G is (e_2, d_2)-depth-robust with e_2 = (1−ε)k e_1 and d_2 = 0.9 N^{(1+ε)/(1+2ε)}, which may be of independent interest. Our result generalizes a result of Svensson, who proved an analogous result for DAGs with indegree 𝒪(N).
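The quantity ∑_i P_i can be computed for any concrete pebbling strategy. Below is a small checker with our own naming and conventions: it validates a parallel black pebbling round by round and returns its cumulative cost. cc(G) is the minimum of this cost over all legal pebblings, which this sketch does not search for.

```python
def cumulative_cost(parents, rounds):
    """Cost sum_i |P_i| of a legal parallel black pebbling.

    parents: dict v -> list of predecessors of v in the DAG
    rounds:  list of pebble configurations (sets), one per round
    Rule: a pebble may be placed on v only if all parents of v were
    pebbled at the end of the previous round; removals are free.
    """
    prev, cost = set(), 0
    for conf in rounds:
        for v in conf - prev:                       # newly placed pebbles
            if not set(parents.get(v, ())) <= prev:
                raise ValueError(f"illegal placement of {v}")
        cost += len(conf)
        prev = conf
    return cost
```

For the path DAG 0 → 1 → 2, the pebbling ({0}, {0, 1}, {1, 2}) is legal and has cumulative cost 1 + 2 + 2 = 5.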