Title: Mixing time for Markov chain on linear extensions
We provide a general framework for computing the mixing time of a finite Markov chain when its minimal ideal is left zero. Our analysis combines results of Brown and Diaconis with our previous work on stationary distributions of finite Markov chains. We introduce a new Markov chain on the linear extensions of a poset with n vertices, a variant of the promotion Markov chain of Ayyer, Klee, and the last author, and show that it has mixing time O(n log n).
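
The paper's exact variant and proof are not reproduced here, but a small Python sketch can illustrate the objects involved: the linear extensions of a poset and a promotion-style random walk built from the adjacent-swap operators tau_i (one standard way to write generalized promotion is d_j = tau_j tau_{j+1} ... tau_{n-1}). The poset below, the uniform choice of j, and the crude total-variation estimate are illustrative assumptions, not the chain analyzed in the paper.

    # A toy promotion-style chain on linear extensions (illustrative sketch).
    import random
    from itertools import permutations

    # Poset on {0,1,2,3} given by cover relations (a < b).
    covers = [(0, 2), (1, 2), (1, 3)]
    n = 4

    def closure(covers, n):
        """Transitive closure: less[a][b] is True iff a < b in the poset."""
        less = [[False] * n for _ in range(n)]
        for a, b in covers:
            less[a][b] = True
        for k in range(n):
            for a in range(n):
                for b in range(n):
                    if less[a][k] and less[k][b]:
                        less[a][b] = True
        return less

    less = closure(covers, n)

    def is_linear_extension(pi):
        # No later entry may lie below an earlier one.
        return all(not less[pi[j]][pi[i]]
                   for i in range(n) for j in range(i + 1, n))

    extensions = [pi for pi in permutations(range(n)) if is_linear_extension(pi)]

    def tau(pi, i):
        """Swap positions i, i+1 when the entries are incomparable; else fix pi."""
        a, b = pi[i], pi[i + 1]
        if less[a][b] or less[b][a]:
            return pi
        return pi[:i] + (b, a) + pi[i + 2:]

    def promotion_step(pi):
        """One chain step: pick j uniformly, apply tau_j tau_{j+1} ... tau_{n-2}."""
        j = random.randrange(n - 1)
        for i in range(j, n - 1):
            pi = tau(pi, i)
        return pi

    def empirical(t, trials=5000):
        """Empirical distribution over extensions after t steps from a fixed start."""
        counts = {e: 0 for e in extensions}
        for _ in range(trials):
            pi = extensions[0]
            for _ in range(t):
                pi = promotion_step(pi)
            counts[pi] += 1
        return {e: c / trials for e, c in counts.items()}

    pi_hat = empirical(200)          # long-run proxy for the stationary law
    for t in (1, 5, 20):
        d = empirical(t)
        tv = 0.5 * sum(abs(d[e] - pi_hat[e]) for e in extensions)
        print(f"t = {t:3d}   estimated TV distance ~ {tv:.3f}")

On this five-extension poset the estimated distance drops quickly with t, consistent with fast mixing, though a toy simulation of course says nothing about the O(n log n) bound itself.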
Award ID(s):
1764153 1760329
NSF-PAR ID:
10287170
Author(s) / Creator(s):
Date Published:
Journal Name:
Séminaire Lotharingien de Combinatoire
Volume:
85B
ISSN:
1286-4889
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Let $G$ be a finite group, let $H, K$ be subgroups of $G$, and let $H \backslash G / K$ be the double coset space. If $Q$ is a probability on $G$ that is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$), then the random walk driven by $Q$ on $G$ projects to a Markov chain on $H \backslash G / K$. This allows analysis of the lumped chain using the representation theory of $G$. Examples include coagulation-fragmentation processes and natural Markov chains on contingency tables. Our main example projects the random transvections walk on $GL_n(q)$ onto a Markov chain on $S_n$ via the Bruhat decomposition. The chain on $S_n$ has a Mallows stationary distribution and interesting mixing time behavior. The projection illuminates the combinatorics of Gaussian elimination. Along the way, we give a representation of the sum of transvections in the Hecke algebra of double cosets, which describes the Markov chain as a mixture of Metropolis chains. Some extensions and examples of double coset Markov chains with $G$ a compact group are discussed. (A small numerical check of the lumping appears after this list.)
  2. Classical distribution testing assumes access to i.i.d. samples from the distribution being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X0, ..., Xt, ... of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the two chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference; these are tight up to logarithmic factors when the hitting times of the model chain M0 are O(n) in the size n of the state space. (A naive single-trajectory baseline is sketched after this list.)
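
For the double coset projection in item 1, a small numerical check (an illustrative sketch, not taken from the paper) can confirm the lumping on a toy example: G = S_3, H = K generated by a single transposition, and Q uniform on transpositions, which is constant on conjugacy classes. The group, subgroups, and step law are assumptions chosen purely for size.

    # Lumping check: the right walk X -> X*t, t ~ Q, projects to H\G/K.
    from itertools import permutations

    G = list(permutations(range(3)))

    def mul(a, b):
        """Compose permutations: (a*b)(i) = a(b(i))."""
        return tuple(a[b[i]] for i in range(3))

    H = K = [(0, 1, 2), (1, 0, 2)]          # {identity, transposition (0 1)}
    transpositions = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]
    Q = {t: 1 / 3 for t in transpositions}  # conjugacy-invariant step law

    def double_coset(g):
        return frozenset(mul(mul(h, g), k) for h in H for k in K)

    cosets = sorted({double_coset(g) for g in G}, key=min)

    # Lumpability: P(x -> D) must be the same for every x in a coset C.
    for C in cosets:
        for D in cosets:
            probs = {round(sum(q for t, q in Q.items() if mul(x, t) in D), 9)
                     for x in C}
            assert len(probs) == 1   # well defined, so the image is Markov
            print(f"{min(C)} -> {min(D)}: {probs.pop():.3f}")

The assertion encodes the strong lumpability condition: transition probabilities into each double coset depend only on the double coset of the current state, and conjugacy invariance of Q is exactly what makes it hold.

For the single-trajectory testing problem in item 2, the baseline below is a naive plug-in tester (again a sketch; it is not the paper's tester and does not implement its proposed distance): estimate transition frequencies from one trajectory and compare them to the model M0 entrywise. The model chain, trajectory length, and acceptance threshold are assumptions.

    # Naive plug-in identity test from a single trajectory (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_trajectory(P, t):
        """t observed states of the chain P; the start is not under our control."""
        x, path = rng.integers(len(P)), []
        for _ in range(t):
            path.append(x)
            x = rng.choice(len(P), p=P[x])
        return path

    def plug_in_estimate(path, n):
        """Row-normalized transition counts; unvisited rows default to uniform."""
        counts = np.zeros((n, n))
        for a, b in zip(path, path[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows,
                         out=np.full((n, n), 1.0 / n), where=rows > 0)

    M0 = np.array([[0.50, 0.25, 0.25],      # symmetric model chain
                   [0.25, 0.50, 0.25],
                   [0.25, 0.25, 0.50]])
    M_far = np.array([[0.90, 0.05, 0.05],   # a chain far from M0
                      [0.05, 0.90, 0.05],
                      [0.05, 0.05, 0.90]])

    for name, M in [("M0 itself", M0), ("far chain", M_far)]:
        P_hat = plug_in_estimate(sample_trajectory(M, 50000), 3)
        stat = np.abs(P_hat - M0).max()
        print(f"{name}: max entrywise gap {stat:.3f} ->",
              "accept" if stat < 0.05 else "reject")

The paper's actual measure compares the distributions of whole trajectories as their length grows, and its testers come with guarantees this entrywise baseline lacks; the sketch only illustrates the single-trajectory access model.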