Title: Normal distributions of finite Markov chains
We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.
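For a concrete point of reference on the object studied in the abstract (and not the semaphore-code construction it describes), the stationary distribution of a finite Markov chain can be computed numerically as the left eigenvector of the transition matrix for eigenvalue 1. The sketch below does this with NumPy on a made-up 3-state transition matrix; all numerical values are illustrative assumptions.

import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1); values are made up.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.7, 0.2],
    [0.2, 0.2, 0.6],
])

# The stationary distribution pi satisfies pi P = pi with sum(pi) = 1,
# i.e. pi is a left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

print("stationary distribution:", pi)
print("pi P == pi:", np.allclose(pi @ P, pi))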
Award ID(s):
1764153, 1760329
PAR ID:
10148320
Author(s) / Creator(s):
;
Date Published:
Journal Name:
International Journal of Algebra and Computation
Volume:
29
Issue:
08
ISSN:
0218-1967
Page Range / eLocation ID:
1431 to 1449
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We provide a general framework for computing mixing times of finite Markov chains when the minimal ideal of the underlying semigroup is left zero. Our analysis is based on combining results by Brown and Diaconis with our previous work on stationary distributions of finite Markov chains. We introduce a new Markov chain on linear extensions of a poset with n vertices, which is a variant of the promotion Markov chain of Ayyer, Klee and the last author, and show that it has mixing time O(n log n).
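For context on the mixing-time notion used in the preceding abstract, the sketch below (a generic numerical illustration, not the semigroup framework of that paper) measures mixing time as the first step at which the worst-case total variation distance to the stationary distribution drops below 1/4. The example chain, a lazy random walk on a 6-cycle, and the helper names are assumptions made for illustration.

import numpy as np

def tv_distance(p, q):
    # Total variation distance between two probability vectors.
    return 0.5 * np.abs(p - q).sum()

def mixing_time(P, eps=0.25):
    # Smallest t with max_x || P^t(x, .) - pi ||_TV <= eps, found by brute force.
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    Pt = np.eye(n)
    for t in range(1, 10_000):
        Pt = Pt @ P
        if max(tv_distance(Pt[x], pi) for x in range(n)) <= eps:
            return t
    raise RuntimeError("did not mix within the step limit")

# Illustrative lazy random walk on a 6-cycle.
n = 6
P = np.zeros((n, n))
for x in range(n):
    P[x, x] = 0.5
    P[x, (x + 1) % n] += 0.25
    P[x, (x - 1) % n] += 0.25

print("mixing time (eps = 1/4):", mixing_time(P))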
  2. Let $G$ be a finite group, let $H, K$ be subgroups of $G$, and let $H \backslash G / K$ be the double coset space. If $Q$ is a probability on $G$ which is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$), then the random walk driven by $Q$ on $G$ projects to a Markov chain on $H \backslash G / K$. This allows analysis of the lumped chain using the representation theory of $G$. Examples include coagulation-fragmentation processes and natural Markov chains on contingency tables. Our main example projects the random transvections walk on $GL_n(q)$ onto a Markov chain on $S_n$ via the Bruhat decomposition. The chain on $S_n$ has a Mallows stationary distribution and interesting mixing time behavior. The projection illuminates the combinatorics of Gaussian elimination. Along the way, we give a representation of the sum of transvections in the Hecke algebra of double cosets, which describes the Markov chain as a mixture of Metropolis chains. Some extensions and examples of double coset Markov chains with $G$ a compact group are discussed.
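The projection in the preceding abstract rests on the fact that a conjugation-invariant step distribution lumps to a Markov chain on double cosets. The following sanity-check sketch verifies this on a tiny example with illustrative choices ($G = S_3$, $Q$ uniform on the transpositions, $H = K$ generated by a single transposition), not the $GL_n(q)$ example of the paper.

from itertools import permutations
import numpy as np

# G = S_3 as permutation tuples; compose(a, b) applies b first, then a.
G = list(permutations(range(3)))

def compose(a, b):
    return tuple(a[b[i]] for i in range(3))

# Q: uniform on the transpositions (a full conjugacy class), so Q(s^-1 t s) = Q(t).
transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]
Q = {g: (1 / 3 if g in transpositions else 0.0) for g in G}

# H = K = the subgroup generated by the transposition (0 1) -- an illustrative choice.
H = K = [(0, 1, 2), (1, 0, 2)]

# Enumerate the double cosets H g K.
cosets = []
for g in G:
    c = frozenset(compose(compose(h, g), k) for h in H for k in K)
    if c not in cosets:
        cosets.append(c)

def coset_index(g):
    return next(i for i, c in enumerate(cosets) if g in c)

# Lumpability check: for every g in a double coset C, the chance that the step
# g -> s g (with s drawn from Q) lands in double coset D depends only on C, not on g.
m = len(cosets)
P = np.zeros((m, m))
for i, C in enumerate(cosets):
    rows = []
    for g in C:
        row = np.zeros(m)
        for s in G:
            row[coset_index(compose(s, g))] += Q[s]
        rows.append(row)
    assert all(np.allclose(rows[0], r) for r in rows), "chain is not lumpable"
    P[i] = rows[0]

print("number of double cosets:", m)
print("lumped transition matrix:\n", P)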
  3. Epigenetic cell memory, the inheritance of gene expression patterns across subsequent cell divisions, is a critical property of multicellular organisms. In recent work [S. Bruno, R. J. Williams, and D. Del Vecchio, PLOS Comput. Biol., 18 (2022), pp. 1–27], a subset of the authors observed in a simulation study how the stochastic dynamics and time scale differences between establishment and erasure processes in chromatin modifications (such as histone modifications and DNA methylation) can have a critical effect on epigenetic cell memory. In this paper, we provide a mathematical framework to rigorously validate and extend beyond these computational findings. Viewing our stochastic model of a chromatin modification circuit as a singularly perturbed, finite state, continuous time Markov chain, we extend beyond existing theory in order to characterize the leading coefficients in the series expansions of stationary distributions and mean first passage times. In particular, we characterize the limiting stationary distribution in terms of a reduced Markov chain, provide an algorithm to determine the orders of the poles of mean first passage times, and determine how changing erasure rates affects system behavior. The theoretical tools developed in this paper not only allow us to set a rigorous mathematical basis for the computational findings of our prior work, highlighting the effect of chromatin modification dynamics on epigenetic cell memory, but they can also be applied to other singularly perturbed Markov chains beyond the applications in this paper, especially those associated with chemical reaction networks.
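As a toy illustration of the singular-perturbation phenomenon discussed in the preceding abstract (and not the chromatin circuit model of that paper), the sketch below computes the stationary distribution of a small continuous-time chain in which one "erasure" rate is scaled by eps, and shows the distribution concentrating as eps goes to 0. The 3-state generator is invented purely for illustration.

import numpy as np

def stationary(Qgen):
    # Stationary distribution of a CTMC generator: pi Q = 0 with sum(pi) = 1.
    n = Qgen.shape[0]
    A = np.vstack([Qgen.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def generator(eps):
    # Illustrative 3-state generator: the 2 -> 0 "erasure" rate is slowed by eps,
    # so the chain is singularly perturbed as eps -> 0.
    Q = np.array([
        [0.0, 1.0, 0.0],
        [2.0, 0.0, 1.0],
        [eps, 0.0, 0.0],
    ])
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

for eps in [1.0, 1e-1, 1e-2, 1e-3]:
    print(f"eps = {eps:g}: pi = {np.round(stationary(generator(eps)), 4)}")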
  4. Markov chain Monte Carlo algorithms have important applications in counting problems and in machine learning problems, settings that involve estimating quantities that are difficult to compute exactly. How much can quantum computers speed up classical Markov chain algorithms? In this work we consider the problem of speeding up simulated annealing algorithms, where the stationary distributions of the Markov chains are Gibbs distributions at temperatures specified according to an annealing schedule. We construct a quantum algorithm that both adaptively constructs an annealing schedule and quantum samples at each temperature. Our adaptive annealing schedule roughly matches the length of the best classical adaptive annealing schedules and improves on nonadaptive temperature schedules by roughly a quadratic factor. Our dependence on the Markov chain gap matches other quantum algorithms and is quadratically better than what classical Markov chains achieve. Our algorithm is the first to combine both of these quadratic improvements. Like other quantum walk algorithms, it also improves on classical algorithms by producing "qsamples" instead of classical samples. This means preparing quantum states whose amplitudes are the square roots of the target probability distribution. In constructing the annealing schedule we make use of amplitude estimation, and we introduce a method for making amplitude estimation nondestructive at almost no additional cost, a result that may have independent interest. Finally we demonstrate how this quantum simulated annealing algorithm can be applied to the problems of estimating partition functions and Bayesian inference.
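The classical baseline that the quantum algorithm in the preceding abstract speeds up is simulated annealing with a telescoping-product estimator for the partition function. Below is a purely classical sketch under a fixed (nonadaptive) temperature schedule, using a toy independent-bit energy chosen so the exact answer is known for comparison; the energy model, schedule, and sampler settings are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy energy on bit strings of length n: E(x) = h . x. The bits are independent,
# so the exact partition function is known and the estimate can be checked.
n = 10
h = rng.normal(size=n)

def energy(x):
    return float(h @ x)

def gibbs_sample(beta, steps=2000):
    # Single-site Metropolis sampler targeting the Gibbs distribution at inverse temperature beta.
    x = rng.integers(0, 2, size=n)
    for _ in range(steps):
        i = rng.integers(n)
        y = x.copy()
        y[i] ^= 1
        if rng.random() < np.exp(-beta * (energy(y) - energy(x))):
            x = y
    return x

# Telescoping-product estimator along a fixed (nonadaptive) schedule:
#   Z(beta_{k+1}) / Z(beta_k) = E_{x ~ Gibbs(beta_k)}[ exp(-(beta_{k+1} - beta_k) E(x)) ]
betas = np.linspace(0.0, 1.0, 11)
log_Z = n * np.log(2)                      # Z(0) = 2^n (uniform distribution)
for b0, b1 in zip(betas[:-1], betas[1:]):
    ratios = [np.exp(-(b1 - b0) * energy(gibbs_sample(b0))) for _ in range(50)]
    log_Z += np.log(np.mean(ratios))

exact = sum(np.log1p(np.exp(-hi)) for hi in h)   # log Z at beta = 1 for independent bits
print("estimated log Z:", log_Z)
print("exact log Z:    ", exact)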