Title: Driving Markov Chains to Desired Equilibria via Linear Programming
In this paper, we develop methods to find the sparsest perturbation to a given Markov chain (either discrete- or continuous-time) such that the perturbed Markov chain achieves a desired equilibrium.
Award ID(s): 1723272
PAR ID: 10184341
Journal Name: Proceedings of the 53rd Asilomar Conference on Signals, Systems, and Computers
Page Range / eLocation ID: 741 to 748
Format(s): Medium: X
Sponsoring Org: National Science Foundation
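Only the abstract is available in this record. As a concrete illustration of the formulation the title suggests, here is a minimal sketch in Python of a linear program that perturbs a discrete-time transition matrix so that a desired distribution becomes stationary. The function name, the use of cvxpy, and the L1 norm as a convex surrogate for sparsity are assumptions for illustration, not the paper's exact method.

```python
# Sketch: find a sparse perturbation Delta of a row-stochastic matrix P
# so that the desired distribution mu is stationary for P + Delta.
# The L1 objective is an assumed convex surrogate for sparsity.
import cvxpy as cp
import numpy as np

def sparse_perturbation(P: np.ndarray, mu: np.ndarray) -> np.ndarray:
    n = P.shape[0]
    Delta = cp.Variable((n, n))
    constraints = [
        cp.sum(Delta, axis=1) == 0,   # rows of P + Delta still sum to 1
        P + Delta >= 0,               # perturbed entries stay nonnegative
        mu @ (P + Delta) == mu,       # mu is stationary for P + Delta
    ]
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(Delta))), constraints)
    prob.solve()
    return Delta.value
```

For a continuous-time chain one would presumably perturb the rate matrix Q instead, with the stationarity constraint mu @ (Q + Delta) == 0 and nonnegativity imposed only off the diagonal; the paper should be consulted for the exact formulation.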
More Like this
1. In this work, we consider the problem of mode clustering in Markov jump models. This model class consists of multiple dynamical modes with a switching sequence that determines how the system switches between them over time. Under different active modes, the observations can have different characteristics. Given only the observations, and without knowing the mode sequence, the goal is to cluster the modes based on their transition distributions in the Markov chain, so as to find a reduced-rank Markov matrix embedded in the original Markov chain. Our approach involves mode sequence estimation, mode clustering, and reduced-rank model estimation, where mode clustering is achieved by applying the singular value decomposition and k-means. We show that, under certain conditions, the clustering error can be bounded and the reduced-rank Markov chain is a good approximation to the original Markov chain. Through simulations, we demonstrate the efficacy of our approach and its application to real-world scenarios. Index Terms: switched model, Markov chain, clustering
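The clustering step described in abstract 1 (SVD of an estimated transition matrix followed by k-means) can be illustrated in a few lines. The function name, the choice of rank r, and the use of scikit-learn's k-means are illustrative assumptions, not the authors' code:

```python
# Sketch: cluster modes by a rank-r spectral embedding of the rows of an
# estimated transition matrix P_hat, then k-means, as the abstract outlines.
import numpy as np
from sklearn.cluster import KMeans

def cluster_modes(P_hat: np.ndarray, r: int, seed: int = 0) -> np.ndarray:
    U, S, Vt = np.linalg.svd(P_hat, full_matrices=False)
    embedding = U[:, :r] * S[:r]   # each row embeds one mode
    labels = KMeans(n_clusters=r, n_init=10, random_state=seed).fit_predict(embedding)
    return labels
```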
2. Classical distribution testing assumes access to i.i.d. samples from the distribution being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X0, ..., Xt, ... of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the two chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference; these are tight up to logarithmic factors if the hitting times of the model chain M0 are Õ(n) in the size n of the state space.
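The single-trajectory access model in abstract 2 starts from empirical transition counts along one run of the chain. A minimal sketch of collecting those counts follows; the paper's actual tester and its measure of difference are more involved than anything shown here:

```python
# Sketch: empirical transition counts and transition matrix from a single
# trajectory X0, X1, ... of a finite-state chain (states labeled 0..n-1).
import numpy as np

def transition_counts(trajectory: np.ndarray, n_states: int) -> np.ndarray:
    counts = np.zeros((n_states, n_states))
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        counts[s, t] += 1
    return counts

def empirical_chain(trajectory: np.ndarray, n_states: int) -> np.ndarray:
    counts = transition_counts(trajectory, n_states)
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows never visited stay all-zero rather than dividing by zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```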
3. Abstract: Let G be a finite group, let $H, K$ be subgroups of G, and let $H \backslash G / K$ denote the double coset space. If Q is a probability on G which is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$), then the random walk driven by Q on G projects to a Markov chain on $H \backslash G / K$. This allows analysis of the lumped chain using the representation theory of G. Examples include coagulation-fragmentation processes and natural Markov chains on contingency tables. Our main example projects the random transvections walk on $GL_n(q)$ onto a Markov chain on $S_n$ via the Bruhat decomposition. The chain on $S_n$ has a Mallows stationary distribution and interesting mixing-time behavior. The projection illuminates the combinatorics of Gaussian elimination. Along the way, we give a representation of the sum of transvections in the Hecke algebra of double cosets, which describes the Markov chain as a mixture of Metropolis chains. Some extensions and examples of double coset Markov chains with G a compact group are discussed.
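The lumping claim in abstract 3 (a conjugation-invariant step distribution projects to a Markov chain on double cosets) can be checked directly on a toy example. The group, subgroups, and step measure below are illustrative choices, not the paper's $GL_n(q)$ example:

```python
# Sketch: verify on G = S3 (walk by uniform random transpositions,
# H = K = the subgroup generated by the swap (0 1)) that the step
# probability into a double coset depends only on the coset of the
# current state, i.e. the walk lumps to a chain on H\G/K.
from itertools import permutations

def compose(a, b):
    # (a o b)(i) = a[b[i]]: apply b first, then a
    return tuple(a[b[i]] for i in range(len(a)))

G = list(permutations(range(3)))
steps = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]   # the three transpositions; Q uniform
H = K = [(0, 1, 2), (1, 0, 2)]              # identity and the swap (0 1)

def double_coset(s):
    return frozenset(compose(compose(h, s), k) for h in H for k in K)

cosets = {double_coset(s) for s in G}

def step_prob(s, D):
    # probability that one step t*s of the walk lands in the double coset D
    return sum(1 for t in steps if compose(t, s) in D) / len(steps)

for C in cosets:
    for D in cosets:
        # Well-defined lumped transition: the same for every representative.
        assert len({step_prob(s, D) for s in C}) == 1
print("lumping holds on S3;", len(cosets), "double cosets")
```

Because Q here is supported on a single conjugacy class (the transpositions), it is constant on conjugacy classes, which is exactly the hypothesis under which the abstract says the projection is Markov.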