Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X_0, ..., X_t, ... of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X_0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M_0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the two chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference; these are tight up to logarithmic factors when the hitting times of the model chain M_0 are Õ(n) in the size n of the state space.
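As a rough illustration of the ingredients such a single-trajectory tester works with, the sketch below builds a plug-in estimate of a transition matrix from one observed trajectory and evaluates a Kazakos-inspired contrast between two chains. The function names and, in particular, the entrywise geometric-mean formula are assumptions made for this example; the abstract does not state the paper's actual definition or test statistic.

```python
import numpy as np

def empirical_transition_matrix(trajectory, n):
    """Plug-in estimate of an n-state chain's transition matrix from a single
    trajectory (a sequence of states 0..n-1). Rows of states that were never
    visited fall back to the uniform distribution."""
    counts = np.zeros((n, n))
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        counts[s, t] += 1
    visits = counts.sum(axis=1, keepdims=True)
    return np.where(visits > 0, counts / np.maximum(visits, 1), 1.0 / n)

def kazakos_style_distance(P, Q):
    """1 minus the spectral radius of the entrywise geometric-mean matrix
    sqrt(P_ij * Q_ij). This specific formula is an assumption, offered as one
    Kazakos-inspired way to contrast two chains; it is not taken from the
    abstract above."""
    G = np.sqrt(P * Q)                       # entrywise geometric mean
    return 1.0 - np.max(np.abs(np.linalg.eigvals(G)))
```

When P and Q coincide, the geometric-mean matrix is the common transition matrix itself, whose spectral radius is 1, so the quantity above is zero; distinct chains push the spectral radius below 1.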
Asymptotics of quasi-stationary distributions of small noise stochastic dynamical systems in unbounded domains
Abstract: We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant; the boundary of the orthant is absorbing for the chain and represents the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type, for the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function), have been studied by Faure and Schreiber (2014). Our results extend these to a setting with an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
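The abstract does not spell out the binomial-Poisson family or the precise scaling, so the sketch below is only a hypothetical one-dimensional, discrete-time cousin (closer to the Faure-Schreiber setting cited above). It is meant to illustrate the two competing features at play: for large system size N the scaled state tracks a deterministic evolution, while the extinction state 0 remains absorbing.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N, r=1.2, steps=300, x0=0.2):
    """Toy density-dependent population chain with absorption at 0.

    X_{k+1} ~ Poisson(X_k * exp(r * (1 - X_k / N))).  For large N the scaled
    state X_k / N stays close to the deterministic Ricker map for a long time
    (relative noise of order 1/sqrt(N)), while extinction at 0 is absorbing.
    Illustrative only -- not the multitype model analyzed in the paper."""
    x = int(x0 * N)
    path = [x / N]
    for _ in range(steps):
        x = rng.poisson(x * np.exp(r * (1.0 - x / N))) if x > 0 else 0
        path.append(x / N)
    return np.array(path)

# Larger N (weaker effective noise) keeps the scaled path near the interior
# attractor much longer before absorption, in the spirit of the
# exponential-in-system-size extinction-time lower bounds described above.
paths = {N: simulate(N) for N in (50, 500, 5000)}
```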
- PAR ID: 10341409
- Date Published:
- Journal Name: Advances in Applied Probability
- Volume: 54
- Issue: 1
- ISSN: 0001-8678
- Page Range / eLocation ID: 64 to 110
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The box-ball systems are integrable cellular automata whose long-time behavior is characterized by soliton solutions, with rich connections to other integrable systems such as the Korteweg-de Vries equation. In this paper, we consider a multicolor box-ball system with two types of random initial configurations and obtain sharp scaling limits of the soliton lengths as the system size tends to infinity. We obtain a sharp scaling limit of soliton lengths that turns out to be different from the single-color case as established in [LLP20]. A large part of our analysis is devoted to studying the associated carrier process, which is a multi-dimensional Markov chain on the orthant, whose excursions and running maxima are closely related to soliton lengths. We establish the sharp scaling of its ruin probabilities, Skorokhod decomposition, strong law of large numbers, and weak diffusive scaling limit to a semimartingale reflecting Brownian motion with explicit parameters. We also establish and utilize complementary descriptions of the soliton lengths and numbers in terms of the modified Greene-Kleitman invariants for the box-ball systems and associated circular exclusion processes. (A minimal sketch of the classical single-color box-ball update appears after the last abstract below.)
-
We study the asymptotic behavior, uniformly in time, of a nonlinear dynamical system under the combined effects of fast periodic sampling (with a small sampling period) and small white noise. The dynamics depend on both the current and recent measurements of the state, and as such the process is not Markovian. Our main results can be interpreted as Law of Large Numbers (LLN) and Central Limit Theorem (CLT) type results. The LLN-type result shows that the resulting stochastic process is close, uniformly in time, to an ordinary differential equation (ODE) as the sampling period and the noise size tend to zero. Regarding the CLT, we provide quantitative and uniform-in-time control of the fluctuations process. The interaction of the two small parameters produces an additional drift term in the limiting fluctuations, which captures both the sampling and noise effects. As a consequence, we obtain a first-order perturbation expansion of the stochastic process along with time-independent estimates on the remainder. The zeroth- and first-order terms in the expansion are given by an ODE and an SDE, respectively. Simulation studies that illustrate and supplement the theoretical results are also provided.
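The following Euler-Maruyama sketch is a purely illustrative toy of this kind of sampled-feedback dynamics, not the paper's model: it writes delta for the sampling period, eps for the noise size, and uses an arbitrary drift b(current state, last sample), all of which are assumptions made for the example.

```python
import numpy as np

def sampled_feedback_path(b, x0, T=10.0, dt=1e-3, delta=0.05, eps=0.05, seed=0):
    """Euler-Maruyama sketch of dX_t = b(X_t, X_{t_k}) dt + eps dW_t, where t_k
    is the most recent sampling time on the grid {0, delta, 2*delta, ...}.
    The drift b and this specific form of sampled feedback are illustrative
    assumptions; the abstract does not specify the model."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    sample_every = max(int(round(delta / dt)), 1)   # Euler steps between samples
    x = np.empty(n_steps + 1)
    x[0] = x0
    last_sample = x0
    for k in range(n_steps):
        if k % sample_every == 0:                   # a new measurement is taken
            last_sample = x[k]
        x[k + 1] = (x[k] + b(x[k], last_sample) * dt
                    + eps * np.sqrt(dt) * rng.standard_normal())
    return x

# Shrinking eps and delta together drives the path toward the ODE
# dx/dt = b(x, x); the residual fluctuations (and the extra drift from the
# interaction of the two small parameters) are what the CLT-type result
# quantifies.
path = sampled_feedback_path(lambda x, y: -x + 0.5 * np.sin(y), x0=1.0)
```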
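For reference, the box-ball abstract above mentions the carrier process whose excursions and running maxima are related to soliton lengths. The sketch below implements only the classical single-color box-ball update (one carrier sweep) as a simplified stand-in; the multicolor dynamics and the multi-dimensional carrier chain analyzed in that paper refine this rule and are not reproduced here.

```python
def bbs_step(config):
    """One sweep of the classical single-color box-ball system.

    config is a list of 0/1 entries (1 = ball).  The carrier moves left to
    right, picking up every ball it meets and dropping one into each empty box
    while loaded; leftover load is discarded at the right edge of this finite
    sketch.  Returns (new_config, carrier_loads), where carrier_loads records
    the carrier's load after each box -- the single-color analogue of the
    carrier process, whose running maximum here matches the soliton length."""
    carrier, new_config, loads = 0, [], []
    for box in config:
        if box == 1:
            carrier += 1          # pick up the ball
            new_config.append(0)
        elif carrier > 0:
            carrier -= 1          # drop a ball into the empty box
            new_config.append(1)
        else:
            new_config.append(0)
        loads.append(carrier)
    return new_config, loads

# A soliton of length 3 advances 3 boxes per time step:
state = [1, 1, 1, 0, 0, 0, 0, 0, 0]
state, loads = bbs_step(state)    # state -> [0, 0, 0, 1, 1, 1, 0, 0, 0]
                                  # loads -> [1, 2, 3, 2, 1, 0, 0, 0, 0]
```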