Classical distribution testing assumes access to i.i.d. samples from the distribution being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X_0, ..., X_t, ... of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X_0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M_0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors when the hitting times of the model chain M_0 are Õ(n) in the size n of the state space.
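The abstract does not spell out the difference measure; below is a minimal sketch of the Kazakos-style quantity it alludes to, assuming the distance is one minus the spectral radius of the entrywise geometric mean of the two transition matrices (function name and normalization are our assumptions; the paper's exact definition may differ):

```python
import numpy as np

def kazakos_distance(M1: np.ndarray, M2: np.ndarray) -> float:
    """Scaling-based difference between two transition matrices.

    Computes 1 - rho(sqrt(M1 * M2)), where the product and square root
    are entrywise and rho is the spectral radius. The Bhattacharyya
    coefficient between t-step trajectory distributions decays roughly
    like rho**t, so this quantity tracks how fast the total variation
    distance between trajectories grows with t.
    """
    geo_mean = np.sqrt(M1 * M2)                        # entrywise geometric mean
    rho = np.max(np.abs(np.linalg.eigvals(geo_mean)))  # spectral radius
    return 1.0 - rho

# Identical chains have distance 0; perturbing the chain increases it.
M  = np.array([[0.9, 0.1], [0.1, 0.9]])
Mp = np.array([[0.6, 0.4], [0.4, 0.6]])
print(kazakos_distance(M, M))   # ~0.0
print(kazakos_distance(M, Mp))  # > 0
```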
Asymptotics of quasi-stationary distributions of small noise stochastic dynamical systems in unbounded domains
Abstract: We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is the absorbing state for the Markov chain, representing the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type have been studied by Faure and Schreiber (2014) in the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and on methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue tightness of the QSDs of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
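As a concrete illustration of the central object, here is a minimal finite-state sketch of a quasi-stationary distribution, computed as the normalized left Perron eigenvector of the transition kernel restricted to the non-absorbed states; the birth-death chain and its parameters are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Toy finite-state illustration: for a chain on {0, 1, ..., n} absorbed
# at 0, the QSD is the normalized left Perron eigenvector of the kernel
# restricted to the non-absorbed states {1, ..., n}.
n = 50                # truncation level; the paper's state space is unbounded
p, q = 0.48, 0.52     # up/down probabilities, drift toward extinction

# Sub-stochastic kernel Q on states {1, ..., n} (row i <-> state i+1).
Q = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q[i, i + 1] = p       # step up
    else:
        Q[i, i] += p          # reflect at the artificial upper boundary
    if i >= 1:
        Q[i, i - 1] = q       # step down
# Row 0 (state 1) is missing its down-step q: that mass is absorption at 0.

evals, evecs = np.linalg.eig(Q.T)   # left eigenvectors of Q
k = int(np.argmax(evals.real))
qsd = np.abs(evecs[:, k].real)
qsd /= qsd.sum()
print("per-step survival rate (Perron eigenvalue):", evals[k].real)
print("QSD mass on states 1..5:", qsd[:5])
```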
- PAR ID: 10341409
- Date Published:
- Journal Name: Advances in Applied Probability
- Volume: 54
- Issue: 1
- ISSN: 0001-8678
- Page Range / eLocation ID: 64 to 110
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This paper is devoted to the detection of contingencies in modern power systems. Because the systems we consider fall within the framework of cyber-physical systems, it is necessary to take into consideration the information-processing aspect and the communication networks; as a consequence, noise and random disturbances are unavoidable. The detection problem then becomes one known as quickest detection. In contrast to treating the problem in a discrete-time setting, which leads to a sequence of detection problems, this work focuses on a continuous-time setup. We treat stochastic differential equation models. One of the distinct features is that the systems are hybrid, involving both continuous states and discrete events that coexist and interact. The discrete-event process is modeled by a continuous-time Markov chain representing random environments that are not represented by a continuous sample path. The quickest detection problem can then be written as an optimal stopping problem. We focus on finding numerical solutions to the underlying problem, using a Markov chain approximation method to construct the numerical algorithms. Numerical examples are used to demonstrate the performance.
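To make the "optimal stopping solved numerically" step concrete, here is a minimal value-iteration sketch in the spirit of the Markov chain approximation method; the grid, kernel, and costs are illustrative assumptions, not the paper's power-system model:

```python
import numpy as np

# Toy optimal-stopping solve: discretize the state, then iterate
#   V(x) = min( stop_cost(x), run_cost(x) + E[V(X_next) | x] ).
n = 101
xs = np.linspace(-1.0, 1.0, n)
stop_cost = (xs - 0.5) ** 2      # cost paid when declaring a detection
run_cost = 0.01 * np.ones(n)     # delay penalty per step

# Simple random-walk approximating chain with reflecting boundaries.
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5

V = stop_cost.copy()
for _ in range(10_000):
    V_new = np.minimum(stop_cost, run_cost + P @ V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Stopping is optimal exactly where V equals the stopping cost.
stop_region = xs[V >= stop_cost - 1e-12]
print("stop immediately on", len(stop_region), "of", n, "grid points")
```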
-
We study the asymptotic behavior, uniform in time, of a nonlinear dynamical system under the combined effects of fast periodic sampling with period δ and small white noise of size ε. The dynamics depend on both the current and recent measurements of the state, and as such the process is not Markovian. Our main results can be interpreted as Law of Large Numbers (LLN) and Central Limit Theorem (CLT) type results. The LLN-type result shows that the resulting stochastic process is close to an ordinary differential equation (ODE), uniformly in time, as ε, δ → 0. Further, with regard to the CLT, we provide quantitative, uniform-in-time control of the fluctuation process. The interaction of the two small parameters produces an additional drift term in the limiting fluctuations, which captures both the sampling and the noise effects. As a consequence, we obtain a first-order perturbation expansion of the stochastic process, along with time-independent estimates on the remainder. The zeroth- and first-order terms in the expansion are given by an ODE and an SDE, respectively. Simulation studies that illustrate and supplement the theoretical results are also provided.
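A minimal simulation sketch of the LLN-type statement, in a simplified variant where the drift uses only the most recent sampled state; the drift b and all parameter values are toy assumptions:

```python
import numpy as np

# Simulate  dX_t = b(X_sampled) dt + sqrt(eps) dW_t,  where X_sampled is
# refreshed every delta units of time, and compare with the ODE x' = b(x).
rng = np.random.default_rng(0)
b = lambda x: -x + np.sin(x)     # toy drift
T, dt = 5.0, 1e-4
delta, eps = 0.01, 1e-4          # sampling period and noise size
steps = int(T / dt)
per = int(delta / dt)            # time steps per sampling period

x_sde, x_ode = 1.0, 1.0
sample = x_sde                   # last sampled value of the state
gap = 0.0
for k in range(steps):
    if k % per == 0:
        sample = x_sde           # refresh the measurement
    x_sde += b(sample) * dt + np.sqrt(eps * dt) * rng.standard_normal()
    x_ode += b(x_ode) * dt       # Euler step for the ODE limit
    gap = max(gap, abs(x_sde - x_ode))

print("sup-norm gap over [0, T]:", gap)  # small when eps, delta are small
```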