Title: Asymptotics of quasi-stationary distributions of small noise stochastic dynamical systems in unbounded domains
Abstract: We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, the boundary of which is absorbing for the Markov chain and represents the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact time interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type, for the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function), were studied by Faure and Schreiber (2014). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and on methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
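As a rough, purely illustrative companion to the abstract, the sketch below simulates a hypothetical one-dimensional Ricker-type binomial-Poisson chain (our choice of model and parameters s, r; not the specific family analysed in the paper) and estimates the QSD empirically by conditioning on survival. The scaled deterministic limit x -> s*x + r*x*exp(-x) has an interior attractor at x* = log(r/(1-s)), and the conditioned empirical law should concentrate near it as the system size n grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def step(x, n, s=0.7, r=2.0):
        # One transition of a hypothetical binomial-Poisson chain:
        # survivors are Binomial(x, s), offspring are Poisson with a
        # Ricker-type density-dependent rate (illustrative choice only).
        return rng.binomial(x, s) + rng.poisson(r * x * np.exp(-x / n))

    def qsd_sample(n, t=2000, m=5000):
        # Crude QSD estimate: run m independent copies for t steps and
        # keep the empirical law of X_t / n conditioned on survival.
        x = np.full(m, n)  # start well inside the orthant
        for _ in range(t):
            x = step(x, n)
        return x[x > 0] / n

    for n in (50, 200, 800):
        sample = qsd_sample(n)
        print(n, round(sample.mean(), 3), round(sample.std(), 3))

With s = 0.7 and r = 2.0 the interior fixed point is x* = log(2/0.3), roughly 1.9, so the printed means should settle near that value, with shrinking spread, as n grows.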
Award ID(s): 1853968, 1814894
NSF-PAR ID: 10341409
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Advances in Applied Probability
Volume: 54
Issue: 1
ISSN: 0001-8678
Page Range / eLocation ID: 64 to 110
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains in which the interaction is given in terms of the total time occupation measure of all particles in the system and has the impact of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n) = o(n). When a(n) ~ n, the fluctuation behavior is expected to be non-standard. Some exploratory numerical results are presented to illustrate the performance of the two approximation schemes.
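The reinforcement idea behind such schemes is easy to prototype: run a single chain and, upon absorption, restart it from a state drawn from the occupation measure accumulated so far. The sketch below is a minimal version of the Aldous-Flannery-Palacios-style recipe on a toy four-state chain (our example; not the interacting-particle schemes analysed in the paper), returning the normalized occupation measure over non-absorbed states as the QSD estimate.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy absorbing chain on {0, 1, 2, 3}: state 0 is absorbing.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.3, 0.4, 0.3, 0.0],
                  [0.0, 0.3, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])

    def qsd_reinforced(P, steps=200_000):
        # Run one chain; on absorption, restart from a state drawn
        # from the occupation measure accumulated so far.  The
        # normalized occupation measure over the non-absorbed states
        # serves as the QSD estimate.
        counts = np.zeros(P.shape[0])
        x = 1
        counts[x] += 1
        for _ in range(steps):
            x = rng.choice(P.shape[0], p=P[x])
            if x == 0:  # absorbed: resample from the past occupation
                w = counts[1:] / counts[1:].sum()
                x = 1 + rng.choice(P.shape[0] - 1, p=w)
            counts[x] += 1
        return counts[1:] / counts[1:].sum()

    print(qsd_reinforced(P))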
  2. This paper is about a class of stochastic reaction networks. Of interest are the dynamics of interconversion among a finite number of substances through reactions that consume some of the substances and produce others. The models we consider are continuous-time Markov jump processes, intended as idealizations of a broad class of biological networks. Reaction rates depend linearly on “enzymes,” which are among the substances produced, and a reaction can occur only in the presence of sufficient upstream material. We present rigorous results for this class of stochastic dynamical systems, the mean-field behaviors of which are described by ordinary differential equations (ODEs). Under the assumption of exponential network growth, we identify certain ODE solutions as being potentially traceable and give conditions under which network trajectories, when rescaled, can with high probability be approximated by these ODE solutions. This leads to a complete characterization of the ω-limit sets of such network solutions (as points or random tori). Dimension reduction is noted depending on the number of enzymes. The second half of this paper is focused on depletion dynamics, i.e., dynamics subsequent to the “phase transition” that occurs when one of the substances becomes unavailable. The picture can be complex, for the depleted substance can be produced intermittently through other network reactions. Treating the model as a slow–fast system, we offer a mean-field description, a first step to understanding what we believe is one of the most natural bifurcations for reaction networks.
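A minimal Gillespie-type simulation conveys the flavor of this model class. The toy network below (our example, with made-up rate constants k1, k2) has two substances A and B that interconvert, with each rate linear in an "enzyme" that the network itself produces, and a reaction stalls when the substance it consumes runs out. With k1 > k2 the run drifts toward depletion of A, loosely echoing the depletion regime the abstract discusses.

    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie(x, k1=0.02, k2=0.015, t_max=5.0):
        # Exact stochastic simulation of a toy two-substance network:
        # A -> B catalysed by B, and B -> A catalysed by A, so each
        # rate is linear in the enzyme copy number and vanishes when
        # the consumed substance is depleted.
        t = 0.0
        a, b = x
        while t < t_max:
            rates = np.array([k1 * a * b,   # A -> B (enzyme: B)
                              k2 * b * a])  # B -> A (enzyme: A)
            total = rates.sum()
            if total == 0.0:
                break                        # depletion: nothing can fire
            t += rng.exponential(1.0 / total)
            if rng.random() < rates[0] / total:
                a, b = a - 1, b + 1
            else:
                a, b = a + 1, b - 1
        return t, a, b

    print(gillespie((50, 50)))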
  3. This paper is devoted to the detection of contingencies in modern power systems. Because the systems we consider fall under the framework of cyber-physical systems, it is necessary to take into consideration the information-processing aspect and the communication networks. A consequence is that noise and random disturbances are unavoidable. The detection problem then becomes one known as quickest detection. In contrast to running the detection problem in a discrete-time setting, leading to a sequence of detection problems, this work focuses on the problem in a continuous-time setup. We treat stochastic differential equation models. One of the distinct features is that the systems are hybrid, involving both continuous states and discrete events that coexist and interact. The discrete event process is modeled by a continuous-time Markov chain representing random environments that are not represented by a continuous sample path. The quickest detection problem can then be written as an optimal stopping problem. This paper is devoted to finding numerical solutions to the underlying problem. We use a Markov chain approximation method to construct the numerical algorithms. Numerical examples are used to demonstrate the performance.
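The Markov chain approximation method reduces such an optimal stopping problem to a dynamic programming recursion on a discrete grid. The sketch below solves a stylised one-dimensional stand-in (a pure-diffusion statistic on [0, 1] with running cost c and stopping cost g(x) = 1 - x; not the paper's hybrid model) by iterating V = min(g, c*dt + E[V]) to convergence and reading off a stopping threshold.

    import numpy as np

    def stopping_value(n=200, c=0.5):
        # Value iteration for the Markov chain approximation of a
        # stylised optimal stopping problem on [0, 1]: the chain takes
        # +/- h steps with equal probability (pure diffusion, dt = h^2),
        # pays running cost c per unit time while continuing, and may
        # stop at any grid point for terminal cost g(x) = 1 - x.
        h = 1.0 / n
        dt = h * h
        x = np.linspace(0.0, 1.0, n + 1)
        g = 1.0 - x
        V = g.copy()
        for _ in range(50_000):
            cont = np.empty_like(V)
            cont[1:-1] = c * dt + 0.5 * (V[:-2] + V[2:])
            cont[0] = c * dt + V[1]    # reflecting boundary at 0
            cont[-1] = c * dt + V[-2]  # reflecting boundary at 1
            V_new = np.minimum(g, cont)
            if np.max(np.abs(V_new - V)) < 1e-12:
                break
            V = V_new
        stop = x[np.isclose(V, g)]
        return V, (stop.min() if stop.size else None)

    V, x_star = stopping_value()
    print("stop once the statistic exceeds ~", x_star)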
  4. Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X0, ..., Xt, ... of an unknown, symmetric, and finite state Markov chain M. We do not control the starting state X0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain M0 are O(n) in the size of the state space n.
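For intuition, the naive plug-in baseline is easy to state: estimate the transition kernel from the single trajectory by counting and compare it row by row to the model M0. The sketch below computes a visit-weighted total-variation statistic in that spirit; it is a baseline diagnostic only, not the trajectory-based distance the paper develops (which captures how the total variation between trajectory laws scales with trajectory length).

    import numpy as np

    rng = np.random.default_rng(0)

    def empirical_kernel(traj, n):
        # Row-normalized transition counts from a single trajectory.
        C = np.zeros((n, n))
        for a, b in zip(traj[:-1], traj[1:]):
            C[a, b] += 1
        rows = C.sum(axis=1, keepdims=True)
        P_hat = np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)
        return P_hat, rows[:, 0]

    def naive_identity_stat(traj, M0):
        # Visit-weighted total-variation distance between the empirical
        # kernel and the model M0 -- a plug-in baseline diagnostic, not
        # the paper's trajectory-based distance.
        P_hat, visits = empirical_kernel(traj, M0.shape[0])
        tv = 0.5 * np.abs(P_hat - M0).sum(axis=1)
        return float((visits / visits.sum() * tv).sum())

    # Example: a symmetric three-state model chain and a trajectory from it.
    M0 = np.array([[0.6, 0.2, 0.2],
                   [0.2, 0.6, 0.2],
                   [0.2, 0.2, 0.6]])
    traj = [0]
    for _ in range(10_000):
        traj.append(rng.choice(3, p=M0[traj[-1]]))
    print(naive_identity_stat(np.array(traj), M0))  # small when M = M0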