Title: Exponential ergodicity and steady-state approximations for a class of Markov processes under fast regime switching
Abstract: We study ergodic properties of a class of Markov-modulated general birth–death processes under fast regime switching. The first set of results concerns the ergodic properties of the properly scaled joint Markov process with a parameter that is taken to be large. Under very weak hypotheses, we show that if the averaged process is exponentially ergodic for large values of the parameter, then the same applies to the original joint Markov process. The second set of results concerns steady-state diffusion approximations, under the assumption that the 'averaged' fluid limit exists. Here, we establish convergence rates for the moments of the approximating diffusion process to those of the Markov-modulated birth–death process. This is accomplished by comparing the generator of the approximating diffusion and that of the joint Markov process. We also provide several examples which demonstrate how the theory can be applied.
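The joint process in the abstract pairs a birth–death component with a fast background chain. The construction can be sketched with a Gillespie-style simulation; all rates, the background generator, and the switching speed below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mmbd(n, T, birth, death, Q):
    """Simulate a Markov-modulated birth-death process up to time T.

    X is the birth-death component, J the background chain; J switches at
    rate n * (-Q[j, j]), so large n is the fast-regime-switching limit.
    """
    x, j, t = 0, 0, 0.0
    while t < T:
        b = birth[j](x)
        d = death[j](x) if x > 0 else 0.0
        sw = n * (-Q[j, j])                 # background switching rate
        total = b + d + sw
        t += rng.exponential(1.0 / total)
        u = rng.random() * total
        if u < b:
            x += 1                          # birth
        elif u < b + d:
            x -= 1                          # death
        else:
            probs = np.maximum(Q[j], 0.0)   # off-diagonal jump probabilities
            j = int(rng.choice(len(probs), p=probs / probs.sum()))
    return x

# Hypothetical two-regime example; the averaged process is ergodic
# because the death rate dominates the averaged birth rate.
birth = [lambda x: 2.0, lambda x: 1.0]
death = [lambda x: 3.0, lambda x: 3.0]
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
```

As the switching parameter n grows, the sampled path behaves like a birth–death process with the averaged rates, which is the regime the paper's ergodicity results address.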
Award ID(s):
1715875
PAR ID:
10282626
Journal Name:
Advances in Applied Probability
Volume:
53
Issue:
1
ISSN:
0001-8678
Page Range / eLocation ID:
1 to 29
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We study ergodic properties of Markovian multiclass many-server queues that are uniform over scheduling policies and the size of the system. The system is heavily loaded in the Halfin–Whitt regime, and the scheduling policies are work conserving and preemptive. We provide a unified approach via a Lyapunov function method that establishes Foster–Lyapunov equations for both the limiting diffusion and the prelimit diffusion-scaled queueing processes simultaneously. We first study the limiting controlled diffusion and show that if the spare capacity (safety staffing) parameter is positive, the diffusion is exponentially ergodic uniformly over all stationary Markov controls, and the invariant probability measures have uniform exponential tails. This result is sharp because when there is no abandonment and the spare capacity parameter is negative, the controlled diffusion is transient under any Markov control. In addition, we show that if all the abandonment rates are positive, the invariant probability measures have sub-Gaussian tails regardless of whether the spare capacity parameter is positive or negative. Using these results, we proceed to establish the corresponding ergodic properties for the diffusion-scaled queueing processes. In addition to providing a simpler proof of previous results in Gamarnik and Stolyar [Gamarnik D, Stolyar AL (2012) Multiclass multiserver queueing system in the Halfin-Whitt heavy traffic regime: asymptotics of the stationary distribution. Queueing Systems 71(1–2):25–51], we extend these results to multiclass models with renewal arrival processes, albeit under the assumption that the mean residual life functions are bounded. For the Markovian model with Poisson arrivals, we obtain stronger results and show that the convergence to the stationary distribution is at an exponential rate uniformly over all work-conserving stationary Markov scheduling policies.
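The Halfin–Whitt scaling referenced above can be illustrated with a single-class M/M/n simulation. This toy sketch (parameters hypothetical, single class only, no abandonment) records the diffusion-scaled state (X - n)/sqrt(n) with positive spare capacity beta:

```python
import numpy as np

rng = np.random.default_rng(1)

def mmn_halfin_whitt(n, beta, T):
    """Simulate an M/M/n queue with arrival rate n - beta*sqrt(n) and unit
    service rate (Halfin-Whitt regime; beta is the spare capacity), and
    return the diffusion-scaled trajectory (X - n)/sqrt(n)."""
    lam = n - beta * np.sqrt(n)
    x, t, samples = n, 0.0, []          # start near the fluid equilibrium
    while t < T:
        mu = min(x, n)                  # rate of service completions
        total = lam + mu
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() * total < lam else -1
        samples.append((x - n) / np.sqrt(n))
    return np.array(samples)
```

With beta > 0 the scaled process drifts back toward negative values, the discrete analogue of the exponential ergodicity established for the limiting diffusion.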
  2. We study the ergodic properties of a class of controlled stochastic differential equations (SDEs) driven by α-stable processes which arise as the limiting equations of multiclass queueing models in the Halfin–Whitt regime that have heavy-tailed arrival processes. When the safety staffing parameter is positive, we show that the SDEs are uniformly ergodic and enjoy a polynomial rate of convergence to the invariant probability measure in total variation, which is uniform over all stationary Markov controls resulting in a locally Lipschitz continuous drift. We also derive a matching lower bound on the rate of convergence (under no abandonment). On the other hand, when all abandonment rates are positive, we show that the SDEs are exponentially ergodic uniformly over the above-mentioned class of controls. Analogous results are obtained for Lévy-driven SDEs arising from multiclass many-server queues under asymptotically negligible service interruptions. For these equations, we show that the aforementioned ergodic properties are uniform over all stationary Markov controls. We also extend a key functional central limit theorem concerning diffusion approximations so as to make it applicable to the models studied here.
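An α-stable-driven SDE of the kind described can be simulated by Euler–Maruyama with Chambers–Mallows–Stuck increments. The linear drift below is a toy stand-in for an abandonment term, not the paper's controlled equation, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def alpha_stable(alpha, size):
    """Symmetric alpha-stable increments via the Chambers-Mallows-Stuck
    transform (skewness beta = 0, unit scale)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def euler_maruyama_stable(theta, x0, T, dt, alpha):
    """Euler-Maruyama for dX = -theta * X dt + dL, with L alpha-stable.
    theta > 0 plays the role of a positive abandonment rate."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    dL = dt ** (1 / alpha) * alpha_stable(alpha, n)   # self-similar scaling
    for k in range(n):
        x[k + 1] = x[k] - theta * x[k] * dt + dL[k]
    return x
```

With theta > 0 the drift pulls the heavy-tailed path back toward the origin, mirroring the exponential ergodicity asserted when all abandonment rates are positive.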
  3. Ramanan, Kavita (Ed.)
    The paper concerns the stochastic approximation recursion, \[ \theta_{n+1}= \theta_n + \alpha_{n + 1} f(\theta_n, \Phi_{n+1}), \quad n\ge 0, \] where the {\em estimates} $$\{ \theta_n\} $$ evolve on $$\mathbb{R}^d$$, and $$\boldsymbol{\Phi} := \{ \Phi_n \}$$ is a stochastic process on a general state space, satisfying a conditional Markov property that allows for parameter-dependent noise. In addition to standard Lipschitz assumptions and conditions on the vanishing step-size sequence, it is assumed that the associated \textit{mean flow} $$\frac{d}{dt} \vartheta_t = \bar{f}(\vartheta_t)$$ is globally asymptotically stable, with stationary point denoted $$\theta^*$$. The main results are established under additional conditions on the mean flow and an extension of the Donsker–Varadhan Lyapunov drift condition known as (DV3): (i) A Lyapunov function is constructed for the joint process $$\{\theta_n,\Phi_n\}$$ that implies convergence of the estimates in $$L_4$$. (ii) A functional central limit theorem (CLT) is established, as well as the usual one-dimensional CLT for the normalized error. Moment bounds combined with the CLT imply convergence of the normalized covariance $$\mathbb{E}[ z_n z_n^\top ]$$ to the asymptotic covariance $$\Sigma_\theta$$ in the CLT, where $$z_n := (\theta_n-\theta^*)/\sqrt{\alpha_n}$$. (iii) The CLT holds for the normalized averaged parameters $$z^{\mathrm{PR}}_n := \sqrt{n} (\theta^{\mathrm{PR}}_n -\theta^*)$$, with $$\theta^{\mathrm{PR}}_n := n^{-1} \sum_{k=1}^n\theta_k$$, subject to standard assumptions on the step-size. Moreover, the covariance of $$z^{\mathrm{PR}}_n$$ converges to $$\Sigma^{\mathrm{PR}}$$, the minimal covariance of Polyak and Ruppert. (iv) An example is given where $$f$$ and $$\bar{f}$$ are linear in $$\theta$$, and $$\boldsymbol{\Phi}$$ is a geometrically ergodic Markov chain but does not satisfy (DV3). While the algorithm is convergent, the second moment of $$\theta_n$$ is unbounded and in fact diverges.
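The recursion and its Polyak–Ruppert average can be sketched concretely. The two-state noise chain, step-size exponent, and target value below are all hypothetical choices for illustration, arranged so the mean flow is globally asymptotically stable:

```python
import numpy as np

theta_star = 2.0                        # stationary point of the mean flow
P = np.array([[0.9, 0.1], [0.1, 0.9]])  # noise chain transition matrix
states = np.array([-1.0, 1.0])          # Phi takes values -1, +1

def f(theta, phi):
    # Under the uniform stationary law of Phi, E[f(theta, Phi)] equals
    # theta_star - theta, so the mean flow d/dt v = theta_star - v is
    # globally asymptotically stable at theta_star.
    return (theta_star - theta) + phi

def sa_polyak_ruppert(n_steps, gamma=0.7, seed=3):
    """Run theta_{n+1} = theta_n + a_{n+1} f(theta_n, Phi_{n+1}) with
    step size a_n = n**(-gamma); return the last iterate and the
    Polyak-Ruppert average (1/n) * sum_k theta_k."""
    rng = np.random.default_rng(seed)
    theta, phi_idx, running_sum = 0.0, 0, 0.0
    for n in range(1, n_steps + 1):
        phi_idx = rng.choice(2, p=P[phi_idx])   # Markovian noise step
        theta += n ** (-gamma) * f(theta, states[phi_idx])
        running_sum += theta
    return theta, running_sum / n_steps
```

The averaged iterate typically has smaller asymptotic variance than the last iterate, which is the content of the minimal-covariance result cited above.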
  4.
    In a wireless network with dynamic spectrum sharing, tracking temporal spectrum holes across a wide spectrum band is a challenging task. We consider a scenario in which the spectrum is divided into a large number of bands or channels, each of which has the potential to provide dynamic spectrum access opportunities. The occupancy times of each band by primary users are generally non-exponentially distributed. We develop an approach to determine and parameterize a small selected subset of the bands with good spectrum access opportunities, using limited computational resources under noisy measurements. We model the noisy measurements of the received signal in each band as a bivariate Markov modulated Gaussian process, which can be viewed as a continuous-time bivariate Markov chain observed through Gaussian noise. The underlying bivariate Markov process allows for the characterization of non-exponentially distributed state sojourn times. The proposed scheme combines an online expectation-maximization algorithm for parameter estimation with a computing budget allocation algorithm. Observation time is allocated across the bands to determine the subset of G out of G frequency bands with the largest mean idle times for dynamic spectrum access and at the same time to obtain accurate parameter estimates for this subset of bands. Our simulation results show that when channel holding times are non-exponential, the proposed scheme achieves a substantial improvement in the probability of correct selection of the best subset of bands compared to an approach based on a (univariate) Markov modulated Gaussian process model. 
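The bivariate-Markov construction, in which hidden phases make the observable sojourn times non-exponential, can be sketched as a discretized toy simulation. The generator, power levels, and grid below are hypothetical, and this is only the forward model, not the paper's EM or budget-allocation scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

# Underlying bivariate chain: (observable band state, hidden phase).
# Observable state 0 = idle (phases 0, 1), state 1 = busy (phases 2, 3);
# the extra phases make observable sojourn times phase-type, hence
# non-exponential. All numbers are hypothetical.
Q = np.array([
    [-3.0,  2.0,  1.0,  0.0],
    [ 0.0, -1.0,  0.5,  0.5],
    [ 1.0,  0.0, -2.0,  1.0],
    [ 0.5,  0.5,  0.0, -1.0],
])
obs_state = np.array([0, 0, 1, 1])
means, sigma = np.array([0.0, 5.0]), 1.0   # received-power levels per state

def simulate(T, dt=0.01):
    """Sample the underlying chain on a time grid and emit the noisy
    measurements as Gaussian observations of the observable state."""
    P = np.eye(4) + dt * Q                 # first-order transition kernel
    k, ss, ys = 0, [], []
    for _ in range(int(T / dt)):
        k = rng.choice(4, p=P[k])
        ss.append(obs_state[k])
        ys.append(means[obs_state[k]] + sigma * rng.normal())
    return np.array(ss), np.array(ys)
```

A univariate Markov-modulated model would force exponential idle times; the phase structure is what lets the estimator match non-exponential channel holding times.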
  5. Optimal designs minimize the number of experimental runs (samples) needed to accurately estimate model parameters, resulting in algorithms that, for instance, efficiently minimize parameter estimate variance. Governed by knowledge of past observations, adaptive approaches adjust sampling constraints online as model parameter estimates are refined, continually maximizing expected information gained or variance reduced. We apply adaptive Bayesian inference to estimate transition rates of Markov chains, a common class of models for stochastic processes in nature. Unlike most previous studies, our sequential Bayesian optimal design is updated with each observation and can be simply extended beyond two-state models to birth–death processes and multistate models. By iteratively finding the best time to obtain each sample, our adaptive algorithm maximally reduces variance, resulting in lower overall error in ground truth parameter estimates across a wide range of Markov chain parameterizations and conformations. 
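For a fully observed two-state chain, the Bayesian update for transition rates is conjugate: a Gamma prior on each exit rate yields a Gamma posterior with counts of jumps and holding times as sufficient statistics. A minimal sketch under that simplifying assumption (the paper's adaptive design goes further by optimizing the sampling times; rates below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_two_state(q01, q10, T):
    """Fully observed two-state CTMC path on [0, T], returned as a list of
    (sojourn_time, state) pairs; the final sojourn is censored at T."""
    s, t, path = 0, 0.0, []
    while t < T:
        rate = q01 if s == 0 else q10
        tau = rng.exponential(1.0 / rate)
        path.append((min(tau, T - t), s))
        t += tau
        s = 1 - s
    return path

def gamma_posterior(path, a0=1.0, b0=1.0):
    """Conjugate update: with independent Gamma(a0, b0) priors, the posterior
    for each exit rate is Gamma(a0 + jumps out of s, b0 + time spent in s)."""
    jumps, hold = np.zeros(2), np.zeros(2)
    for tau, s in path:
        hold[s] += tau
    for _, s in path[:-1]:       # last sojourn is censored: no jump observed
        jumps[s] += 1
    return a0 + jumps, b0 + hold  # posterior mean is (a0+jumps)/(b0+hold)
```

The posterior variance (a/b**2 per state) is what an adaptive design of the kind described would drive down by choosing where and when to sample next.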