Award ID: 2001687


  1. This paper studies load balancing for many-server systems (N servers). Each server has a buffer of size b − 1 and can hold at most one job in service and b − 1 jobs in the buffer. The service time of a job follows the Coxian-2 distribution. We focus on the steady-state performance of load balancing policies in the heavy-traffic regime, where the normalized load of the system is λ = 1 − N^(−α) for 0 < α < 0.5. We identify a set of policies that achieve asymptotically zero waiting. This set includes several classical policies such as join-the-shortest-queue (JSQ), join-the-idle-queue (JIQ), idle-one-first (I1F), and power-of-d-choices (Po-d) with d = O(N^α log N). The proof of the main result is based on Stein's method and state space collapse. A key technical contribution of this paper is an iterative state space collapse approach that leads to a simple generator approximation when applying Stein's method.

     
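    For orientation, here is a minimal Python sketch of the power-of-d-choices dispatch rule named in the abstract. The function name pod_dispatch, the in-place queue_lengths list, and the blocking behavior at a full buffer are illustrative assumptions, not details taken from the paper:

    import random

    def pod_dispatch(queue_lengths, d, b):
        """Power-of-d-choices (Po-d): sample d of the N servers uniformly at
        random and route the arriving job to the sampled server with the
        fewest jobs; the job is blocked if that server already holds b jobs
        (one in service plus b - 1 in the buffer). JSQ is the special case
        d = N. Names and blocking behavior are illustrative assumptions."""
        sampled = random.sample(range(len(queue_lengths)), d)
        target = min(sampled, key=lambda i: queue_lengths[i])
        if queue_lengths[target] >= b:
            return None  # all sampled servers are full: the job is blocked
        queue_lengths[target] += 1
        return target

    # Example: 100 servers, buffers of size b - 1 = 3, d = 5 samples per arrival.
    queues = [0] * 100
    print(pod_dispatch(queues, d=5, b=4))

    The abstract's zero-waiting result concerns the regime d = O(N^α log N); the sketch only shows the dispatch mechanics, not the asymptotic analysis.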
  2. This paper presents a model-free reinforcement learning (RL) algorithm for infinite-horizon average-reward Constrained Markov Decision Processes (CMDPs). For a sufficiently large learning horizon K, the proposed algorithm achieves sublinear regret and zero constraint violation. The bounds depend on the number of states S, the number of actions A, and two constants that are independent of the learning horizon K.
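    The abstract states guarantees rather than mechanics. For orientation only, here is a generic primal-dual Q-learning skeleton for an average-reward CMDP (maximize average reward subject to average utility at least rho). This is a sketch of the general problem class, not the paper's algorithm; the env.step interface and all parameter names are assumptions:

    import numpy as np

    def primal_dual_q_learning(env, S, A, K, rho, alpha=0.1, eta=0.01, eps=0.05):
        """Generic primal-dual Q-learning for an average-reward CMDP:
        maximize average reward subject to average utility >= rho.
        Illustrative sketch only, NOT the paper's algorithm.
        Assumed interface: env.reset() -> s, env.step(s, a) -> (r, g, s')."""
        Q = np.zeros((S, A))   # Q-table for the Lagrangian reward r + lam * g
        lam = 0.0              # dual variable enforcing the constraint
        avg = 0.0              # running estimate of the average Lagrangian reward
        s = env.reset()
        for _ in range(K):
            # epsilon-greedy action selection on the Lagrangian Q-values
            a = np.random.randint(A) if np.random.rand() < eps else int(Q[s].argmax())
            r, g, s_next = env.step(s, a)
            lagr = r + lam * g
            # relative (average-reward) Q-learning update
            Q[s, a] += alpha * (lagr - avg + Q[s_next].max() - Q[s, a])
            avg += alpha * (lagr - avg)
            # dual ascent: raise lam when the utility falls short of rho
            lam = max(0.0, lam + eta * (rho - g))
            s = s_next
        return Q, lam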
  3. We consider an ultra-dense wireless network with N channels and M = N devices. Messages with fresh information are generated at each device according to a random process and need to be transmitted to an access point. The value of a message decreases as it ages, so each device searches for an idle channel to transmit the message as soon as it can. However, each channel probing incurs a fixed cost (energy), so a device needs to adapt its probing rate based on the age of the message. At each device, the design of the optimal probing strategy can be formulated as an infinite-horizon Markov Decision Process (MDP) in which the devices compete with one another to find idle channels. While it is natural to view the system as a Bayesian game, such a system is often intractable to analyze. We therefore use the Mean Field Game (MFG) approach to analyze the system in a large-system regime, where the number of devices is very large, in order to understand the structure of the problem and to find efficient probing strategies. We present an analysis based on the MFG perspective. We begin by characterizing the space of valid policies and use this to show the existence of a Mean Field Nash Equilibrium (MFNE) in a constrained set for any general increasing cost function with diminishing rewards. Further, we provide an algorithm for computing the equilibrium for any given device, along with the corresponding age-dependent channel probing policy.
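    To make the age-dependent probing structure concrete, here is a small value-iteration sketch that computes one device's best response given the mean-field probability p_idle of finding an idle channel. The dynamics (pay a fixed cost to probe; a successful probe transmits the message and earns the age-decreasing value(a)) and all names are assumptions for illustration, not the paper's exact model:

    import numpy as np

    def best_response_probing(p_idle, value, cost, a_max, gamma=0.95, tol=1e-8):
        """Value iteration for one device's age-dependent probing policy,
        given the mean-field probability p_idle that a probed channel is
        idle. Assumed dynamics (a sketch, not the paper's model): at age a
        the device either waits (the message ages by one) or pays `cost`
        to probe; a successful probe transmits the message and earns
        value(a), which decreases in a. Ages are capped at a_max."""
        V = np.zeros(a_max + 1)
        while True:
            V_new = np.empty_like(V)
            probe_flag = np.zeros(a_max + 1, dtype=bool)
            for a in range(a_max + 1):
                nxt = min(a + 1, a_max)
                wait = gamma * V[nxt]
                probe = -cost + p_idle * value(a) + (1.0 - p_idle) * gamma * V[nxt]
                V_new[a] = max(wait, probe)
                probe_flag[a] = probe > wait
            if np.max(np.abs(V_new - V)) < tol:
                return probe_flag, V_new
            V = V_new

    Under this reading, an MFNE would be approximated by alternating this best response with a consistency update of p_idle induced by the population's policy.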