We consider a long-term average profit-maximizing admission control problem in an M/M/1 queuing system with unknown service and arrival rates. With a fixed reward collected upon service completion and a cost per unit of time enforced on customers waiting in the queue, a dispatcher decides, upon each arrival, whether to admit the arriving customer, based on the full history of observations of the queue length of the system. Naor [Naor P (1969) The regulation of queue size by levying tolls. Econometrica 37(1):15–24] shows that, if all the parameters of the model are known, then it is optimal to use a static threshold policy: admit if the queue length is less than a predetermined threshold, and reject otherwise. We propose a learning-based dispatching algorithm and characterize its regret with respect to optimal dispatch policies for the full-information model of Naor (1969). We show that the algorithm achieves an O(1) regret when all optimal thresholds with full information are nonzero, and achieves an O(ln^{1+ε}(N)) regret for any specified ε > 0 in the case that an optimal threshold with full information is 0 (i.e., an optimal policy is to reject all arrivals), where N is the number of arrivals.
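To make the full-information benchmark concrete: under a static threshold n, the system behaves as an M/M/1/n queue whose stationary distribution is truncated geometric, so the long-run average profit of every threshold has a closed form and the optimal threshold can be found by enumeration. The sketch below is only this benchmark computation, not the paper's learning algorithm (which must work without knowing λ and μ); charging the holding cost on every customer in the system, and all parameter values, are illustrative assumptions.

```python
# Sketch of the full-information benchmark: under threshold n the system
# is an M/M/1/n queue with a truncated-geometric stationary distribution,
# so each threshold's long-run average profit is available in closed form.
# Assumptions (not from the paper): holding cost c is charged on every
# customer in the system, and all parameter values are illustrative.

def avg_profit(n, lam, mu, R, c):
    """Long-run average profit of the static threshold-n policy."""
    if n == 0:
        return 0.0                      # reject everyone: no reward, no cost
    rho = lam / mu
    weights = [rho**k for k in range(n + 1)]
    Z = sum(weights)
    pi = [w / Z for w in weights]       # stationary distribution, k = 0..n
    throughput = lam * (1 - pi[n])      # admitted arrivals = completions
    mean_in_system = sum(k * p for k, p in enumerate(pi))
    return R * throughput - c * mean_in_system

def best_threshold(lam, mu, R, c, n_max=200):
    """Enumerate thresholds 0..n_max; return the profit-maximizing one."""
    return max(range(n_max + 1), key=lambda n: avg_profit(n, lam, mu, R, c))

print(best_threshold(lam=0.8, mu=1.0, R=5.0, c=1.0))
```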
Optimal Pricing in a Single Server System
We study optimal pricing in a single-server queue when the customers' valuation of service depends on their waiting time. In particular, we consider a very general model in which customer valuations are random and are sampled from a distribution that depends on the queue length. The goal of the service provider is to set dynamic, state-dependent prices in order to maximize its revenue while also managing congestion. We model the problem as a Markov decision process and present structural results on the optimal policy, along with an algorithm for computing an approximately optimal policy. We further propose a myopic policy that is easy to evaluate and derive bounds on its performance. Finally, we illustrate the quality of our approximate and myopic solutions using numerical simulations.
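As a rough illustration of such an MDP formulation, the sketch below runs discounted value iteration on a truncated queue-length state space with a uniformized transition kernel. The price grid, the queue-dependent exponential valuation model, the truncation level, and the discount factor are all illustrative assumptions rather than the paper's model.

```python
# Sketch: discounted value iteration for state-dependent pricing on a
# truncated queue. Price grid, valuation model, truncation, and discount
# factor are illustrative assumptions, not the paper's model.
import numpy as np

LAM, MU = 0.9, 1.0              # arrival and service rates (assumed)
Q_MAX = 30                      # queue-length truncation (assumed)
PRICES = np.linspace(0.0, 5.0, 21)
GAMMA = 0.999                   # discount factor per uniformized step
RATE = LAM + MU                 # uniformization constant

def join_prob(price, q):
    # Assumed valuation model: exponential valuations whose mean shrinks
    # with the queue length, so congestion depresses willingness to pay.
    return np.exp(-price * (1.0 + 0.2 * q) / 4.0)

V = np.zeros(Q_MAX + 1)
policy = np.zeros(Q_MAX + 1)
for _ in range(3000):
    V_new = np.empty_like(V)
    for q in range(Q_MAX + 1):
        vals = []
        for p in PRICES:
            pj = join_prob(p, q) if q < Q_MAX else 0.0  # full queue: balk
            # One uniformized step: arrival joins (collect p), arrival
            # balks, or a departure (virtual self-loop at q = 0).
            ev = (LAM / RATE) * (pj * (p + GAMMA * V[min(q + 1, Q_MAX)])
                                 + (1 - pj) * GAMMA * V[q]) \
                 + (MU / RATE) * GAMMA * V[max(q - 1, 0)]
            vals.append(ev)
        V_new[q] = max(vals)
        policy[q] = PRICES[int(np.argmax(vals))]
    V = V_new

print(policy[:10])  # posted price as a function of queue length
```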
- PAR ID: 10516556
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Modeling and Performance Evaluation of Computing Systems
- Volume: 8
- Issue: 4
- ISSN: 2376-3639
- Page Range / eLocation ID: 1 to 32
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We study optimal fidelity selection for a human operator servicing a queue of homogeneous tasks. The service time distribution of the human operator depends on her cognitive dynamics and the level of fidelity selected for servicing the task. Cognitive dynamics of the operator evolve as a Markov chain in which the cognitive state increases (decreases) with high probability whenever she is busy (resting). The tasks arrive according to a Poisson process and each task waiting in the queue loses its value at a fixed rate. We address the trade-off between high quality service of a task and consequent loss in value of future tasks using a Semi-Markov Decision Process (SMDP) framework. We numerically determine an optimal policy and establish its structural properties.
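A toy simulation can make the core tension in the abstract above concrete: higher fidelity earns more per task but occupies the operator longer, so queued tasks decay in value. The sketch below omits the operator's cognitive-state chain and the SMDP solution entirely; all rates, rewards, and the heuristic policy are illustrative assumptions.

```python
# Toy simulation of the fidelity/decay trade-off. This sketch omits the
# cognitive-state chain and the SMDP solution; all rates, rewards, and
# the policy are assumptions.
import random

random.seed(0)
LAM = 0.5                                  # Poisson task arrival rate
DECAY = 0.1                                # value lost per unit time queued
REWARD = {"low": 1.0, "high": 2.0}         # reward per completed task
MEAN_SERVICE = {"low": 1.0, "high": 2.5}   # mean service time per fidelity

def simulate(policy, horizon=50000.0):
    t, queue, profit = 0.0, [], 0.0
    next_arrival = random.expovariate(LAM)
    while t < horizon:
        if not queue:                      # idle: jump to the next arrival
            t = next_arrival
            queue.append(t)
            next_arrival += random.expovariate(LAM)
            continue
        fid = policy(len(queue))           # pick fidelity for head task
        s = random.expovariate(1.0 / MEAN_SERVICE[fid])
        while next_arrival < t + s:        # arrivals during this service
            queue.append(next_arrival)
            next_arrival += random.expovariate(LAM)
        t += s
        arrived = queue.pop(0)
        # Task value decays (toy: until completion), floored at zero.
        profit += max(REWARD[fid] - DECAY * (t - arrived), 0.0)
    return profit / t                      # long-run average profit rate

# A simple queue-aware heuristic: drop to low fidelity when backed up.
print(simulate(lambda qlen: "high" if qlen <= 2 else "low"))
```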
- In multi-server queueing systems where there is no central queue holding all incoming jobs, job dispatching policies are used to assign incoming jobs to the queue at one of the servers. Classic job dispatching policies such as join-the-shortest-queue and shortest expected delay assume that the service rates and queue lengths of the servers are known to the dispatcher. In this work, we tackle the problem of job dispatching without the knowledge of service rates and queue lengths, where the dispatcher can only obtain noisy estimates of the service rates by observing job departures. This problem presents a novel exploration-exploitation trade-off between sending jobs to all the servers to estimate their service rates, and exploiting the currently known fastest servers to minimize the expected queueing delay. We propose a bandit-based exploration policy that learns the service rates from observed job departures. Unlike the standard multi-armed bandit problem where only one out of a finite set of actions is optimal, here the optimal policy requires identifying the optimal fraction of incoming jobs to be sent to each server. We present a regret analysis and simulations to demonstrate the effectiveness of the proposed bandit-based exploration policy.
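The estimation ingredient described above can be sketched very compactly: service rates are learned from observed completion times, with an optimistic bonus to drive exploration. Note that the paper's actual policy learns an optimal fraction of jobs to route to each server; for brevity, this sketch routes each job greedily to the server with the highest optimistic rate estimate, and all constants are illustrative assumptions.

```python
# Sketch: learn service rates from observed completion times with a
# UCB-style bonus. Greedy routing is a simplification of the paper's
# fraction-based policy; constants are illustrative assumptions.
import math, random

random.seed(1)
TRUE_RATES = [1.0, 1.5, 0.7]          # unknown to the dispatcher

counts = [0] * len(TRUE_RATES)        # completed jobs per server
busy_time = [0.0] * len(TRUE_RATES)   # total observed service time

def ucb_rate(i, t):
    if counts[i] == 0:
        return float("inf")           # force at least one sample
    est = counts[i] / busy_time[i]    # MLE of an exponential service rate
    bonus = math.sqrt(2 * math.log(t + 1) / counts[i])
    return est + bonus

for t in range(10000):                # one arriving job per round
    i = max(range(len(TRUE_RATES)), key=lambda j: ucb_rate(j, t))
    s = random.expovariate(TRUE_RATES[i])   # observed completion time
    counts[i] += 1
    busy_time[i] += s

print([round(c / b, 3) for c, b in zip(counts, busy_time)])
```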
- Problem definition: Inspired by new developments in dynamic spectrum access, we study the dynamic pricing of wireless Internet access when demand and capacity (bandwidth) are stochastic. Academic/practical relevance: The demand for wireless Internet access has increased enormously. However, the spectrum available to wireless service providers is limited. The industry has, thus, altered conventional license-based spectrum access policies through unlicensed spectrum operations. The additional spectrum obtained through these operations has stochastic capacity. Thus, the pricing of this service by the service provider has novel challenges. The problem considered in this paper is, therefore, of high practical relevance and new to the academic literature. Methodology: We study this pricing problem using a Markov decision process model in which customers are posted dynamic prices based on their bandwidth requirement and the available capacity. Results: We characterize the structure of the optimal pricing policy as a function of the system state and of the input parameters. Because it is impossible to solve this problem for practically large state spaces, we propose a heuristic dynamic pricing policy that performs very well, particularly when the ratio of capacity to demand rate is low. Managerial implications: We demonstrate the value of using a dynamic heuristic pricing policy compared with the myopic and optimal static policies. The previous literature has studied similar systems with fixed capacity and has characterized conditions under which myopic policies perform well. In contrast, our setting has dynamic (stochastic) capacity, and we find that identifying good state-dependent heuristic pricing policies is of greater importance. Our heuristic policy is computationally more tractable and easier to implement than the optimal dynamic and static pricing policies. It also provides a significant performance improvement relative to the myopic and optimal static policies when capacity is scarce, a condition that holds for the practical setting that motivated this research.
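As a small illustration of the myopic benchmark discussed in the abstract above, the sketch below posts, for each arriving customer, the price maximizing immediate expected revenue given the bandwidth requirement and currently available capacity, and rejects requests that exceed capacity. The exponential valuation model (for which the revenue-maximizing posted price equals the mean valuation) and all constants are assumptions, not the paper's model.

```python
# Sketch of a myopic posted-price rule: price to maximize immediate
# expected revenue, ignoring future capacity and demand. The exponential
# valuation model and all constants are illustrative assumptions.

def myopic_price(bandwidth_req, available_capacity, base_mean_valuation=3.0):
    if bandwidth_req > available_capacity:
        return None                     # cannot serve this request
    # With valuation V ~ Exp(mean m), expected revenue p * P(V >= p)
    # = p * exp(-p / m) is maximized at p = m. As a crude proxy, let the
    # mean valuation scale with the requested bandwidth.
    return base_mean_valuation * bandwidth_req

print(myopic_price(bandwidth_req=2, available_capacity=5))   # -> 6.0
```

The abstract's point is precisely that such a myopic rule leaves revenue on the table when capacity is scarce, which is where state-dependent heuristic policies matter most.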
- Opioid overdose rescue is very time-sensitive. Hence, drone-delivered naloxone has the potential to be a transformative innovation due to its easily deployable and flexible nature. We formulate a Markov Decision Process (MDP) model to dispatch the appropriate drone after an overdose request arrives and to relocate the drone to its next waiting location after it has completed its current task. Since the underlying optimization problem is subject to the curse of dimensionality, we solve it using ad hoc state aggregation and evaluate it through a simulation with higher granularity. Our simulation-based comparative study is based on emergency medical service data from the state of Indiana. We compare the optimal policy resulting from the scaled-down MDP model with a myopic policy as the baseline. We consider the impact of drone type and service area type on outcomes, which offers insights into the performance of the suboptimal MDP policy under various settings.
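For contrast with the MDP policy, here is a minimal sketch of a myopic dispatch baseline like the one used for comparison above: send the nearest idle drone to each incoming request, with no anticipatory relocation. The straight-line distance metric and the data layout are illustrative assumptions.

```python
# Sketch of a myopic dispatch baseline: nearest idle drone per request,
# no anticipatory relocation. Coordinates, the straight-line distance
# metric, and the fleet data layout are illustrative assumptions.
import math

def nearest_idle_drone(request_xy, drones):
    """drones: list of dicts with an 'xy' location and an 'idle' flag."""
    idle = [d for d in drones if d["idle"]]
    if not idle:
        return None                      # no drone available right now
    return min(idle, key=lambda d: math.dist(d["xy"], request_xy))

fleet = [{"xy": (0.0, 0.0), "idle": True},
         {"xy": (5.0, 1.0), "idle": False},
         {"xy": (2.0, 2.0), "idle": True}]
print(nearest_idle_drone((3.0, 1.0), fleet))
```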