This content will become publicly available on September 1, 2025

Title: Distributionally Robust Observable Strategic Queues
This paper presents an extension of Naor’s analysis of the join-or-balk problem in observable M/M/1 queues. While all other Markovian assumptions are retained, we study this problem under uncertain arrival rates in a distributionally robust setting. We first study the problem with a classical moment ambiguity set, in which the support, mean, and mean-absolute deviation of the underlying distribution are known. Next, we extend the model to a data-driven setting in which decision makers only have access to a finite set of samples. We develop three optimal joining-threshold strategies, from the perspectives of an individual customer, a social optimizer, and a revenue maximizer, such that their respective worst-case expected benefit rates are maximized. Finally, we compare our findings with Naor’s original results and with the traditional sample average approximation scheme. Funding: This research was supported by the National Science Foundation [Grants 2342505 and 2343869].
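To make the threshold comparison concrete, the following minimal Python sketch selects the joining threshold that maximizes the worst-case social benefit rate over a small finite set of candidate arrival rates. The symbols used here (R for the service reward, C for the waiting cost per unit time, mu for the service rate, and the finite candidate set standing in for the ambiguity set) are illustrative assumptions only; the paper's moment-based and data-driven formulations are more general.

# Illustrative sketch only (not the paper's exact formulation): choose the
# joining threshold n that maximizes the worst-case social benefit rate over a
# finite set of candidate arrival rates, for an observable M/M/1 queue with
# service rate mu, service reward R, and waiting cost C per unit time.

def social_benefit_rate(n, lam, mu, R, C):
    """Net benefit per unit time when customers join iff fewer than n are
    present, i.e., an M/M/1/n loss system with arrival rate lam."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (n + 1)          # uniform stationary distribution
        mean_in_system = n / 2.0
    else:
        p_block = (1 - rho) * rho**n / (1 - rho**(n + 1))
        mean_in_system = rho / (1 - rho) - (n + 1) * rho**(n + 1) / (1 - rho**(n + 1))
    throughput = lam * (1 - p_block)
    return R * throughput - C * mean_in_system

def robust_threshold(candidate_lams, mu, R, C, n_max=50):
    """Threshold maximizing the worst-case benefit rate over candidate_lams."""
    best_n, best_val = 1, float("-inf")
    for n in range(1, n_max + 1):
        worst = min(social_benefit_rate(n, lam, mu, R, C) for lam in candidate_lams)
        if worst > best_val:
            best_n, best_val = n, worst
    return best_n, best_val

if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    print(robust_threshold(candidate_lams=[0.6, 0.8, 1.0], mu=1.0, R=10.0, C=1.0))

The same outer max-min loop can be repeated with an individual-customer or revenue objective in place of the social benefit rate to mirror the three perspectives discussed above.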
Award ID(s):
2343869 2342505
PAR ID:
10557081
Author(s) / Creator(s):
Publisher / Repository:
Stochastic Systems
Date Published:
Journal Name:
Stochastic Systems
Volume:
14
Issue:
3
ISSN:
1946-5238
Page Range / eLocation ID:
337 to 361
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the optimal control problem in stochastic queueing networks with a set of job dispatchers connected to a set of parallel servers with queues. Jobs arrive at the dispatchers and get routed to the servers following some routing policy. The arrival processes of jobs and the service processes of servers are stochastic with unknown arrival rates and service rates. Upon the completion of each job from dispatcher u_n at server s_m, a random utility whose mean is unknown is obtained. We seek to design a control policy that makes routing decisions at the dispatchers and scheduling decisions at the servers to maximize the total utility obtained by the end of a finite time horizon T. The performance of policies is measured by regret, which is defined as the difference in total expected utility with respect to the optimal dynamic policy that has access to the arrival rates, service rates, and underlying utilities. We first show that the expected utility of the optimal dynamic policy is upper bounded by T times the solution to a static linear program, where the optimization variables correspond to rates of jobs from dispatchers to servers and the feasibility region is parameterized by the arrival rates and service rates (a toy version of such a static LP is sketched after this list). We next propose a policy for the optimal control problem that integrates a learning algorithm and a control policy. The learning algorithm seeks to learn the optimal extreme-point solution to the static linear program based on the information available in the optimal control problem. The control policy, a mixture of priority-based and Join-the-Shortest-Queue routing at the dispatchers and priority-based scheduling at the servers, makes decisions based on the graphical structure induced by the extreme-point solutions provided by the learning algorithm. We prove that our policy achieves logarithmic regret, whereas applying existing techniques to the optimal control problem would lead to Ω(√T) regret. The theoretical analysis is further complemented with simulations to evaluate the empirical performance of our policy.
  2. Kráľovič, Rastislav; Kučera, Antonín (Ed.)
    Given a set P of n points and a set S of m disks in the plane, the disk hitting set problem asks for a smallest subset of P such that every disk of S contains at least one point in the subset. The problem is NP-hard. This paper considers a line-constrained version in which all disks have their centers on a line. We present an O(m log² n + (n+m) log(n+m)) time algorithm for the problem. This improves the previous result of O(m² log m + (n+m) log(n+m)) time for the weighted case of the problem, where every point of P has a weight and the objective is to minimize the total weight of the hitting set. Our algorithm also solves a more general line-separable problem with a single intersection property: the points of P and the disk centers are separated by a line 𝓁, and the boundaries of every two disks intersect at most once on the side of 𝓁 containing P.
  3. Given data drawn from an unknown distribution, D, to what extent is it possible to "amplify" this dataset and faithfully output an even larger set of samples that appear to have been drawn from D? We formalize this question as follows: an (n,m) amplification procedure takes as input n independent draws from an unknown distribution D, and outputs a set of m > n "samples" which must be indistinguishable from m samples drawn iid from D. We consider this sample amplification problem in two fundamental settings: the case where D is an arbitrary discrete distribution supported on k elements, and the case where D is a d-dimensional Gaussian with unknown mean and fixed covariance matrix. Perhaps surprisingly, we show that a valid amplification procedure exists for both of these settings, even in the regime where the size of the input dataset, n, is significantly less than what would be necessary to learn the distribution D to non-trivial accuracy. We also show that our procedures are optimal up to constant factors. Beyond these results, we describe potential applications of such data amplification and formalize a number of curious directions for future research along this vein. (A naive baseline illustrating the (n, m) amplification interface is sketched after this list.)
  4. Iwata, Satoru; Kakimura, Naonori (Ed.)
    Given a set P of n points and a set S of m disks in the plane, the disk coverage problem asks for a smallest subset of disks that together cover all points of P. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of P by a line 𝓁. We present an m^{2/3} n^{2/3} 2^O(log^*(m+n)) + O((n+m) log(n+m)) time algorithm for the problem. This improves the previously best result of O(nm + n log n) time. Our techniques also solve the line-constrained version of the problem, where centers of all disks of S are located on a line 𝓁 while points of P can be anywhere in the plane. Our algorithm runs in O(m√n + (n+m) log(n+m)) time, which improves the previously best result of O(nm log(m+n)) time. In addition, our results lead to an algorithm of n^{10/3} 2^O(log^*n) time for a half-plane coverage problem (given n half-planes and n points, find a smallest subset of half-planes covering all points); this improves the previously best algorithm of O(n⁴ log n) time. Further, if all half-planes are lower ones, our algorithm runs in n^{4/3} 2^O(log^*n) time, while the previously best algorithm takes O(n² log n) time.
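Sketch referenced in item 1 above: a toy static linear program of the kind that abstract describes, in which the variables are job rates from dispatchers to servers and the feasible region is bounded by arrival rates and service rates. The symbols lam (arrival rates), mu (service rates), and w (mean utilities), and the exact constraints, are assumptions for illustration; the LP analyzed in the paper may differ in its details.

# Toy static LP: maximize total mean utility rate sum_{n,m} w[n,m] * x[n,m]
# subject to sum_m x[n,m] <= lam[n] (dispatcher arrival rates) and
# sum_n x[n,m] <= mu[m] (server service rates), with x >= 0.
import numpy as np
from scipy.optimize import linprog

def static_lp_value(lam, mu, w):
    N, M = w.shape
    c = -w.reshape(-1)                     # linprog minimizes, so negate
    A_ub = np.zeros((N + M, N * M))
    for n in range(N):                     # row sums: rates out of dispatcher n
        A_ub[n, n * M:(n + 1) * M] = 1.0
    for m in range(M):                     # column sums: rates into server m
        A_ub[N + m, m::M] = 1.0
    b_ub = np.concatenate([lam, mu])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return -res.fun

# Made-up rates; the optimal dynamic policy's expected utility over horizon T
# would then be upper bounded by T * static_lp_value(lam, mu, w).
print(static_lp_value(np.array([1.0, 2.0]),
                      np.array([1.5, 1.0, 0.5]),
                      np.array([[1.0, 0.5, 0.2], [0.3, 1.2, 0.8]])))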
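Sketch referenced in item 3 above: a naive baseline that makes the (n, m) amplification interface concrete by returning the n input samples together with m - n bootstrap draws from the empirical distribution. This is only an illustration of the interface; it is not claimed to be the (optimal) procedure analyzed in that work.

# Naive (n, m) "amplification" baseline for a discrete distribution: output the
# n originals plus m - n draws from the empirical distribution, shuffled.
import random

def amplify(samples, m, rng=random):
    n = len(samples)
    if m < n:
        raise ValueError("m must be at least n")
    extra = [rng.choice(samples) for _ in range(m - n)]
    out = list(samples) + extra
    rng.shuffle(out)
    return out

print(amplify(["a", "b", "a", "c"], m=6))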