Title: A general framework for handling commitment in online throughput maximization
Abstract: We study a fundamental online job admission problem where jobs with deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of jobs that complete on time. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least $1+\varepsilon$ times its processing time, for some $\varepsilon > 0$. We quantify the impact that different provider commitment requirements have on the performance of online algorithms. Our main contribution is one universal algorithmic framework for online job admission both with and without commitments. Without commitment, our algorithm with a competitive ratio of $\mathcal{O}(1/\varepsilon)$ is the best possible (deterministic) for this problem. For commitment models, we give the first non-trivial performance bounds. If the commitment decisions must be made before a job's slack becomes less than a $\delta$-fraction of its size, we prove a competitive ratio of $\mathcal{O}(\varepsilon/((\varepsilon-\delta)\delta^2))$, for $0 < \delta < \varepsilon$. When a provider must commit upon starting a job, our bound is $\mathcal{O}(1/\varepsilon^2)$. Finally, we observe that for scheduling with commitment the restriction to the "unweighted" throughput model is essential; if jobs have individual weights, we rule out competitive deterministic algorithms.
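As an illustration of the two admission conditions the abstract describes (this sketch is ours, not the paper's algorithm; the `Job` fields and helper names are hypothetical), a job $j$ with release date $r_j$, deadline $d_j$, and processing time $p_j$ satisfies the slackness assumption iff $d_j - r_j \ge (1+\varepsilon)p_j$, and under the $\delta$-commitment model the provider must decide while the remaining slack is still at least $\delta p_j$:

```python
# Illustrative sketch of the admission conditions from the abstract;
# NOT the paper's scheduling algorithm.
from dataclasses import dataclass


@dataclass
class Job:
    release: float   # r_j: arrival time
    deadline: float  # d_j: latest allowed completion time
    size: float      # p_j: processing time


def has_slack(job: Job, eps: float) -> bool:
    """Slackness assumption: the feasible window is at least (1+eps) * p_j."""
    return job.deadline - job.release >= (1 + eps) * job.size


def can_still_commit(job: Job, now: float, delta: float) -> bool:
    """delta-commitment: the remaining slack (d_j - now - p_j) must still be
    at least a delta-fraction of the job's size when committing."""
    return job.deadline - now - job.size >= delta * job.size
```

For example, a unit-size job with window $[0, 2]$ has slack 1, so it satisfies the assumption for any $\varepsilon \le 1$, and with $\delta = 0.4$ commitment is still possible at time $0.5$ (remaining slack $0.5$) but not at time $0.7$ (remaining slack $0.3$).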
Award ID(s): 1714818
PAR ID: 10310212
Journal Name: Mathematical Programming
Volume: 183
Issue: 1-2
ISSN: 0025-5610
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For (1+ε)-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with O((log n)/ε) update and O(log n) query worst-case time. Further, we design a local computation algorithm that uses only O((log N)/ε) queries when all jobs have length at least 1 and have starting/ending times within [0,N]. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query times are poly(log n, 1/ε). Equivalently, this is the first algorithm that maintains a (1+ε)-approximation of the maximum independent set of a collection of weighted intervals with poly(log n, 1/ε) time updates/queries. This is an exponential improvement in 1/ε over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness. We also extend our approaches for interval scheduling on a single machine to examine the setting with M machines.
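As background for the offline, unweighted single-machine case that these dynamic algorithms approximate, the classic earliest-finish-time greedy computes an optimal schedule exactly; this sketch is ours and is not the paper's dynamic data structure:

```python
def max_compatible_intervals(intervals):
    """Earliest-finish-time greedy: returns a maximum-size set of pairwise
    non-overlapping intervals (the textbook offline interval scheduling
    algorithm, optimal for the unweighted single-machine case)."""
    chosen = []
    last_end = float("-inf")
    # Scan intervals in order of ending time; keep each one that starts
    # no earlier than the last chosen interval ends.
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen
```

On the instance [(0, 3), (2, 5), (4, 7), (6, 9)] the greedy keeps (0, 3) and (4, 7), which is optimal. The dynamic and local algorithms in the abstract maintain a (1+ε)-approximation of this quantity under insertions and deletions rather than recomputing it from scratch.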
  4. Let $\{s_j\}_{j=1}^{n}$ be positive integers. We show that for any $1 \le L \le n$, $$\left\Vert \prod_{j=1}^{n} \left(1 - z^{s_j}\right) \right\Vert_{L_\infty(|z|=1)} \ge \exp\left( \frac{1}{2e} \frac{L}{\left(s_1 s_2 \ldots s_L\right)^{1/L}} \right).$$ In particular, this gives geometric growth if a positive proportion of the $\{s_j\}$ are bounded. We also show that when the $\{s_j\}$ grow regularly and faster than $j (\log j)^{2+\varepsilon}$, some $\varepsilon > 0$, then the norms grow faster than $\exp\left((\log n)^{1+\delta}\right)$ for some $\delta > 0$.
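The stated lower bound can be sanity-checked numerically (our illustration, not from the paper): sampling the product on a finite grid of roots of unity only under-estimates the sup norm on $|z| = 1$, so whenever the sampled maximum already exceeds the bound, the inequality is confirmed for that instance.

```python
import cmath
import math


def norm_lower_estimate(s, m=1024):
    """Max of |prod_j (1 - z^{s_j})| over the m-th roots of unity.
    A finite sample, hence a lower estimate of the sup norm on |z| = 1."""
    best = 0.0
    for k in range(m):
        z = cmath.exp(2j * math.pi * k / m)
        p = complex(1.0)
        for sj in s:
            p *= 1 - z ** sj
        best = max(best, abs(p))
    return best


def stated_bound(s, L):
    """exp( (1/(2e)) * L / (s_1 ... s_L)^{1/L} ) for the first L exponents."""
    geo_mean = math.prod(s[:L]) ** (1.0 / L)
    return math.exp(L / (2 * math.e * geo_mean))
```

For instance, with $s = (1, 1, 1, 1)$ and $L = 4$ the bound is $\exp(2/e) \approx 2.09$, while the product $(1-z)^4$ attains $16$ at $z = -1$.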
  5. Abstract: We rigorously justify the mean-field limit of an $N$-particle system subject to Brownian motions and interacting through the Newtonian potential in $\mathbb{R}^3$. Our result leads to a derivation of the Vlasov–Poisson–Fokker–Planck (VPFP) equations from the regularized microscopic $N$-particle system. More precisely, we show that the maximal distance between the exact microscopic trajectories and the mean-field trajectories is bounded by $N^{-\frac{1}{3}+\varepsilon}$ ($\frac{1}{63} \le \varepsilon < \frac{1}{36}$) with a blob size of $N^{-\delta}$ ($\frac{1}{3} \le \delta < \frac{19}{54} - \frac{2\varepsilon}{3}$) up to a probability of $1 - N^{-\alpha}$ for any $\alpha > 0$. Moreover, we prove the convergence rate between the empirical measure associated to the regularized particle system and the solution of the VPFP equations. The technical novelty of this paper is that our estimates rely on the randomness coming from the initial data and from the Brownian motions.