Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For (1+ε)-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with O((log n)/ε) update and O(log n) query worst-case time. Further, we design a local computation algorithm that uses only O((log N)/ε) queries when all jobs have length at least 1 and have starting/ending times within [0,N]. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query time are poly(log n, 1/ε). Equivalently, this is the first algorithm that maintains a (1+ε)-approximation of the maximum independent set of a collection of weighted intervals with poly(log n, 1/ε) update/query time. This is an exponential improvement in 1/ε over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness. We also extend our approaches for interval scheduling on a single machine to examine the setting with M machines.
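The dynamic and local algorithms above maintain approximations of a problem whose offline version is the classical weighted interval scheduling task: pick a maximum-reward set of pairwise non-overlapping intervals. As background, here is a minimal offline sketch of that baseline (the standard textbook dynamic program, not the paper's data structure; the function name and input format are illustrative):

```python
import bisect

def max_weight_schedule(jobs):
    """Offline weighted interval scheduling: jobs is a list of
    (start, end, weight) triples; returns the maximum total weight
    of pairwise non-overlapping intervals."""
    jobs = sorted(jobs, key=lambda j: j[1])          # order by finish time
    ends = [j[1] for j in jobs]
    best = [0] * (len(jobs) + 1)                     # best[i] = optimum over first i jobs
    for i, (s, e, w) in enumerate(jobs, start=1):
        # index of the last already-sorted job finishing by time s
        p = bisect.bisect_right(ends, s, 0, i - 1)
        best[i] = max(best[i - 1],                   # skip job i
                      best[p] + w)                   # or take it
    return best[-1]
```

This runs in O(n log n) overall; the paper's contribution is maintaining a (1+ε)-approximate answer under insertions and deletions without recomputing this DP from scratch.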
A general framework for handling commitment in online throughput maximization
Abstract: We study a fundamental online job admission problem where jobs with deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of jobs that complete on time. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least $$1+\varepsilon$$ times its processing time, for some $$\varepsilon > 0$$. We quantify the impact that different provider commitment requirements have on the performance of online algorithms. Our main contribution is one universal algorithmic framework for online job admission both with and without commitments. Without commitment, our algorithm with a competitive ratio of $$\mathcal{O}(1/\varepsilon)$$ is the best possible (deterministic) for this problem. For commitment models, we give the first non-trivial performance bounds. If the commitment decisions must be made before a job's slack becomes less than a $$\delta$$-fraction of its size, we prove a competitive ratio of $$\mathcal{O}(\varepsilon/((\varepsilon-\delta)\delta^2))$$, for $$0<\delta<\varepsilon$$. When a provider must commit upon starting a job, our bound is $$\mathcal{O}(1/\varepsilon^2)$$. Finally, we observe that for scheduling with commitment the restriction to the "unweighted" throughput model is essential; if jobs have individual weights, we rule out competitive deterministic algorithms.
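The slackness assumption and the δ-commitment condition from the abstract are simple arithmetic predicates. A minimal sketch, with illustrative function names and signatures that are not from the paper:

```python
def has_slack(release, deadline, proc, eps):
    """Slackness assumption: the feasible window [release, deadline]
    must be at least (1 + eps) times the processing time."""
    return deadline - release >= (1.0 + eps) * proc

def delta_commitment_ok(remaining_window, proc, delta):
    """Commitment condition: a job may still be committed while its
    remaining slack (window minus processing time) is at least a
    delta fraction of its size."""
    return remaining_window - proc >= delta * proc
```

Under the slackness assumption every admitted job initially satisfies the commitment condition for any $$\delta < \varepsilon$$; the difficulty the paper addresses is deciding, online, which jobs to commit to before their slack erodes.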
 Award ID(s):
 1714818
 NSF-PAR ID:
 10310212
 Date Published:
 Journal Name:
 Mathematical Programming
 Volume:
 183
 Issue:
 1-2
 ISSN:
 0025-5610
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this



Abstract: We rigorously justify the mean-field limit of an $$N$$-particle system subject to Brownian motions and interacting through the Newtonian potential in $${\mathbb{R}}^3$$. Our result leads to a derivation of the Vlasov–Poisson–Fokker–Planck (VPFP) equations from the regularized microscopic $$N$$-particle system. More precisely, we show that the maximal distance between the exact microscopic trajectories and the mean-field trajectories is bounded by $$N^{-\frac{1}{3}+\varepsilon}$$ ($$\frac{1}{63}\le \varepsilon <\frac{1}{36}$$) with a blob size of $$N^{-\delta}$$ ($$\frac{1}{3}\le \delta <\frac{19}{54}-\frac{2\varepsilon}{3}$$) up to a probability of $$1-N^{-\alpha}$$ for any $$\alpha >0$$. Moreover, we prove the convergence rate between the empirical measure associated to the regularized particle system and the solution of the VPFP equations. The technical novelty of this paper is that our estimates rely on the randomness coming from the initial data and from the Brownian motions.

Abstract: We show how to construct a $$(1+\varepsilon)$$-spanner over a set $$P$$ of $$n$$ points in $${\mathbb{R}}^d$$ that is resilient to a catastrophic failure of nodes. Specifically, for prescribed parameters $$\vartheta, \varepsilon \in (0,1)$$, the computed spanner $$G$$ has $$\mathcal{O}\bigl(\varepsilon^{-O(d)}\vartheta^{-6} n (\log\log n)^6 \log n\bigr)$$ edges. Furthermore, for any $$k$$, and any deleted set $$B\subseteq P$$ of $$k$$ points, the residual graph $$G\setminus B$$ is a $$(1+\varepsilon)$$-spanner for all the points of $$P$$ except for $$(1+\vartheta)k$$ of them. No previous constructions, beyond the trivial clique with $$\mathcal{O}(n^2)$$ edges, were known with this resilience property (i.e., only a tiny additional fraction of vertices, $$\vartheta|B|$$, lose their distance-preserving connectivity). Our construction works by first solving the exact problem in one dimension, and then showing a surprisingly simple and elegant construction in higher dimensions that uses the one-dimensional construction in a black-box fashion.

Aardal, Karen; Sanita, Laura (Ed.) This paper considers the basic problem of scheduling jobs online with preemption to maximize the number of jobs completed by their deadline on m identical machines. The main result is an O(1)-competitive deterministic algorithm for any number of machines m > 1.
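For context, the offline single-machine special case of this objective, with all jobs released at time 0, is solved exactly by the classical Moore–Hodgson rule. A short sketch of that baseline (not the paper's online multi-machine algorithm):

```python
import heapq

def max_on_time(jobs):
    """Moore-Hodgson rule: jobs is a list of (processing_time, deadline)
    pairs, all released at time 0, one machine.  Returns the maximum
    number of jobs that can be completed by their deadlines."""
    heap = []   # max-heap (via negation) of processing times of kept jobs
    t = 0       # completion time of the current tentative schedule
    for p, d in sorted(jobs, key=lambda j: j[1]):   # earliest deadline first
        heapq.heappush(heap, -p)
        t += p
        if t > d:                      # schedule became late:
            t += heapq.heappop(heap)   # drop the longest kept job (adds -p_max)
    return len(heap)                   # kept jobs all finish on time
```

The online setting of the paper is harder precisely because release dates arrive over time and a deterministic algorithm cannot revisit rejection decisions this way.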