-
Salakhutdinov, Ruslan; Kolter, Zico; Heller, Katherine; Weller, Adrian; Oliver, Nuria; Scarlett, Jonathan; Berkenkamp, Felix (Eds.)
This paper leverages the framework of algorithms-with-predictions to design data structures for two fundamental dynamic graph problems: incremental topological ordering and cycle detection. In these problems, the input is a directed graph on n nodes, and the m edges arrive one by one. The data structure must maintain a topological ordering of the vertices at all times and detect whether a newly inserted edge creates a cycle. The theoretically best worst-case algorithms for these problems have high update cost (polynomial in n and m). In practice, greedy heuristics (which recompute the solution from scratch each time) perform well but can have high update cost in the worst case. In this paper, we bridge this gap by leveraging predictions to design a new learned data structure for these problems. Our data structure guarantees consistency, robustness, and smoothness with respect to predictions: it has the best possible running time under perfect predictions, never performs worse than the best-known worst-case methods, and its running time degrades smoothly with the prediction error. Moreover, we demonstrate empirically that predictions, learned from a very small training dataset, are sufficient to provide significant speed-ups on real datasets.
Free, publicly-accessible full text available January 3, 2026
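For concreteness, a minimal Python sketch of the recompute-from-scratch greedy baseline mentioned in the abstract is given below. It only illustrates the problem interface (insert an edge, maintain an order, report cycles); the class and method names are invented for this sketch, and it is not the learned data structure proposed in the paper.

```python
from collections import defaultdict, deque

class IncrementalTopo:
    """Illustrative baseline: recompute a topological order from scratch
    whenever a newly inserted edge violates the current order."""

    def __init__(self, n):
        self.n = n
        self.adj = defaultdict(set)
        self.order = {v: v for v in range(n)}  # any order is valid for an empty graph

    def insert_edge(self, u, v):
        """Insert edge u -> v; return False if the edge would close a cycle."""
        self.adj[u].add(v)
        if self.order[u] < self.order[v]:
            return True                    # current order still valid
        new_order = self._recompute()      # greedy: rebuild from scratch
        if new_order is None:
            self.adj[u].discard(v)         # reject the cycle-creating edge
            return False
        self.order = new_order
        return True

    def _recompute(self):
        """Kahn's algorithm; returns a topological order or None on a cycle."""
        indeg = {v: 0 for v in range(self.n)}
        for u in self.adj:
            for w in self.adj[u]:
                indeg[w] += 1
        queue = deque(v for v in range(self.n) if indeg[v] == 0)
        pos, order = 0, {}
        while queue:
            v = queue.popleft()
            order[v] = pos
            pos += 1
            for w in self.adj[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return order if pos == self.n else None   # None means a cycle was detected
```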
-
Abstract: The configuration balancing problem with stochastic requests generalizes well-studied resource allocation problems such as load balancing and virtual circuit routing. There are m resources and n requests; each request has multiple possible configurations, each of which increases the load of each resource by some amount. The goal is to select one configuration for each request to minimize the makespan: the load of the most-loaded resource. In the stochastic setting, the amount by which a configuration increases the resource load is uncertain until the configuration is chosen, but we are given a probability distribution. We develop both offline and online algorithms for configuration balancing with stochastic requests. When the requests are known offline, we give a non-adaptive policy for configuration balancing with stochastic requests that $$O(\frac{\log m}{\log \log m})$$-approximates the optimal adaptive policy, which matches a known lower bound for the special case of load balancing on identical machines. When requests arrive online in a list, we give a non-adaptive policy that is $$O(\log m)$$-competitive. Again, this result is asymptotically tight due to information-theoretic lower bounds for special cases (e.g., for load balancing on unrelated machines). Finally, we show how to leverage adaptivity in the special case of load balancing on related machines to obtain a constant-factor approximation offline and an $$O(\log \log m)$$-approximation online. A crucial technical ingredient in all of our results is a new structural characterization of the optimal adaptive policy that allows us to limit the correlations between its decisions.
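As a rough illustration of the problem setup (not the paper's policy), the sketch below greedily assigns each request the configuration that minimizes the current expected makespan. The function name and the dict-based configuration encoding are assumptions made for this example.

```python
def greedy_configuration_balancing(num_resources, requests):
    """Pick one configuration per request by greedily minimizing the current
    expected makespan. Each request is a list of configurations; each
    configuration maps resource index -> expected load increase.
    This is a toy baseline, not the approximation policy from the paper."""
    load = [0.0] * num_resources
    choices = []
    for configs in requests:
        def makespan_after(cfg):
            return max(load[i] + cfg.get(i, 0.0) for i in range(num_resources))
        best = min(range(len(configs)), key=lambda j: makespan_after(configs[j]))
        for i, x in configs[best].items():
            load[i] += x
        choices.append(best)
    return choices, max(load)

# Example: 2 resources; each request can place its expected load on either one,
# which recovers load balancing as a special case.
reqs = [[{0: 1.0}, {1: 1.0}] for _ in range(4)]
print(greedy_configuration_balancing(2, reqs))
```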
-
Abstract: This paper considers approximation algorithms for generalized k-median problems. These problems can be informally described as k-median with a constant number of extra constraints, and include k-median with outliers and knapsack median. Our first contribution is a pseudo-approximation algorithm for generalized k-median that outputs a 6.387-approximate solution with a constant number of fractional variables. The algorithm builds on the iterative rounding framework introduced by Krishnaswamy, Li, and Sandeep for k-median with outliers (Krishnaswamy et al., in: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, 2018). The main technical innovation is allowing richer constraint sets in the iterative rounding and using the structure of the resulting extreme points. Using our pseudo-approximation algorithm, we give improved approximation algorithms for k-median with outliers and knapsack median. This involves combining our pseudo-approximation with pre- and post-processing steps to round a constant number of fractional variables at a small increase in cost. Our algorithms achieve approximation ratios of $$6.994 + \epsilon$$ and $$6.387 + \epsilon$$ for k-median with outliers and knapsack median, respectively. These improve on the best-known approximation ratio of $$7.081 + \epsilon$$ for both problems (Krishnaswamy et al., 2018).
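To make the objective being approximated concrete, here is a small sketch that evaluates the k-median-with-outliers cost of a candidate set of centers: every point pays the distance to its nearest open center, and the most expensive connections are discarded as outliers. It only evaluates a solution; it is not the paper's iterative-rounding algorithm, and the function name and example data are assumptions.

```python
def k_median_outliers_cost(points, centers, num_outliers, dist):
    """Cost of a candidate solution for k-median with outliers: connect every
    point to its nearest open center, then drop the num_outliers most
    expensive connections."""
    conn = sorted(min(dist(p, c) for c in centers) for p in points)
    kept = conn[:len(conn) - num_outliers]
    return sum(kept)

# Example on the real line with absolute-value distance (assumed setup).
pts = [0.0, 1.0, 2.0, 50.0]
print(k_median_outliers_cost(pts, centers=[1.0], num_outliers=1,
                             dist=lambda a, b: abs(a - b)))   # 1 + 0 + 1 = 2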
-
Abstract: This paper considers the basic problem of scheduling jobs online with preemption to maximize the number of jobs completed by their deadline on m identical machines. The main result is an O(1)-competitive deterministic algorithm for any number of machines $$m > 1$$.
Free, publicly-accessible full text available July 1, 2025
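For intuition about the objective, the sketch below simulates a natural preemptive Earliest-Deadline-First baseline on m identical machines in unit time steps and counts jobs finished by their deadlines. EDF is used here only as a familiar point of comparison; it is not the O(1)-competitive algorithm from the paper, and the function name and job encoding are assumptions.

```python
def edf_completed(jobs, m, horizon):
    """Jobs are (release, deadline, processing) triples. At each unit step,
    run the m released, unfinished jobs with the earliest deadlines and
    count those that finish before their deadline."""
    remaining = {j: p for j, (_, _, p) in enumerate(jobs)}
    done = 0
    for t in range(horizon):
        alive = [j for j, (r, d, _) in enumerate(jobs)
                 if r <= t < d and remaining[j] > 0]
        alive.sort(key=lambda j: jobs[j][1])   # earliest deadline first
        for j in alive[:m]:                    # run up to m jobs this step
            remaining[j] -= 1
            if remaining[j] == 0:
                done += 1
    return done

print(edf_completed([(0, 3, 2), (0, 2, 1), (1, 4, 2)], m=1, horizon=5))  # 2
```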
-
A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Θ(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong theoretical guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure performs well on numerous real datasets, including temporal datasets where predictions are constructed from elements that arrived in the past (as is typically done in a practical use case).
Free, publicly-accessible full text available May 30, 2025
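The cost model (number of relabels per insertion) can be illustrated with a deliberately naive baseline that stores the items densely and shifts everything to the right of the insertion point. This simplification ignores the gaps a real Θ(n)-size labeled array would leave, and it is not the learned data structure from the paper; the class name is invented for this sketch.

```python
import bisect

class NaiveListLabeling:
    """Keeps arriving items in sorted order in a dense array and counts how
    many stored items change slots (are relabeled) per insertion."""

    def __init__(self):
        self.slots = []      # slot index = label
        self.relabels = 0

    def insert(self, x):
        i = bisect.bisect_left(self.slots, x)
        self.slots.insert(i, x)
        # every item after position i shifts one slot to the right
        self.relabels += len(self.slots) - i - 1

ds = NaiveListLabeling()
for x in [5, 1, 3, 2, 4]:
    ds.insert(x)
print(ds.relabels)   # total moves; Θ(n) per insert in the worst case for this baseline
```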
-
The feedback arc set problem is one of the most fundamental and well-studied ranking problems, where n objects are to be ordered based on their pairwise comparisons. The problem enjoys several efficient approximation algorithms in the offline setting. Unfortunately, online there are strong lower bounds on the competitive ratio establishing that no algorithm can perform well in the worst case. This paper introduces a new beyond-worst-case model for online feedback arc set. In the model, a sample of the input is given to the algorithm offline before the remaining instance is revealed online. This models the case in practice where yesterday's data is available and is similar to today's online instance. The sample is drawn from a known distribution, which may not be uniform. We design an online algorithm with strong theoretical guarantees. The algorithm has a small constant competitive ratio when the sample is uniform; if not, we show we can recover the same result by adding a provably minimal sample. Empirical results validate the theory and show that such algorithms can be used on temporal data to obtain strong results.
Free, publicly-accessible full text available March 25, 2025
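To make the ranking objective concrete, the sketch below counts how many pairwise comparisons a given ordering violates (its feedback arc set cost) and produces an ordering with a simple win-count heuristic. Both the evaluation function and the heuristic are illustrative assumptions, not the sample-based online algorithm analyzed in the paper.

```python
def back_arcs(order, comparisons):
    """Number of pairwise comparisons an ordering violates, i.e., the
    feedback arc set cost of that ordering. comparisons holds (u, v)
    pairs meaning u was preferred to v."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for u, v in comparisons if pos[u] > pos[v])

# Toy instance: rank 3 objects by how often they win their comparisons
# (a simple offline heuristic, used here only to produce some ordering).
cmps = [(0, 1), (1, 2), (0, 2), (2, 0)]
wins = {v: 0 for v in range(3)}
for u, _ in cmps:
    wins[u] += 1
order = sorted(range(3), key=lambda v: -wins[v])
print(order, back_arcs(order, cmps))   # e.g. [0, 1, 2] with cost 1
```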