First-Order Methods for Problems with $O(1)$ Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems
- Award ID(s):
- 2053493
- PAR ID:
- 10406931
- Date Published:
- Journal Name:
- SIAM Journal on Optimization
- Volume:
- 32
- Issue:
- 3
- ISSN:
- 1052-6234
- Page Range / eLocation ID:
- 1759 to 1790
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This article presents universal algorithms for clustering problems, including the widely studied k-median, k-means, and k-center objectives. The input is a metric space containing all potential client locations. The algorithm must select k cluster centers such that they are a good solution for any subset of clients that actually realize. Specifically, we aim for low regret, defined as the maximum over all subsets of the difference between the cost of the algorithm's solution and that of an optimal solution. A universal algorithm's solution sol for a clustering problem is said to be an (α, β)-approximation if for all subsets of clients C′, it satisfies sol(C′) ≤ α · opt(C′) + β · mr, where opt(C′) is the cost of the optimal solution for clients C′ and mr is the minimum regret achievable by any solution. Our main results are universal algorithms for the standard clustering objectives of k-median, k-means, and k-center that achieve (O(1), O(1))-approximations. These results are obtained via a novel framework for universal algorithms using linear programming (LP) relaxations. These results generalize to other ℓp-objectives and to the setting where some subset of the clients are fixed. We also give hardness results showing that (α, β)-approximation is NP-hard if α or β is at most a certain constant, even for the widely studied special case of Euclidean metric spaces. This shows that, in some sense, (O(1), O(1))-approximation is the strongest type of guarantee obtainable for universal clustering.
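To make the regret objective concrete, here is a minimal brute-force sketch in Python. It is not from the article: the function names `kmedian_cost` and `regret` are hypothetical, candidate centers are assumed to be drawn from the client locations themselves, and the exhaustive enumeration is only feasible for toy instances.

```python
from itertools import combinations
import math

def dist(a, b):
    # Euclidean distance between two points given as coordinate tuples
    return math.dist(a, b)

def kmedian_cost(centers, clients):
    # k-median cost: each client pays the distance to its nearest center
    return sum(min(dist(c, f) for f in centers) for c in clients)

def regret(centers, points, k):
    # Regret of a fixed solution `centers`: the worst case, over all
    # nonempty client subsets C', of cost(centers, C') - opt(C'),
    # where opt(C') is found by brute force over all k-subsets of points.
    worst = 0.0
    for r in range(1, len(points) + 1):
        for clients in combinations(points, r):
            opt = min(kmedian_cost(cand, clients)
                      for cand in combinations(points, k))
            worst = max(worst, kmedian_cost(centers, clients) - opt)
    return worst

# Tiny example: 4 potential client locations, k = 2.
points = [(0, 0), (0, 1), (5, 0), (5, 1)]
print(regret([(0, 0), (5, 0)], points, 2))  # worst subset is {(0,1), (5,1)}
```

A universal algorithm in the article's sense would choose `centers` so that this worst-case quantity is within constant factors of the minimum regret achievable by any solution.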
-
We introduce a representation for generating explanations for the outcomes of combinatorial optimization algorithms. The two key ideas are (A) to maintain fine-grained representations of the values manipulated by these algorithms and (B) to derive explanations from these representations through merge, filter, and aggregation operations. An explanation in our model is essentially a high-level comparison of the solution to a problem with a hypothesized alternative, illuminating why the solution is better than the alternative. Our value representation yields explanations smaller than other dynamic-program representations, such as traces. Based on a measure of the conciseness of explanations, we demonstrate through a number of experiments that the explanations produced by our approach are small and scale well with problem size across a number of different applications.
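As a rough illustration of the merge/filter/aggregate idea, the sketch below keeps a solution's value as a fine-grained mapping from labeled decision components to costs rather than a single number. All names and the routing example are hypothetical; this is one plausible reading of the abstract, not the authors' implementation.

```python
def explain(solution, alternative):
    """Compare two value decompositions: filter out components the two
    solutions share, then aggregate what remains on each side into a
    short account of why `solution` beats `alternative`."""
    shared = solution.keys() & alternative.keys()
    only_sol = {k: v for k, v in solution.items() if k not in shared}
    only_alt = {k: v for k, v in alternative.items() if k not in shared}
    # Aggregate the filtered components into a single headline number.
    savings = sum(only_alt.values()) - sum(only_sol.values())
    return {"kept": only_sol, "avoided": only_alt, "savings": savings}

# Hypothetical routing example: edge -> cost decompositions of two routes.
route = {("a", "b"): 2, ("b", "d"): 3}
alt   = {("a", "c"): 4, ("c", "d"): 5}
print(explain(route, alt))
# The chosen route is better because it avoids edges (a,c) and (c,d),
# saving 9 - 5 = 4 in total cost.
```

The point of the fine-grained representation is that an explanation like "savings: 4, because edges (a,c) and (c,d) were avoided" is far smaller than a full execution trace of the optimizer.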