Search for: All records

Award ID contains: 1718820

  1. Bringmann, Karl; Grohe, Martin; Puppis, Gabriele; Svensson, Ola (Ed.)
    We present polylogarithmic approximation algorithms for variants of the Shortest Path, Group Steiner Tree, and Group ATSP problems with vector costs. In these problems, each edge e has a vector cost c_e ∈ ℝ_{≥0}^𝓁. For a feasible solution (a path, subtree, or tour, respectively), we take the total vector cost of all the edges in the solution and then compute the 𝓁_p-norm of the resulting cost vector (we assume that p ≥ 1 is an integer). Our algorithms for series-parallel graphs run in polynomial time, and those for arbitrary graphs run in quasi-polynomial time. To obtain our results, we introduce and use new flow-based Sum-of-Squares relaxations. We also obtain a number of hardness results. 
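To make the vector-cost objective of entry 1 concrete, here is a minimal Python sketch that evaluates the 𝓁_p cost of a candidate solution. It is illustrative only: the edge representation and all names are assumptions, and it implements the objective, not the paper's algorithms.

```python
# Illustrative only: evaluates the objective described in entry 1, not the
# paper's approximation algorithm. Function and variable names are assumptions.
import numpy as np

def vector_cost_objective(edge_costs, solution_edges, p=2):
    """edge_costs: dict mapping an edge to its nonnegative cost vector in R^l.
    solution_edges: edges of a candidate path, subtree, or tour.
    Returns the l_p-norm of the summed vector cost (p >= 1 an integer)."""
    total = sum(np.asarray(edge_costs[e], dtype=float) for e in solution_edges)
    return float(np.linalg.norm(total, ord=p))

# Example: a three-edge path with two-dimensional edge costs.
costs = {("s", "a"): [1.0, 0.0], ("a", "b"): [0.5, 2.0], ("b", "t"): [0.0, 1.0]}
print(vector_cost_objective(costs, costs.keys(), p=2))  # l_2-norm of [1.5, 3.0]
```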
  2. We study the problem of fair k-median, where each cluster is required to have a fair representation of individuals from different groups. In the fair representation k-median problem, we are given a set of points X in a metric space. Each point x ∈ X belongs to one of ℓ groups. Further, we are given fair representation parameters α_j and β_j for each group j ∈ [ℓ]. We say that a k-clustering C_1, …, C_k fairly represents all groups if the number of points from group j in cluster C_i is between α_j |C_i| and β_j |C_i| for every j ∈ [ℓ] and i ∈ [k]. The goal is to find a set of k centers and an assignment ϕ of points to centers such that the clustering defined by ϕ fairly represents all groups and minimizes the ℓ_1-objective ∑_{x ∈ X} d(x, ϕ(x)). We present an O(log k)-approximation algorithm that runs in time n^{O(ℓ)}. Note that the known algorithms for the problem either (i) violate the fairness constraints by an additive term or (ii) run in time that is exponential in both k and ℓ. We also consider an important special case of the problem where and for all j ∈ [ℓ]. For this special case, we present an O(log k)-approximation algorithm that runs in time. 
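The fairness condition and the ℓ_1 objective in entry 2 can be checked directly from a clustering. The following Python sketch is a hedged illustration under assumed data structures; it verifies the constraint and evaluates the objective, and is not the paper's approximation algorithm.

```python
# Hedged illustration of entry 2's definitions (not the paper's algorithm).
# A clustering fairly represents all groups if every cluster C_i contains
# between alpha_j*|C_i| and beta_j*|C_i| points of group j, for every group j.
from collections import Counter

def fairly_represents(clusters, group_of, alpha, beta):
    """clusters: list of lists of points; group_of: dict point -> group id;
    alpha, beta: dicts group id -> fairness parameters."""
    for cluster in clusters:
        counts = Counter(group_of[x] for x in cluster)
        for j in alpha:
            lo, hi = alpha[j] * len(cluster), beta[j] * len(cluster)
            if not (lo <= counts.get(j, 0) <= hi):
                return False
    return True

def kmedian_cost(points, assignment, dist):
    """The l_1 objective: sum of distances from each point to its assigned center."""
    return sum(dist(x, assignment[x]) for x in points)
```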
  3. Po-Ling Loh and Maxim Raginsky (Ed.)
    We give efficient deterministic one-pass streaming algorithms for finding an ellipsoidal approximation of a symmetric convex polytope. The algorithms are near-optimal in that their approximation factors differ from that of the optimal offline solution only by a factor sub-logarithmic in the aspect ratio of the polytope. 
  4. Buchin, Kevin; Colin de Verdiere, Eric (Ed.)
    In this paper, we prove a two-sided variant of the Kirszbraun theorem. Consider an arbitrary subset X of Euclidean space and its superset Y. Let f be a 1-Lipschitz map from X to ℝ^m. The Kirszbraun theorem states that the map f can be extended to a 1-Lipschitz map f̃ from Y to ℝ^m. While the extension f̃ does not increase distances between points, there is no guarantee that it does not decrease distances significantly. In fact, f̃ may even map distinct points to the same point (that is, it can infinitely decrease some distances). However, we prove that there exists a (1 + ε)-Lipschitz outer extension f̃: Y → ℝ^{m'} that does not decrease distances more than "necessary". Namely, ‖f̃(x) - f̃(y)‖ ≥ c√ε min(‖x-y‖, inf_{a,b ∈ X} (‖x - a‖ + ‖f(a) - f(b)‖ + ‖b-y‖)) for some absolute constant c > 0. This bound is asymptotically optimal, since no L-Lipschitz extension g can have ‖g(x) - g(y)‖ > L min(‖x-y‖, inf_{a,b ∈ X} (‖x - a‖ + ‖f(a) - f(b)‖ + ‖b-y‖)) even for a single pair of points x and y. In some applications, one is interested in the distances ‖f̃(x) - f̃(y)‖ between images of points x, y ∈ Y rather than in the map f̃ itself. The standard Kirszbraun theorem does not provide any method of computing these distances without computing the entire map f̃ first. In contrast, our theorem provides a simple approximate formula for the distances ‖f̃(x) - f̃(y)‖. 
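The lower-bound quantity in entry 4 can be evaluated by brute force when X is a finite sample. The sketch below is illustrative only: the data layout and names are assumptions, and the c√ε factor from the theorem is deliberately omitted.

```python
# Illustrative brute-force evaluation of the quantity from entry 4:
#   min(||x - y||, inf_{a,b in X} (||x - a|| + ||f(a) - f(b)|| + ||b - y||)).
# The theorem guarantees ||f~(x) - f~(y)|| is at least c*sqrt(eps) times this
# value; that constant factor is omitted here. Names and layout are assumptions.
import numpy as np

def distance_lower_bound(x, y, X, fX):
    """x, y: points of Y in R^n; X: list of points in R^n;
    fX: list with fX[i] = f(X[i]) in R^m."""
    d = lambda u, v: float(np.linalg.norm(np.asarray(u, dtype=float) - np.asarray(v, dtype=float)))
    via_X = min(d(x, X[i]) + d(fX[i], fX[j]) + d(X[j], y)
                for i in range(len(X)) for j in range(len(X)))
    return min(d(x, y), via_X)
```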
  5. Meila, Marina; Zhang, Tong (Ed.)
    In the Correlation Clustering problem, we are given a complete weighted graph $$G$$ with its edges labeled as “similar” and “dissimilar” by a noisy binary classifier. For a clustering $$\mathcal{C}$$ of graph $$G$$, a similar edge is in disagreement with $$\mathcal{C}$$ if its endpoints belong to distinct clusters, and a dissimilar edge is in disagreement with $$\mathcal{C}$$ if its endpoints belong to the same cluster. The disagreements vector, $$\mathbf{disagree}$$, is a vector indexed by the vertices of $$G$$ such that the $$v$$-th coordinate $$\mathbf{disagree}_v$$ equals the weight of all disagreeing edges incident on $$v$$. The goal is to produce a clustering that minimizes the $$\ell_p$$ norm of the disagreements vector for $$p \geq 1$$. We study the $$\ell_p$$ objective in Correlation Clustering under the following assumption: Every similar edge has weight in $$[\alpha\mathbf{w},\mathbf{w}]$$ and every dissimilar edge has weight at least $$\alpha\mathbf{w}$$ (where $$\alpha \leq 1$$ and $$\mathbf{w}>0$$ is a scaling parameter). We give an $$O\left((\frac{1}{\alpha})^{\frac{1}{2}-\frac{1}{2p}}\cdot \log\frac{1}{\alpha}\right)$$-approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap. 
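The disagreements vector and its ℓ_p norm, used in entry 5 above (the same objective appears in entry 6), can be computed directly from a clustering. The Python sketch below is a minimal illustration under an assumed edge representation; it evaluates the objective and is not the approximation algorithm from either record.

```python
# Illustrative computation of the disagreements vector and its l_p norm
# (entries 5 and 6). Edge representation and names are assumptions.
import numpy as np

def disagreements_norm(vertices, edges, cluster_of, p=1):
    """vertices: iterable of vertices; edges: list of (u, v, weight, label) with
    label in {"similar", "dissimilar"}; cluster_of: dict vertex -> cluster id."""
    disagree = {v: 0.0 for v in vertices}
    for u, v, w, label in edges:
        same_cluster = cluster_of[u] == cluster_of[v]
        if (label == "similar" and not same_cluster) or (label == "dissimilar" and same_cluster):
            disagree[u] += w
            disagree[v] += w
    return float(np.linalg.norm(list(disagree.values()), ord=p))
```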
  6.
    In the Correlation Clustering problem, we are given a complete weighted graph G with its edges labeled as “similar” and “dissimilar” by a noisy binary classifier. For a clustering C of graph G, a similar edge is in disagreement with C if its endpoints belong to distinct clusters, and a dissimilar edge is in disagreement with C if its endpoints belong to the same cluster. The disagreements vector is a vector indexed by the vertices of G such that the v-th coordinate of the disagreements vector equals the weight of all disagreeing edges incident on v. The goal is to produce a clustering that minimizes the ℓ_p norm of the disagreements vector for p ≥ 1. We study the ℓ_p objective in Correlation Clustering under the following assumption: Every similar edge has weight in [αw, w] and every dissimilar edge has weight at least αw (where α ≤ 1 and w > 0 is a scaling parameter). We give an O((1/α)^{1/2−1/(2p)} log 1/α)-approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap. 
  7. Belkin, Mikhail; Kpotufe, Samory (Ed.)
    We present an $$e^{O(p)} (\log \ell) / (\log \log \ell)$$-approximation algorithm for socially fair clustering with the $$\ell_p$$-objective. In this problem, we are given a set of points in a metric space. Each point belongs to one (or several) of $$\ell$$ groups. The goal is to find a $$k$$-medians, $$k$$-means, or, more generally, $$\ell_p$$-clustering that is simultaneously good for all of the groups. More precisely, we need to find a set of $$k$$ centers $$C$$ so as to minimize the maximum over all groups $$j$$ of $$\sum_{u \text{ in group } j} d(u, C)^p$$. The socially fair clustering problem was independently proposed by Abbasi, Bhaskara, and Venkatasubramanian (2021) and Ghadiri, Samadi, and Vempala (2021). Our algorithm improves and generalizes their $$O(\ell)$$-approximation algorithms for the problem. The natural LP relaxation for the problem has an integrality gap of $$\Omega(\ell)$$. In order to obtain our result, we introduce a strengthened LP relaxation and show that it has an integrality gap of $$\Theta((\log \ell) / (\log \log \ell))$$ for a fixed p. Additionally, we present a bicriteria approximation algorithm, which generalizes the bicriteria approximation of Abbasi et al. (2021). 
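The socially fair ℓ_p objective from entry 7, the maximum over groups of the total p-th power of distances to the nearest chosen center, is easy to state in code. The sketch below is illustrative only, under assumed data structures; it evaluates the objective for a fixed set of centers rather than finding the centers.

```python
# Illustrative evaluation of the socially fair l_p clustering objective (entry 7):
#   max over groups j of sum_{u in group j} d(u, C)^p.
# Data structures and names are assumptions; this does not choose the centers.
import numpy as np

def socially_fair_cost(points, groups_of, centers, p=1):
    """points: list of vectors; groups_of: list where groups_of[i] is the list of
    group ids that points[i] belongs to; centers: list of center vectors C."""
    cost_of_group = {}
    for u, groups in zip(points, groups_of):
        d = min(float(np.linalg.norm(np.asarray(u, dtype=float) - np.asarray(c, dtype=float)))
                for c in centers)
        for j in groups:
            cost_of_group[j] = cost_of_group.get(j, 0.0) + d ** p
    return max(cost_of_group.values())
```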
  8. In the Correlation Clustering problem, we are given a weighted graph $$G$$ with its edges labelled as "similar" or "dissimilar" by a binary classifier. The goal is to produce a clustering that minimizes the weight of "disagreements": the sum of the weights of "similar" edges across clusters and "dissimilar" edges within clusters. We study the correlation clustering problem under the following assumption: Every "similar" edge $$e$$ has weight $$w_e \in [ \alpha w, w ]$$ and every "dissimilar" edge $$e$$ has weight $$w_e \geq \alpha w$$ (where $$\alpha \leq 1$$ and $$w > 0$$ is a scaling parameter). We give a $$(3 + 2 \log_e (1/\alpha))$$-approximation algorithm for this problem. This assumption captures well the scenario when classification errors are asymmetric. Additionally, we show an asymptotically matching Linear Programming integrality gap of $$\Omega(\log 1/\alpha)$$. 
  9. In this paper, we introduce the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm: it finds a γ-approximation no matter what the input is. Second, it exactly solves γ-perturbation-resilient instances (γ-perturbation-resilient instances model real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g., Max Cut and Min Uncut), solve weakly perturbation-resilient instances, and solve optimization problems with hard constraints. In the paper, we define certified algorithms, describe their properties, present a framework for designing certified algorithms, and provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians, and k-means. We also present some negative results. 