
Title: Clustering with faulty centers
In this paper we introduce and formally study the problem of $k$-clustering with faulty centers. Specifically, we study the faulty versions of $k$-center, $k$-median, and $k$-means clustering, where centers have some probability of not existing, as opposed to prior work where clients had some probability of not existing. For all three problems we provide fixed-parameter tractable algorithms, in the parameters $k$, $d$, and $\varepsilon$, that $(1+\varepsilon)$-approximate the minimum expected cost solutions for points in $d$-dimensional Euclidean space. For Faulty $k$-center we additionally provide a 5-approximation for general metrics. Significantly, all of our algorithms have only a linear dependence on $n$.
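To make the objective concrete, here is a minimal Monte Carlo sketch of the expected faulty k-center cost, assuming centers fail independently with a common probability and that a trial in which every center fails is charged a fixed fallback penalty; both the common failure probability and the fallback charge are assumptions for illustration, not the paper's exact model or algorithm.

```python
import math
import random

def expected_faulty_kcenter_cost(points, centers, p_fail, trials=20000, seed=0):
    """Monte Carlo estimate of E[max_p d(p, surviving centers)] when each
    center independently fails with probability p_fail (assumed model)."""
    rng = random.Random(seed)
    # Fallback charge for trials where no center survives (an assumption,
    # not taken from the paper): the diameter of the point set.
    penalty = max(math.dist(p, q) for p in points for q in points)
    total = 0.0
    for _ in range(trials):
        alive = [c for c in centers if rng.random() > p_fail]
        if alive:
            total += max(min(math.dist(p, c) for c in alive) for p in points)
        else:
            total += penalty
    return total / trials

# Toy usage: two candidate centers, each failing 10% of the time.
pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
print(expected_faulty_kcenter_cost(pts, [(0.5, 0.0), (5.0, 5.0)], p_fail=0.1))
```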
Award ID(s):
1942597 1750780
PAR ID:
10528734
Author(s) / Creator(s):
; ;
Publisher / Repository:
ScienceDirect
Date Published:
Journal Name:
Computational Geometry
Volume:
117
Issue:
C
ISSN:
0925-7721
Page Range / eLocation ID:
102052
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Bae, Sang Won ; Park, Heejin (Ed.)
    In this paper we introduce and formally study the problem of k-clustering with faulty centers. Specifically, we study the faulty versions of k-center, k-median, and k-means clustering, where centers have some probability of not existing, as opposed to prior work where clients had some probability of not existing. For all three problems we provide fixed-parameter tractable algorithms, in the parameters k, d, and ε, that (1+ε)-approximate the minimum expected cost solutions for points in d-dimensional Euclidean space. For Faulty k-center we additionally provide a 5-approximation for general metrics. Significantly, all of our algorithms have a small dependence on n. Specifically, our Faulty k-center algorithms have only linear dependence on n, while for our Faulty k-median and Faulty k-means algorithms the dependence is still only n^(1+o(1)).
  2. Bansal, Nikhil, et al. (Ed.)
    This paper presents universal algorithms for clustering problems, including the widely studied k-median, k-means, and k-center objectives. The input is a metric space containing all potential client locations. The algorithm must select k cluster centers such that they form a good solution for any subset of clients that actually realizes. Specifically, we aim for low regret, defined as the maximum over all subsets of the difference between the cost of the algorithm's solution and that of an optimal solution. A universal algorithm's solution sol for a clustering problem is said to be an (α, β)-approximation if for all subsets of clients C', it satisfies sol(C') ≤ α ⋅ opt(C') + β ⋅ mr, where opt(C') is the cost of the optimal solution for clients C' and mr is the minimum regret achievable by any solution. Our main results are universal algorithms for the standard clustering objectives of k-median, k-means, and k-center that achieve (O(1), O(1))-approximations. These results are obtained via a novel framework for universal algorithms using linear programming (LP) relaxations. The results generalize to other 𝓁_p-objectives and to the setting where some subset of the clients is fixed. We also give hardness results showing that (α, β)-approximation is NP-hard if α or β is at most a certain constant, even for the widely studied special case of Euclidean metric spaces. This shows that, in some sense, (O(1), O(1))-approximation is the strongest type of guarantee obtainable for universal clustering.
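    As a concrete check of the (α, β)-approximation definition above, here is a brute-force sketch that computes a solution's regret for the k-median objective on a tiny instance. The exhaustive enumeration over client subsets and size-k center sets is for illustration only and is nothing like the paper's LP-based algorithms; the candidate-center set is an assumed input.

    ```python
    import math
    from itertools import chain, combinations

    def kmedian_cost(clients, centers):
        """k-median cost: sum of each client's distance to its nearest center."""
        return sum(min(math.dist(p, c) for c in centers) for p in clients)

    def opt_cost(clients, candidates, k):
        """Brute-force optimal k-median cost over all size-k center sets."""
        if not clients:
            return 0.0
        return min(kmedian_cost(clients, C) for C in combinations(candidates, k))

    def regret(solution, all_clients, candidates, k):
        """Regret: max over realized client subsets C' of sol(C') - opt(C')."""
        subsets = chain.from_iterable(
            combinations(all_clients, r) for r in range(len(all_clients) + 1))
        return max(kmedian_cost(C, solution) - opt_cost(C, candidates, k)
                   for C in subsets)

    # Toy usage: candidate centers are the client locations themselves.
    clients = [(0.0, 0.0), (1.0, 0.0), (4.0, 0.0)]
    sol = [(0.5, 0.0), (4.0, 0.0)]
    print(regret(sol, clients, candidates=clients, k=2))
    ```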
  3. Recent years have witnessed increasing popularity of algorithm design for distributed data, largely because massive datasets are often collected and stored in different locations. In the distributed setting, communication typically dominates query processing time, so it becomes crucial to design communication-efficient algorithms for queries on distributed data. Simultaneously, it has been widely recognized that partial optimizations, where we are allowed to disregard a small part of the data, provide significantly better solutions. The motivation for disregarding points often arises from noise and other phenomena that are pervasive in large-data scenarios. In this paper we focus on partial clustering problems, k-center, k-median, and k-means, in the distributed model, and provide algorithms with communication sublinear in the input size. As a consequence we develop the first algorithms for the partial k-median and k-means objectives that run in subquadratic time. We also initiate the study of distributed algorithms for clustering uncertain data, where each data point may fall into one of multiple locations according to a probability distribution.
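    For intuition about the partial objective, here is a minimal sketch of the partial (outlier) k-center cost, where the t worst-served points may be disregarded. This is just the cost evaluation under that assumption, not the paper's distributed algorithm.

    ```python
    import math

    def partial_kcenter_cost(points, centers, t):
        """k-center cost when up to t outliers may be disregarded:
        the max service distance over the n - t best-served points."""
        d = sorted(min(math.dist(p, c) for c in centers) for p in points)
        kept = d[:len(d) - t]          # drop the t largest distances
        return kept[-1] if kept else 0.0

    # Toy usage: one far-away point is ignored as an outlier (t = 1).
    pts = [(0.0, 0.0), (1.0, 0.0), (100.0, 0.0)]
    print(partial_kcenter_cost(pts, [(0.5, 0.0)], t=1))  # 0.5, not 99.5
    ```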
  4. We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted. Recent work gave the first polynomial time algorithms for this problem with near-optimal error guarantees for several natural structured distributions. Our main contribution is to develop faster algorithms for this problem whose running time nearly matches that of computing the empirical covariance. Given N = Ω(d^2/ε^2) samples from a d-dimensional Gaussian distribution, an ε-fraction of which may be arbitrarily corrupted, our algorithm runs in time O(d^{3.26}/poly(ε)) and approximates the unknown covariance matrix to optimal error up to a logarithmic factor. Previous robust algorithms with comparable error guarantees all have runtimes Ω(d^{2ω}) when ε = Ω(1), where ω is the exponent of matrix multiplication. We also provide evidence that improving the running time of our algorithm may require new algorithmic techniques.
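    For reference, the non-robust baseline whose runtime the robust algorithm aims to nearly match is a single O(N d^2) empirical-covariance pass. This sketch shows only that baseline, not the paper's robust estimator.

    ```python
    import numpy as np

    def empirical_covariance(X):
        """Empirical covariance of N samples in d dimensions: O(N d^2) time.
        Not robust: one adversarially corrupted row can shift the estimate
        arbitrarily, which is what the robust algorithm protects against."""
        X = np.asarray(X, dtype=float)
        Z = X - X.mean(axis=0)          # center the samples
        return (Z.T @ Z) / len(X)       # d x d matrix

    # Toy usage: 1000 samples from a 3-dimensional Gaussian.
    rng = np.random.default_rng(0)
    print(empirical_covariance(rng.standard_normal((1000, 3))))
    ```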
  5. We study a clustering problem where the goal is to maximize the coverage of the input points by k chosen centers. Specifically, given a set of n points P ⊆ ℝ^d, the goal is to pick k centers C ⊆ ℝ^d that maximize the service ∑_{p∈P} φ(𝖽(p,C)) to the points P, where 𝖽(p,C) is the distance of p to its nearest center in C, and φ: ℝ+ → ℝ+ is a non-increasing service function. This includes the problem of placing k base stations so as to maximize the total bandwidth to the clients: the closer a client is to its nearest base station, the more data it can send and receive, so the target is to place k base stations so that the total bandwidth is maximized. We provide an n^{ε^{-O(d)}}-time algorithm for this problem that achieves a (1-ε)-approximation. Notably, the runtime does not depend on the parameter k, and the algorithm works for an arbitrary non-increasing service function φ: ℝ+ → ℝ+.
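    A minimal sketch of the service objective itself follows, with an assumed bandwidth-style decay for φ; the (1-ε)-approximation algorithm is far more involved, and the particular φ below is an illustrative assumption, not taken from the paper.

    ```python
    import math

    def total_service(points, centers, phi):
        """Total service sum over p of phi(d(p, C)), where d(p, C) is the
        distance from p to its nearest center and phi is non-increasing."""
        return sum(phi(min(math.dist(p, c) for c in centers)) for p in points)

    # Assumed example of a non-increasing service function: bandwidth that
    # decays with distance to the nearest base station.
    def bandwidth(r):
        return 1.0 / (1.0 + r * r)

    pts = [(0.0, 0.0), (2.0, 0.0), (3.0, 4.0)]
    print(total_service(pts, [(0.0, 0.0), (3.0, 4.0)], bandwidth))
    ```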