
We show that the RandomCoordinateCut algorithm gives the optimal competitive ratio for explainable k-medians in ℓ1. The problem of explainable k-medians was introduced by Dasgupta, Frost, Moshkovitz, and Rashtchian in 2020. Several groups of authors independently proposed a simple polynomial-time randomized algorithm for the problem and showed that this algorithm is O(log k · loglog k) competitive. We provide a tight analysis of the algorithm and prove that its competitive ratio is upper bounded by 2 ln k + 2. This bound matches the Ω(log k) lower bound by Dasgupta et al. (2020). Free, publicly accessible full text available December 10, 2024.
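The randomized algorithm analyzed above builds a threshold decision tree by drawing random axis-aligned cuts until the centers in a node are separated. The following is a hedged sketch only: the exact sampling distribution used in the paper may differ, and all function names are illustrative.

```python
import random

def build_threshold_tree(centers, rng=random.Random(0)):
    """Hedged sketch of a RandomCoordinateCut-style construction: while a
    node holds more than one center, draw uniformly random axis-aligned
    cuts until one separates the node's centers, then recurse. The
    paper's exact cut distribution is an assumption here."""
    d = len(centers[0])
    lo = [min(c[i] for c in centers) for i in range(d)]
    hi = [max(c[i] for c in centers) for i in range(d)]

    def split(node_centers):
        if len(node_centers) == 1:
            return ("leaf", node_centers[0])
        while True:
            i = rng.randrange(d)               # random coordinate
            theta = rng.uniform(lo[i], hi[i])  # random threshold
            left = [c for c in node_centers if c[i] <= theta]
            right = [c for c in node_centers if c[i] > theta]
            if left and right:                 # the cut separates centers
                return ("node", i, theta, split(left), split(right))

    return split(list(centers))

def assign(tree, x):
    """Route a point down the threshold tree to its cluster's center."""
    while tree[0] == "node":
        _, i, theta, left, right = tree
        tree = left if x[i] <= theta else right
    return tree[1]
```

Each leaf of the resulting tree contains exactly one center, so every center routes to itself, and each data point is assigned by at most d-depth threshold comparisons.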

We prove a new generalization of the higher-order Cheeger inequality for partitioning with buffers. Consider a graph G = (V, E). The buffered expansion of a set S ⊆ V with a buffer B ⊆ V∖S is the edge expansion of S after removing all the edges from S to its buffer B. An ε-buffered k-partitioning is a partitioning of a graph into disjoint components P_i and buffers B_i, in which the size of the buffer B_i for P_i is small relative to the size of P_i: |B_i| ≤ ε|P_i|. The buffered expansion of a buffered partition is the maximum of the buffered expansions of the k sets P_i with buffers B_i. Let h^{k,ε}_G be the buffered expansion of the optimal ε-buffered k-partitioning; then for every δ > 0, h^{k,ε}_G ≤ O_δ(1) · (log k / ε) · λ_{⌊(1+δ)k⌋}, where λ_{⌊(1+δ)k⌋} is the ⌊(1+δ)k⌋-th smallest eigenvalue of the normalized Laplacian of G. Our inequality is constructive and avoids the ``square-root loss'' that is present in the standard Cheeger inequalities (even for k = 2). We also provide a complementary lower bound and a novel generalization to the setting with arbitrary vertex weights and edge costs. Moreover, our result implies and generalizes the standard higher-order Cheeger inequalities and another recent Cheeger-type inequality by Kwok, Lau, and Lee (2017) involving robust vertex expansion. Free, publicly accessible full text available January 1, 2025.
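The buffered expansion defined above can be computed directly for a given set and buffer. A minimal sketch, assuming the expansion is normalized by the volume of S (the sum of degrees, matching the normalized Laplacian); the paper's exact normalization may differ.

```python
def buffered_expansion(edges, S, B):
    """Hedged sketch: buffered expansion of S with buffer B, computed as
    the number of edges leaving S that do not land in the buffer B,
    divided by the volume of S (sum of degrees). The choice of volume
    rather than |S| as the denominator is an assumption."""
    S, B = set(S), set(B)
    assert not (S & B), "buffer must be disjoint from S"
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    vol_S = sum(deg.get(u, 0) for u in S)
    crossing = sum(1 for u, v in edges
                   if (u in S) != (v in S)        # edge leaves S...
                   and not (u in B or v in B))    # ...but not into B
    return crossing / vol_S
```

For example, on the path 0-1-2-3 with S = {0, 1}, putting vertex 2 into the buffer removes the only edge leaving S, so the buffered expansion drops to zero.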


Gørtz, Inge Li; Farach-Colton, Martin; Puglisi, Simon J.; Herman, Grzegorz (Eds.) We consider variants of the classic Multiway Cut problem. Multiway Cut asks to partition a graph G into k parts so as to separate k given terminals. Recently, Chandrasekaran and Wang (ESA 2021) introduced ℓ_p-norm Multiway Cut, a generalization of the problem in which the goal is to minimize the ℓ_p norm of the edge boundaries of the k parts. We provide an O(log^{1/2} n · log^{1/2 + 1/p} k) approximation algorithm for this problem, improving upon the approximation guarantee of O(log^{3/2} n · log^{1/2} k) due to Chandrasekaran and Wang. We also introduce and study Norm Multiway Cut, a further generalization of Multiway Cut. We assume that we are given access to an oracle, which answers certain queries about the norm. We present an O(log^{1/2} n · log^{7/2} k) approximation algorithm with a weaker oracle and an O(log^{1/2} n · log^{5/2} k) approximation algorithm with a stronger oracle. Additionally, we show that without any oracle access, there is no n^{1/4−ε} approximation algorithm for any ε > 0, assuming the Hypergraph Dense-vs-Random Conjecture.
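For a fixed partition, the ℓ_p-norm objective described above is simply the ℓ_p norm of the vector of edge-boundary sizes of the k parts. A small illustrative computation (function and variable names are hypothetical, not from the paper):

```python
def lp_multiway_cut_cost(edges, parts, p):
    """Hedged sketch of the l_p-norm Multiway Cut objective: the l_p
    norm of the vector of edge-boundary sizes of the k parts. `parts`
    maps each vertex to its part index. Note each cut edge contributes
    to the boundaries of both of its endpoints' parts."""
    k = 1 + max(parts.values())
    boundary = [0] * k
    for u, v in edges:
        if parts[u] != parts[v]:
            boundary[parts[u]] += 1
            boundary[parts[v]] += 1
    return sum(b ** p for b in boundary) ** (1.0 / p)
```

Setting p = 1 recovers (twice) the total number of cut edges, while large p approaches the maximum boundary over the parts, which is why the parameter interpolates between min-sum and min-max cut objectives.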

We provide a new bicriteria O(log² k) competitive algorithm for explainable k-means clustering. Explainable k-means was recently introduced by Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020). It is described by an easy-to-interpret (threshold) decision tree or diagram. The cost of an explainable k-means clustering equals the sum of the costs of its clusters, and the cost of each cluster equals the sum of squared distances from the points in the cluster to the center of that cluster. The best non-bicriteria algorithm for explainable clustering is O(k) competitive, and this bound is tight. Our randomized bicriteria algorithm constructs a threshold decision tree that partitions the data set into (1+δ)k clusters (where δ ∈ (0,1) is a parameter of the algorithm). The cost of this clustering is at most O((1/δ) · log² k) times the cost of the optimal unconstrained k-means clustering. We show that this bound is almost optimal.
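The clustering cost described above can be evaluated directly. A minimal sketch, taking each cluster's center to be its mean, which is the center minimizing the sum of squared distances:

```python
def kmeans_cost(clusters):
    """k-means cost as described above: each cluster contributes the sum
    of squared distances from its points to the cluster center (here,
    the coordinate-wise mean)."""
    total = 0.0
    for points in clusters:
        d = len(points[0])
        mean = [sum(p[i] for p in points) / len(points) for i in range(d)]
        total += sum(sum((p[i] - mean[i]) ** 2 for i in range(d))
                     for p in points)
    return total
```

For instance, a single cluster {(0, 0), (2, 0)} has mean (1, 0) and cost 1 + 1 = 2.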

We consider the problem of explainable k-medians and k-means introduced by Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020). In this problem, our goal is to find a threshold decision tree that partitions data into k clusters and minimizes the k-medians or k-means objective. The obtained clustering is easy to interpret because every decision node of a threshold tree splits data based on a single feature into two groups. We propose a new algorithm for this problem which is O(log k) competitive for k-medians with the ℓ1 norm and O(k) competitive for k-means. This is an improvement over the previous guarantees of O(k) and O(k²) by Dasgupta et al. (2020). We also provide a new algorithm which is O(log^{3/2} k) competitive for k-medians with the ℓ2 norm. Our first algorithm is near-optimal: Dasgupta et al. (2020) showed a lower bound of Ω(log k) for k-medians; in this work, we prove a lower bound of Ω(k) for k-means. We also provide a lower bound of Ω(log k) for k-medians with the ℓ2 norm.
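For contrast with the k-means objective, the k-medians objective with the ℓ1 norm uses the coordinate-wise median as each cluster's optimal center, since the median minimizes the sum of absolute deviations in each coordinate. A minimal sketch:

```python
from statistics import median

def kmedians_cost(clusters):
    """k-medians objective with the l1 norm: each cluster contributes
    the sum of l1 distances from its points to the coordinate-wise
    median, which minimizes this sum."""
    total = 0.0
    for points in clusters:
        d = len(points[0])
        med = [median(p[i] for p in points) for i in range(d)]
        total += sum(sum(abs(p[i] - med[i]) for i in range(d))
                     for p in points)
    return total
```

For example, a one-dimensional cluster {0, 2, 3} has median 2 and cost |0 − 2| + |2 − 2| + |3 − 2| = 3.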

In this paper, we study k-means++ and k-means++ parallel, the two most popular algorithms for the classic k-means clustering problem. We provide novel analyses and show improved approximation and bicriteria approximation guarantees for k-means++ and k-means++ parallel. Our results give a better theoretical justification for why these algorithms perform extremely well in practice. We also propose a new variant of the k-means++ parallel algorithm (Exponential Race k-means++) that has the same approximation guarantees as k-means++.
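The abstract does not restate the algorithm, but the seeding step of k-means++ is the well-known D² sampling procedure of Arthur and Vassilvitskii (2007): each new center is drawn with probability proportional to its squared distance to the nearest center chosen so far. A sketch using only the Python standard library:

```python
import random

def kmeans_pp_seeding(points, k, rng=random.Random(0)):
    """Sketch of k-means++ seeding (D^2 sampling): the first center is
    uniform over the points; each subsequent center is drawn with
    probability proportional to its squared distance to the nearest
    already-chosen center."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = [points[rng.randrange(len(points))]]
    while len(centers) < k:
        # D^2 weights: already-chosen centers get weight 0,
        # so they are never picked again.
        d2 = [min(sq_dist(p, c) for c in centers) for p in points]
        centers.append(rng.choices(points, weights=d2)[0])
    return centers
```

These k seeds are then typically refined with Lloyd's iterations; the approximation guarantees discussed above concern the seeding alone.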