Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Gørtz, Inge Li; Farach-Colton, Martin; Puglisi, Simon J; Herman, Grzegorz (Ed.) We consider variants of the classic Multiway Cut problem. Multiway Cut asks to partition a graph G into k parts so as to separate k given terminals. Recently, Chandrasekaran and Wang (ESA 2021) introduced l_p-norm Multiway Cut, a generalization of the problem, in which the goal is to minimize the l_p norm of the edge boundaries of the k parts. We provide an O(log^{1/2} n log^{1/2 + 1/p} k) approximation algorithm for this problem, improving upon the approximation guarantee of O(log^{3/2} n log^{1/2} k) due to Chandrasekaran and Wang. We also introduce and study Norm Multiway Cut, a further generalization of Multiway Cut. We assume that we are given access to an oracle, which answers certain queries about the norm. We present an O(log^{1/2} n log^{7/2} k) approximation algorithm with a weaker oracle and an O(log^{1/2} n log^{5/2} k) approximation algorithm with a stronger oracle. Additionally, we show that without any oracle access, there is no n^{1/4-ε} approximation algorithm for every ε > 0 assuming the Hypergraph Dense-vs-Random Conjecture.
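To make the l_p-norm Multiway Cut objective concrete, here is a minimal Python sketch (not from the paper; the function name and the edge/partition representation are illustrative assumptions) that evaluates the l_p norm of the edge-boundary weights of a given k-partition:

```python
from collections import defaultdict

def lp_cut_value(edges, part, p):
    """l_p norm of the edge-boundary weights of a partition.

    edges: iterable of (u, v, weight) tuples
    part:  dict mapping each vertex to its block index in 0..k-1
    p:     norm parameter, p >= 1
    """
    # block index -> total weight of edges leaving that block
    boundary = defaultdict(float)
    for u, v, w in edges:
        if part[u] != part[v]:  # edge crosses the partition
            boundary[part[u]] += w
            boundary[part[v]] += w
    return sum(b ** p for b in boundary.values()) ** (1.0 / p)
```

For p = 1 each cut edge is counted once per endpoint block, so the value is twice the total cut weight; as p grows the objective increasingly penalizes the single block with the heaviest boundary, approaching the maximum boundary weight at p = ∞.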
- null (Ed.) In the Correlation Clustering problem, we are given a complete weighted graph G with its edges labeled as "similar" and "dissimilar" by a noisy binary classifier. For a clustering C of graph G, a similar edge is in disagreement with C if its endpoints belong to distinct clusters; and a dissimilar edge is in disagreement with C if its endpoints belong to the same cluster. The disagreements vector, disagree, is a vector indexed by the vertices of G such that the v-th coordinate disagree_v equals the weight of all disagreeing edges incident on v. The goal is to produce a clustering that minimizes the ℓ_p norm of the disagreements vector for p ≥ 1. We study the ℓ_p objective in Correlation Clustering under the following assumption: Every similar edge has weight in [αw, w] and every dissimilar edge has weight at least αw (where α ≤ 1 and w > 0 is a scaling parameter). We give an O((1/α)^{1/2−1/(2p)} · log(1/α)) approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap.
- Meila, Marina; Zhang, Tong (Ed.) In the Correlation Clustering problem, we are given a complete weighted graph $$G$$ with its edges labeled as "similar" and "dissimilar" by a noisy binary classifier. For a clustering $$\mathcal{C}$$ of graph $$G$$, a similar edge is in disagreement with $$\mathcal{C}$$ if its endpoints belong to distinct clusters; and a dissimilar edge is in disagreement with $$\mathcal{C}$$ if its endpoints belong to the same cluster. The disagreements vector, $$\mathbf{disagree}$$, is a vector indexed by the vertices of $$G$$ such that the $$v$$-th coordinate $$\mathbf{disagree}_v$$ equals the weight of all disagreeing edges incident on $$v$$. The goal is to produce a clustering that minimizes the $$\ell_p$$ norm of the disagreements vector for $$p\geq 1$$. We study the $$\ell_p$$ objective in Correlation Clustering under the following assumption: Every similar edge has weight in $$[\alpha\mathbf{w},\mathbf{w}]$$ and every dissimilar edge has weight at least $$\alpha\mathbf{w}$$ (where $$\alpha \leq 1$$ and $$\mathbf{w}>0$$ is a scaling parameter). We give an $$O\left((\frac{1}{\alpha})^{\frac{1}{2}-\frac{1}{2p}}\cdot \log\frac{1}{\alpha}\right)$$ approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap.
- null (Ed.) In the Correlation Clustering problem, we are given a complete weighted graph G with its edges labeled as "similar" and "dissimilar" by a noisy binary classifier. For a clustering C of graph G, a similar edge is in disagreement with C if its endpoints belong to distinct clusters; and a dissimilar edge is in disagreement with C if its endpoints belong to the same cluster. The disagreements vector is a vector indexed by the vertices of G such that the v-th coordinate of the disagreements vector equals the weight of all disagreeing edges incident on v. The goal is to produce a clustering that minimizes the ℓ_p norm of the disagreements vector for p ≥ 1. We study the ℓ_p objective in Correlation Clustering under the following assumption: Every similar edge has weight in [αw, w] and every dissimilar edge has weight at least αw (where α ≤ 1 and w > 0 is a scaling parameter). We give an O((1/α)^{1/2−1/(2p)} log 1/α) approximation algorithm for this problem. Furthermore, we show an almost matching convex programming integrality gap.
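The disagreements vector and the ℓ_p objective defined in the abstracts above can be sketched in a few lines of Python (an illustrative helper, not code from the papers; the edge representation with explicit "similar"/"dissimilar" labels is an assumption):

```python
def disagreements_vector(vertices, edges, clustering):
    """Total weight of disagreeing edges incident on each vertex.

    edges:      iterable of (u, v, weight, label) tuples,
                label is "similar" or "dissimilar"
    clustering: dict mapping each vertex to its cluster id
    """
    disagree = {v: 0.0 for v in vertices}
    for u, v, w, label in edges:
        same = clustering[u] == clustering[v]
        # a similar edge disagrees across clusters;
        # a dissimilar edge disagrees within a cluster
        if (label == "similar" and not same) or (label == "dissimilar" and same):
            disagree[u] += w
            disagree[v] += w
    return disagree

def lp_objective(disagree, p):
    """l_p norm of the disagreements vector, p >= 1."""
    return sum(d ** p for d in disagree.values()) ** (1.0 / p)
```

Choosing p interpolates between objectives: p = 1 recovers (twice) the total disagreement weight of classical Correlation Clustering, while large p pushes the clustering toward fairness, minimizing the worst disagreement weight at any single vertex.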
- In the Correlation Clustering problem, we are given a weighted graph $$G$$ with its edges labelled as "similar" or "dissimilar" by a binary classifier. The goal is to produce a clustering that minimizes the weight of "disagreements": the sum of the weights of "similar" edges across clusters and "dissimilar" edges within clusters. We study the Correlation Clustering problem under the following assumption: Every "similar" edge $$e$$ has weight $$w_e \in [\alpha w, w]$$ and every "dissimilar" edge $$e$$ has weight $$w_e \geq \alpha w$$ (where $$\alpha \leq 1$$ and $$w > 0$$ is a scaling parameter). We give a $$(3 + 2 \log_e (1/\alpha))$$ approximation algorithm for this problem. This assumption captures well the scenario when classification errors are asymmetric. Additionally, we show an asymptotically matching Linear Programming integrality gap of $$\Omega(\log 1/\alpha)$$.
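The weight assumption and the disagreement objective from this abstract can be checked mechanically. Below is a small Python sketch (hypothetical helpers, not from the paper; the labelled-edge representation is an assumption):

```python
def satisfies_weight_assumption(edges, alpha, w):
    """Check the paper's assumption: every "similar" edge has weight in
    [alpha*w, w]; every "dissimilar" edge has weight at least alpha*w.

    edges: iterable of (u, v, weight, label) tuples
    """
    for _, _, weight, label in edges:
        if label == "similar" and not (alpha * w <= weight <= w):
            return False
        if label == "dissimilar" and weight < alpha * w:
            return False
    return True

def total_disagreement(edges, clustering):
    """Weight of disagreements: "similar" edges across clusters
    plus "dissimilar" edges within clusters."""
    cost = 0.0
    for u, v, weight, label in edges:
        same = clustering[u] == clustering[v]
        if (label == "similar") != same:  # edge disagrees with the clustering
            cost += weight
    return cost
```

Smaller α admits a wider spread between the lightest and heaviest edge weights, which is exactly why the approximation factor 3 + 2 ln(1/α) degrades (logarithmically) as α shrinks.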
An official website of the United States government