-
Goaoc, Xavier; Kerber, Michael (Eds.). We consider the following surveillance problem: Given a set P of n sites in a metric space and a set R of k robots with the same maximum speed, compute a patrol schedule of minimum latency for the robots. Here a patrol schedule specifies for each robot an infinite sequence of sites to visit (in the given order), and the latency L of a schedule is the maximum latency of any site, where the latency of a site s is the supremum of the lengths of the time intervals between consecutive visits to s. When k = 1 the problem is equivalent to the travelling salesman problem (TSP) and is thus NP-hard. For k ≥ 2 (the version we are interested in) the problem becomes even more challenging; for example, it is not even clear whether the decision version of the problem is decidable, in particular in the Euclidean case. We have two main results. We consider cyclic solutions in which the set of sites must be partitioned into 𝓁 groups, for some 𝓁 ≤ k, and each group is assigned a subset of the robots that move along the travelling salesman tour of the group at …
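The cyclic solutions studied above have a concrete latency formula: if r robots of equal speed are spaced evenly along one cyclic tour and all move in the same direction, every site is revisited after exactly (tour length)/(r · speed) time units, and that value is the latency of the schedule. A minimal Python sketch, with a hypothetical four-site instance and a brute-forced tour:

```python
import itertools
import math

def tour_length(points, order):
    # Total length of the cyclic tour visiting `points` in `order`.
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def cyclic_latency(points, order, robots, speed=1.0):
    # With `robots` robots of speed `speed` spaced evenly along one cyclic
    # tour, consecutive visits to any site are tour_length / (robots * speed)
    # time units apart, which is the latency of this cyclic schedule.
    return tour_length(points, order) / (robots * speed)

# Hypothetical instance; brute-forcing the tour is viable only for toy inputs.
sites = [(0, 0), (4, 0), (4, 3), (0, 3)]
best = min(itertools.permutations(range(len(sites))),
           key=lambda o: tour_length(sites, o))
print(cyclic_latency(sites, best, robots=2))  # half the single-robot latency
```

Brute force stands in for a TSP solver here; as the abstract notes, for k ≥ 2 even the decision version of the problem is not known to be decidable.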
-
Ahn, Hee-Kap; Sadakane, Kunihiko (Eds.). In the standard planar k-center clustering problem, one is given a set P of n points in the plane, and the goal is to select k center points so as to minimize the maximum distance from a point of P to its nearest center. Here we initiate the systematic study of the clustering with neighborhoods problem, which generalizes the k-center problem by allowing the covered objects to be a set C of general disjoint convex objects rather than just a point set P. For this problem we first show that there is a PTAS for approximating the number of centers. Specifically, if r_opt is the optimal radius for k centers, then in n^O(1/ε²) time we can produce a set of (1+ε)k centers with radius ≤ r_opt. If instead one considers the standard goal of approximating the optimal clustering radius while keeping k as a hard constraint, we show that the radius cannot be approximated within any factor in polynomial time unless P = NP, even when C is a set of line segments. When C is a set of unit disks, we show the problem is hard to approximate within a factor of (√13 − √3)/(2 − √3) ≈ 6.99. This hardness result complements our …
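For contrast with the hardness results above, the point-set special case does admit a simple constant-factor baseline: Gonzalez's farthest-point greedy is a classical 2-approximation for standard k-center. The sketch below is that textbook heuristic, not the neighborhoods algorithm from the paper:

```python
import math

def gonzalez_k_center(points, k):
    # Farthest-point greedy for standard planar k-center (points, not
    # convex objects); a classical 2-approximation of the optimal radius.
    # Assumes 1 <= k <= len(points).
    centers = [points[0]]
    while len(centers) < k:
        # Add the point farthest from its nearest chosen center.
        far = max(points, key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(far)
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius

centers, radius = gonzalez_k_center([(0, 0), (1, 0), (5, 5), (6, 5)], k=2)
print(centers, radius)
```

Per the abstract, no comparable radius guarantee survives once the covered objects become line segments (no factor at all) or unit disks (no factor below ≈ 6.99).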
-
Bojanczyk, Mikolaj; Chekuri, Chandra (Eds.). Given a point set P in the plane, we seek a subset Q ⊆ P whose convex hull gives a smaller and thus simpler representation of the convex hull of P. Specifically, let cost(Q,P) denote the Hausdorff distance between the convex hulls CH(Q) and CH(P). Then, given a value ε > 0, we seek the smallest subset Q ⊆ P such that cost(Q,P) ≤ ε. We also consider the dual version where, given an integer k, we seek the subset Q ⊆ P with |Q| ≤ k that minimizes cost(Q,P). For these problems, when P is in convex position we respectively give an O(n log²n) time algorithm and an O(n log³n) time algorithm, where the latter running time holds with high probability. When there is no restriction on P, we show the problem can be reduced to APSP in an unweighted directed graph, yielding an O(n^2.5302) time algorithm when minimizing k and an O(min{n^2.5302, kn^2.376}) time algorithm when minimizing ε, using prior results for APSP. Finally, we show that our near-linear algorithms for the convex-position case give 2-approximations for the general case.
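The cost function itself is straightforward to evaluate: since Q ⊆ P implies CH(Q) ⊆ CH(P), the Hausdorff distance is attained at the vertex of CH(P) farthest from the boundary of CH(Q). A brute-force SciPy sketch (assuming both subsets span full-dimensional hulls, i.e. at least three points in general position):

```python
import numpy as np
from scipy.spatial import ConvexHull

def seg_dist(p, a, b):
    # Euclidean distance from point p to the segment from a to b.
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def cost(Q, P):
    # Hausdorff distance between CH(Q) and CH(P) for Q ⊆ P. Because
    # CH(Q) ⊆ CH(P), it equals the max over vertices of CH(P) of the
    # distance to the boundary of CH(Q).
    ring = Q[ConvexHull(Q).vertices]          # CCW boundary of CH(Q)
    edges = list(zip(ring, np.roll(ring, -1, axis=0)))
    return max(min(seg_dist(v, a, b) for a, b in edges)
               for v in P[ConvexHull(P).vertices])

rng = np.random.default_rng(0)
P = rng.random((50, 2))
Q = P[ConvexHull(P).vertices][::2]            # drop every other hull vertex
print(cost(Q, P))
```

This O(|P|·|Q|) evaluation is only the subroutine; the paper's contribution is choosing Q near-optimally under it.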
-
Given a matrix D describing the pairwise dissimilarities of a data set, a common task is to embed the data points into Euclidean space. The classical multidimensional scaling (cMDS) algorithm is a widespread method for doing this. However, theoretical analysis of the robustness of the algorithm, and an in-depth analysis of its performance on non-Euclidean metrics, is lacking. In this paper, we derive a formula, based on the eigenvalues of a matrix obtained from D, for the Frobenius norm of the difference between D and the metric D_cmds returned by cMDS. This error analysis leads us to the conclusion that when the derived matrix has a significant number of negative eigenvalues, then ‖D − D_cmds‖_F, after initially decreasing, will eventually increase as we increase the dimension. Hence, counterintuitively, the quality of the embedding degrades as we increase the dimension. We empirically verify that the Frobenius norm increases as we increase the dimension for a variety of non-Euclidean metrics. We also show on several benchmark datasets that this degradation in the embedding results in the classification accuracy of both simple (e.g., 1-nearest-neighbor) and complex (e.g., multi-layer neural net) classifiers decreasing as we increase the embedding dimension. Finally, our analysis leads us to a new …
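The cMDS pipeline being analyzed is short enough to state in full: double-center the squared dissimilarities, keep the top eigenpairs (clamping negative eigenvalues to zero), and read off coordinates. A minimal sketch that also reports the error ‖D − D_cmds‖_F from the abstract:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def cmds_error(D, dim):
    # Classical MDS: B = -1/2 * J D^2 J (double centering), embed with the
    # top `dim` eigenpairs, clamp negative eigenvalues to zero, and return
    # the Frobenius norm ||D - D_cmds||_F of the resulting distortion.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]  # keep the top `dim` eigenpairs
    X = V * np.sqrt(np.clip(w, 0.0, None))     # n x dim embedding
    D_cmds = squareform(pdist(X))
    return np.linalg.norm(D - D_cmds)          # Frobenius norm

# Hypothetical non-Euclidean input: shortest-path distances on a cycle graph.
n = 12
D = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
D = np.minimum(D, n - D).astype(float)
print([round(cmds_error(D, d), 3) for d in range(1, n)])
```

Sweeping `dim` on a non-Euclidean D like this is the kind of experiment the abstract describes, where the error can dip at first and then grow with the embedding dimension.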