Title: Cutting Polygons into Small Pieces with Chords: Laser-Based Localization
Motivated by indoor localization by tripwire lasers, we study the problem of cutting a polygon into small-size pieces, using the chords of the polygon. Several versions are considered, depending on the definition of the "size" of a piece. In particular, we consider the area, the diameter, and the radius of the largest inscribed circle as a measure of the size of a piece. We also consider different objectives, either minimizing the maximum size of a piece for a given number of chords, or minimizing the number of chords that achieve a given size threshold for the pieces. We give hardness results for polygons with holes and approximation algorithms for multiple variants of the problem.
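
To make the model concrete, here is a minimal sketch of the basic operation: cutting a polygon with one chord and measuring the pieces. This is not from the paper — it assumes the simplest setting (a convex polygon, a single chord between two boundary points, and the area measure), and all names are illustrative.

```python
import numpy as np

def split_convex(poly, a, b):
    """Split a convex polygon by the chord through boundary points a, b.

    Clips the vertex list against each of the two half-planes bounded by
    the line ab (Sutherland-Hodgman against a single half-plane) and
    returns both pieces with their areas.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = np.array([-(b - a)[1], (b - a)[0]])     # left-hand normal of ab

    def clip(keep_sign):
        out = []
        for i in range(len(poly)):
            p, q = poly[i], poly[(i + 1) % len(poly)]
            sp = keep_sign * np.dot(normal, p - a)
            sq = keep_sign * np.dot(normal, q - a)
            if sp >= 0:                              # p is on the kept side
                out.append(p)
            if sp * sq < 0:                          # edge pq crosses the chord line
                out.append(p + (sp / (sp - sq)) * (q - p))
        return np.array(out)

    def area(pts):                                   # shoelace formula
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    left, right = clip(+1.0), clip(-1.0)
    return (left, area(left)), (right, area(right))

square = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])
(_, a1), (_, a2) = split_convex(square, [1., 0.], [1., 2.])
print(a1, a2)                                        # 2.0 2.0: the chord halves the square
```

Swapping the area helper for a diameter or largest-inscribed-circle computation gives the other size measures; choosing many chords to minimize the maximum piece size is the optimization problem the paper studies.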
Award ID(s): 1910873, 1800734, 1938709, 1763680, 1716252
NSF-PAR ID: 10194795
Author(s) / Creator(s): ; ; ; ; ; ;
Date Published:
Journal Name: Leibniz International Proceedings in Informatics
Volume: 173
ISSN: 1868-8969
Page Range / eLocation ID: 7:1--7:23
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider the construction of a polygon P with n vertices whose turning angles at the vertices are given by a sequence A = (α_0, …, α_{n−1}), α_i ∈ (−π, π), for i ∈ {0, …, n−1}. The problem of realizing A by a polygon can be seen as that of constructing a straight-line drawing of a graph with prescribed angles at vertices, and hence it is a special case of the well-studied problem of constructing an angle graph. In 2D, we characterize the sequences A for which every generic polygon P ⊂ ℝ² realizing A has at least c crossings, for every c ∈ ℕ, and describe an efficient algorithm that constructs, for a given sequence A, a generic polygon P ⊂ ℝ² that realizes A with the minimum number of crossings. In 3D, we describe an efficient algorithm that tests whether a given sequence A can be realized by a (not necessarily generic) polygon P ⊂ ℝ³, and for every realizable sequence finds a realization.
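
As a rough illustration of the 2D realization question, the sketch below (Python; not the paper's algorithm, and the indexing convention is an assumption of this sketch) fixes the edge directions implied by the turning angles and then looks for positive edge lengths that close the polygon — a linear feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def realize_turning_angles(alphas, start_dir=0.0):
    """Try to realize a turning-angle sequence by a closed polygon in the plane.

    Convention (an assumption of this sketch): edge j has direction
    theta_j = start_dir + alphas[1] + ... + alphas[j], i.e. alphas[0] is the
    turn at the starting vertex, which is honored only if the angles sum to a
    multiple of 2*pi.  Closing the polygon means finding edge lengths
    l_j >= 1 with sum_j l_j * (cos theta_j, sin theta_j) = (0, 0), a linear
    feasibility problem; self-crossings are allowed.  Returns vertices or None.
    """
    thetas = start_dir + np.concatenate(([0.0], np.cumsum(alphas[1:])))
    n = len(alphas)
    res = linprog(c=np.ones(n),                      # any objective; we only need feasibility
                  A_eq=np.vstack([np.cos(thetas), np.sin(thetas)]),
                  b_eq=np.zeros(2),
                  bounds=[(1, None)] * n)            # strictly positive edge lengths
    if not res.success:
        return None
    steps = res.x[:, None] * np.column_stack([np.cos(thetas), np.sin(thetas)])
    return np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

# Four turns of pi/2 close up into a square.
print(realize_turning_angles([np.pi / 2] * 4))
```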
  2. We consider the problem of political redistricting: given the locations of people in a geographical area (e.g., a US state), the goal is to decompose the area into subareas, called districts, so that the populations of the districts are as close as possible and the districts are "compact" and "contiguous," to use the terms referred to in most US state constitutions and/or US Supreme Court rulings. We study a method that outputs a solution in which each district is the intersection of a convex polygon with the geographical area. The average number of sides per polygon is less than six, the polygons tend to be quite compact, and every two districts differ in population by at most one (so we call the solution balanced). In fact, the solution is a centroidal power diagram: each polygon has an associated center in ℝ³ such that
     * the projection of the center onto the plane z = 0 is the centroid of the locations of people assigned to the polygon, and
     * for each person assigned to that polygon, the polygon's center is closest among all centers.
     The polygons are convex because they are the intersections of 3D Voronoi cells with the plane. The solution is, in a well-defined sense, a locally optimal solution to the problem of choosing centers in the plane and choosing an assignment of people to those 2D centers so as to minimize the sum of squared distances, subject to the assignment being balanced. A practical problem with this approach is that, in real-world redistricting, the exact locations of people are unknown; instead, the input consists of polygons (census blocks) and associated populations, and a real redistricting must not split census blocks. We therefore propose a second phase that perturbs the solution slightly so that it does not split census blocks. In our experiments, the second phase achieves this while preserving perfect population balance. The district polygons are no longer convex at the fine scale, because their boundaries must follow the boundaries of census blocks, but at a coarse scale they preserve the shape of the original polygons.
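
The "balanced" requirement — every two districts differ in population by at most one — can be explored in miniature. The sketch below (Python; the function name is mine, and this brute-force matching is far less scalable than the paper's power-diagram machinery) makes a balanced assignment of n points to k fixed centers by replicating each center n/k times and solving a min-cost perfect matching; a centroidal iteration would then move each center to its district's centroid and repeat.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_assignment(points, centers):
    """Assign n points to k centers, exactly n/k points per center, minimizing
    total squared distance: replicate each center n/k times so the balanced
    problem becomes a standard min-cost perfect matching."""
    n, k = len(points), len(centers)
    assert n % k == 0, "this sketch assumes k divides n"
    cap = n // k
    slots = np.repeat(centers, cap, axis=0)          # n slots, cap copies per center
    cost = ((points[:, None, :] - slots[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)         # optimal point -> slot matching
    return cols[np.argsort(rows)] // cap             # district index for each point

rng = np.random.default_rng(0)
people = rng.random((12, 2))                         # 12 "people" in the unit square
centers = np.array([[0.25, 0.5], [0.75, 0.5]])
print(balanced_assignment(people, centers))          # exactly 6 people per district
```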
  3. We present approximation and exact algorithms for piecewise regression of univariate and bivariate data using fixed-degree polynomials. Specifically, given a set S of n data points (x_1, y_1), …, (x_n, y_n) ∈ ℝ^d × ℝ, where d ∈ {1, 2}, the goal is to segment the x_i's into some (arbitrary) number of disjoint pieces P_1, …, P_k, where each piece P_j is associated with a fixed-degree polynomial f_j : ℝ^d → ℝ, so as to minimize the total loss λk + Σ_{i=1}^{n} (y_i − f(x_i))², where λ ≥ 0 is a regularization term that penalizes model complexity (the number of pieces) and f : ∪_{j=1}^{k} P_j → ℝ is the piecewise polynomial function defined by f|_{P_j} = f_j. The pieces P_1, …, P_k are disjoint intervals of ℝ in the case of univariate data, and disjoint axis-aligned rectangles in the case of bivariate data. Our error approximation allows the use of any fixed-degree polynomial, not just linear functions. Our main results are the following. For univariate data, we present a (1 + ε)-approximation algorithm with time complexity O((n/ε) log(1/ε)), assuming the data is presented in sorted order of the x_i's. For bivariate data, we present three results: a sub-exponential exact algorithm with running time n^{O(√n)}; a polynomial-time constant-factor approximation algorithm; and a quasi-polynomial-time approximation scheme (QPTAS). The bivariate case is believed to be NP-hard as folklore, but we could not find a published record in the literature, so in this paper we also present a hardness proof for completeness.
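
To make the objective concrete, here is a small exact dynamic program for the univariate case (Python; a standard brute force over O(n²) subproblems with helper names of my choosing — not the paper's (1 + ε)-approximation algorithm). best[j] is the optimal value of λk + SSE over the first j points.

```python
import numpy as np

def piecewise_fit(x, y, lam, degree=1):
    """Exact DP for the univariate objective lam*k + sum of squared residuals.
    x must be sorted.  Each candidate segment is fitted by least-squares
    polynomial regression; this illustrates the objective, nothing more.
    """
    n = len(x)

    def sse(i, j):                      # best single-polynomial fit to x[i:j]
        if j - i <= degree + 1:         # so few points that the fit is exact
            return 0.0
        coeffs = np.polyfit(x[i:j], y[i:j], degree)
        r = y[i:j] - np.polyval(coeffs, x[i:j])
        return float(r @ r)

    best = np.full(n + 1, np.inf)       # best[j]: optimal cost of prefix x[:j]
    best[0] = 0.0
    cut = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):              # last piece is x[i:j]
            c = best[i] + lam + sse(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    pieces, j = [], n                   # recover breakpoints from cut pointers
    while j > 0:
        pieces.append((cut[j], j))
        j = cut[j]
    return best[n], pieces[::-1]

x = np.linspace(0, 2, 40)
y = np.where(x < 1, x, 2 - x)           # a tent function: two linear pieces
print(piecewise_fit(x, y, lam=0.1))     # finds two pieces, cost about 0.2
```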