Title: Boundary Sketching with Asymptotically Optimal Distance and Rotation
We address the problem of designing a distributed algorithm for two robots that sketches the boundary of an unknown shape. Critically, we assume a certain amount of delay in how quickly our robots can react to external feedback. In particular, when a robot moves, it commits to moving along a path of length at least λ, or to turning by at least λ radians, for some positive λ ≤ (1/2)^6 that is normalized with respect to a unit-diameter shape. Our algorithm then outputs a polygon that is an ϵ-sketch, for ϵ = 8λ^(1/2), in the sense that every point on the shape boundary is within distance ϵ of the output polygon. Moreover, our costs are asymptotically optimal in two key criteria for the robots: total distance travelled and total amount of rotation. Additionally, we implement our algorithm and illustrate its output on some specific shapes.
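To make the ϵ-sketch guarantee concrete, here is a minimal, hypothetical check (not the paper's algorithm): given a dense sample of the shape boundary and a candidate output polygon, it verifies that every sampled boundary point lies within ϵ = 8λ^(1/2) of the polygon. The sampling density, the inscribed 16-gon example, and the function names are our own illustrative assumptions.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment ab (2D numpy arrays)."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def is_eps_sketch(boundary_pts, polygon, eps):
    """Return True if every sampled boundary point lies within eps of the
    closed polygonal chain given by `polygon` (an (m, 2) vertex array)."""
    verts = np.vstack([polygon, polygon[:1]])  # close the polygon
    for p in boundary_pts:
        d = min(point_segment_dist(p, verts[i], verts[i + 1])
                for i in range(len(verts) - 1))
        if d > eps:
            return False
    return True

# Illustration on a unit-diameter circle, with lambda = 1e-4, so eps = 8*sqrt(lambda) = 0.08.
lam = 1e-4
eps = 8 * np.sqrt(lam)
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = 0.5 * np.c_[np.cos(theta), np.sin(theta)]      # boundary samples, diameter 1
phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)
sketch = 0.5 * np.c_[np.cos(phi), np.sin(phi)]          # inscribed 16-gon as the candidate sketch
print(is_eps_sketch(circle, sketch, eps))               # expected: True
```

The check only certifies the stated direction of the guarantee: every boundary point is near the polygon, matching the ϵ-sketch definition above.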
Award ID(s):
2024520
PAR ID:
10458817
Author(s) / Creator(s):
; ;
Editor(s):
Rajsbaum, Sergio; Balliu, Alkida; Daymude, Joshua J.; Olivetti, Dennis
Date Published:
Journal Name:
Structural Information and Communication Complexity
ISBN:
978-3-031-32733-9
Page Range / eLocation ID:
357--385
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent work of Klivans, Stavropoulos, and Vasilyan initiated the study of testable learning with distribution shift (TDS learning), where a learner is given labeled samples from a training distribution D, unlabeled samples from a test distribution D′, and the goal is to output a classifier with low error on D′ whenever the training samples pass a corresponding test. Their model deviates from all prior work in that no assumptions are made on D′. Instead, the test must accept (with high probability) when the marginals of the training and test distributions are equal. Here we focus on the fundamental case of intersections of halfspaces with respect to Gaussian training distributions and prove a variety of new upper bounds, including a 2^((k/ϵ)^O(1)) · poly(d)-time algorithm for TDS learning intersections of k homogeneous halfspaces to accuracy ϵ (prior work achieved d^((k/ϵ)^O(1))). We work under the mild assumption that the Gaussian training distribution contains at least an ϵ fraction of both positive and negative examples (ϵ-balanced). We also prove the first set of SQ lower bounds for any TDS learning problem and show (1) the ϵ-balanced assumption is necessary for poly(d, 1/ϵ)-time TDS learning of a single halfspace and (2) a d^(Ω̃(log 1/ϵ)) lower bound for the intersection of two general halfspaces, even with the ϵ-balanced assumption. Our techniques significantly expand the toolkit for TDS learning. We use dimension reduction and coverings to give efficient algorithms for computing a localized version of discrepancy distance, a key metric from the domain adaptation literature. (A toy illustration of the marginal test appears after this list.)
  2. We develop a general framework, called approximately-diverse dynamic programming (ADDP), that can be used to generate a collection of k≥2 maximally diverse solutions to various geometric and combinatorial optimization problems. Given an approximation factor 0≤c≤1, this framework also allows for maximizing diversity in the larger space of c-approximate solutions. We focus on two geometric problems to showcase this technique: 1. Given a polygon P, an integer k≥2 and a value c≤1, generate k maximally diverse c-nice triangulations of P. Here, a c-nice triangulation is one that is c-approximately optimal with respect to a given quality measure σ. 2. Given a planar graph G, an integer k≥2 and a value c≤1, generate k maximally diverse c-optimal Independent Sets (or Vertex Covers). Here, an independent set S is said to be c-optimal if |S|≥c|S′| for any independent set S′ of G. Given a set of k solutions to the above problems, the diversity measure we focus on is the average distance between the solutions, where d(X,Y)=|XΔY|. For arbitrary polygons and a wide range of quality measures, we give poly(n,k)-time (1−Θ(1/k))-approximation algorithms for the diverse triangulation problem. For the diverse independent set and vertex cover problems on planar graphs, we give an algorithm that runs in time 2^(O(k·δ^(−1)·ϵ^(−2))) · n^(O(1/ϵ)) and returns (1−ϵ)-approximately diverse (1−δ)c-optimal independent sets or vertex covers. Our triangulation results are the first algorithmic results on computing collections of diverse geometric objects, and our planar graph results give the first PTAS for the diverse versions of any NP-complete problem. Additionally, we also provide applications of this technique to diverse variants of other geometric problems. (A small example evaluating this diversity measure appears after this list.)
    more » « less
  3. We provide faster algorithms for approximating the optimal transport distance, e.g., earth mover's distance, between two discrete probability distributions on n elements. We present two algorithms that compute couplings between marginal distributions with an expected transportation cost that is within an additive ϵ of optimal in time O(n^2/ϵ); one algorithm is straightforward to parallelize and implementable in depth O(1/ϵ). Further, we show that additional improvements on our results must be coupled with breakthroughs in algorithmic graph theory. (An illustrative approximate-coupling sketch appears after this list.)
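For the TDS-learning abstract (item 1 above), the following toy sketch illustrates the role of the marginal test: it accepts only when simple empirical statistics of the unlabeled training and test samples agree. The actual tests in that line of work match higher-order moments or a localized discrepancy distance; the function name, tolerance, and Gaussian example here are our own illustrative assumptions.

```python
import numpy as np

def simple_moment_test(train_x, test_x, tol=0.1):
    """Toy marginal test: accept only if the first two empirical moments of
    the unlabeled training and test samples are close.  (A stand-in for the
    more refined tests used in TDS learning, not the paper's actual test.)"""
    mean_gap = np.linalg.norm(train_x.mean(axis=0) - test_x.mean(axis=0))
    cov_gap = np.linalg.norm(np.cov(train_x.T) - np.cov(test_x.T))
    return bool(mean_gap <= tol and cov_gap <= tol)

rng = np.random.default_rng(0)
d = 5
train = rng.standard_normal((20000, d))             # training marginal: N(0, I)
test_same = rng.standard_normal((20000, d))         # equal marginals -> should accept
test_shift = rng.standard_normal((20000, d)) + 1.0  # shifted marginal -> should reject
print(simple_moment_test(train, test_same))         # expected: True
print(simple_moment_test(train, test_shift))        # expected: False
```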
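For the diverse-solutions abstract (item 2 above), this small example only evaluates the diversity objective it defines, the average pairwise symmetric difference d(X, Y) = |X Δ Y|; it is not the ADDP algorithm itself, and the example vertex sets are hypothetical.

```python
from itertools import combinations

def average_diversity(solutions):
    """Average pairwise distance d(X, Y) = |X Δ Y| (symmetric difference)
    over a collection of k >= 2 solutions, each given as a Python set."""
    pairs = list(combinations(solutions, 2))
    return sum(len(x ^ y) for x, y in pairs) / len(pairs)

# Hypothetical example: three independent sets of some graph, as vertex sets.
sols = [{0, 2, 4}, {1, 3, 5}, {0, 3, 5}]
print(average_diversity(sols))  # (6 + 4 + 2) / 3 = 4.0
```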
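For the optimal-transport abstract (item 3 above), here is a generic Sinkhorn-style baseline for computing an approximate coupling between two discrete distributions. It is a standard entropic-regularization heuristic, not the authors' algorithm, and the regularization strength, iteration count, and example data are arbitrary illustrative choices.

```python
import numpy as np

def sinkhorn_coupling(mu, nu, C, reg=0.05, iters=1000):
    """Entropically regularized approximation of an optimal transport coupling
    between discrete distributions mu and nu with cost matrix C.
    A standard Sinkhorn baseline; returns an (approximately) feasible coupling."""
    K = np.exp(-C / reg)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):
        u = mu / (K @ v)        # scale rows toward marginal mu
        v = nu / (K.T @ u)      # scale columns toward marginal nu
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n = 50
x, y = rng.random(n), rng.random(n)
C = np.abs(x[:, None] - y[None, :])       # cost c(i, j) = |x_i - y_j|
mu = np.full(n, 1.0 / n)                  # uniform source distribution
nu = np.full(n, 1.0 / n)                  # uniform target distribution
P = sinkhorn_coupling(mu, nu, C)
print(float((P * C).sum()))               # approximate transport cost
```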