Title: Lower Envelopes of Surface Patches in 3-Space
Abstract: Let Σ be a collection of n surface patches, each the graph of a partially defined semi-algebraic function of constant description complexity, and assume that any triple of them intersects in at most s = 2 points. We show that the complexity of the lower envelope of the surfaces in Σ is O(n² log^{6+ε} n), for any ε > 0. This almost settles a long-standing open problem posed thirty years ago by Halperin and Sharir, who showed the nearly optimal, albeit weaker, bound of O(n² ⋅ 2^{c√{log n}}) on the complexity of the lower envelope, where c > 0 is some constant. Our approach is fairly simple and is based on hierarchical cuttings and gradations, as well as a simple charging scheme. We extend our analysis to the case s > 2, under a "favorable cross section" assumption, in which case we show that the bound on the complexity of the lower envelope is O(n² log^{11+ε} n), for any ε > 0. Combining these bounds with the randomized incremental construction algorithms of Boissonnat and Dobrindt, we obtain efficient constructions of lower envelopes of surface patches with the above properties, whose overall expected running time is O(n² polylog(n)), as well as efficient data structures that support point-location queries in their minimization diagrams in O(log² n) expected time.
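As a rough illustration of the object whose combinatorial complexity is bounded above (and not of the paper's hierarchical-cutting algorithm), the following sketch evaluates the lower envelope of a few partially defined bivariate patches pointwise over a small sample grid; the patch functions and their domains are hypothetical examples.

# Naive pointwise lower envelope of partially defined bivariate patches.
# Illustrative only: the paper achieves O(n^2 polylog n) combinatorial bounds
# via hierarchical cuttings and gradations; this sketch merely evaluates
# min_i f_i(x, y) over the patches whose domains contain (x, y).
import math

# Hypothetical patches: (function, domain predicate) pairs.
patches = [
    (lambda x, y: x * x + y * y,        lambda x, y: x >= 0),               # paraboloid over a half-plane
    (lambda x, y: 2 - x + 0.5 * y,      lambda x, y: x * x + y * y <= 4),   # plane clipped to a disk
    (lambda x, y: math.hypot(x - 1, y), lambda x, y: abs(y) <= 1),          # cone over a horizontal strip
]

def lower_envelope(x, y):
    """Return (value, patch index) of the lowest patch defined over (x, y), or None."""
    best = None
    for i, (f, dom) in enumerate(patches):
        if dom(x, y):
            v = f(x, y)
            if best is None or v < best[0]:
                best = (v, i)
    return best

# Crude sample of the minimization diagram: each cell records which patch
# attains the envelope there; '.' marks points where no patch is defined.
for y in [-1.0, 0.0, 1.0]:
    row = []
    for x in [-1.0, 0.0, 1.0, 2.0]:
        cell = lower_envelope(x, y)
        row.append('.' if cell is None else str(cell[1]))
    print(' '.join(row))

The printed grid is only a coarse sample of the minimization diagram; the paper's results bound the exact combinatorial complexity of that diagram and support point-location queries in it.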
Award ID(s):
2223870
PAR ID:
10608507
Author(s) / Creator(s):
Editor(s):
Chan, Timothy; Fischer, Johannes; Iacono, John; Herman, Grzegorz
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
308
ISSN:
1868-8969
ISBN:
978-3-95977-338-6
Page Range / eLocation ID:
6:1-6:17
Subject(s) / Keyword(s):
Hierarchical cuttings; surface patches in 3-space; lower envelopes; charging scheme; gradation; Theory of computation → Computational geometry
Format(s):
Medium: X; Other: application/pdf
Size(s):
17 pages; 721226 bytes
Right(s):
Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
Sponsoring Org:
National Science Foundation
More Like this
  1. Guruswami, Venkatesan (Ed.)
    A fundamental problem in circuit complexity is to find explicit functions that require large depth to compute. When considering the natural DeMorgan basis of {OR, AND}, where negations incur no cost, the best known depth lower bounds for an explicit function in NP have the form (3-o(1))log₂ n, established by Håstad (building on others) in the early 1990s. We make progress on the problem of improving this factor of 3, in two different ways:
    - We consider an "algorithmic method" approach to proving stronger depth lower bounds for non-uniform circuits in the DeMorgan basis. We show that slightly faster algorithms (than what is known) for counting the number of satisfying assignments on subcubic-size DeMorgan formulas would imply supercubic-size DeMorgan formula lower bounds, implying that the depth must be at least (3+ε)log₂ n for some ε > 0. For example, if #SAT on formulas of size n^{2+2ε} can be solved in 2^{n - n^{1-ε} log^k n} time for some ε > 0 and a sufficiently large constant k, then there is a function computable in 2^{O(n)} time with a SAT oracle which does not have n^{3+ε}-size formulas. In fact, the #SAT algorithm only has to work on formulas that are a conjunction of n^{1-ε} subformulas, each of size n^{1+3ε}, in order to obtain the supercubic lower bound. As a proof of concept, we show that our new algorithms-to-lower-bounds connection can be applied to prove new lower bounds for "hybrid" DeMorgan formula models which compute interesting functions at their leaves. (A brute-force #SAT baseline for small formulas is sketched after this list.)
    - Turning to the {NAND} basis, we establish a greater-than-(3 log₂ n) depth lower bound against uniform circuits solving the SAT problem, using an extension of the "indirect diagonalization" method for NAND formulas. Note that circuits over the NAND basis are a special case of circuits over the DeMorgan basis; however, hard functions such as Andreev's function (known to require depth (3-o(1))log₂ n in the DeMorgan basis) can still be computed with NAND circuits of depth (3+o(1))log₂ n. Our results imply that SAT requires polylog-time-uniform NAND circuits of depth at least 3.603 log₂ n.
  2. Santhanam, Rahul (Ed.)
    {"Abstract":["A long-standing open problem dating back to the 1960s is whether there exists a search-to-decision reduction for the time-bounded Kolmogorov complexity problem - that is, the problem of determining whether the length of the shortest time-t program generating a given string x is at most s.\r\nIn this work, we consider the more "robust" version of the time-bounded Kolmogorov complexity problem, referred to as the GapMINKT problem, where given a size bound s and a running time bound t, the goal is to determine whether there exists a poly(t,|x|)-time program of length s+O(log |x|) that generates x. We present the first non-trivial search-to-decision reduction R for the GapMINKT problem; R has a running-time bound of 2^{ε n} for any ε > 0 and additionally only queries its oracle on "thresholds" s of size s+O(log |x|). As such, we get that any algorithm with running-time (resp. circuit size) 2^{α s} poly(|x|,t,s) for solving GapMINKT (given an instance (x,t,s), yields an algorithm for finding a witness with running-time (resp. circuit size) 2^{(α+ε) s} poly(|x|,t,s).\r\nOur second result is a polynomial-time search-to-decision reduction for the time-bounded Kolmogorov complexity problem in the average-case regime. Such a reduction was recently shown by Liu and Pass (FOCS'20), heavily relying on cryptographic techniques. Our reduction is more direct and additionally has the advantage of being length-preserving, and as such also applies in the exponential time/size regime.\r\nA central component in both of these results is the use of Kolmogorov and Levin’s Symmetry of Information Theorem."]} 
  3. Mulzer, Wolfgang; Phillips, Jeff M (Ed.)
    Let P be a set of m points in ℝ², let Σ be a set of n semi-algebraic sets of constant complexity in ℝ², let (S,+) be a semigroup, and let w: P → S be a weight function on the points of P. We describe a randomized algorithm for computing w(P∩σ) for every σ ∈ Σ in overall expected time O^*(m^{2s/(5s-4)} n^{(5s-6)/(5s-4)} + m^{2/3} n^{2/3} + m + n), where s > 0 is a constant that bounds the maximum complexity of the regions of Σ, and where the O^*(⋅) notation hides subpolynomial factors. For s ≥ 3, surprisingly, this bound is smaller than the best known bound for answering m such queries in an on-line manner, which takes O^*(m^{s/(2s-1)} n^{(2s-2)/(2s-1)} + m + n) time. Let Φ: Σ × P → {0,1} be the Boolean predicate (of constant complexity) such that Φ(σ,p) = 1 if p ∈ σ and 0 otherwise, and let Σ_Φ P = {(σ,p) ∈ Σ × P ∣ Φ(σ,p) = 1}. Our algorithm actually computes a partition ℬ_Φ of Σ_Φ P into bipartite cliques (bicliques) whose size (i.e., the sum of the sizes of the vertex sets of its bicliques) is O^*(m^{2s/(5s-4)} n^{(5s-6)/(5s-4)} + m^{2/3} n^{2/3} + m + n). It is straightforward to compute w(P∩σ) for all σ ∈ Σ from ℬ_Φ. Similarly, if η: Σ → S is a weight function on the regions of Σ, then ∑_{σ ∈ Σ: p ∈ σ} η(σ) can be computed from ℬ_Φ for every point p ∈ P in a straightforward manner. We also mention a few other applications of computing ℬ_Φ.
  4. Censor-Hillel, Keren; Grandoni, Fabrizio; Ouaknine, Joel; Puppis, Gabriele (Ed.)
    We study the problem of indexing a text T[1..n] to support pattern matching with wildcards. The input of a query is a pattern P[1..m] containing h ∈ [0, k] wildcard (a.k.a. don't-care) characters, and the output is the set of occurrences of P in T (i.e., the starting positions of substrings of T that match P), where k = o(log n) is fixed at index construction. A classic solution by Cole et al. [STOC 2004] provides an index with space complexity O(n ⋅ (c log n)^k / k!) and query time O(m + 2^h log log n + occ), where c > 1 is a constant and occ denotes the number of occurrences of P in T. We introduce a new data structure that significantly reduces space usage for highly repetitive texts while maintaining efficient query processing. Its space (in words) and query time are O(δ log(n/δ) ⋅ c^k (1 + (log^k(δ log n))/k!)) and O((m + 2^h + occ) log n), respectively. The parameter δ, known as substring complexity, is a recently introduced measure of repetitiveness that serves as a unifying and lower-bounding metric for several popular measures, including the number of phrases in the LZ77 factorization (denoted by z) and the number of runs in the Burrows-Wheeler Transform (denoted by r). Moreover, O(δ log(n/δ)) represents the optimal space required to encode the data in terms of n and δ, which helps us see how close our space is to the minimum required. In another trade-off, we match the query time of Cole et al.'s index using O(n + δ log(n/δ) ⋅ (c log δ)^{k+ε}/k!) space, where ε > 0 is an arbitrarily small constant. We also demonstrate how these techniques can be applied to a more general indexing problem, where the query pattern includes k-gaps (a gap can be interpreted as a contiguous sequence of wildcard characters). (A naive wildcard-matching baseline is sketched after this list.)
  5. Mulzer, Wolfgang; Phillips, Jeff M (Ed.)
    Polynomial partitioning techniques have recently led to improved geometric data structures for a variety of fundamental problems related to semialgebraic range searching and intersection searching in 3D and higher dimensions (e.g., see [Agarwal, Aronov, Ezra, and Zahl, SoCG 2019; Ezra and Sharir, SoCG 2021; Agarwal, Aronov, Ezra, Katz, and Sharir, SoCG 2022]). They have also led to improved algorithms for offline versions of semialgebraic range searching in 2D, via lens-cutting [Sharir and Zahl (2017)]. In this paper, we show that these techniques can yield new data structures for a number of other 2D problems, even for online queries:
    1) Semialgebraic range stabbing. We present a data structure for n semialgebraic ranges in 2D of constant description complexity with O(n^{3/2+ε}) preprocessing time and space, so that we can count the number of ranges containing a query point in O(n^{1/4+ε}) time, for an arbitrarily small constant ε > 0. (The query time bound is likely close to tight for this space bound.)
    2) Ray shooting amid algebraic arcs. We present a data structure for n algebraic arcs in 2D of constant description complexity with O(n^{3/2+ε}) preprocessing time and space, so that we can find the first arc hit by a query (straight-line) ray in O(n^{1/4+ε}) time. (The query bound is again likely close to tight for this space bound, and these bounds improve a result by Ezra and Sharir with near-n^{3/2} space and near-√n query time.)
    3) Intersection counting amid algebraic arcs. We present a data structure for n algebraic arcs in 2D of constant description complexity with O(n^{3/2+ε}) preprocessing time and space, so that we can count the number of intersection points with a query algebraic arc of constant description complexity in O(n^{1/2+ε}) time. In particular, this implies an O(n^{3/2+ε})-time algorithm for counting intersections between two sets of n algebraic arcs in 2D. (This generalizes a classical O(n^{3/2+ε})-time algorithm for circular arcs by Agarwal and Sharir from SoCG 1991.)
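As referenced in item 1 above, here is a minimal brute-force #SAT counter for a toy DeMorgan formula (AND/OR gates over literals, with negations free at the leaves); the formula encoding and the example formula are hypothetical, and this exhaustive 2^n enumeration is exactly the trivial baseline that the cited work asks to beat on subcubic-size formulas.

# Brute-force #SAT for a toy DeMorgan formula: internal nodes are binary AND/OR,
# leaves are literals (a variable or its negation). The exhaustive 2^n scan is
# the trivial baseline; the cited result seeks algorithms that beat it.
from itertools import product

# Hypothetical formula over variables x0..x2:
#   (x0 OR NOT x1) AND (x1 OR x2) AND (NOT x0 OR x2)
formula = ('AND',
           ('AND',
            ('OR', ('LIT', 0, True),  ('LIT', 1, False)),
            ('OR', ('LIT', 1, True),  ('LIT', 2, True))),
           ('OR', ('LIT', 0, False), ('LIT', 2, True)))

def evaluate(node, assignment):
    """Evaluate the formula tree under a tuple of Boolean variable values."""
    tag = node[0]
    if tag == 'LIT':
        _, var, positive = node
        return assignment[var] if positive else not assignment[var]
    if tag == 'AND':
        return evaluate(node[1], assignment) and evaluate(node[2], assignment)
    return evaluate(node[1], assignment) or evaluate(node[2], assignment)  # 'OR'

def count_sat(formula, num_vars):
    """Number of satisfying assignments, by exhaustive enumeration."""
    return sum(evaluate(formula, bits)
               for bits in product([False, True], repeat=num_vars))

print(count_sat(formula, 3))  # counts satisfying assignments of the toy formula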
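And as referenced in item 4 above, the following sketch shows the naive O(nm) scan for pattern matching with wildcards that the indexes in that work are designed to avoid by preprocessing T; the text, pattern, and wildcard symbol used here are hypothetical.

# Naive pattern matching with wildcards: report every starting position of the
# pattern P in the text T, where '?' in P matches any single character. This
# O(n*m) scan is the baseline that wildcard indexes avoid by preprocessing T.
def wildcard_occurrences(text, pattern, wildcard='?'):
    n, m = len(text), len(pattern)
    occurrences = []
    for start in range(n - m + 1):
        if all(p == wildcard or p == t
               for p, t in zip(pattern, text[start:start + m])):
            occurrences.append(start)
    return occurrences

print(wildcard_occurrences("abracadabra", "a?ra"))  # matches at positions 0 and 7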