This paper describes a numerically robust data structure for encoding intrinsic triangulations of polyhedral surfaces. Many applications demand a correspondence between the intrinsic triangulation and the input surface, but existing data structures either rely on floating-point values to encode correspondence, or do not support remeshing operations beyond basic edge flips. We instead provide an integer-based data structure that guarantees valid correspondence, even for meshes with near-degenerate elements. Our starting point is the framework of normal coordinates from geometric topology, which we extend to the broader set of operations needed for mesh processing (vertex insertion, edge splits, etc.). The resulting data structure can be used as a drop-in replacement for earlier schemes, automatically improving reliability across a wide variety of applications. As a stress test, we successfully compute an intrinsic Delaunay refinement and associated subdivision for all manifold meshes in the Thingi10k dataset. In turn, we can compute reliable and highly accurate solutions to partial differential equations even on extremely low-quality meshes.
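To illustrate the kind of intrinsic remeshing operation referred to above, the sketch below shows the standard local Delaunay test used to drive intrinsic edge flips, computed from intrinsic edge lengths alone. This is a minimal Python sketch with hypothetical helper names; it does not reproduce the paper's integer (normal-coordinate) encoding, only the geometric test that such an encoding supports.

```python
import math

def opposite_angle(a, b, c):
    """Interior angle opposite side `a` in a triangle with side lengths a, b, c."""
    # Law of cosines; clamp to guard against round-off for near-degenerate triangles.
    cos_a = max(-1.0, min(1.0, (b*b + c*c - a*a) / (2.0 * b * c)))
    return math.acos(cos_a)

def is_intrinsic_delaunay(l_shared, l_b1, l_c1, l_b2, l_c2, eps=1e-12):
    """The shared edge is locally Delaunay iff the two angles opposite it
    (one in each adjacent triangle) sum to at most pi; otherwise it gets flipped."""
    alpha = opposite_angle(l_shared, l_b1, l_c1)
    beta = opposite_angle(l_shared, l_b2, l_c2)
    return alpha + beta <= math.pi + eps

# Two skinny triangles sharing an edge of length 2.0: the edge is not Delaunay.
print(is_intrinsic_delaunay(2.0, 1.2, 1.2, 1.2, 1.2))  # False -> flip this edge
```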
Constrained Delaunay Tetrahedrization: A Robust and Practical Approach
We present a numerically robust algorithm for computing the constrained Delaunay tetrahedrization (CDT) of a piecewise-linear complex, which has a 100% success rate on the 4408 valid models in the Thingi10k dataset. We build on the underlying theory of the well-known tetgen software, but use a floating-point implementation based on indirect geometric predicates to implicitly represent Steiner points: this new approach dramatically simplifies the implementation, removing the need for ad-hoc tolerances in geometric operations. Our approach leads to a robust and parameter-free implementation, with an empirically manageable number of added Steiner points. Furthermore, our algorithm addresses a major gap in tetgen's theory which may lead to algorithmic failure on valid models, even when assuming perfect precision in the calculations. Our output tetrahedrization conforms with the input geometry without approximations. We can further round our output to floating-point coordinates for downstream applications, which almost always results in valid floating-point meshes unless the input triangulation is very close to being degenerate.
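The phrase "indirect geometric predicates" refers to evaluating predicates on points that are represented implicitly by the entities defining them (for example, a segment and a plane) rather than by rounded coordinates. The following is only a conceptual Python sketch of that idea using exact rational arithmetic; the function names are hypothetical, and the implementation described in the abstract is floating-point based rather than rational.

```python
from fractions import Fraction

def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def implicit_segment_plane_point(a, b, p, q, r):
    """Intersection of segment a-b with the plane through p, q, r, kept exact by
    rational arithmetic (coordinates are Fractions; segment not parallel to plane)."""
    n = cross(sub(q, p), sub(r, p))
    t = Fraction(dot(n, sub(p, a)), dot(n, sub(b, a)))
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def orient3d(a, b, c, d):
    """Sign of det[b-a, c-a, d-a]: +1, 0, or -1 (exact for rational inputs)."""
    det = dot(sub(b, a), cross(sub(c, a), sub(d, a)))
    return (det > 0) - (det < 0)

# Example: the segment (0,0,-1)-(0,0,1) meets the plane z = 0 exactly at the origin.
F = Fraction
a, b = (F(0), F(0), F(-1)), (F(0), F(0), F(1))
p, q, r = (F(-1), F(-1), F(0)), (F(1), F(-1), F(0)), (F(0), F(1), F(0))
print(implicit_segment_plane_point(a, b, p, q, r))
```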
- PAR ID: 10529177
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Graphics
- Volume: 42
- Issue: 6
- ISSN: 0730-0301
- Page Range / eLocation ID: 1 to 15
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We consider the problem of designing and analyzing differentially private algorithms that can be implemented on discrete models of computation in strict polynomial time, motivated by known attacks on floating point implementations of real-arithmetic differentially private algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomial-time algorithms. As a case study, we examine the basic problem of approximating the histogram of a categorical dataset over a possibly large data universe X. The classic Laplace Mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP 2012), take time at least linear in |X|, which can be exponential in the bit length of the input. In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee. One of our algorithms produces a sparse histogram as output. Its "per-bin accuracy" (the error on individual bins) is worse than that of the Laplace Mechanism by a factor of log |X|, but we prove a lower bound showing that this is necessary for any algorithm that produces a sparse histogram. A second algorithm avoids this lower bound, and matches the per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable representation of a dense histogram; it is based on an (n+1)-wise independent implementation of an appropriately clamped version of the Discrete Geometric Mechanism.
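As a point of comparison for the Discrete Geometric Mechanism mentioned above, the sketch below adds clamped two-sided geometric noise to each bin of a histogram. It is only an illustrative Python/NumPy sketch (assuming adding or removing one record changes a single bin by at most 1) and uses floating-point sampling, so it deliberately does not meet the strict polynomial-time, discrete-arithmetic requirements the paper's algorithms are designed to satisfy.

```python
import numpy as np

def clamped_geometric_histogram(counts, epsilon, n, rng=None):
    """Add two-sided geometric ('discrete Laplace') noise with ratio exp(-epsilon)
    to each bin count, then clamp to the feasible range [0, n]."""
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 - np.exp(-epsilon)  # difference of two Geometric(p) draws is two-sided geometric
    counts = np.asarray(counts)
    noise = rng.geometric(p, size=counts.shape) - rng.geometric(p, size=counts.shape)
    return np.clip(counts + noise, 0, n)

# Example: a 5-bin histogram over a dataset of n = 100 records.
print(clamped_geometric_histogram([40, 30, 20, 10, 0], epsilon=1.0, n=100))
```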
We study the problem of fitting a piecewise affine (PWA) function to input–output data. Our algorithm divides the input domain into finitely many regions whose shapes are specified by a user-provided template and such that the input–output data in each region are fit by an affine function within a user-provided error tolerance. We first prove that this problem is NP-hard. Then, we present a top-down algorithmic approach for solving the problem. The algorithm considers subsets of the data points in a systematic manner, trying to fit an affine function for each subset using linear regression. If regression fails on a subset, the algorithm extracts a minimal set of points from the subset (an unsatisfiable core) that is responsible for the failure. The identified core is then used to split the current subset into smaller ones. By combining this top-down scheme with a set-covering algorithm, we derive an overall approach that provides optimal PWA models for a given error tolerance, where optimality refers to minimizing the number of pieces of the PWA model. We demonstrate our approach on three numerical examples that include PWA approximations of a widely used nonlinear insulin–glucose regulation model and a double inverted pendulum with soft contacts.
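The core fit-and-check step of such a top-down scheme can be sketched as follows. This is a hypothetical Python/NumPy illustration: it fits an affine function by least squares and checks the error tolerance, but it only flags the worst-violating points rather than extracting a true minimal unsatisfiable core as the paper's algorithm does.

```python
import numpy as np

def fit_affine_within_tol(X, y, tol):
    """Least-squares affine fit y ~ A @ x + b on one candidate subset of the data.
    Returns (coefficients, None) if every residual is within tol; otherwise
    (None, indices of the worst-violating points) to guide a split."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # append constant column for the offset
    theta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    residuals = np.abs(Xa @ theta - y)
    if residuals.max() <= tol:
        return theta, None
    return None, np.argsort(residuals)[-2:]  # crude stand-in for a minimal unsat core
```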
An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions. We take a step towards solving this problem through the lens of axiomatic attribution of neural networks. Our theory is grounded in the recent work, Integrated Gradients (IG) [STY17], in axiomatically attributing a neural network’s output change to its input change. We propose training objectives in classic robust optimization models to achieve robust IG attributions. Our objectives give principled generalizations of previous objectives designed for robust predictions, and they naturally degenerate to classic soft-margin training for one-layer neural networks. We also generalize previous theory and prove that the objectives for different robust optimization models are closely related. Experiments demonstrate the effectiveness of our method, and also point to intriguing problems which hint at the need for better optimization techniques or better neural network architectures for robust attribution training.
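For context, plain Integrated Gradients can be approximated with a Riemann sum along the straight-line path from a baseline to the input. The sketch below is a small NumPy illustration on a toy model with an analytic gradient; it shows only the attribution computation, not the robust training objectives proposed in the paper.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Riemann-sum approximation of IG: (x - x') times the average gradient of F
    along the straight-line path from the baseline x' to the input x."""
    alphas = (np.arange(steps) + 0.5) / steps
    path_grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

# Toy model F(x) = sigmoid(w . x), whose gradient is sigmoid'(w . x) * w.
w = np.array([1.0, -2.0, 0.5])
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
grad_f = lambda x: sigmoid(w @ x) * (1.0 - sigmoid(w @ x)) * w

x = np.array([0.8, 0.1, -0.3])
print(integrated_gradients(grad_f, x, baseline=np.zeros(3)))
```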
Robust optimization is a widely studied area in operations research, where the algorithm takes as input a range of values and outputs a single solution that performs well for the entire range. Specifically, a robust algorithm aims to minimize regret, defined as the maximum difference between the solution’s cost and that of an optimal solution in hindsight once the input has been realized. For graph problems in P, such as shortest path and minimum spanning tree, robust polynomial-time algorithms that obtain a constant approximation on regret are known. In this paper, we study robust algorithms for minimizing regret in NP-hard graph optimization problems, and give constant approximations on regret for the classical traveling salesman and Steiner tree problems.
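For reference, the regret objective described above can be written as follows, where $\mathcal{I}$ denotes the set of possible input realizations, $\operatorname{cost}_I$ the cost under realization $I$, and $\mathrm{OPT}(I)$ an optimal solution in hindsight (notation chosen here for illustration):

```latex
\operatorname{regret}(S) \;=\; \max_{I \in \mathcal{I}} \Big( \operatorname{cost}_I(S) - \operatorname{cost}_I\!\big(\mathrm{OPT}(I)\big) \Big)
```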