Title: Discrete conformal equivalence of polyhedral surfaces
This paper describes a numerical method for surface parameterization, yielding maps that are locally injective and discretely conformal in an exact sense. Unlike previous methods for discrete conformal parameterization, the method is guaranteed to work for any manifold triangle mesh, with no restrictions on triangulation quality or cone singularities. Recent theoretical work shows that each task can be formulated as a convex problem where the triangulation is allowed to change---we complete the picture by introducing the machinery needed to actually construct a discrete conformal map. In particular, we introduce a new scheme for tracking correspondence between triangulations based on normal coordinates, and a new interpolation procedure based on layout in the light cone. Stress tests involving difficult cone configurations and near-degenerate triangulations indicate that the method is extremely robust in practice, and provides high-quality interpolation even on meshes with poor elements.
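
The key definition behind this abstract: two polyhedral metrics are discretely conformally equivalent if their edge lengths are related by per-vertex log scale factors u, via l'_ij = exp((u_i + u_j)/2) * l_ij. Below is a minimal illustrative sketch of that rescaling (the function names and the triangle-inequality check are ours, not the paper's code); when a rescaled triangle degenerates, the triangle inequality fails, which is exactly why the formulation must allow the triangulation to change.

```python
import numpy as np

def rescale_edge_lengths(lengths, edges, u):
    """Discrete conformal rescaling l'_ij = exp((u_i + u_j)/2) * l_ij.

    lengths : (E,) original edge lengths
    edges   : (E, 2) vertex indices of each edge
    u       : (V,) log conformal factors at the vertices
    """
    ui, uj = u[edges[:, 0]], u[edges[:, 1]]
    return np.exp(0.5 * (ui + uj)) * lengths

def violates_triangle_inequality(a, b, c):
    # A rescaled triangle with side lengths (a, b, c) may degenerate;
    # robust methods respond by changing the triangulation (edge flips).
    return a + b <= c or b + c <= a or c + a <= b
```
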
Award ID(s): 1943123
NSF-PAR ID: 10313021
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: ACM Transactions on Graphics
Volume: 40
Issue: 4
ISSN: 0730-0301
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Angle-preserving or conformal surface parameterization has proven to be a powerful tool across applications ranging from geometry processing, to digital manufacturing, to machine learning, yet conformal maps can still suffer from severe area distortion. Cone singularities provide a way to mitigate this distortion, but finding the best configuration of cones is notoriously difficult. This paper develops a strategy that is globally optimal in the sense that it minimizes total area distortion among all possible cone configurations (number, placement, and size) that have no more than a fixed total cone angle. A key insight is that, for the purpose of optimization, one should not work directly with curvature measures (which naturally represent cone configurations), but can instead apply Fenchel-Rockafellar duality to obtain a formulation involving only ordinary functions. The result is a convex optimization problem, which can be solved via a sequence of sparse linear systems easily built from the usual cotangent Laplacian (see sketch (1) after this list). The method supports user-defined notions of importance, constraints on cone angles (e.g., positive, or within a given range), and sophisticated boundary conditions (e.g., convex, or polygonal). We compare our approach to previous techniques on a variety of challenging models, often achieving dramatically lower distortion, and demonstrating that global optimality leads to extreme robustness in the presence of noise or poor discretization.
  2. In this paper, the authors propose a new dimension-reduction method for level-set-based topology optimization of conforming thermal structures on free-form surfaces. Both the Hamilton-Jacobi equation and the Laplace equation, the two governing PDEs for boundary evolution and thermal conduction, are transformed from the 3D manifold to a 2D rectangular domain using conformal parameterization. The new method significantly simplifies the computation of topology optimization on a manifold without loss of accuracy. This is possible because, under a conformal mapping, covariant derivatives on the manifold can be represented as Euclidean gradient operators multiplied by a scalar factor (see sketch (2) after this list). The original governing equations defined on the 3D manifold can then be properly modified and solved on a 2D domain. The objective function, constraint, and velocity field are likewise computed with finite element analysis (FEA) on the 2D parameter domain in suitably modified form. In this sense, a 3D topology optimization problem is solved equivalently on the 2D parameter domain, which greatly reduces the computational cost and complexity of the algorithm. The proposed concept is demonstrated through two examples of heat conduction on manifolds.
  3. Abstract

    As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms for fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. As high-order methods are applied to a wider range of problems, and correspondingly to a wider range of numerical tasks, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications, such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We extend barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods (sketch (3) after this list shows the 1D building block). We provide efficient algorithms for their implementation and demonstrate their effectiveness in the spectral/hp element library Nektar++ by running a series of baseline evaluations against the 'standard' Lagrangian method, in which an interpolation matrix is generated and matrix multiplication is applied to evaluate the solution at a given point. We present results from a rigorous series of benchmarking tests for a variety of element shapes, polynomial orders, and dimensions. We show that when the point of interest is repeatedly evaluated, the barycentric method performs at worst 50% slower than a cached matrix evaluation. However, when the point of interest changes repeatedly, so that the interpolation matrix must be regenerated in the 'standard' approach, the barycentric method yields far greater performance, with a minimum speedup factor of 7×. Furthermore, when derivatives of the solution evaluation are also required, the barycentric method in general slightly outperforms the cached interpolation matrix method across all elements and orders, with up to a 30% speedup. Finally, we investigate a real-world example of scalar transport using a non-conformal discontinuous Galerkin simulation, in which we observe around a 6× speedup in computational time for the barycentric method compared to the matrix-based approach. We also explore the complexity of both interpolation methods and show that barycentric interpolation requires O(k) storage, compared to a best-case space complexity of O(k²) for the Lagrangian interpolation matrix method.
  4. In an extremal eigenvalue problem, one considers a family of eigenvalue problems, each with discrete spectra, and extremizes a chosen eigenvalue over the family. In this chapter, we consider eigenvalue problems defined on Riemannian manifolds and extremize over the metric structure. For example, we consider the problem of maximizing the principal Laplace–Beltrami eigenvalue over a family of closed surfaces of fixed volume. Computational approaches to such extremal geometric eigenvalue problems present new computational challenges and require novel numerical tools, such as the parameterization of conformal classes and the development of accurate and efficient methods to solve eigenvalue problems on domains with nontrivial genus and boundary. We highlight recent progress on computational approaches for extremal geometric eigenvalue problems, including (i) maximizing Laplace–Beltrami eigenvalues on closed surfaces and (ii) maximizing Steklov eigenvalues on surfaces with boundary (see sketch (4) after this list).
  5. Abstract

    Gridded monthly rainfall estimates can be used for a number of research applications, including hydrologic modeling and weather forecasting. Automated interpolation algorithms, such as the "autoKrige" function in R, can produce gridded rainfall estimates that validate well but produce unrealistic spatial patterns. In this work, an optimized geostatistical kriging approach is used to interpolate relative rainfall anomalies, which are then combined with long-term means to develop the gridded estimates. The optimization consists of the following: 1) determining the most appropriate offset (constant) to use when log-transforming data; 2) eliminating poor-quality data prior to interpolation; 3) detecting erroneous maps using a machine learning algorithm; and 4) selecting the most appropriate parameterization scheme for fitting the model used in the interpolation (the log-offset transform from step 1 is illustrated in sketch (5) after this list). Results of this effort include a 30-yr (1990–2019), high-resolution (250-m) gridded monthly rainfall time series for the state of Hawai‘i. Leave-one-out cross validation (LOOCV) is performed using an extensive network of 622 observation stations. LOOCV results are in good agreement with observations (R² = 0.78; MAE = 55 mm month⁻¹; 1.4%); however, predictions can underestimate high rainfall observations (bias = 34 mm month⁻¹; −1%) due to a well-known smoothing effect that occurs with kriging. This research highlights the fact that validation statistics should not be the sole source of error assessment and that default parameterizations for automated interpolation may need to be modified to produce realistic gridded rainfall surfaces. Data products can be accessed through the Hawai‘i Climate Data Portal (HCDP; http://www.hawaii.edu/climate-data-portal).

    Significance Statement

    A new method is developed to map rainfall in Hawai‘i using an optimized geostatistical kriging approach. A machine learning technique is used to detect erroneous rainfall maps and several conditions are implemented to select the optimal parameterization scheme for fitting the model used in the kriging interpolation. A key finding is that optimization of the interpolation approach is necessary because maps may validate well but have unrealistic spatial patterns. This approach demonstrates how, with a moderate amount of data, a low-level machine learning algorithm can be trained to evaluate and classify an unrealistic map output.

     
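Sketch (1), for the cone-singularity abstract: the convex problem there is solved through sparse linear systems built from the cotangent Laplacian. Below is a minimal SciPy construction of that matrix plus a pinned solve; this is a generic building block, not the paper's duality-based cone optimization itself.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cotan_laplacian(V, F):
    """Cotangent Laplacian of a triangle mesh. V: (n, 3) positions, F: (m, 3) faces."""
    n = V.shape[0]
    rows, cols, vals = [], [], []
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            e1, e2 = V[i] - V[o], V[j] - V[o]
            w = 0.5 * e1.dot(e2) / np.linalg.norm(np.cross(e1, e2))  # half-cotangent of the angle at o
            rows += [i, j, i, j]; cols += [j, i, i, j]
            vals += [-w, -w, w, w]
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

def solve_pinned(L, b):
    """Solve L u = b with vertex 0 pinned to 0, removing the constant null space."""
    n = L.shape[0]
    idx = np.arange(1, n)
    u = np.zeros(n)
    u[idx] = spla.spsolve(L[idx][:, idx], b[idx])
    return u
```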
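
Sketch (2), for the topology-optimization abstract: the dimension reduction rests on a standard identity for conformal metrics in two dimensions. If the conformal parameterization pulls the surface metric back to g = λ²(du² + dv²) for a positive factor λ, the manifold operators become pointwise-rescaled Euclidean ones, which is the precise sense in which covariant derivatives reduce to flat gradients multiplied by a scalar:

```latex
% Conformal metric on the parameter domain: g = \lambda^2 (du^2 + dv^2)
\nabla_{\!g} f = \lambda^{-2}\,\nabla f,
\qquad
\Delta_g f = \lambda^{-2}\,\Delta f .
```

This is why the Hamilton-Jacobi and Laplace equations can be solved on the flat 2D domain with only a pointwise modification.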
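
Sketch (3), for the spectral/hp abstract: the 1D building block that the paper extends to 2D and 3D elements is barycentric Lagrange interpolation (after Berrut and Trefethen). A minimal version, assuming interpolation nodes x, values f, and an evaluation point t are given:

```python
import numpy as np

def barycentric_weights(x):
    """w_j = 1 / prod_{k != j} (x_j - x_k); computed once per node set."""
    n = len(x)
    w = np.ones(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= x[j] - x[k]
    return w

def barycentric_eval(x, f, w, t):
    """Evaluate the interpolant of (x, f) at t using precomputed weights w."""
    d = t - x
    hit = np.isclose(d, 0.0)
    if hit.any():                 # t coincides with an interpolation node
        return f[np.argmax(hit)]
    q = w / d
    return q.dot(f) / q.sum()
```

With the weights cached, each new point costs O(k) time and O(k) storage, which is the advantage the benchmarks measure against the O(k²) interpolation-matrix approach.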
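
Sketch (4), for the extremal-eigenvalue abstract: the inner solve in such computations is a generalized sparse eigenproblem L φ = λ M φ. A minimal sketch using SciPy's shift-invert Lanczos, assuming a stiffness matrix L (e.g., the cotangent Laplacian of sketch (1)) and a lumped (diagonal) mass matrix M; normalizing by total area is one simple stand-in for the fixed-volume constraint:

```python
import scipy.sparse.linalg as spla

def principal_eigenvalue(L, M):
    """First nonzero eigenvalue of L phi = lambda M phi, scale-normalized."""
    # k=2: the smallest eigenvalue is 0 (constant functions); a small negative
    # shift keeps the shift-invert factorization nonsingular despite L's null space.
    vals, _ = spla.eigsh(L, k=2, M=M, sigma=-1e-2, which="LM")
    lam1 = sorted(vals)[1]
    area = M.diagonal().sum()     # total surface area for a lumped mass matrix
    return lam1 * area            # invariant under uniform rescaling of the surface
```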
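
Sketch (5), for the rainfall abstract: step 1 of the optimization chooses an offset constant c for log-transforming the relative anomalies. The transform pair is easy to state concretely; the names below are ours, and the kriging itself would be supplied by a geostatistics library:

```python
import numpy as np

def to_log_anomaly(rain, long_term_mean, c):
    """Relative rainfall anomaly on a log scale: log((rain + c) / (mean + c))."""
    return np.log((rain + c) / (long_term_mean + c))

def from_log_anomaly(z, long_term_mean, c):
    """Invert the transform after interpolation, back to rainfall units."""
    return np.exp(z) * (long_term_mean + c) - c

# The anomalies z would then be interpolated (e.g., by ordinary kriging) on the
# 250-m grid, and from_log_anomaly applied to recover monthly rainfall.
```

The offset matters because monthly rainfall can be zero: too small a c inflates the skew of the log-transformed data, while too large a c flattens real variability.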