Title: Algorithm 1012: DELAUNAYSPARSE: Interpolation via a Sparse Subset of the Delaunay Triangulation in Medium to High Dimensions
DELAUNAYSPARSE contains both serial and parallel codes written in Fortran 2003 (with OpenMP) for performing medium- to high-dimensional interpolation via the Delaunay triangulation. To accommodate the exponential growth in the size of the Delaunay triangulation in high dimensions, DELAUNAYSPARSE computes only a sparse subset of the complete Delaunay triangulation, as necessary for performing interpolation at the user-specified points. This article includes algorithm and implementation details, complexity and sensitivity analyses, usage information, and a brief performance study.
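To make the interpolation step concrete, the sketch below performs piecewise-linear Delaunay interpolation in Python, using SciPy's full triangulation as a stand-in. DELAUNAYSPARSE itself is a Fortran 2003 package and, unlike this sketch, constructs only the simplices containing the query points; all names here are SciPy's, not the package's.

```python
# Piecewise-linear interpolation over a Delaunay triangulation (SciPy stand-in;
# DELAUNAYSPARSE avoids building the full triangulation shown here).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
d = 4
pts = rng.random((200, d))          # 200 data sites in dimension d
vals = np.sin(pts.sum(axis=1))      # response values at the sites
q = rng.random((5, d))              # interpolation (query) points

tri = Delaunay(pts)                 # full triangulation: up to O(n^ceil(d/2)) simplices
simp = tri.find_simplex(q)          # index of the simplex containing each query

for k, s in enumerate(simp):
    if s == -1:                     # query outside the convex hull: extrapolation needed
        continue
    # Barycentric coordinates of q[k] within its containing simplex.
    T = tri.transform[s]
    b = T[:d] @ (q[k] - T[d])
    w = np.append(b, 1.0 - b.sum())           # weights sum to one
    print(w @ vals[tri.simplices[s]])         # piecewise-linear interpolant value
```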
Award ID(s):
1838271
NSF-PAR ID:
10272224
Journal Name:
ACM Transactions on Mathematical Software
Volume:
46
Issue:
4
ISSN:
0098-3500
Page Range / eLocation ID:
1 to 20
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

    The work presented here introduces a new data set for inclusion of energetic electron precipitation (EEP) in climate model simulations. Measurements made by the medium energy proton and electron detector (MEPED) instruments onboard both the Polar Orbiting Environmental Satellites and the European Space Agency Meteorological Operational satellites are used to create global maps of precipitating electron fluxes. Unlike most previous data sets, the electron fluxes are computed using both the 0° and 90° MEPED detectors. Conversion of observed, broadband electron count rates to differential spectral fluxes uses a linear combination of analytical functions instead of a single function. Two-dimensional maps of electron spectral flux are created using Delaunay triangulation to account for the relatively sparse nature of the MEPED sampling. This improves on previous studies that use a 1D interpolation over magnetic local time or L‐shell zonal averaging of the MEPED data. A Whole Atmosphere Community Climate Model (WACCM) simulation of the southern hemisphere 2003 winter using the new precipitating electron data set is shown to agree more closely with observations of odd nitrogen than WACCM simulations using other MEPED‐based electron data sets. Simulated EEP‐induced odd nitrogen increases led to ozone losses of more than 15% in the polar stratosphere near 10 hPa in September of 2003.
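    As an illustration of the gridding step described above (not the paper's actual pipeline), the sketch below uses SciPy's griddata, which triangulates scattered sample locations with Delaunay and interpolates linearly within each triangle, on synthetic flux data over a magnetic-local-time / L-shell grid.

```python
# Delaunay-based gridding of sparse samples onto a regular 2D grid.
# The data here are synthetic, not MEPED fluxes.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
mlt = rng.uniform(0.0, 24.0, 500)        # magnetic local time of each sample (hours)
lshell = rng.uniform(2.0, 8.0, 500)      # L-shell of each sample
flux = np.exp(-(lshell - 5.0) ** 2) * (1 + 0.1 * rng.standard_normal(500))

# griddata triangulates the scattered (mlt, lshell) samples with Delaunay
# and interpolates linearly; points outside the convex hull become NaN.
mlt_g, l_g = np.meshgrid(np.linspace(0, 24, 49), np.linspace(2, 8, 25))
flux_map = griddata((mlt, lshell), flux, (mlt_g, l_g), method="linear")
print(flux_map.shape)                    # (25, 49) gridded flux map
```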
  2.
    In this article, we show that many sequential randomized incremental algorithms are in fact parallel. We consider algorithms for several problems, including Delaunay triangulation, linear programming, closest pair, smallest enclosing disk, least-element lists, and strongly connected components. We analyze the dependencies between iterations in an algorithm and show that the dependence structure is shallow with high probability or that, by violating some dependencies, the structure is shallow and the work is not increased significantly. We identify three types of algorithms based on their dependencies and present a framework for analyzing each type. Using the framework gives work-efficient polylogarithmic-depth parallel algorithms for most of the problems that we study. This article shows the first incremental Delaunay triangulation algorithm with optimal work and polylogarithmic depth. This result is important, since most implementations of parallel Delaunay triangulation use the incremental approach. Our results also improve bounds on strongly connected components and least-element lists and significantly simplify parallel algorithms for several problems. 
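    The shallow-dependence intuition can be illustrated with a toy example (not taken from the article): in a random insertion order, the chain of successive prefix minima, a simple stand-in for one iteration-dependency path, has expected length H_n = O(log n).

```python
# Toy illustration of shallow dependencies under random insertion order:
# the chain of strict prefix minima has expected length H_n ~ ln(n).
import random
import math

def prefix_minima_depth(n, seed):
    order = list(range(n))
    random.Random(seed).shuffle(order)
    depth, best = 0, math.inf
    for x in order:                 # process elements in random (incremental) order
        if x < best:                # this iteration improves the running minimum,
            best = x                # so it depends on the previous improving one
            depth += 1
    return depth

n = 100_000
trials = [prefix_minima_depth(n, s) for s in range(20)]
print(sum(trials) / len(trials), math.log(n))   # average depth tracks ln(n) ~ 11.5
```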
  3. Persistent Homology (PH) is a method of Topological Data Analysis that analyzes the topological structure of data to help data scientists infer relationships in the data to assist in informed decision-making. A significant component in the computation of PH is the construction and use of a complex that represents the topological structure of the data. Some complex types are fast to construct but space inefficient, whereas others are costly to construct and space efficient. Unfortunately, existing complex types are not both fast to construct and compact. This paper works to increase the scope of PH to support the computation of low-dimensional homologies (H0–H10) in high-dimension, big data. In particular, this paper exploits the desirable properties of the Vietoris–Rips Complex (VR-Complex) and the Delaunay Complex in order to construct a sparsified complex. The VR-Complex uses a distance matrix to quickly generate a complex up to the desired homology dimension. In contrast, the Delaunay Complex works at the dimensionality of the data to generate a sparsified complex. While construction of the VR-Complex is fast, its size grows exponentially with the size and dimension of the data set; in contrast, the Delaunay Complex is significantly smaller for any given data dimension. However, its construction requires the computation of a Delaunay triangulation, which has high computational complexity. As a result, it is difficult to construct a Delaunay Complex for data in dimensions d > 6 that contains more than a few hundred points. The techniques in this paper enable a topology-preserving sparsification of k-simplices (where k ≪ d) to quickly generate a reduced sparsified complex sufficient to compute homologies up to dimension k, irrespective of the data dimensionality d.
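    The VR side of this trade-off is easy to sketch: the toy Python below builds the Vietoris–Rips (clique) complex up to simplex dimension k from a distance matrix. It is a brute-force illustration of why the complex is fast to build but grows quickly, not the paper's sparsification algorithm.

```python
# Brute-force Vietoris-Rips complex up to simplex dimension k.
from itertools import combinations
import numpy as np

def rips_complex(dist, eps, k):
    """All simplices of dimension <= k whose pairwise distances are <= eps."""
    n = dist.shape[0]
    edges = {(i, j) for i, j in combinations(range(n), 2) if dist[i, j] <= eps}
    simplices = [{(i,) for i in range(n)}, edges]      # 0- and 1-simplices
    for _ in range(2, k + 1):
        higher = set()
        for simplex in simplices[-1]:
            for v in range(simplex[-1] + 1, n):        # extend by a larger vertex
                if all((u, v) in edges for u in simplex):
                    higher.add(simplex + (v,))         # new vertex sees all others
        simplices.append(higher)
    return simplices

rng = np.random.default_rng(2)
pts = rng.random((30, 8))                              # 30 points in dimension d = 8
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
complex_ = rips_complex(dist, eps=0.9, k=2)            # simplices up to triangles
print([len(s) for s in complex_])                      # counts per dimension
```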
  4. Abstract

    We describe a discrete Laplacian suitable for any triangle mesh, including those that are nonmanifold or nonorientable (with or without boundary). Our Laplacian is a robust drop‐in replacement for the usual cotan matrix, and is guaranteed to have nonnegative edge weights on both interior and boundary edges, even for extremely poor‐quality meshes. The key idea is to build what we call a “tufted cover” over the input domain, which has nonmanifold vertices but manifold edges. Since all edges are manifold, we can flip to an intrinsic Delaunay triangulation; our Laplacian is then the cotan Laplacian of this new triangulation. This construction also provides a high‐quality point cloud Laplacian, via a nonmanifold triangulation of the point set. We validate our Laplacian on a variety of challenging examples (including all models from Thingi10k), and a variety of standard tasks including geodesic distance computation, surface deformation, parameterization, and computing minimal surfaces.
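    For context, the sketch below assembles the standard per-triangle cotan Laplacian that this construction makes robust; the tufted cover and the intrinsic Delaunay flips that guarantee nonnegative weights are omitted here.

```python
# Standard cotan Laplacian, assembled per triangle (the baseline the
# tufted-cover construction hardens against poor-quality meshes).
import numpy as np
import scipy.sparse as sp

def cotan_laplacian(V, F):
    """V: (n, 3) vertex positions, F: (m, 3) triangle vertex indices."""
    n = V.shape[0]
    rows, cols, vals = [], [], []
    for a, b, c in F:
        for i, j, k in ((a, b, c), (b, c, a), (c, a, b)):
            # Half-cotangent of the angle at k contributes to edge (i, j).
            u, v = V[i] - V[k], V[j] - V[k]
            w = 0.5 * np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            rows += [i, j, i, j]
            cols += [j, i, i, j]
            vals += [-w, -w, w, w]          # off-diagonal -w, diagonal +w
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# A single right triangle: every row of the Laplacian should sum to zero.
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
F = np.array([[0, 1, 2]])
L = cotan_laplacian(V, F)
print(L.toarray().sum(axis=1))   # ~0 for each row
```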

  5. In many applications of graph processing, the input data is often generated from an underlying geometric point data set. However, existing high-performance graph processing frameworks assume that the input data is given as a graph. Therefore, to use these frameworks, the user must write or use external programs based on computational geometry algorithms to convert their point data set to a graph, which requires more programming effort and can also lead to performance degradation. In this paper, we present our ongoing work on the GeoGraph framework for shared-memory multicore machines, which seamlessly supports routines for parallel geometric graph construction and parallel graph processing within the same environment. GeoGraph supports graph construction based on k-nearest neighbors, Delaunay triangulation, and β-skeleton graphs. It can then pass these generated graphs to over 25 graph algorithms. GeoGraph contains high-performance parallel primitives and algorithms implemented in C++, and includes a Python interface. We present four examples of using GeoGraph, and some experimental results showing good parallel speedups and improvements over the Higra library. We conclude with a vision of future directions for research in bridging graph and geometric data processing.
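    The sketch below illustrates the kind of pipeline GeoGraph unifies, using SciPy as a stand-in rather than GeoGraph's own API: build a k-nearest-neighbor graph from a point set, then run a graph algorithm (connected components) on it.

```python
# Geometric graph construction followed by graph processing (SciPy stand-in).
import numpy as np
from scipy.spatial import cKDTree
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
pts = rng.random((10_000, 3))            # the underlying geometric point data set

k = 8
tree = cKDTree(pts)
dist, idx = tree.query(pts, k=k + 1)     # neighbor 0 is the query point itself

rows = np.repeat(np.arange(len(pts)), k)
cols = idx[:, 1:].ravel()                # drop the self-neighbor column
adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(pts),) * 2)

# Graph-processing step: components of the symmetrized k-NN graph.
ncomp, labels = connected_components(adj, directed=False)
print(ncomp)
```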