 Award ID(s):
 2017980
 NSFPAR ID:
 10316747
 Editor(s):
 Buchin, Kevin and
 Date Published:
 Journal Name:
 Leibniz International Proceedings in Informatics
 Volume:
 189
 ISSN:
 1868-8969
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

We first introduce the notion of meta-rank for a 2-parameter persistence module, an invariant that captures the information behind images of morphisms between 1-D slices of the module. We then define the meta-diagram of a 2-parameter persistence module as the Möbius inversion of the meta-rank, resulting in a function that takes values in signed 1-parameter persistence modules. We show that the meta-rank and meta-diagram contain information equivalent to the rank invariant and the signed barcode. This equivalence leads to computational benefits: we introduce an algorithm for computing the meta-rank and meta-diagram of a 2-parameter module M indexed by a bifiltration of n simplices in O(n^3) time, improving upon the existing algorithm for computing the signed barcode, which has O(n^4) time complexity. This also allows us to improve the existing upper bound on the number of rectangles in the rank decomposition of M from O(n^4) to O(n^3). In addition, we define notions of erosion distance between meta-ranks and between meta-diagrams, and show that under these distances, meta-ranks and meta-diagrams are stable with respect to the interleaving distance. Lastly, the meta-diagram can be visualized in an intuitive fashion as a persistence diagram of diagrams, which generalizes the well-understood persistence diagram in the 1-parameter setting.
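In the 1-parameter case, the Möbius inversion underlying the meta-diagram reduces to the familiar inclusion-exclusion that recovers a persistence diagram from the rank invariant. A minimal sketch of that 1-parameter inversion on a finite grid (a simplified illustration, not the paper's 2-parameter algorithm):

```python
def mobius_inversion(rank, n):
    """Recover interval multiplicities (the persistence diagram) from a
    rank function on the finite grid {0, ..., n-1} by inclusion-exclusion:
    mult(b, d) = rk(b, d) - rk(b-1, d) - rk(b, d+1) + rk(b-1, d+1)."""
    def rk(b, d):
        # rank is zero outside the grid
        if b < 0 or d >= n or b > d:
            return 0
        return rank(b, d)

    diagram = {}
    for b in range(n):
        for d in range(b, n):
            m = rk(b, d) - rk(b - 1, d) - rk(b, d + 1) + rk(b - 1, d + 1)
            if m != 0:
                diagram[(b, d)] = m
    return diagram


# sanity check: the inversion recovers the barcode that induced the rank
intervals = {(0, 2): 1, (1, 3): 1}  # two bars, [0,2] and [1,3]
def rank(b, d):
    # rank of [b,d] = number of intervals containing [b,d]
    return sum(m for (s, e), m in intervals.items() if s <= b and d <= e)
```

In the 2-parameter setting the same inclusion-exclusion pattern is applied, but the values are signed 1-parameter persistence modules rather than integers.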

Matrix reduction is the standard procedure for computing the persistent homology of a filtered simplicial complex with m simplices. Its output is a particular decomposition of the total boundary matrix, from which the persistence diagrams and generating cycles are derived. Persistence diagrams are known to vary continuously with respect to their input, motivating the study of their computation for time-varying filtered complexes. Computing persistence dynamically can be reduced to maintaining a valid decomposition under adjacent transpositions in the filtration order. Since there are O(m^2) such transpositions, this maintenance procedure exhibits limited scalability and is often too fine for many applications. We propose a coarser strategy for maintaining the decomposition over a 1-parameter family of filtrations. By reduction to a particular longest common subsequence problem, we show that the minimal number of decomposition updates d can be found in O(m log log m) time and O(m) space, and that the corresponding sequence of permutations, which we call a schedule, can be constructed in O(dm log m) time. We also show that, in expectation, the storage needed to employ this strategy is actually sublinear in m. Exploiting this connection, we show experimentally that the decrease in operations to compute diagrams across a family of filtrations is proportional to the difference between the expected quadratic number of states and the proposed sublinear coarsening. Applications to video data, dynamic metric space data, and multiparameter persistence are also presented.
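The connection to longest common subsequences can be illustrated with a known identity: the minimal number of single-element moves transforming one permutation into another equals the length of the permutations minus the length of their longest common subsequence. A simplified sketch using a quadratic LCS dynamic program (the paper obtains this quantity far faster, in O(m log log m) time):

```python
def min_moves(p, q):
    """Minimal number of single-element moves (remove one element and
    reinsert it anywhere) turning permutation p into permutation q.
    This equals len(p) minus the length of their longest common
    subsequence: the LCS elements stay put, everything else moves."""
    m = len(p)
    # classic O(m^2) LCS table; lcs[i][j] = LCS length of p[:i], q[:j]
    lcs = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(m):
            if p[i] == q[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return m - lcs[m][m]
```

For example, turning (0, 1, 2, 3) into (3, 0, 1, 2) needs only one move (shift 3 to the front), whereas tracking adjacent transpositions would count three.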
Zigzag persistence is a powerful extension of standard persistence that allows deletions of simplices in addition to insertions. However, computing zigzag persistence usually takes considerably more time than standard persistence. We propose an algorithm called FastZigzag that narrows this efficiency gap. Our main result is that an input simplex-wise zigzag filtration can be converted to a cell-wise non-zigzag filtration of a Δ-complex with the same length, where the cells are copies of the input simplices. This conversion step in FastZigzag incurs very little cost. Furthermore, the barcode of the original filtration can be easily read from the barcode of the new cell-wise filtration, because the conversion embodies a series of diamond switches known in topological data analysis. This seemingly simple observation opens up vast possibilities for improving the computation of zigzag persistence, because any efficient algorithm or software for standard persistence can now be applied to computing zigzag persistence. Our experiments show that this indeed achieves a substantial performance gain over existing state-of-the-art software.
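FastZigzag's payoff is that any standard persistence routine then applies to the converted filtration. As background, a minimal sketch of the classical column reduction over Z/2 that such standard software implements (this is the textbook algorithm, not FastZigzag's conversion step):

```python
def reduce_boundary(columns):
    """Standard persistence column reduction over Z/2.
    columns[j] is the set of row indices holding a 1 in column j of the
    total boundary matrix, with rows and columns ordered by filtration.
    Returns the (birth, death) index pairs (low(j), j) for each column
    that remains nonzero after reduction."""
    cols = [set(c) for c in columns]
    low_inv = {}  # lowest nonzero row index -> column that owns it
    pairs = []
    for j in range(len(cols)):
        # add earlier columns (mod 2) until the lowest 1 is unclaimed
        while cols[j] and max(cols[j]) in low_inv:
            cols[j] ^= cols[low_inv[max(cols[j])]]
        if cols[j]:
            low = max(cols[j])
            low_inv[low] = j
            pairs.append((low, j))
    return pairs
```

On a filled triangle (vertices 0, 1, 2; edges 3 = {0,1}, 4 = {0,2}, 5 = {1,2}; face 6 = {3,4,5}), the reduction pairs each edge or face with the feature it kills, and the unpaired simplices carry the essential classes.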

Persistent Homology (PH) is a method of Topological Data Analysis that analyzes the topological structure of data to help data scientists infer relationships in the data and make informed decisions. A significant component in the computation of PH is the construction and use of a complex that represents the topological structure of the data. Some complex types are fast to construct but space-inefficient, whereas others are costly to construct but space-efficient. Unfortunately, existing complex types are not both fast to construct and compact. This paper works to increase the scope of PH to support the computation of low-dimensional homologies (H0–H10) in high-dimensional, big data. In particular, it exploits the desirable properties of the Vietoris–Rips complex (VR-complex) and the Delaunay complex in order to construct a sparsified complex. The VR-complex uses a distance matrix to quickly generate a complex up to the desired homology dimension. In contrast, the Delaunay complex works at the dimensionality of the data to generate a sparsified complex. While construction of the VR-complex is fast, its size grows exponentially with the size and dimension of the data set; in contrast, the Delaunay complex is significantly smaller for any given data dimension. However, its construction requires computing a Delaunay triangulation, which has high computational complexity. As a result, it is difficult to construct a Delaunay complex for data in dimensions d > 6 containing more than a few hundred points. The techniques in this paper enable topology-preserving sparsification of k-simplices (where k ≪ d) to quickly generate a reduced sparsified complex sufficient to compute homologies up to the k-subspace, irrespective of the data dimensionality d.
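Generating a VR-complex from a distance matrix up to a target dimension can be sketched directly. This brute-force version enumerates candidate simplices and keeps those whose pairwise distances all lie within the scale r; it illustrates the construction (and its exponential growth), not the paper's sparsification:

```python
from itertools import combinations

def vietoris_rips(dist, r, k):
    """Vietoris-Rips complex up to dimension k from a distance matrix.
    A simplex is included exactly when every pairwise distance among
    its vertices is at most r (the "clique" rule)."""
    n = len(dist)
    complex_ = [frozenset([v]) for v in range(n)]  # all vertices
    for dim in range(1, k + 1):
        for verts in combinations(range(n), dim + 1):
            # keep the simplex if all of its edges are short enough
            if all(dist[i][j] <= r for i, j in combinations(verts, 2)):
                complex_.append(frozenset(verts))
    return complex_
```

Note the loop over all (dim+1)-subsets of the n points: this is the exponential blow-up in size and dimension that motivates pairing the VR-complex with a sparser Delaunay-based construction.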

Graphs model real-world circumstances in many applications, where they may constantly change to capture the dynamic behavior of the phenomena. Topological persistence, which provides a set of birth and death pairs for the topological features, is one instrument for analyzing such changing graph data. However, standard persistent homology, defined over a growing space, cannot always capture such a dynamic process unless shrinking with deletions is also allowed. Hence, zigzag persistence, which incorporates both insertions and deletions of simplices, is more appropriate in such a setting. Unlike standard persistence, which admits nearly linear-time algorithms for graphs, no such results for the zigzag version improving the general O(m^ω) time complexity are known, where ω < 2.37286 is the matrix multiplication exponent. In this paper, we propose algorithms for zigzag persistence on graphs that run in near-linear time. Specifically, given a filtration with m additions and deletions on a graph with n vertices and edges, the algorithm for 0-dimension runs in O(m log² n + m log m) time and the algorithm for 1-dimension runs in O(m log⁴ n) time. The algorithm for 0-dimension draws upon another algorithm designed originally for pairing critical points of Morse functions on 2-manifolds. The algorithm for 1-dimension pairs a negative edge with the earliest positive edge such that a 1-cycle containing both edges resides in all intermediate graphs. Both algorithms achieve the claimed time complexities via dynamic graph data structures proposed by Holm et al. Finally, using Alexander duality, we extend the algorithm for 0-dimension to compute the (p-1)-dimensional zigzag persistence for ℝ^p-embedded complexes in O(m log² n + m log m + n log n) time.
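For contrast with the zigzag setting, the nearly linear-time computation of 0-dimensional standard persistence on a graph filtration is a union-find exercise. A minimal sketch, under the simplifying assumption that all vertices enter at time 0 and edges are added one per time step (the standard algorithm, not the paper's zigzag extension):

```python
def zero_dim_barcode(n, edges):
    """0-dimensional persistence of a graph filtration via union-find.
    All n vertices appear at time 0; edges[t-1] is added at time t.
    Each edge either merges two components (recording a finite bar) or
    closes a 1-cycle (a positive edge, irrelevant in dimension 0).
    Surviving components get bars (birth, None)."""
    parent = list(range(n))
    birth = [0] * n  # all components are born at time 0 here

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bars = []
    for t, (u, v) in enumerate(edges, start=1):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # positive edge: creates a cycle, no 0-dim death
        # elder rule: the younger component dies at time t
        # (trivial when all births are 0, but kept for generality)
        young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
        bars.append((birth[young], t))
        parent[young] = old
    # components still alive at the end are essential classes
    roots = {find(x) for x in range(n)}
    bars.extend((birth[r], None) for r in roots)
    return bars
```

The paper's contribution is extending this kind of efficiency to filtrations where edges are also deleted, which breaks the plain union-find approach and requires the dynamic graph structures of Holm et al.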