

Title: Computing Generalized Rank Invariant for 2-Parameter Persistence Modules via Zigzag Persistence and Its Applications
Abstract

The notion of generalized rank in the context of multiparameter persistence has become an important ingredient for defining interesting homological structures such as generalized persistence diagrams. However, its efficient computation has not yet been studied in the literature. We show that the generalized rank over a finite interval $I$ of a $\mathbf{Z}^2$-indexed persistence module $M$ is equal to the generalized rank of the zigzag module that is induced on a certain path in $I$ tracing mostly its boundary. Hence, we can compute the generalized rank of $M$ over $I$ by computing the barcode of the zigzag module obtained by restricting to that path. If $M$ is the homology of a bifiltration $F$ of $t$ simplices (while accounting for multi-criticality) and $I$ consists of $t$ points, this computation takes $O(t^\omega)$ time, where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication. We apply this result to obtain an improved algorithm for the following problem: given a bifiltration inducing a module $M$, determine whether $M$ is interval decomposable and, if so, compute all intervals supporting its indecomposable summands.
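
A minimal reading aid (not code from the paper): assuming the zigzag barcode along the boundary path of $I$ has already been computed by some zigzag persistence routine, and assuming the paper's reading that the generalized rank equals the number of bars spanning the whole path, the final counting step looks like the sketch below. The function name and the (birth, death) index encoding of bars are illustrative.

```python
# Hedged sketch: count "full" bars in a zigzag barcode computed along the
# boundary path of the interval I.  Bars are closed index ranges [b, d] over
# path positions 0 .. path_length - 1; a full bar spans the entire path.

def generalized_rank_from_zigzag_barcode(bars, path_length):
    """Number of bars covering every index of the boundary path."""
    return sum(1 for (b, d) in bars if b == 0 and d == path_length - 1)

# Toy example: a path with 6 nodes; only the bar (0, 5) is full,
# so the generalized rank over I would be 1 under this reading.
bars = [(0, 5), (2, 5), (0, 3)]
print(generalized_rank_from_zigzag_barcode(bars, path_length=6))  # -> 1
```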

 
NSF-PAR ID:
10468074
Author(s) / Creator(s):
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Discrete & Computational Geometry
Volume:
71
Issue:
1
ISSN:
0179-5376
Format(s):
Medium: X
Size(s):
p. 67-94
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Using extensive numerical simulation of the Navier–Stokes equations, we study the transition from Darcy's law for slow flow of fluids through a disordered porous medium to the nonlinear flow regime in which the effect of inertia cannot be neglected. The porous medium is represented by two-dimensional slices of a three-dimensional image of a sandstone. We study the problem over wide ranges of porosity and the Reynolds number, as well as two types of boundary conditions, and compute essential features of fluid flow, namely, the strength of the vorticity, the effective permeability of the pore space, the frictional drag, and the relationship between the macroscopic pressure gradient $\nabla P$ and the fluid velocity $\mathbf{v}$. The results indicate that when the Reynolds number Re is low enough that Darcy's law holds, the magnitude $\omega_z$ of the vorticity is nearly zero. As Re increases, however, so also does $\omega_z$, and its rise from nearly zero begins at the same Re at which Darcy's law breaks down. We also show that a nonlinear relation between the macroscopic pressure gradient and the fluid velocity $\mathbf{v}$, given by $-\nabla P = (\mu/K_e)\mathbf{v} + \beta_n \rho |\mathbf{v}|^2 \mathbf{v}$, provides an accurate representation of the numerical data, where $\mu$ and $\rho$ are the fluid's viscosity and density, $K_e$ is the effective Darcy permeability in the linear regime, and $\beta_n$ is a generalized nonlinear resistance. Theoretical justification for the relation is presented, and its predictions are also compared with those of the Forchheimer equation.
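
    As an illustration only (not the authors' simulation code), the quoted relation between the macroscopic pressure gradient and the velocity can be evaluated directly; the function name and the sample parameter values below are placeholders.

```python
# Illustrative sketch of the nonlinear pressure-gradient/velocity relation
# quoted in the abstract:
#     -grad(P) = (mu / K_e) * v + beta_n * rho * |v|**2 * v

import numpy as np

def negative_pressure_gradient(v, mu, rho, K_e, beta_n):
    """Return -grad(P) predicted by the generalized (non-Darcy) relation."""
    v = np.asarray(v, dtype=float)
    speed_sq = np.dot(v, v)                      # |v|^2
    darcy_term = (mu / K_e) * v                  # linear (Darcy) contribution
    inertial_term = beta_n * rho * speed_sq * v  # nonlinear correction
    return darcy_term + inertial_term

# Example with made-up SI values for a water-like fluid in a permeable medium.
v = np.array([1e-3, 0.0])  # m/s
print(negative_pressure_gradient(v, mu=1e-3, rho=1e3, K_e=1e-12, beta_n=1e5))
```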

     
  2. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
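
    The white-noise mixture and the linear cross-entropy benchmark it is measured with can be illustrated with a small numerical sketch (not taken from the paper); the distributions and the fidelity value below are made up, and for this mixture the benchmark of $p_{\text{noisy}}$ comes out to exactly $F$ times that of $p_{\text{ideal}}$.

```python
# Illustrative sketch of the white-noise approximation: model the noisy
# output distribution as F * p_ideal + (1 - F) * p_unif and evaluate the
# linear cross-entropy benchmark (XEB) against the ideal distribution.

import numpy as np

def white_noise_approximation(p_ideal, F):
    """Return the mixture F * p_ideal + (1 - F) * uniform."""
    p_ideal = np.asarray(p_ideal, dtype=float)
    p_unif = np.full_like(p_ideal, 1.0 / p_ideal.size)
    return F * p_ideal + (1.0 - F) * p_unif

def linear_xeb(p_model, p_ideal):
    """Linear XEB: 2^n * E_{x ~ p_model}[p_ideal(x)] - 1."""
    return p_ideal.size * float(np.dot(p_model, p_ideal)) - 1.0

# Toy example on 3 qubits with a made-up "ideal" distribution.
rng = np.random.default_rng(0)
p_ideal = rng.dirichlet(np.ones(8))
p_noisy = white_noise_approximation(p_ideal, F=0.1)
print(linear_xeb(p_noisy, p_ideal) / linear_xeb(p_ideal, p_ideal))  # -> 0.1
```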

     
  3. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of $n$ integers has no 3-SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

    Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k \ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.

    Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

    Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2} \cdot \textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC'16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5} \cdot \textrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin's proof depends on some of Arthur's coins) running in $2^{2n/3} \cdot \textrm{poly}(n)$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3} \cdot \textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017], which took $2^{0.49991n} \cdot \textrm{poly}(n)$ time.
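
    As a reading aid (this is not one of the paper's protocols), the Merlin–Arthur format defined at the top of this abstract can be illustrated with a classic toy example, Freivalds' check of a claimed matrix product: Merlin's proof is the claimed product $C$, and Arthur verifies it in $O(n^2)$ time per random trial, accepting a correct $C$ with probability 1 and rejecting an incorrect one with probability at least $1/2$ per trial.

```python
# Toy Merlin-Arthur-style verification (not one of the paper's protocols):
# Freivalds' check that a claimed product C equals A @ B.  The proof is C
# itself; the verifier uses O(n^2) arithmetic per random trial.

import numpy as np

def arthur_verifies_product(A, B, C, trials=20, rng=None):
    """Accept iff the claimed product C passes `trials` random projections."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)        # random 0/1 vector
        # Compare A(Br) with Cr: three matrix-vector products, O(n^2) each.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                      # proof is certainly wrong
    return True                               # wrong C survives w.p. <= 2^-trials

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, size=(4, 4))
B = rng.integers(-5, 5, size=(4, 4))
print(arthur_verifies_product(A, B, A @ B))      # valid proof: accept
print(arthur_verifies_product(A, B, A @ B + 1))  # invalid proof: reject (w.h.p.)
```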

     
  4. Abstract

    Matrix reduction is the standard procedure for computing the persistent homology of a filtered simplicial complex with $m$ simplices. Its output is a particular decomposition of the total boundary matrix, from which the persistence diagrams and generating cycles are derived. Persistence diagrams are known to vary continuously with respect to their input, motivating the study of their computation for time-varying filtered complexes. Computing persistence dynamically can be reduced to maintaining a valid decomposition under adjacent transpositions in the filtration order. Since there are $O(m^2)$ such transpositions, this maintenance procedure exhibits limited scalability and is often too fine for many applications. We propose a coarser strategy for maintaining the decomposition over a 1-parameter family of filtrations. By reduction to a particular longest common subsequence problem, we show that the minimal number of decomposition updates $d$ can be found in $O(m \log \log m)$ time and $O(m)$ space, and that the corresponding sequence of permutations, which we call a schedule, can be constructed in $O(dm \log m)$ time. We also show that, in expectation, the storage needed to employ this strategy is actually sublinear in $m$. Exploiting this connection, we show experimentally that the decrease in operations to compute diagrams across a family of filtrations is proportional to the difference between the expected quadratic number of states and the proposed sublinear coarsening. Applications to video data, dynamic metric space data, and multiparameter persistence are also presented.
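
    One hedged way to picture the longest-common-subsequence reduction mentioned above (the paper's exact reduction and its $O(m \log \log m)$ bound may differ): if the two filtration orders are permutations of the same $m$ simplices and the minimal number of updates is taken to be $d = m - |\mathrm{LCS}|$, then $d$ can be computed via a longest increasing subsequence in $O(m \log m)$ time, as in the sketch below.

```python
# Hedged sketch: under the assumption d = m - |LCS| of the two filtration
# orders, compute the LCS of two permutations of the same simplices by
# reducing it to a longest increasing subsequence (patience sorting).

from bisect import bisect_left

def lcs_of_permutations(order_a, order_b):
    """Length of the LCS of two permutations of the same items."""
    pos_in_a = {simplex: i for i, simplex in enumerate(order_a)}
    seq = [pos_in_a[s] for s in order_b]  # LCS of permutations = LIS of seq
    tails = []                            # tails[k] = smallest tail of an LIS of length k+1
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

order_a = ["a", "b", "c", "d", "e"]
order_b = ["b", "a", "c", "e", "d"]
m = len(order_a)
d = m - lcs_of_permutations(order_a, order_b)  # updates needed under this model
print(d)  # -> 2
```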

     
  5. Abstract

    Let $X \rightarrow \mathbb{P}^1$ be an elliptically fibered K3 surface, admitting a sequence $\omega_i$ of Ricci-flat metrics collapsing the fibers. Let $V$ be a holomorphic $SU(n)$ bundle over $X$, stable with respect to $\omega_i$. Given the corresponding sequence $\Xi_i$ of Hermitian–Yang–Mills connections on $V$, we prove that, if $E$ is a generic fiber, the restricted sequence $\Xi_i|_{E}$ converges to a flat connection $A_0$. Furthermore, if the restriction $V|_E$ is of the form $\oplus_{j=1}^n \mathcal{O}_E(q_j - 0)$ for $n$ distinct points $q_j \in E$, then these points uniquely determine $A_0$.

     