
Title: Algorithmic reconstruction of the fiber of persistent homology on cell complexes

Let $K$ be a finite simplicial, cubical, delta, or CW complex. The persistence map $$\textrm{PH}$$ takes a filter $$f:K\rightarrow \mathbb{R}$$ as input and returns the barcodes of the sublevel-set persistent homology of $f$ in each dimension. We address the inverse problem: given target barcodes $D$, computing the fiber $$\textrm{PH}^{-1}(D)$$. For this, we use the fact that $$\textrm{PH}^{-1}(D)$$ decomposes as a polyhedral complex when $K$ is a simplicial complex, and we generalise this result to arbitrary based chain complexes. We then design and implement a depth-first search that recovers the polytopes forming the fiber $$\textrm{PH}^{-1}(D)$$. As an application, we solve a corpus of 120 sample problems, providing a first insight into the statistical structure of these fibers for general CW complexes.
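The forward direction of the persistence map can be illustrated with a minimal sketch of the standard matrix-reduction algorithm over $\mathbb{Z}/2$; the example complex, filter values, and helper names below are illustrative and are not the paper's implementation.

```python
# Minimal sketch of sublevel-set persistence via standard boundary-matrix
# reduction over Z/2, illustrating the forward map PH that the paper inverts.
# Columns are sets of row indices; simplices are assumed sorted by filter value.

def low(col):
    """Index of the lowest nonzero entry of a column, or None if the column is zero."""
    return max(col) if col else None

def reduce_boundary(columns):
    """Reduce the filtered boundary matrix; return reduced columns and the
    pairing lows: low index (birth simplex) -> column index (death simplex)."""
    cols = [set(c) for c in columns]
    lows = {}
    for j in range(len(cols)):
        # Add earlier columns (mod 2) until the low entry is unclaimed or the column is zero.
        while cols[j] and low(cols[j]) in lows:
            cols[j] ^= cols[lows[low(cols[j])]]
        if cols[j]:
            lows[low(cols[j])] = j
    return cols, lows

# Filtered triangle: vertices 0,1,2, then edges 3,4,5, then the 2-cell 6.
boundary = [set(), set(), set(),          # vertices have empty boundary
            {0, 1}, {1, 2}, {0, 2},       # edges
            {3, 4, 5}]                    # triangle bounded by the three edges
f = [0.0, 0.0, 0.0, 1.0, 1.0, 2.0, 3.0]  # a filter, monotone along faces

cols, lows = reduce_boundary(boundary)
pairs = sorted((i, j) for i, j in lows.items())  # birth-death simplex pairs
bars = [(f[i], f[j]) for i, j in pairs]          # finite bars (birth, death)
print(pairs)
print(bars)
```

Running this pairs the two merging edges with vertices and the triangle with the cycle-creating edge, giving the finite bars of the filtration; unpaired columns (here, one vertex) correspond to infinite bars.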

Publisher / Repository: Springer Science + Business Media
Journal Name: Journal of Applied and Computational Topology
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of $n$ integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time).

    Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil})$$ (where $$k\ge 3$$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $${\tilde{O}}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.

    Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega(n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm.

    Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC'16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $$2^{n/2}/n^{\omega(1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin's proof depends on some of Arthur's coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017], which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
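    For context on what is being certified, a plain deterministic $O(n^2)$ 3-SUM check is sketched below as a baseline; the $$\tilde{O}(n)$$ Merlin–Arthur protocol above relies on randomized algebraic verification techniques that are not reproduced here.

    ```python
    # Deterministic O(n^2) 3-SUM check (sort + two pointers), shown only as a
    # baseline for the Merlin-Arthur protocols discussed above.

    def has_3sum(nums):
        """Return True iff some triple of entries (distinct indices) sums to zero."""
        a = sorted(nums)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = a[i] + a[lo] + a[hi]
                if s == 0:
                    return True
                elif s < 0:
                    lo += 1   # need a larger sum
                else:
                    hi -= 1   # need a smaller sum
        return False

    print(has_3sum([-5, 1, 4, 7]))  # True: -5 + 1 + 4 = 0
    print(has_3sum([1, 2, 3, 4]))   # False: all entries positive
    ```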

  2. Abstract

    We prove that $${{\,\textrm{poly}\,}}(t) \cdot n^{1/D}$$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $${{\,\textrm{poly}\,}}(t)\cdot n$$ due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $$D=1$$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($${{\,\mathrm{\textsf{PH}}\,}}$$) is infinite and that certain counting problems are $$\#{\textsf{P}}$$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anti-concentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $$O(\sqrt{n})$$ depth suffices for anti-concentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $$O(n\ln^2 n)$$, corresponding to depth $$O(\ln^3 n)$$. We also show a lower bound of $$\Omega(n \ln n)$$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $$O(\ln n \ln \ln n)$$ (size $$O(n \ln n \ln \ln n)$$) using a different model.

  3. Abstract

    We present the first unquenched lattice-QCD calculation of the form factors for the decay $$B\rightarrow D^*\ell\nu$$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $$N_f=2+1$$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $$a\approx 0.15$$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence $b$ and $c$ quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $$|V_{cb}|$$. We obtain $$\left|V_{cb}\right| = (38.40 \pm 0.68_{\text{th}} \pm 0.34_{\text{exp}} \pm 0.18_{\text{EM}})\times 10^{-3}$$. The first error is theoretical, the second comes from experiment, and the last one includes electromagnetic and electroweak uncertainties, with an overall $$\chi^2/\text{dof} = 126/84$$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in agreement with previous exclusive determinations, but the tension with the inclusive determination remains. Finally, we integrate the differential decay rate obtained solely from lattice data to predict $$R(D^*) = 0.265 \pm 0.013$$, which confirms the current tension between theory and experiment.

  4. Abstract

    Leptoquarks ($$\textrm{LQ}$$s) are hypothetical particles that appear in various extensions of the Standard Model (SM) and can explain observed differences between SM theory predictions and experimental results. The production of these particles has been widely studied at various experiments, most recently at the Large Hadron Collider (LHC), and stringent bounds have been placed on their masses and couplings, assuming the simplest beyond-SM (BSM) hypotheses. However, the limits are significantly weaker for $$\textrm{LQ}$$ models with family non-universal couplings containing enhanced couplings to third-generation fermions. We present a new study on the production of a $$\textrm{LQ}$$ at the LHC, with preferential couplings to third-generation fermions, considering proton-proton collisions at $$\sqrt{s} = 13\,\textrm{TeV}$$ and $$\sqrt{s} = 13.6\,\textrm{TeV}$$. Such a hypothesis is well motivated theoretically, and it can explain the recent anomalies in the precision measurements of $$\textrm{B}$$-meson decay rates, specifically the $$R_{D^{(*)}}$$ ratios. Under a simplified model where the $$\textrm{LQ}$$ masses and couplings are free parameters, we focus on cases where the $$\textrm{LQ}$$ decays to a $$\tau$$ lepton and a $$\textrm{b}$$ quark, and study how the results are affected by different assumptions about chiral currents and interference effects with other BSM processes with the same final states, such as diagrams with a heavy vector boson $$\textrm{Z}^{\prime}$$. The analysis is performed using machine learning techniques, resulting in an increased discovery reach at the LHC and allowing us to probe new-physics phase space that addresses the $$\textrm{B}$$-meson anomalies, for $$\textrm{LQ}$$ masses up to $$5.00\,\textrm{TeV}$$, in the high-luminosity LHC scenario.

  5. Abstract

    Let $$\phi$$ be a positive map from the $$n\times n$$ matrices $$\mathcal{M}_n$$ to the $$m\times m$$ matrices $$\mathcal{M}_m$$. It is known that $$\phi$$ is 2-positive if and only if for all $$K\in \mathcal{M}_n$$ and all strictly positive $$X\in \mathcal{M}_n$$, $$\phi(K^*X^{-1}K) \geqslant \phi(K)^*\phi(X)^{-1}\phi(K)$$. This inequality is not generally true if $$\phi$$ is merely a Schwarz map. We show that the corresponding tracial inequality $${{\,\textrm{Tr}\,}}[\phi(K^*X^{-1}K)] \geqslant {{\,\textrm{Tr}\,}}[\phi(K)^*\phi(X)^{-1}\phi(K)]$$ holds for a wider class of positive maps that is specified here. We also comment on the connections of this inequality with various monotonicity statements that have found wide use in mathematical physics, and apply it, and a close relative, to obtain some new, definitive results.
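    The 2-positive case of the inequality can be spot-checked numerically with a completely positive (hence 2-positive) map $\phi(A) = V^* A V$; the map, dimensions, and random inputs below are illustrative assumptions, and this sketch does not probe the wider class of maps that the paper actually treats.

    ```python
    # Numerical spot-check of Tr[phi(K* X^{-1} K)] >= Tr[phi(K)* phi(X)^{-1} phi(K)]
    # for the completely positive map phi(A) = V* A V, which is 2-positive,
    # so the operator inequality (and hence its trace) is known to hold.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 3  # illustrative dimensions; V generically has full column rank

    def dagger(A):
        return A.conj().T

    V = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    phi = lambda A: dagger(V) @ A @ V   # completely positive map M_n -> M_m

    K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = B @ dagger(B) + np.eye(n)       # strictly positive, so X^{-1} exists

    lhs = np.trace(phi(dagger(K) @ np.linalg.inv(X) @ K)).real
    rhs = np.trace(dagger(phi(K)) @ np.linalg.inv(phi(X)) @ phi(K)).real
    print(lhs >= rhs - 1e-9)
    ```

    Both traces are real since each argument is a positive semidefinite matrix; the small tolerance only absorbs floating-point rounding.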
