The notion of generalized rank in the context of multiparameter persistence has become an important ingredient for defining interesting homological structures such as generalized persistence diagrams. However, its efficient computation has not yet been studied in the literature. We show that the generalized rank over a finite interval can be computed via the barcode of an associated zigzag module.
One-dimensional persistent homology is arguably the most important and heavily used computational tool in topological data analysis. Additional information can be extracted from datasets by studying multidimensional persistence modules and by utilizing cohomological ideas, e.g. the cohomological cup product. In this work, given a single-parameter filtration, we investigate a certain 2-dimensional persistence module structure associated with persistent cohomology, where one parameter is the cup-length.
 Award ID(s):
 1901360
 NSF-PAR ID:
 10468088
 Publisher / Repository:
 Springer Science + Business Media
 Date Published:
 Journal Name:
 Journal of Applied and Computational Topology
 Volume:
 8
 Issue:
 1
 ISSN:
 2367-1726
 Format(s):
 Medium: X Size: p. 93-148
 Size(s):
 ["p. 93-148"]
 Sponsoring Org:
 National Science Foundation
More Like this

Abstract We show that the generalized rank over a finite interval $I$ of a $\mathbb{Z}^2$-indexed persistence module $M$ is equal to the generalized rank of the zigzag module that is induced on a certain path in $I$ tracing mostly its boundary. Hence, we can compute the generalized rank of $M$ over $I$ by computing the barcode of the zigzag module obtained by restricting to that path. If $M$ is the homology of a bifiltration $F$ of $t$ simplices (while accounting for multicriticality) and $I$ consists of $t$ points, this computation takes $O(t^\omega)$ time where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication. We apply this result to obtain an improved algorithm for the following problem. Given a bifiltration inducing a module $M$, determine whether $M$ is interval decomposable and, if so, compute all intervals supporting its indecomposable summands.
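The reduction above turns generalized-rank computation into counting full-length bars in a zigzag barcode. A minimal sketch of that final counting step, assuming the zigzag barcode along the boundary path has already been computed; the function name and the `(birth, death)` input format are illustrative, not from the paper:

```python
def generalized_rank(barcode, path_length):
    """Count the bars that span the entire zigzag path.

    By the theorem stated in the abstract, the generalized rank of M over
    the interval I equals the number of full-length bars in the barcode of
    the zigzag module restricted to the boundary path of I.

    `barcode` is a list of (birth, death) index pairs, both inclusive,
    over path indices 0 .. path_length - 1.
    """
    return sum(1 for b, d in barcode if b == 0 and d == path_length - 1)

# Hypothetical barcode over a path with 5 points: only (0, 4) is full-length.
bars = [(0, 4), (1, 4), (0, 2)]
print(generalized_rank(bars, 5))  # 1
```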
Abstract The double differential cross sections of the Drell–Yan lepton pair ($\ell^+\ell^-$, dielectron or dimuon) production are measured as functions of the invariant mass $m_{\ell\ell}$, transverse momentum $p_{\textrm{T}}(\ell\ell)$, and $\varphi^{*}_{\eta}$. The $\varphi^{*}_{\eta}$ observable, derived from angular measurements of the leptons and highly correlated with $p_{\textrm{T}}(\ell\ell)$, is used to probe the low-$p_{\textrm{T}}(\ell\ell)$ region in a complementary way. Dilepton masses up to 1 $\text{TeV}$ are investigated. Additionally, a measurement is performed requiring at least one jet in the final state. To benefit from partial cancellation of the systematic uncertainty, the ratios of the differential cross sections for various $m_{\ell\ell}$ ranges to those in the Z mass peak interval are presented. The collected data correspond to an integrated luminosity of 36.3 $\text{fb}^{-1}$ of proton–proton collisions recorded with the CMS detector at the LHC at a centre-of-mass energy of 13 $\text{TeV}$. Measurements are compared with predictions based on perturbative quantum chromodynamics, including soft-gluon resummation.
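For context, $\varphi^{*}_{\eta}$ is built from the lepton directions alone, which is why it is less sensitive to momentum-resolution effects than $p_{\textrm{T}}(\ell\ell)$. A sketch assuming the standard definition $\varphi^{*}_{\eta} = \tan(\varphi_{\text{acop}}/2)\,\sin\theta^{*}_{\eta}$ with $\cos\theta^{*}_{\eta} = \tanh((\eta^{-}-\eta^{+})/2)$; the abstract itself does not spell out the formula, so treat this as an assumption:

```python
import math

def phi_star_eta(eta_minus, eta_plus, phi_minus, phi_plus):
    """phi*_eta from the two leptons' pseudorapidities and azimuthal angles.

    Uses the standard angular definition: the acoplanarity angle
    phi_acop = pi - |dphi|, and sin(theta*_eta) derived from the
    pseudorapidity difference of the leptons.
    """
    dphi = abs(phi_minus - phi_plus)
    if dphi > math.pi:                       # wrap azimuthal difference to [0, pi]
        dphi = 2.0 * math.pi - dphi
    phi_acop = math.pi - dphi                # acoplanarity angle
    cos_theta = math.tanh((eta_minus - eta_plus) / 2.0)
    sin_theta = math.sqrt(1.0 - cos_theta ** 2)
    return math.tan(phi_acop / 2.0) * sin_theta

# Back-to-back leptons at equal pseudorapidity give phi*_eta = 0.
print(phi_star_eta(0.5, 0.5, 0.0, math.pi))
```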
Abstract Matrix reduction is the standard procedure for computing the persistent homology of a filtered simplicial complex with $m$ simplices. Its output is a particular decomposition of the total boundary matrix, from which the persistence diagrams and generating cycles are derived. Persistence diagrams are known to vary continuously with respect to their input, motivating the study of their computation for time-varying filtered complexes. Computing persistence dynamically can be reduced to maintaining a valid decomposition under adjacent transpositions in the filtration order. Since there are $O(m^2)$ such transpositions, this maintenance procedure exhibits limited scalability and is often too fine for many applications. We propose a coarser strategy for maintaining the decomposition over a 1-parameter family of filtrations. By reduction to a particular longest common subsequence problem, we show that the minimal number of decomposition updates $d$ can be found in $O(m \log \log m)$ time and $O(m)$ space, and that the corresponding sequence of permutations (which we call a schedule) can be constructed in $O(d m \log m)$ time. We also show that, in expectation, the storage needed to employ this strategy is actually sublinear in $m$. Exploiting this connection, we show experimentally that the decrease in operations to compute diagrams across a family of filtrations is proportional to the difference between the expected quadratic number of states and the proposed sublinear coarsening. Applications to video data, dynamic metric space data, and multiparameter persistence are also presented.
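The schedule idea hinges on a longest-common-subsequence computation between two filtration orders. A simplified sketch, assuming both orders are permutations of the same simplices, so LCS reduces to longest increasing subsequence (LIS): simplices in a common subsequence keep their relative order and only the rest need to be moved. Patience sorting gives $O(m \log m)$ here rather than the paper's $O(m \log \log m)$ bound, and the function name is illustrative:

```python
from bisect import bisect_left

def min_updates(order_a, order_b):
    """m minus the LCS length of two filtration orders (permutations).

    Maps each simplex to its position in `order_b`, then computes the
    longest increasing subsequence of `order_a` under that ranking via
    patience sorting. Elements outside the LCS are the ones that must move.
    """
    pos = {s: i for i, s in enumerate(order_b)}
    seq = [pos[s] for s in order_a]
    piles = []                     # piles[i] = smallest tail of an LIS of length i+1
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(seq) - len(piles)

print(min_updates(['a', 'b', 'c', 'd'], ['b', 'a', 'c', 'd']))  # 1
```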
Abstract We study the sparsity of the solutions to systems of linear Diophantine equations with and without nonnegativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the $\ell_0$-norm of the vector. Our main results are new improved bounds on the minimal $\ell_0$-norm of solutions to systems $A\mathbf{x} = \mathbf{b}$, where $A \in \mathbb{Z}^{m \times n}$, $\mathbf{b} \in \mathbb{Z}^m$ and $\mathbf{x}$ is either a general integer vector (lattice case) or a nonnegative integer vector (semigroup case). In certain cases, we give polynomial time algorithms for computing solutions with $\ell_0$-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $\mathbb{R}$ to other subdomains such as $\mathbb{Z}$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
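To make the $\ell_0$-norm concrete: a tiny brute-force sketch that finds a sparsest integer solution of $A\mathbf{x} = \mathbf{b}$ with entries restricted to a bounded box. This is purely illustrative (exponential in $n$) and is not one of the paper's polynomial-time algorithms:

```python
import itertools

def l0_norm(x):
    """Number of nonzero entries of the vector x."""
    return sum(1 for v in x if v != 0)

def matvec(A, x):
    """Matrix-vector product over plain Python lists."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def sparsest_solution(A, b, bound=2):
    """Exhaustively search integer vectors with entries in [-bound, bound]
    and return one minimizing the l0-norm among solutions of A x = b."""
    n = len(A[0])
    best = None
    for x in itertools.product(range(-bound, bound + 1), repeat=n):
        if matvec(A, list(x)) == b and (best is None or l0_norm(x) < l0_norm(best)):
            best = x
    return best

# The system x1 + 2*x2 + 3*x3 = 3, x2 + x3 = 1 has the 1-sparse solution (0, 0, 1).
print(sparsest_solution([[1, 2, 3], [0, 1, 1]], [3, 1]))  # (0, 0, 1)
```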
Abstract This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighing parameter of the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the "$\ell_0$-path" where the discontinuous $\ell_0$-function provides the exact sparsity count; the "$\ell_1$-path" where the $\ell_1$-function provides a convex surrogate of sparsity count; and the "capped $\ell_1$-path" where the nonconvex nondifferentiable capped $\ell_1$-function aims to enhance the $\ell_1$-approximation. Serving different purposes, each of these three formulations is different from the others, both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped $\ell_1$-path. Our study of the capped $\ell_1$-path is interesting in its own right, as the path pertains to computable directionally stationary (= strongly locally minimizing in this context, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both casting an exponential perspective in fully describing such solution paths, we also aim to address the question of whether some of them can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directional stationary solutions of the capped $\ell_1$-regularized problem offers interesting theoretical properties and a practical compromise between the $\ell_0$-path and the $\ell_1$-path. Indeed, while the $\ell_0$-path is computationally prohibitive and greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the $\ell_1$-path, in terms of the two criteria (loss and sparsity) in the estimation objective, is inferior to the capped $\ell_1$-path; the latter can be obtained efficiently by a combination of a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function.
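The three sparsity penalties can be compared directly on a vector. A minimal sketch assuming the common coordinate-wise capped-$\ell_1$ form $\min(|v|, a)/a$ with cap parameter $a > 0$; the exact parameterization used in the paper may differ:

```python
def l0_pen(x):
    """Exact sparsity count: discontinuous at 0."""
    return sum(1 for v in x if v != 0)

def l1_pen(x):
    """Convex surrogate of the sparsity count."""
    return sum(abs(v) for v in x)

def capped_l1_pen(x, a=1.0):
    """Capped l1: min(|v|, a) / a per coordinate.

    Nonconvex and nondifferentiable; agrees with l0_pen on coordinates
    with |v| >= a, and interpolates linearly below the cap, so it sits
    between the l1 surrogate and the exact l0 count.
    """
    return sum(min(abs(v), a) / a for v in x)

# Small entries are fully charged by l0, lightly charged by l1,
# and partially charged by the capped l1 penalty.
x = [0.0, 0.3, 2.0]
print(l0_pen(x), l1_pen(x), capped_l1_pen(x, a=0.5))
```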