Abstract: We study the sparsity of the solutions to systems of linear Diophantine equations with and without non-negativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the $\ell_0$-norm of the vector. Our main results are new improved bounds on the minimal $\ell_0$-norm of solutions to systems $A\mathbf{x}=\mathbf{b}$, where $A\in \mathbb{Z}^{m\times n}$, $\mathbf{b}\in \mathbb{Z}^m$ and $\mathbf{x}$ is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In certain cases, we give polynomial time algorithms for computing solutions with $\ell_0$-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $\mathbb{R}$ to other subdomains such as $\mathbb{Z}$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
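To make the $\ell_0$-norm and the notion of a sparsest solution concrete, here is a minimal Python sketch that brute-forces a minimum-$\ell_0$ non-negative integer solution of a toy system $A\mathbf{x}=\mathbf{b}$. The instance, the entry bound, and the exhaustive search are illustrative assumptions only; this is not the paper's bounds or its polynomial-time algorithms.

```python
import itertools
import numpy as np

def l0_norm(x):
    """Number of nonzero entries of x (its l0-'norm')."""
    return int(np.count_nonzero(x))

def sparsest_nonneg_solution(A, b, max_entry=10):
    """Exhaustive search for a non-negative integer solution of A x = b
    with minimum l0-norm, restricted to entries in {0, ..., max_entry}.
    Exponential-time illustration only; not the paper's algorithm, and it
    misses solutions whose entries exceed max_entry."""
    A, b = np.asarray(A), np.asarray(b)
    best = None
    for cand in itertools.product(range(max_entry + 1), repeat=A.shape[1]):
        x = np.array(cand)
        if np.array_equal(A @ x, b) and (best is None or l0_norm(x) < l0_norm(best)):
            best = x
    return best

# Hypothetical toy instance of the semigroup (non-negative) case.
A = [[1, 2, 3, 1],
     [0, 1, 2, 3]]
b = [7, 5]
x = sparsest_nonneg_solution(A, b)
print("sparsest solution found:", x, "with l0-norm", l0_norm(x))
```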
Strong lensing time-delay cosmography in the 2020s
Abstract: Multiply imaged time-variable sources can be used to measure absolute distances as a function of redshift and thus determine cosmological parameters, chiefly the Hubble constant $H_0$. In the two decades up to 2020, through a number of observational and conceptual breakthroughs, this so-called time-delay cosmography has reached a precision sufficient to be an important independent voice in the current “Hubble tension” debate between early- and late-universe determinations of $H_0$. The 2020s promise to deliver major advances in time-delay cosmography, owing to the large number of lenses to be discovered by new and upcoming surveys and the vastly improved capabilities for follow-up and analysis. In this review, after a brief summary of the foundations of the method and recent advances, we outline the opportunities for the decade and the challenges that will need to be overcome in order to meet the goal of determining $H_0$ from time-delay cosmography with 1% precision and accuracy.
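As background for how a measured time delay translates into $H_0$, the following Python sketch uses the standard time-delay distance relation $\Delta t = D_{\Delta t}\,\Delta\phi/c$ with $D_{\Delta t} = (1+z_d)\,D_d D_s/D_{ds} \propto 1/H_0$ in flat $\Lambda$CDM. The time delay, Fermat potential difference, and redshifts below are made-up illustrative numbers, not results from this review.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical single-lens measurements (illustrative numbers only).
dt_days = 42.0        # measured time delay between two images [days]
dphi_arcsec2 = 0.5    # modeled Fermat potential difference [arcsec^2]
z_lens, z_src = 0.5, 2.0
Om = 0.3              # fixed matter density, flat LCDM

c_km_s = 299792.458
KM_PER_MPC = 3.0857e19
ARCSEC = np.pi / (180.0 * 3600.0)   # radians per arcsecond

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM."""
    return np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def ang_diam_dist(z1, z2, H0):
    """Angular diameter distance between z1 and z2 [km], flat universe."""
    integral, _ = quad(lambda z: 1.0 / E(z), z1, z2)
    return (c_km_s / H0) * KM_PER_MPC * integral / (1.0 + z2)

def time_delay_distance(H0):
    """D_dt = (1+z_d) * D_d * D_s / D_ds, which scales as 1/H0."""
    D_d = ang_diam_dist(0.0, z_lens, H0)
    D_s = ang_diam_dist(0.0, z_src, H0)
    D_ds = ang_diam_dist(z_lens, z_src, H0)
    return (1.0 + z_lens) * D_d * D_s / D_ds

# From Delta_t = D_dt * Delta_phi / c, the "measured" time-delay distance is:
D_dt_measured = c_km_s * (dt_days * 86400.0) / (dphi_arcsec2 * ARCSEC ** 2)

# D_dt is inversely proportional to H0 at fixed Om, so rescale a reference model.
H0_ref = 70.0
H0_inferred = H0_ref * time_delay_distance(H0_ref) / D_dt_measured
print(f"Inferred H0 ~ {H0_inferred:.1f} km/s/Mpc (toy numbers)")
```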
- Award ID(s): 1906976
- PAR ID: 10473662
- Publisher / Repository: Springer
- Date Published:
- Journal Name: The Astronomy and Astrophysics Review
- Volume: 30
- Issue: 1
- ISSN: 0935-4956
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract: Let $X$ be a compact normal complex space of dimension $n$ and $L$ be a holomorphic line bundle on $X$. Suppose that $\Sigma=(\Sigma_1,\ldots,\Sigma_\ell)$ is an $\ell$-tuple of distinct irreducible proper analytic subsets of $X$, $\tau=(\tau_1,\ldots,\tau_\ell)$ is an $\ell$-tuple of positive real numbers, and let $H^0_0(X,L^p)$ be the space of holomorphic sections of $L^p:=L^{\otimes p}$ that vanish to order at least $\tau_j p$ along $\Sigma_j$, $1\le j\le \ell$. If $Y\subset X$ is an irreducible analytic subset of dimension $m$, we consider the space $H^0_0(X|Y, L^p)$ of holomorphic sections of $L^p|_Y$ that extend to global holomorphic sections in $H^0_0(X,L^p)$. Assuming that the triplet $(L,\Sigma,\tau)$ is big in the sense that $\dim H^0_0(X,L^p)\sim p^n$, we give a general condition on $Y$ to ensure that $\dim H^0_0(X|Y,L^p)\sim p^m$. When $L$ is endowed with a continuous Hermitian metric, we show that the Fubini-Study currents of the spaces $H^0_0(X|Y,L^p)$ converge to a certain equilibrium current on $Y$. We apply this to the study of the equidistribution of zeros in $Y$ of random holomorphic sections in $H^0_0(X|Y,L^p)$ as $p\rightarrow\infty$.
-
Abstract: We study inexact fixed-point proximity algorithms for solving a class of sparse regularization problems involving the $\ell_0$ norm. Specifically, the $\ell_0$ model has an objective function that is the sum of a convex fidelity term and a Moreau envelope of the $\ell_0$ norm regularization term. Such an $\ell_0$ model is non-convex. Existing exact algorithms for solving the problems require the availability of closed-form formulas for the proximity operator of the convex functions involved in the objective function. When such formulas are not available, numerical computation of the proximity operator becomes inevitable. This leads to inexact iteration algorithms. We investigate in this paper how the numerical error for every step of the iteration should be controlled to ensure global convergence of the inexact algorithms. We establish a theoretical result that guarantees the sequence generated by the proposed inexact algorithm converges to a local minimizer of the optimization problem. We implement the proposed algorithms for three applications of practical importance in machine learning and image science, which include regression, classification, and image deblurring. The numerical results demonstrate the convergence of the proposed algorithm and confirm that local minimizers of the $\ell_0$ models found by the proposed inexact algorithm outperform global minimizers of the corresponding $\ell_1$ models, in terms of approximation accuracy and sparsity of the solutions. (A toy hard-thresholding sketch of the $\ell_0$ proximity step appears after this list.)
-
Abstract: Given $g \in \mathbb{N} \cup \{0, \infty\}$, let $\Sigma_g$ denote the closed surface of genus $g$ with a Cantor set removed, if $g<\infty$, or the blooming Cantor tree, when $g=\infty$. We construct a family $\mathfrak{B}(H)$ of subgroups of $\mathrm{Map}(\Sigma_g)$ whose elements preserve a block decomposition of $\Sigma_g$ and eventually act like an element of $H$, where $H$ is a prescribed subgroup of the mapping class group of the block. The group $\mathfrak{B}(H)$ surjects onto an appropriate symmetric Thompson group of Farley–Hughes; in particular, it answers positively. Our main result asserts that $\mathfrak{B}(H)$ is of type $F_n$ if and only if $H$ is. As a consequence, for every $g\in \mathbb{N} \cup \{0, \infty\}$ and every $n\ge 1$, we construct a subgroup $G < \mathrm{Map}(\Sigma_g)$ that is of type $F_n$ but not of type $F_{n+1}$, and which contains the mapping class group of every compact surface of genus $\le g$ and with non-empty boundary.
-
Abstract: This paper studies several solution paths of sparse quadratic minimization problems as a function of the weighting parameter of the bi-objective of estimation loss versus solution sparsity. Three such paths are considered: the "$\ell_0$-path" where the discontinuous $\ell_0$-function provides the exact sparsity count; the "$\ell_1$-path" where the $\ell_1$-function provides a convex surrogate of sparsity count; and the "capped $\ell_1$-path" where the nonconvex nondifferentiable capped $\ell_1$-function aims to enhance the $\ell_1$-approximation. Serving different purposes, each of these three formulations is different from the others, both analytically and computationally. Our results deepen the understanding of (old and new) properties of the associated paths, highlight the pros, cons, and tradeoffs of these sparse optimization models, and provide numerical evidence to support the practical superiority of the capped $\ell_1$-path. Our study of the capped $\ell_1$-path is interesting in its own right as the path pertains to computable directionally stationary (= strongly locally minimizing in this context, as opposed to globally optimal) solutions of a parametric nonconvex nondifferentiable optimization problem. Motivated by classical parametric quadratic programming theory and reinforced by modern statistical learning studies, both casting an exponential perspective in fully describing such solution paths, we also aim to address the question of whether some of them can be fully traced in strongly polynomial time in the problem dimensions. A major conclusion of this paper is that a path of directional stationary solutions of the capped $\ell_1$-regularized problem offers interesting theoretical properties and a practical compromise between the $\ell_0$-path and the $\ell_1$-path. Indeed, while the $\ell_0$-path is computationally prohibitive and greatly handicapped by the repeated solution of mixed-integer nonlinear programs, the quality of the $\ell_1$-path, in terms of the two criteria of loss and sparsity in the estimation objective, is inferior to the capped $\ell_1$-path; the latter can be obtained efficiently by a combination of a parametric pivoting-like scheme supplemented by an algorithm that takes advantage of the Z-matrix structure of the loss function. (A brief numerical comparison of the $\ell_0$, $\ell_1$, and capped $\ell_1$ penalties appears after this list.)
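As a companion to the inexact fixed-point proximity abstract above, the following Python sketch shows the exact proximity operator of $\lambda\|\cdot\|_0$ (hard thresholding) inside a plain proximal-gradient loop. It is an illustration only, not the paper's inexact algorithm; the toy matrix, regularization weight, and planted support are hypothetical.

```python
import numpy as np

def prox_l0(v, lam):
    """Proximity operator of lam * ||.||_0 (hard thresholding):
    keep v_i when v_i**2 > 2*lam, set it to zero otherwise."""
    out = v.copy()
    out[v ** 2 <= 2.0 * lam] = 0.0
    return out

def proximal_gradient_l0(A, b, lam, iters=500):
    """Proximal-gradient iteration for  min_x 0.5*||Ax - b||^2 + lam*||x||_0,
    using an exact prox at every step; shown only to illustrate the kind of
    fixed-point iteration whose inexact version the abstract analyzes."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_l0(x - step * grad, lam * step)  # prox of the scaled l0 term
    return x

# Hypothetical noiseless sparse-recovery toy problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 7, 15]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = proximal_gradient_l0(A, b, lam=0.05)
print("estimated support:", np.flatnonzero(x_hat))
```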
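Relatedly, for the solution-path abstract above, this short sketch evaluates the three penalties it compares on a hypothetical vector: the exact $\ell_0$ count, the convex $\ell_1$ surrogate, and the capped $\ell_1$ surrogate (the cap parameter $a$ is chosen arbitrarily here). It only illustrates why the capped $\ell_1$ value tracks the $\ell_0$ count more closely than $\ell_1$ does; it does not trace any of the paper's solution paths.

```python
import numpy as np

def l0(x):
    """Exact sparsity count."""
    return float(np.count_nonzero(x))

def l1(x):
    """Convex surrogate: sum of absolute values."""
    return float(np.sum(np.abs(x)))

def capped_l1(x, a=1.0):
    """Capped l1 surrogate (cap a > 0): sum_i min(|x_i|, a) / a.
    Entries with |x_i| >= a contribute exactly 1, as in the l0 count."""
    return float(np.sum(np.minimum(np.abs(x), a)) / a)

x = np.array([5.0, -3.0, 0.2, 0.0, 0.0])   # hypothetical estimate
print("l0       :", l0(x))          # 3.0  (true sparsity)
print("l1       :", l1(x))          # 8.2  (dominated by the large entries)
print("capped l1:", capped_l1(x))   # 2.2  (close to the l0 count)
```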

