Title: A topological approach to inferring the intrinsic dimension of convex sensing data
Abstract: We consider a common measurement paradigm, where an unknown subset of an affine space is measured by unknown continuous quasi-convex functions. Given the measurement data, can one determine the dimension of this space? In this paper, we develop a method for inferring the intrinsic dimension of the data from measurements by quasi-convex functions, under natural assumptions. The dimension inference problem depends only on discrete data: the ordering of the measured points of the space induced by the sensor functions. We construct a filtration of Dowker complexes associated to measurements by quasi-convex functions. Topological features of these complexes are then used to infer the intrinsic dimension. We prove convergence theorems that guarantee obtaining the correct intrinsic dimension in the limit of large data, under natural assumptions. We also illustrate the usability of this method in simulations.
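The construction underlying the abstract can be made concrete in a small way. A minimal sketch, assuming a toy binary relation between points and sensors (the function name, `max_dim` cutoff, and toy data are illustrative, not from the paper): a set of points spans a simplex of the Dowker complex exactly when a single sensor is related to all of them.

```python
from itertools import combinations
import numpy as np

def dowker_complex(R, max_dim=2):
    """Dowker complex of a binary relation R (points x sensors).

    A set of points spans a simplex iff some single sensor is related
    to all of them, i.e. some column of R is nonzero on the whole set.
    Only simplices of dimension <= max_dim are kept.
    """
    simplices = set()
    for j in range(R.shape[1]):
        witnessed = tuple(int(i) for i in np.flatnonzero(R[:, j]))
        # a simplex with k vertices has dimension k - 1, so k <= max_dim + 1
        for k in range(1, min(len(witnessed), max_dim + 1) + 1):
            simplices.update(combinations(witnessed, k))
    return sorted(simplices, key=lambda s: (len(s), s))

# toy relation: sensor 0 sees points {0, 1}, sensor 1 sees points {1, 2}
R = np.array([[1, 0],
              [1, 1],
              [0, 1]])
print(dowker_complex(R))  # → [(0,), (1,), (2,), (0, 1), (1, 2)]
```

The paper's method filters such complexes by the order data coming from the sensor values; the sketch above shows only the complex of a single relation in that filtration.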
Award ID(s): 2014217
PAR ID: 10304987
Author(s) / Creator(s):
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Journal of Applied and Computational Topology
Volume: 6
Issue: 1
ISSN: 2367-1726
Page Range / eLocation ID: p. 127-176
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Multivariate adaptive regression splines (MARS) is a popular method for nonparametric regression introduced by Friedman in 1991. MARS fits simple nonlinear and non-additive functions to regression data. We propose and study a natural lasso variant of the MARS method. Our method is based on least squares estimation over a convex class of functions obtained by considering infinite-dimensional linear combinations of functions in the MARS basis and imposing a variation-based complexity constraint. Although it is defined as a solution to an infinite-dimensional optimization problem, our estimator can be computed via finite-dimensional convex optimization. Under a few standard design assumptions, we prove that our estimator achieves a rate of convergence that depends only logarithmically on dimension and thus avoids the usual curse of dimensionality to some extent. We also show that our method is naturally connected to nonparametric estimation techniques based on smoothness constraints. We implement our method with a cross-validation scheme for the selection of the involved tuning parameter and compare it to the usual MARS method in various simulation and real data settings.
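The flavor of a one-dimensional, first-order version of this idea can be imitated in a few lines: build a MARS-style hinge basis at fixed knots and fit it with an l1 penalty. The knots, penalty level, and the simple ISTA solver below are illustrative choices for the sketch, not the authors' algorithm.

```python
import numpy as np

def hinge_basis(x, knots):
    """First-order MARS-style basis: an intercept plus the pair
    max(0, x - t), max(0, t - x) for each knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

def lasso_ista(X, y, lam=0.5, steps=3000):
    """Lasso via iterative soft-thresholding (proximal gradient descent)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        z = beta - X.T @ (X @ beta - y) / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 200))
y = np.abs(x) + 0.05 * rng.standard_normal(200)   # kink at 0 suits a hinge pair
X = hinge_basis(x, knots=np.linspace(-0.9, 0.9, 7))
beta = lasso_ista(X, y)
```

The l1 penalty plays the role of the variation-based complexity constraint in the abstract: it selects few hinge terms rather than using all of them.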
  2. Abstract Objective. Electrical impedance tomography (EIT) is a noninvasive imaging method whereby electrical measurements on the periphery of a heterogeneous conductor are inverted to map its internal conductivity. The EIT method proposed here aims to improve computational speed and noise tolerance by introducing sensitivity volume as a figure-of-merit for comparing EIT measurement protocols. Approach. Each measurement is shown to correspond to a sensitivity vector in model space, such that the set of measurements, in turn, corresponds to a set of vectors that subtend a sensitivity volume in model space. A maximal sensitivity volume identifies the measurement protocol with the greatest sensitivity and greatest mutual orthogonality. A distinguishability criterion is generalized to quantify the increased noise tolerance of high-sensitivity measurements. Main result. The sensitivity volume method allows the model space dimension to be minimized to match that of the data space, and the data importance to be increased within an expanded space of measurements defined by an increased number of contacts. Significance. The reduction in model space dimension is shown to increase computational efficiency, accelerating tomographic inversion by several orders of magnitude, while the enhanced sensitivity tolerates noise levels up to several orders of magnitude larger than standard methods.
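One concrete reading of the sensitivity-volume figure-of-merit (an illustrative interpretation for this sketch, not the authors' implementation) is the Gram-determinant volume subtended by the measurement sensitivity vectors, which grows with both the length and the mutual orthogonality of the rows of the sensitivity matrix:

```python
import numpy as np

def sensitivity_volume(J):
    """Volume subtended by the rows of the sensitivity matrix J
    (measurements x model parameters), computed as sqrt(det(J J^T)).
    It is maximized when the sensitivity vectors are simultaneously
    long and mutually orthogonal."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

# two hypothetical 2-measurement protocols in a 3-parameter model space
J_orthogonal = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
J_parallel = np.array([[1.0, 0.0, 0.0],
                       [0.99, 0.14, 0.0]])   # nearly collinear rows
print(sensitivity_volume(J_orthogonal))  # → 1.0
print(sensitivity_volume(J_parallel))    # much smaller: rows nearly redundant
```

Under this reading, comparing protocols reduces to comparing these determinants, which is consistent with the abstract's claim that a maximal volume selects the protocol with the greatest sensitivity and mutual orthogonality.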
  3. Topological Data Analysis (TDA) is a data mining technique to characterize the topological features of data. Persistent Homology (PH) is an important tool of TDA that has been applied to a wide range of applications. However, its time and space complexities motivate a need for new methods to compute the PH of high-dimensional data. An important, and memory intensive, element in the computation of PH is the complex constructed from the input data. In general, PH tools use and focus on optimizing simplicial complexes; less frequently, cubical complexes are also studied. This paper develops a method to construct polytopal complexes (that is, complexes built from any mix of convex polytopes) in any dimension R^n. In general, polytopal complexes are significantly smaller than simplicial or cubical complexes. This paper includes an experimental assessment of the impact that polytopal complexes have on the memory complexity and output of a PH computation.
  4. We introduce a time-dimensional reduction method for the inverse source problem in linear elasticity, where the goal is to reconstruct the initial displacement and velocity fields from partial boundary measurements of elastic wave propagation. The key idea is to employ a novel spectral representation in time, using an orthonormal basis composed of Legendre polynomials weighted by exponential functions. This Legendre polynomial-exponential basis enables a stable and accurate decomposition in the time variable, effectively reducing the original space-time inverse problem to a sequence of coupled spatial elasticity systems that no longer depend on time. These resulting systems are solved using the quasi-reversibility method. On the theoretical side, we establish a convergence theorem ensuring the stability and consistency of the regularized solution obtained by the quasi-reversibility method as the noise level tends to zero. On the computational side, two-dimensional numerical experiments confirm the theory and demonstrate the method's ability to accurately reconstruct both the geometry and amplitude of the initial data, even under substantial measurement noise. The results highlight the effectiveness of the proposed framework as a robust and computationally efficient strategy for inverse elastic source problems. 
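The paper's basis is Legendre polynomials weighted by exponentials; the sketch below illustrates only the plainer ingredient of that idea, projecting a time trace onto shifted Legendre polynomials, with the signal and all parameter choices invented for the demo:

```python
import numpy as np

# Project a sample time trace onto the first N shifted Legendre polynomials
# on [0, T]; spectral-in-time representations of this kind reduce a
# time-dependent problem to a finite set of coefficient equations.
T, N = 1.0, 16
t = np.linspace(0.0, T, 400)
s = 2.0 * t / T - 1.0                             # map [0, T] -> [-1, 1]
V = np.polynomial.legendre.legvander(s, N - 1)    # columns P_0, ..., P_{N-1}
f = np.sin(4 * np.pi * t) * np.exp(-t)            # hypothetical boundary trace
coef, *_ = np.linalg.lstsq(V, f, rcond=None)      # least-squares projection
err = np.max(np.abs(V @ coef - f))
print(err)  # spectral accuracy: smooth traces converge rapidly in N
```

The rapid decay of this truncation error for smooth data is what makes replacing the time variable by a modest number of basis coefficients viable.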
  5. Abstract In many applications, we seek to recover signals from linear measurements far fewer than the ambient dimension, given that the signals have exploitable structures such as sparse vectors or low rank matrices. In this paper, we work in a general setting where signals are approximately sparse in a so-called atomic set. We provide general recovery results stating that a convex program can stably and robustly recover signals if the null space of the sensing map satisfies certain properties. Moreover, we argue that such a null space property can be satisfied with high probability if each measurement is sub-Gaussian, even when the number of measurements is very small. Some new results for recovering signals sparse in a frame, and for recovering low rank matrices, are also derived as a result.
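For the simplest atomic set, the signed coordinate vectors, the convex program in question reduces to basis pursuit, and a standard LP reformulation makes it a few lines of SciPy. The dimensions, sparsity level, and random seed below are arbitrary demo choices:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, via the split x = u - v, u, v >= 0."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # sub-Gaussian sensing map, m << n
x_true = np.zeros(50)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]     # 3-sparse signal
y = A @ x_true
x_hat = basis_pursuit(A, y)
print(np.linalg.norm(x_hat - x_true))      # small when recovery succeeds
```

The null space property in the abstract is exactly the condition under which the minimizer of this program coincides with the sparse signal; the sub-Gaussian rows of `A` here are the regime the probabilistic argument covers.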