Abstract Computational inverse problems use a finite number of measurements to infer a discrete approximation of an unknown parameter function. Motivated by PDE-based optimization, we study unique reconstruction in discretized inverse problems by examining the positive definiteness of the Hessian matrix. What is the reconstruction power of a fixed number of data observations? How many parameters can one reconstruct? We describe a probabilistic approach and spell out the interplay between the observation size (r) and the number of parameters to be uniquely identified (m). The main technical tool is a random sketching strategy, which relies heavily on matrix concentration inequalities and sampling theory. By analyzing a randomly subsampled Hessian matrix, we obtain a well-conditioned reconstruction problem with high probability. Our main theory is validated by numerical experiments, using an elliptic inverse problem as an example.
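The subsampling idea in this abstract can be illustrated with a minimal numpy sketch. A toy Gaussian matrix stands in for the Jacobian of the PDE-based forward map, and the sizes `m`, `n_obs`, `r` are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_obs, r = 20, 500, 60        # parameters, full data size, sketch size

# Toy stand-in for the Jacobian of the forward map at the current iterate.
J = rng.standard_normal((n_obs, m))

# Randomly subsample r of the n_obs observation rows, rescaled so the
# sketched Gauss-Newton Hessian is an unbiased estimate of the full one.
rows = rng.choice(n_obs, size=r, replace=False)
J_r = np.sqrt(n_obs / r) * J[rows]

H_sketch = J_r.T @ J_r

# With r comfortably larger than m, the subsampled Hessian is positive
# definite, so the reduced reconstruction problem is well conditioned.
lam_min = np.linalg.eigvalsh(H_sketch)[0]
print(lam_min > 0.0)
```

Repeating this draw many times and recording how often `lam_min` stays bounded away from zero mirrors the "with high probability" statement in the abstract.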
This content will become publicly available on March 19, 2026
Discrete inverse problems with internal functionals
Abstract We study the problem of finding the resistors in a resistor network from measurements of the power dissipated by the resistors under different loads. We give sufficient conditions for local uniqueness, i.e. conditions that guarantee that the linearization of this non-linear inverse problem admits a unique solution. Our method is inspired by a method to study local uniqueness of inverse problems with internal functionals in the continuum, where the inverse problem is reformulated as a redundant system of differential equations. We use our method to derive local uniqueness conditions for other discrete inverse problems with internal functionals including a discrete analogue of the inverse Schrödinger problem and problems where the resistors are replaced by impedances and dissipated power at the zero and a positive frequency are available. Moreover, we show that the dissipated power measurements can be obtained from measurements of thermal noise induced currents.
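The forward side of this inverse problem, computing the power dissipated by each resistor under a prescribed load, can be sketched on a tiny network. The three-node path, the conductance values, and the grounding choice below are all illustrative assumptions:

```python
import numpy as np

# Toy network: a path 0 - 1 - 2 with two resistors (conductances sigma).
edges = [(0, 1), (1, 2)]
sigma = np.array([2.0, 3.0])          # hypothetical conductances to recover

# Weighted graph Laplacian of the network.
n = 3
L = np.zeros((n, n))
for (i, j), s in zip(edges, sigma):
    L[i, i] += s; L[j, j] += s
    L[i, j] -= s; L[j, i] -= s

# Load: inject unit current at node 0, withdraw it at node 2; ground node 2.
f = np.array([1.0, 0.0, -1.0])
v = np.zeros(n)
v[:2] = np.linalg.solve(L[:2, :2], f[:2])   # v[2] = 0 (grounded)

# Internal functional: power dissipated by each resistor under this load.
power = sigma * np.array([(v[i] - v[j]) ** 2 for i, j in edges])

# Sanity check: total dissipated power equals the power put in by the source.
print(np.isclose(power.sum(), f @ v))
```

The inverse problem studied in the paper asks when measurements like `power`, taken under several different loads, determine `sigma` locally; the linearization of the map from `sigma` to `power` is what the local uniqueness conditions address.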
- PAR ID: 10595482
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Inverse Problems
- Volume: 41
- Issue: 4
- ISSN: 0266-5611
- Page Range / eLocation ID: 045004
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The majority of iterative algorithms for CT reconstruction rely on discrete-to-discrete modeling, where both the sinogram measurements and the image to be estimated are discrete arrays. However, tomographic projections are ideally modeled as line integrals of a continuous attenuation function, i.e., the true inverse problem is discrete-to-continuous in nature. Recently, coordinate-based neural networks (CBNNs), also known as implicit neural representations, have gained traction as a flexible type of continuous-domain image representation in a variety of inverse problems arising in computer vision and computational imaging. Using standard neural network training techniques, a CBNN can be fit to measurements to give a continuous-domain estimate of the image. In this study, we empirically investigate the potential of CBNNs to solve continuous-domain inverse problems in CT imaging. In particular, we experiment with reconstructing an analytical phantom from its ideal sparse-view sinogram measurements. Our results illustrate that reconstructions with a CBNN are more accurate than filtered back projection and algebraic reconstruction techniques at a variety of resolutions, and competitive with total-variation-regularized iterative reconstruction.
Debiased machine learning is a meta-algorithm, based on bias correction and sample splitting, for calculating confidence intervals for functionals (i.e., scalar summaries) of machine learning algorithms. For example, an analyst may seek the confidence interval for a treatment effect estimated with a neural network. We present a non-asymptotic debiased machine learning theorem that encompasses any global or local functional of any machine learning algorithm satisfying a few simple, interpretable conditions. Formally, we prove consistency, Gaussian approximation, and semiparametric efficiency by finite-sample arguments. The rate of convergence is $n^{-1/2}$ for global functionals, and it degrades gracefully for local functionals. Our results culminate in a simple set of conditions that an analyst can use to translate modern learning-theory rates into traditional statistical inference. The conditions reveal a general double robustness property for ill-posed inverse problems.
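The sample splitting and bias correction that the abstract refers to can be sketched with a cross-fitted doubly robust (AIPW-style) estimate of a treatment effect. Everything here is a simplifying assumption: the design is randomized, linear regressions stand in for the machine learners, and the data are simulated with a true effect of 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
X = rng.standard_normal(n)
D = rng.random(n) < 0.5                       # randomized treatment
Y = 1.0 * D + X + rng.standard_normal(n)      # true effect tau = 1

def fit_outcome(Xf, Yf):
    """Least-squares outcome model mu(x) = a + b*x (toy 'learner')."""
    A = np.column_stack([np.ones_like(Xf), Xf])
    coef, *_ = np.linalg.lstsq(A, Yf, rcond=None)
    return lambda x: coef[0] + coef[1] * x

folds = np.array_split(rng.permutation(n), 2)
scores = []
for k in (0, 1):
    tr, ev = folds[1 - k], folds[k]           # fit on one fold, score the other
    mu1 = fit_outcome(X[tr][D[tr]], Y[tr][D[tr]])
    mu0 = fit_outcome(X[tr][~D[tr]], Y[tr][~D[tr]])
    e_hat = D[tr].mean()                      # propensity (randomized design)
    x, d, y = X[ev], D[ev], Y[ev]
    # Doubly robust score: outcome-model contrast plus inverse-propensity
    # correction of the residuals (the bias-correction step).
    psi = (mu1(x) - mu0(x)
           + d * (y - mu1(x)) / e_hat
           - (~d) * (y - mu0(x)) / (1 - e_hat))
    scores.append(psi)

psi = np.concatenate(scores)
tau_hat = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)             # plug-in standard error
print(tau_hat, se)
```

A normal-approximation confidence interval is then `tau_hat ± 1.96 * se`, which is the kind of interval the non-asymptotic theory above justifies.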
Shawe-Taylor, John (Ed.) Learning a function from a finite number of sampled data points (measurements) is a fundamental problem in science and engineering. This is often formulated as a minimum norm interpolation (MNI) problem, a regularized learning problem or, more generally, a semi-discrete inverse problem (SDIP), in either Hilbert spaces or Banach spaces. The goal of this paper is to systematically study solutions of these problems in Banach spaces. We aim at obtaining explicit representer theorems for their solutions, on which convenient solution methods can then be developed. For the MNI problem, the explicit representer theorems enable us to express the infimum in terms of the norm of the linear combination of the interpolation functionals. For the purpose of developing efficient computational algorithms, we establish the fixed-point equation formulation of solutions of these problems. We reveal that, unlike in a Hilbert space, solutions of these problems in a Banach space may not reduce to truly finite-dimensional problems (certain infinite-dimensional components remain hidden). We demonstrate how this obstacle can be removed, reducing the original problem to a truly finite-dimensional one, in the special case when the Banach space is ℓ1(N).
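The Hilbert-space special case mentioned above is easy to make concrete: in a reproducing kernel Hilbert space, the classical representer theorem says the minimum norm interpolant is a finite linear combination of kernel sections at the data points. The Gaussian kernel, sample points, and lengthscale below are illustrative choices, and this sketch does not cover the Banach-space setting that is the paper's subject:

```python
import numpy as np

# Sampled data points (measurements) to interpolate.
x = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
y = np.sin(2 * np.pi * x)

# Gaussian kernel; in its RKHS the minimum norm interpolant has the finite
# form s(t) = sum_j c_j k(t, x_j) by the representer theorem.
k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / 0.1)

K = k(x, x)                                  # kernel (Gram) matrix
c = np.linalg.solve(K, y)                    # interpolation conditions K c = y
s = lambda t: k(t, x) @ c                    # continuous interpolant

print(np.allclose(s(x), y))                  # reproduces the samples
```

The squared RKHS norm of the interpolant is `c @ K @ c`, and minimality of this quantity among all interpolants is exactly what the Hilbert-space representer theorem guarantees; the paper's contribution is the analogous (and subtler) structure in Banach spaces such as ℓ1(N).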
Abstract In this paper, we introduce a bilevel optimization framework for addressing inverse mean-field games, alongside an exploration of numerical methods tailored for this bilevel problem. The primary benefit of our bilevel formulation lies in maintaining the convexity of the objective function and the linearity of the constraints in the forward problem. Our paper focuses on inverse mean-field games characterized by unknown obstacles and metrics. We show numerical stability for these two types of inverse problems. More importantly, we establish, for the first time, the identifiability of the inverse mean-field game with unknown obstacles via the solution of the resulting bilevel problem. The bilevel approach enables us to employ an alternating gradient-based optimization algorithm with a provable convergence guarantee. To validate the effectiveness of our methods in solving the inverse problems, we have designed comprehensive numerical experiments, providing empirical evidence of their efficacy.
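The alternating gradient scheme that the bilevel formulation enables can be sketched on a toy scalar bilevel problem, nothing like the actual mean-field game system: the lower level solves a convex quadratic parameterized by `theta` (playing the role of the unknown obstacle), and the upper level tunes `theta` so that the lower-level solution matches an observed target:

```python
# Toy bilevel problem (illustrative only): lower level  min_u 0.5*u**2 - theta*u,
# whose exact solution is u*(theta) = theta; upper level  min_theta 0.5*(u - u_obs)**2.
u_obs = 2.0

def lower_grad(u, theta):
    # Gradient in u of the (convex) lower-level objective.
    return u - theta

def upper_grad(u):
    # Gradient in theta of the upper-level loss, using du*/dtheta = 1
    # for this toy lower-level solution map.
    return u - u_obs

u, theta = 0.0, 0.0
for _ in range(2000):
    u -= 0.1 * lower_grad(u, theta)      # inner descent step
    theta -= 0.01 * upper_grad(u)        # outer (hypergradient) step

print(theta, u)
```

Both iterates converge to `u_obs`; the convexity of the lower level is what makes the alternating scheme stable, which mirrors the motivation for keeping the forward mean-field game convex in the bilevel formulation above.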