Abstract: A grand challenge in solving a large-scale linear inverse problem (LIP) is to retain computational efficiency and accuracy regardless of the growth of the problem size. Despite the plenitude of methods available for solving LIPs, various challenges have emerged in recent times due to the sheer volume of data, inadequate computational resources to handle an oversized problem, security and privacy concerns, and interest in the associated incremental or decremental problems. Removing these barriers requires a holistic upgrade of the existing methods so that they are computationally efficient, tractable, and equipped with scalable features. We therefore develop the parallel residual projection (PRP), a parallel computational framework that decomposes a large-scale LIP into sub-problems of low complexity and fuses the sub-problem solutions to form the solution to the original LIP. We analyze the convergence properties of the PRP and accentuate its benefits through its application to complex problems of network inference and gravimetric survey. We show that any existing algorithm for solving an LIP can be integrated into the PRP framework and used to solve the sub-problems while handling the prevailing challenges.
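The decompose-and-fuse idea can be illustrated with a toy block scheme: split the unknowns into column blocks, let each block solve a small least-squares problem against the shared residual, and average the resulting corrections. The sketch below is a minimal NumPy illustration of that pattern, assuming a simple averaging fusion step and a dense matrix; it is not the exact PRP update analyzed in the paper.

```python
import numpy as np

# Minimal sketch of a block decompose-and-fuse scheme in the spirit of
# parallel residual projection. The averaging fusion (dividing by the number
# of blocks) and the dense lstsq sub-solver are illustrative assumptions,
# not the PRP update from the paper.
def block_residual_projection(A, b, n_blocks=4, n_iters=200):
    m, n = A.shape
    blocks = np.array_split(np.arange(n), n_blocks)
    x = np.zeros(n)
    for _ in range(n_iters):
        r = b - A @ x                                   # shared residual
        updates = []
        for idx in blocks:                              # each block could run on its own worker
            d_j, *_ = np.linalg.lstsq(A[:, idx], r, rcond=None)
            updates.append((idx, d_j))                  # project the residual onto this column block
        for idx, d_j in updates:                        # fuse the sub-problem solutions
            x[idx] += d_j / n_blocks                    # damped averaging keeps the parallel sweep stable
    return x
```

Damping each correction by the number of blocks makes the sweep behave like a damped block-Jacobi pass, which is what allows the sub-problems to be solved independently and in parallel.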
Solving, tracking and stopping streaming linear inverse problems
Abstract: In large-scale applications including medical imaging, collocation differential equation solvers, and estimation with differential privacy, the underlying linear inverse problem can be reformulated as a streaming problem. In theory, the streaming problem can be effectively solved using memory-efficient, exponentially converging streaming solvers. In special cases when the underlying linear inverse problem is finite-dimensional, streaming solvers can periodically evaluate the residual norm, albeit at a substantial computational cost. When the underlying system is infinite-dimensional, streaming solvers can only access noisy estimates of the residual. While such noisy estimates are computationally efficient, they are useful only when their accuracy is known. In this work, we rigorously develop a general family of computationally practical residual estimators and their uncertainty sets for streaming solvers, and we demonstrate the accuracy of our methods on a number of large-scale linear problems. We thus further enable the practical use of streaming solvers for important classes of linear inverse problems.
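To make the setting concrete, the sketch below pairs a standard randomized-Kaczmarz streaming iteration with a cheap windowed estimate of the residual norm built only from the sampled equations, plus a crude spread as a stand-in for an uncertainty set. The solver, the importance-weighted estimator, and the window length are illustrative assumptions, not the calibrated estimator family developed in the paper.

```python
import numpy as np

# Minimal sketch (not the paper's estimator family): a randomized-Kaczmarz
# streaming solver that also tracks a cheap moving estimate of ||b - Ax||^2
# from the sampled equations only.
def streaming_kaczmarz_with_residual_tracking(A, b, n_iters=5000, window=200, seed=0):
    m, n = A.shape
    x = np.zeros(n)
    rng = np.random.default_rng(seed)
    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    samples = []                              # importance-weighted squared-residual samples
    history = []                              # (iteration, estimate, crude spread)
    for t in range(n_iters):
        i = rng.choice(m, p=probs)            # draw one equation per step
        r_i = b[i] - A[i] @ x                 # scalar residual of the sampled equation
        x += (r_i / row_norms_sq[i]) * A[i]   # Kaczmarz projection onto that equation
        samples.append(r_i**2 / probs[i])     # single-sample estimate of ||b - Ax||^2 at this iterate
        if len(samples) > window:
            samples.pop(0)                    # the window mixes nearby iterates, so this is approximate
        est = float(np.mean(samples))
        spread = float(np.std(samples) / np.sqrt(len(samples)))  # crude spread, not a rigorous uncertainty set
        history.append((t, est, spread))
    return x, history
```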
- PAR ID: 10517769
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Inverse Problems
- Volume: 40
- Issue: 8
- ISSN: 0266-5611
- Format(s): Medium: X; Size: Article No. 085003
- Size(s): Article No. 085003
- Sponsoring Org: National Science Foundation
More Like this
Bringmann, Karl; Grohe, Martin; Puppis, Gabriele; Svensson, Ola (Ed.) The multicommodity flow problem is a classic problem in network flow and combinatorial optimization, with applications in transportation, communication, logistics, and supply chain management. Existing algorithms often focus on low-accuracy approximate solutions, while high-accuracy algorithms typically rely on general linear program solvers. In this paper, we present efficient high-accuracy algorithms for a broad family of multicommodity flow problems on undirected graphs, demonstrating improved running times compared to general linear program solvers. Our main result shows that we can solve the 𝓁_{q, p}-norm multicommodity flow problem to a (1 + ε) approximation in time O_{q, p}(m^{1+o(1)} k² log(1/ε)), where k is the number of commodities, and O_{q, p}(⋅) hides constants depending only on q or p. As q and p approach 1 and ∞, respectively, the 𝓁_{q, p}-norm flow tends to the maximum concurrent flow. We introduce the first iterative refinement framework for 𝓁_{q, p}-norm minimization problems, which reduces the problem to solving a series of decomposable residual problems. In the case of k-commodity flow, each residual problem can be decomposed into k single-commodity convex flow problems, each of which can be solved in almost-linear time. As many classical variants of multicommodity flows were shown to be complete for linear programs in the high-accuracy regime [Ding-Kyng-Zhang, ICALP'22], our result provides new directions for studying more efficient high-accuracy multicommodity flow algorithms.
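The decomposability that drives the residual problems can already be seen in a quadratic (electrical-flow-style) surrogate: with no coupling in the objective, a k-commodity problem splits into k independent single-commodity flow problems. The toy sketch below assumes a node-edge incidence matrix B and per-commodity demand vectors and solves each commodity by a least-norm flow; it is only a stand-in for the paper's 𝓁_{q,p}-norm iterative refinement.

```python
import numpy as np

# Illustrative decomposition only: for a fully decoupled quadratic surrogate,
# a k-commodity flow problem reduces to k independent single-commodity
# problems. Each is solved here as a least-norm flow satisfying B f_j = d_j.
# This toy is not the paper's l_{q,p}-norm algorithm.
def decoupled_quadratic_flows(B, demands):
    """B: (n_nodes x n_edges) incidence matrix; demands: (k x n_nodes),
    each demand vector summing to zero (flow conservation)."""
    B_pinv = np.linalg.pinv(B)                 # one shared factorization across commodities
    return np.array([B_pinv @ d_j for d_j in demands])   # one independent flow per commodity
```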
Abstract: We consider the simulation of Bayesian statistical inverse problems governed by large-scale linear and nonlinear partial differential equations (PDEs). Markov chain Monte Carlo (MCMC) algorithms are standard techniques to solve such problems. However, MCMC techniques are computationally challenging as they require a prohibitive number of forward PDE solves. The goal of this paper is to introduce a fractional deep neural network (fDNN) based approach for the forward solves within an MCMC routine. Moreover, we discuss some approximation error estimates. We illustrate the efficiency of fDNN on inverse problems governed by nonlinear elliptic PDEs and the unsteady Navier–Stokes equations. In the former case, two examples are discussed, depending on two and 100 parameters respectively, with significant observed savings. The unsteady Navier–Stokes example illustrates that fDNN can outperform existing DNNs, doing a better job of capturing essential features such as vortex shedding.
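The basic pattern of replacing the expensive PDE solve inside the MCMC loop with a cheap learned forward map can be sketched as a random-walk Metropolis sampler whose likelihood calls a surrogate. In the sketch below, surrogate_forward stands in for a trained network (such as an fDNN), and the Gaussian prior, noise model, and step size are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Sketch of a surrogate-accelerated random-walk Metropolis sampler.
# surrogate_forward is a placeholder for a trained cheap forward map;
# prior, noise model, and step size are assumed for illustration.
def metropolis_with_surrogate(surrogate_forward, y_obs, noise_std, dim,
                              n_samples=10000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)

    def log_post(theta):
        misfit = y_obs - surrogate_forward(theta)           # cheap forward evaluation, no PDE solve
        return -0.5 * np.sum(misfit**2) / noise_std**2 - 0.5 * np.sum(theta**2)  # Gaussian likelihood + prior

    theta = np.zeros(dim)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(dim)      # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```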
We present three new algorithms for constructing differentially private synthetic data: a sanitized version of a sensitive dataset that approximately preserves the answers to a large collection of statistical queries. All three algorithms are oracle-efficient in the sense that they are computationally efficient when given access to an optimization oracle. Such an oracle can be implemented using many existing (non-private) optimization tools such as sophisticated integer program solvers. While the accuracy of the synthetic data is contingent on the oracle's optimization performance, the algorithms satisfy differential privacy even in the worst case. For all three algorithms, we provide theoretical guarantees for both accuracy and privacy. Through empirical evaluation, we demonstrate that our methods scale well with both the dimensionality of the data and the number of queries. Compared to the state-of-the-art High-Dimensional Matrix Mechanism (McKenna et al., VLDB 2018), our algorithms provide better accuracy in the large-workload, high-privacy regime (corresponding to low privacy loss epsilon).
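As a point of reference for the oracle-driven template, the sketch below implements a simplified MWEM-style loop: a noisy selection step picks a poorly answered query (the role an optimization oracle plays in spirit), and a multiplicative-weights update pushes a synthetic distribution toward the noisy measurement. This is classic MWEM rather than any of the paper's three algorithms, and the privacy accounting here is deliberately simplistic.

```python
import numpy as np

# Simplified MWEM-style sketch of private synthetic data generation.
# This is a stand-in illustration, not the paper's oracle-efficient
# algorithms; the epsilon split and noise scales are simplistic.
def mwem_sketch(true_answers, query_matrix, domain_size, epsilon, rounds=20, seed=0):
    """query_matrix: (n_queries x domain_size) 0/1 indicators;
    true_answers: normalized answers on the sensitive data."""
    rng = np.random.default_rng(seed)
    synth = np.full(domain_size, 1.0 / domain_size)       # synthetic distribution over the domain
    eps_round = epsilon / (2 * rounds)                     # naive budget split per round
    for _ in range(rounds):
        errors = query_matrix @ synth - true_answers
        noisy = np.abs(errors) + rng.laplace(scale=1.0 / eps_round, size=len(errors))
        q = int(np.argmax(noisy))                          # noisily select a badly answered query
        measurement = true_answers[q] + rng.laplace(scale=1.0 / eps_round)
        # multiplicative-weights update toward the noisy measurement
        synth *= np.exp(query_matrix[q] * (measurement - query_matrix[q] @ synth) / 2.0)
        synth /= synth.sum()
    return synth
```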
We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive index contrasts. Existing multiple scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes the computational bottleneck by slicewise computation of multiple scattering under an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiple scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity-promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on the single-scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises the utilization of multiple scattering for versatile large-scale applications.
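The recursive structure of the forward model resembles a Born-series accumulation: each scattering order is obtained by propagating the product of the scattering potential and the previous order, and the orders are summed into the total field. The sketch below shows that recursion with a placeholder propagate operator (e.g., an angular-spectrum step); it is a schematic, not the paper's slicewise holographic model.

```python
import numpy as np

# Toy Born-series-style recursion over scattering orders.
# `propagate` is a placeholder for a field propagation operator
# (e.g., angular spectrum); this schematic is not the paper's
# slicewise multiple-scattering model.
def born_series(incident_field, potential, propagate, n_orders=5):
    total = incident_field.copy()
    order = incident_field
    for _ in range(n_orders):
        order = propagate(potential * order)   # next-higher-order scattered field
        total = total + order                  # accumulate into the total field
    return total
```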