Title: Learning to Solve Linear Inverse Problems in Imaging with Neumann Networks
Recent advances have shown that it is often possible to use training data to learn solvers for linear inverse problems in imaging that outperform more traditional regularized least-squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map to a regularized least-squares problem. Here we summarize the Neumann network approach and show that it has a form compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample-efficient patch-based regularization approach.
Award ID(s): 1934637, 1925101, 1740707
PAR ID: 10183693
Journal Name: NeurIPS 2019 Workshop on Solving Inverse Problems with Deep Networks
Sponsoring Org: National Science Foundation
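To make the series construction in the abstract above concrete, here is a minimal sketch of the underlying identity: the solution of a Tikhonov-regularized least-squares problem expanded as a truncated Neumann series. A plain lambda*I regularizer stands in for the paper's learned regularizer, and all names and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def neumann_tikhonov(A, y, lam=0.1, eta=None, B=500):
    """Approximate x* = (A^T A + lam*I)^{-1} A^T y via the truncated
    Neumann series  x* ~ eta * sum_{j=0}^{B} (I - eta*(A^T A + lam*I))^j A^T y,
    which converges when eta <= 1 / lambda_max(A^T A + lam*I)."""
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)
    if eta is None:
        eta = 1.0 / np.linalg.eigvalsh(M).max()  # safe step size
    term = eta * (A.T @ y)  # j = 0 term of the series
    x = term.copy()
    for _ in range(B):
        term = term - eta * (M @ term)  # apply (I - eta*M) to the last term
        x += term
    return x

# Sanity check against the closed-form Tikhonov solution.
rng = np.random.default_rng(0)
A, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
err = np.linalg.norm(neumann_tikhonov(A, y)
                     - np.linalg.solve(A.T @ A + 0.1 * np.eye(20), A.T @ y))
print(err)  # shrinks toward 0 as B grows
```

The Neumann network replaces the lambda*I term with a learned operator and trains the truncated series end-to-end; the sketch above only checks the series identity that motivates the architecture.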
More Like this
  1.
    We present a data-driven method for computing approximate forward reachable sets using separating kernels in a reproducing kernel Hilbert space. We frame the problem as a support estimation problem, and learn a classifier of the support as an element in a reproducing kernel Hilbert space using a data-driven approach. Kernel methods provide a computationally efficient representation for the classifier that is the solution to a regularized least squares problem. The solution converges almost surely as the sample size increases, and admits known finite sample bounds. This approach is applicable to stochastic systems with arbitrary disturbances and neural network verification problems by treating the network as a dynamical system, or by considering neural network controllers as part of a closed-loop system. We present our technique on several examples, including a spacecraft rendezvous and docking problem, and two nonlinear system benchmarks with neural network controllers. 
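The kernel least-squares ingredient of this approach can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: it fits a regularized least-squares classifier f(x) = sum_i a_i k(x, x_i) to samples of the reachable set with target value 1, so f is near 1 on the sampled support and decays away from it; the RBF kernel, bandwidth, regularization level, and toy data are all assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=0.5):
    """Gaussian RBF kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_support_classifier(X, lam=1e-3, sigma=0.5):
    """Regularized least-squares fit in an RKHS: solve (K + n*lam*I) a = 1,
    so that f(x) = sum_i a_i k(x, x_i) is approximately 1 on the sampled
    support and near 0 far from it."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    a = np.linalg.solve(K + n * lam * np.eye(n), np.ones(n))
    return lambda Z: rbf_kernel(Z, X, sigma) @ a

# Toy usage: samples of reachable states (here, points in the unit disc).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1))[:, None]
f = fit_support_classifier(X)
print(f(np.array([[0.0, 0.0], [3.0, 3.0]])))  # near 1 inside, near 0 far away
```

Thresholding f then yields the approximate forward reachable set; the paper's separating-kernel machinery supplies the convergence and finite-sample guarantees that this toy fit lacks.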
  2. We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching. We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT). While current randomized solvers for least-squares optimization prescribe an embedding dimension at least as large as the data dimension, we show that the embedding dimension can be reduced to the effective dimension of the optimization problem and still preserve high-probability convergence guarantees. In this regard, we derive sharp matrix deviation inequalities over ellipsoids for both Gaussian and SRHT embeddings. Specifically, we improve on the constant of a classical Gaussian concentration bound, whereas for SRHT embeddings our deviation inequality involves a novel technical approach. Leveraging these bounds, we design a practical and adaptive algorithm that does not require knowing the effective dimension beforehand. Our method starts with an initial embedding dimension equal to 1 and, over the iterations, increases the embedding dimension up to the effective one at most. Hence, our algorithm improves the state-of-the-art computational complexity for solving regularized least-squares problems. Further, we show numerically that it outperforms standard iterative solvers such as the conjugate gradient method and its preconditioned version on several standard machine learning datasets.
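A heavily simplified sketch of this adaptive-sketching idea follows. It is not the authors' algorithm: the Gaussian-only embedding, the acceptance test, and the doubling schedule are illustrative assumptions. The idea shown is to take Newton-like steps on the ridge objective using a sketched Hessian with exact gradients, doubling the embedding dimension whenever a step fails to make progress.

```python
import numpy as np

def sketched_ridge(A, y, lam=1.0, iters=60, seed=0):
    """Illustrative adaptive sketching for min_x ||Ax - y||^2 + lam*||x||^2.
    A Gaussian embedding S (m x n) gives a sketched Hessian (SA)^T(SA) + lam*I;
    we take Newton-like steps with the exact gradient and double m whenever a
    step fails to reduce the gradient norm."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    grad = lambda x: A.T @ (A @ x - y) + lam * x
    x, m = np.zeros(d), 1  # start from the smallest embedding dimension
    for _ in range(iters):
        SA = (rng.standard_normal((m, n)) / np.sqrt(m)) @ A
        H = SA.T @ SA + lam * np.eye(d)           # sketched Hessian
        x_new = x - np.linalg.solve(H, grad(x))
        if np.linalg.norm(grad(x_new)) < np.linalg.norm(grad(x)):
            x = x_new                             # accept the step
        else:
            m = min(2 * m, n)                     # reject and grow the sketch
    return x

A = np.random.default_rng(1).standard_normal((300, 30))
y = np.random.default_rng(2).standard_normal(300)
x_hat = sketched_ridge(A, y)
x_star = np.linalg.solve(A.T @ A + np.eye(30), A.T @ y)
print(np.linalg.norm(x_hat - x_star))  # small once m is large enough
```

The paper's contribution is showing that m only needs to reach the problem's effective dimension, with high-probability guarantees; the toy loop above merely illustrates the grow-on-failure mechanic.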
  3. Network traffic is difficult to characterize due to its random, bursty nature. Even if a traffic source could be fit to a stochastic model with reasonable accuracy, analysis of end-to-end network performance metrics for such traffic models is generally intractable. In prior work, an approach to characterize traffic burstiness using a bound based on the class of phase-type distributions was proposed. Such phase-type bounds could be applied in conjunction with stochastic network calculus to derive probabilistic end-to-end delay bounds for a traffic stream. In this paper, we focus on the problem of estimating a tight phase-type burstiness bound for a given traffic trace. We investigate a method based on least squares and another based on the expectation-maximization algorithm. Our numerical results compare the two approaches in the scenario of a heavy-tailed M/G/1 queue. We find that while both methods are viable approaches for deriving phase-type bounds on traffic burstiness, the least squares approach performs better, particularly when a tail limit is imposed.
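As a generic illustration of the least-squares ingredient (not the paper's estimator: the two-phase hyperexponential model, the toy data, and the plain unconstrained fit are all assumptions, and an actual burstiness bound would additionally constrain the model to dominate the empirical tail), one can fit a simple phase-type tail to an empirical complementary CDF:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy heavy-tailed "traffic" data standing in for a real trace.
rng = np.random.default_rng(0)
data = rng.pareto(2.5, size=5000) + 1.0
t = np.linspace(1.0, np.quantile(data, 0.999), 200)
ccdf = np.array([(data > s).mean() for s in t])  # empirical P(X > t)

def resid(theta):
    """Residuals of a two-phase hyperexponential tail
    p*exp(-l1*t) + (1-p)*exp(-l2*t), a simple phase-type family."""
    p, l1, l2 = theta
    return p * np.exp(-l1 * t) + (1 - p) * np.exp(-l2 * t) - ccdf

fit = least_squares(resid, x0=[0.5, 1.0, 0.1],
                    bounds=([0.0, 1e-6, 1e-6], [1.0, 10.0, 10.0]))
print(fit.x)  # mixing weight p and phase rates l1, l2
```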
  4. A parameter identification inverse problem in the form of nonlinear least squares is considered. In the absence of stability, the frozen iteratively regularized Gauss–Newton (FIRGN) algorithm is proposed and its convergence is justified under what we call a generalized normal solvability condition. The penalty term is constructed based on a semi-norm generated by a linear operator, yielding greater flexibility in the use of qualitative and quantitative a priori information available for each particular model. Unlike previously known theoretical results on the FIRGN method, our convergence analysis does not rely on any nonlinearity conditions and is applicable to a large class of nonlinear operators. In our study, we leverage the nature of the ill-posedness in order to establish convergence in the noise-free case. For noise-contaminated data, we show that, at least theoretically, the process does not require a stopping rule and is no longer semi-convergent. Numerical simulations for a parameter estimation problem in epidemiology illustrate the efficiency of the algorithm.
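For context, one common form of the frozen iteratively regularized Gauss–Newton step, written here from the standard IRGN literature rather than taken from this paper (the a priori guess xi and the decreasing regularization sequence {alpha_k} are assumed notation), is:

```latex
% Frozen IRGN step: the Jacobian F'(\cdot) is evaluated once, at x_0,
% and "frozen" for all subsequent iterations.
x_{k+1} = x_k - \bigl( F'(x_0)^{*} F'(x_0) + \alpha_k L^{*} L \bigr)^{-1}
          \Bigl[ F'(x_0)^{*} \bigl( F(x_k) - y \bigr)
                 + \alpha_k L^{*} L \, (x_k - \xi) \Bigr]
```

Here F is the forward map, y the data, and L the linear operator generating the semi-norm penalty; freezing the Jacobian at x_0 avoids re-linearizing an unstable operator at every iterate.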