
Title: Learning to Solve Linear Inverse Problems in Imaging with Neumann Networks
Recent advances have shown that learned methods, trained on example data, can outperform more traditional regularized least-squares solutions to linear inverse problems in imaging. Along these lines, we present extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map of a regularized least-squares problem. We summarize the Neumann network approach and show that its form is compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample-efficient, patch-based regularization approach.
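The truncated Neumann series idea behind the architecture can be illustrated numerically. The sketch below is not the authors' implementation: it uses a fixed Tikhonov term as a stand-in for the learned regularizer, and all sizes and constants are illustrative. It checks that the truncated series recovers the closed-form regularized least-squares solution when the step size is small enough for the series to converge.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # forward operator of a toy linear inverse problem
y = rng.standard_normal(20)         # observed measurements
lam, eta, B = 0.1, 0.01, 5000       # Tikhonov weight, step size, series truncation length

# closed-form regularized least-squares solution: (A^T A + lam I)^{-1} A^T y
x_star = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)

# truncated Neumann series: x ~ sum_{j=0}^{B} (I - eta (A^T A + lam I))^j  eta A^T y,
# valid because eta is small enough that the iterated matrix is a contraction
M = np.eye(10) - eta * (A.T @ A + lam * np.eye(10))
term = eta * (A.T @ y)
x = term.copy()
for _ in range(B):
    term = M @ term
    x += term
```

In the Neumann network, each series term is instead produced by a learned network block, and the truncation depth becomes the number of unrolled blocks.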
Award ID(s): 1934637, 1925101, 1740707
Journal Name:
NeurIPS 2019 Workshop on Solving Inverse Problems with Deep Networks
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We present a data-driven method for computing approximate forward reachable sets using separating kernels in a reproducing kernel Hilbert space. We frame the problem as one of support estimation and learn a classifier of the support as an element of a reproducing kernel Hilbert space. Kernel methods provide a computationally efficient representation of the classifier, which is the solution to a regularized least-squares problem. The solution converges almost surely as the sample size increases and admits known finite-sample bounds. The approach is applicable to stochastic systems with arbitrary disturbances and to neural network verification problems, either by treating the network as a dynamical system or by considering neural network controllers as part of a closed-loop system. We demonstrate our technique on several examples, including a spacecraft rendezvous and docking problem and two nonlinear system benchmarks with neural network controllers.
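A minimal sketch of the kernel regularized least-squares step described above (not the paper's implementation): fit an RKHS function to the value one at samples drawn from the support, then threshold kernel evaluations to classify membership. The kernel bandwidth, regularization weight, and threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 2))   # samples drawn from the (unknown) support
sigma, lam = 0.3, 1e-3                     # kernel bandwidth and regularization weight

def gaussian_kernel(P, Q):
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# regularized least-squares fit of the constant-one function over the samples:
# alpha = (K + lam n I)^{-1} 1, and the classifier is f(x) = k(x)^T alpha
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * n * np.eye(n), np.ones(n))

def in_support(point, tau=0.5):
    # declare a point inside the support when f exceeds the threshold tau
    return (gaussian_kernel(np.atleast_2d(point), X) @ alpha > tau)[0]

inside = in_support(np.array([0.0, 0.0]))    # well covered by samples
outside = in_support(np.array([5.0, 5.0]))   # far from every sample
```

The linear solve is the whole training step, which is what makes the representation computationally efficient relative to iterative classifiers.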
  2. Abstract

    Many methods have been developed for estimating the parameters of biexponential decay signals, which arise throughout magnetic resonance relaxometry (MRR) and the physical sciences. This is an intrinsically ill‐posed problem so that estimates can depend strongly on noise and underlying parameter values. Regularization has proven to be a remarkably efficient procedure for providing more reliable solutions to ill‐posed problems, while, more recently, neural networks have been used for parameter estimation. We re‐address the problem of parameter estimation in biexponential models by introducing a novel form of neural network regularization which we call input layer regularization (ILR). Here, inputs to the neural network are composed of a biexponential decay signal augmented by signals constructed from parameters obtained from a regularized nonlinear least‐squares estimate of the two decay time constants. We find that ILR results in a reduction in the error of time constant estimates on the order of 15%–50% or more, depending on the metric used and signal‐to‐noise level, with greater improvement seen for the time constant of the more rapidly decaying component. ILR is compatible with existing regularization techniques and should be applicable to a wide range of parameter estimation problems.
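A rough sketch of the regularized nonlinear least-squares step that produces the time-constant estimates used to augment the network input. This is not the authors' code: the signal model, noise level, penalty form, and prior values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 64)
c1, c2, T1, T2 = 0.6, 0.4, 0.2, 1.5   # hypothetical ground-truth amplitudes / time constants
signal = c1 * np.exp(-t / T1) + c2 * np.exp(-t / T2) + 0.01 * rng.standard_normal(t.size)

mu = 0.05                              # regularization weight (illustrative)
prior = np.array([0.25, 1.2])          # prior guess for the two time constants

def residuals(p):
    a1, a2, tau1, tau2 = p
    fit = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
    # data misfit augmented with a quadratic penalty pulling the time constants
    # toward the prior, stabilizing the ill-posed biexponential fit
    return np.concatenate([fit - signal, mu * (p[2:] - prior)])

sol = least_squares(residuals, x0=[0.5, 0.5, 0.3, 1.0])
T1_hat, T2_hat = sol.x[2], sol.x[3]
# in ILR, (T1_hat, T2_hat) would be used to build signals that augment the network input
```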

  3. Abstract

    A parameter identification inverse problem in the form of nonlinear least squares is considered. In the absence of stability, the frozen iteratively regularized Gauss–Newton (FIRGN) algorithm is proposed, and its convergence is justified under what we call a generalized normal solvability condition. The penalty term is constructed from a semi-norm generated by a linear operator, yielding greater flexibility in the use of qualitative and quantitative a priori information available for each particular model. Unlike previously known theoretical results on the FIRGN method, our convergence analysis does not rely on any nonlinearity conditions and is applicable to a large class of nonlinear operators. In our study, we leverage the nature of ill-posedness to establish convergence in the noise-free case. For noise-contaminated data, we show that, at least theoretically, the process does not require a stopping rule and is no longer semi-convergent. Numerical simulations for a parameter estimation problem in epidemiology illustrate the efficiency of the algorithm.
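For reference, one common way to write the iteratively regularized Gauss–Newton iteration for a nonlinear equation F(x) = y with noisy data y^δ and a seminorm penalty generated by a linear operator L is the following; the notation is generic and may differ from the paper's:

```latex
\[
x_{k+1} = x_k - \bigl(F'(x_k)^{*} F'(x_k) + \alpha_k L^{*}L\bigr)^{-1}
\Bigl[F'(x_k)^{*}\bigl(F(x_k) - y^{\delta}\bigr) + \alpha_k L^{*}L\,(x_k - x_0)\Bigr]
\]
```

The "frozen" variant replaces F'(x_k) with F'(x_0), so the linearization is computed once at the initial guess rather than at every iterate.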
  4.
    Power system state estimation (PSSE) aims at finding the voltage magnitudes and angles at all generation and load buses, using meter readings and other available information. PSSE is often formulated as a nonconvex, nonlinear least-squares (NLS) problem, which is traditionally solved by the Gauss-Newton method. However, Gauss-Newton iterations for minimizing nonconvex problems are sensitive to initialization and can diverge. In this context, we advocate a deep neural network (DNN) based “trainable regularizer” to incorporate prior information for accurate and reliable state estimation. The resulting regularized NLS does not admit a neat closed-form solution. To handle this, a novel end-to-end DNN is constructed by unrolling a Gauss-Newton-type solver that alternates between the least-squares loss and the regularization term. Our DNN architecture offers further advantages, e.g., accommodating network topology via a prior based on graph neural networks. Numerical tests using real load data on the IEEE 118-bus benchmark system showcase the improved estimation performance of the proposed scheme compared with state-of-the-art alternatives. Interestingly, our results suggest that a prior based on a simple feedforward network implicitly exploits the topology information hidden in the data.
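A minimal sketch of one regularized Gauss-Newton iteration of the kind being unrolled here. This is not the paper's solver: the two-dimensional measurement model is a hypothetical stand-in for the power-flow equations, and a simple quadratic prior stands in for the learned DNN regularizer.

```python
import numpy as np

# toy nonlinear measurement model h(v) with Jacobian J(v); a hypothetical
# stand-in for the power-flow equations mapping states to meter readings
def h(v):
    return np.array([v[0] ** 2 + v[1], v[0] * v[1]])

def J(v):
    return np.array([[2 * v[0], 1.0],
                     [v[1],     v[0]]])

v_true = np.array([1.0, 0.5])
z = h(v_true)                          # noiseless measurements at the true state
v = np.array([0.8, 0.8])               # initialization
lam = 1e-3                             # regularization weight (illustrative)
v_prior = np.array([1.0, 0.5])         # quadratic prior standing in for the learned term

for _ in range(20):
    Jv = J(v)
    # regularized Gauss-Newton step:
    #   (J^T J + lam I) dv = J^T (z - h(v)) + lam (v_prior - v)
    g = Jv.T @ (z - h(v)) + lam * (v_prior - v)
    v = v + np.linalg.solve(Jv.T @ Jv + lam * np.eye(2), g)
```

Unrolling a fixed number of such steps, with the prior term replaced by a trainable network, yields the end-to-end architecture described in the abstract.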
  5. Network tomography aims to estimate source–destination traffic rates from link traffic measurements. This inverse problem was formulated by Vardi in 1996 for Poisson traffic over networks operating under deterministic as well as random routing regimes. In this article, we expand Vardi's second-order moment-matching rate estimation approach to higher-order cumulant matching, with the goal of increasing the column rank of the mapping and consequently improving rate estimation accuracy. We develop a systematic set of linear cumulant-matching equations and express them compactly in terms of the Khatri–Rao product. Both least-squares estimation and iterative minimum I-divergence estimation are considered. We develop an upper bound on the mean squared error (MSE) of least-squares rate estimation from empirical cumulants. We demonstrate that supplementing Vardi's approach with the third-order empirical cumulant reduces its minimum averaged normalized MSE in rate estimation by almost 20% when iterative minimum I-divergence estimation is used.
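A small numerical sketch of the second-order moment-matching step (not the article's higher-order extension): for independent Poisson origin-destination traffic, the link mean and covariance are both linear in the rates, and the covariance equations involve the column-wise Khatri-Rao product of the routing matrix with itself. The network, rates, and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)   # toy routing matrix: 2 links, 3 OD pairs
lam_true = np.array([3.0, 1.0, 2.0])     # true Poisson OD rates

# simulate link loads Y = A X for independent Poisson OD traffic X
X = rng.poisson(lam_true, size=(200_000, 3))
Y = X @ A.T

# moment-matching equations: E[Y] = A lam and vec(cov Y) = (A (.) A) lam,
# where (.) denotes the column-wise Khatri-Rao product
khatri_rao = np.einsum('ij,kj->ikj', A, A).reshape(-1, A.shape[1])
B = np.vstack([A, khatri_rao])
b = np.concatenate([Y.mean(axis=0), np.cov(Y.T, bias=True).reshape(-1)])

# stacking the second-order equations raises the column rank (here from 2 to 3),
# making the rates identifiable by least squares
lam_hat, *_ = np.linalg.lstsq(B, b, rcond=None)
```

Adding third- and higher-order cumulant equations appends further rows built from repeated Khatri-Rao products, which is the rank-raising mechanism the abstract describes.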