NSFPAR ID: 10183693
Journal Name: NeurIPS 2019 Workshop on Solving Inverse Problems with Deep Networks
Sponsoring Org: National Science Foundation
More Like this

We present a data-driven method for computing approximate forward reachable sets using separating kernels in a reproducing kernel Hilbert space. We frame the problem as a support estimation problem and learn a classifier of the support as an element of a reproducing kernel Hilbert space using a data-driven approach. Kernel methods provide a computationally efficient representation for the classifier, which is the solution to a regularized least-squares problem. The solution converges almost surely as the sample size increases and admits known finite-sample bounds. This approach is applicable to stochastic systems with arbitrary disturbances and to neural network verification problems, either by treating the network itself as a dynamical system or by considering neural network controllers as part of a closed-loop system. We demonstrate our technique on several examples, including a spacecraft rendezvous and docking problem and two nonlinear system benchmarks with neural network controllers.
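The regularized least-squares construction behind such a support classifier can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the Gaussian kernel, the bandwidth `sigma`, the regularization weight `lam`, the unit-disk "reachable set," and the threshold `tau` are all assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_support_classifier(samples, lam=1e-3, sigma=0.5):
    # Regularized least-squares fit of the constant-1 function on the samples;
    # the minimizer is a kernel expansion over the sample points (an RKHS element).
    n = len(samples)
    K = gaussian_kernel(samples, samples, sigma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), np.ones(n))
    return lambda x: gaussian_kernel(np.atleast_2d(x), samples, sigma) @ alpha

rng = np.random.default_rng(0)
# Hypothetical 2-D example: samples of a "reachable set" that is the unit disk.
theta = rng.uniform(0, 2 * np.pi, 400)
r = np.sqrt(rng.uniform(0, 1, 400))
S = np.c_[r * np.cos(theta), r * np.sin(theta)]

f = fit_support_classifier(S)
tau = 0.5                       # classification threshold (tuning assumption)
inside = f([0.1, 0.1])[0] >= tau   # deep inside the sampled support
outside = f([2.0, 2.0])[0] >= tau  # far from every sample
```

Points inside the sampled support score near 1 and distant points near 0, so thresholding the kernel expansion yields the approximate reachable set.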

Abstract Many methods have been developed for estimating the parameters of biexponential decay signals, which arise throughout magnetic resonance relaxometry (MRR) and the physical sciences. This is an intrinsically ill‐posed problem so that estimates can depend strongly on noise and underlying parameter values. Regularization has proven to be a remarkably efficient procedure for providing more reliable solutions to ill‐posed problems, while, more recently, neural networks have been used for parameter estimation. We re‐address the problem of parameter estimation in biexponential models by introducing a novel form of neural network regularization which we call input layer regularization (ILR). Here, inputs to the neural network are composed of a biexponential decay signal augmented by signals constructed from parameters obtained from a regularized nonlinear least‐squares estimate of the two decay time constants. We find that ILR results in a reduction in the error of time constant estimates on the order of 15%–50% or more, depending on the metric used and signal‐to‐noise level, with greater improvement seen for the time constant of the more rapidly decaying component. ILR is compatible with existing regularization techniques and should be applicable to a wide range of parameter estimation problems.
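The two-step idea, a regularized NLS fit of the time constants followed by augmenting the network input with signals synthesized from those estimates, can be sketched as follows. All numerical values (decay parameters, noise level, Tikhonov weight, initial guess) are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import least_squares

def biexp(t, c1, T1, c2, T2):
    # Two-component exponential decay: c1*exp(-t/T1) + c2*exp(-t/T2).
    return c1 * np.exp(-t / T1) + c2 * np.exp(-t / T2)

rng = np.random.default_rng(1)
t = np.linspace(0, 3, 64)
true = (0.6, 0.25, 0.4, 1.5)        # (c1, T1, c2, T2): fast and slow components
y = biexp(t, *true) + 0.01 * rng.standard_normal(t.size)

# Step 1: regularized nonlinear least-squares estimate of the time constants
# (simple Tikhonov penalty appended to the residual vector; weight is illustrative).
lam = 1e-3
def resid(p):
    return np.concatenate([biexp(t, *p) - y, np.sqrt(lam) * p])

p_hat = least_squares(resid, x0=[0.5, 0.1, 0.5, 1.0],
                      bounds=([0, 0.01, 0, 0.01], [2, 5, 2, 5])).x
T1_hat, T2_hat = p_hat[1], p_hat[3]

# Step 2: ILR-style input: the noisy decay signal concatenated with
# monoexponential signals synthesized from the estimated time constants.
aug_input = np.concatenate([y, np.exp(-t / T1_hat), np.exp(-t / T2_hat)])
```

The augmented vector `aug_input` (three times the length of the raw signal) is what would be fed to the network's input layer in place of the raw decay alone.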

Abstract A parameter identification inverse problem in the form of nonlinear least squares is considered. In the absence of stability, the frozen iteratively regularized Gauss–Newton (FIRGN) algorithm is proposed, and its convergence is justified under what we call a generalized normal solvability condition. The penalty term is constructed from a seminorm generated by a linear operator, yielding greater flexibility in the use of qualitative and quantitative a priori information available for each particular model. Unlike previously known theoretical results on the FIRGN method, our convergence analysis does not rely on any nonlinearity conditions and is applicable to a large class of nonlinear operators. In our study, we leverage the nature of ill-posedness in order to establish convergence in the noise-free case. For noise-contaminated data, we show that, at least theoretically, the process does not require a stopping rule and is no longer semi-convergent. Numerical simulations for a parameter estimation problem in epidemiology illustrate the efficiency of the algorithm.
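A minimal sketch of the frozen iteration on a toy, mildly nonlinear (and, for checkability, well-posed) problem: the Jacobian is evaluated once at the initial guess and never updated, the regularization parameters alpha_k decay geometrically, and the penalty uses a seminorm ||L(p - p0)|| with L = I for simplicity. The operator F and all constants are assumptions for illustration only.

```python
import numpy as np

# Toy mildly nonlinear operator F(p) = A p + 0.05 p^2 (elementwise square).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
def F(p):
    return A @ p + 0.05 * p ** 2
def J(p):
    # Jacobian of F at p.
    return A + 0.1 * np.diag(p)

p_true = np.array([1.0, -1.0])
y = F(p_true)                       # noise-free data

# FIRGN-style iteration: frozen Jacobian J0 = F'(p0), geometrically decaying
# regularization alpha_k, and penalty seminorm ||L (p - p0)|| with L = I here.
p0 = np.zeros(2)
L = np.eye(2)
J0 = J(p0)                          # computed once, reused every step
p = p0.copy()
alpha = 1.0
for k in range(80):
    rhs = J0.T @ (y - F(p)) - alpha * (L.T @ L) @ (p - p0)
    p = p + np.linalg.solve(J0.T @ J0 + alpha * (L.T @ L), rhs)
    alpha *= 0.9                    # alpha_k -> 0 geometrically
```

Because the nonlinearity is small relative to A, the frozen Jacobian stays close enough to the true one for the iterates to approach p_true as alpha_k decays, with no stopping rule needed on this noise-free example.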

Power system state estimation (PSSE) aims at finding the voltage magnitudes and angles at all generation and load buses, using meter readings and other available information. PSSE is often formulated as a nonconvex nonlinear least-squares (NLS) problem, which is traditionally solved by the Gauss–Newton method. However, Gauss–Newton iterations for minimizing nonconvex cost functions are sensitive to initialization and can diverge. In this context, we advocate a deep neural network (DNN) based "trainable regularizer" to incorporate prior information for accurate and reliable state estimation. The resulting regularized NLS does not admit a neat closed-form solution. To handle this, a novel end-to-end DNN is constructed by unrolling a Gauss–Newton-type solver that alternates between the least-squares loss and the regularization term. Our DNN architecture offers a suite of further advantages, e.g., accommodating network topology via a graph-neural-network-based prior. Numerical tests using real load data on the IEEE 118-bus benchmark system showcase the improved estimation performance of the proposed scheme compared with state-of-the-art alternatives. Interestingly, our results suggest that a simple feedforward-network-based prior implicitly exploits the topology information hidden in the data.
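One unrolled layer of such a Gauss–Newton-type alternation can be sketched on a toy quadratic-measurement model reminiscent of power flow, z_i = (a_i^T x)^2. The `prior` below is a fixed shrinkage map standing in for a trained DNN regularizer, and the warm start, weight `mu`, and problem dimensions are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 12
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
z = (A @ x_true) ** 2              # quadratic measurements, power-flow-like

def prior(x):
    # Stand-in for the trained DNN regularizer: a simple shrinkage toward the
    # origin (an assumption purely for illustration).
    return 0.9 * x

mu = 1.0                            # regularization weight (illustrative)
x = x_true + 0.1 * rng.standard_normal(n)   # warm start near the truth
for k in range(30):
    r = (A @ x) ** 2 - z                    # measurement residual
    Jac = 2 * (A @ x)[:, None] * A          # Jacobian of the residual
    # One unrolled layer: Gauss-Newton step on the regularized NLS
    # 0.5*||r||^2 + 0.5*mu*||x - prior(x_k)||^2, with the prior frozen per layer.
    g = Jac.T @ r + mu * (x - prior(x))
    H = Jac.T @ Jac + mu * np.eye(n)
    x = x - np.linalg.solve(H, g)
```

In the unrolled-DNN view, each loop iteration is one network layer, and `prior` would be replaced by a learnable module trained end to end; the mu*I term also damps the Gauss–Newton step, which is what tempers its sensitivity to initialization.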

Network tomography aims at estimating source–destination traffic rates from link traffic measurements. This inverse problem was formulated by Vardi in 1996 for Poisson traffic over networks operating under deterministic as well as random routing regimes. In this article, we extend Vardi's second-order moment matching rate estimation approach to higher-order cumulant matching, with the goal of increasing the column rank of the mapping and consequently improving the rate estimation accuracy. We develop a systematic set of linear cumulant matching equations and express them compactly in terms of the Khatri–Rao product. Both least-squares estimation and iterative minimum I-divergence estimation are considered. We derive an upper bound on the mean squared error (MSE) of least-squares rate estimation from empirical cumulants. We demonstrate that supplementing Vardi's approach with the third-order empirical cumulant reduces its minimum averaged normalized MSE in rate estimation by almost 20% when iterative minimum I-divergence estimation is used.
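The stacking of first- and second-order moment equations via the column-wise Khatri–Rao product can be sketched on a toy two-link, three-pair network. For independent Poisson origin–destination flows, E[Y] = A λ and vec(Cov(Y)) = (A ⊙ A) λ, where ⊙ is the column-wise Khatri–Rao product; the routing matrix, rates, and sample size below are illustrative assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product: the k-th column is kron(A[:, k], B[:, k]).
    return (A[:, None, :] * B[None, :, :]).reshape(A.shape[0] * B.shape[0], -1)

# Toy network: 2 links carrying 3 origin-destination flows (illustrative routing).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
lam = np.array([3.0, 5.0, 2.0])     # true Poisson OD rates (assumed)

rng = np.random.default_rng(4)
X = rng.poisson(lam[:, None], size=(3, 20000))  # OD traffic samples
Y = A @ X                                       # observed link counts

# First-order equations: E[Y] = A lam.  Second-order (Poisson) equations:
# vec(Cov(Y)) = (A ⊙ A) lam.  Stack both and solve by least squares.
m1 = Y.mean(axis=1)
C = np.cov(Y)
B = np.vstack([A, khatri_rao(A, A)])
b = np.concatenate([m1, C.ravel()])
lam_hat, *_ = np.linalg.lstsq(B, b, rcond=None)
```

Here rank(A) = 2 < 3, so the first-order equations alone cannot identify the three rates, but the stacked system [A; A ⊙ A] has full column rank 3: exactly the rank increase that motivates adding higher-order cumulant equations.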