The plug-and-play priors (PnP) and regularization by denoising (RED) methods have become widely used for solving inverse problems by leveraging pre-trained deep denoisers as image priors. While the empirical imaging performance and the theoretical convergence properties of these algorithms have been widely investigated, their recovery properties have not previously been theoretically analyzed. We address this gap by showing how to establish theoretical recovery guarantees for PnP/RED under the assumption that the solutions of these methods lie near the fixed points of a deep neural network. We also present numerical results comparing the recovery performance of PnP/RED in compressive sensing against that of recent algorithms based on generative models. Our numerical results suggest that PnP with a pre-trained artifact-removal network significantly outperforms the existing state-of-the-art methods.
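For concreteness, here is a minimal sketch of a PnP iteration of the kind this abstract describes: a gradient step on the data-fidelity term followed by a denoiser in place of a proximal operator. The names (`pnp_ista`, `step`) are ours, and the identity "denoiser" is only a placeholder for the pre-trained artifact-removal network.

```python
import numpy as np

def pnp_ista(y, A, denoiser, step, num_iters=200):
    """PnP-ISTA sketch: gradient step on 0.5*||A x - y||^2, then a
    denoiser standing in for the proximal operator of the prior."""
    x = A.T @ y                        # rough initialization from the data
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # plug in the pre-trained denoiser
    return x

# Toy compressive-sensing usage with an identity placeholder denoiser.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[::10] = 1.0
y = A @ x_true
x_hat = pnp_ista(y, A, denoiser=lambda v: v, step=0.1)
```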
Global Exponential Stability of a Neural Network for Inverse Variational Inequalities
We investigate the convergence properties of a projected neural network for solving inverse variational inequalities. Under standard assumptions, we establish the exponential stability of the proposed neural network. We then consider a discrete version of the network, which yields a new projection method for solving inverse variational inequalities, and we establish its linear convergence. We illustrate the effectiveness of the proposed neural network and its explicit discretization through an application to the road pricing problem arising in transportation science. The results obtained in this paper provide a positive answer to a recent open question and improve several recent results in the literature.
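The sketch below illustrates an explicit discretization of a projection-type network of this kind, using the standard characterization that u solves the inverse variational inequality iff f(u) = P_Ω(f(u) − βu) for any β > 0. The names (`solve_ivi`, `beta`, `gamma`) and the toy road-pricing-style instance are our own assumptions; the paper's network and step-size conditions may differ.

```python
import numpy as np

def solve_ivi(f, project, u0, beta=1.0, gamma=0.1, num_iters=5000, tol=1e-10):
    """Explicit discretization of a projection-type network for the inverse
    variational inequality: find u with f(u) in Omega and
    <v - f(u), u> >= 0 for all v in Omega.  A point u solves the IVI iff
    f(u) = P_Omega(f(u) - beta*u), so we drive that residual to zero."""
    u = u0.copy()
    for _ in range(num_iters):
        residual = f(u) - project(f(u) - beta * u)
        if np.linalg.norm(residual) < tol:
            break
        u = u - gamma * residual
    return u

# Toy road-pricing-style instance: u are tolls, f(u) = M u + q the induced
# flows, and Omega a box of admissible flows.
M = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([-1.0, -0.5])
project = lambda z: np.clip(z, 0.0, 1.0)
u_star = solve_ivi(lambda u: M @ u + q, project, np.zeros(2))
```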
- Award ID(s): 2047793
- PAR ID: 10318501
- Date Published:
- Journal Name: Journal of Optimization Theory and Applications
- Volume: 190
- ISSN: 1573-2878
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Neural networks have become ubiquitous tools for solving signal and image processing problems, and they often outperform standard approaches. Nevertheless, training the layers of a neural network is a challenging task in many applications. The prevalent training procedure consists of minimizing highly non-convex objectives over data sets of very large dimension, and in this context current methodologies are not guaranteed to produce global solutions. We present an alternative approach that forgoes the optimization framework and adopts a variational inequality formalism. The associated algorithm guarantees convergence of the iterates to a true solution of the variational inequality and possesses an efficient block-iterative structure. A numerical application is presented.
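As a minimal illustration of the variational inequality formalism (not of the paper's block-iterative algorithm, which is more elaborate), the sketch below runs the basic projected iteration x ← P_C(x − γF(x)) for a cocoercive operator; all names and the toy instance are our assumptions.

```python
import numpy as np

def projected_vi_step(F, project, x0, gamma=0.1, num_iters=2000):
    """Basic projected iteration for the variational inequality
    find x* in C with <F(x*), x - x*> >= 0 for all x in C,
    via x <- P_C(x - gamma * F(x))."""
    x = x0.copy()
    for _ in range(num_iters):
        x = project(x - gamma * F(x))
    return x

# Toy cocoercive operator (gradient of a convex quadratic) and box set.
Q = np.array([[2.0, 0.3], [0.3, 1.5]])
b = np.array([1.0, -2.0])
x_star = projected_vi_step(lambda x: Q @ x - b,
                           lambda z: np.clip(z, -1.0, 1.0),
                           np.zeros(2))
```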
- In this paper, we generalize the classical extragradient algorithm for solving variational inequality problems by utilizing nonzero normal vectors of the feasible set. In particular, conceptual algorithms are proposed with two different line searches. We then establish convergence results for these algorithms under mild assumptions. Our study suggests that appropriately chosen nonzero normal vectors may significantly improve convergence.
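For reference, the classical (Korpelevich) extragradient method that this work generalizes looks as follows; the normal-vector modification and the two line searches of the paper are not reproduced here, and the toy instance is our own.

```python
import numpy as np

def extragradient(F, project, x0, tau=0.1, num_iters=1000):
    """Classical extragradient method for the variational inequality
    find x* in C with <F(x*), x - x*> >= 0 for all x in C."""
    x = x0.copy()
    for _ in range(num_iters):
        y = project(x - tau * F(x))   # predictor (extrapolation) step
        x = project(x - tau * F(y))   # corrector step at the predicted point
    return x

# Toy monotone but non-cocoercive operator (a rotation), where the plain
# projected iteration can spiral outward but extragradient converges.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
project = lambda z: np.clip(z, -2.0, 2.0)
x_star = extragradient(F, project, np.array([1.0, 1.0]))
```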
- Solving a bilevel optimization problem is at the core of several machine learning problems such as hyperparameter tuning, data denoising, meta- and few-shot learning, and training-data poisoning. Unlike in simultaneous or multi-objective optimization, the steepest descent direction for minimizing the upper-level cost in a bilevel problem requires the inverse of the Hessian of the lower-level cost. In this work, we propose a novel algorithm for solving bilevel optimization problems based on the classical penalty function approach. Our method avoids computing the Hessian inverse and can easily handle constrained bilevel problems. We prove the convergence of the method under mild conditions and show that the exact hypergradient is obtained asymptotically. The method's simplicity and small space and time complexities enable us to effectively solve large-scale bilevel problems involving deep neural networks. We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting, showing that our approach outperforms, or is comparable to, previously proposed methods based on automatic differentiation and approximate inversion in terms of accuracy, run time, and convergence speed.
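A minimal sketch of the penalty idea on a toy bilevel problem: the lower-level stationarity condition is enforced through a quadratic penalty, so gradient descent on the penalized objective never forms a Hessian inverse. The toy objectives, names, and parameter values below are our assumptions, not the paper's method.

```python
def penalty_bilevel(sigma=100.0, lr=1e-3, num_iters=20000, a=2.0):
    """Penalty-function sketch for a toy bilevel problem:
        upper:  min_x  F(x, y) = 0.5*(y - 1)**2 + 0.5*x**2
        lower:  y in argmin_y g(x, y) = 0.5*(y - a*x)**2
    The lower-level stationarity residual grad_y g = y - a*x is
    penalized quadratically, so no Hessian inverse is ever formed."""
    x, y = 0.0, 0.0
    for _ in range(num_iters):
        r = y - a * x                  # lower-level stationarity residual
        gx = x - sigma * a * r         # d/dx of F + (sigma/2)*r**2
        gy = (y - 1.0) + sigma * r     # d/dy of F + (sigma/2)*r**2
        x, y = x - lr * gx, y - lr * gy
    return x, y

# Converges near the true bilevel solution (x, y) = (0.4, 0.8) as sigma grows.
print(penalty_bilevel())
```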
- In this paper we establish the convergence of a numerical scheme, based on the finite element method, for a time-independent problem modelling the deformation of a linearly elastic elliptic membrane shell subject to remaining confined in a half-space. Instead of approximating the original variational inequalities governing this obstacle problem, we approximate a penalized version of the problem under consideration. A suitable coupling between the penalty parameter and the mesh size then leads us to establish the convergence of the solution of the discrete penalized problem to the solution of the original variational inequalities. We also establish the convergence of the Brezis-Sibony scheme for the problem under consideration; thanks to this iterative method, we can approximate the solution of the discrete penalized problem without resorting to nonlinear optimization tools. Finally, we present numerical simulations validating our new theoretical results.
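As a drastically simplified analogue of this penalization strategy (a 1D obstacle problem on finite differences rather than the paper's membrane-shell finite elements, and without the Brezis-Sibony scheme), the sketch below replaces the constraint u ≥ ψ by a penalty term and solves the resulting piecewise-linear system with an active-set iteration; all names and values are ours.

```python
import numpy as np

def penalized_obstacle_1d(n=99, eps=1e-6, f_val=-50.0, psi_val=-0.2,
                          num_iters=50):
    """Penalty approximation of the 1D obstacle problem
    -u'' = f on (0,1), u(0) = u(1) = 0, u >= psi, replaced by
    -u'' + (1/eps) * min(u - psi, 0) = f."""
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n) / h**2
    off = -np.ones(n - 1) / h**2
    K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # -u'' stencil
    f = f_val * np.ones(n)
    psi = psi_val * np.ones(n)
    u = np.zeros(n)
    for _ in range(num_iters):
        active = (u < psi)                       # where the penalty acts
        A = K + np.diag(active / eps)            # linearized penalized system
        u = np.linalg.solve(A, f + (active * psi) / eps)
        if np.array_equal(u < psi, active):      # active set has stabilized
            break
    return u
```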