



Title: Constrained Regularization by Denoising With Automatic Parameter Selection
Regularization by Denoising (RED) is a well-known method for solving image restoration problems using learned image denoisers as priors. Since the regularization parameter in traditional RED has no physical interpretation, RED offers no natural route to automatic parameter selection. This letter addresses the issue by introducing the Constrained Regularization by Denoising (CRED) method, which reformulates RED as a constrained optimization problem in which the regularization parameter corresponds directly to the amount of noise in the measurements. The constrained problem is solved with an efficient method based on the alternating direction method of multipliers (ADMM). Our experiments show that CRED outperforms competing methods in stability and robustness while achieving competitive image quality.
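To make the constrained formulation concrete, here is a minimal sketch of an ADMM loop for it. This is an illustration under assumptions, not the authors' implementation: the interfaces `A`/`At` (forward operator and its adjoint), `denoiser`, and the step-size defaults are hypothetical placeholders, and the inexact gradient-based x-update is only one possible choice. The key step is the z-update, a projection onto the ball ||z - y||_2 <= eps, which is where the measurement noise level enters directly.

```python
import numpy as np

def cred_admm(y, A, At, denoiser, eps, rho=1.0, tau=0.1, inner=5, n_iter=100):
    """Sketch of  min_x RED(x)  s.t.  ||A x - y||_2 <= eps  via ADMM."""
    x = At(y)                          # crude initialization
    z = A(x)                           # auxiliary variable, z ~ A x
    u = np.zeros_like(z)               # scaled dual variable

    for _ in range(n_iter):
        # x-update (inexact): a few gradient steps on the RED score
        # x - D(x) plus the quadratic ADMM coupling term
        for _ in range(inner):
            g = (x - denoiser(x)) + rho * At(A(x) - z + u)
            x = x - tau * g

        # z-update: project A x + u onto the ball ||z - y||_2 <= eps;
        # the noise level eps plays the role of the tuning parameter
        v = A(x) + u
        r = v - y
        nrm = np.linalg.norm(r)
        z = y + r * min(1.0, eps / max(nrm, 1e-12))

        # dual ascent
        u = u + A(x) - z
    return x
```

For a pure denoising problem the forward operator is the identity, i.e. `A = At = lambda v: v`, and the projection step alone enforces the noise-level constraint.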
Award ID(s):
2043134
NSF-PAR ID:
10504937
Publisher / Repository:
IEEE
Journal Name:
IEEE Signal Processing Letters
Volume:
31
ISSN:
1070-9908
Page Range / eLocation ID:
556 to 560
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Regularization by denoising (RED) is a widely used framework for solving inverse problems by leveraging image denoisers as image priors. Recent work has reported state-of-the-art performance of RED in a number of imaging applications using pre-trained deep neural nets as denoisers. Despite this progress, the stable convergence of RED algorithms remains an open problem: the existing RED theory guarantees stability only for convex data-fidelity terms and nonexpansive denoisers. This work addresses the issue by developing a new monotone RED (MRED) algorithm whose convergence does not require nonexpansiveness of the deep denoising prior. Simulations on image deblurring and compressive-sensing recovery from random matrices show the stability of MRED even when traditional RED diverges.
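As a rough illustration of the monotonicity idea, the sketch below runs a RED gradient iteration with a backtracking safeguard that accepts only steps decreasing a surrogate objective. It assumes the RED prior admits the explicit form (lam/2) x^T (x - D(x)), which holds only for denoisers satisfying the usual RED conditions; the actual MRED algorithm and its merit function may differ.

```python
import numpy as np

def red_surrogate(x, y, A, denoiser, lam):
    # Surrogate RED objective: data fidelity plus (lam/2) x^T (x - D(x)).
    # This closed form is exact only under the classical RED conditions.
    return 0.5 * np.sum((A(x) - y) ** 2) + 0.5 * lam * float(np.vdot(x, x - denoiser(x)))

def mred_like(y, A, At, denoiser, lam=0.1, step=1.0, n_iter=100):
    x = At(y)
    f = red_surrogate(x, y, A, denoiser, lam)
    for _ in range(n_iter):
        g = At(A(x) - y) + lam * (x - denoiser(x))   # RED gradient field
        t = step
        while True:                                  # backtracking: halve the
            x_new = x - t * g                        # step until the surrogate
            f_new = red_surrogate(x_new, y, A, denoiser, lam)  # decreases
            if f_new <= f or t < 1e-8:
                break
            t *= 0.5
        x, f = x_new, f_new
    return x
```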
2. We consider the problem of reconstructing an image from its noisy measurements using a prior specified only with an image denoiser. Recent work on plug-and-play priors (PnP) and regularization by denoising (RED) has shown state-of-the-art performance of image reconstruction algorithms under such priors in a range of imaging problems. In this work, we develop a new block coordinate RED algorithm that decomposes a large-scale estimation problem into a sequence of updates over a small subset of the unknown variables. We theoretically analyze the convergence of the algorithm and discuss its relationship to traditional proximal optimization. Our analysis complements and extends recent theoretical results for RED-based estimation methods. We numerically validate our method using several denoising priors, including those based on deep neural nets.
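A minimal sketch of the block-coordinate idea follows, again with a hypothetical interface (`A`, `At`, `denoiser`): each iteration applies the RED gradient step to a single block of variables. For clarity the sketch evaluates the full gradient field and masks it; an efficient implementation, which is the point of the paper, would compute only the entries of the active block.

```python
import numpy as np

def bc_red(y, A, At, denoiser, lam=0.1, tau=0.5, n_blocks=8, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(At(y), dtype=float)                  # work on a copy
    blocks = np.array_split(rng.permutation(x.size), n_blocks)
    for k in range(n_iter):
        b = blocks[k % n_blocks]                      # cycle over the blocks
        g = At(A(x) - y) + lam * (x - denoiser(x))    # RED gradient field
        x.flat[b] = x.flat[b] - tau * g.flat[b]       # update block b only
    return x
```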
3. Recovering an image from a noisy or otherwise compromised observation is an ill-posed inverse problem. Solving it requires incorporating prior information about the smoothness, or the structure, of the solution through regularization. Here, we consider linear blur operators with an efficiently computed singular value decomposition, and obtain regularization by employing a truncated singular value expansion for image recovery. We focus on images for which the blur operator is separable and can be represented by a Kronecker product, so that the associated singular value decomposition is expressible in terms of the singular value decompositions of the separable components; the truncation index k can then be identified without forming the full Kronecker product of the two terms. This report investigates learning an optimal k using two methods. The first assumes knowledge of the true images, yielding a supervised learning algorithm based on the average relative error; the second uses generalized cross validation and does not require the true images. The approach is implemented and demonstrated to be successful for Gaussian, Poisson, and salt-and-pepper noise across noise levels with signal-to-noise ratios as low as 10. This research offers insight into the supervised and unsupervised estimators for the truncation index, and demonstrates that the unsupervised algorithm is not only robust and computationally efficient but also comparable to the supervised method.
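The Kronecker structure makes this concrete: if the blur acts as Y = C X B^T (equivalently vec(Y) = (B ⊗ C) vec(X) under column-major vec), the singular values of the full operator are the pairwise products of the factors' singular values, so the truncated expansion never requires forming the Kronecker product. The sketch below assumes square B and C and uses a standard GCV rule for TSVD to pick the truncation index k; the paper's supervised and GCV estimators may differ in detail.

```python
import numpy as np

def tsvd_kron(Y, B, C, k=None):
    """Recover X from  Y = C @ X @ B.T + noise  (vec form: kron(B, C))."""
    Ub, sb, Vbt = np.linalg.svd(B)
    Uc, sc, Vct = np.linalg.svd(C)
    S = Uc.T @ Y @ Ub                        # data in the SVD basis
    sigma = np.outer(sc, sb)                 # singular values of kron(B, C)
    order = np.argsort(sigma, axis=None)[::-1]
    sr, sigr = S.ravel()[order], sigma.ravel()[order]

    if k is None:                            # unsupervised choice via GCV:
        n = sigma.size                       # discarded energy / (n - k)^2
        c = np.cumsum(sr ** 2)
        gcv = (c[-1] - c[:-1]) / (n - np.arange(1, n)) ** 2
        k = int(np.argmin(gcv)) + 1

    W = np.zeros(sigma.size)                 # keep the k largest sigma terms
    W[order[:k]] = sr[:k] / sigr[:k]         # (assumes those sigma are nonzero)
    return Vct.T @ W.reshape(sigma.shape) @ Vbt, k
```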
4. Discrete ill-posed inverse problems arise in various areas of science and engineering. The presence of noise in the data often makes it difficult to compute an accurate approximate solution. To reduce the sensitivity of the computed solution to the noise, one replaces the original problem by a nearby well-posed minimization problem whose solution is less sensitive to the noise in the data; this replacement is known as regularization. We consider the situation when the minimization problem consists of a fidelity term defined in terms of a p-norm and a regularization term defined in terms of a q-norm, allowing 0 < p, q ≤ 2. The relative importance of the two terms is determined by a regularization parameter. This paper develops an automatic strategy for determining the regularization parameter for these minimization problems, based on a new application of generalized cross validation. Computed examples illustrate the performance of the proposed method.
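For orientation, the sketch below evaluates the classical GCV function in the special case p = q = 2 (Tikhonov regularization), where the SVD yields closed-form filter factors. The paper's contribution, extending GCV-type selection to general 0 < p, q ≤ 2, is not attempted here.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """GCV choice of lam for  min ||A x - b||_2^2 + lam^2 ||x||_2^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    n = len(b)
    outside = b @ b - beta @ beta            # residual outside range(U)
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)           # Tikhonov filter factors
        r2 = np.sum(((1 - f) * beta) ** 2) + outside
        g = n * r2 / (n - np.sum(f)) ** 2    # GCV(lam)
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam
```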
5. We study a one-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms, where the latter penalize the differences of adjacent signal values. This problem generalizes the total variation regularization problem. We provide a unified approach, based on the Karush–Kuhn–Tucker optimality conditions, for solving the problem with general convex fidelity and regularization functions. The approach leads to a fast algorithm for general convex fidelity and regularization functions, and a faster algorithm when, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity among existing algorithms for the classes of objective functions studied here. In practice, our C++ implementation of the method is also considerably faster than popular C++ nonlinear optimization solvers for this problem.
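As a point of reference for this problem class, the sketch below solves the classical quadratic-fidelity special case (1-D total-variation denoising) with a plain ADMM and a dense solve. It is deliberately naive: the paper's KKT-based algorithms handle general convex fidelity and regularization terms with far better complexity, and a serious implementation would at least exploit the tridiagonal structure of the x-update system.

```python
import numpy as np

def tv1d_admm(y, lam=1.0, rho=1.0, n_iter=200):
    """min_x 0.5 * ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|  via ADMM."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    M = np.eye(n) + rho * (D.T @ D)          # x-update system (tridiagonal)
    z = np.zeros(n - 1)                      # auxiliary variable, z ~ D x
    u = np.zeros(n - 1)                      # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))
        v = D @ x + u                        # z-update: soft-thresholding
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + D @ x - z                    # dual ascent
    return x
```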