
Title: Constrained Regularization by Denoising With Automatic Parameter Selection
Regularization by Denoising (RED) is a well-known method for solving image restoration problems by using learned image denoisers as priors. Since the regularization parameter in traditional RED has no physical interpretation, RED does not provide an approach for automatic parameter selection. This letter addresses this issue by introducing the Constrained Regularization by Denoising (CRED) method, which reformulates RED as a constrained optimization problem in which the regularization parameter corresponds directly to the amount of noise in the measurements. The constrained problem is solved with an efficient method based on the alternating direction method of multipliers (ADMM). Our experiments show that CRED outperforms competing methods in terms of stability and robustness, while also achieving competitive performance in terms of image quality.
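The constrained formulation described above can be written as min_x rho_RED(x) subject to ||Ax - y||_2 <= eps, where eps is set from the measurement noise level. The paper's algorithmic details are not reproduced here; the following is a minimal NumPy sketch of one plausible ADMM splitting for this problem, specialized to inpainting (a binary sampling mask) so the projection onto the measurement ball has a closed form. The Gaussian-filter denoiser, function names, and all parameter values are illustrative placeholders, not the authors' choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def D(x):
    """Placeholder denoiser; the paper uses learned denoisers."""
    return gaussian_filter(x, sigma=1.0)

def project_ball(v, y, mask, eps):
    """Project v onto {z : ||mask*z - y|| <= eps} (mask is 0/1, y zero off-mask)."""
    r = mask * v - y                          # residual on observed pixels only
    nr = np.linalg.norm(r)
    if nr <= eps:
        return v
    z = v.copy()
    z[mask > 0] = (y + (eps / nr) * r)[mask > 0]  # shrink observed part onto the ball
    return z

def cred_admm(y, mask, eps, tau=0.5, mu=1.0, outer=50, inner=5, gamma=0.2):
    """Sketch of ADMM for: min_x RED(x)  s.t.  ||mask*x - y|| <= eps."""
    x = y.copy(); z = y.copy(); u = np.zeros_like(y)
    for _ in range(outer):
        for _ in range(inner):                # approximate x-update via RED gradient steps
            g = tau * (x - D(x)) + mu * (x - (z - u))
            x = x - gamma * g
        z = project_ball(x + u, y, mask, eps)  # z-update: exact projection
        u = u + x - z                          # dual update
    return x
```

For a general linear A, the z-update no longer has a closed form and would itself require an inner solver, which is one reason an efficient ADMM design matters here.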
Award ID(s):
2043134
PAR ID:
10504937
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Signal Processing Letters
Volume:
31
ISSN:
1070-9908
Page Range / eLocation ID:
556 to 560
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Regularization by denoising (RED) is a widely used framework for solving inverse problems by leveraging image denoisers as image priors. Recent work has reported state-of-the-art performance of RED in a number of imaging applications using pre-trained deep neural nets as denoisers. Despite this progress, the stable convergence of RED algorithms remains an open problem. The existing RED theory only guarantees stability for convex data-fidelity terms and nonexpansive denoisers. This work addresses this issue by developing a new monotone RED (MRED) algorithm, whose convergence does not require nonexpansiveness of the deep denoising prior. Simulations on image deblurring and compressive sensing recovery from random matrices show the stability of MRED even when traditional RED diverges.
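The abstract does not spell out MRED's update rule, so the sketch below only illustrates the general mechanism one might use to enforce monotonicity: take the usual RED direction and backtrack on the step size until a surrogate RED objective decreases. The compressive-sensing data fit, the moving-average denoiser, and all constants are hypothetical stand-ins.

```python
import numpy as np

def D(x):
    """Placeholder denoiser: 3-tap moving average (stand-in for a deep prior)."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def red_objective(x, A, y, tau):
    """Surrogate RED objective: 0.5||Ax - y||^2 + (tau/2) x^T (x - D(x))."""
    return 0.5 * np.sum((A @ x - y) ** 2) + 0.5 * tau * x @ (x - D(x))

def monotone_red(A, y, tau=0.1, iters=100, alpha0=1.0, c=1e-4):
    """Gradient-type RED iteration with backtracking so the objective never increases."""
    x = A.T @ y
    for _ in range(iters):
        g = A.T @ (A @ x - y) + tau * (x - D(x))   # RED direction
        alpha, E0 = alpha0, red_objective(x, A, y, tau)
        while red_objective(x - alpha * g, A, y, tau) > E0 - c * alpha * (g @ g):
            alpha *= 0.5                            # backtrack until sufficient decrease
            if alpha < 1e-8:
                return x                            # no further progress possible
        x = x - alpha * g
    return x
```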
  2. We consider the problem of reconstructing an image from its noisy measurements using a prior specified only with an image denoiser. Recent work on plug-and-play priors (PnP) and regularization by denoising (RED) has shown the state-of-the-art performance of image reconstruction algorithms under such priors in a range of imaging problems. In this work, we develop a new block coordinate RED algorithm that decomposes a large-scale estimation problem into a sequence of updates over a small subset of the unknown variables. We theoretically analyze the convergence of the algorithm and discuss its relationship to traditional proximal optimization. Our analysis complements and extends recent theoretical results for RED-based estimation methods. We numerically validate our method using several denoising priors, including those based on deep neural nets.
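As a rough illustration of the block coordinate idea (not the authors' exact scheme), each iteration below updates only a random subset S of the unknowns using the corresponding entries of the RED gradient field. For clarity the full field is formed each iteration; an efficient implementation would compute only the components the block needs.

```python
import numpy as np

def D(x):
    """Placeholder denoiser (stand-in for a deep denoising prior)."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def block_coordinate_red(A, y, tau=0.1, gamma=0.5, iters=500, block=32, seed=0):
    """Each iteration updates only a random block S of the variables:
       x_S <- x_S - gamma * [A^T(Ax - y) + tau (x - D(x))]_S
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(n, size=block, replace=False)  # random block of indices
        g = A.T @ (A @ x - y) + tau * (x - D(x))      # full RED gradient field
        x[S] -= gamma * g[S]                          # update only the block
    return x
```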
  3. Recently, Deep Image Prior (DIP) has emerged as an effective unsupervised one-shot learner, delivering competitive results across various image recovery problems. This method requires only the noisy measurements and a forward operator, relying solely on deep networks initialized with random noise to learn and restore the structure of the data. However, DIP is notorious for its vulnerability to overfitting due to the overparameterization of the network. Building upon insights into the impact of the DIP input and drawing inspiration from the gradual denoising process in cutting-edge diffusion models, we introduce Autoencoding Sequential DIP (aSeqDIP) for image reconstruction. This method progressively denoises and reconstructs the image through a sequential optimization of network weights, achieved using an input-adaptive DIP objective combined with an autoencoding regularization term. Compared to diffusion models, our method does not require training data, and it outperforms other DIP-based methods in mitigating noise overfitting while maintaining a similar number of parameter updates as vanilla DIP. Through extensive experiments, we validate the effectiveness of our method in various image reconstruction tasks, such as MRI and CT reconstruction, as well as in image restoration tasks like image denoising, inpainting, and non-linear deblurring.
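The abstract describes an input-adaptive DIP objective with an autoencoding penalty, optimized sequentially. The toy PyTorch sketch below reflects one plausible reading for the denoising case (forward operator A = identity): each stage feeds the current estimate back in as the network input, penalizes both the data misfit and the distance between the network's output and its input, then advances the estimate. The architecture, loss weight, and stage counts are all hypothetical.

```python
import torch
import torch.nn as nn

def make_net():
    """Tiny stand-in network; the paper uses a larger DIP architecture."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

def aseqdip_sketch(y, stages=10, inner=50, lam=1.0, lr=1e-3):
    """Sequential, input-adaptive DIP for denoising (y has shape (1, 1, H, W))."""
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    x = y.clone()                          # start from the noisy measurement
    for _ in range(stages):
        for _ in range(inner):
            opt.zero_grad()
            out = net(x)
            # data fit + autoencoding penalty that ties the output to the input
            loss = ((out - y) ** 2).sum() + lam * ((out - x) ** 2).sum()
            loss.backward()
            opt.step()
        x = net(x).detach()                # next stage's input = current output
    return x
```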
  4. Recovering an image from a noisy or otherwise compromised image is an ill-posed inverse problem. To solve it, prior information about the smoothness or structure of the solution must be incorporated through regularization. Here, we consider linear blur operators whose singular value decomposition can be computed efficiently, and obtain regularization by employing a truncated singular value expansion for image recovery. We focus on images for which the blur operator is separable and can be represented by a Kronecker product, so that the associated singular value decomposition is expressible in terms of the singular value decompositions of the separable components. The truncation index k can then be identified without forming the full Kronecker product of the two terms. This report investigates two methods for learning an optimal k. The first assumes knowledge of the true images, yielding a supervised learning algorithm based on average relative error. The second uses generalized cross validation (GCV) and does not require the true images. The approach is implemented and demonstrated to be successful for Gaussian, Poisson, and salt-and-pepper noise across noise levels with signal-to-noise ratios as low as 10. This research offers insights into supervised and unsupervised estimators for the truncation index, and demonstrates that the unsupervised algorithm is not only robust and computationally efficient but also comparable to the supervised method.
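The separable structure described above makes both the truncated expansion and the GCV choice of k cheap to compute: if vec(B) = (A1 ⊗ A2) vec(X), the singular values of the blur are the products of the factors' singular values, and the spectral coefficients of B are obtained by two small dense multiplications. The NumPy sketch below follows the standard TSVD/GCV formulas; the function name and the square-factor assumption are ours, and the paper's supervised estimator (which needs the true images) is omitted.

```python
import numpy as np

def tsvd_kron_gcv(B, A1, A2):
    """TSVD deblur for a separable blur vec(B) = (A1 kron A2) vec(X),
    with the truncation index k chosen by generalized cross validation."""
    U1, s1, V1t = np.linalg.svd(A1)
    U2, s2, V2t = np.linalg.svd(A2)
    C = U2.T @ B @ U1                     # spectral coefficients of B
    sig = np.outer(s2, s1)                # singular values of A1 kron A2
    order = np.argsort(sig, axis=None)[::-1]
    c2 = (C.ravel() ** 2)[order]          # squared coeffs, largest sigma first
    N = B.size
    tail = np.cumsum(c2[::-1])[::-1]      # tail[k] = sum of c2[k:], the TSVD residual
    ks = np.arange(1, N)
    gcv = tail[1:] / (N - ks) ** 2        # GCV(k) = ||r_k||^2 / (N - k)^2
    k = int(ks[np.argmin(gcv)])
    W = np.zeros(N)                       # truncated, inverted coefficients
    keep = order[:k]
    W[keep] = C.ravel()[keep] / sig.ravel()[keep]
    return V2t.T @ W.reshape(B.shape) @ V1t   # X_k = V2 W V1^T
```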
  5. We consider the problem of estimating a vector from its noisy measurements using a prior specified only through a denoising function. Recent work on plug-and-play priors (PnP) and regularization-by-denoising (RED) has shown the state-of-the-art performance of estimators under such priors in a range of imaging tasks. In this work, we develop a new block coordinate RED algorithm that decomposes a large-scale estimation problem into a sequence of updates over a small subset of the unknown variables. We theoretically analyze the convergence of the algorithm and discuss its relationship to traditional proximal optimization. Our analysis complements and extends recent theoretical results for RED-based estimation methods. We numerically validate our method using several denoising priors, including those based on convolutional neural network (CNN) denoisers.
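Since this version of the work emphasizes CNN denoisers, the sketch below shows the interface such a denoiser presents to a RED update: a mapping x -> D(x) dropped into the gradient field grad f(x) + tau (x - D(x)). The tiny residual network here is randomly initialized purely for illustration; RED assumes a trained denoiser, and the block coordinate scheme above would apply this step to a subset of coordinates.

```python
import torch
import torch.nn as nn

class TinyCNNDenoiser(nn.Module):
    """Illustrative CNN denoiser interface; weights here are random,
    whereas RED assumes a pre-trained denoiser (e.g., a DnCNN-style model)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.body(x)           # residual (noise-predicting) form

def red_step(x, y, denoiser, tau=0.1, gamma=0.5):
    """One RED gradient step for the denoising data fit f(x) = 0.5||x - y||^2."""
    with torch.no_grad():
        return x - gamma * ((x - y) + tau * (x - denoiser(x)))
```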