Title: A new framework of designing iterative techniques for image deblurring
In this work we present a framework for designing iterative techniques for image deblurring, posed as an inverse problem. The new framework is based on two observations about existing methods. We used the Landweber method as the basis to develop and present the new framework, but note that the framework is applicable to other iterative techniques. First, we observed that the iterative steps of the Landweber method contain a constant term, which is a low-pass filtered version of the already blurry observation. We proposed a modification that uses the observed image directly. Second, we observed that the Landweber method uses an estimate of the true image as the starting point. This estimate, however, does not get updated over the iterations. We proposed a modification that updates this estimate as the iterative process progresses. We integrated the two modifications into one framework for iteratively deblurring images. Finally, we tested the new method and compared its performance with several existing techniques, including the Landweber method, the Van Cittert method, GMRES (generalized minimal residual method), and LSQR (least squares), to demonstrate its superior performance in image deblurring.
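The classical Landweber update that the abstract builds on can be sketched in a few lines. This is a minimal NumPy sketch of the standard iteration for a linear model Ax ≈ b; the example matrix, step size `tau`, and iteration count are illustrative, and the paper's two proposed modifications are not reproduced here.

```python
import numpy as np

def landweber(A, b, tau, n_iters, x0=None):
    """Classical Landweber iteration: x_{k+1} = (I - tau A^T A) x_k + tau A^T b.

    The constant term tau * A^T b is the observation filtered by A^T,
    i.e., a low-pass filtered version of the already blurry data,
    which is the first observation made in the abstract above.
    """
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    const = tau * (A.T @ b)            # constant low-pass term
    M = np.eye(n) - tau * (A.T @ A)    # iteration matrix
    for _ in range(n_iters):
        x = M @ x + const
    return x
```

Convergence requires `tau` below 2 divided by the largest eigenvalue of A^T A; for deblurring, A would be the (large, structured) blur operator rather than a small dense matrix.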
Award ID(s):
2115095
NSF-PAR ID:
10496628
Publisher / Repository:
Elsevier
Journal Name:
Pattern Recognition
Volume:
124
Issue:
C
ISSN:
0031-3203
Page Range / eLocation ID:
108463
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Discrete ill-posed inverse problems arise in many areas of science and engineering. Their solutions are very sensitive to perturbations in the data. Regularization methods aim at reducing this sensitivity. This article considers an iterative regularization method, based on iterated Tikhonov regularization, that was proposed in M. Donatelli and M. Hanke, Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring, Inverse Problems, 29 (2013), Art. 095008, 16 pages. In this method, the exact operator is approximated by an operator that is easier to work with. However, the convergence theory requires the approximating operator to be spectrally equivalent to the original operator, a condition that is rarely satisfied in practice. Nevertheless, this iterative method determines accurate image restorations in many situations. We propose a modification of the iterative method that relaxes the demand of spectral equivalence to a requirement that is easier to satisfy. We show that, although the modified method is not an iterative regularization method, it maintains one of the most important theoretical properties of such methods, namely monotonic decrease of the reconstruction error. Several computed experiments show the good performance of the proposed method.
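The iterated Tikhonov scheme underlying this abstract can be sketched briefly. In the sketch below, `C` stands for the easier-to-work-with approximation of the exact operator used to build the update; `C = A` recovers ordinary iterated Tikhonov. The fixed regularization parameter `alpha` is a simplification, since the nonstationary method referenced above varies it across iterations.

```python
import numpy as np

def iterated_tikhonov(A, b, alpha, n_iters, C=None):
    """Iterated Tikhonov sketch for A x ~= b.

    Each step solves a Tikhonov-regularized problem for a correction h
    built from the surrogate operator C and adds it to the iterate.
    """
    if C is None:
        C = A                              # exact-operator special case
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iters):
        r = b - A @ x                      # residual of the current iterate
        h = np.linalg.solve(C.T @ C + alpha * np.eye(n), C.T @ r)
        x = x + h                          # regularized correction step
    return x
```

The convergence question studied in the abstract concerns exactly how far `C` may deviate from `A` (spectral equivalence versus the relaxed requirement) while keeping the reconstruction error monotonically decreasing.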

     
  2. Abstract

    We propose a regularization-based deblurring method that works efficiently for galaxy images. The spatial resolution of a ground-based telescope is generally limited by seeing conditions and is much worse than that of space-based telescopes. This circumstance has generated considerable research interest in the restoration of spatial resolution. Since image deblurring is a typical inverse problem and often ill-posed, solutions tend to be unstable. To obtain a stable solution, much research has adopted regularization-based methods for image deblurring, but the conventional regularization term is not necessarily appropriate for galaxy images. Although galaxies have an exponential or Sérsic profile, the conventional regularization assumes that image profiles behave linearly in space. The significant deviation between this assumption and real situations leads to blurring of the images and smooths out detailed structures. Clearly, regularization on a logarithmic domain, i.e., the magnitude domain, should provide a more appropriate assumption, which we explore in this study. We formulate the problem of deblurring galaxy images as an objective function with a Tikhonov regularization term on the magnitude domain. We introduce an iterative algorithm that minimizes the objective function with a primal–dual splitting method. We investigate the feasibility of the proposed method using simulated and observed images. In the simulation, we blur galaxy images with a realistic point spread function and add both Gaussian and Poisson noise. For the evaluation with observed images, we use galaxy images taken by the Subaru HSC-SSP. Both evaluations show that our method successfully recovers the spatial resolution of the deblurred images and significantly outperforms conventional methods. The code is publicly available on GitHub: https://github.com/kzmurata-astro/PSFdeconv_amag.
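The magnitude-domain idea in this abstract can be illustrated by writing the objective directly. This sketch evaluates a data-fidelity term plus a Tikhonov smoothness penalty applied to astronomical magnitudes (-2.5 log10 of flux) rather than to raw pixel values; the one-dimensional forward-difference operator, the weight `lam`, and the flux floor `eps` are illustrative choices, and the primal–dual splitting minimizer used in the paper is not reproduced here.

```python
import numpy as np

def magnitude_domain_objective(x, A, b, lam, eps=1e-12):
    """Objective sketch: ||A x - b||^2 + lam * ||D m||^2, with m the
    magnitude image m = -2.5 log10(x) and D a forward-difference operator.

    Penalizing differences of magnitudes (a logarithmic quantity) keeps
    the smoothness assumption compatible with exponential/Sersic profiles.
    """
    m = -2.5 * np.log10(np.maximum(x, eps))   # flux -> magnitude domain
    grad_m = np.diff(m)                       # 1-D forward differences
    data_term = np.sum((A @ x - b) ** 2)      # fidelity to blurred data
    reg_term = lam * np.sum(grad_m ** 2)      # Tikhonov penalty on magnitudes
    return data_term + reg_term
```

A flat (constant-flux) image incurs zero penalty, while a linear-in-flux gradient does not, which is the key difference from regularizing the flux image directly.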

     
  3. In recent years, large convolutional neural networks have been widely used as tools for image deblurring because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem, and its solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and poor reconstructions. In addition, networks trained end-to-end do not necessarily take into account the numerical formulation of the underlying imaging problem. In this paper, we propose strategies to improve stability, without losing too much accuracy, when deblurring images with deep-learning-based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not extremely amplify noise in the computed image. Second, we introduce a unified framework in which a pre-processing step balances the lack of stability of the subsequent neural-network-based step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational-model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments verify the accuracy and stability of the proposed approaches for image deblurring when unknown or unquantified noise is present; the results confirm that the approaches improve the network's stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
  4. Face sketch-photo synthesis is a critical application in law enforcement and the digital entertainment industry. Despite significant improvements in sketch-to-photo synthesis techniques, existing methods still have serious limitations in practice, such as the need for paired data in the training phase or the lack of control over enforcing facial attributes on the synthesized image. In this work, we present a new framework: a conditional version of CycleGAN, conditioned on facial attributes. The proposed network enforces facial attributes, such as skin and hair color, on the synthesized photo and does not need a set of aligned face-sketch pairs during training. We evaluate the proposed network by training on two sketch datasets, one real and one synthetic. The hand-sketch images of the FERET dataset and the color face images from the WVU Multi-modal dataset are used as unpaired inputs to the proposed conditional CycleGAN, with skin color as the controlled facial attribute. For further attribute-guided evaluation, a synthetic sketch dataset is created from the CelebA dataset and used to evaluate the performance of the network by enforcing several desired facial attributes on the synthesized faces.
  5. With fingerphoto images acquired from mobile cameras, low-quality sensors, or crime scenes, it has become a challenge for automated identification systems to verify identity, owing to various acquisition distortions. A significant type of photometric distortion that notably reduces the quality of a fingerphoto is image blurring. This paper proposes a deep fingerphoto deblurring model to restore the ridge information degraded by image blurring. As the core of our model, we utilize a conditional Generative Adversarial Network (cGAN) to learn the distribution of natural ridge patterns. We make several modifications to enhance the quality of the fingerphotos reconstructed (deblurred) by the proposed model. First, we develop a multi-stage GAN to learn the ridge distribution in a coarse-to-fine framework, which enables the model to maintain the consistency of the ridge deblurring process across resolutions. Second, we propose a guided attention module that helps the generator focus mainly on blurred regions. Third, we incorporate a deep fingerphoto verifier as an auxiliary adaptive loss function to force the generator to preserve identity information during the deblurring process. Finally, we evaluate the effectiveness of the proposed model through extensive experiments on multiple public fingerphoto datasets as well as real-world blurred fingerphotos. In particular, our method achieves improvements of 5.2 dB, 8.7%, and 7.6% in PSNR, AUC, and EER, respectively, compared to a state-of-the-art deblurring method.