

Title: Defocus Map Estimation and Deblurring From a Single Dual-Pixel Image
We present a method that takes as input a single dual-pixel image, and simultaneously estimates the image's defocus map---the amount of defocus blur at each pixel---and recovers an all-in-focus image. Our method is inspired by recent works that leverage the dual-pixel sensors available in many consumer cameras to assist with autofocus, and use them to recover defocus maps or all-in-focus images. These prior works have solved the two recovery problems independently of each other, and often require large labeled datasets for supervised training. By contrast, we show that it is beneficial to treat these two closely connected problems simultaneously. To this end, we set up an optimization problem that, by carefully modeling the optics of dual-pixel images, jointly solves both problems. We use data captured with a consumer smartphone camera to demonstrate that, after a one-time calibration step, our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
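The joint formulation the abstract describes can be summarized with a hedged sketch; the notation below is ours, not the paper's, and the blur operators stand in for whatever calibrated dual-pixel optical model the method uses:

```latex
% Hypothetical joint objective (notation ours). I = all-in-focus image,
% D = defocus map; L, R = captured left/right dual-pixel views;
% B_L, B_R = rendered views under the calibrated dual-pixel blur model;
% \rho_I, \rho_D = prior terms with weights \lambda_I, \lambda_D.
\min_{I,\,D}\;
\|L - B_L(I, D)\|_2^2 + \|R - B_R(I, D)\|_2^2
+ \lambda_I\,\rho_I(I) + \lambda_D\,\rho_D(D)
```

Solving for I and D jointly is what couples deblurring with defocus map estimation: each rendered view depends on both unknowns.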
Award ID(s):
1730147
PAR ID:
10317160
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Self-supervised depth estimation has recently demonstrated promising performance compared to supervised methods on challenging indoor scenes. However, the majority of efforts mainly focus on exploiting photometric and geometric consistency via forward and backward image warping, based on monocular videos or stereo image pairs. The influence of defocus blur on depth estimation is neglected, resulting in limited performance for objects and scenes that are out of focus. In this work, we propose the first framework for simultaneous depth estimation from a single image and image focal stacks using depth-from-defocus and depth-from-focus algorithms. The proposed network is able to learn optimal depth mapping from the information contained in the blur of a single image, generate a simulated image focal stack and all-in-focus image, and train a depth estimator from an image focal stack. In addition to validating our method on both the synthetic NYUv2 dataset and a real DSLR dataset, we also collect our own dataset using a DSLR camera and verify our method on it. Experiments demonstrate that our system surpasses the state-of-the-art supervised depth estimation method by over 4% in accuracy and achieves superb performance among methods without direct supervision on the synthesized NYUv2 dataset, which has been rarely explored.
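The core geometric ingredient of such a pipeline is the mapping from depth and focus distance to blur size. A minimal sketch, assuming a thin-lens model and illustrative camera parameters (this is the generic relation, not this paper's exact blur model):

```python
import numpy as np

def coc_radius_px(depth_m, focus_m, focal_mm=50.0, f_number=2.0,
                  px_per_mm=200.0):
    """Thin-lens circle-of-confusion radius in pixels for scene points
    at depth_m when the lens is focused at focus_m. Generic relation,
    with illustrative (assumed) camera parameters."""
    f = focal_mm / 1000.0                 # focal length (m)
    aperture = f / f_number               # aperture diameter (m)
    coc_m = aperture * f * np.abs(depth_m - focus_m) / (
        depth_m * (focus_m - f))          # CoC diameter on sensor (m)
    return 0.5 * coc_m * 1000.0 * px_per_mm

# A focal stack is then simulated by blurring the all-in-focus image,
# per pixel, with a kernel of the radius predicted for its depth, once
# for each focus setting in the stack:
depths = np.array([1.0, 2.0, 4.0])        # example depths (m)
for focus_m in (1.0, 2.0, 4.0):           # example focus settings (m)
    print(focus_m, np.round(coc_radius_px(depths, focus_m), 2))
```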
  2. The image processing task of recovering an image from a noisy or compromised observation is an ill-posed inverse problem. To solve it, it is necessary to incorporate prior information about the smoothness, or the structure, of the solution by incorporating regularization. Here, we consider linear blur operators with an efficiently computed singular value decomposition. Regularization is then obtained by employing a truncated singular value expansion for image recovery. In this study, we focus on images for which the blur operator is separable and can be represented by a Kronecker product, so that the associated singular value decomposition is expressible in terms of the singular value decompositions of the separable components. The truncation index k can then be identified without forming the full Kronecker product of the two terms. This report investigates the problem of learning an optimal k using two methods. The first assumes knowledge of the true images, yielding a supervised learning algorithm based on the average relative error. The second uses generalized cross validation (GCV) and does not require knowledge of the true images. The approach is implemented and demonstrated to be successful for Gaussian, Poisson, and salt-and-pepper noise types across noise levels with signal-to-noise ratios as low as 10. This research contributes to the field by offering insights into the use of supervised and unsupervised estimators for the truncation index, and demonstrates that the unsupervised algorithm is not only robust and computationally efficient, but also comparable to the supervised method.
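A sketch of the separable-blur recovery described above, in NumPy; the interface and variable names are ours, and the blur is assumed to act as B = Ac X Arᵀ, so the Kronecker product A = Ar ⊗ Ac is never formed:

```python
import numpy as np

def tsvd_kron_recover(B, Ar, Ac, k):
    """Truncated-SVD recovery of X from B = Ac @ X @ Ar.T, where the
    blur A = Ar (x) Ac is separable. Illustrative sketch; the interface
    and variable names are ours, not the report's."""
    Uc, sc, Vct = np.linalg.svd(Ac)
    Ur, sr, Vrt = np.linalg.svd(Ar)
    # The singular values of Ar (x) Ac are all products sc[i] * sr[j].
    S = np.outer(sc, sr)
    Bt = Uc.T @ B @ Ur                    # data in the joint SVD basis
    keep = np.zeros(S.shape, dtype=bool)  # mask of the k largest values
    idx = np.argsort(S, axis=None)[::-1][:k]
    keep[np.unravel_index(idx, S.shape)] = True
    Xt = np.zeros_like(Bt)
    Xt[keep] = Bt[keep] / S[keep]         # invert only retained terms
    return Vct.T @ Xt @ Vrt               # back to the image domain
```

The two estimators for k would then wrap this routine: the supervised one picks the k minimizing average relative error against known true images, while GCV needs only B, Ar, and Ac.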
  3. Abstract

    Introduction

    This study investigated differences in peripheral image quality with refractive error. Peripheral blur orientation is determined by the interaction of optical aberrations (such as oblique astigmatism) and retinal shape. By providing the eye with an optical signal for determining the sign of defocus, peripheral blur anisotropy may play a role in mechanisms of accommodation, emmetropisation and optical myopia control interventions. Specifically, this study examined peripheral through‐focus optical anisotropy and image quality and how they vary with the eye's refractive error.

    Methods

    Previously published Zernike coefficients across retinal eccentricity (0, 10, 20 and 30° horizontal nasal visual field) were used to compute the through‐focus modulation transfer function (MTF) for a 4 mm pupil. Image quality was defined as the volume under the MTF, and blur anisotropy was defined as the ratio of the horizontal to vertical meridians of the MTF (HVRatio).

    Results

    Across the horizontal nasal visual field (at 10, 20 and 30°), the peak image quality for emmetropes was within 0.3 D of the retina, as opposed to myopes whose best focus was behind the retina (−0.1, 0.4 and 1.5 D, respectively), while for hyperopes it lay in front of the retina (−0.5, −0.6 and −0.6 D). At 0.0 D (i.e., on the retina), emmetropes and hyperopes both exhibited horizontally elongated blur, whereas myopes had vertically elongated blur (HVRatio = 0.3, 0.7 and 2.8, respectively, at 30° eccentricity).

    Conclusions

    Blur in the peripheral retina is dominated by the so‐called “odd‐error” blur signals, primarily due to oblique astigmatism. The orientation of peripheral blur (horizontal or vertical) provides the eye with an optical cue for the sign of defocus. All subject groups had anisotropic blur in the nasal visual field; myopes exhibited vertically elongated blur, perpendicular to the blur orientation of emmetropes and hyperopes.

     
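The Methods description maps onto a standard Fourier-optics computation. A minimal sketch, using only two example Zernike terms (defocus and astigmatism) rather than the study's full published coefficient sets; sampling and normalization choices here are ours:

```python
import numpy as np

def mtf_and_hvratio(defocus_um, astig_um, n=256, wavelength_um=0.555):
    """Compute the MTF and HVRatio from a pupil function built from two
    example Zernike terms, defocus Z(2,0) and astigmatism Z(2,2).
    A sketch of the computation, not the study's full pipeline."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    r2, theta = X**2 + Y**2, np.arctan2(Y, X)
    inside = r2 <= 1.0                        # normalized pupil aperture
    # Wavefront error (microns) over the normalized pupil
    W = (defocus_um * np.sqrt(3) * (2 * r2 - 1)
         + astig_um * np.sqrt(6) * r2 * np.cos(2 * theta))
    P = inside * np.exp(2j * np.pi * W / wavelength_um)   # pupil function
    psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2      # incoherent PSF
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    mtf /= mtf.max()
    # HVRatio: horizontal vs. vertical meridian of the MTF
    h = mtf[n // 2, n // 2:].sum()
    v = mtf[n // 2:, n // 2].sum()
    return mtf, h / v
```

Sweeping the defocus coefficient traces out the through-focus image-quality and HVRatio curves described in the Results.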
  4. Abstract

    We propose a regularization-based deblurring method that works efficiently for galaxy images. The spatial resolution of a ground-based telescope is generally limited by seeing conditions and is much worse than that of space-based telescopes. This circumstance has generated considerable research interest in the restoration of spatial resolution. Since image deblurring is a typical inverse problem and often ill-posed, solutions tend to be unstable. To obtain a stable solution, much research has adopted regularization-based methods for image deblurring, but the regularization term is not necessarily appropriate for galaxy images. Although galaxies have an exponential or Sérsic profile, conventional regularization assumes that image profiles vary linearly in space. The significant deviation between this assumption and real galaxy profiles blurs the restored images and smooths out detailed structures. Regularization in the logarithmic domain, i.e., the magnitude domain, should therefore provide a more appropriate assumption, which we explore in this study. We formulate the deblurring of galaxy images as the minimization of an objective function with a Tikhonov regularization term in the magnitude domain. We introduce an iterative algorithm that minimizes the objective function with a primal–dual splitting method. We investigate the feasibility of the proposed method using simulated and observed images. In the simulation, we blur galaxy images with a realistic point spread function and add both Gaussian and Poisson noise. For the evaluation with observed images, we use galaxy images taken by the Subaru HSC-SSP. Both evaluations show that our method successfully recovers the spatial resolution of the deblurred images and significantly outperforms conventional methods. The code is publicly available on GitHub: https://github.com/kzmurata-astro/PSFdeconv_amag

     
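The objective described above can be summarized in a hedged sketch; the notation is ours, and the smoothness operator is a placeholder for whatever regularizer the paper actually uses:

```latex
% Sketch of a magnitude-domain Tikhonov objective (notation ours).
% y = observed image, x = deblurred image (flux, x > 0),
% H = PSF convolution, D = a finite-difference smoothness operator,
% m(x) = per-pixel magnitude transform.
\min_{x > 0}\; \tfrac{1}{2}\,\|y - Hx\|_2^2 + \lambda\,\|D\,m(x)\|_2^2,
\qquad m(x) = -2.5\,\log_{10}(x)
```

Because the regularizer acts on m(x) rather than x, smoothness is enforced on logarithmic brightness, which matches the exponential and Sérsic profiles of galaxies far better than linear-domain smoothing.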
  5. Blur occurs naturally when the eye is focused at one distance and an object is presented at another distance. Computer-graphics engineers and vision scientists often wish to create display images that reproduce such depth-dependent blur, but their methods are incorrect for that purpose. They take into account the scene geometry, pupil size, and focal distances, but do not properly take into account the optical aberrations of the human eye. We developed a method that, by incorporating the viewer’s optics, yields displayed images that produce retinal images close to the ones that occur in natural viewing. We concentrated on the effects of defocus, chromatic aberration, astigmatism, and spherical aberration and evaluated their effectiveness by conducting experiments in which we attempted to drive the eye’s focusing response (accommodation) through the rendering of these aberrations. We found that accommodation is not driven at all by conventional rendering methods, but that it is driven surprisingly quickly and accurately by our method with defocus and chromatic aberration incorporated. We found some effect of astigmatism but none of spherical aberration. We discuss how the rendering approach can be used in vision science experiments and in the development of ophthalmic/optometric devices and augmented- and virtual-reality displays. 
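As a toy illustration of the key idea above, assuming rough, commonly cited values for the eye's longitudinal chromatic aberration and a Gaussian stand-in for the defocus kernel (the authors' renderer models the full aberration structure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed longitudinal chromatic aberration offsets, in diopters,
# relative to green; rough values for illustration only.
LCA_D = {"r": +0.35, "g": 0.0, "b": -0.7}

def render_chromatic_defocus(img_rgb, defocus_d, px_per_diopter=4.0):
    """Blur each color channel by an amount set by its total defocus
    (scene defocus plus the channel's chromatic offset). A Gaussian
    kernel stands in for the true, aberrated point spread function."""
    out = np.empty_like(img_rgb)
    for c, key in enumerate("rgb"):
        sigma = abs(defocus_d + LCA_D[key]) * px_per_diopter
        out[..., c] = gaussian_filter(img_rgb[..., c], sigma)
    return out
```

The asymmetry between channels (a redder or bluer blur fringe depending on whether the object is nearer or farther than fixation) is the kind of signed cue that lets rendered chromatic aberration drive accommodation.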