

Title: Defocus Map Estimation and Deblurring From a Single Dual-Pixel Image
We present a method that takes as input a single dual-pixel image and simultaneously estimates the image's defocus map---the amount of defocus blur at each pixel---and recovers an all-in-focus image. Our method is inspired by recent works that leverage the dual-pixel sensors available in many consumer cameras, originally intended to assist with autofocus, and use them to recover defocus maps or all-in-focus images. These prior works have solved the two recovery problems independently of each other, and often require large labeled datasets for supervised training. By contrast, we show that it is beneficial to treat these two closely connected problems simultaneously. To this end, we set up an optimization problem that, by carefully modeling the optics of dual-pixel images, jointly solves both problems. We use data captured with a consumer smartphone camera to demonstrate that, after a one-time calibration step, our approach improves upon prior work on both defocus map estimation and blur removal, despite being entirely unsupervised.
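As a rough illustration of the kind of joint formulation the abstract describes (our own notation and an assumed form, not necessarily the paper's exact objective), the recovery of the all-in-focus image I and the defocus map d from the two dual-pixel views I_L and I_R can be written as a single optimization:

\min_{I,\,d}\;\; \bigl\| k_L(d) \ast I - I_L \bigr\|_2^2 \;+\; \bigl\| k_R(d) \ast I - I_R \bigr\|_2^2 \;+\; \lambda_I\, R_I(I) \;+\; \lambda_d\, R_d(d)

where k_L(d) and k_R(d) denote the per-pixel left/right dual-pixel blur kernels for defocus amount d (the quantities a one-time calibration would provide), and R_I, R_d are image and defocus-map priors with weights \lambda_I, \lambda_d.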
Award ID(s):
1730147
NSF-PAR ID:
10317160
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Self-supervised depth estimation has recently demonstrated promising performance compared to supervised methods on challenging indoor scenes. However, most efforts focus on exploiting photometric and geometric consistency via forward and backward image warping, based on monocular videos or stereo image pairs. The influence of defocus blur on depth estimation is neglected, resulting in limited performance for objects and scenes that are out of focus. In this work, we propose the first framework for simultaneous depth estimation from a single image and an image focal stack using depth-from-defocus and depth-from-focus algorithms. The proposed network learns an optimal depth mapping from the information contained in the blur of a single image, generates a simulated image focal stack and an all-in-focus image, and trains a depth estimator from the image focal stack. In addition to validating our method on both the synthetic NYUv2 dataset and a real DSLR dataset, we also collect our own dataset using a DSLR camera and further verify our method on it. Experiments demonstrate that our system surpasses the state-of-the-art supervised depth estimation method by over 4% in accuracy and achieves superb performance among methods without direct supervision on the synthesized NYUv2 dataset, which has rarely been explored.
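    To make the focal-stack simulation step above concrete, here is a minimal sketch (our own simplification, not the authors' network: the thin-lens parameters, the Gaussian approximation of the blur disc, and the quantised blur levels are all assumptions) of rendering a focal stack from an all-in-focus image and a depth map:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def circle_of_confusion(depth, focus_dist, focal_len=0.05, f_number=2.0):
        """Circle-of-confusion diameter (metres on the sensor) from the thin-lens model."""
        aperture = focal_len / f_number
        return aperture * focal_len * np.abs(depth - focus_dist) / (
            depth * np.maximum(focus_dist - focal_len, 1e-6))

    def render_focal_stack(aif, depth, focus_dists, px_per_m=2e5):
        """Blur a single-channel all-in-focus image `aif` by each pixel's CoC
        for every focus distance, approximating the blur disc with a Gaussian."""
        stack = []
        for fd in focus_dists:
            coc_px = circle_of_confusion(depth, fd) * px_per_m          # per-pixel blur in pixels
            levels = np.linspace(0.0, coc_px.max(), 8)                  # a few discrete blur levels
            blurred = np.stack([gaussian_filter(aif, s) if s > 0 else aif for s in levels])
            idx = np.abs(coc_px[None] - levels[:, None, None]).argmin(axis=0)
            stack.append(np.take_along_axis(blurred, idx[None], axis=0)[0])
        return np.stack(stack)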
  2. Abstract

    We propose a regularization-based deblurring method that works efficiently for galaxy images. The spatial resolution of a ground-based telescope is generally limited by seeing conditions and is much worse than that of space-based telescopes. This circumstance has generated considerable research interest in the restoration of spatial resolution. Since image deblurring is a typical inverse problem and often ill-posed, solutions tend to be unstable. To obtain a stable solution, much research has adopted regularization-based methods for image deblurring, but the conventional regularization terms are not necessarily appropriate for galaxy images. Although galaxies have an exponential or Sérsic profile, conventional regularization assumes that image profiles vary linearly in space. The significant deviation between this assumption and real galaxy profiles blurs the restored images and smooths out detailed structures. Regularization in a logarithmic domain, i.e., the magnitude domain, should therefore provide a more appropriate assumption, which we explore in this study. We formulate the deblurring of galaxy images as the minimization of an objective function with a Tikhonov regularization term in the magnitude domain, and introduce an iterative algorithm that minimizes this objective function with a primal–dual splitting method. We investigate the feasibility of the proposed method using both simulated and observed images. In the simulation, we blur galaxy images with a realistic point spread function and add both Gaussian and Poisson noise. For the evaluation with observed images, we use galaxy images taken by the Subaru HSC-SSP. Both evaluations show that our method successfully recovers the spatial resolution of the deblurred images and significantly outperforms conventional methods. The code is publicly available on GitHub at https://github.com/kzmurata-astro/PSFdeconv_amag.

     
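    A plausible form of the objective described above (our paraphrase and notation; the paper's exact formulation, weights, and constraints may differ) is Tikhonov regularization applied to the magnitude image rather than to the flux image:

    \min_{x \ge 0}\; \tfrac{1}{2}\,\bigl\| y - h \ast x \bigr\|_2^2 \;+\; \frac{\lambda}{2}\,\bigl\| \nabla\, m(x) \bigr\|_2^2, \qquad m(x) = -2.5 \log_{10} x

    Here y is the observed image, h the point spread function, x the deblurred flux image, and m(x) its magnitude-domain counterpart; the nonlinear logarithmic transform and the non-negativity constraint are what make a primal–dual splitting scheme a natural solver.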
  3. Jumping spiders (Salticidae) rely on accurate depth perception for predation and navigation. They accomplish depth perception, despite their tiny brains, by using specialized optics. Each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and from these images, distance is decoded with relatively little computation. We introduce a compact depth sensor that is inspired by the jumping spider. It combines metalens optics, which modify the phase of incident light at a subwavelength scale, with efficient computations to measure depth from image defocus. Instead of using a multitiered retina to transduce multiple simultaneous images, the sensor uses a metalens to split the light that passes through an aperture and concurrently form two differently defocused images at distinct regions of a single planar photosensor. We demonstrate a system that deploys a 3-mm-diameter metalens to measure depth over a 10-cm distance range, using fewer than 700 floating-point operations per output pixel. Compared with previous passive depth sensors, our metalens depth sensor is compact, single-shot, and requires only a small amount of computation. This integration of nanophotonics and efficient computation brings artificial depth sensing closer to being feasible on millimeter-scale, microwatt platforms such as microrobots and microsensor networks.
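    For intuition, here is a small sketch (our own illustration with assumed calibration constants, not the sensor's actual decoding step) of one classical way to turn two differently defocused images of the same scene into a per-pixel depth cue, in the spirit of depth-from-differential-defocus:

    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def depth_cue_from_defocus_pair(i_plus, i_minus, alpha=1.0, beta=0.0, eps=1e-3):
        """Per-pixel depth cue from two images with equal-and-opposite defocus.
        Under a differential-defocus model the brightness difference between the
        two images is approximately the Laplacian of their mean scaled by a
        depth-dependent factor, so the ratio below maps to depth via the
        calibration constants alpha and beta (assumed, not the paper's values)."""
        mean = 0.5 * (i_plus + i_minus)
        lap = laplace(gaussian_filter(mean, sigma=1.0))   # lightly denoised Laplacian
        diff = gaussian_filter(i_plus - i_minus, sigma=1.0)
        reliable = np.abs(lap) > eps                      # only textured pixels are informative
        safe_lap = np.where(reliable, lap, 1.0)           # avoid division by ~0
        depth = np.where(reliable, alpha * diff / safe_lap + beta, np.nan)
        return depth, reliable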
  4.
    The sky exhibits a unique spatial polarization pattern created by the scattering of unpolarized sunlight. Just as insects use this unique angular pattern to navigate, we use it to map pixels to directions on the sky. That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel. We derive a polarimetric reflection model of a diffuse-plus-mirror surface lit by the sun and a clear sky. This model is used to recover the per-pixel surface normal of an object from a single polarimetric image or from multiple polarimetric images captured under the sky at different times of the day. We experimentally evaluate the accuracy of our shape-from-sky method on a number of real objects with different surface compositions. The results clearly show that this passive approach to fine-geometry recovery, which fully leverages the unique illumination provided by nature, is a viable option for 3D sensing. With the advent of quad-Bayer polarization chips, we believe the implications of our method span a wide range of domains.
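    As background on how such polarimetric images are typically read out, the sketch below (standard Stokes-vector arithmetic, not the paper's surface-normal recovery itself) converts the four polarizer-angle measurements of a quad-Bayer polarization chip into the degree and angle of linear polarization that shape-from-polarization methods operate on:

    import numpy as np

    def linear_polarization(i0, i45, i90, i135, eps=1e-8):
        """Stokes parameters, degree (DoLP) and angle (AoLP) of linear polarization
        from intensities behind 0/45/90/135-degree polarizers (quad-Bayer layout)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical preference
        s2 = i45 - i135                      # diagonal preference
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
        aolp = 0.5 * np.arctan2(s2, s1)      # radians, in (-pi/2, pi/2]
        return s0, dolp, aolp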
  5. Blur occurs naturally when the eye is focused at one distance and an object is presented at another distance. Computer-graphics engineers and vision scientists often wish to create display images that reproduce such depth-dependent blur, but their methods are incorrect for that purpose. They take into account the scene geometry, pupil size, and focal distances, but do not properly take into account the optical aberrations of the human eye. We developed a method that, by incorporating the viewer’s optics, yields displayed images that produce retinal images close to the ones that occur in natural viewing. We concentrated on the effects of defocus, chromatic aberration, astigmatism, and spherical aberration and evaluated their effectiveness by conducting experiments in which we attempted to drive the eye’s focusing response (accommodation) through the rendering of these aberrations. We found that accommodation is not driven at all by conventional rendering methods, but that it is driven surprisingly quickly and accurately by our method with defocus and chromatic aberration incorporated. We found some effect of astigmatism but none of spherical aberration. We discuss how the rendering approach can be used in vision science experiments and in the development of ophthalmic/optometric devices and augmented- and virtual-reality displays. 
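    To illustrate the flavour of aberration-aware rendering, the toy sketch below makes the defocus blur wavelength-dependent per colour channel. It is only a crude approximation under assumed dioptre offsets and Gaussian blur; the paper argues that faithful results require the viewer's measured wavefront aberrations, which this sketch does not model:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Rough longitudinal chromatic aberration of the eye relative to green,
    # in dioptres (illustrative values, not the paper's calibrated model).
    CHROMATIC_OFFSET_D = {"R": +0.3, "G": 0.0, "B": -0.8}

    def render_chromatic_defocus(rgb, defocus_d, px_per_dioptre=4.0):
        """Blur each colour channel of `rgb` (H x W x 3, float) with a Gaussian
        whose sigma grows with that channel's total dioptric defocus.
        `defocus_d` is the dioptric difference between the eye's accommodation
        distance and the simulated object distance (an assumed parameterization)."""
        out = np.empty_like(rgb)
        for c, key in enumerate("RGB"):
            sigma = abs(defocus_d + CHROMATIC_OFFSET_D[key]) * px_per_dioptre
            channel = rgb[..., c]
            out[..., c] = gaussian_filter(channel, sigma) if sigma > 0 else channel
        return out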