Title: Snapshot polarimetric diffuse-specular separation
We present a polarization-based approach to perform diffuse-specular separation from a single polarimetric image, acquired using a flexible, practical capture setup. Our key technical insight is that, unlike previous polarization-based separation methods that assume completely unpolarized diffuse reflectance, we use a more general polarimetric model that accounts for partially polarized diffuse reflections. We capture the scene with a polarimetric sensor and produce an initial analytical diffuse-specular separation that we further pass into a deep network trained to refine the separation. We demonstrate that our combination of analytical separation and deep network refinement produces state-of-the-art diffuse-specular separation, which enables image-based appearance editing of dynamic scenes and enhanced appearance estimation.
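As a rough illustration of the snapshot capture described above, the sketch below (Python; the function names and the separation rule are my assumptions, not the paper's model) recovers the linear Stokes parameters from the four polarizer-angle images of a division-of-focal-plane sensor and then applies the classical split that attributes all polarized signal to specular reflection, i.e., exactly the fully-unpolarized-diffuse assumption that the paper's more general model relaxes.

    import numpy as np

    def stokes_from_quad(i0, i45, i90, i135):
        # Intensity model behind a snapshot polarimetric sensor:
        # I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal/vertical difference
        s2 = i45 - i135                      # diagonal difference
        return s0, s1, s2

    def classical_separation(i0, i45, i90, i135, eps=1e-8):
        # Classical baseline: diffuse light is assumed fully unpolarized,
        # so the linearly polarized part is treated as purely specular.
        s0, s1, s2 = stokes_from_quad(i0, i45, i90, i135)
        polarized = np.sqrt(s1 ** 2 + s2 ** 2)
        specular = polarized
        diffuse = np.clip(s0 - polarized, 0.0, None)
        dolp = polarized / (s0 + eps)        # degree of linear polarization
        return diffuse, specular, dolp

In the paper's pipeline, an analytical separation of this general kind serves only as an initialization that a trained network then refines.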
Award ID(s): 1730574, 1652633
PAR ID: 10370755
Author(s) / Creator(s):
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Express
Volume: 30
Issue: 19
ISSN: 1094-4087; OPEXFF
Format(s): Medium: X
Size(s): Article No. 34239
Sponsoring Org: National Science Foundation
More Like this
  1. Helmholtz stereopsis (HS) exploits the reciprocity principle of light propagation (i.e., the Helmholtz reciprocity) for 3D reconstruction of surfaces with arbitrary reflectance. In this paper, we present the polarimetric Helmholtz stereopsis (polar-HS), which extends the classical HS by considering the polarization state of light in the reciprocal paths. With the additional phase information from polarization, polar-HS requires only one reciprocal image pair. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals using an optimization framework. Using a hardware prototype, we show that our approach produces high-quality 3D reconstruction for different types of surfaces, ranging from diffuse to highly specular.
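For the Helmholtz stereopsis entry above: the classical HS constraint that polar-HS builds on can be written, in the standard formulation (notation assumed), for a surface point p with normal n, reciprocal camera/light centers o_l and o_r, and measured intensities i_l and i_r as

    \Big( i_l\,\frac{\hat{\mathbf{v}}_l}{\lVert \mathbf{o}_l-\mathbf{p}\rVert^{2}} \;-\; i_r\,\frac{\hat{\mathbf{v}}_r}{\lVert \mathbf{o}_r-\mathbf{p}\rVert^{2}} \Big) \cdot \mathbf{n} \;=\; 0,
    \qquad \hat{\mathbf{v}}_{l} = \frac{\mathbf{o}_{l}-\mathbf{p}}{\lVert \mathbf{o}_{l}-\mathbf{p}\rVert},\quad
    \hat{\mathbf{v}}_{r} = \frac{\mathbf{o}_{r}-\mathbf{p}}{\lVert \mathbf{o}_{r}-\mathbf{p}\rVert}.

The BRDF cancels out of this constraint by reciprocity, which is why arbitrary reflectance can be handled; the additional polarimetric phase constraints described in the abstract are what let the requirement drop to a single reciprocal pair.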
  2. The sky exhibits a unique spatial polarization pattern caused by scattering of unpolarized sunlight. Just as insects use this unique angular pattern to navigate, we use it to map pixels to directions on the sky. That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel. We derive a polarimetric reflection model of a diffuse plus mirror surface lit by the sun and a clear sky. This model is used to recover the per-pixel surface normal of an object from a single polarimetric image or from multiple polarimetric images captured under the sky at different times of the day. We experimentally evaluate the accuracy of our shape-from-sky method on a number of real objects with different surface compositions. The results clearly show that this passive approach to fine-geometry recovery, which fully leverages the unique illumination provided by nature, is a viable option for 3D sensing. With the advent of quad-Bayer polarization chips, we believe the implications of our method span a wide range of domains.
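For the shape-from-sky entry above, a minimal sketch (Python; the link to the paper's specific diffuse-plus-mirror sky model is my paraphrase) of the standard per-pixel polarization cues that such methods decode: the angle of linear polarization, which constrains the surface-normal azimuth up to an ambiguity, and the degree of linear polarization, which relates to the zenith angle.

    import numpy as np

    def polarization_cues(s0, s1, s2, eps=1e-8):
        # Per-pixel cues from linear Stokes images s0, s1, s2
        # (e.g., demosaicked from a quad-Bayer polarization chip).
        aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)  # degree of linear polarization
        return aolp, dolp

Resolving the remaining ambiguities is where the known sun and clear-sky illumination model comes in.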
  3. In this paper, we present a polarimetric image restoration approach that recovers the Stokes parameters and the degree of linear polarization from their degraded counterparts. These quantities are corrupted by partial occlusion and by turbid media (e.g., turbid water), which introduce scattering and attenuation. The restoration, together with estimation of the corresponding Mueller matrix, is performed using polarization-informed deep learning and 3D integral imaging. An unsupervised image-to-image translation (UNIT) framework recovers clean Stokes parameters from the degraded ones, and a multi-output convolutional neural network (CNN) branch predicts the Mueller matrix estimate along with an estimate of the corresponding residue. Together, the degree of linear polarization and the Mueller matrix estimate characterize the transmission medium and the object under consideration. The approach has been evaluated under different environmental degradations, such as various levels of turbidity and partial occlusion; 3D integral imaging reduces the effects of degradation in a turbid medium, and we compare the performance of 3D and 2D imaging in varying scene conditions. Experimental results suggest that the proposed approach is promising under the scene degradations considered. To the best of our knowledge, this is the first report of polarization-informed deep learning in 3D imaging that recovers polarimetric information along with the corresponding Mueller matrix estimate in a degraded environment.
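To make the Stokes/Mueller quantities in the entry above concrete, a small sketch (Python; the example matrix is a textbook optical element, not an estimate produced by the method): a medium or optical element acts on a Stokes vector through its 4x4 Mueller matrix, and the degree of linear polarization is read off the first three Stokes components.

    import numpy as np

    def apply_mueller(M, stokes_in):
        # Propagation through an element/medium: S_out = M @ S_in
        return M @ stokes_in

    def degree_of_linear_polarization(stokes, eps=1e-8):
        s0, s1, s2 = stokes[0], stokes[1], stokes[2]
        return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

    # Example: an ideal horizontal linear polarizer (standard Mueller matrix)
    M_polarizer = 0.5 * np.array([[1, 1, 0, 0],
                                  [1, 1, 0, 0],
                                  [0, 0, 0, 0],
                                  [0, 0, 0, 0]], dtype=float)
    s_unpolarized = np.array([1.0, 0.0, 0.0, 0.0])
    print(degree_of_linear_polarization(apply_mueller(M_polarizer, s_unpolarized)))  # ~1.0

An estimated Mueller matrix of the scattering medium plays the role of M here, which is why it carries information about the medium's characteristics.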
  4. We present a method to separate a single image captured under two illuminants with different spectra into the two images corresponding to the appearance of the scene under each individual illuminant. We do this by training a deep neural network to predict the per-pixel reflectance chromaticity of the scene, which we use in conjunction with a previous flash/no-flash image-based separation algorithm to produce the final two output images. We design our reflectance chromaticity network and loss functions by incorporating intuitions from the physics of image formation. We show that this leads to significantly better performance than other single-image techniques and even approaches the quality of the two-image separation method.
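A common two-illuminant image-formation model consistent with the description above (the notation is mine, not the paper's): per color channel c, the observed image factors into a per-pixel reflectance R_c, whose chromaticity the network predicts, and two shading-times-illuminant terms; the two output images correspond to the two terms inside the brackets.

    I_c(\mathbf{p}) \;=\; R_c(\mathbf{p})\,\big[\, s_1(\mathbf{p})\,\ell_{1,c} \;+\; s_2(\mathbf{p})\,\ell_{2,c} \,\big],
    \qquad c \in \{R, G, B\}.

Knowing the reflectance chromaticity removes the reflectance color from the mixture, leaving a separation problem analogous to the flash/no-flash case.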
  5. Recovering 3D face models from in-the-wild face images has numerous potential applications. However, properly modeling the complex lighting effects found in reality, including specular lighting, shadows, and occlusions, from a single in-the-wild face image remains a wide-open research challenge. In this paper, we propose a convolutional neural network based framework that regresses the face model from a single image in the wild. The output face model includes dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions. Our approach uses novel hybrid loss functions to disentangle face shape identity, expression, pose, albedo, and lighting. Besides a carefully designed ablation study, we conduct direct comparison experiments showing that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
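For the face-regression entry above, one plausible shape for a "hybrid" training objective is a weighted sum of a photometric term, a landmark term, and coefficient regularizers; the sketch below (Python/NumPy) is an illustrative assumption, not the paper's exact losses.

    import numpy as np

    def hybrid_loss(rendered, image, pred_lmk, gt_lmk,
                    id_coeffs, exp_coeffs, albedo_coeffs,
                    w_photo=1.0, w_lmk=0.1, w_reg=1e-3):
        # Weighted combination of per-factor terms; penalizing each factor
        # (shape identity, expression, albedo) separately encourages
        # disentanglement of the regressed face parameters.
        photo = np.mean((rendered - image) ** 2)                  # re-rendering error
        lmk = np.mean(np.sum((pred_lmk - gt_lmk) ** 2, axis=-1))  # 2D landmark error
        reg = (np.sum(id_coeffs ** 2) + np.sum(exp_coeffs ** 2)
               + np.sum(albedo_coeffs ** 2))                      # keep coefficients plausible
        return w_photo * photo + w_lmk * lmk + w_reg * reg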