Search for: All records

Creators/Authors contains: "Hold-Geoffroy, Yannick"


  1. We present a polarization-based approach to perform diffuse-specular separation from a single polarimetric image, acquired using a flexible, practical capture setup. Our key technical insight is that, unlike previous polarization-based separation methods that assume completely unpolarized diffuse reflectance, we use a more general polarimetric model that accounts for partially polarized diffuse reflections. We capture the scene with a polarimetric sensor and produce an initial analytical diffuse-specular separation that we further pass into a deep network trained to refine the separation. We demonstrate that our combination of analytical separation and deep network refinement produces state-of-the-art diffuse-specular separation, which enables image-based appearance editing of dynamic scenes and enhanced appearance estimation. (A sketch of the classical baseline separation that this work relaxes appears after this list.)
  2. Ensuring ideal lighting when recording videos of people can be a daunting task, requiring a controlled environment and expensive equipment. Methods were recently proposed to perform portrait relighting for still images, enabling after-the-fact lighting enhancement. However, naively applying these methods to each frame independently yields videos plagued with flickering artifacts. In this work, we propose the first method to perform temporally consistent video portrait relighting. To achieve this, our method jointly optimizes the desired lighting and temporal consistency end-to-end. We do not require ground-truth lighting annotations during training, allowing us to take advantage of the large corpus of portrait videos already available on the internet. We demonstrate that our method outperforms previous work in balancing accurate relighting and temporal consistency on a number of real-world portrait videos. (An illustrative temporal-consistency term is sketched after this list.)
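The first abstract contrasts its polarimetric model with earlier methods that assume completely unpolarized diffuse reflectance. For reference, the sketch below implements that classical baseline from the four polarizer-angle measurements of a division-of-focal-plane polarimetric sensor: it computes the linear Stokes parameters, treats the linearly polarized component as specular, and treats the remainder as diffuse. The function name, input layout, and use of NumPy are illustrative assumptions; this is the assumption the paper relaxes, not the paper's own model or its deep refinement network.

```python
import numpy as np

def classical_diffuse_specular_split(i0, i45, i90, i135):
    """Baseline polarimetric separation assuming completely unpolarized
    diffuse reflectance (the assumption the abstract says it relaxes).

    i0, i45, i90, i135: float arrays of intensities measured behind linear
    polarizers at 0/45/90/135 degrees, as provided by a division-of-focal-plane
    polarimetric sensor (hypothetical input layout).
    """
    # Linear Stokes parameters from the four polarizer measurements.
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135

    # Magnitude of the linearly polarized component.
    polarized = np.sqrt(s1 ** 2 + s2 ** 2)

    # Classical assumption: specular reflection carries all the polarization,
    # diffuse reflection is the remaining unpolarized intensity.
    specular = polarized
    diffuse = np.clip(s0 - polarized, 0.0, None)
    return diffuse, specular
```

In practice this baseline leaks partially polarized diffuse light into the specular estimate, which is precisely the limitation the abstract's more general polarimetric model and learned refinement are meant to address.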
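The second abstract states that lighting and temporal consistency are optimized jointly but does not spell out the objective. One common way to express a temporal-consistency term, shown below as an illustrative PyTorch sketch rather than the paper's actual loss, is to warp the previous relit frame into the current one with optical flow and penalize the difference wherever the warp is reliable. The function name, the flow convention, and the validity mask are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(relit_prev, relit_curr, flow, valid_mask):
    """Warp the previous relit frame into the current frame with optical
    flow and penalize differences in valid (non-occluded) regions.

    relit_prev, relit_curr: (B, 3, H, W) relit frames at times t-1 and t
    flow:       (B, 2, H, W) backward flow from frame t to frame t-1,
                in pixels (hypothetical convention)
    valid_mask: (B, 1, H, W) 1 where the warp is reliable, 0 elsewhere
    """
    b, _, h, w = relit_curr.shape

    # Base pixel grid, stored in (x, y) channel order.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=flow.device, dtype=flow.dtype),
        torch.arange(w, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)

    # Displace by the flow and normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)

    warped_prev = F.grid_sample(relit_prev, sample_grid, align_corners=True)

    # L1 difference between the warped previous frame and the current frame.
    return (valid_mask * (warped_prev - relit_curr).abs()).mean()
```

In sketches like this, the validity mask is typically obtained from a forward-backward flow consistency check so that occluded or poorly tracked pixels do not dominate the penalty; the abstract itself only specifies that consistency is enforced, not how.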