We present a method to separate a single image captured under two illuminants with different spectra into the two images corresponding to the scene's appearance under each individual illuminant. We do this by training a deep neural network to predict the per-pixel reflectance chromaticity of the scene, which we use in conjunction with a previous flash/no-flash image-based separation algorithm to produce the final two output images. We design our reflectance chromaticity network and loss functions by incorporating intuitions from the physics of image formation. We show that this leads to significantly better performance than other single-image techniques and even approaches the quality of the two-image separation method.
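As a rough illustration of the idea (not the architecture or losses used in the paper), the sketch below shows a toy fully-convolutional network that maps an RGB image to a per-pixel reflectance chromaticity, with a softmax over channels so the three values sum to one at every pixel. The class name ChromaticityNet, the layer widths, and the plain L1 supervision are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChromaticityNet(nn.Module):
    """Toy fully-convolutional net: RGB image -> per-pixel reflectance chromaticity."""

    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, rgb):
        # Softmax over the channel axis keeps the output a valid chromaticity
        # (nonnegative, summing to one at every pixel).
        return F.softmax(self.body(rgb), dim=1)


def chromaticity_loss(pred, target):
    """Simple supervised loss against ground-truth reflectance chromaticity;
    physics-based reconstruction terms could be added on top of this."""
    return F.l1_loss(pred, target)


# Example usage (hypothetical shapes):
# net = ChromaticityNet()
# chroma = net(torch.rand(1, 3, 256, 256))   # (1, 3, 256, 256), channels sum to 1
```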
Illuminant Spectra-Based Source Separation Using Flash Photography
Real-world lighting often consists of multiple illuminants with different spectra. Separating and manipulating these illuminants in post-processing is a challenging problem that requires either significant manual input or calibrated scene geometry and lighting. In this work, we leverage a flash/no-flash image pair to analyze and edit scene illuminants based on their spectral differences. We derive a novel physics-based relationship between color variations in the observed flash/no-flash intensities and the spectra and surface shading corresponding to individual scene illuminants. Our technique uses this constraint to automatically separate an image into constituent images lit by each illuminant. This separation can be used to support applications like white balancing, lighting editing, and RGB photometric stereo, where we demonstrate results that outperform state-of-the-art techniques on a wide range of images.
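The following is a simplified numerical sketch of this kind of separation, under strong assumptions: linear (radiometrically calibrated) flash/no-flash images of a static scene, Lambertian reflectance, a known flash color, and known RGB colors for the two ambient illuminants. The paper's method does not require the ambient illuminant colors as input; the function name and arguments here are hypothetical.

```python
import numpy as np


def separate_two_illuminants(no_flash, flash, l1, l2, flash_color, eps=1e-6):
    """Sketch: split a no-flash image into the two images lit by illuminants l1 and l2.

    no_flash, flash : (H, W, 3) linear RGB images of the same static scene.
    l1, l2          : (3,) RGB colors of the two ambient illuminants (assumed known).
    flash_color     : (3,) RGB color of the flash.
    """
    # The flash-only component is reflectance times flash shading times flash color.
    pure_flash = np.clip(flash - no_flash, eps, None)

    # Dividing out the flash color and normalizing leaves the per-pixel
    # reflectance chromaticity (the unknown scalar flash shading cancels).
    refl = pure_flash / flash_color
    refl_chroma = refl / (refl.sum(axis=-1, keepdims=True) + eps)

    # Removing reflectance chromaticity from the no-flash image leaves a per-pixel
    # illumination vector that is a nonnegative combination a1*l1 + a2*l2.
    illum = no_flash / (refl_chroma + eps)

    # Per-pixel 3x2 least squares for the two shading coefficients.
    L = np.stack([l1, l2], axis=1)                              # (3, 2)
    coeffs, *_ = np.linalg.lstsq(L, illum.reshape(-1, 3).T, rcond=None)
    a1, a2 = np.clip(coeffs, 0.0, None).reshape(2, *no_flash.shape[:2])

    # Recombine each illuminant's shading and color with the reflectance chromaticity.
    img1 = refl_chroma * a1[..., None] * l1
    img2 = refl_chroma * a2[..., None] * l2
    return img1, img2
```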
- Award ID(s): 1652569
- PAR ID: 10080875
- Date Published:
- Journal Name: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- Page Range / eLocation ID: 6209 to 6218
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Reconstructing 3D objects in natural environments requires solving the ill-posed problem of geometry, spatially-varying material, and lighting estimation. As such, many approaches impractically restrict capture to a dark environment, use controlled lighting rigs, or use few handheld captures but suffer reduced quality. We develop a method that uses just two smartphone exposures captured in ambient lighting to reconstruct appearance more accurately and practically than baseline methods. Our insight is that we can use a flash/no-flash RGB-D pair to pose an inverse rendering problem using point lighting. This allows efficient differentiable rendering to optimize depth and normals from a good initialization and, in turn, to simultaneously optimize diffuse environment illumination and SVBRDF material. We find that this reduces diffuse albedo error by 25%, specular error by 46%, and normal error by 30% against single- and paired-image baselines that use learning-based techniques. Given that our approach is practical for everyday solid objects, we enable photorealistic relighting for mobile photography and easier content creation for augmented reality.
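For a sense of the point-light term such an inverse-rendering objective could use (not the authors' renderer), below is a minimal Lambertian point-light model with inverse-square falloff; the function name, the diffuse-only shading, and the flash-at-camera assumption are simplifications.

```python
import numpy as np


def render_point_light(albedo, normals, points, light_pos, light_intensity=1.0):
    """Minimal Lambertian point-light render, a stand-in for the flash term.

    albedo    : (H, W, 3) diffuse albedo.
    normals   : (H, W, 3) unit surface normals.
    points    : (H, W, 3) 3D surface positions (e.g., back-projected from depth).
    light_pos : (3,) flash position (here assumed near the camera).
    """
    to_light = light_pos - points                                  # (H, W, 3)
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)          # squared distance
    l_dir = to_light / np.sqrt(dist2 + 1e-8)                       # unit light direction

    # Lambertian shading with inverse-square falloff from the point light.
    n_dot_l = np.clip(np.sum(normals * l_dir, axis=-1, keepdims=True), 0.0, None)
    return albedo * light_intensity * n_dot_l / (dist2 + 1e-8)
```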
Agaian, Sos S.; Jassim, Sabah A.; DelMarco, Stephen P.; Asari, Vijayan K. (Ed.) Recognizing the model of a vehicle in natural scene images is an important and challenging task for real-life applications. Current methods perform well under controlled conditions, such as frontal and horizontal view-angles or optimal lighting. Nevertheless, their performance decreases significantly in unconstrained environments that may include extreme darkness or over-illuminated conditions. Other challenges to recognition systems include input images with very low visual quality or considerably low exposure levels. This paper strives to improve vehicle model recognition accuracy in dark scenes by using a deep neural network model. To boost recognition performance, the approach performs joint enhancement and localization of vehicles under non-uniform lighting conditions. Experimental results on several public datasets demonstrate the generality and robustness of our framework. It improves the vehicle detection rate under poor lighting conditions, localizes objects of interest, and yields better vehicle model recognition accuracy on low-quality input image data. Grants: This work is supported by the US Department of Transportation, Federal Highway Administration (FHWA), grant contract: 693JJ320C000023. Keywords—Image enhancement, vehicle model and …
Object detection and semantic segmentation are two of the most widely adopted deep learning algorithms in agricultural applications. One of the major sources of variability in image quality acquired outdoors for such tasks is changing lighting conditions, which can alter the appearance of objects or the contents of the entire image. While transfer learning and data augmentation reduce, to some extent, the need for large amounts of data to train deep neural networks, the large variety of cultivars and the lack of shared datasets in agriculture make wide-scale field deployments difficult. In this paper, we present a high-throughput, robust, active-lighting-based camera system that generates consistent images under all lighting conditions. We detail experiments showing that this consistency in image quality means relatively fewer images are needed to train deep neural networks for object detection. We further present results from field experiments under extreme lighting conditions, where images captured without active lighting fail to provide consistent results. The experimental results show that, on average, deep networks for object detection trained on consistent data required nearly four times less data to achieve a similar level of accuracy. This proposed work could potentially provide pragmatic solutions to computer vision needs in agriculture.
This record contains the spectra and photometry data used for the paper "A shock flash breaking out of a dusty red supergiant".
- [MJD]_[Band]_[Instrument]_[stacked number x exposure time of a single image].fits: images of SN2023ixf after reduction
- code.zip: code to fit the hybrid model

