Face recognition (FR) systems are fast becoming ubiquitous. However, differential performance across certain demographics has been identified in several widely used FR models, and the subject's skin tone is an important factor in addressing it. Previous work has either used modeling methods to propose skin tone measures across different illuminations or relied on subjective labels of skin color and demographic information. Such models, however, depend heavily on consistent backgrounds and lighting for calibration, or on labeled datasets that are time-consuming to generate or simply unavailable. In this work, we develop a novel, data-driven skin color measure capable of accurately representing a subject's skin tone from a single image, without requiring a consistent background or illumination. Our measure leverages the dichromatic reflection model in RGB space to decompose skin patches into diffuse and specular bases.
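The diffuse/specular split described above can be sketched as a least-squares projection of RGB pixels onto two basis vectors: a skin-chromaticity (diffuse) direction and an illuminant (specular) direction. This is a minimal illustration with hypothetical basis vectors, not the paper's actual estimation procedure:

```python
import numpy as np

def decompose_dichromatic(pixels, diffuse_basis, specular_basis):
    """Express each RGB pixel as d*diffuse + s*specular via least squares.

    pixels: (N, 3) array of RGB values in [0, 1].
    Returns (d, s) coefficient arrays of shape (N,).
    """
    B = np.stack([diffuse_basis, specular_basis], axis=1)   # (3, 2) basis matrix
    coeffs, *_ = np.linalg.lstsq(B, pixels.T, rcond=None)   # (2, N) coefficients
    return coeffs[0], coeffs[1]

# Hypothetical bases: a reddish skin chromaticity and a neutral (white) illuminant.
diffuse = np.array([0.6, 0.4, 0.3]); diffuse /= np.linalg.norm(diffuse)
specular = np.array([1.0, 1.0, 1.0]); specular /= np.linalg.norm(specular)

# A pure mixture, purely for illustration: 0.8 diffuse + 0.2 specular.
pixels = 0.8 * diffuse + 0.2 * specular
d, s = decompose_dichromatic(pixels[None, :], diffuse, specular)
```

With real skin patches, the diffuse coefficient `d` (tied to the skin-chromaticity basis) would carry the tone information, while `s` absorbs illumination-dependent highlights.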
CLTS-GAN: Color-Lighting-Texture-Specular Reflection Augmentation for Colonoscopy
Automated analysis of optical colonoscopy (OC) video frames (to assist endoscopists during OC) is challenging due to variations in color, lighting, texture, and specular reflections. Previous methods either remove some of these variations via preprocessing (making pipelines cumbersome) or add diverse training data with annotations (but expensive and time-consuming). We present CLTS-GAN, a new deep learning model that gives fine control over color, lighting, texture, and specular reflection synthesis for OC video frames. We show that adding these colonoscopy-specific augmentations to the training data can improve state-of-the-art polyp detection/segmentation methods as well as drive the next generation of OC simulators for training medical students. The code and pre-trained models for CLTS-GAN are available on the Computational Endoscopy Platform GitHub (https://github.com/nadeemlab/CEP).
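CLTS-GAN learns these factors from data; as a crude, non-learned analogue, the same four axes of variation (color, lighting, texture via noise, specular reflection) can be jittered directly. The ranges and blob parameters below are arbitrary assumptions for illustration only:

```python
import numpy as np

def augment_frame(img, rng):
    """Crude hand-crafted analogue of color/lighting/specular augmentation.

    img: (H, W, 3) float array in [0, 1].
    """
    gamma = rng.uniform(0.7, 1.4)            # lighting change (gamma curve)
    gains = rng.uniform(0.8, 1.2, size=3)    # per-channel color shift
    out = np.clip((img ** gamma) * gains, 0.0, 1.0)

    # Paste one synthetic specular highlight: a bright Gaussian blob.
    h, w = img.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))
    return np.clip(out + blob[..., None], 0.0, 1.0)
```

A GAN-based approach like CLTS-GAN instead learns realistic, anatomy-consistent versions of these perturbations from real colonoscopy frames.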
- Award ID(s):
- 1650499
- PAR ID:
- 10399956
- Date Published:
- Journal Name:
- Springer
- ISSN:
- 0947-5427
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Recovering 3D face models from in-the-wild face images has numerous potential applications. However, properly modeling complex real-world lighting effects, including specular lighting, shadows, and occlusions, from a single in-the-wild face image remains an open research challenge. In this paper, we propose a convolutional neural network based framework to regress the face model from a single image in the wild. The output face model includes dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions. Our approach uses novel hybrid loss functions to disentangle face shape identities, expressions, poses, albedos, and lighting. Besides a carefully designed ablation study, we also conduct direct comparison experiments to show that our method can outperform state-of-the-art methods both quantitatively and qualitatively.
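The paper's exact hybrid losses are not reproduced in this abstract; a generic loss of this kind typically combines a photometric term, a landmark term, and priors on the disentangled codes. The weights and term choices below are illustrative assumptions:

```python
import numpy as np

def hybrid_loss(rendered, target, pred_lmk, gt_lmk, codes, weights):
    """Illustrative weighted sum of a photometric term, a landmark term,
    and L2 priors on the latent codes (shape, expression, albedo, ...).
    """
    photo = np.mean((rendered - target) ** 2)                 # image reconstruction
    lmk = np.mean(np.sum((pred_lmk - gt_lmk) ** 2, axis=-1))  # 2D landmark alignment
    reg = sum(np.sum(c ** 2) for c in codes)                  # keeps codes near the prior
    return weights[0] * photo + weights[1] * lmk + weights[2] * reg
```

Balancing such terms is what lets a regressor attribute image evidence to the correct factor (shape vs. albedo vs. lighting) rather than collapsing them into one.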
-
Bendinskas, Kestutis; Contento, Tony; Newell, Peter (Ed.) There are various color correction techniques that can be applied to digital photographs to account for environmental lighting variations. This manuscript proposes one such method: saturating a specified percentage of an image's pixels via upper- and lower-percentage histogram manipulation of its RGB histograms. Variations of this new technique, the white balance (WB) correction method, and a multivariable fit are used to test its performance against common color correction techniques. The findings demonstrate that the upper- and lower-percentage histogram manipulation method is not only more broadly applicable, because it does not require calibration regions to be sampled, but also more consistent in correcting photos that contain substantial grayscale features (e.g., a black-and-white grid or text). Our motivation for testing these techniques is to find the most robust color correction technique that is broadly applicable (not requiring a color checker chart) and consistent across different lighting. KEYWORDS: Color Correction; Histogram Manipulation; Saturation; White Balance; Scientific Image Analysis; Color Comparisons; Euclidean Distance; Standard Deviation; Color Difference
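Upper/lower-percentage histogram saturation can be sketched per channel: clip each RGB channel at its lower and upper percentiles, then linearly stretch to the full range. The default percentiles here are an assumption, not values from the paper:

```python
import numpy as np

def percentile_color_correct(img, lower_pct=1.0, upper_pct=99.0):
    """Saturate the lower/upper percentiles of each RGB histogram, then rescale.

    img: (H, W, 3) float array. Each channel is clipped at its lower and upper
    percentiles and linearly stretched to [0, 1].
    """
    out = np.empty_like(img, dtype=np.float64)
    for c in range(3):
        lo = np.percentile(img[..., c], lower_pct)
        hi = np.percentile(img[..., c], upper_pct)
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return out
```

Because each channel is stretched independently, a color cast from the illuminant (e.g. a yellowish lamp) is reduced without sampling any calibration region.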
-
Sophisticated generative adversarial network (GAN) models can now synthesize highly realistic human faces that are difficult to distinguish from real ones visually. In this work, we show that GAN-synthesized faces can be exposed through inconsistent corneal specular highlights between the two eyes. The inconsistency is caused by the lack of physical/physiological constraints in the GAN models. We show that such artifacts exist widely in high-quality GAN-synthesized faces and further describe an automatic method to extract and compare corneal specular highlights from the two eyes. Qualitative and quantitative evaluations of our method suggest its simplicity and effectiveness in distinguishing GAN-synthesized faces.
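The extract-and-compare idea can be sketched with a simple thresholded highlight mask per eye and an IoU-style consistency score; the paper's actual extraction and alignment steps are more involved, and the threshold here is an assumption:

```python
import numpy as np

def highlight_mask(eye_patch, thresh=0.9):
    """Binary mask of the brightest pixels: a crude specular-highlight detector."""
    intensity = eye_patch.mean(axis=-1)   # grayscale intensity per pixel
    return intensity > thresh

def highlight_iou(left_eye, right_eye, thresh=0.9):
    """IoU of the two eyes' highlight masks: low values flag inconsistency."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # no highlight in either eye: treat as consistent
    return np.logical_and(a, b).sum() / union
```

For a real face lit by one source, both corneas reflect that source in roughly corresponding positions, so the score is high; GAN faces often break this physical constraint.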
-
Cloud detection is an essential pre-processing step in remote sensing image analysis workflows. Most traditional rule-based and machine-learning-based algorithms utilize low-level features of the clouds and classify individual cloud pixels based on their spectral signatures. Cloud detection using such approaches can be challenging due to a multitude of factors, including harsh lighting conditions, the presence of thin clouds, the context of surrounding pixels, and complex spatial patterns. In recent studies, deep convolutional neural networks (CNNs) have shown outstanding results in the computer vision domain; these methods better capture the texture, shape, and context of images. In this study, we propose a deep learning CNN approach to detect cloud pixels in medium-resolution satellite imagery. The proposed CNN accounts for both low-level features, such as color and texture information, and high-level features extracted from successive convolutions of the input image. We prepared a cloud-pixel dataset of 7273 randomly sampled 320-by-320-pixel image patches taken from a total of 121 Landsat-8 (30 m) and Sentinel-2 (20 m) image scenes. These satellite images come with cloud masks. From the available data channels, only the blue, green, red, and NIR bands are fed into the model. The CNN model was trained on 5300 image patches and validated on 1973 independent image patches. As the final output from our model, we extract a binary mask of cloud pixels and non-cloud pixels. The results are benchmarked against established cloud detection methods using standard accuracy metrics.
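The data-preparation step described above (random 320×320 patches from 4-band scenes with aligned cloud masks) can be sketched as follows; the function and its signature are illustrative, not from the paper:

```python
import numpy as np

def sample_patches(scene, mask, n, size=320, rng=None):
    """Randomly sample (patch, mask) training pairs from one satellite scene.

    scene: (H, W, 4) array holding the blue, green, red, and NIR bands;
    mask:  (H, W) binary cloud mask aligned with the scene.
    """
    rng = rng or np.random.default_rng()
    h, w = mask.shape
    patches, masks = [], []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        patches.append(scene[y:y + size, x:x + size])
        masks.append(mask[y:y + size, x:x + size])
    return np.stack(patches), np.stack(masks)
```

A segmentation CNN would then be trained on these `(n, 320, 320, 4)` inputs to predict the corresponding `(n, 320, 320)` binary cloud masks.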