Title: Deep Cross Polarimetric Thermal-to-Visible Face Recognition
In this paper, we present a deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces. Polarization-state information of thermal faces provides textural and geometric details that are missing from conventional thermal face imagery but present in the visible spectrum. We propose a coupled deep neural network architecture that leverages relatively large visible and thermal datasets to overcome overfitting, and then train it on a polarimetric thermal face dataset, the first of its kind. Compared with conventional shallow thermal-to-visible face recognition methods, the proposed architecture makes full use of the polarimetric thermal information to train a deep model. The proposed coupled network also finds global discriminative features in a nonlinear embedding space that relate the polarimetric thermal faces to their corresponding visible faces. The results show the superiority of our method over state-of-the-art cross thermal-to-visible face recognition algorithms.
Award ID(s):
1650474 1066197
PAR ID:
10053533
Author(s) / Creator(s):
Date Published:
Journal Name:
The 11th IAPR International Conference on Biometrics (ICB 2018)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
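
To make the coupled-learning framework described in the abstract above concrete, here is a minimal sketch, assuming a contrastive coupling loss, of how two modality-specific networks can map polarimetric-thermal and visible faces into a shared embedding space where genuine cross-spectrum pairs sit close together. The layer sizes, the three-channel stacking of Stokes images, and the margin value are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a coupled cross-spectrum embedding network (PyTorch).
# Layer sizes, Stokes-image stacking, and the margin are assumptions for
# illustration; the paper's exact architecture is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small CNN mapping a face image to an L2-normalized embedding."""
    def __init__(self, in_channels, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)

# One branch per modality; the polarimetric branch is assumed to stack the
# Stokes images (S0, S1, S2) as three input channels.
visible_net = EmbeddingNet(in_channels=3)
thermal_net = EmbeddingNet(in_channels=3)

def coupling_loss(z_vis, z_th, same_subject, margin=1.0):
    """Contrastive loss: pull genuine cross-spectrum pairs together and
    push impostor pairs at least `margin` apart in the shared space."""
    d = (z_vis - z_th).pow(2).sum(dim=1).sqrt()
    pos = same_subject * d.pow(2)                         # genuine pairs
    neg = (1 - same_subject) * F.relu(margin - d).pow(2)  # impostor pairs
    return (pos + neg).mean()
```

At test time, a thermal probe's embedding would be compared against the gallery's visible embeddings by Euclidean or cosine distance.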
More Like this
  1. While spatial information and biases have been consistently reported in high-level face regions, the functional contribution of this information toward face recognition behavior is unclear. Here, we propose that spatial integration of information plays a critical role in a hallmark phenomenon of face perception: holistic processing, or the tendency to process all features of a face concurrently rather than independently. We sought to gain insight into the neural basis of face recognition behavior by using a voxelwise encoding model of spatial selectivity to characterize the human face network with both typical face stimuli and stimuli thought to disrupt normal face perception. We mapped population receptive fields (pRFs) with 3T fMRI in 6 participants using upright as well as inverted faces, which are thought to disrupt holistic processing. Compared to upright faces, inverted faces yielded substantial differences in measured pRF size, position, and amplitude. Further, these differences increased in magnitude along the face network hierarchy, from IOG- to pFus- and mFus-faces. These data suggest that pRFs in high-level regions reflect complex stimulus-dependent neural computations that underlie variations in recognition performance.
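    For context, a voxel's pRF is conventionally modeled as a 2D isotropic Gaussian with a position, a size, and an amplitude, and the predicted response is the overlap of the stimulus aperture with that Gaussian. The sketch below shows this standard formulation; the study's exact fitting pipeline is not specified in the abstract, so the parameter names here are generic.

```python
# Standard 2D-Gaussian pRF forward model: predicted response = amplitude
# times the overlap of the stimulus aperture with the Gaussian field.
import numpy as np

def prf_response(stimulus, xs, ys, x0, y0, sigma, beta):
    """stimulus: (T, H, W) binary aperture over time.
    xs, ys: (H, W) visual-field coordinates of each pixel (degrees).
    x0, y0: pRF center; sigma: pRF size; beta: amplitude.
    Returns the (T,) predicted response time course."""
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return beta * np.tensordot(stimulus, gauss, axes=([1, 2], [0, 1]))
```

    Fitting typically grid-searches (x0, y0, sigma) and solves for beta so the model best predicts each voxel's measured time course; differences in the fitted parameters between upright and inverted faces are what the study reports.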
  2. In recent years, face recognition systems have achieved exceptional success due to promising advances in deep learning architectures. However, they still fail to achieve the expected accuracy when matching profile images against a gallery of frontal images. Current approaches either perform pose normalization (i.e., frontalization) or disentangle pose information for face recognition. We instead propose a new approach that utilizes pose as auxiliary information via an attention mechanism. In this paper, we hypothesize that pose-attended information obtained through an attention mechanism can guide contextual and distinctive feature extraction from profile faces, which further benefits representation learning in an embedded domain. To achieve this, first, we design a unified coupled profile-to-frontal face recognition network. It learns the mapping from faces to a compact embedding subspace via a class-specific contrastive loss. Second, we develop a novel pose attention block (PAB) to guide pose-agnostic feature extraction from profile faces. More concretely, PAB is designed to explicitly help the network focus on important features along both “channel” and “spatial” dimensions while learning discriminative yet pose-invariant features in an embedding subspace. To validate the effectiveness of our proposed method, we conduct experiments on both controlled and in-the-wild benchmarks, including Multi-PIE, CFP, and IJB-C, and show superiority over the state-of-the-art.
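    The channel-and-spatial attention described above can be sketched as a CBAM-style reduction. The actual PAB additionally conditions on pose information, which the abstract does not specify in enough detail to reproduce; this pose-free version is an illustrative assumption.

```python
# CBAM-style sketch of attention along "channel" and "spatial" dimensions,
# in the spirit of the pose attention block (PAB). The real PAB also injects
# pose information, omitted here as an assumption-level simplification.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):  # x: (B, C, H, W) feature map
        # Channel attention from average- and max-pooled descriptors.
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                           self.mlp(x.amax(dim=(2, 3))))[:, :, None, None]
        x = x * ca
        # Spatial attention from channel-wise mean and max statistics.
        stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.spatial(stats))
```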
  3. Agaian, Sos S.; Jassim, Sabah A. (Eds.)
    Face recognition technologies have been in high demand in the past few decades due to the increase in human-computer interactions. Face recognition is also one of the essential components in interpreting human emotions, intentions, and facial expressions in smart environments. This non-intrusive biometric authentication system relies on identifying unique facial features and pairing alike structures for identification and recognition. Application areas of facial recognition systems include homeland and border security, identification for law enforcement, access control to secure networks, authentication for online banking, and video surveillance. While it is easy for humans to recognize faces under varying illumination conditions, it remains a challenging task in computer vision. Non-uniform illumination and uncontrolled operating environments can impair the performance of visual-spectrum-based recognition systems. To address these difficulties, a novel Anisotropic Gradient Facial Recognition (AGFR) system capable of autonomous thermal-infrared-to-visible face recognition is proposed. The main contributions of this paper include a framework for thermal/fused-thermal-visible to visible face recognition and a novel human-visual-system-inspired thermal-visible image fusion technique. Extensive computer simulations using the CARL, IRIS, AT&T, Yale, and Yale-B databases demonstrate the efficiency, accuracy, and robustness of the AGFR system.
    Keywords: infrared thermal to visible facial recognition, anisotropic gradient, visible-to-visible face recognition, nonuniform illumination face recognition, thermal and visible face fusion method
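    As a heavily hedged illustration of gradient-driven fusion in general, not the AGFR method itself, the sketch below fuses registered thermal and visible face images by weighting each pixel toward the source with the stronger local gradient, so edge detail from either band survives in the fused image.

```python
# Generic gradient-weighted fusion sketch; the paper's anisotropic-gradient,
# HVS-inspired fusion is more elaborate and is NOT reproduced here.
import numpy as np

def gradient_weighted_fusion(visible, thermal, eps=1e-8):
    """visible, thermal: (H, W) registered grayscale images in [0, 1]."""
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)
    gv, gt = grad_mag(visible), grad_mag(thermal)
    w = gv / (gv + gt + eps)            # per-pixel weight toward visible
    return w * visible + (1 - w) * thermal
```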
  4. Abstract: The problem of distinguishing identical twins and non-twin look-alikes in automated facial recognition (FR) applications has become increasingly important with the widespread adoption of facial biometrics. Due to the high facial similarity of both identical twins and look-alikes, these face pairs represent the hardest cases presented to facial recognition tools. This work presents an application of one of the largest twin data sets compiled to date to address two FR challenges: (1) determining a baseline measure of facial similarity between identical twins and (2) applying this similarity measure to determine the impact of doppelgangers, or look-alikes, on FR performance for large face data sets. The facial similarity measure is determined via a deep convolutional neural network. This network is trained on a tailored verification task designed to encourage the network to group together highly similar face pairs in the embedding space and achieves a test AUC of 0.9799. The proposed network provides a quantitative similarity score for any two given faces and has been applied to large-scale face data sets to identify similar face pairs. An additional analysis that correlates the comparison score returned by a facial recognition tool and the similarity score returned by the proposed network has also been performed.
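    The scoring step described above reduces to comparing embeddings: a verification-trained network maps each face to a vector, and the similarity score for any two faces is the cosine between their vectors. The sketch below assumes a generic embedding-network handle; it is not the paper's trained model.

```python
# Sketch of turning a verification-trained embedding network into a scalar
# facial-similarity score. `embed_net` is a placeholder, not the paper's model.
import torch
import torch.nn.functional as F

def similarity_score(embed_net, face_a, face_b):
    """face_a, face_b: (1, 3, H, W) preprocessed face crops.
    Returns cosine similarity in [-1, 1]; higher means more similar."""
    with torch.no_grad():
        za = F.normalize(embed_net(face_a), dim=1)
        zb = F.normalize(embed_net(face_b), dim=1)
    return F.cosine_similarity(za, zb).item()
```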
  5. Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources. 
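    The congruence coefficient used above (Tucker's phi) is a normalized dot product between feature-loading vectors: collinear directions give |phi| = 1 and orthogonal directions give phi = 0, which is why increasing orthogonality across layers reads as disentanglement. A minimal version:

```python
# Tucker's congruence coefficient: phi = sum(x*y) / sqrt(sum(x^2) * sum(y^2)).
import numpy as np

def congruence_coefficient(x, y):
    """x, y: 1-D feature-loading vectors of equal length."""
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```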