Title: Significant Feature Based Representation for Template Protection
The security of biometric templates is of paramount importance. Leakage of biometric information may result in the loss of private data and can lead to the compromise of the biometric system. Yet, the security of templates is often overlooked in favour of performance. In this paper, we present a plug-and-play framework for creating secure face templates with negligible degradation in system performance. We propose a significant-bit-based representation that guarantees security in addition to other biometric requirements such as cancelability and reproducibility. In addition to being scalable, the proposed method does not make unrealistic assumptions about the pose or illumination of the face images. We provide experimental results on two unconstrained datasets: IJB-A and IJB-C.
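The abstract leaves the construction at a high level; the following is a minimal sketch of what a significant-bit style template could look like, assuming the template is derived from a real-valued deep face embedding. The function name, the choice of k, and the seeded permutation used for cancelability are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def significant_bit_template(embedding: np.ndarray,
                             k: int = 128,
                             seed: int = 0) -> np.ndarray:
    """Illustrative sketch: keep only the most significant
    (largest-magnitude) coordinates of a face embedding, binarize their
    signs, and apply a key-dependent permutation so the template can be
    revoked and reissued (cancelability)."""
    rng = np.random.default_rng(seed)             # seed acts as the cancelable key
    idx = np.argsort(-np.abs(embedding))[:k]      # k most significant dimensions
    bits = (embedding[idx] > 0).astype(np.uint8)  # sign bits of those dimensions
    perm = rng.permutation(k)                     # key-dependent scrambling
    return bits[perm]                             # new key -> new, unlinkable template

emb = np.random.default_rng(1).standard_normal(512)  # stand-in for a face embedding
protected = significant_bit_template(emb, k=128, seed=42)
```

Matching would then compare the Hamming distance between two templates produced under the same key, so the raw embedding never needs to be stored.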
Award ID(s): 1822190
PAR ID: 10182823
Author(s) / Creator(s): ; ; ; ;
Date Published:
Journal Name: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Page Range / eLocation ID: 2389 to 2396
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Trusted Execution Environments (TEEs) are becoming ubiquitous and are currently used in many security applications: from personal IoT gadgets to banking and databases. Prominent examples of such architectures are Intel SGX, ARM TrustZone, and Trusted Platform Modules (TPMs). A typical TEE relies on a dynamic Root of Trust (RoT) to provide security services such as code/data confidentiality and integrity, isolated secure software execution, remote attestation, and sensor auditing. Despite their usefulness, there is currently no secure means to determine whether a given security service or task is being performed by the particular RoT within a specific physical device. We refer to this as the Root of Trust Identification (RTI) problem and discuss how it inhibits security for applications such as sensing and actuation. We formalize the RTI problem and argue that the security of RTI protocols is especially challenging due to local adversaries, cuckoo adversaries, and the combination thereof. To cope with this problem, we propose a simple and effective protocol based on biometrics. Unlike biometric-based user authentication, our approach is not concerned with verifying user identity, and requires neither pre-enrollment nor persistent storage for biometric templates. Instead, it takes advantage of the difficulty of cloning a biometric in real time to securely identify the RoT of a given physical device, by using the biometric as a challenge. Security of the proposed protocol is analyzed in the combined Local and Cuckoo adversarial model. A prototype implementation is also used to demonstrate the protocol's feasibility and practicality. We further propose a Proxy RTI protocol, wherein a previously identified RoT assists a remote verifier in identifying new RoTs.
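    The abstract describes using a live biometric as a challenge; a toy sketch of such a challenge-response flow appears below. The function names, the HMAC-style binding, and modeling the RoT as a keyed signer are assumptions for illustration, not the paper's protocol.

```python
import hashlib
import hmac
import os
import secrets

def rti_challenge_response(capture_biometric, rot_sign):
    """Hypothetical flow: the verifier presents a fresh biometric sample as
    a challenge, and the device's Root of Trust must bind its response to
    that exact sample before the biometric could be cloned in real time."""
    nonce = secrets.token_bytes(16)                   # freshness for this run
    sample = capture_biometric()                      # live biometric is the challenge
    challenge = hashlib.sha256(nonce + sample).digest()
    response = rot_sign(challenge)                    # RoT binds itself to the challenge
    return nonce, sample, response                    # verifier re-derives and checks

# Toy demo: the "RoT" here is just an HMAC key held by the device.
key = os.urandom(32)
nonce, sample, resp = rti_challenge_response(
    capture_biometric=lambda: b"face-scan-bytes",     # stand-in for a live capture
    rot_sign=lambda c: hmac.new(key, c, "sha256").digest())
expected = hmac.new(key, hashlib.sha256(nonce + sample).digest(), "sha256").digest()
assert hmac.compare_digest(resp, expected)
```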
  2. Agaian, Sos S.; Jassim, Sabah A. (Ed.)
    Face recognition technologies have been in high demand over the past few decades due to the increase in human-computer interaction. Face recognition is also one of the essential components in interpreting human emotions, intentions, and facial expressions in smart environments. This non-intrusive biometric authentication approach relies on identifying unique facial features and matching similar structures for identification and recognition. Application areas of facial recognition systems include homeland and border security, identification for law enforcement, access control to secure networks, authentication for online banking, and video surveillance. While it is easy for humans to recognize faces under varying illumination conditions, it remains a challenging task in computer vision. Non-uniform illumination and uncontrolled operating environments can impair the performance of visual-spectrum-based recognition systems. To address these difficulties, a novel Anisotropic Gradient Facial Recognition (AGFR) system capable of autonomous thermal-infrared-to-visible face recognition is proposed. The main contributions of this paper include a framework for thermal/fused-thermal-visible-to-visible face recognition and a novel human-visual-system-inspired thermal-visible image fusion technique. Extensive computer simulations using the CARL, IRIS, AT&T, Yale and Yale-B databases demonstrate the efficiency, accuracy, and robustness of the AGFR system. Keywords: infrared thermal to visible facial recognition, anisotropic gradient, visible-to-visible face recognition, non-uniform illumination face recognition, thermal and visible face fusion method
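    The fusion technique itself is not specified in the abstract; the sketch below shows one simple gradient-driven thermal-visible fusion, loosely inspired by the anisotropic-gradient idea, in which the modality with stronger local structure dominates each pixel. This is an assumption-laden illustration, not the AGFR algorithm.

```python
import numpy as np

def gradient_weighted_fusion(thermal: np.ndarray,
                             visible: np.ndarray) -> np.ndarray:
    """Blend two registered, same-shape grayscale images, weighting each
    pixel by local gradient magnitude so the modality with more local
    structure contributes more."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gx, gy)                   # per-pixel edge strength
    wt, wv = grad_mag(thermal), grad_mag(visible)
    w = wv / (wt + wv + 1e-8)                     # visible weight in [0, 1]
    return w * visible + (1.0 - w) * thermal      # pixel-wise fused image
```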
  3.
    In biometric systems, the process of identifying or verifying people using facial data must be highly accurate to ensure a high level of security and credibility. Many researchers have investigated the fairness of face recognition systems and reported demographic bias. However, there has been little study of bias in face presentation attack detection (PAD) technology. This research sheds light on bias in face spoofing detection through two phases of experiments. First, two CNN (convolutional neural network)-based presentation attack detection models, ResNet50 and VGG16, were used to evaluate the fairness of detecting impostor attacks with respect to gender. In addition, different sizes of Spoof in the Wild (SiW) testing and training data were used in the first phase to study the effect of gender distribution on the models' performance. Second, the debiasing variational autoencoder (DB-VAE) (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) was applied in combination with VGG16 to assess its ability to mitigate bias in presentation attack detection. Our experiments exposed minor gender bias in CNN-based presentation attack detection methods. They also showed that imbalance in training and testing data does not necessarily lead to gender bias in the model's performance. Finally, the results showed that the DB-VAE approach succeeded in mitigating bias in detecting spoofed faces.
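    For readers unfamiliar with the cited DB-VAE, the sketch below captures its resampling idea: estimate how densely populated each region of the learned latent space is, then upweight rare samples so under-represented faces are seen more often during training. The per-dimension histogram and bin count are simplifying assumptions; the original paper uses a smoothed density estimate.

```python
import numpy as np

def debias_sampling_weights(latents: np.ndarray, bins: int = 10) -> np.ndarray:
    """Turn latent codes (n_samples x n_dims) into a sampling distribution
    that favours under-represented regions of the latent space."""
    n, d = latents.shape
    weights = np.ones(n)
    for j in range(d):                            # treat dimensions independently
        hist, edges = np.histogram(latents[:, j], bins=bins)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, bins - 1)
        weights *= 1.0 / (hist[idx] + 1)          # rarer bin -> larger weight
    return weights / weights.sum()                # normalized sampling weights
```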
  4. Facial Recognition Systems (FRS) have become one of the most viable biometric identity authentication approaches in supervised and unsupervised applications. However, FRSs are known to be vulnerable to adversarial attacks such as identity theft and presentation attacks. The master face dictionary attacks (MFDA) leveraging multiple enrolled face templates have posed a notable threat to FRS. Federated learning-based FRS deployed on edge or mobile devices are particularly vulnerable to MFDA due to the absence of robust MF detectors. To mitigate the MFDA risks, we propose a trustworthy authentication system against visual MFDA (Trauma). Trauma leverages the analysis of specular highlights on diverse facial components and physiological characteristics inherent to human faces, exploiting the inability of existing MFDAs to replicate reflective elements accurately. We have developed a feature extractor network that employs a lightweight and low-latency vision transformer architecture to discern inconsistencies among specular highlights and physiological features in facial imagery. Extensive experimentation has been conducted to assess Trauma’s efficacy, utilizing public GAN-face detection datasets and mobile devices. Empirical findings demonstrate that Trauma achieves high detection accuracy, ranging from 97.83% to 99.56%, coupled with rapid detection speeds (less than 11 ms on mobile devices), even when confronted with state-of-the-art MFDA techniques. 
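    As a rough illustration of the cue Trauma reportedly exploits: specular highlights appear as very bright, low-saturation pixels, which master-face attacks tend not to reproduce consistently. The thresholds and this simple HSV-style rule below are assumptions for illustration, not Trauma's transformer-based detector.

```python
import numpy as np

def specular_highlight_mask(rgb: np.ndarray,
                            brightness_thresh: float = 0.9,
                            saturation_thresh: float = 0.15) -> np.ndarray:
    """Return a boolean mask of candidate specular-highlight pixels in an
    8-bit HxWx3 RGB image: bright and nearly colorless."""
    img = rgb.astype(np.float64) / 255.0
    mx = img.max(axis=2)                                   # HSV value channel
    mn = img.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / (mx + 1e-8), 0.0)   # HSV saturation
    return (mx > brightness_thresh) & (sat < saturation_thresh)
```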
  5. Biometric recognition is used across a variety of applications, from cyber security to border security. Recent research has focused on ensuring that biometric performance (false negatives and false positives) is fair across demographic groups. While there has been significant progress on the development of metrics, the evaluation of performance across groups, and the mitigation of any problems, there has been little work incorporating statistical variation. This matters because differences among groups can be found by chance when no true difference is present; in statistics this is called a Type I error. Differences among groups may be due to sampling variation, or they may be due to actual differences in system performance, and discriminating between these two sources of error is essential for good decision making about fairness and equity. This paper presents two novel statistical approaches for assessing fairness across demographic groups. The first is a bootstrap-based hypothesis test, while the second is a simpler methodology aimed at a non-statistical audience. For the latter, we present the results of a simulation study of the relationship between the margin of error and factors such as the number of subjects, number of attempts, correlation between attempts, underlying false non-match rates (FNMRs), and number of groups.
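    A minimal sketch of a bootstrapped fairness test in the spirit of this abstract is shown below: resample each group's genuine-comparison scores, recompute the FNMR gap between groups, and read off a confidence interval. An interval that excludes 0 suggests a real performance difference rather than sampling variation. Details such as the resampling unit and correlation between attempts are simplified relative to the paper.

```python
import numpy as np

def bootstrap_fnmr_gap(scores_a, scores_b, threshold,
                       n_boot=2000, seed=0):
    """95% bootstrap confidence interval for FNMR_a - FNMR_b, where each
    input is an array of genuine-comparison similarity scores for one
    demographic group and scores below threshold are false non-matches."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    gaps = np.empty(n_boot)
    for i in range(n_boot):
        ra = rng.choice(a, size=a.size, replace=True)   # resample group A
        rb = rng.choice(b, size=b.size, replace=True)   # resample group B
        gaps[i] = np.mean(ra < threshold) - np.mean(rb < threshold)
    return np.percentile(gaps, [2.5, 97.5])             # CI for the FNMR gap
```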