Many fingerprint recognition systems capture four fingerprints in one image. In such systems, the fingerprint processing pipeline must first segment each four-fingerprint slap into individual fingerprints. Note that most current fingerprint segmentation algorithms have been designed and evaluated using only adult fingerprint datasets. In this work, we developed a human-annotated in-house dataset of 15,790 slaps, of which 9,084 are samples from adults and 6,706 are samples from children aged 4 to 12. We then used the dataset to evaluate the matching performance of NFSEG, a slap fingerprint segmentation system developed by NIST, on slaps from adult and juvenile subjects. Our results reveal the lower performance of NFSEG on slaps from juvenile subjects. Finally, we utilized our novel dataset to develop the Mask R-CNN-based Clarkson Fingerprint Segmentation (CFSEG) model. Our matching results using the Verifinger fingerprint matcher indicate that CFSEG outperforms NFSEG for both adult and juvenile slaps. The CFSEG model is publicly available at \url{this https URL}
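As a hedged illustration of the approach the abstract describes, the sketch below fine-tunes a torchvision Mask R-CNN for slap segmentation. The two-class labeling (background vs. fingerprint), the pretrained weights, and the input sizes are assumptions for illustration, not CFSEG's released configuration.

```python
# Minimal sketch: adapting a Mask R-CNN for slap fingerprint
# segmentation, in the spirit of CFSEG. Class count and all
# hyperparameters are illustrative assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + fingerprint (assumed labeling)

def build_slap_segmenter(num_classes: int = NUM_CLASSES):
    # Start from a COCO-pretrained Mask R-CNN backbone.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head for our class count.
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    # Replace the mask head as well.
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model

model = build_slap_segmenter()
model.eval()
with torch.no_grad():
    # One fake grayscale slap replicated to 3 channels.
    slap = torch.rand(3, 600, 800)
    detections = model([slap])[0]
    # Each detection: a bounding box, a per-pixel mask, and a score,
    # i.e. one candidate fingerprint region per detection.
    print(detections["boxes"].shape, detections["masks"].shape)
```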
This content will become publicly available on September 23, 2026
Online Proctoring System with Fingerprint and Eye Recognition Using Siamese Network
The increasing number of online courses and programs available worldwide has elevated the importance of reliable online exam proctoring. The typical proctoring process relies on webcam surveillance. However, this traditional method is vulnerable to the numerous types of face occlusion worn for religious or other reasons. We present a robust biometric authentication framework that combines advanced eye and face recognition into a much stronger live proctoring system, further augmented by fingerprinting. We use cutting-edge deep learning techniques, specifically a Siamese network for fingerprint analysis and a ResNet-based eye recognition model tested with and without Gabor filters. Our system offers substantially better performance than previously existing models. Notably, it maintains high accuracy: approximately 98.04% for custom eye recognition, 99.01% on a publicly available labeled-faces dataset, 82% on a niqab dataset, and 87.04% on a publicly available fingerprint dataset. Moreover, our model demonstrates a 10–20% improvement in face recognition under occlusion. Our solution is highly effective not only for online proctoring but also for similar situations such as employee authentication for remote presence verification. The system supports scenarios ranging from partial occlusions, such as masks and sunglasses, to full occlusions with veils, without requiring any additional hardware.
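To make the matcher architecture concrete, here is a minimal PyTorch sketch of a Siamese fingerprint network trained with a contrastive loss. The ResNet-18 backbone, 128-dimensional embedding, and margin value are illustrative assumptions rather than the paper's exact design.

```python
# Illustrative Siamese fingerprint matcher: two inputs share one
# embedding network; genuine pairs are pulled together and impostor
# pairs pushed apart by a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class SiameseFingerprint(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalized embedding so pairwise distances are comparable.
        return F.normalize(self.backbone(x), dim=1)

    def forward(self, a, b):
        return self.embed(a), self.embed(b)

def contrastive_loss(za, zb, same: torch.Tensor, margin: float = 1.0):
    # same = 1 for genuine pairs, 0 for impostor pairs.
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = SiameseFingerprint()
a, b = torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224)
za, zb = model(a, b)
loss = contrastive_loss(za, zb, torch.tensor([1.0, 0.0, 1.0, 0.0]))
```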
- Award ID(s): 2318574
- PAR ID: 10649912
- Publisher / Repository: World Scientific Publishing Company
- Date Published:
- Journal Name: Journal of Circuits, Systems and Computers
- ISSN: 0218-1266
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Traditional fingerprint authentication requires acquiring data through touch-based specialized sensors. However, hygienic concerns, including the possible spread of the COVID-19 virus through contact with shared surfaces, have led to increased interest in contactless fingerprint image acquisition methods. Matching fingerprints acquired through contactless imaging against contact-based images raises the problem of cross-modal fingerprint matching for identity verification. In this paper, we propose a cost-effective, highly accurate, and secure end-to-end contactless fingerprint recognition solution. The proposed framework first segments the finger region from an image of the hand captured with a mobile phone camera. For this purpose, we developed a cross-platform mobile application for fingerprint enrollment, verification, and authentication, designed with security, robustness, and accessibility in mind. The segmented finger images then undergo fingerprint enhancement to highlight discriminative ridge-based features. A novel deep convolutional network is proposed to learn a representation from the enhanced images by optimizing various losses. The proposed algorithms for each stage are evaluated on multiple publicly available contactless databases. Our matching accuracy and the security measures employed in the system establish the strength of the proposed framework.
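The enhancement stage lends itself to a short sketch: below, local contrast normalization (CLAHE) is followed by a bank of oriented Gabor filters to emphasize ridge structure. The kernel size, frequency, and orientation count are assumed values, not the paper's tuned parameters.

```python
# Hedged sketch of ridge enhancement for contactless finger images:
# contrast normalization, then the strongest response over a bank of
# oriented Gabor filters. All filter parameters are illustrative.
import cv2
import numpy as np

def enhance_ridges(gray: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    # Normalize local contrast first (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    norm = clahe.apply(gray)
    # Filter at several orientations; ridges respond where the kernel
    # orientation matches the local ridge flow.
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(norm, cv2.CV_32F, kernel))
    # Keep the strongest oriented response per pixel.
    enhanced = np.max(np.stack(responses), axis=0)
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage: enhanced = enhance_ridges(cv2.imread("finger.png", cv2.IMREAD_GRAYSCALE))
```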
Ear wearables (earables) are emerging platforms that are being broadly adopted in various applications. There is an increasing demand for robust earable authentication because of the growing amount of sensitive information, and the growing number of IoT devices, that earables can access. Traditional authentication methods become less feasible due to the limited input interface of earables. Nevertheless, the rich head-related sensing capabilities of earables can be exploited to capture human biometrics. In this paper, we propose EarSlide, an earable biometric authentication system that utilizes the advanced sensing capacities of earables and the distinctive features of the acoustic fingerprints produced when users slide their fingers on their face. It uses the inward-facing microphone of the earable and the face-ear channel of the ear canal to reliably capture the acoustic fingerprint. In particular, we study the theory of friction sound and categorize the characteristics of acoustic fingerprints into three representative classes: pattern-class, ridge-groove-class, and coupling-class. Unlike traditional fingerprint authentication, which utilizes only 2D patterns, we incorporate 3D information into the acoustic fingerprint and sense the fingerprint indirectly for authentication. We then design representative sliding gestures that carry rich information about the acoustic fingerprint while being easy to perform, and extract multi-class acoustic fingerprint features that reflect the inherent fingerprint characteristics for authentication. We also adopt an adaptable authentication model and a user behavior mitigation strategy to effectively distinguish legitimate users from adversaries. The key advantages of EarSlide are its resistance to spoofing attacks and its wide acceptability. Our evaluation of EarSlide in diverse real-world environments, over intervals of more than one year, shows that it achieves an average balanced accuracy of 98.37% with only one sliding gesture.
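As a rough illustration of turning a finger-slide recording into features, the sketch below extracts frame-level MFCCs with librosa and summarizes their trajectories. The feature choice and window settings are assumptions; the paper defines its own pattern-, ridge-groove-, and coupling-class features.

```python
# Sketch: spectral features from a finger-slide recording, as one
# might compute for acoustic-fingerprint authentication. MFCCs and
# the frame sizes below are illustrative stand-ins.
import numpy as np
import librosa

def slide_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    # Friction sound from ridges passing the fingertip is quasi-periodic,
    # so short-frame spectra carry the ridge-groove information.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20,
                                n_fft=512, hop_length=128)
    # Summarize each coefficient's trajectory with mean and std,
    # yielding a fixed-length vector per gesture.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```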
Agaian, Sos S.; Jassim, Sabah A. (Eds.)
Face recognition technologies have been in high demand over the past few decades due to the increase in human-computer interaction. Face recognition is also one of the essential components in interpreting human emotions, intentions, and facial expressions for smart environments. This non-intrusive biometric authentication approach relies on identifying unique facial features and pairing alike structures for identification and recognition. Application areas of facial recognition systems include homeland and border security, identification for law enforcement, access control to secure networks, authentication for online banking, and video surveillance. While it is easy for humans to recognize faces under varying illumination conditions, it remains a challenging task in computer vision. Non-uniform illumination and uncontrolled operating environments can impair the performance of visual-spectrum-based recognition systems. To address these difficulties, a novel Anisotropic Gradient Facial Recognition (AGFR) system capable of autonomous thermal-infrared-to-visible face recognition is proposed. The main contributions of this paper are a framework for thermal/fused-thermal-visible to visible face recognition and a novel human-visual-system-inspired thermal-visible image fusion technique. Extensive computer simulations using the CARL, IRIS, AT&T, Yale, and Yale-B databases demonstrate the efficiency, accuracy, and robustness of the AGFR system.
Keywords: infrared thermal to visible facial recognition, anisotropic gradient, visible-to-visible face recognition, non-uniform illumination face recognition, thermal and visible face fusion method
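A hedged sketch of one way such a fusion step could work: weight each modality per pixel by its local gradient energy, so edges from whichever spectrum is more informative dominate the fused image. This weighting rule is an illustrative stand-in for, not a reproduction of, the paper's anisotropic-gradient formulation.

```python
# Toy gradient-driven thermal/visible fusion. Inputs are co-registered
# single-channel uint8 images of the same size; the weighting scheme
# is an illustrative assumption.
import cv2
import numpy as np

def fuse_thermal_visible(visible: np.ndarray, thermal: np.ndarray) -> np.ndarray:
    def grad_energy(img: np.ndarray) -> np.ndarray:
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        # Smooth so the fusion weights vary slowly across the face.
        return cv2.GaussianBlur(gx * gx + gy * gy, (15, 15), 0)

    ev, et = grad_energy(visible), grad_energy(thermal)
    w = ev / (ev + et + 1e-6)  # per-pixel weight toward the visible image
    fused = w * visible.astype(np.float32) + (1 - w) * thermal.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```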
Recent advances in eye tracking have given birth to a new genre of gaze-based context-sensing applications, ranging from cognitive load estimation to emotion recognition. To achieve state-of-the-art recognition accuracy, a large-scale, labeled eye movement dataset is needed to train deep learning-based classifiers. However, due to the heterogeneity of human visual behavior, as well as the labor-intensive and privacy-compromising data collection process, datasets for gaze-based activity recognition are scarce and hard to collect. To alleviate the sparse gaze data problem, we present EyeSyn, a novel suite of psychology-inspired generative models that leverages only publicly available images and videos to synthesize a realistic and arbitrarily large eye movement dataset. Taking gaze-based museum activity recognition as a case study, our evaluation demonstrates that EyeSyn can not only replicate the distinct patterns in actual gaze signals captured by an eye tracking device, but also simulate the signal diversity that results from different measurement setups and subject heterogeneity. Moreover, in the few-shot learning scenario, EyeSyn can be readily combined with either transfer learning or meta-learning to achieve 90% accuracy, without the need for a large-scale training dataset.
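To give a flavor of synthetic gaze generation, the toy sketch below produces a trace of alternating fixations and saccades. The dwell times, jitter levels, and linear saccade model are simplifying assumptions, not EyeSyn's psychology-inspired models.

```python
# Toy gaze-trace synthesis: fixations (dwell with small jitter)
# alternating with saccades (rapid jumps). All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def synthesize_gaze(n_fixations: int = 10, hz: int = 100) -> np.ndarray:
    xs, ys = [], []
    x, y = rng.uniform(0, 1, size=2)
    for _ in range(n_fixations):
        # Fixation: dwell 150-400 ms with small jitter (drift/tremor).
        dwell = int(rng.uniform(0.15, 0.40) * hz)
        xs.extend(x + rng.normal(0, 0.003, dwell))
        ys.extend(y + rng.normal(0, 0.003, dwell))
        # Saccade: a roughly 30 ms jump toward a new target.
        nx, ny = rng.uniform(0, 1, size=2)
        for t in np.linspace(0, 1, max(2, int(0.03 * hz))):
            xs.append(x + t * (nx - x))
            ys.append(y + t * (ny - y))
        x, y = nx, ny
    return np.column_stack([xs, ys])  # normalized screen coordinates

trace = synthesize_gaze()
```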
