This study explores the impact of blinking on deep learning based iris recognition, addressing a critical aspect in the development of robust, reliable, and non-intrusive biometric systems. While previous research has demonstrated the promise of Convolutional Neural Networks (CNNs) such as AlexNet, GoogLeNet, and ResNet, the impact of blinking remains underexplored in this context. To address this gap, our research focuses on training multiple ResNet models with varying degrees of exposure to iris occlusion. Using a dataset of 101 subjects, we generated cohorts of synthetically occluded images ranging from 0% to 90% occlusion. Our findings reveal a marked, roughly linear performance decrease, as iris occlusion increases, in models never exposed to blinked images. However, augmenting the training dataset with occluded images significantly mitigates this performance degradation, highlighting the importance of accounting for blinking in the development of reliable iris recognition systems.
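The paper's augmentation pipeline is not reproduced here, but the idea is easy to sketch. A minimal example, assuming eyelid occlusion is simulated by zeroing the top fraction of rows of a normalized iris image (the masking strategy and fill value are assumptions, not the authors' code):

```python
import numpy as np

def occlude_iris(image: np.ndarray, occlusion: float) -> np.ndarray:
    """Mask the top `occlusion` fraction (0.0-0.9) of an iris image."""
    occluded = image.copy()
    rows = int(round(image.shape[0] * occlusion))
    occluded[:rows, ...] = 0  # zero out pixels hidden by the simulated eyelid
    return occluded

# Build cohorts at 0%, 10%, ..., 90% occlusion for one stand-in sample.
iris = np.random.rand(64, 512).astype(np.float32)
cohorts = {p: occlude_iris(iris, p / 100) for p in range(0, 100, 10)}
```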
Advancements in Synthetic Generation of Contactless Palmprint Biometrics Using StyleGAN Models
Deep learning models have demonstrated significant advantages over traditional algorithms in image processing tasks like object detection. However, large amounts of data are needed to train such deep networks, which limits their application to tasks such as biometric recognition, where many training samples are required for each class (i.e., each individual). Researchers developing such complex systems rely on real biometric data, which raises privacy concerns and is restricted by the availability of extensive, varied datasets. This paper proposes a generative adversarial network (GAN)-based solution to produce training data (palm images) for improved biometric (palmprint-based) recognition systems. We investigate the performance of the most recent StyleGAN models in generating a thorough contactless palm image dataset for use in biometric research. Trained on the publicly available H-PolyU and IIDT palmprint databases, the StyleGAN models generated a total of 4839 images. SIFT (Scale-Invariant Feature Transform) was used to assess uniqueness by matching features across scales and orientations, yielding a similarity score of 16.12% for the most recent StyleGAN3-based model. For the regions of interest (ROIs) in both the palm and finger, the average similarity score was 17.85%. The proposed model achieved a Fréchet Inception Distance (FID) of 16.1, demonstrating strong performance. These results demonstrate that StyleGAN is effective in producing unique synthetic biometric images.
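The abstract does not specify how the SIFT similarity score is computed; one plausible formulation, offered only as a sketch, counts Lowe's-ratio-test matches between a real and a generated palm image and reports them as a percentage of detected keypoints (the `sift_similarity` helper and its scoring rule are assumptions):

```python
import cv2

def sift_similarity(path_a: str, path_b: str, ratio: float = 0.75) -> float:
    """Percentage of SIFT keypoints that match between two images."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return 100.0 * len(good) / max(min(len(kp_a), len(kp_b)), 1)
```

Under this reading, a low score indicates the generated palms are distinct from (i.e., not copies of) the training images.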
- Award ID(s): 1650503
- PAR ID: 10577448
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Journal of Cybersecurity and Privacy
- Volume: 4
- Issue: 3
- ISSN: 2624-800X
- Page Range / eLocation ID: 663–677
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Agaian, Sos S.; DelMarco, Stephen P.; Asari, Vijayan K. (Eds.) Iris recognition is a widely used biometric technology with high accuracy and reliability in well-controlled environments. However, recognition accuracy can degrade significantly in non-ideal scenarios, such as off-angle iris images. To address these challenges, deep learning frameworks have been proposed to identify subjects from their off-angle iris images. Traditional CNN-based iris recognition systems train a single deep network on multiple off-angle iris images of the same subject to extract gaze-invariant features, and classify incoming off-angle images with that single network. An alternative approach trains multiple shallow networks, one per gaze angle, each acting as an expert for its specific angle: when testing an off-angle iris image, we first estimate the gaze angle and feed the probe image to the corresponding network for recognition. In this paper, we present an analysis of the performance of both single and multi-model deep learning frameworks for identifying subjects from their off-angle iris images. Specifically, we compare the performance of a single AlexNet with multiple SqueezeNet models; SqueezeNet is a variation of AlexNet that uses 50x fewer parameters and is optimized for devices with limited computational resources. The multi-model approach uses multiple shallow networks, each an expert for a specific gaze angle, as sketched below. Our experiments are conducted on an off-angle iris dataset consisting of 100 subjects captured at 10-degree intervals between -50 and +50 degrees. The results indicate that accuracy is lower for probe angles farther from the trained angles than for those closer to them. Our findings suggest that the use of SqueezeNet, which requires fewer parameters than AlexNet, can enable iris recognition on devices with limited computational resources while maintaining accuracy. Overall, the results of this study can contribute to the development of more robust iris recognition systems that perform well in non-ideal scenarios.
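A minimal sketch of that multi-model routing, with one shallow expert per trained gaze angle and each probe routed to the expert nearest its estimated angle (the expert architecture, subject count, and pre-trained weights are placeholders, not the paper's implementation, and the gaze-angle estimator is assumed to exist upstream):

```python
import torch
import torchvision.models as models

TRAINED_ANGLES = list(range(-50, 60, 10))  # -50, -40, ..., +50 degrees
# One SqueezeNet expert per gaze angle, each trained separately (training omitted).
experts = {a: models.squeezenet1_1(num_classes=100) for a in TRAINED_ANGLES}

def recognize(probe: torch.Tensor, estimated_angle: float) -> int:
    """Route an off-angle probe (1, 3, 224, 224) to the nearest-angle expert."""
    nearest = min(TRAINED_ANGLES, key=lambda a: abs(a - estimated_angle))
    expert = experts[nearest]
    expert.eval()
    with torch.no_grad():
        logits = expert(probe)
    return int(logits.argmax(dim=1))  # predicted subject ID
```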
-
Intrinsic images, in the original sense, are image-like maps of scene properties like depth, normal, albedo, or shading. This paper demonstrates that StyleGAN can easily be induced to produce intrinsic images. The procedure is straightforward. We show that if StyleGAN produces G(w) from latent w, then for each type of intrinsic image there is a fixed offset d_c so that G(w + d_c) is that type of intrinsic image for G(w). Here d_c is independent of w. The StyleGAN we used was pretrained by others, so this property is not some accident of our training regime. We show that there are image transformations StyleGAN will not produce in this fashion, so StyleGAN is not a generic image regression engine. It is conceptually exciting that an image generator should "know" and represent intrinsic images. There may also be practical advantages to using a generative model to produce intrinsic images. The intrinsic images obtained from StyleGAN compare well both qualitatively and quantitatively with those obtained by using SOTA image regression techniques; but StyleGAN's intrinsic images are robust to relighting effects, unlike SOTA methods.
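A conceptual sketch of the offset property: the same fixed vector d_c is added to every latent w, turning G(w) into the corresponding intrinsic image. The toy generator below merely stands in for a pretrained StyleGAN, and the random offsets stand in for the offsets the paper actually recovers:

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder for a pretrained StyleGAN mapping a latent w to an image."""
    def __init__(self, w_dim: int = 512):
        super().__init__()
        self.decode = nn.Linear(w_dim, 3 * 64 * 64)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return self.decode(w).view(-1, 3, 64, 64)

G = ToyGenerator()
w = torch.randn(1, 512)  # latent code for one generated image
# One fixed offset per intrinsic type; in the paper these are recovered, not random.
d_c = {c: torch.randn(1, 512) for c in ("depth", "normal", "albedo", "shading")}

rgb = G(w)                                          # the ordinary image G(w)
intrinsics = {c: G(w + d) for c, d in d_c.items()}  # G(w + d_c), d_c fixed across w
```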
-
Alam, Mohammad S.; Asari, Vijayan K. (Eds.) Iris recognition is one of the well-known areas of biometric research. However, in real-world scenarios, subjects may not always provide fully open eyes, which can negatively impact the performance of existing systems. Therefore, the detection of blinking eyes in iris images is crucial to ensure reliable biometric data. In this paper, we propose a deep learning-based method using a convolutional neural network to classify blinking eyes in off-angle iris images into four categories: fully-blinked, half-blinked, half-opened, and fully-opened. The dataset used in our experiments includes 6500 images of 113 subjects and contains a mixture of frontal and off-angle views of the eyes, with gaze angles from -50 to +50 degrees. We train and test our approach using both frontal and off-angle images and achieve high classification performance for both types. Compared to training the network with only frontal images, our approach shows significantly better performance when tested on off-angle images. These findings suggest that training the model with a more diverse set of off-angle images can improve its performance for off-angle blink detection, which is crucial for real-world applications where iris images are often captured at different angles. Overall, the deep learning-based blink detection method can be used as a standalone algorithm or integrated into existing standoff biometrics frameworks to improve their accuracy and reliability, particularly in scenarios where subjects may blink.
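The paper's exact network is not given in the abstract; the following is a generic small CNN, offered only as a sketch, wired for the four blink classes named above (architecture, input size, and channel counts are assumptions):

```python
import torch
import torch.nn as nn

CLASSES = ["fully-blinked", "half-blinked", "half-opened", "fully-opened"]

class BlinkNet(nn.Module):
    """Small four-way blink classifier for grayscale eye crops."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: classify a batch of four 128x128 grayscale eye crops.
model = BlinkNet()
preds = model(torch.randn(4, 1, 128, 128)).argmax(dim=1)
labels = [CLASSES[i] for i in preds]
```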
-
Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets. These models achieve the current state of the art, but they have difficulties generalizing when applied to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the "Gain") of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive denoising in a scientific application, in which a CNN is trained on synthetic data and tested on real transmission-electron-microscope images. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
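A minimal sketch of the GainTuning mechanism: freeze a pretrained denoiser's weights and expose one learnable multiplicative gain per output channel of every convolutional layer. The adaptation objective, which the paper builds from self-supervised losses on the single test image, is omitted here, and the wrapper names are illustrative:

```python
import torch
import torch.nn as nn

class GainConv(nn.Module):
    """Wraps a frozen Conv2d with one learnable gain per output channel."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad = False  # base weights stay fixed
        self.gain = nn.Parameter(torch.ones(conv.out_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gain * self.conv(x)  # broadcasts over (N, C, H, W)

def add_gains(model: nn.Module) -> nn.Module:
    """Recursively replace every Conv2d in a pretrained model with GainConv."""
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(model, name, GainConv(child))
        else:
            add_gains(child)
    return model

# At test time, only the .gain parameters would be optimized on the noisy image.
```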