The widespread use of smartphones has spurred research into mobile iris devices. Due to their convenience, these mobile devices are also used in unconstrained outdoor scenarios, which necessitates reliable iris recognition algorithms for such uncontrolled environments. At the same time, iris presentation attacks pose a major challenge to current iris recognition systems: it has been shown that print attacks and textured contact lenses can significantly degrade iris recognition performance. Motivated by these factors, we present a novel Mobile Uncontrolled Iris Presentation Attack Database (MUIPAD). The database contains more than 10,000 iris images acquired with and without textured contact lenses in indoor and outdoor environments using a mobile sensor. We also investigate the efficacy of textured contact lenses for identity impersonation and obfuscation, and demonstrate the effectiveness of deep-learning-based features for iris presentation attack detection on the proposed database.
Synthetic iris presentation attack using iDCGAN
The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep-learning-based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition using a commercial system. DESIST, a state-of-the-art presentation attack detection framework, is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
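For readers unfamiliar with the underlying generative model, the sketch below shows a minimal DCGAN-style generator in PyTorch of the kind iDCGAN builds on. The latent dimension, layer widths, and single-channel 64x64 output are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal DCGAN-style generator sketch (assumed shapes, not the iDCGAN design)
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 feature map, then upsample 4x
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            # Single-channel 64x64 output in [-1, 1], matching normalized iris crops
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of synthetic images from random latent vectors
g = Generator()
fake = g(torch.randn(16, 100, 1, 1))  # -> (16, 1, 64, 64)
```

In the paper's pipeline, candidate outputs of such a generator are further filtered with iris quality metrics; the sketch covers only the generative step.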
- Publication Date:
- NSF-PAR ID: 10053783
- Journal Name: International Joint Conference on Biometrics (IJCB)
- Page Range or eLocation-ID: 674 to 680
- Sponsoring Org: National Science Foundation
More Like this
- Significant resources have been spent collecting and storing large, heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of the available data is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to labeling is synthetically generated data produced with artificial intelligence: instead of labeling real images, we can generate synthetic data from arbitrary labels, so training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetic radar images generated with modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery, and we also tested a state-of-the-art contour detection algorithm on synthetic data and on different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks. However, GAN-generated images cannot be used solely for training a neural network (training on synthetic, testing on real), as they cannot simulate all radar characteristics such as noise or Doppler …
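As a concrete illustration of the cycle-consistency idea behind this approach, the following is a minimal PyTorch sketch of the CycleGAN reconstruction loss. The toy generator modules, tensor shapes, and weighting factor are assumptions for illustration, not the modified architecture used in the paper.

```python
# Cycle-consistency loss sketch for label <-> radar translation (shapes assumed)
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    """L1 reconstruction penalty that keeps the two translations invertible."""
    rec_a = G_ba(G_ab(real_a))  # labels -> radar -> labels
    rec_b = G_ab(G_ba(real_b))  # radar -> labels -> radar
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

# Toy single-conv "generators" just to exercise the loss; real models are deep
G_ab = nn.Conv2d(1, 1, 3, padding=1)
G_ba = nn.Conv2d(1, 1, 3, padding=1)
labels = torch.rand(2, 1, 64, 64)   # e.g. rasterized layer boundaries
radar = torch.rand(2, 1, 64, 64)    # real radargrams
loss = cycle_loss(G_ab, G_ba, labels, radar)
```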
- Recent advances in machine learning and deep neural networks have led to the realization of many important applications in the area of personalized medicine. Whether it is detecting activities of daily living or analyzing images for cancerous cells, machine learning algorithms have become the dominant choice for such emerging applications. In particular, the state-of-the-art algorithms for human activity recognition (HAR) using wearable inertial sensors employ machine learning to detect health events and make predictions from sensor data. Currently, however, there remains a gap in research on whether and how activity recognition algorithms may become the subject of adversarial attacks. In this paper, we take the first strides on (1) investigating methods of generating adversarial examples in the context of HAR systems; (2) studying the vulnerability of activity recognition models to adversarial examples in the feature and signal domains; and (3) investigating the effects of adversarial training on HAR systems. We introduce Adar, a novel computational framework for optimization-driven creation of adversarial examples in sensor-based activity recognition systems. Through extensive analysis based on real sensor data collected from human subjects, we found that simple evasion attacks are able to decrease the accuracy of a deep neural network …
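To make the notion of an adversarial sensor example concrete, below is a minimal FGSM-style sketch in PyTorch. The toy classifier, the 3-axis/128-sample window shape, and the perturbation budget are illustrative assumptions; Adar itself formulates a more general optimization-driven attack.

```python
# FGSM-style evasion attack on a sensor-based activity classifier (shapes assumed)
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.05):
    """One signed-gradient step that perturbs a sensor window to raise the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # L-infinity bounded by eps

# Toy HAR classifier over flattened 3-axis, 128-sample accelerometer windows
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128, 6))
x = torch.randn(8, 3, 128)        # batch of sensor windows
y = torch.randint(0, 6, (8,))     # activity labels
x_adv = fgsm_attack(model, x, y)
```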
- Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms. However, it often degrades model performance on normal images, and the defense does not generalize well to novel attacks. Given the success of deep generative models such as GANs and VAEs in characterizing the underlying manifold of images, we investigate whether the aforementioned problems can be remedied by exploiting the underlying manifold information. To this end, we construct an "On-Manifold ImageNet" (OM-ImageNet) dataset by projecting the ImageNet samples onto the manifold learned by StyleGAN. For this dataset, the underlying manifold information is exact. Using OM-ImageNet, we first show that adversarial training in the latent space of images improves both standard accuracy and robustness to on-manifold attacks. However, since no out-of-manifold perturbations are realized, the defense can be broken by Lp adversarial attacks. We further propose Dual Manifold Adversarial Training (DMAT), in which adversarial perturbations in both the latent and image spaces are used to robustify the model. Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks. In addition, we observe that models defended by DMAT achieve improved robustness against novel attacks which manipulate images …
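The dual-perturbation idea can be sketched as joint PGD over a latent code and the decoded image. Everything below (the toy generator and classifier, step counts, budgets) is an illustrative assumption rather than the paper's exact DMAT procedure.

```python
# Joint latent-space + image-space PGD sketch (toy shapes, assumed budgets)
import torch
import torch.nn as nn
import torch.nn.functional as F

def dual_manifold_attack(model, G, w, y, eps_lat=0.02, eps_img=8/255, steps=5):
    delta_w = torch.zeros_like(w, requires_grad=True)                # on-manifold
    delta_x = torch.zeros_like(G(w).detach(), requires_grad=True)    # off-manifold
    for _ in range(steps):
        x = G(w + delta_w) + delta_x
        loss = F.cross_entropy(model(x), y)
        g_w, g_x = torch.autograd.grad(loss, [delta_w, delta_x])
        with torch.no_grad():
            delta_w += (eps_lat / steps) * g_w.sign()  # steps sum to the latent budget
            delta_x += (eps_img / steps) * g_x.sign()
            delta_x.clamp_(-eps_img, eps_img)          # enforce the image budget
    return (G(w + delta_w) + delta_x).detach()

# Toy generator/classifier just to exercise the routine
G = nn.Sequential(nn.Linear(16, 3 * 8 * 8), nn.Unflatten(1, (3, 8, 8)), nn.Tanh())
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
w = torch.randn(4, 16)
y = torch.randint(0, 10, (4,))
x_adv = dual_manifold_attack(model, G, w, y)
```

Training on both kinds of examples is what lets the defense cover on-manifold and out-of-manifold threat models at once.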
- Recent advances in machine learning enable wider applications of prediction models in cyber-physical systems. Smart grids increasingly use distributed sensor settings for distributed sensor fusion and information processing. Load forecasting systems use these sensors to predict future loads, which feed into dynamic pricing of power and grid maintenance. However, these inference predictors are highly complex and thus vulnerable to adversarial attacks; moreover, adversarial modifications that are norm-bounded and restricted to a limited number of sensors can still greatly affect the accuracy of the overall predictor. It can be much cheaper and more effective to incorporate elements of security and resilience at the earliest stages of design. In this paper, we demonstrate how to analyze the security and resilience of learning-based prediction models in power distribution networks using a domain-specific deep learning and testing framework. This framework is developed using DeepForge and enables rapid design and analysis of attack scenarios against distributed smart meters in a power distribution network, running the attack simulations in the cloud backend. In addition to the predictor model, we have integrated an anomaly detector to detect adversarial attacks targeting the predictor. We formulate the stealthy adversarial attacks as an optimization problem to maximize prediction loss …
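A minimal sketch of such a norm-bounded, meter-restricted attack, formulated as loss maximization, is shown below. The toy forecaster, meter mask, and budget `eps` are hypothetical placeholders, not the paper's DeepForge setup.

```python
# Stealthy attack sketch: maximize forecast error via a few compromised meters
import torch
import torch.nn as nn
import torch.nn.functional as F

def attack_meters(forecaster, x, y_true, meter_mask, eps=0.1, steps=20, lr=0.01):
    """Optimize a perturbation restricted to compromised meters to raise MSE."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = -F.mse_loss(forecaster(x + delta * meter_mask), y_true)
        opt.zero_grad()
        loss.backward()               # gradient ascent on prediction error
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # stealthiness: keep the attack norm-bounded
    return (delta * meter_mask).detach()

# Toy forecaster over 32 meter readings; only 3 meters are compromised
forecaster = nn.Linear(32, 1)
x = torch.randn(16, 32)                    # readings per meter
y_true = torch.randn(16, 1)                # ground-truth future load
mask = torch.zeros(32); mask[:3] = 1.0     # attacker controls meters 0-2
delta = attack_meters(forecaster, x, y_true, mask)
```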