
Search for: All records

Creators/Authors contains: "Vatsa, Mayank"


  1. The widespread use of smartphones has spurred research in mobile iris devices. Due to their convenience, these mobile devices are also used in unconstrained outdoor scenarios, which has necessitated the development of reliable iris recognition algorithms for such uncontrolled environments. At the same time, iris presentation attacks pose a major challenge to current iris recognition systems. It has been shown that print attacks and textured contact lenses can significantly degrade iris recognition performance. Motivated by these factors, we present a novel Mobile Uncontrolled Iris Presentation Attack Database (MUIPAD). The database contains more than 10,000 iris images acquired with and without textured contact lenses in indoor and outdoor environments using a mobile sensor. We also investigate the efficacy of textured contact lenses for identity impersonation and obfuscation. Moreover, we demonstrate the effectiveness of deep-learning-based features for iris presentation attack detection on the proposed database.
  2. Advancements in smartphone applications have empowered even non-technical users to perform sophisticated operations such as face morphing with a few taps. While such capabilities have positive uses, they also mean that anyone can now digitally attack face (biometric) recognition systems. For example, Snapchat’s face-swapping application can easily create “swapped” identities and circumvent face recognition systems. This research presents a novel database, termed SWAPPED - Digital Attack Video Face Database, prepared using Snapchat’s application, which swaps/stitches two faces and creates videos. The database contains bonafide face videos and face-swapped videos of multiple subjects. Baseline face recognition experiments using a commercial system show over 90% rank-1 accuracy when attack videos are used as probes. As a second contribution, this research also presents a novel presentation attack detection algorithm based on a Weighted Local Magnitude Pattern feature descriptor, which outperforms several existing approaches.
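The abstract does not specify the Weighted Local Magnitude Pattern descriptor itself; as a rough illustration of the local-pattern feature family it belongs to, here is a minimal local binary pattern (LBP) computation over a 3x3 grayscale patch. The function name, neighborhood order, and example values are assumptions for illustration only, not the authors' method:

```python
def lbp_code(patch):
    """Compute the 8-bit local binary pattern code for the centre pixel
    of a 3x3 grayscale patch: each neighbour contributes one bit, set to
    1 if the neighbour value is >= the centre value."""
    centre = patch[1][1]
    # clockwise neighbour order starting at the top-left pixel
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 25, 60],
         [70, 80, 90]]
print(lbp_code(patch))  # -> 252
```

A full descriptor would histogram these per-pixel codes over the image; a weighted variant would additionally scale each contribution, e.g. by local gradient magnitude.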
  3. Iris recognition in the visible spectrum has developed into an active area of research. This has elevated the importance of efficient presentation attack detection algorithms, particularly in security-critical applications. In this paper, we present the first detailed analysis of the effect of contact lenses on iris recognition in the visible spectrum. We introduce the first contact lens database in the visible spectrum, the Unconstrained Visible Contact Lens Iris (UVCLI) Database, containing samples from 70 classes with subjects wearing textured contact lenses in indoor and outdoor environments across multiple sessions. We observe that textured contact lenses degrade visible-spectrum iris recognition performance by over 25% and may thus be utilized, intentionally or unintentionally, to attack existing iris recognition systems. Next, three iris presentation attack detection (PAD) algorithms are evaluated on the proposed database, and the highest observed PAD accuracy is 82.85%. This illustrates that there is significant scope for improvement in developing efficient PAD algorithms for detecting textured contact lenses in unconstrained visible-spectrum iris images.
  4. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about their susceptibility to various attacks. In the past, researchers have examined the impact of iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep-learning-based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, termed iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on a commercial iris recognition system. The state-of-the-art presentation attack detection framework DESIST is used to analyze whether it can discriminate these synthetically generated iris images from real ones. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
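The iDCGAN architecture is not detailed in the abstract; as a minimal sketch of the standard GAN objective that any DCGAN builds on, the toy generator, discriminator, and losses below use scalar samples and hypothetical parameters w and theta (all names and values are assumptions, not the paper's model):

```python
import math

def discriminator(x, w):
    # toy discriminator: logistic score that sample x is real
    return 1.0 / (1.0 + math.exp(-w * x))

def generator(z, theta):
    # toy generator: map a noise sample z into the data space
    return theta * z

def d_loss(x_real, z, w, theta):
    # the discriminator maximises log D(x) + log(1 - D(G(z)));
    # written here as a loss to be minimised
    return -(math.log(discriminator(x_real, w))
             + math.log(1.0 - discriminator(generator(z, theta), w)))

def g_loss(z, w, theta):
    # the generator minimises log(1 - D(G(z)))
    return math.log(1.0 - discriminator(generator(z, theta), w))
```

Training alternates gradient steps on these two losses; a DCGAN replaces the scalar maps with deep convolutional networks over images.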
  5. The forensic application of automatically matching skulls with face images is an important research area linking biometrics with practical applications in forensics. It is an opportunity for biometrics and face recognition researchers to help law enforcement and forensic experts give an identity to unidentified human skulls. It is an extremely challenging problem, further exacerbated by the lack of any publicly available database for it. This is the first research in this direction, with a twofold contribution: (i) introducing the first-of-its-kind skull-face image pair database, IdentifyMe, and (ii) presenting a preliminary approach using the proposed semi-supervised formulation of transform learning. The experimental results and comparison with existing algorithms showcase the challenging nature of the problem. We assert that the availability of the database will inspire researchers to build sophisticated skull-to-face matching algorithms.
  6. Soft biometric modalities have shown their utility in different applications, including significantly reducing the search space, which leads to improved recognition performance, reduced computation time, and faster processing of test samples. Common soft biometric modalities include ethnicity, gender, age, hair color, iris color, and the presence of facial hair, moles, or markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised autoencoder based approach, Deep Class-Encoder, which uses class labels to learn a discriminative representation for a given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.
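The exact Deep Class-Encoder formulation is not given in the abstract; the toy objective below sketches the general supervised-autoencoder idea it describes: reconstruction error plus a penalty for mapping the learned code to the wrong class label. All function names, weight matrices, and the lam weighting are hypothetical:

```python
def mse(a, b):
    # mean squared error between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def matvec(M, v):
    # matrix-vector product over plain Python lists
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def class_encoder_loss(x, y_onehot, W_enc, W_dec, W_cls, lam=1.0):
    """Toy supervised-autoencoder objective: reconstruction error plus a
    penalty (weighted by lam) for mapping the learned code to the wrong
    class vector, forcing the code to be class-discriminative."""
    h = matvec(W_enc, x)        # encode the sample
    x_hat = matvec(W_dec, h)    # decode back to input space
    y_hat = matvec(W_cls, h)    # map the code to label space
    return mse(x, x_hat) + lam * mse(y_onehot, y_hat)

identity = [[1.0, 0.0], [0.0, 1.0]]
# perfect reconstruction and perfect label mapping give zero loss
print(class_encoder_loss([1.0, 0.0], [1.0, 0.0],
                         identity, identity, identity))  # -> 0.0
```

In practice the encoder/decoder would be multi-layer and the weights learned by minimising this loss over a labelled training set.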
  7. Face sketch to digital image matching is an important face recognition challenge that involves matching across different domains. Current research efforts have primarily focused on extracting domain-invariant representations or learning a mapping from one domain to the other. In this research, we propose a novel transform learning based approach, termed DeepTransformer, which learns a transformation and mapping function between the features of two domains. The proposed formulation is independent of the input information and can be applied to any existing learned or hand-crafted feature. Since the mapping function is directional in nature, we propose two variants of DeepTransformer: (i) semi-coupled and (ii) symmetrically-coupled deep transform learning. This research also uses a novel IIIT-D Composite Sketch with Age (CSA) variations database, which contains sketch images of 150 subjects along with age-separated digital photos. The performance of the proposed models is evaluated on a novel application of sketch-to-sketch matching, along with sketch-to-digital-photo matching. Experimental results demonstrate the robustness of the proposed models in comparison to existing state-of-the-art sketch matching algorithms and a commercial face recognition system.
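DeepTransformer's coupled formulation is not spelled out in the abstract; as the simplest possible illustration of "learning a mapping function between the features of two domains", here is a closed-form least-squares fit of a single scaling coefficient between paired feature values. The function name and the sketch/photo feature values are purely hypothetical:

```python
def fit_scalar_map(src, dst):
    """Closed-form least squares for one scaling coefficient t
    minimising sum_i (t * src[i] - dst[i])^2 over paired features."""
    num = sum(s * d for s, d in zip(src, dst))
    den = sum(s * s for s in src)
    return num / den

# map hypothetical sketch-domain features to photo-domain features
t = fit_scalar_map([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(t)  # -> 2.0
```

The real method learns full (and directional) transform matrices rather than a scalar, which is why the semi-coupled and symmetrically-coupled variants differ.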
  8. Face recognition systems are susceptible to presentation attacks such as printed-photo attacks, replay attacks, and 3D mask attacks. These attacks, primarily studied in the visible spectrum, aim to obfuscate or impersonate a person’s identity. This paper presents a unique multispectral video face database for face presentation attacks using latex and paper masks. The proposed Multispectral Latex Mask based Video Face Presentation Attack (MLFP) database contains 1350 videos in the visible, near-infrared, and thermal spectra. Since the database consists of videos of subjects both without any mask and wearing ten different masks, the effect of identity concealment is analyzed in each spectrum using face recognition algorithms. We also present the performance of existing presentation attack detection algorithms on the proposed MLFP database. It is observed that the thermal imaging spectrum is the most effective for detecting face presentation attacks.
  9. This paper focuses on decoding the process of face verification in the human brain using fMRI responses. 2400 fMRI responses were collected from different participants while they performed face verification on genuine and imposter face-pair stimuli. The first part of the paper analyzes the responses, covering both cognitive and fMRI neuroimaging results. With an average verification accuracy of 64.79% by human participants, the cognitive analysis shows that the performance of female participants is significantly higher than that of male participants with respect to imposter pairs. The neuroimaging analysis identifies regions of the brain, such as the left fusiform gyrus, caudate nucleus, and superior frontal gyrus, that are activated when participants perform face verification tasks. The second part of the paper proposes a novel two-level fMRI dictionary learning approach to predict whether the observed stimulus is genuine or an imposter using the brain activation data for selected regions. A comparative analysis with existing machine learning techniques illustrates that the proposed approach yields at least 4.5% higher classification accuracy than other algorithms. It is envisioned that the results of this study are a first step toward designing brain-inspired automatic face verification algorithms.
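The two-level fMRI dictionary learning method is not described in the abstract; the sketch below shows the generic dictionary-based classification idea it relies on: assign a signal to the class (genuine vs. imposter) whose dictionary atoms approximate it with the smallest residual. The 1-sparse projection, function names, and example atoms are all assumptions for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual(signal, atom):
    # squared residual left after projecting the signal
    # onto a unit-norm dictionary atom
    c = dot(signal, atom)
    return dot(signal, signal) - c * c

def classify(signal, class_dicts):
    """Assign the signal to the class whose dictionary holds the atom
    leaving the smallest 1-sparse projection residual."""
    best_label, best_res = None, float("inf")
    for label, atoms in class_dicts.items():
        for atom in atoms:
            r = residual(signal, atom)
            if r < best_res:
                best_label, best_res = label, r
    return best_label

# hypothetical one-atom class dictionaries over 2-D activation features
dicts = {"genuine": [[1.0, 0.0]], "imposter": [[0.0, 1.0]]}
print(classify([0.9, 0.1], dicts))  # -> genuine
```

A full pipeline would learn the class dictionaries from training activations (e.g. via K-SVD) and use sparser, higher-dimensional codes.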