

Title: Fingerprint Distortion Rectification using Deep Convolutional Neural Networks
Elastic distortion of fingerprints degrades the performance of fingerprint recognition systems. In authentication applications this is mainly an inconvenience to users, but in the negative recognition scenario, where users may intentionally distort their fingerprints, it becomes a serious problem because distortion can prevent the recognition system from identifying malicious users. Current methods aimed at addressing this problem still have limitations. First, they are often not accurate, because they estimate distortion parameters from the ridge frequency map and orientation map of input samples, which are themselves unreliable under distortion. Second, they are not efficient, requiring significant computation time to rectify samples. In this paper, we develop a rectification model based on a Deep Convolutional Neural Network (DCNN) to accurately estimate distortion parameters from the input image. Trained on a comprehensive database of synthetically distorted samples, the DCNN learns to estimate distortion bases ten times faster than the dictionary search used in previous approaches. Evaluation on public databases of distorted samples shows that the proposed method significantly improves the matching performance of distorted samples.
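As a rough illustration of the approach the abstract describes, the sketch below regresses a small set of distortion-basis coefficients directly from a fingerprint image with a compact CNN. This is not the authors' network: the backbone, input size, and number of bases (five) are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): a small CNN that regresses
# distortion-basis coefficients from a fingerprint image. The number of
# bases, input size, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class DistortionRegressor(nn.Module):
    def __init__(self, num_bases: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_bases)  # coefficients of the distortion bases

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

# Training idea: synthesize distorted fingerprints with known coefficients c,
# then minimize an L2 loss between the predictions and c.
model = DistortionRegressor()
img = torch.randn(8, 1, 256, 256)        # batch of grayscale fingerprint images
coeffs = model(img)                      # (8, 5) estimated basis weights
loss = nn.functional.mse_loss(coeffs, torch.zeros_like(coeffs))
```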
Award ID(s):
1650474 1066197
NSF-PAR ID:
10053532
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
The 11th IAPR International Conference on Biometrics (ICB 2018)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we propose a new Automatic Target Recognition (ATR) system, based on a Deep Convolutional Neural Network (DCNN), to detect targets in Forward Looking Infrared (FLIR) scenes and recognize their classes. In our proposed ATR framework, a fully convolutional network (FCN) is trained to map the input FLIR imagery to a correspondingly sized target score map at a fixed stride. Potential targets are identified by thresholding the target score map. Finally, image regions centered at these candidate points are fed to a DCNN that classifies them into target types while rejecting false alarms. The proposed architecture achieves significantly better performance than state-of-the-art methods on two large FLIR image databases.
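    The detect-then-classify flow described above can be summarized in the following sketch. The functions fcn_score_map and classify_chip stand in for the trained FCN and classification DCNN and are hypothetical, as are the threshold, stride, and chip size.

```python
# Hedged sketch of the detect-then-classify flow; `fcn_score_map` and
# `classify_chip` are placeholders for trained networks, and the threshold,
# stride, and chip size are assumptions.
import numpy as np

def detect_and_classify(flir_image, fcn_score_map, classify_chip,
                        score_thresh=0.5, stride=8, chip=64):
    score = fcn_score_map(flir_image)            # coarse per-location target scores
    ys, xs = np.where(score > score_thresh)      # candidate target locations
    results = []
    for y, x in zip(ys, xs):
        cy, cx = y * stride, x * stride          # map score-map coords back to image coords
        half = chip // 2
        patch = flir_image[max(cy - half, 0):cy + half,
                           max(cx - half, 0):cx + half]
        label = classify_chip(patch)             # target class or "clutter" (false alarm)
        if label != "clutter":
            results.append(((cy, cx), label))
    return results
```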
  2. Electron Backscatter Diffraction (EBSD) is a widely used approach for characterising the microstructure of various materials. However, it is difficult to accurately distinguish similar (body centred cubic and body centred tetragonal, with small tetragonality) phases in steels using standard EBSD software. One method to tackle the problem of phase distinction is to measure the tetragonality of the phases, which can be done using simulated patterns and cross‐correlation techniques to detect distortion away from a perfectly cubic crystal lattice. However, small errors in the determination of microscope geometry (the so‐called pattern or projection centre) can cause significant errors in tetragonality measurement and lead to erroneous results. This paper utilises a new approach for accurate pattern centre determination via a strain minimisation routine across a large number of grains in dual phase steels. Tetragonality maps are then produced and used to identify phase and estimate local carbon content. The technique is implemented using both kinetically simulated and dynamically simulated patterns to determine their relative accuracy. Tetragonality maps, and subsequent phase maps, based on dynamically simulated patterns in a point‐by‐point and grain average comparison are found to consistently produce more precise and accurate results, with close to 90% accuracy for grain phase identification, when compared with an image‐quality identification method. The error in tetragonality measurements appears to be of the order of 1%, thus producing a commensurate ∼0.2% error in carbon content estimation. Such an error makes the technique unsuitable for estimation of total carbon content of most commercial steels, which often have carbon levels below 0.1%. However, even in the DP steel for this study (0.1 wt.% carbon) it can be used to map carbon in regions with higher accumulation (such as in martensite with nonhomogeneous carbon content).

    Lay Description

    Electron Backscatter Diffraction (EBSD) is a widely used approach for characterising the microstructure of various materials. However, it is difficult to accurately distinguish similar (BCC and BCT) phases in steels using standard EBSD software due to the small difference in crystal structure. One method to tackle the problem of phase distinction is to measure the tetragonality, or apparent ‘strain’ in the crystal lattice, of the phases. This can be done by comparing experimental EBSD patterns with simulated patterns via cross‐correlation techniques, to detect distortion away from a perfectly cubic crystal lattice. However, small errors in the determination of microscope geometry (the so‐called pattern or projection centre) can cause significant errors in tetragonality measurement and lead to erroneous results. This paper utilises a new approach for accurate pattern centre determination via a strain minimisation routine across a large number of grains in dual phase steels. Tetragonality maps are then produced and used to identify phase and estimate local carbon content. The technique is implemented using both simple kinetically simulated and more complex dynamically simulated patterns to determine their relative accuracy. Tetragonality maps, and subsequent phase maps, based on dynamically simulated patterns in a point‐by‐point and grain average comparison are found to consistently produce more precise and accurate results, with close to 90% accuracy for grain phase identification, when compared with an image‐quality identification method. The error in tetragonality measurements appears to be of the order of 1%, thus producing a commensurate error in carbon content estimation. Such an error makes an estimate of total carbon content particularly unsuitable for low carbon steels; although maps of local carbon content may still be revealing.

    Application of the method developed in this paper will lead to better understanding of the complex microstructures of steels, and the potential to design microstructures that deliver higher strength and ductility for common applications, such as vehicle components.
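    As a minimal sketch of the tetragonality-to-carbon mapping discussed above, the snippet below converts a c/a ratio map into an estimated local carbon map under an assumed linear relation c/a ≈ 1 + k·(wt% C) with k ≈ 0.045; the constant, the BCT threshold, and the function names are illustrative assumptions rather than values taken from the paper.

```python
# Hedged sketch: convert a per-pixel tetragonality map (c/a ratio) into an
# estimated local carbon map using an assumed linear relation
# c/a ~ 1 + k * (wt% C). The slope k and the phase threshold are assumptions.
import numpy as np

K_TETRAGONALITY_PER_WTC = 0.045   # assumed slope, (c/a - 1) per wt% C

def carbon_and_phase_maps(ca_ratio_map, bct_threshold=1.005):
    """Estimate wt% C per pixel and a crude BCC/BCT phase map."""
    tetragonality = ca_ratio_map - 1.0
    wt_c = np.clip(tetragonality / K_TETRAGONALITY_PER_WTC, 0.0, None)
    phase = np.where(ca_ratio_map > bct_threshold, "BCT", "BCC")
    return wt_c, phase

# Under this assumed slope, a 1% error in tetragonality (0.01 in c/a) maps to
# roughly 0.01 / 0.045 ~ 0.2 wt% C, consistent with the uncertainty quoted above.
```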

     
  3.
    Abstract. We consider the problem of inferring the basal sliding coefficient field for an uncertain Stokes ice sheet forward model from synthetic surface velocity measurements. The uncertainty in the forward model stems from unknown (or uncertain) auxiliary parameters (e.g., rheology parameters). This inverse problem is posed within the Bayesian framework, which provides a systematic means of quantifying uncertainty in the solution. To account for the associated model uncertainty (error), we employ the Bayesian approximation error (BAE) approach to approximately premarginalize simultaneously over both the noise in measurements and the uncertainty in the forward model. We also carry out approximate posterior uncertainty quantification based on a linearization of the parameter-to-observable map centered at the maximum a posteriori (MAP) basal sliding coefficient estimate, i.e., by taking the Laplace approximation. The MAP estimate is found by minimizing the negative log posterior using an inexact Newton conjugate gradient method. The gradient and Hessian actions on vectors are efficiently computed using adjoints. Sampling from the approximate covariance is made tractable by invoking a low-rank approximation of the data misfit component of the Hessian. We study the performance of the BAE approach in the context of three numerical examples in two and three dimensions. For each example, the basal sliding coefficient field is the parameter of primary interest which we seek to infer, and the rheology parameters (e.g., the flow rate factor or the Glen's flow law exponent coefficient field) represent so-called nuisance (secondary uncertain) parameters. Our results indicate that accounting for model uncertainty stemming from the presence of nuisance parameters is crucial. Namely, our findings suggest that using nominal values for these parameters, as is often done in practice, without taking into account the resulting modeling error, can lead to overconfident and heavily biased results. We also show that the BAE approach can be used to account for the additional model uncertainty at no additional cost at the online stage.
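    For intuition on the Laplace-approximation step described above, the sketch below forms an approximate posterior covariance from a low-rank eigendecomposition of the prior-preconditioned data-misfit Hessian. It is a small dense stand-in for what is done matrix-free in large-scale settings; the inputs (a prior square-root factor and the preconditioned Hessian) and the retained rank are assumptions.

```python
# Hedged numerical sketch (not the authors' code) of a low-rank Laplace
# approximation: Gamma_post ~ Gamma_prior - S V diag(l/(1+l)) V^T S^T,
# where S is a prior square-root factor and (l, V) are the dominant
# eigenpairs of the prior-preconditioned data-misfit Hessian.
import numpy as np

def laplace_posterior_cov(prior_sqrt, Hd_precond, rank):
    """prior_sqrt: square-root factor of the prior covariance (n x n).
    Hd_precond: symmetric prior-preconditioned data-misfit Hessian (n x n)."""
    lam, V = np.linalg.eigh(Hd_precond)          # eigenpairs, ascending order
    idx = np.argsort(lam)[::-1][:rank]           # keep the dominant `rank` modes
    lam, V = lam[idx], V[:, idx]
    D = np.diag(lam / (1.0 + lam))               # shrinkage factors per mode
    correction = prior_sqrt @ V @ D @ V.T @ prior_sqrt.T
    return prior_sqrt @ prior_sqrt.T - correction
```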
  4. Abstract

    Researchers can investigate many aspects of animal ecology through noninvasive photo–identification. Photo–identification is becoming more efficient as matching individuals between photos is increasingly automated. However, the convolutional neural network models that have facilitated this change need many training images to generalize well. As a result, they have often been developed for individual species that meet this threshold. These single‐species methods might underperform, as they ignore potential similarities in identifying characteristics and the photo–identification process among species.

    In this paper, we introduce a multi‐species photo–identification model based on a state‐of‐the‐art method in human facial recognition, the ArcFace classification head. Our model uses two such heads to jointly classify species and identities, allowing species to share information and parameters within the network. As a demonstration, we trained this model with 50,796 images from 39 catalogues of 24 cetacean species, evaluating its predictive performance on 21,192 test images from the same catalogues. We further evaluated its predictive performance with two external catalogues entirely composed of identities that the model did not see during training.
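    A minimal sketch of the two-head idea described above is given below: a shared embedding feeds one ArcFace-style head for species and one for individual identity, and the two losses are summed. The backbone, margin, and scale are illustrative assumptions, not the values used in the paper.

```python
# Hedged PyTorch sketch of joint species/identity classification with two
# ArcFace-style heads sharing one embedding; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    def __init__(self, emb_dim, n_classes, s=30.0, m=0.3):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.W))     # cosine similarities
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = torch.cos(theta + self.m)                        # add angular margin
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (onehot * target + (1 - onehot) * cos)
        return F.cross_entropy(logits, labels)

class MultiSpeciesID(nn.Module):
    def __init__(self, backbone, emb_dim, n_species, n_ids):
        super().__init__()
        self.backbone = backbone                  # any image encoder -> emb_dim vector
        self.species_head = ArcFaceHead(emb_dim, n_species)
        self.id_head = ArcFaceHead(emb_dim, n_ids)

    def forward(self, images, species, ids):
        emb = self.backbone(images)
        return self.species_head(emb, species) + self.id_head(emb, ids)
```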

    The model achieved a mean average precision (MAP) of 0.869 on the test set. Ten of the catalogues, representing seven species, achieved a MAP score over 0.95. For some species, there was notable variation in performance among catalogues, largely explained by variation in photo quality. Finally, the model appeared to generalize well, with the two external catalogues scoring similarly to their species' counterparts in the larger test set.

    From our cetacean application, we provide a list of recommendations for potential users of this model, focusing on those with cetacean photo–identification catalogues. For example, users with high quality images of animals identified by dorsal nicks and notches should expect near optimal performance. Users can expect decreasing performance for catalogues with higher proportions of indistinct individuals or poor quality photos. Finally, we note that this model is currently freely available as code in a GitHub repository and as a graphical user interface, with additional functionality for collaborative data management, via Happywhale.com.

     
  5. Academic cloud infrastructures require users to specify an estimate of their resource requirements. The resource usage for applications often depends on the input file sizes, parameters, optimization flags, and attributes specified for each run. Incorrect estimation can result in low resource utilization across the entire infrastructure and long wait times for jobs in the queue. We have designed a Resource Utilization based Migration (RUMIG) system to address the resource estimation problem. We present the overall architecture of the two-stage elastic cluster design and the Apache Mesos-specific container migration system, and analyze the performance for several scientific workloads on three different cloud/cluster environments. In this paper we (a) present a design and implementation for container migration in a Mesos environment, (b) evaluate the effect of right-sizing and cluster elasticity on overall performance, (c) analyze different profiling intervals to determine the best fit, and (d) determine the overhead of our profiling mechanism. Compared to the default use of Apache Mesos, in the best cases, RUMIG provides a gain of 65% in runtime (local cluster), 51% in CPU utilization in the Chameleon cloud, and 27% in memory utilization in the Jetstream cloud.
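    The sketch below illustrates, in a heavily simplified form, the kind of utilization-versus-request comparison that motivates a right-sizing migration; it is not RUMIG's actual algorithm, and the thresholds and field names are assumptions.

```python
# Hedged sketch (not RUMIG's algorithm): compare a container's profiled usage
# against its requested resources and flag it for migration to a right-sized
# allocation. Thresholds and dictionary keys are illustrative assumptions.
def right_size(profile, requested, low=0.5, headroom=1.2):
    """profile/requested: dicts with 'cpu' and 'mem' keys."""
    decisions = {}
    for res in ("cpu", "mem"):
        used, asked = profile[res], requested[res]
        if used < low * asked:                # badly over-provisioned resource
            decisions[res] = used * headroom  # propose a smaller allocation with headroom
        else:
            decisions[res] = asked            # leave the request unchanged
    migrate = any(decisions[r] < requested[r] for r in decisions)
    return migrate, decisions

# Example: a job that requested 8 CPUs but uses 2 would be flagged for
# migration to a ~2.4-CPU allocation.
print(right_size({"cpu": 2, "mem": 4}, {"cpu": 8, "mem": 4}))
```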