

Title: Continuous authentication of smartphone users by fusing typing, swiping, and phone movement patterns
We studied the fusion of three biometric authentication modalities, namely, swiping gestures, typing patterns, and the phone movement patterns observed during typing or swiping. A web browser was customized to collect the data generated from these modalities over four to seven days in an unconstrained environment. Several features were extracted for each modality by using a sliding-window mechanism and analyzed by using information gain, correlation, and symmetric uncertainty. Finally, five features from windows of continuous swipes, thirty features from windows of continuously typed letters, and nine features from the corresponding phone movement patterns while swiping/typing were used to build the authentication system. We evaluated the performance of each modality and their fusion over a dataset of 28 users. The feature-level fusion of swiping and the corresponding phone movement patterns achieved an authentication accuracy of 93.33%, whereas the score-level fusion of typing behaviors and the corresponding phone movement patterns achieved an authentication accuracy of 89.31%.
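For readers unfamiliar with the pipeline named in the abstract, the following minimal Python sketch illustrates sliding-window feature extraction and the two fusion strategies mentioned (feature-level and score-level). It is illustrative only: the function names, the four summary statistics, and the weighted-sum score rule are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: sliding-window features per modality, then
# feature-level fusion (concatenate feature vectors) or score-level fusion
# (combine per-modality match scores).
import numpy as np

def window_features(signal, win_len=50, step=25):
    """Slide a fixed-length window over a 1-D signal and compute
    simple per-window statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(signal) - win_len + 1, step):
        w = signal[start:start + win_len]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

def feature_level_fusion(swipe_feats, motion_feats):
    """Concatenate per-window feature vectors from two modalities so a
    single classifier is trained on the joint representation."""
    n = min(len(swipe_feats), len(motion_feats))
    return np.hstack([swipe_feats[:n], motion_feats[:n]])

def score_level_fusion(typing_score, motion_score, w=0.5):
    """Combine per-modality match scores into one decision score
    (a weighted sum is only one of many possible fusion rules)."""
    return w * typing_score + (1 - w) * motion_score

# Toy usage with random signals standing in for swipe and motion streams.
rng = np.random.default_rng(0)
fused = feature_level_fusion(window_features(rng.normal(size=500)),
                             window_features(rng.normal(size=500)))
print(fused.shape)  # (n_windows, 8)
```

In the paper's setting, a per-user classifier trained on such fused windows (or on fused scores) would then separate the genuine user from impostors.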
Award ID(s):
1527795
NSF-PAR ID:
10036392
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Biometrics Theory, Applications and Systems (BTAS), 2016 IEEE 8th International Conference on
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile devices typically rely on entry-point and other one-time authentication mechanisms such as a password, PIN, fingerprint, iris, or face. But these authentication types are prone to a wide attack vector and, worse still, once compromised, fail to protect the user's account and data. In contrast, continuous authentication, based on traits of human behavior, can offer additional security measures in the device to authenticate against unauthorized users, even after the entry-point and one-time authentication has been compromised. To this end, we have collected a new dataset of multiple behavioral biometric modalities (49 users) when a user fills out an account recovery form while sitting, using an Android app. These include motion events (acceleration and angular velocity), touch and swipe events, keystrokes, and pattern tracing. In this paper, we focus on authentication based on motion events by evaluating a set of score-level fusion techniques to authenticate users based on the acceleration and angular velocity data. The best EERs of 2.4% and 6.9%, for intra- and inter-session evaluation respectively, are achieved by fusing acceleration and angular velocity using Nandakumar et al.'s likelihood ratio (LR) based score fusion.
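A minimal sketch of likelihood-ratio based score fusion in the spirit of the approach named above: per-modality genuine and impostor score densities are estimated, and the fused score is the sum of per-modality log-likelihood ratios. The Gaussian-mixture density model, the two-component choice, and all names below are illustrative assumptions, not the paper's code or Nandakumar et al.'s exact formulation.

```python
# Sketch of likelihood-ratio (LR) score fusion: densities of genuine and
# impostor match scores are estimated per modality, and the fused score is
# the sum of per-modality log-likelihood ratios.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_density(scores, n_components=2):
    """Fit a small Gaussian mixture to a 1-D array of match scores."""
    return GaussianMixture(n_components=n_components).fit(scores.reshape(-1, 1))

def lr_fusion(test_scores, genuine_models, impostor_models):
    """Sum per-modality log-likelihood ratios log p(s|genuine)/p(s|impostor).
    test_scores: dict mapping modality name -> scalar match score for the probe."""
    fused = 0.0
    for m, s in test_scores.items():
        s = np.array([[s]])
        fused += genuine_models[m].score_samples(s)[0] \
               - impostor_models[m].score_samples(s)[0]
    return fused  # higher -> more likely the genuine user

# Example with two modalities (accelerometer, gyroscope) and synthetic scores.
rng = np.random.default_rng(0)
gen = {"acc": rng.normal(0.8, 0.10, 500), "gyr": rng.normal(0.70, 0.15, 500)}
imp = {"acc": rng.normal(0.4, 0.15, 500), "gyr": rng.normal(0.35, 0.20, 500)}
gen_models = {m: fit_density(v) for m, v in gen.items()}
imp_models = {m: fit_density(v) for m, v in imp.items()}
print(lr_fusion({"acc": 0.75, "gyr": 0.65}, gen_models, imp_models))
```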
  2. In this paper, we propose a deep multimodal fusion network to fuse multiple modalities (face, iris, and fingerprint) for person identification. The proposed deep multimodal fusion algorithm consists of multiple streams of modality-specific Convolutional Neural Networks (CNNs), which are jointly optimized at multiple feature abstraction levels. Multiple features are extracted at several different convolutional layers from each modality-specific CNN for joint feature fusion, optimization, and classification. Features extracted at different convolutional layers of a modality-specific CNN represent the input at several different levels of abstraction. We demonstrate that an efficient multimodal classification can be accomplished with a significant reduction in the number of network parameters by exploiting these multi-level abstract representations extracted from all the modality-specific CNNs. We demonstrate an increase in multimodal person identification performance by utilizing the proposed multi-level feature abstractions in our multimodal fusion, rather than using only the features from the last layer of each modality-specific CNN. We show that our deep multimodal CNNs with fusion at several different levels of feature abstraction can significantly outperform unimodal representation accuracy. We also demonstrate that the joint optimization of all the modality-specific CNNs outperforms the score- and decision-level fusions of independently optimized CNNs.
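The idea of tapping features at more than one convolutional depth of each modality-specific stream and optimizing everything jointly can be sketched as follows in PyTorch. The layer sizes, pooling choices, and three identical streams are assumptions for illustration; this does not reproduce the paper's actual architecture.

```python
# Sketch (PyTorch) of multimodal fusion that taps features at more than one
# convolutional depth of each modality-specific stream, rather than fusing
# only last-layer features. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityStream(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                    nn.ReLU(), nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        f1 = self.block1(x)                       # early, low-level features
        f2 = self.block2(f1)                      # deeper, abstract features
        return torch.cat([f1.mean(dim=(2, 3)),    # pool early features too
                          f2.flatten(1)], dim=1)  # -> (batch, 16 + 32)

class MultimodalFusionNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.face, self.iris, self.finger = (ModalityStream() for _ in range(3))
        self.classifier = nn.Linear(3 * 48, n_classes)  # jointly optimized head

    def forward(self, face, iris, finger):
        fused = torch.cat([self.face(face), self.iris(iris),
                           self.finger(finger)], dim=1)
        return self.classifier(fused)

# One forward pass with dummy 64x64 inputs for each modality.
net = MultimodalFusionNet()
out = net(*(torch.randn(2, 3, 64, 64) for _ in range(3)))
print(out.shape)  # torch.Size([2, 10])
```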
  3. Clinical translation of “intelligent” lower-limb assistive technologies relies on robust control interfaces capable of accurately detecting user intent. To date, mechanical sensors and surface electromyography (EMG) have been the primary sensing modalities used to classify ambulation. Ultrasound (US) imaging can be used to detect user intent by characterizing structural changes of muscle. Our study evaluates wearable US imaging as a new sensing modality for continuous classification of five discrete ambulation modes: level, incline, decline, stair ascent, and stair descent, and benchmarks performance relative to EMG sensing. Ten able-bodied subjects were equipped with a wearable US scanner and eight unilateral EMG sensors. Time-intensity features were recorded from US images of three thigh muscles. Features from sliding windows of EMG signals were analyzed in two configurations: one including 5 EMG sensors on muscles around the thigh, and another with 3 additional sensors placed on the shank. Linear discriminant analysis was implemented to continuously classify these phase-dependent features of each sensing modality as one of five ambulation modes. US-based sensing statistically improved mean classification accuracy to 99.8% (99.5-100% CI) compared to 8-EMG sensors (85.8%; 84.0-87.6% CI) and 5-EMG sensors (75.3%; 74.5-76.1% CI). Further, separability analyses show the importance of superficial and deep US information for stair classification relative to other modes. These results are the first to demonstrate the ability of US-based sensing to classify discrete ambulation modes, highlighting the potential for improved assistive device control using less widespread, less superficial, and higher-resolution sensing of skeletal muscle.
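As a rough illustration of the classification step, the sketch below applies linear discriminant analysis to per-window features and reports cross-validated accuracy. The synthetic data and the mean/std window features are assumptions; they do not reproduce the study's US or EMG features.

```python
# Sketch of the classification step: linear discriminant analysis applied to
# per-window features to label each window as one of five ambulation modes.
# Synthetic data and feature choice are illustrative assumptions only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
modes = ["level", "incline", "decline", "stair_ascent", "stair_descent"]
X, y = [], []
for label, mode in enumerate(modes):
    for _ in range(200):                                           # 200 windows per mode
        window = rng.normal(loc=label, scale=1.0, size=(100, 3))   # 3 signal channels
        X.append(np.r_[window.mean(axis=0), window.std(axis=0)])   # 6 features per window
        y.append(label)
X, y = np.array(X), np.array(y)

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())  # mean classification accuracy
```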
  4. Introduction: Computed tomography perfusion (CTP) imaging requires injection of an intravenous contrast agent and increased exposure to ionizing radiation. This process can be lengthy, costly, and potentially dangerous to patients, especially in emergency settings. We propose MAGIC, a multitask, generative adversarial network-based deep learning model to synthesize an entire CTP series from only a non-contrasted CT (NCCT) input. Materials and Methods: NCCT and CTP series were retrospectively retrieved from 493 patients at UF Health with IRB approval. The data were deidentified and all images were resized to 256x256 pixels. The collected perfusion data were analyzed using the RapidAI CT Perfusion analysis software (iSchemaView, Inc., CA) to generate each CTP map. For each subject, 10 CTP slices were selected. Each slice was paired with one NCCT slice at the same location and two NCCT slices at a predefined vertical offset, resulting in 4.3K CTP images and 12.9K NCCT images used for training. The incorporation of a spatial offset into the NCCT input allows MAGIC to more accurately synthesize cerebral perfusive structures, increasing the quality of the generated images. The studies included a variety of indications, including healthy tissue, mild infarction, and severe infarction. The proposed MAGIC model incorporates a novel multitask architecture, allowing for the simultaneous synthesis of four CTP modalities: mean transit time (MTT), cerebral blood flow (CBF), cerebral blood volume (CBV), and time to peak (TTP). We propose a novel Physicians-in-the-loop module in the model's architecture, acting as a tunable layer that allows physicians to manually adjust the amount of anatomic detail present in the synthesized CTP series. Additionally, we propose two novel loss terms: multi-modal connectivity loss and extrema loss. The multi-modal connectivity loss leverages the multitask nature to assert that the mathematical relationship between MTT, CBF, and CBV is satisfied. The extrema loss aids in learning regions of elevated and decreased activity in each modality, allowing MAGIC to accurately learn the characteristics of diagnostic regions of interest. Corresponding NCCT and CTP slices were paired along the vertical axis. The model was trained for 100 epochs on an NVIDIA TITAN X GPU. Results and Discussion: The MAGIC model's performance was evaluated on a sample of 40 patients from the UF Health dataset. Across all CTP modalities, MAGIC was able to accurately produce images with high structural agreement between the synthesized and clinical perfusion images (mean SSIM = 0.801, mean UQI = 0.926). MAGIC was able to synthesize CTP images that accurately characterize cerebral circulatory structures and identify regions of infarcted tissue, as shown in Figure 1. A blind binary evaluation was conducted to assess the presence of cerebral infarction in both the synthesized and clinical perfusion images, with the synthesized images correctly predicting the presence of cerebral infarction with 87.5% accuracy. Conclusions: We proposed MAGIC, a model whose novel deep learning structures and loss terms enable high-quality synthesis of CTP maps and characterization of circulatory structures solely from NCCT images, potentially eliminating the requirement for the injection of an intravenous contrast agent and elevated radiation exposure during perfusion imaging.
This makes MAGIC a beneficial tool in clinical scenarios, increasing the overall safety, accessibility, and efficiency of cerebral perfusion imaging and facilitating better patient outcomes. Acknowledgements: This work was partially supported by the National Science Foundation, IIS-1908299 III: Small: Modeling Multi-Level Connectivity of Brain Dynamics + REU Supplement, to the University of Florida.
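The abstract's multi-modal connectivity loss asserts a mathematical relationship among MTT, CBF, and CBV; the standard relation is the central volume principle, CBV = CBF × MTT. The exact loss used by MAGIC is not given here, so the following PyTorch snippet is only a hedged sketch of one plausible form (an L1 penalty on pre-normalized maps), not the paper's loss term.

```python
# Hedged sketch of a connectivity-style loss: penalize deviation from the
# central volume relation CBV = CBF * MTT across the synthesized maps.
# The exact formulation used by MAGIC is an assumption here.
import torch

def connectivity_loss(mtt, cbf, cbv):
    """L1 penalty between the synthesized CBV map and CBF * MTT
    (all maps assumed pre-normalized to comparable scales)."""
    return torch.mean(torch.abs(cbv - cbf * mtt))

# Toy usage with random 256x256 maps standing in for synthesized outputs.
maps = {k: torch.rand(1, 1, 256, 256) for k in ("mtt", "cbf", "cbv")}
print(connectivity_loss(maps["mtt"], maps["cbf"], maps["cbv"]))
```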
  5. Intent recognition in lower-limb assistive devices typically relies on neuromechanical sensing of an affected limb acquired through embedded device sensors. It remains unknown whether signals from more widespread sources, such as the contralateral leg and torso, positively influence intent recognition, and how specific locomotor tasks that place high demands on the neuromuscular system, such as changes of direction, contribute to intent recognition. In this study, we evaluated the performance of signals from varying mechanical modalities (accelerographic, gyroscopic, and joint angles) and locations (the trailing leg, leading leg, and torso) during straight walking, changes of direction (cuts), and cuts to stair ascent with varying task anticipation. Biomechanical information from the torso demonstrated poor performance across all conditions. Unilateral (trailing- or leading-leg) joint angle data provided the highest accuracy. Surprisingly, neither the fusion of unilateral and torso data nor the combination of multiple signal modalities improved recognition. For these fused modality data, similar trends but diminished accuracy rates were observed during unanticipated conditions. Finally, for datasets that achieved a relatively accurate (≥90%) recognition of unanticipated tasks, these levels of recognition were achieved after the mid-swing of the trailing/transitioning leg, prior to the subsequent heel strike. These findings suggest that mechanical sensing of the legs and torso for the recognition of straight-line and transient locomotion can be implemented in a relatively flexible manner (i.e., with respect to signal modality, and from the leading or trailing legs) and, importantly, suggest that more widespread sensing is not always optimal.
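A compact way to picture the comparison described above is to train the same classifier on different sensor-location or signal-modality feature subsets and compare accuracies, as in the hedged Python sketch below. The synthetic features, the subset names, and the choice of linear discriminant analysis are illustrative assumptions rather than the study's pipeline.

```python
# Sketch of comparing sensor groupings for task recognition: fit one classifier
# per feature subset and compare cross-validated accuracy. Synthetic data only;
# leg-angle subsets are made informative and the torso subset uninformative
# purely to mirror the kind of comparison reported above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows, n_tasks = 600, 3          # e.g., straight walk, cut, cut-to-stairs
y = rng.integers(0, n_tasks, n_windows)
subsets = {                          # feature columns per sensor grouping
    "trailing_leg_angles": rng.normal(y[:, None], 0.8, (n_windows, 4)),
    "leading_leg_angles":  rng.normal(y[:, None], 0.9, (n_windows, 4)),
    "torso_imu":           rng.normal(0.0, 1.0, (n_windows, 6)),
}
for name, X in subsets.items():
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```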