

Search for: All records

Award ID contains: 1527795



  1. While developing continuous authentication systems (CAS), we generally assume that samples from both the genuine and impostor classes are readily available. However, this assumption may not hold in certain circumstances. We therefore explore the possibility of implementing CAS using only genuine samples. Specifically, we investigate the usefulness of four one-class classifiers (OCC), namely elliptic envelope, isolation forest, local outlier factor, and one-class support vector machines, as well as their fusion. The performance of these classifiers was evaluated on four distinct behavioral biometric datasets and compared with that of eight multi-class classifiers (MCC). The results demonstrate that, given sufficient training data from the genuine user, the OCC and their fusion can closely match the performance of the majority of the MCC. Our findings encourage the research community to use OCC to build CAS, as they do not require knowledge of the impostor class during enrollment.
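The genuine-only enrollment idea above can be sketched with scikit-learn's implementations of the four OCC. This is a minimal illustration, not the paper's pipeline: the feature vectors are synthetic stand-ins for behavioral biometric features, and majority voting is one simple fusion rule among several possible.

```python
# Sketch: enroll on genuine samples only, then fuse four one-class
# classifiers by majority vote. Data and parameters are illustrative.
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
genuine_train = rng.normal(0.0, 1.0, size=(200, 8))  # enrollment: genuine only
genuine_test = rng.normal(0.0, 1.0, size=(50, 8))
impostor_test = rng.normal(4.0, 1.0, size=(50, 8))   # never seen at training

classifiers = [
    EllipticEnvelope(contamination=0.05),
    IsolationForest(contamination=0.05, random_state=0),
    LocalOutlierFactor(novelty=True, contamination=0.05),
    OneClassSVM(nu=0.05, gamma="scale"),
]
for clf in classifiers:
    clf.fit(genuine_train)  # no impostor samples required

def fused_predict(X):
    """Majority vote over the four OCC decisions (+1 genuine, -1 impostor)."""
    votes = np.stack([clf.predict(X) for clf in classifiers])
    return np.sign(votes.sum(axis=0))

genuine_acc = (fused_predict(genuine_test) == 1).mean()
impostor_rej = (fused_predict(impostor_test) == -1).mean()
```

With well-separated synthetic classes the fused vote accepts most genuine windows and rejects the impostor ones; on real behavioral data the margins would be far narrower.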
  2. Wearable computing devices have become increasingly popular, and while these devices promise to improve our lives, they come with new challenges. One such device is the Google Glass, from which data can be stolen easily because touch gestures can be intercepted from the head-mounted device. This paper focuses on analyzing and combining two behavioral metrics, namely head movement (captured through the Glass) and torso movement (captured through a smartphone), to build a continuous authentication system that can be used on Google Glass alone or with the Glass paired to a smartphone. We performed a correlation analysis among the features of these two metrics and found that very little correlation exists between the features extracted from head and torso movements in most scenarios (sets of activities). This led us to combine the two metrics for authentication. We built an authentication system using these metrics and compared its performance across scenarios, achieving an EER of less than 6% when authenticating a user with head movements alone in one scenario, and an EER of less than 5% when authenticating with both head and torso movements in general.
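The correlation check that motivated combining the two metrics can be sketched as a cross-modality Pearson correlation matrix. The feature values below are synthetic stand-ins, not the Glass/smartphone data; low absolute correlations between the blocks are what would justify fusing the modalities.

```python
# Sketch: pairwise Pearson correlation between head-movement and
# torso-movement feature columns. Feature values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
head_feats = rng.normal(size=(300, 4))   # e.g. head-motion statistics
torso_feats = rng.normal(size=(300, 4))  # e.g. torso accelerometer statistics

# Correlation of all 8 columns; the off-diagonal block is head vs torso.
full = np.corrcoef(np.hstack([head_feats, torso_feats]), rowvar=False)
cross = full[:4, 4:]                 # rows: head features, cols: torso features
max_abs_corr = np.abs(cross).max()   # small value = modalities are complementary
```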
  3. Free-text keystroke dynamics is a form of behavioral biometrics with great potential for addressing the security limitations of conventional one-time authentication by continuously monitoring a user's typing behavior. This paper presents a new, enhanced continuous authentication approach that incorporates the dynamics of both keystrokes and wrist motions. Based on two sets of features (free-text keystroke latency features and statistical wrist-motion patterns extracted from wrist-worn smartwatches), two one-vs-all Random Forest Ensemble Classifiers (RFECs) are constructed and trained, respectively. A Dynamic Trust Model (DTM) is then developed to fuse the two classifiers' decisions and realize non-time-blocked, real-time authentication. In free-text typing experiments involving 25 human subjects, an impostor/intruder can be detected within no more than one sentence (56 keystrokes on average) with an FRR of 1.82% and an FAR of 1.94%. Compared with a scheme relying on keystroke latency alone, which has an FRR of 4.66%, an FAR of 17.92%, and a required 162 keystrokes, the proposed authentication system shows significant improvements in accuracy, efficiency, and usability.
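A generic trust-update loop in the spirit of the fusion step can be sketched as follows: per keystroke, the two classifier scores are combined, trust rises on genuine-looking evidence and falls otherwise, and the session locks when trust drops below a threshold. The update rule and all constants here are illustrative assumptions, not the paper's actual DTM.

```python
# Sketch of a trust-based fusion loop (illustrative, not the paper's DTM).
def update_trust(trust, keystroke_score, wrist_score,
                 weight=0.5, reward=2.0, penalty=4.0,
                 threshold=0.5, lock_level=30.0, cap=100.0):
    """Raise trust when the fused score looks genuine, lower it otherwise."""
    fused = weight * keystroke_score + (1.0 - weight) * wrist_score
    if fused >= threshold:
        trust = min(cap, trust + reward)
    else:
        trust = max(0.0, trust - penalty)
    return trust, trust >= lock_level  # (new trust, still authenticated?)

# A stream of genuine-looking scores keeps trust at the cap ...
trust, ok = 100.0, True
for _ in range(20):
    trust, ok = update_trust(trust, 0.9, 0.8)
genuine_ok = ok  # still authenticated

# ... while a run of impostor-like scores drives trust below the lock level.
for _ in range(20):
    trust, ok = update_trust(trust, 0.1, 0.2)
impostor_ok = ok  # locked out
```

Because penalties outweigh rewards, an impostor is locked out after a short run of bad scores, while occasional noisy scores from the genuine user only dent the trust level, which mirrors the non-time-blocked behavior described above.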
  4. We studied the fusion of three biometric authentication modalities, namely swiping gestures, typing patterns, and the phone-movement patterns observed during typing or swiping. A web browser was customized to collect the data generated from these modalities over four to seven days in an unconstrained environment. Several features were extracted using a sliding-window mechanism for each modality and analyzed using information gain, correlation, and symmetric uncertainty. Finally, five features from windows of continuous swipes, thirty features from windows of continuously typed letters, and nine features from the corresponding phone-movement patterns while swiping/typing were used to build the authentication system. We evaluated the performance of each modality and their fusion on a dataset of 28 users. Feature-level fusion of swiping and the corresponding phone-movement patterns achieved an authentication accuracy of 93.33%, whereas score-level fusion of typing behaviors and the corresponding phone-movement patterns achieved an authentication accuracy of 89.31%.
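The two fusion strategies compared above differ in where the modalities are combined, which can be sketched as follows. The data, the random-forest choice, and the 0.5 decision threshold are illustrative stand-ins; only the feature counts (five swipe, nine phone-movement) follow the text.

```python
# Sketch: feature-level fusion concatenates feature vectors before
# classification; score-level fusion averages per-modality scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 200
swipe = rng.normal(size=(n, 5))   # 5 swipe-window features (synthetic)
phone = rng.normal(size=(n, 9))   # 9 phone-movement features (synthetic)
y = rng.integers(0, 2, size=n)    # 1 = genuine user, 0 = other users
swipe[y == 1] += 0.8              # make the features weakly informative
phone[y == 1] += 0.8

# Feature-level fusion: concatenate, then train a single classifier.
feat_fused = RandomForestClassifier(random_state=0).fit(
    np.hstack([swipe, phone]), y)

# Score-level fusion: one classifier per modality, then average their scores.
clf_swipe = RandomForestClassifier(random_state=0).fit(swipe, y)
clf_phone = RandomForestClassifier(random_state=0).fit(phone, y)

def score_level(xs, xp):
    return (clf_swipe.predict_proba(xs)[:, 1]
            + clf_phone.predict_proba(xp)[:, 1]) / 2.0

pred_feat = feat_fused.predict(np.hstack([swipe, phone]))
pred_score = (score_level(swipe, phone) >= 0.5).astype(int)
```

Feature-level fusion lets one model exploit cross-modality interactions, while score-level fusion keeps the modalities independent and is easier to deploy when the classifiers run on different sensors, a trade-off consistent with the two accuracies reported above.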