Title: TrueHeart: Continuous Authentication on Wrist-worn Wearables Using PPG-based Biometrics
Traditional one-time user authentication can cause friction and an unfavorable user experience in many widely used applications. This is a particularly severe problem for security-sensitive facilities, where an adversary could obtain unauthorized privileges after a user's initial login. Recently, continuous user authentication (CA) has shown great potential by enabling seamless authentication with little active user participation. We devise a low-cost system that exploits a user's pulsatile signals from the photoplethysmography (PPG) sensor in commercial wrist-worn wearables for CA. Compared to existing approaches, our system requires zero user effort and is applicable to practical scenarios with non-clinical PPG measurements containing motion artifacts (MA). We explore the uniqueness of the human cardiac system and design an MA filtering method to mitigate the impact of daily activities. Furthermore, we identify general fiducial features and develop an adaptive classifier using the gradient boosting tree (GBT) method. As a result, our system can authenticate users continuously based on their cardiac characteristics with little training effort required. Experiments with our wrist-worn PPG sensing platform on 20 participants under practical scenarios demonstrate that our system achieves a CA accuracy of over 90% and a false detection rate as low as 4% in detecting random attacks.
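The abstract does not include an implementation; as a rough illustration of the classification stage only, here is a minimal sketch of training a gradient boosting tree on per-pulse fiducial feature vectors with scikit-learn. The feature dimension, synthetic data, and 0.5 acceptance threshold are hypothetical placeholders, not TrueHeart's actual pipeline.

```python
# Minimal sketch of a GBT-based authentication stage like the one described
# above. Assumes fiducial features have already been extracted from
# MA-filtered PPG pulses; the data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical fiducial feature vectors: one row per PPG pulse segment.
# Label 1 = legitimate user, 0 = other users (random-attack stand-ins).
X_user = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
X_other = rng.normal(loc=0.8, scale=1.2, size=(200, 8))
X = np.vstack([X_user, X_other])
y = np.concatenate([np.ones(200), np.zeros(200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_tr, y_tr)

# Continuous authentication: score each incoming pulse and accept/reject.
scores = clf.predict_proba(X_te)[:, 1]
accept = scores > 0.5  # threshold would be tuned against FAR/FRR in practice
print(f"accuracy: {(accept == y_te.astype(bool)).mean():.2%}")
```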
Award ID(s): 2000480, 1909963
NSF-PAR ID: 10192361
Journal Name: IEEE Conference on Computer Communications
Page Range / eLocation ID: 30 to 39
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Abstract

Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases, and variations in the spelling of individual scholars' names complicate matters further. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers is a major step toward resolving these difficulties. We maintain, however, that author identifiers by themselves are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies.

Design/methodology/approach: The approach we advance is based on commonalities among fielded data in search results. We cast a broad initial net: a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would take the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to the author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to the author (i.e., a large number of false positives). From this corpus we iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point: if 'Doe, J' and 'Doe, John' share the same author identifier, that is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate: if two author names sharing the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however; when this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation may employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification.

To achieve name consolidation independent of author identifier matches, we have developed a procedure used with the bibliometric software VantagePoint (see www.thevantagepoint.com). While our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name within this field (referred to by the script as the primary author) whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script then identifies and combines all author names sharing the primary author's surname and first initial who share commonalities in the chosen WOS field. This typically results in a significant reduction in the initial dataset size, leaving the user with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to); a minimal sketch of this consolidation idea appears after this abstract.

Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is sometimes, but not always, likely to have difficulty with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a common family name. It is more effective when match field coverage is high and a number of match fields exist.

Originality/value: Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.

Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study; we find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas in which the authors publish differ across the three case studies. Our procedure is most effective when applied to the third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as a more modest/manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable here. The procedure advanced herein is practical, replicable, and relatively user friendly; it might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to yield a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
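For illustration only, the sketch below shows the core consolidation idea from the Design section: records sharing a conclusive match-field value (ORCID, then email) are merged into one author cluster, with lower-confidence fields left for later rounds. The record layout and field names are invented; this is not the VantagePoint script itself.

```python
# Illustrative sketch of field-commonality name consolidation; the record
# layout, field names, and values are hypothetical.
from collections import defaultdict

records = [
    {"name": "Doe, J",    "orcid": "0000-0001", "email": None,         "issn": "1234-5678"},
    {"name": "Doe, John", "orcid": "0000-0001", "email": "jdoe@u.edu", "issn": "9999-0000"},
    {"name": "Doe, J",    "orcid": None,        "email": "jdoe@u.edu", "issn": "1234-5678"},
    {"name": "Doe, Jane", "orcid": None,        "email": "jane@x.org", "issn": "5555-4444"},
]

# Union-find: records sharing a conclusive field value are merged into one
# author cluster; other fields could be added in later, lower-confidence
# rounds exactly as the procedure describes.
parent = list(range(len(records)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for field in ("orcid", "email"):  # high-confidence match fields first
    by_value = defaultdict(list)
    for i, rec in enumerate(records):
        if rec[field]:
            by_value[rec[field]].append(i)
    for idxs in by_value.values():
        for i in idxs[1:]:
            union(idxs[0], i)

clusters = defaultdict(list)
for i in range(len(records)):
    clusters[find(i)].append(records[i]["name"])
print(list(clusters.values()))  # [['Doe, J', 'Doe, John', 'Doe, J'], ['Doe, Jane']]
```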
2. Hara, T.; Yamaguchi, H. (Eds.)
Prevalent wearables (e.g., smartwatches and activity trackers) demand strong security measures to protect users' private information, such as personal contacts and bank accounts. While existing two-factor authentication methods can enhance traditional user authentication, they are inconvenient because they require active participation from users. Recently, manufacturing imperfections in hardware devices (e.g., accelerometers and WiFi interfaces) have been utilized for low-effort two-factor authentication. However, these methods rely on fixed device credentials, which would require users to replace their devices once the credentials are stolen. In this work, we develop a novel device authentication system, WatchID, that can identify a user's wearable using its vibration-based device credentials. Our system exploits the vibration motors and accelerometers readily available in wearables to establish a vibration communication channel that captures a wearable's unique vibration characteristics. Compared to existing methods, our vibration-based device credentials are reprogrammable and easy to use. We develop a series of data processing methods to mitigate the impact of noise and body movements, and a lightweight convolutional neural network for feature extraction and device authentication. Extensive experimental results using five smartwatches show that WatchID achieves an average precision and recall of 98% and 94%, respectively, in various attack scenarios.
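As a hedged illustration of the "lightweight convolutional neural network" mentioned above, the sketch below defines a small 1-D CNN over 3-axis accelerometer windows in PyTorch. The layer sizes, window length, and five-device class count are assumptions for the example, not WatchID's published architecture.

```python
# Minimal sketch of a lightweight 1-D CNN for vibration credentials; the
# architecture sizes, window length, and class count are hypothetical.
import torch
import torch.nn as nn

class VibrationNet(nn.Module):
    def __init__(self, n_devices: int = 5, window: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3),  # 3-axis accelerometer
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (window // 16), n_devices)

    def forward(self, x):  # x: (batch, 3, window) vibration response
        z = self.features(x)
        return self.classifier(z.flatten(1))

# One hypothetical window of accelerometer samples during a vibration probe.
model = VibrationNet()
logits = model(torch.randn(1, 3, 512))
print(logits.shape)  # torch.Size([1, 5]): one score per enrolled device
```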
3. Continuous location authentication (CLA) seeks to continuously and automatically verify the physical presence of legitimate users in a protected indoor area. CLA can play an important role in contexts where access to electrical or physical resources must be limited to physically present legitimate users. In this paper, we present WearRF-CLA, a novel CLA scheme built upon increasingly popular wrist wearables and UHF RFID systems. WearRF-CLA explores the observation that human daily routines in a protected indoor area comprise a sequence of human-states (e.g., walking and sitting) that follow predictable state transitions. Each legitimate WearRF-CLA user registers his/her RFID tag and wrist wearable during system enrollment. After the user enters a protected area, WearRF-CLA continuously collects and processes the gyroscope data of the wrist wearable and the phase data of the RFID tag signals to verify three factors that determine the user's physical presence/absence without explicit user involvement: (1) the tag ID, as in a traditional RFID authentication system; (2) the validity of the human-state chain; and (3) the continuous coexistence of the paired wrist wearable and RFID tag with the user. The user passes CLA if and only if all three factors can be validated. Extensive user experiments on commodity smartwatches and UHF RFID devices confirm the high security and low authentication latency of WearRF-CLA.
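The three-factor decision rule lends itself to a compact sketch. The toy code below validates (1) tag identity, (2) the human-state chain against a set of allowed transitions, and (3) a wearable/tag coexistence score. In the real system the latter two are derived from gyroscope and RFID phase data, which is elided here; all names and thresholds are hypothetical.

```python
# Toy sketch of the three-factor CLA decision logic described above.
VALID_TRANSITIONS = {  # plausible human-state chain in a daily routine
    ("entering", "walking"), ("walking", "sitting"),
    ("sitting", "walking"), ("walking", "leaving"),
}

def chain_is_valid(states):
    # Every consecutive pair of states must be an allowed transition.
    return all((a, b) in VALID_TRANSITIONS for a, b in zip(states, states[1:]))

def authenticate(tag_id, enrolled_tags, states, coexistence_score,
                 coexistence_threshold=0.8):
    factor1 = tag_id in enrolled_tags                       # (1) tag identity
    factor2 = chain_is_valid(states)                        # (2) human-state chain
    factor3 = coexistence_score >= coexistence_threshold    # (3) coexistence
    return factor1 and factor2 and factor3                  # pass iff all three hold

print(authenticate("tag-42", {"tag-42"},
                   ["entering", "walking", "sitting"], 0.93))  # True
```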
4. With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), massive amounts of private and sensitive information are stored on these devices. To prevent unauthorized access, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge attacks, and forged-biometric attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging unique cardiac biometrics extracted from the built-in cameras readily available in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in a fingertip pressed on the built-in camera. To mitigate the impact of ambient lighting conditions and human movements under practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of the cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam achieves effective and reliable user verification, with an average true positive rate (TPR) of over 99% while maintaining a false positive rate (FPR) as low as 4%.
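As an illustration of a PCA-grounded feature transformation step like the one described, the sketch below fits PCA on enrollment-session feature vectors and projects verification probes into the same subspace. The morphological features are faked with random data, and the distance-to-template check is a stand-in, not CardioCam's actual verifier.

```python
# Sketch of a PCA-based feature transformation for verification; all data
# and the acceptance rule are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical morphological feature vectors from fingertip cardiac cycles.
enroll_features = rng.normal(size=(100, 20))  # enrollment session
probe_features = rng.normal(size=(10, 20))    # later verification attempts

# Fit PCA on enrollment data, then project probes into the same subspace,
# keeping enough components to explain 95% of the variance.
pca = PCA(n_components=0.95).fit(enroll_features)
enroll_pc = pca.transform(enroll_features)
probe_pc = pca.transform(probe_features)

# A simple distance-to-template check stands in for the actual verifier:
# accept probes closer to the template than the 95th percentile of
# enrollment distances.
template = enroll_pc.mean(axis=0)
enroll_dist = np.linalg.norm(enroll_pc - template, axis=1)
probe_dist = np.linalg.norm(probe_pc - template, axis=1)
print(probe_dist < np.percentile(enroll_dist, 95))
```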
5. Free-text keystroke dynamics is a form of behavioral biometrics with great potential for addressing the security limitations of conventional one-time authentication by continuously monitoring a user's typing behavior. This paper presents a new, enhanced continuous authentication approach that incorporates the dynamics of both keystrokes and wrist motions. Based upon two sets of features (free-text keystroke latency features and statistical wrist motion patterns extracted from wrist-worn smartwatches), two one-vs-all Random Forest Ensemble Classifiers (RFECs) are constructed and trained, respectively. A Dynamic Trust Model (DTM) is then developed to fuse the two classifiers' decisions and realize non-time-blocked, real-time authentication. In free-text typing experiments involving 25 human subjects, an imposter/intruder can be detected within no more than one sentence (56 keystrokes on average) with an FRR of 1.82% and an FAR of 1.94%. Compared with a scheme relying on keystroke latency alone, which has an FRR of 4.66%, an FAR of 17.92%, and requires 162 keystrokes, the proposed authentication system shows significant improvements in accuracy, efficiency, and usability.
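The decision-fusion idea can be sketched compactly: combine the two classifiers' per-event probabilities into one score and let a trust value drift up on user-like behavior and down otherwise, locking the session when it falls below a threshold. The update rule, weights, and constants below are invented for illustration and are not the paper's DTM.

```python
# Toy sketch of fusing two per-modality scores with a dynamic trust value;
# the update rule and all constants are hypothetical.
def update_trust(trust, keystroke_score, wrist_score,
                 w_key=0.6, w_wrist=0.4, gain=0.5):
    fused = w_key * keystroke_score + w_wrist * wrist_score  # decision fusion
    trust += gain * (fused - 0.5)      # >0.5 raises trust, <0.5 lowers it
    return min(max(trust, 0.0), 1.0)   # clamp to [0, 1]

trust, LOCK_THRESHOLD = 0.8, 0.5
# Hypothetical per-event probabilities from the two one-vs-all classifiers:
# two user-like events followed by three imposter-like events.
stream = [(0.9, 0.8), (0.85, 0.9), (0.2, 0.1), (0.1, 0.2), (0.15, 0.1)]
for key_p, wrist_p in stream:
    trust = update_trust(trust, key_p, wrist_p)
    print(f"trust={trust:.2f}", "LOCK" if trust < LOCK_THRESHOLD else "ok")
```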