

Title: A Scalable Solution for Signaling Face Touches to Reduce the Spread of Surface-based Pathogens
Hand-to-face transmission has been estimated to be a minor, yet non-negligible, vector of COVID-19 transmission and a major vector for multiple other pathogens. At the same time, because it cannot be effectively addressed with mainstream protection measures, such as wearing masks or tracing contacts, it remains largely untackled. To help address this issue, we have developed Saving Face, an app that alerts users when they are about to touch their faces by analyzing the distortion patterns in the ultrasound signal emitted by their earphones. The system relies only on pre-existing hardware (a smartphone with generic earphones), which allows it to scale rapidly to billions of smartphone users worldwide. This paper describes the design, implementation, and evaluation of the system, as well as the results of a user study testing the solution's accuracy, robustness, and user experience during various day-to-day activities (93.7% sensitivity and 91.5% precision, N=10). While this paper focuses on the system's application to detecting hand-to-face gestures, the technique is also applicable to other types of gestures and gesture-based applications.
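For readers curious about how an earphone-emitted ultrasound tone can flag an approaching hand, the sketch below illustrates one plausible signal-level heuristic: monitor the Doppler sidebands around an assumed 20 kHz pilot tone played through the earphones and flag frames whose sideband energy spikes. This is a minimal illustration, not the paper's actual detection pipeline; the sampling rate, carrier frequency, band width, and threshold are all assumed values.

```python
# Minimal sketch (not the paper's pipeline): a hand moving toward the face
# Doppler-shifts reflections of a near-ultrasound pilot tone, which shows up
# as energy in the sidebands around the carrier. All constants are assumptions.
import numpy as np

FS = 48_000          # microphone sampling rate (Hz), assumed
CARRIER = 20_000     # emitted pilot tone (Hz), assumed
FRAME = 4096         # samples per analysis frame
SIDEBAND_HZ = 200    # width of the Doppler band inspected on each side
THRESHOLD = 0.15     # sideband-to-carrier energy ratio that triggers an alert

def sideband_ratio(frame: np.ndarray) -> float:
    """Energy near (but not at) the carrier, relative to the carrier peak."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    carrier_bin = np.argmin(np.abs(freqs - CARRIER))
    carrier_power = spectrum[carrier_bin] ** 2 + 1e-12
    band = (np.abs(freqs - CARRIER) > 20) & (np.abs(freqs - CARRIER) < SIDEBAND_HZ)
    return float(np.sum(spectrum[band] ** 2) / carrier_power)

def is_face_touch_candidate(frame: np.ndarray) -> bool:
    """Flag a frame whose Doppler sidebands exceed the (assumed) threshold."""
    return sideband_ratio(frame) > THRESHOLD
```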
Award ID(s):
2032704
NSF-PAR ID:
10248892
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
5
Issue:
1
ISSN:
2474-9567
Page Range / eLocation ID:
1 to 22
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile devices typically rely on entry-point and other one-time authentication mechanisms such as a password, PIN, fingerprint, iris, or face. But these authentication types are prone to a wide attack vector and, worse still, once compromised, fail to protect the user's account and data. Patterned passwords are prone to smudge attacks, fingerprint scanning is prone to spoof attacks, and other forms of attack include video capture and shoulder surfing. Given the increasingly important roles smartphones play in e-commerce and other operations where security is crucial, there is a strong need for continuous authentication mechanisms to complement and enhance one-time authentication, so that even if the authentication at the point of login is compromised, the device is still unobtrusively protected by additional security measures in a continuous fashion. The research community has investigated several continuous authentication mechanisms based on unique human behavioral traits, including typing, swiping, and gait. In this work, we focus on investigating physiological traits. While interacting with hand-held devices, individuals strive to achieve stability and precision: a certain degree of stability is required to manipulate and interact successfully with smartphones, while precision is needed for tasks such as touching or tapping a small target on the touch screen (Sitová et al., 2015). As a result, individuals tend to develop their own postural preferences, such as holding the phone with one or both hands, supporting the hands on the sides of the upper torso while interacting, keeping the phone on a table and typing with a preferred finger, resting the phone on the knees while sitting cross-legged, or supporting both elbows on chair handles while typing. Physiological traits, such as hand size, grip strength, muscles, and age, further shape these interactions. In contrast to one-time authentication, continuous authentication based on traits of human behavior can offer additional security measures in the device to authenticate against unauthorized users, even after the entry-point and one-time authentication has been compromised. To this end, we have collected a new data set of multiple behavioral biometric modalities (49 users) captured while a user fills out an account recovery form while seated, using an Android app. These include motion events (acceleration and angular velocity), touch and swipe events, keystrokes, and pattern tracing. In this paper, we focus on authentication based on motion events by evaluating a set of score-level fusion techniques to authenticate users based on the acceleration and angular velocity data. The best EERs of 2.4% and 6.9%, for intra- and inter-session evaluation respectively, are achieved by fusing acceleration and angular velocity using Nandakumar et al.'s likelihood ratio (LR) based score fusion (Ray, Hou, Schuckers, and Barbir, ICISSP 2021, DOI: 10.5220/0010225804240431).
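As a rough illustration of the score-level fusion described above, the sketch below fits per-modality genuine/impostor score models, fuses acceleration and angular-velocity scores by their likelihood ratios, and computes an equal error rate. It is a simplified stand-in for Nandakumar et al.'s LR fusion (which uses Gaussian mixture models rather than the single Gaussians used here); the function names and the single-Gaussian assumption are illustrative only.

```python
# Hedged sketch of likelihood-ratio (LR) score fusion: model genuine and
# impostor score distributions per modality and fuse by the product of
# likelihood ratios (sum in the log domain). Single Gaussians stand in for
# the GMMs of the original LR-fusion formulation to keep the example short.
import numpy as np
from scipy.stats import norm

def fit_lr(genuine: np.ndarray, impostor: np.ndarray):
    """Return a function mapping a raw match score to its likelihood ratio."""
    g_mu, g_sd = genuine.mean(), genuine.std() + 1e-9
    i_mu, i_sd = impostor.mean(), impostor.std() + 1e-9
    return lambda s: norm.pdf(s, g_mu, g_sd) / (norm.pdf(s, i_mu, i_sd) + 1e-12)

def fuse(acc_lr, gyro_lr, acc_score: float, gyro_score: float) -> float:
    """Fused score = sum of per-modality log likelihood ratios."""
    return float(np.log(acc_lr(acc_score) + 1e-12) + np.log(gyro_lr(gyro_score) + 1e-12))

def eer(genuine_scores: np.ndarray, impostor_scores: np.ndarray) -> float:
    """Equal error rate: operating point where FAR is closest to FRR."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([np.mean(impostor_scores >= t) for t in thresholds])
    frr = np.array([np.mean(genuine_scores < t) for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0
```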
  2. People with hearing and speaking disabilities face significant hurdles in communication. Knowledge of sign language can help mitigate these hurdles, but most people without disabilities, including relatives, friends, and care providers, cannot understand sign language. Automated tools can allow people with disabilities and those around them to communicate ubiquitously, and in a variety of situations, with non-signers. There are currently two main approaches to recognizing sign language gestures. The first is a hardware-based approach, which uses gloves or other hardware to track hand position and determine gestures. The second is a software-based approach, where a video of the hands is taken and gestures are classified using computer vision techniques. However, some hardware, such as a phone's internal sensors or a device worn on the arm to track muscle data, is less accurate, and wearing it can be cumbersome or uncomfortable. The software-based approach, on the other hand, depends on the lighting conditions and on the contrast between the hands and the background. We propose a hybrid approach that takes advantage of low-cost sensory hardware and combines it with a smart sign-recognition algorithm, with the goal of developing a more efficient gesture recognition system. The Myo band-based approach using the Support Vector Machine method achieves an accuracy of only 49%, while the software-based approach, using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) methods, achieves an accuracy of over 80% in our experiments. Our method combines the two approaches and shows the potential for improvement. Our experiments use a dataset of nine gestures generated from multiple videos, each repeated five times for a total of 45 trials for both the software-based and hardware-based modules. Apart from showing the performance of each approach, our results show that with an improved hardware module, the accuracy of the combined approach can be significantly improved.
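The hybrid idea above amounts to decision-level fusion of the hardware and software modules; a minimal sketch of one such fusion rule (a weighted average of the two modules' class-probability vectors) follows. The weights are assumptions, and only the nine-class setup mirrors the experiment described above; the specific fusion rule is illustrative, not the paper's method.

```python
# Illustrative late (decision-level) fusion of a hardware-based and a
# software-based gesture classifier: weight each module's class-probability
# vector and pick the argmax. Weights and probability sources are assumed.
import numpy as np

N_GESTURES = 9  # matches the nine-gesture dataset described above

def fuse_predictions(p_hardware: np.ndarray, p_software: np.ndarray,
                     w_hardware: float = 0.4, w_software: float = 0.6) -> int:
    """Return the fused gesture label in the range 0..N_GESTURES-1."""
    assert p_hardware.shape == (N_GESTURES,) and p_software.shape == (N_GESTURES,)
    fused = w_hardware * p_hardware + w_software * p_software
    return int(np.argmax(fused))
```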
  3. Spam phone calls have rapidly grown from a nuisance into an increasingly effective scam delivery tool. To counter this increasingly successful attack vector, a number of commercial smartphone apps that promise to block spam phone calls have appeared on app stores and are now used by hundreds of thousands or even millions of users. However, following a business model similar to some online social network services, these apps often collect call records or other potentially sensitive information from users' phones with little or no formal privacy guarantees. In this paper, we study whether it is possible to build a practical collaborative phone blacklisting system that uses local differential privacy (LDP) mechanisms to provide clear privacy guarantees. We analyze the challenges and trade-offs related to using LDP, evaluate our LDP-based system on real-world user-reported call records collected by the FTC, and show that it is possible to learn a phone blacklist with a reasonable overall privacy budget while preserving users' privacy and maintaining utility for the learned blacklist.
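To make the LDP idea concrete, the sketch below shows the classic randomized-response primitive a collaborative blacklisting client could use to report whether it flagged a number as spam, together with the server-side de-biasing step. The epsilon value and reporting scheme are illustrative assumptions; the paper designs and evaluates its own LDP mechanisms on the FTC data.

```python
# Randomized response under local differential privacy: each client reports
# its true spam/not-spam bit with probability e^eps / (e^eps + 1) and flips it
# otherwise; the server de-biases the aggregate. Epsilon here is illustrative.
import math
import random

def randomized_response(flagged_spam: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps/(e^eps+1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return flagged_spam if random.random() < p_truth else not flagged_spam

def estimate_spam_fraction(reports: list, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of clients that flagged the number."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / max(len(reports), 1)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```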
  4. Augmented reality (AR) is a computer graphics technique that creates a seamless interface between the real and virtual worlds. AR usage is rapidly spreading across diverse areas such as healthcare, education, and entertainment. Despite its immense potential, AR interface control still relies on an external joystick, a smartphone, or a fixed camera system that is susceptible to lighting conditions. Here, an AR-integrated soft wearable electronic system that detects a subject's gestures for more intuitive, accurate, and direct control of external systems is introduced. Specifically, a soft, all-in-one wearable device includes a scalable electrode array and an integrated wireless system to measure electromyograms for real-time, continuous recognition of hand gestures. An advanced machine learning algorithm embedded in the system enables the classification of ten different classes with an accuracy of 96.08%. Compared to conventional rigid wearables, the multi-channel soft wearable system offers an enhanced signal-to-noise ratio and consistency over multiple uses owing to its skin conformality. The demonstration of the AR-integrated soft wearable system for drone control captures the potential of the platform technology to offer numerous human–machine interface opportunities for users to interact remotely with external hardware and software.
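A hedged sketch of the generic recipe behind such EMG-based gesture recognition follows: window the multi-channel electromyogram stream, extract simple time-domain features, and train a classifier. The channel count, window length, feature set, and classifier are assumptions for illustration and do not reflect the embedded model reported above.

```python
# Generic EMG gesture-recognition recipe (assumed values, not the paper's
# embedded model): slide a window over multi-channel EMG samples, extract
# time-domain features, and fit an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS = 8   # assumed number of EMG electrodes
WINDOW = 200     # assumed samples per classification window

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel mean absolute value, RMS, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zc])

def train(windows: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """windows: (n, WINDOW, N_CHANNELS) EMG segments; labels: (n,) gesture ids."""
    X = np.stack([features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```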

     
  5. In this paper, we explore quick 3D shape composition during early-phase spatial design ideation. Our approach is to re-purpose a smartphone as a hand-held reference plane for creating, modifying, and manipulating 3D sweep surfaces. We implemented MobiSweep, a prototype application to explore a new design space of constrained spatial interactions that combine direct orientation control with indirect position control via well-established multi-touch gestures. MobiSweep leverages kinesthetically aware interactions for the creation of a sweep surface without explicit position tracking. The design concepts generated by users, in conjunction with their feedback, demonstrate the potential of such interactions in enabling spatial ideation. 
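The geometric core of a sweep-surface tool like the one described above can be sketched as extruding a 2D cross-section along a sequence of poses, with rotations supplied by the phone's orientation and positions by touch input. The code below is such a sketch under those assumptions; it is not MobiSweep's implementation, and the pose sources and mesh format are illustrative.

```python
# Minimal sweep-surface sketch: lift a 2D profile into 3D, then place one copy
# per (position, rotation) frame along the sweep path. Consecutive rings of
# vertices can be triangulated into a mesh by downstream code.
import numpy as np

def sweep(cross_section_2d: np.ndarray, positions: np.ndarray,
          rotations: np.ndarray) -> np.ndarray:
    """
    cross_section_2d: (k, 2) profile points in the reference plane.
    positions:        (n, 3) plane origins along the sweep.
    rotations:        (n, 3, 3) plane orientations as rotation matrices.
    Returns (n, k, 3) vertices of the swept surface.
    """
    profile3d = np.hstack([cross_section_2d,
                           np.zeros((len(cross_section_2d), 1))])  # lift to z=0
    rings = []
    for origin, rot in zip(positions, rotations):
        rings.append(profile3d @ rot.T + origin)  # rotate, then translate
    return np.stack(rings)
```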