Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touches is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face rather than detecting actual touches; as such, they are prone to false positives when a hand or object merely moves near the face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating touches on sensitive/mucosal areas from touches on other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals: thermal sensors capture the infrared radiation emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. The processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of each touch. We fabricated prototypes using off-the-shelf hardware and conducted experiments with 14 participants while they performed various daily activities (e.g., drinking, talking). Results show a macro-F1 score of 83.4% for touch detection with leave-one-user-out cross-validation and a macro-F1 score of 90.1% for touch zone identification with a personalized model.
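To make the evaluation protocol above concrete, the sketch below illustrates leave-one-user-out cross-validation scored with macro-F1. It is only an illustration: the synthetic feature arrays, the 14-user grouping, and the random-forest stand-in for TouchNet are assumptions, not the authors' pipeline.

```python
# Minimal sketch of leave-one-user-out evaluation with macro-F1.
# A random forest stands in for the paper's TouchNet model (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(1400, 32))        # placeholder thermal + physiological features
y = rng.integers(0, 2, size=1400)      # 1 = touch, 0 = no touch (placeholder labels)
users = np.repeat(np.arange(14), 100)  # 14 participants, 100 windows each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"macro-F1 (leave-one-user-out): {np.mean(scores):.3f}")
```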
Whose Touch is This?: Understanding the Agency Trade-Off Between User-Driven Touch vs. Computer-Driven Touch
Force-feedback enhances digital touch by enabling users to share non-verbal aspects such as rhythm, poses, and so on. To achieve this, interfaces actuate the user's hand to touch involuntarily (using exoskeletons or electrical muscle stimulation); we refer to this as computer-driven touch. Unfortunately, forcing users to touch causes a loss of their sense of agency. While researchers found that delaying the timing of computer-driven touch preserves agency, they only considered the naïve case in which the user-driven touch is aligned with the computer-driven touch. We argue this is unlikely, as it assumes we can perfectly predict user touches. But what about all the remaining situations: when the haptics force the user into an outcome they did not intend, or assist the user toward an outcome they could not achieve alone? We unveil, via an experiment, what happens in these novel situations. From our findings, we synthesize a framework that enables researchers of digital-touch systems to trade off haptic assistance against the user's sense of agency.
- Award ID(s): 2047189
- PAR ID: 10390283
- Date Published:
- Journal Name: ACM Transactions on Computer-Human Interaction
- Volume: 29
- Issue: 3
- ISSN: 1073-0516
- Page Range / eLocation ID: 1 to 27
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Communication during touch provides a seamless and natural way of interaction between humans and ambient intelligence. Current techniques that couple wireless transmission with touch detection suffer from problems of selectivity and security, i.e., they cannot ensure communication only through direct touch and not through close proximity. We present BodyWire-HCI, which utilizes the human body as a wire-like communication channel to enable human–computer interaction that, for the first time, demonstrates selective and physically secure communication strictly during touch. Signal leakage out of the body is minimized by a novel, low-frequency Electro-QuasiStatic Human Body Communication (EQS-HBC) technique that enables interaction strictly when there is a conductive communication path between the transmitter and receiver through the human body. Design techniques such as capacitive termination and voltage-mode operation minimize the human body channel loss, allowing operation at low frequencies and enabling EQS-HBC. The demonstrations highlight the impact of BodyWire-HCI in enabling new human–machine interaction modalities for a variety of application scenarios, such as secure authentication (e.g., opening a door and pairing a smart device) and information exchange (e.g., payment, image, medical data, and personal profile transfer) through touch (https://www.youtube.com/watch?v=Uwrig2XQIH8).
- Mobile user authentication (MUA) has become a gatekeeper for securing a wealth of personal and sensitive information residing on mobile devices. Keystrokes and touch gestures are two types of touch behaviors. It is not uncommon for a mobile user to make multiple MUA attempts, yet there is a lack of empirical comparison of different types of touch-dynamics-based MUA methods across attempts. Given the richness of touch dynamics, a large number of features have been extracted from it to build MUA models; however, there is little understanding of which features matter for the performance of such models. Further, the training sample size for template generation is critical for real-world application of MUA models, but such information is lacking for touch-gesture-based methods. This study addresses these limitations by conducting experiments with two MUA prototypes. The empirical results not only serve as a guide for the design of touch-dynamics-based MUA methods but also offer suggestions for improving the performance of MUA models.
- With the growing popularity of smartphones, continuous and implicit authentication of such devices via behavioral biometrics such as touch dynamics becomes an attractive option, especially when physical biometrics are challenging to utilize or their frequent, continuous use annoys the user. This paper presents a touchstroke authentication model based on several classification algorithms and compares their performance in authenticating legitimate smartphone users. The evaluation results suggest that comparable authentication accuracies are achievable, with an average accuracy of 91% for the best performing model (a minimal sketch of such a classifier comparison appears after this list). This research is supervised by Dr. Debzani Deb (debd@wssu.edu), Department of Computer Science at Winston-Salem State University, NC.
- Over the past decade, augmented reality (AR) developers have explored a variety of approaches to allow users to interact with the information displayed on smart glasses and head-mounted displays (HMDs). Current interaction modalities such as mid-air gestures, voice commands, or hand-held controllers provide a limited range of interactions with the virtual content; they can also be exhausting, uncomfortable, obtrusive, and socially awkward. There is a need for comfortable interaction techniques for smart glasses and HMDs that do not demand visual attention. This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities to interact with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users' interactions in AR. The straps can withstand strains up to 190% while remaining sensitive to touch inputs, and they allow the effective combination of these inputs as a mode of interaction with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices (a hypothetical sketch of such touch-stretch input handling follows this list). Furthermore, we conducted a user study with 15 participants to determine the potential implications of using StretchAR as an input modality when placed on four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can be used as an efficient and convenient input modality for smart glasses with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we provide recommendation guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
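The touchstroke study above compares several classification algorithms for authenticating legitimate users. The sketch below is a minimal, hypothetical illustration of such a comparison; the synthetic feature matrix, the feature names in the comments, and the choice of classifiers are assumptions, not the study's actual data or models.

```python
# Hypothetical comparison of classifiers for touchstroke-based authentication.
# Synthetic features stand in for real touch-dynamics data (not the study's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))    # e.g., dwell time, flight time, pressure, touch area
y = rng.integers(0, 2, size=600)  # 1 = legitimate user, 0 = impostor

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```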
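As a companion to the StretchAR description, the following sketch shows one way simultaneous touch and stretch readings from a strap could be mapped to discrete input events. The threshold values, event names, and sensor-reading structure are purely hypothetical illustrations, not the StretchAR implementation.

```python
# Hypothetical mapping from (touch, strain) strap readings to input events.
# Thresholds and event names are illustrative only.
from dataclasses import dataclass

@dataclass
class StrapReading:
    touch_active: bool   # capacitive touch detected on the strap
    strain_ratio: float  # current length / resting length (1.0 = unstretched)

def classify_event(reading: StrapReading, stretch_threshold: float = 1.2) -> str:
    stretched = reading.strain_ratio >= stretch_threshold
    if reading.touch_active and stretched:
        return "touch+stretch"   # e.g., select and drag an AR widget
    if reading.touch_active:
        return "touch"           # e.g., tap to select a menu item
    if stretched:
        return "stretch"         # e.g., scroll or zoom the AR content
    return "idle"

print(classify_event(StrapReading(touch_active=True, strain_ratio=1.35)))  # touch+stretch
```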