Title: Fabriccio: Touchless Gestural Input on Interactive Fabrics
We present Fabriccio, a touchless gesture sensing technique developed for interactive fabrics using Doppler motion sensing. Our prototype was developed using a pair of loop antennas (one for transmitting and the other for receiving), made of conductive thread sewn onto a fabric substrate. The antenna type, configuration, transmission lines, and operating frequency were carefully chosen to balance the complexity of the fabrication process against the sensitivity of our system to touchless hand gestures performed at a 10 cm distance. Through a ten-participant study, we evaluated the performance of our proposed sensing technique across 11 touchless gestures as well as 1 touch gesture. The study yielded a 92.8% cross-validation accuracy and an 85.2% leave-one-session-out accuracy. We conclude by presenting several applications that demonstrate the unique interactions enabled by our technique on soft objects.
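The Doppler sensing principle behind this technique can be illustrated with a minimal sketch. All numbers below are hypothetical, not the paper's actual antenna parameters: a hand moving toward the antenna shifts the reflected carrier by roughly 2·v·f_c/c, and after mixing with the transmit carrier that shift appears as a low-frequency baseband tone, which a classifier can use as a gesture feature.

```python
import numpy as np

def doppler_shift_hz(radial_velocity_mps, carrier_hz):
    # f_d ~= 2 * v * f_c / c (factor 2: round trip out to the hand and back)
    c = 3.0e8
    return 2.0 * radial_velocity_mps * carrier_hz / c

# Hypothetical parameters: a 2.4 GHz carrier and a hand moving at 0.5 m/s.
fd = doppler_shift_hz(0.5, 2.4e9)           # 8 Hz

# Simulate 1 s of complex baseband at the receiver after mixing with the
# transmit carrier: the hand motion shows up as a tone at fd.
fs = 256                                    # baseband sample rate (Hz)
t = np.arange(fs) / fs
baseband = np.exp(2j * np.pi * fd * t)

# The dominant FFT bin is the kind of Doppler feature a gesture
# classifier would consume.
spectrum = np.abs(np.fft.fft(baseband))
peak_hz = np.argmax(spectrum[: fs // 2])    # bin width is 1 Hz here
```

With a 1 s window the FFT bins are 1 Hz apart, so the 8 Hz tone lands exactly in bin 8; shorter windows would trade frequency resolution for gesture-tracking latency.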
Award ID(s):
1835983
PAR ID:
10196448
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Conference on Human Factors in Computing Systems (CHI)
Page Range / eLocation ID:
1 to 14
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Tigrini, Andrea (Ed.)
    Hand gesture classification is crucial for the control of many modern technologies, ranging from virtual and augmented reality systems to assistive mechatronic devices. A prominent control technique employs surface electromyography (EMG) and pattern recognition algorithms to identify specific patterns in muscle electrical activity and translate these into device commands. While well established in consumer, clinical, and research applications, this technique suffers from misclassification errors caused by limb movements and the weight of manipulated objects, both vital aspects of how we use our hands in daily life. An emerging alternative control technique is force myography (FMG), which uses pattern recognition algorithms to predict hand gestures from the axial forces present at the skin’s surface created by contractions of the underlying muscles. As EMG and FMG capture different physiological signals associated with muscle contraction, we hypothesized that each may offer unique additional information for gesture classification, potentially improving classification accuracy in the presence of limb position and object loading effects. We therefore tested the effect of limb position and grasped load on 3 sensing modalities: EMG, FMG, and the fused combination of the two. 27 able-bodied participants performed a grasp-and-release task with 4 hand gestures at 8 positions and under 5 object weight conditions. We then examined the effects of limb position and grasped load on gesture classification accuracy across each sensing modality. Position and grasped load had statistically significant effects on the classification performance of all 3 sensing modalities, and the combination of EMG and FMG provided the highest classification accuracy over hand gesture, limb position, and grasped load combinations (97.34%), followed by FMG (92.27%) and then EMG (82.84%). These findings suggest that adding FMG to traditional EMG control systems provides unique additional data for more effective device control and can help accommodate different limb positions and grasped-object loads.
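The feature-level fusion this study evaluates can be sketched as follows. The synthetic data, the feature counts, and the deliberately simple nearest-centroid classifier are all our assumptions for illustration, not the study's actual pipeline: the point is only that EMG and FMG feature vectors are concatenated per trial before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic trials: 8 EMG features (e.g. mean absolute value
# per channel) and 8 FMG features (per-sensor surface force) per trial.
def make_trials(center_emg, center_fmg, n=40):
    emg = center_emg + 0.3 * rng.standard_normal((n, 8))
    fmg = center_fmg + 0.3 * rng.standard_normal((n, 8))
    return np.hstack([emg, fmg])   # feature-level fusion: concatenate

fist = make_trials(np.full(8, 1.0), np.full(8, 0.2))
open_hand = make_trials(np.full(8, 0.2), np.full(8, 1.0))

X = np.vstack([fist, open_hand])           # (80 trials, 16 fused features)
y = np.array([0] * 40 + [1] * 40)

# Toy nearest-centroid classifier on the fused feature space.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Because the two modalities disagree on which features are informative for each gesture, the fused 16-dimensional space separates the classes more cleanly than either 8-dimensional space alone — the intuition behind the study's hypothesis.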
  2. Wearable internet of things (IoT) devices can enable a variety of biomedical applications, such as gesture recognition, health monitoring, and human activity tracking. Size and weight constraints limit the battery capacity, which leads to frequent charging requirements and user dissatisfaction. Minimizing the energy consumption not only alleviates this problem, but also paves the way for self-powered devices that operate on harvested energy. This paper considers an energy-optimal gesture recognition application that runs on energy-harvesting devices. We first formulate an optimization problem for maximizing the number of recognized gestures when energy budget and accuracy constraints are given. Next, we derive an analytical energy model from the power consumption measurements using a wearable IoT device prototype. Then, we prove that maximizing the number of recognized gestures is equivalent to minimizing the duration of gesture recognition. Finally, we utilize this result to construct an optimization technique that maximizes the number of gestures recognized under the energy budget constraints while satisfying the recognition accuracy requirements. Our extensive evaluations demonstrate that the proposed analytical model is valid for wearable IoT applications, and the optimization approach increases the number of recognized gestures by up to 2.4× compared to a manual optimization. 
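The equivalence this paper proves — maximizing recognized gestures under a fixed energy budget is the same as minimizing per-gesture recognition duration — can be illustrated with back-of-the-envelope arithmetic. All numbers below are hypothetical, not measurements from the paper's prototype:

```python
def max_gestures(e_budget_mj, p_active_mw, t_rec_ms, e_overhead_mj=0.0):
    # Each recognized gesture costs the active power times the recognition
    # duration, plus a fixed per-gesture overhead (wake-up, radio, etc.).
    # mW * s = mJ, so units stay consistent.
    e_per_gesture_mj = p_active_mw * (t_rec_ms / 1000.0) + e_overhead_mj
    return int(e_budget_mj // e_per_gesture_mj)

# Hypothetical numbers: a 500 mJ harvested budget and 20 mW active power.
slow = max_gestures(500.0, 20.0, 250.0)   # 250 ms per recognition
fast = max_gestures(500.0, 20.0, 125.0)   # halving the duration
```

Halving the recognition duration doubles the gesture count under the same budget, which is why the paper's optimization reduces to shortening recognition time subject to the accuracy constraint.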
  3. Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance nonverbal capabilities for armless robots, this article presents two studies that explore two different categories of mixed reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the tradeoffs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that in fact these different gestures should not be viewed as mutually exclusive alternatives, and that by using them together, robots can achieve both task-oriented and social benefits. 
  4.
    This research establishes a better understanding of syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). In this study, time windows for gesture and speech multimodal interactions are developed using the start and stop times of gestures and speech as well as the stroke times for gestures. While gestures commonly precede speech by 81 ms, we find that the stroke of the gesture commonly falls within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned. Lastly, we examine the trends across the most common proposals for each modality, showing that disagreement between proposals is often caused by variation in hand posture or syntax. This allows us to present aliasing recommendations to increase the percentage of users' natural interactions captured by future multimodal interactive systems.
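The timing analysis described above can be sketched as follows. The trial timestamps and the `aligned` helper are invented for illustration, not the study's actual analysis code: the idea is simply to compare each gesture's stroke time against speech onset within a small window.

```python
# Hypothetical trials: all times in milliseconds relative to trial start.
trials = [
    {"gesture_start": 100, "stroke": 190, "speech_start": 181},
    {"gesture_start": 250, "stroke": 332, "speech_start": 340},
]

def aligned(trial, window_ms=10):
    # The stroke (the information-carrying phase of the gesture) counts as
    # aligned when it falls within +/- window_ms of speech onset.
    return abs(trial["stroke"] - trial["speech_start"]) <= window_ms

# How far speech onset lags the start of the gesture, per trial.
lead_times = [t["speech_start"] - t["gesture_start"] for t in trials]
alignment = [aligned(t) for t in trials]
```

Under this toy data both strokes fall inside the 10 ms window even though the gestures themselves start tens of milliseconds before speech — the pattern the study reports.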
  5. Current systems that use gestures to enable storytelling tend to rely mostly on a pre-scripted set of gestures or on manipulative gestures with respect to tangibles. Our research aims to inform the design of gesture recognition systems for storytelling with implications derived from a feature-based analysis of iconic gestures that occur during naturalistic oral storytelling. We collected story retellings of a collection of cartoon stimuli from 20 study participants, and a gesture analysis was performed on videos of the retellings, focusing on iconic gestures. Iconic gestures are a type of representational gesture that conveys information about objects, such as their shape, location, or movement. The form features of the iconic gestures were analyzed with respect to the concepts they portrayed, and the patterns identified between the two were used to create recommendations for the gesture-form patterns a system could be primed to recognize.