

Title: Fabriccio: Touchless Gestural Input on Interactive Fabrics
We present Fabriccio, a touchless gesture sensing technique developed for interactive fabrics using Doppler motion sensing. Our prototype uses a pair of loop antennas (one for transmitting and the other for receiving) made of conductive thread sewn onto a fabric substrate. The antenna type, configuration, transmission lines, and operating frequency were carefully chosen to balance the complexity of the fabrication process against the sensitivity of the system to touchless hand gestures performed at a 10 cm distance. Through a ten-participant study, we evaluated the performance of our sensing technique across 11 touchless gestures as well as 1 touch gesture, yielding 92.8% cross-validation accuracy and 85.2% leave-one-session-out accuracy. We conclude by presenting several applications that demonstrate the unique interactions enabled by our technique on soft objects.
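Although the paper's RF front end is fabric-specific, the underlying Doppler principle is easy to illustrate: mix the transmitted continuous-wave tone with the received echo, and the difference frequency that falls out at baseband is the Doppler shift produced by hand motion. Below is a minimal Python sketch of that signal path; the sample rate, carrier frequency, and 40 Hz shift are invented for illustration and are not Fabriccio's actual operating parameters.

```python
# Minimal sketch of Doppler motion sensing: a continuous-wave tone is
# transmitted, the echo from a moving hand returns frequency-shifted, and
# mixing (multiplying) the two signals exposes the Doppler shift at baseband.
# All numbers here are illustrative, not the paper's RF parameters.
import numpy as np

fs = 1_000_000          # sample rate (Hz), illustrative
f_tx = 100_000          # transmitted tone (Hz), illustrative
f_doppler = 40          # Doppler shift caused by hand motion (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)

tx = np.cos(2 * np.pi * f_tx * t)                      # transmitted CW tone
rx = 0.3 * np.cos(2 * np.pi * (f_tx + f_doppler) * t)  # shifted, attenuated echo

# Multiplying tx and rx produces sum and difference frequencies;
# the difference term is the Doppler shift we want.
mixed = tx * rx
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# Ignore DC and anything above a plausible hand-motion band (< 500 Hz).
band = (freqs > 1) & (freqs < 500)
print(f"estimated Doppler shift: {freqs[band][np.argmax(spectrum[band])]:.1f} Hz")
```

In a real sensor the mixing happens in analog hardware and the baseband signal feeds a gesture classifier; here the whole chain is simulated so the script runs standalone.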
Award ID(s):
1835983
NSF-PAR ID:
10196448
Journal Name:
Proceedings of the Conference on Human Factors in Computing Systems (CHI)
Page Range / eLocation ID:
1 to 14
Sponsoring Org:
National Science Foundation
More Like this
  1. Wearable internet of things (IoT) devices can enable a variety of biomedical applications, such as gesture recognition, health monitoring, and human activity tracking. Size and weight constraints limit the battery capacity, which leads to frequent charging requirements and user dissatisfaction. Minimizing the energy consumption not only alleviates this problem, but also paves the way for self-powered devices that operate on harvested energy. This paper considers an energy-optimal gesture recognition application that runs on energy-harvesting devices. We first formulate an optimization problem for maximizing the number of recognized gestures when energy budget and accuracy constraints are given. Next, we derive an analytical energy model from the power consumption measurements using a wearable IoT device prototype. Then, we prove that maximizing the number of recognized gestures is equivalent to minimizing the duration of gesture recognition. Finally, we utilize this result to construct an optimization technique that maximizes the number of gestures recognized under the energy budget constraints while satisfying the recognition accuracy requirements. Our extensive evaluations demonstrate that the proposed analytical model is valid for wearable IoT applications, and the optimization approach increases the number of recognized gestures by up to 2.4× compared to a manual optimization. 
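    The paper's key result, that maximizing recognized gestures under an energy budget reduces to minimizing per-gesture recognition energy (and hence duration), can be sketched in a few lines. The `Config` class, the candidate configurations, and all power, duration, and accuracy numbers below are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: under a fixed energy budget and a minimum-accuracy
# constraint, the configuration that maximizes the number of recognized
# gestures is the feasible one with the lowest per-gesture energy.
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    duration_s: float   # time to sense + classify one gesture
    power_mw: float     # average power while recognizing
    accuracy: float     # expected recognition accuracy

    @property
    def energy_mj(self) -> float:
        return self.power_mw * self.duration_s  # mW * s = mJ

def best_config(configs, budget_mj, min_accuracy):
    """Pick the accuracy-feasible config with minimum per-gesture energy,
    i.e. the one that fits the most gestures into the budget."""
    feasible = [c for c in configs if c.accuracy >= min_accuracy]
    best = min(feasible, key=lambda c: c.energy_mj)
    return best, int(budget_mj // best.energy_mj)

# Made-up candidate operating points for illustration.
configs = [
    Config("full-feature", 1.2, 18.0, 0.97),
    Config("reduced-feature", 0.6, 16.0, 0.93),
    Config("aggressive-duty-cycle", 0.4, 15.0, 0.86),  # fails accuracy bound
]
cfg, n = best_config(configs, budget_mj=5_000, min_accuracy=0.90)
print(f"{cfg.name}: ~{n} gestures on the budget")
```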
  2. Hand-to-face transmission has been estimated to be a minor yet non-negligible vector of COVID-19 transmission and a major vector for multiple other pathogens. At the same time, because it cannot be effectively addressed with mainstream protection measures such as wearing masks or tracing contacts, it remains largely unaddressed. To help address this issue, we developed Saving Face, an app that alerts users when they are about to touch their faces by analyzing the distortion patterns in the ultrasound signal emitted by their earphones. The system relies only on pre-existing hardware (a smartphone with generic earphones), which allows it to scale rapidly to billions of smartphone users worldwide. This paper describes the design, implementation, and evaluation of the system, as well as the results of a user study testing the solution's accuracy, robustness, and user experience during various day-to-day activities (93.7% sensitivity and 91.5% precision, N=10). While this paper focuses on detecting hand-to-face gestures, the technique may also apply to other types of gestures and gesture-based applications.
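    The abstract does not include the authors' signal-processing code, but the sensing idea can be sketched: play a near-ultrasonic tone through the earphones and monitor the microphone spectrum around that carrier, since a hand moving toward the face smears reflected energy into sidebands. The `sideband_energy` helper, the 20 kHz carrier, the band widths, and the thresholding strategy below are all assumptions for illustration.

```python
# Sketch of ultrasound-distortion sensing (my reading of the approach, not
# the authors' code): energy appearing in sidebands around an emitted tone
# suggests motion distorting the reflected signal.
import numpy as np

def sideband_energy(mic_samples: np.ndarray, fs: int,
                    carrier_hz: float = 20_000, band_hz: float = 200) -> float:
    """Energy within +/- band_hz of the carrier, excluding the carrier bin.
    A rising value suggests a hand moving through the sound field."""
    spectrum = np.abs(np.fft.rfft(mic_samples)) ** 2
    freqs = np.fft.rfftfreq(len(mic_samples), 1 / fs)
    near = np.abs(freqs - carrier_hz) < band_hz
    carrier = np.abs(freqs - carrier_hz) < 10  # exclude the tone itself
    return float(spectrum[near & ~carrier].sum())

# Usage idea: stream microphone frames, track sideband_energy over time,
# and alert when it crosses a calibrated threshold. Here we synthesize a
# frame containing the carrier plus a small 60 Hz-offset sideband.
fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
frame = np.sin(2 * np.pi * 20_000 * t) + 0.05 * np.sin(2 * np.pi * 20_060 * t)
print(sideband_energy(frame, fs))
```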
  3. With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) is becoming a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNN) was developed. Building on that model, this study designs and develops a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling multiple tasks concurrently. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A graphical user interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6 DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
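    A motion history image encodes recent motion as pixel brightness that decays with age, which is what makes a single frame readable by a CNN. The sketch below shows MHI generation with OpenCV in one loop; the decay window, motion threshold, and webcam source are illustrative assumptions, and the paper's actual system additionally splits capture, MHI generation, and classification across threads.

```python
# Hedged sketch of real-time motion history image (MHI) generation.
# Fresh motion is painted at full brightness; older motion fades, leaving
# a trail that summarizes the gesture's trajectory in a single image.
import cv2
import numpy as np

TAU = 0.5          # seconds a motion trace stays visible (illustrative)
DIFF_THRESH = 32   # per-pixel intensity change counted as motion

cap = cv2.VideoCapture(0)                  # any camera source works
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev.shape, np.float32)     # stores last-motion timestamps

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev) > DIFF_THRESH
    prev = gray

    # Stamp moving pixels with the current time, expire old ones, then
    # map the timestamps to a fading 8-bit image.
    t = cv2.getTickCount() / cv2.getTickFrequency()
    mhi[motion] = t
    mhi[mhi < t - TAU] = 0
    vis = np.uint8(np.clip((mhi - (t - TAU)) / TAU, 0, 1) * 255)

    cv2.imshow("MHI", vis)    # this frame would feed the CNN classifier
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```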

     
  4. Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance the nonverbal capabilities of armless robots, this article presents two studies that explore two categories of mixed reality deictic gestures: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the tradeoffs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits: non-ego-sensitive allocentric gestures enable faster reaction time and higher accuracy, whereas ego-sensitive gestures yield higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that these gestures should not be viewed as mutually exclusive alternatives; by using them together, robots can achieve both task-oriented and social benefits.
  5. Unmanned aerial vehicles (UAVs) are becoming more common, creating a need for effective human-robot communication strategies that address the unique nature of unmanned aerial flight. Visual communication via drone flight paths, also called gestures, may prove to be an ideal method. However, the effectiveness of visual communication techniques depends on several factors, including an observer's position relative to a UAV. Previous work has studied the maximum line-of-sight distance at which observers can identify a small UAV [1], but it did not consider how changes in distance may affect an observer's ability to perceive the shape of a UAV's motion. In this study, we conduct a series of online surveys to evaluate how changes in line-of-sight distance and gesture size affect observers' ability to identify and distinguish between UAV gestures. We first examine observers' ability to accurately identify gestures when adjusting a gesture's size relative to the size of the UAV. We then measure how observers' ability to identify gestures changes with respect to varying line-of-sight distances. Lastly, we consider how altering the size of a UAV gesture may improve an observer's ability to identify drone gestures from varying distances. Our results show that increasing the gesture size across varying UAV-to-gesture size ratios did not have a significant effect on participant response accuracy. Between 17 m and 75 m from the observer, participants' ability to accurately identify a drone gesture was inversely proportional to the distance between the observer and the drone. Finally, we found that maintaining a gesture's apparent size improves participant response accuracy over changing line-of-sight distances.
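    The "apparent size" finding has a simple geometric reading: a gesture of physical size s seen from distance d subtends an angle of roughly 2·atan(s / 2d), so holding the apparent size constant means scaling the gesture roughly linearly with distance. The short sketch below works through that arithmetic using the study's 17 m and 75 m endpoints; the 2 m gesture size is an invented example.

```python
# Back-of-envelope check of the "maintain apparent size" idea.
# theta = 2 * atan(s / (2 d)) is the angular size of a gesture of
# physical size s viewed from distance d.
import math

def angular_size_deg(size_m: float, dist_m: float) -> float:
    return math.degrees(2 * math.atan(size_m / (2 * dist_m)))

s, d0, d1 = 2.0, 17.0, 75.0  # 2 m gesture (assumed) seen from 17 m and 75 m
theta0 = angular_size_deg(s, d0)
print(f"{theta0:.2f} deg at {d0} m, {angular_size_deg(s, d1):.2f} deg at {d1} m")

# Gesture size needed at d1 to preserve the apparent size it had at d0:
s1 = 2 * d1 * math.tan(math.radians(theta0) / 2)
print(f"needed gesture size at {d1} m: {s1:.1f} m")  # ~ s * d1 / d0
```

Preserving the 17 m apparent size at 75 m therefore requires a gesture about 75/17 ≈ 4.4× larger, which is consistent with the study's finding that size-for-distance compensation helps identification.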