This study employed the N400 event-related potential (ERP) to investigate how observing different types of gestures during learning affects the subsequent processing of L2 Mandarin words differing in lexical tone by L1 English speakers. The effects of pitch gestures conveying lexical tones (e.g., upward diagonal movements for the rising tone), semantic gestures conveying word meanings (e.g., waving goodbye for to wave), and no gesture were compared. In a lexical tone discrimination task, larger N400s for Mandarin target words mismatching vs. matching Mandarin prime words in lexical tone were observed for words learned with pitch gesture. In a meaning discrimination task, larger N400s for English target words mismatching vs. matching Mandarin prime words in meaning were observed for words learned with pitch and semantic gesture. These findings provide the first neural evidence that observing gestures during L2 word learning enhances the subsequent phonological and semantic processing of learned L2 words.
Wi-Fringe: Leveraging Text Semantics in WiFi CSI-Based Device-Free Named Gesture Recognition
The lack of adequate training data is one of the major hurdles in WiFi-based activity recognition systems. In this paper, we propose Wi-Fringe, a WiFi CSI-based device-free human gesture recognition system that recognizes named gestures, i.e., activities and gestures that have a semantically meaningful name in the English language, as opposed to arbitrary free-form gestures. Given a list of activities (only their names in English text), along with zero or more training examples (WiFi CSI values) per activity, Wi-Fringe is able to detect all activities at runtime. We show for the first time that by utilizing state-of-the-art semantic representations of English words, learned from datasets like Wikipedia (e.g., Google's word2vec [1]), and verb attributes learned from how a word is defined (e.g., in the American Heritage Dictionary), we can enhance the capability of WiFi-based named gesture recognition systems that lack adequate training examples per class. We propose a novel cross-domain knowledge transfer algorithm between radio frequency (RF) and text to relieve developers and end-users of the tedious task of data collection for all possible activities. To evaluate Wi-Fringe, we collect data from four volunteers in a multi-person apartment and an office building for a total of 20 activities. We empirically quantify the trade-off between the accuracy and the number of unseen activities.
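The cross-domain idea above rests on nearest-neighbour search in a word-embedding space: an unseen activity name is mapped to the semantically closest known class by cosine similarity between word vectors. The sketch below illustrates that step with tiny hand-made vectors standing in for real word2vec embeddings; the vectors, activity names, and function names are all illustrative, not Wi-Fringe's actual code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 4-d embeddings standing in for real word2vec vectors.
embeddings = {
    "run":  [0.9, 0.1, 0.0, 0.2],
    "walk": [0.7, 0.3, 0.2, 0.1],
    "sit":  [0.1, 0.9, 0.3, 0.0],
}

def nearest_activity(query_vec, known):
    """Return the known activity name whose embedding is closest."""
    return max(known, key=lambda name: cosine(query_vec, known[name]))

# An unseen class ("jog") resolves to its semantic neighbour.
jog = [0.85, 0.15, 0.05, 0.25]
nearest_activity(jog, embeddings)  # -> "run"
```

With real embeddings the same nearest-neighbour lookup lets a classifier trained on RF signatures of known activity names score activities it has never seen RF examples for.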
- Award ID(s):
- 1840131
- PAR ID:
- 10198358
- Date Published:
- Journal Name:
- 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS)
- Page Range / eLocation ID:
- 35 to 42
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Gesture recognition has become increasingly important in human-computer interaction and can support applications such as smart homes, VR, and gaming. Traditional approaches usually rely on dedicated sensors worn by the user or on cameras that require line of sight. In this paper, we present fine-grained finger gesture recognition using commodity WiFi, without requiring the user to wear any sensors. Our system takes advantage of the fine-grained Channel State Information (CSI) available from commodity WiFi devices and the prevalence of WiFi network infrastructure. It senses and identifies subtle finger movements by examining the unique patterns exhibited in the detailed CSI. We devise an environmental noise removal mechanism to mitigate the effect of signal dynamics due to environment changes. Moreover, we propose to capture the intrinsic gesture behavior to deal with individual diversity and gesture inconsistency. Lastly, we utilize multiple WiFi links and the larger bandwidth at 5 GHz to achieve finger gesture recognition in multi-user scenarios. Our experimental evaluation in different environments demonstrates that our system achieves over 90% recognition accuracy and is robust to both environment changes and individual diversity. Results also show that our system provides accurate gesture recognition under different scenarios.
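One simple form such an environmental noise removal mechanism can take is a high-pass step that subtracts a running mean from the CSI amplitude stream, so slow environmental drift is removed while the fast fluctuations produced by finger motion survive. This is a minimal illustrative sketch, not the paper's actual mechanism, and the window size is a made-up parameter.

```python
def remove_slow_drift(samples, window=5):
    """High-pass a CSI amplitude stream by subtracting a running mean.

    Slow environmental changes shift the baseline; subtracting a
    trailing moving average keeps only the fast variation that a
    gesture induces. `window` is a hypothetical tuning parameter.
    """
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        mean = sum(samples[lo:i + 1]) / (i + 1 - lo)
        out.append(samples[i] - mean)
    return out

# A constant (gesture-free) stream is flattened to zero:
remove_slow_drift([1.0] * 10)  # -> [0.0, 0.0, ..., 0.0]
```

Real systems typically apply such filtering per subcarrier before feature extraction, but the principle is the same.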
-
Goldwater, M; Angora, F; Hayes, B; Ong, D (Ed.) This study investigated how observing pitch gestures conveying lexical tones and representational gestures conveying word meanings while learning L2 Mandarin words differing in lexical tone affects the words' subsequent semantic and phonological processing in L1 English speakers, using the N400 event-related potential (ERP). Larger N400s for English target words mismatching vs. matching Mandarin prime words in meaning were observed for words learned with pitch and representational gesture, but not for words learned with no gesture. Additionally, larger N400s for Mandarin target words mismatching vs. matching Mandarin prime words in lexical tone were observed for words learned with pitch gesture, but not with representational or no gesture. These findings provide the first ERP evidence that observing gestures conveying phonological and semantic information during L2 word learning enhances the subsequent phonological and semantic processing of learned L2 words.
-
In this paper, we propose a novel, generalizable, and scalable idea that eliminates the need for collecting radio frequency (RF) measurements when training RF sensing systems for human-motion-related activities. Existing learning-based RF sensing systems require collecting massive amounts of RF training data, which depend heavily on the particular sensing setup and the activities involved. Thus, new data must be collected whenever the setup or activities change, significantly limiting the practical deployment of RF sensing systems. On the other hand, recent years have seen a growing, massive number of online videos involving various human activities and motions. In this paper, we propose to translate such already-available online videos into instant simulated RF data for training any human-motion-based RF sensing system, in any given setup. To validate our proposed framework, we conduct a case study of gym activity classification, where CSI magnitude measurements of three WiFi links are used to classify a person's activity from among 10 different physical exercises. We utilize YouTube gym activity videos and translate them to RF by simulating the WiFi signals that would have been measured if the person in the video had performed the activity near the transceivers. We then train a classifier on the simulated data and extensively test it with real WiFi data from 10 subjects performing the activities in 3 areas. Our system achieves a classification accuracy of 86% on activity periods, each containing an average of 5.1 exercise repetitions, and 81% on individual repetitions of the exercises. This demonstrates that our approach can generate reliable RF training data from already-available videos and can successfully train an RF sensing system without any real RF measurements. The proposed pipeline can also be used beyond training, for the analysis and design of RF sensing systems, without the need for massive RF data collection.
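The video-to-RF translation can be pictured with a drastically simplified signal model: a static line-of-sight path plus one path reflected off the moving body, whose phase depends on the Tx-to-body-to-Rx distance, so body positions extracted from video modulate the summed magnitude. The sketch below is an illustrative toy, not the paper's simulator; the 2-D geometry, wavelength, and attenuation factor are assumptions.

```python
import math

WAVELENGTH = 0.06  # roughly the 5 GHz WiFi carrier wavelength, in metres

def simulated_csi_magnitude(body_positions, tx=(0.0, 0.0), rx=(4.0, 0.0)):
    """Toy two-path model: unit-amplitude line-of-sight path plus an
    attenuated path reflected off the body. The reflected path's phase
    varies with the Tx->body->Rx distance, so motion (e.g. pose
    estimates from video frames) modulates the received magnitude.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    los = dist(tx, rx)
    mags = []
    for p in body_positions:
        reflected = dist(tx, p) + dist(p, rx)
        phase = 2 * math.pi * (reflected - los) / WAVELENGTH
        # |LoS + 0.5 * reflected| lies between 0.5 and 1.5.
        mags.append(abs(1.0 + 0.5 * complex(math.cos(phase), math.sin(phase))))
    return mags
```

Feeding a per-frame pose trajectory through such a model yields a synthetic CSI magnitude time series on which a classifier can be trained before ever seeing real measurements.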
-
Vision-based methods are commonly used in robotic arm activity recognition. These approaches typically rely on line of sight (LoS) and raise privacy concerns, particularly in smart home applications. Passive Wi-Fi sensing represents a new paradigm for recognizing human and robotic arm activities, utilizing channel state information (CSI) measurements to identify activities in indoor environments. In this paper, a novel machine learning approach based on the discrete wavelet transform and vision transformers is proposed for robotic arm activity recognition from CSI measurements in indoor settings. This method outperforms convolutional neural network (CNN) and long short-term memory (LSTM) models in robotic arm activity recognition, particularly when the LoS is obstructed by barriers, without relying on external or internal sensors or visual aids. Experiments are conducted using four different data collection scenarios and four different robotic arm activities. Performance results demonstrate that the wavelet transform can significantly enhance the accuracy of vision transformer networks in robotic arm activity recognition.
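The discrete wavelet transform step can be illustrated with one level of the Haar DWT, the simplest wavelet: pairwise sums give a smoothed, half-rate approximation of the CSI stream, while pairwise differences capture abrupt changes — the kind of time-frequency feature that would then be fed to a transformer. A minimal sketch; the paper does not specify which wavelet family it uses, and Haar is chosen here only for simplicity.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists. The
    approximation is a smoothed half-rate view of the input and the
    detail coefficients highlight abrupt transitions. Assumes an
    even-length input sequence.
    """
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

# A constant signal has zero detail energy (no abrupt changes):
haar_dwt([1.0, 1.0, 1.0, 1.0])  # detail part -> [0.0, 0.0]
```

Stacking several levels of this decomposition produces the multi-resolution representation that wavelet-plus-transformer pipelines typically operate on.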