In this paper, we propose a novel, generalizable, and scalable idea that eliminates the need for collecting Radio Frequency (RF) measurements when training RF sensing systems for human-motion-related activities. Existing learning-based RF sensing systems require collecting massive RF training data, which depends heavily on the particular sensing setup/involved activities. Thus, new data needs to be collected when the setup/activities change, significantly limiting the practical deployment of RF sensing systems. On the other hand, recent years have seen a growing, massive number of online videos involving various human activities/motions. In this paper, we propose to translate such already-available online videos to instant simulated RF data for training any human-motion-based RF sensing system in any given setup. To validate our proposed framework, we conduct a case study of gym activity classification, where CSI magnitude measurements of three WiFi links are used to classify a person's activity from 10 different physical exercises. We utilize YouTube gym activity videos and translate them to RF by simulating the WiFi signals that would have been measured if the person in the video had been performing the activity near the transceivers. We then train a classifier on the simulated data, and extensively test it with real WiFi data of 10 subjects performing the activities in 3 areas. Our system achieves a classification accuracy of 86% on activity periods, each containing an average of 5.1 exercise repetitions, and 81% on individual repetitions of the exercises. This demonstrates that our approach can generate reliable RF training data from already-available videos, and can successfully train an RF sensing system without any real RF measurements. The proposed pipeline can also be used beyond training and for analysis and design of RF sensing systems, without the need for massive RF data collection.
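The video-to-RF translation can be illustrated with a minimal single-bounce scattering sketch: treat each video-tracked body point as a point scatterer and coherently sum the reflected path phases at the receiver. The carrier frequency, transceiver geometry, and joint positions below are hypothetical stand-ins for illustration, not the paper's actual simulator:

```python
import cmath
import math

C = 3e8       # speed of light (m/s)
FC = 5.18e9   # assumed WiFi carrier frequency (Hz)

def simulated_csi_magnitude(tx, rx, body_points):
    """Single-bounce (Born-style) approximation: each video-tracked body
    point acts as a point scatterer, and the reflected paths are summed
    coherently at the receiver. Positions are (x, y, z) in meters."""
    wavelength = C / FC
    total = 0j
    for p in body_points:
        d = math.dist(tx, p) + math.dist(p, rx)   # TX -> body point -> RX
        total += cmath.exp(-2j * math.pi * d / wavelength) / d
    return abs(total)

# One video frame of extracted joint positions (hypothetical values)
frame = [(1.0, 0.5, 1.2), (1.0, 0.6, 0.9), (1.1, 0.5, 0.4)]
mag = simulated_csi_magnitude((0.0, 0.0, 1.0), (3.0, 0.0, 1.0), frame)
```

Evaluating this per video frame yields a CSI magnitude time series per link, which is what a classifier of the kind described above would consume.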
Wi-Flex: Reflex Detection with Commodity WiFi
In this paper, we are interested in startle reflex detection with WiFi signals. We propose that two parameters related to the received signal bandwidth, maximum normalized bandwidth and bandwidth-intense duration, can successfully detect reflexes and robustly differentiate them from non-reflex events, even from those that involve intense body motions (e.g., certain exercises). In order to confirm this, we need a massive RF reflex dataset which would be prohibitively laborious to collect. On the other hand, there are many available reflex/non-reflex videos online. We then propose an efficient way of translating the content of a video to the bandwidth of the corresponding received RF signal that would have been measured if there had been a link near the event in the video, by drawing analogies between our problem and the classic bandwidth modeling work of J. Carson in the context of analog FM radios (Carson's Rule). This then allows us to translate online reflex/non-reflex videos to an instant large RF bandwidth dataset, and characterize optimum 2D reflex/non-reflex decision regions accordingly, to be used during real operation with WiFi. We extensively test our approach with 203 reflex events, 322 non-reflex events (including 142 intense body motion events), over four areas (including several through-wall ones), and with 15 participants, achieving a correct reflex detection rate of 90.15% and a false alarm rate of 2.49% (all events are natural). While our approach is extensively tested with startle reflexes, it is also applicable to sport-type reflexes, and is thus tested with sport-related reflexes as well. We further show reflex detection with multiple people simultaneously engaged in a series of activities. Optimality of the proposed design is also demonstrated experimentally. Finally, we conduct experiments to show the potential of our approach for providing cost-effective and quantifiable metrics in sports, by quantifying a goalkeeper's reaction.
Overall, our results confirm a fast, robust, and cost-effective reflex detection system, without collecting any RF training data, or training a neural network.
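The Carson analogy (B ≈ 2(Δf + f_m), peak deviation plus modulating rate) and the two decision features can be sketched numerically. Here the peak Doppler shift of the fastest body part plays the role of Carson's peak deviation; the carrier frequency, feature names, and thresholds are illustrative assumptions, not the paper's actual parameters:

```python
import math

C = 3e8       # speed of light (m/s)
FC = 5.18e9   # assumed WiFi carrier frequency (Hz)

def doppler_bandwidth(speed_series, motion_rate_hz):
    """Carson-style estimate of the bandwidth of a body reflection:
    B ~ 2 * (peak Doppler shift + rate of speed variation)."""
    peak_doppler = 2 * max(speed_series) * FC / C
    return 2 * (peak_doppler + motion_rate_hz)

def reflex_features(bandwidths, threshold_ratio=0.5, dt=0.05):
    """The two decision features: maximum normalized bandwidth, and the
    time the bandwidth stays above a fraction of its peak
    (bandwidth-intense duration), with dt seconds per sample."""
    peak = max(bandwidths)
    norm = [b / peak for b in bandwidths]
    intense = sum(dt for n in norm if n >= threshold_ratio)
    return max(norm), intense

# Body-part speeds (m/s) extracted from a video, and a bandwidth series
b = doppler_bandwidth([0.5, 2.0, 1.0], 5.0)
peak_norm, intense_dur = reflex_features([10, 50, 100, 40, 10])
```

A short, bandwidth-intense burst would then land in the reflex region of the 2D decision plane, while sustained intense motion (e.g., exercise) would not.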
- Award ID(s):
- 2226255
- PAR ID:
- 10535873
- Publisher / Repository:
- ACM
- Date Published:
- Journal Name:
- Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Volume:
- 7
- Issue:
- 3
- ISSN:
- 2474-9567
- Page Range / eLocation ID:
- 1 to 27
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
There is increasing concern about computer vision devices invading the privacy of their users. We want the camera systems/robots to recognize important events and assist human daily life by understanding its videos, but we also want to ensure that they do not intrude on people's privacy. In this paper, we propose a new principled approach for learning a video anonymizer. We use an adversarial training setting in which two competing systems fight: (1) a video anonymizer that modifies the original video to remove privacy-sensitive information (i.e., human face) while still trying to maximize spatial action detection performance, and (2) a discriminator that tries to extract privacy-sensitive information from such anonymized videos. The end goal is for the video anonymizer to perform a pixel-level modification of video frames to anonymize each person's face, while minimizing the effect on action detection performance. We experimentally confirm the benefit of our approach particularly compared to conventional hand-crafted video/face anonymization methods including masking, blurring, and noise adding.
-
Abstract Existing models of human walking use low-level reflexes or neural oscillators to generate movement. While appropriate to generate the stable, rhythmic movement patterns of steady-state walking, these models lack the ability to change their movement patterns or spontaneously generate new movements in the specific, goal-directed way characteristic of voluntary movements. Here we present a neuromuscular model of human locomotion that bridges this gap and combines the ability to execute goal-directed movements with the generation of stable, rhythmic movement patterns that are required for robust locomotion. The model represents goals for voluntary movements of the swing leg on the task level of swing leg joint kinematics. Smooth movement plans toward the goal configuration are generated on the task level and transformed into descending motor commands that execute the planned movements, using internal models. The movement goals and plans are updated in real time based on sensory feedback and task constraints. On the spinal level, the descending commands during the swing phase are integrated with a generic stretch reflex for each muscle. Stance leg control solely relies on dedicated spinal reflex pathways. Spinal reflexes stimulate Hill-type muscles that actuate a biomechanical model with eight internal joints and six free-body degrees of freedom. The model is able to generate voluntary, goal-directed reaching movements with the swing leg and combine multiple movements in a rhythmic sequence. During walking, the swing leg is moved in a goal-directed manner to a target that is updated in real time based on sensory feedback to maintain upright balance, while the stance leg is stabilized by low-level reflexes and a behavioral organization switching between swing and stance control for each leg.
With this combination of reflex-based stance leg control and voluntary, goal-directed swing leg control, the model controller generates rhythmic, stable walking patterns in which the swing leg movement can be flexibly updated in real time to step over or around obstacles.
-
The pervasive operation of customer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by customer drones have been frequently reported. Given such a fact, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which makes it work only under very constrained conditions. The support of more common scenarios, i.e., non-line-of-sight (NLOS), is still missing for low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by customer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motions and RF signals under the NLOS condition, we find that RF signatures of drones are somewhat "amplified" by multipaths in NLOS. Based on this observation, we design a two-step solution which first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified at accuracy rates of 98.4% and 96% respectively. Our drone detection rate for the NLOS condition is above 97% with a system implemented using a Raspberry Pi 3 B+.
-
The increasing reach of deepfakes raises practical questions about people's ability to detect false videos online. How vulnerable are people to deepfake videos? What technologies can help improve detection? Previous experiments that measure human deepfake detection historically omit a number of conditions that can exist in typical browsing conditions. Here, we operationalized four such conditions (low prevalence, brief presentation, low video quality, and divided attention), and found in a series of online experiments that all conditions lowered detection relative to baseline, suggesting that the current literature underestimates people's susceptibility to deepfakes. Next, we examined how AI assistance could be integrated into the human decision process. We found that a model that exposes deepfakes by amplifying artifacts increases detection rates, and also leads to higher rates of incorporating AI feedback and higher final confidence than text-based prompts. Overall, this suggests that visual indicators that cause distortions on fake videos may be effective at mitigating the impact of falsified video.