Title: RAM: Radar-based activity monitor
Activity recognition has applications in a variety of human-in-the-loop settings such as smart home health monitoring, green building energy and occupancy management, intelligent transportation, and participatory sensing. While fine-grained activity recognition systems and approaches enable a multitude of novel applications, detecting fine-grained activities with non-intrusive ambient sensor systems poses challenging design, data processing, mining, and activity recognition issues. In this paper, we develop a low-cost, heterogeneous Radar-based Activity Monitoring (RAM) system for recognizing fine-grained activities. We explore the feasibility of using an array of heterogeneous micro-Doppler radars to recognize low-level activities. We prototype a short-range and a long-range radar system and evaluate the feasibility of using the system for fine-grained activity recognition. In our evaluation, using real data traces, we show that our system can detect fine-grained user activities with 92.84% accuracy.
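The paper itself ships no code, but the pipeline the abstract describes (radar returns, micro-Doppler features, supervised classification) can be illustrated with a minimal sketch. The signal shapes, STFT window, feature choices, and synthetic labels below are assumptions for illustration, not the authors' actual RAM implementation.

```python
# Minimal sketch of a micro-Doppler activity classifier, loosely following the
# pipeline described in the abstract (radar return -> time-frequency features ->
# supervised classifier). Window size, feature set, and data are illustrative.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def doppler_features(samples, fs=1000, nperseg=128):
    """Turn one radar capture into a fixed-length micro-Doppler feature vector."""
    _, _, Z = stft(samples, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2                       # time-frequency power (spectrogram)
    return np.concatenate([power.mean(axis=1),   # average Doppler profile
                           power.std(axis=1)])   # variability per Doppler bin

# Synthetic stand-in data: 200 one-second captures with integer activity labels.
rng = np.random.default_rng(0)
captures = rng.standard_normal((200, 1000))
labels = rng.integers(0, 4, size=200)            # e.g. four fine-grained activities

X = np.array([doppler_features(c) for c in captures])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

A real deployment would replace the synthetic captures with traces from the prototype short-range and long-range radars; how RAM combines the heterogeneous radars is described in the paper, not reproduced here.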
Award ID(s):
1544687
NSF-PAR ID:
10073258
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications
Page Range / eLocation ID:
1 to 9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g. motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and motion granularity (e.g. gross or fine) of the activities impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of using motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that these sensor modalities are complementary, and that robots may benefit from systems that utilize multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to effectively choose sensor modalities that are most suitable for their applications.
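As a rough illustration of the modality comparison above, the sketch below trains the same classifier on two stand-in feature sets, one per sensor modality, and reports cross-validated accuracy for each. The feature dimensions, class count, and random data are placeholders; the UCSD-MIT Human Motion dataset and the three HAR algorithms used in the study are not reproduced here.

```python
# Sketch of a per-modality HAR comparison: fit the same classifier on features
# from each sensor modality and compare accuracy. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows = 300
labels = rng.integers(0, 5, size=n_windows)          # assumed five assembly-step classes

modalities = {
    "motion_capture": rng.standard_normal((n_windows, 60)),    # e.g. joint coordinates
    "wearable_imu_emg": rng.standard_normal((n_windows, 24)),  # e.g. armband channels
}

for name, X in modalities.items():
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    print(f"{name}: {acc:.2%} mean accuracy")
```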
  2. There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large. This makes end-to-end supervised training of a recognition system impractical, as no training set can realistically encompass the entire label set. In this paper, we present an approach to fine-grained recognition that models activities as compositions of dynamic action signatures. This compositional approach allows us to reframe fine-grained recognition as zero-shot activity recognition, where a detector is composed “on the fly” from simple first-principles state machines supported by deep-learned components. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also extend this method to form a unique framework for zero-shot joint segmentation and classification of activities in video and demonstrate the first results in zero-shot decoding of complex action sequences on a widely used surgical dataset. Lastly, we show that we can use off-the-shelf object detectors to recognize activities in completely de novo settings with no additional training.
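The compositional idea above, assembling a detector "on the fly" from simple state machines over action signatures, can be sketched in a few lines of plain Python. The signature names and per-frame detections are invented for illustration; in the paper they would come from deep-learned components.

```python
# Toy composition of an activity detector from a sequence of action signatures.
# Each frame is represented by the set of signatures detected in that frame.
def make_detector(signature_sequence):
    """Return a detector that scans per-frame signature sets for the composition."""
    def detect(frames):
        stage = 0
        for t, active in enumerate(frames):
            if signature_sequence[stage] in active:
                stage += 1
                if stage == len(signature_sequence):
                    return True, t               # composed activity completed at frame t
        return False, None
    return detect

# Hypothetical zero-shot detector built purely from a textual activity description.
javelin_throw = make_detector(["run_up", "arm_cocked", "release"])
frames = [{"run_up"}, {"run_up"}, {"arm_cocked"}, set(), {"release"}]
print(javelin_throw(frames))                     # -> (True, 4)
```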
  3. Hedden, Abigail S.; Mazzaro, Gregory J. (Eds.)
    Human activity recognition (HAR) with radar-based technologies has become a popular research area in the past decade. However, the objective of these studies is often to classify human activity for anyone; thus, models are trained using data spanning as broad a swath of people and mobility profiles as possible. In contrast, applications of HAR and gait analysis to remote health monitoring require characterization of the person-specific qualities of a person’s activities and gait, which greatly depend on age, health, and agility. In fact, the speed or agility with which a person moves can be an important health indicator. In this study, we propose a multi-input multi-task deep learning framework to simultaneously learn a person’s activity and agility. In this initial study, we consider three different agility states: slow, nominal, and fast. It is shown that joint learning of agility and activity improves the classification accuracy for both the activity and agility recognition tasks. To the best of our knowledge, this study is the first work considering both agility characterization and personalized activity recognition using RF sensing.
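A minimal sketch of a multi-input multi-task setup in the spirit of the framework above: a shared encoder feeding two classification heads, one for activity and one for agility (slow, nominal, fast), trained with a summed loss so both tasks are learned jointly. The input size, layer widths, activity class count, and model structure are assumptions, not the authors' network.

```python
# Sketch of joint activity + agility learning with a shared encoder and two heads.
# All sizes are placeholders; inputs stand in for radar-derived feature vectors.
import torch
import torch.nn as nn

class ActivityAgilityNet(nn.Module):
    def __init__(self, n_features=128, n_activities=6, n_agility=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, 32), nn.ReLU())
        self.activity_head = nn.Linear(32, n_activities)
        self.agility_head = nn.Linear(32, n_agility)

    def forward(self, x):
        z = self.encoder(x)                      # shared representation
        return self.activity_head(z), self.agility_head(z)

model = ActivityAgilityNet()
x = torch.randn(8, 128)                          # batch of 8 feature vectors
act_logits, agi_logits = model(x)
loss = (nn.functional.cross_entropy(act_logits, torch.randint(0, 6, (8,)))
        + nn.functional.cross_entropy(agi_logits, torch.randint(0, 3, (8,))))
loss.backward()                                  # gradients reach both heads and the shared encoder
```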
  4. In clinical settings, most automatic recognition systems use visual or sensory data to recognize activities. These systems cannot recognize activities that rely on verbal assessment, lack visual cues, or do not use medical devices. We examined speech-based activity and activity-stage recognition in a clinical domain, making the following contributions. (1) We collected a high-quality dataset representing common activities and activity stages during actual trauma resuscitation events, the initial evaluation and treatment of critically injured patients. (2) We introduced a novel multimodal network based on the audio signal and a set of keywords that does not require a high-performing automatic speech recognition (ASR) engine. (3) We designed novel contextual modules to capture dynamic dependencies in team conversations about activities and stages during a complex workflow. (4) We introduced a data augmentation method, which simulates team communication by combining selected utterances and their audio clips, and showed that this method contributed to performance improvement in our data-limited scenario. In offline experiments, our proposed context-aware multimodal model achieved F1-scores of 73.2±0.8% and 78.1±1.1% for activity and activity-stage recognition, respectively. In online experiments, the performance declined by about 10% for both recognition types when using utterance-level segmentation of the ASR output, and by about 15% when we omitted the utterance-level segmentation. Our experiments showed the feasibility of speech-based activity and activity-stage recognition during dynamic clinical events.

     
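The audio-plus-keyword idea above can be sketched without a full ASR transcript by concatenating an audio feature vector with a binary keyword-spotting vector and training a single classifier on the result. The keyword list, feature sizes, labels, and data below are illustrative placeholders, not the proposed multimodal network or its contextual modules.

```python
# Late fusion of audio features and spotted keywords for activity classification.
# Everything below is synthetic stand-in data for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

KEYWORDS = ["airway", "pulse", "blood pressure", "intubate", "x-ray"]   # assumed list

def keyword_vector(spotted):
    """Binary indicator of which keywords were spotted in an utterance."""
    return np.array([float(k in spotted) for k in KEYWORDS])

rng = np.random.default_rng(2)
n = 200
audio_feats = rng.standard_normal((n, 40))                    # stand-in audio embeddings
kw_feats = rng.integers(0, 2, size=(n, len(KEYWORDS))).astype(float)
labels = rng.integers(0, 4, size=n)                           # e.g. four activity classes

X = np.hstack([audio_feats, kw_feats])                        # simple feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
print("keyword vector for one utterance:", keyword_vector({"pulse", "intubate"}))
```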
  5. Translating fine-grained activity detection (e.g., phone ring, talking interspersed with silence and walking) into semantically meaningful and richer contextual information (e.g., on a phone call for 20 minutes while exercising) is essential towards enabling a range of healthcare and human-computer interaction applications. Prior work has proposed building ontologies or temporal analysis of activity patterns with limited success in capturing complex real-world context patterns. We present TAO, a hybrid system that leverages OWL-based ontologies and temporal clustering approaches to detect high-level contexts from human activities. TAO can characterize sequential activities that happen one after the other and activities that are interleaved or occur in parallel to detect a richer set of contexts more accurately than prior work. We evaluate TAO on real-world activity datasets (Casas and Extrasensory) and show that our system achieves, on average, 87% and 80% accuracy for context detection, respectively. We deploy and evaluate TAO in a real-world setting with eight participants using our system for three hours each, demonstrating TAO's ability to capture semantically meaningful contexts in the real world. Finally, to showcase the usefulness of contexts, we prototype wellness applications that assess productivity and stress and show that the wellness metrics calculated using contexts provided by TAO are much closer to the ground truth (on average within 1.1%), as compared to the baseline approach (on average within 30%).

     
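Finally, the activities-to-context step described above can be approximated with simple temporal clustering: fixed windows of fine-grained activity labels are turned into bag-of-activity vectors and grouped into candidate contexts. TAO's OWL ontologies and its handling of interleaved activities are not reproduced here; the activity names, window contents, and cluster count are assumptions.

```python
# Group windows of fine-grained activities into coarse contexts by clustering.
# Activity vocabulary and example windows are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

ACTIVITIES = ["phone_ring", "talking", "silence", "walking", "typing"]
IDX = {a: i for i, a in enumerate(ACTIVITIES)}

def window_vector(window):
    """Fraction of a window spent in each fine-grained activity."""
    v = np.zeros(len(ACTIVITIES))
    for a in window:
        v[IDX[a]] += 1
    return v / max(len(window), 1)

windows = [
    ["talking", "silence", "talking", "walking"],   # plausibly "on a call while moving"
    ["typing", "typing", "silence", "typing"],      # plausibly "focused work"
    ["walking", "walking", "phone_ring", "talking"],
    ["typing", "silence", "typing", "typing"],
]
X = np.array([window_vector(w) for w in windows])
contexts = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("context cluster per window:", contexts)
```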