
This content will become publicly available on May 3, 2026

Title: High-Quality Synthetic Data Generation for Omnidirectional Human Activity Recognition
The dependence of the Doppler effect on aspect angle limits a monostatic radar's ability to achieve human activity recognition (HAR) from all aspect angles, i.e., omnidirectionally. Alleviating this “angle sensitivity” requires sufficient, high-quality training data from multiple aspect angles, but collecting such data with a monostatic radar at every aspect angle is time-consuming. To address this issue, this paper proposes a high-quality synthetic data generation algorithm based on high-dimensional model representation (HDMR) for omnidirectional HAR. The aim is to augment a high-quality dataset using samples collected along the radar line-of-sight direction together with a few samples from other aspect angles. The quality of the synthetic samples is evaluated by the dynamic time warping distance (DTWD) between the synthetic and real samples. The synthetic samples are then used to train a ResNet50-based classifier to achieve omnidirectional HAR. Experimental results demonstrate that the average HAR accuracy of the proposed algorithm exceeds 91% across aspect angles, and that the quality of its synthetic samples surpasses that of two commonly used algorithms in the literature.
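The DTWD quality metric mentioned above can be illustrated with a minimal sketch: a classic O(nm) dynamic-programming DTW on 1-D signature sequences. This is not the paper's implementation; the function name and test signals are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    A smaller distance indicates a synthetic signature that tracks
    the real one more closely in shape, tolerating local time shifts.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Identical sequences score zero; a time-shifted copy of a signal
# scores far better than an unrelated (all-zero) signal.
real = np.sin(np.linspace(0, 4 * np.pi, 100))
shifted = np.sin(np.linspace(0.3, 4 * np.pi + 0.3, 100))
print(dtw_distance(real, real))                                  # 0.0
print(dtw_distance(real, shifted) < dtw_distance(real, np.zeros(100)))  # True
```

In practice a synthetic sample would be accepted (or its generator tuned) when its DTWD to held-out real samples falls below some threshold.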
Award ID(s):
2233536
PAR ID:
10616048
Publisher / Repository:
IEEE
ISBN:
979-8-3315-3956-6
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Location:
Atlanta, GA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Significant resources have been spent collecting and storing large, heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of the available data is unlabeled, and labeling is both time-consuming and expensive. One alternative to manual labeling is synthetically generated data produced with artificial intelligence: instead of labeling real images, synthetic data can be generated from arbitrary labels, quickly augmenting the training set with additional images. In this research, we evaluated the performance of synthetic radar images generated by modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery, and we tested a state-of-the-art contour detection algorithm on synthetic data and on different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and for training deep neural networks. However, GAN-generated images cannot be used alone for training (training on synthetic, testing on real), as they do not reproduce all radar characteristics, such as noise or Doppler effects. To the best of our knowledge, this is the first work to create radar sounder imagery with generative adversarial networks.
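The augmentation strategy this finding supports, mixing synthetic samples into a real training set rather than replacing it, can be sketched as follows. The function name, array shapes, and mixing ratio are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def mix_training_set(real_x, real_y, synth_x, synth_y,
                     synth_fraction=0.5, seed=0):
    """Combine real and GAN-generated samples into one training set.

    synth_fraction sets how much synthetic data is added relative to
    the real set; the finding above suggests synthetic samples help
    as augmentation but should not fully replace real ones.
    """
    rng = np.random.default_rng(seed)
    n_synth = min(int(len(real_x) * synth_fraction), len(synth_x))
    pick = rng.choice(len(synth_x), size=n_synth, replace=False)
    x = np.concatenate([real_x, synth_x[pick]])
    y = np.concatenate([real_y, synth_y[pick]])
    order = rng.permutation(len(x))          # shuffle real and synthetic
    return x[order], y[order]

# 10 real samples plus 50% synthetic augmentation -> 15 samples total.
real_x, real_y = np.zeros((10, 4)), np.zeros(10)
synth_x, synth_y = np.ones((20, 4)), np.ones(20)
x, y = mix_training_set(real_x, real_y, synth_x, synth_y, synth_fraction=0.5)
print(x.shape)  # (15, 4)
```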
  2. Backscatter power measurements are collected to characterize indoor radar clutter in monostatic sensing applications. A narrowband 28 GHz sounder used a quasi-monostatic radar arrangement with an omnidirectional transmit antenna illuminating an indoor scene and a spinning horn receive antenna offset vertically (less than 1 m away) collecting backscattered power as a function of azimuth. Power variation in azimuth around the local average is found to be within 1 dB of a lognormal distribution with a standard deviation of 6.8 dB. Backscatter azimuth spectra are found to be highly variable with location, with cross-correlation coefficients on the order of 0.3 at separations as small as 0.1 m. These statistics are needed for system-level evaluation of RF sensing performance. 
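The lognormal characterization above has a convenient check: a lognormal power distribution is Gaussian on a dB scale, so the reported 6.8 dB spread is just the standard deviation of the dB-domain power about its local average. A small sketch (illustrative names, synthetic data):

```python
import numpy as np

def azimuth_power_stats(power_linear):
    """Spread of azimuth backscatter power about its local average.

    Convert linear power to dB, remove the mean, and report the
    standard deviation; for lognormal power this fully describes
    the variation (about 6.8 dB in the measurements above).
    """
    p_db = 10.0 * np.log10(power_linear)
    return (p_db - p_db.mean()).std()

# Synthetic check: lognormal power with a 6.8 dB spread round-trips.
rng = np.random.default_rng(1)
p = 10 ** (rng.normal(loc=0.0, scale=6.8, size=100_000) / 10.0)
print(azimuth_power_stats(p))  # close to 6.8
```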
  3. In this article, terrain classification of polarimetric synthetic aperture radar (PolSAR) images is studied. A novel semi-supervised method based on improved Tri-training combined with a neighborhood minimum spanning tree (NMST) is proposed. The method includes several strategies: 1) a high-dimensional vector of polarimetric features obtained from the coherency matrix and diverse target decompositions is constructed; 2) this vector is divided into three subvectors, each consisting of one-third of the polarimetric features selected at random, and the three subvectors separately train the three base classifiers of the Tri-training algorithm to increase classification diversity; and 3) help-training sample selection with the improved NMST, which uses both the coherency matrix and spatial information, selects highly reliable unlabeled samples to enlarge the training sets. The proposed method can thus effectively exploit unlabeled samples to improve classification. Experimental results show that, with a small number of labeled samples, the proposed method achieves much better performance than existing classification methods.
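Strategy 2), the random partition of the feature vector into three disjoint views, can be sketched in a few lines. The function name and feature count are illustrative, not the paper's implementation.

```python
import numpy as np

def split_polsar_features(features, seed=0):
    """Randomly partition a feature matrix's columns into three
    disjoint subvectors of roughly equal size.

    Each subvector would train one base classifier in Tri-training,
    so the three classifiers see different views of the same samples.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(features.shape[1])   # shuffle column indices
    thirds = np.array_split(idx, 3)            # three disjoint index sets
    return [features[:, t] for t in thirds]

X = np.arange(24).reshape(2, 12)               # 2 samples, 12 features
views = split_polsar_features(X)
print([v.shape for v in views])                # [(2, 4), (2, 4), (2, 4)]
```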
  4.
    Human activity recognition (HAR) is growing in popularity due to its wide-ranging applications in patient rehabilitation and movement disorders. HAR approaches typically start by collecting sensor data for the activities under consideration and then develop algorithms using the dataset, so the success of HAR algorithms depends on the availability and quality of datasets. Most existing work on HAR designs algorithms around data from inertial sensors on wearable devices or smartphones. However, inertial sensors exhibit high noise, which makes it difficult to segment the data and classify the activities. Furthermore, existing approaches typically do not make their data publicly available, which makes comparisons between HAR approaches difficult or impossible. To address these issues, we present wearable HAR (w-HAR), which contains labeled data for seven activities from 22 users. Our dataset's unique aspect is the integration of data from inertial and wearable stretch sensors, providing two modalities of activity information. The wearable stretch sensor data allow us to create variable-length data segments and ensure that each segment contains a single activity. We also provide a HAR framework that uses w-HAR to classify the activities. To this end, we first perform a design-space exploration to choose a neural network architecture for activity classification. Then, we use two online learning algorithms to adapt the classifier to users whose data were not included at design time. Experiments on the w-HAR dataset show that our framework achieves 95% accuracy, while the online learning algorithms improve accuracy by as much as 40%.
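The idea of using a clean stretch signal to cut a stream into variable-length, single-activity segments can be sketched as a simple threshold-crossing segmenter. This is a minimal stand-in under assumed signal shapes, not the w-HAR pipeline.

```python
import numpy as np

def segment_by_stretch(stretch, threshold):
    """Cut a sensor stream into variable-length segments at rising
    threshold crossings of the stretch signal.

    A low-noise stretch sensor marks activity boundaries more
    reliably than noisy inertial data, so each returned (start, end)
    span is intended to contain a single activity.
    """
    active = stretch > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)) == 1) + 1
    bounds = np.concatenate([[0], edges, [len(stretch)]])
    return [(int(s), int(e)) for s, e in zip(bounds[:-1], bounds[1:]) if e > s]

# Two bursts of stretch activity yield three variable-length segments.
stretch = np.array([0, 0, 5, 5, 0, 0, 7, 7, 7, 0], dtype=float)
print(segment_by_stretch(stretch, threshold=1.0))  # [(0, 2), (2, 6), (6, 10)]
```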
  5. Human activity recognition (HAR) provides insight into physical and mental well-being by monitoring patterns of movement and behavior, facilitating personalized interventions and proactive health management. Radio frequency (RF)-based HAR is gaining attention due to its limited privacy exposure and non-contact operation. However, it suffers from data scarcity and is sensitive to environmental changes: collecting and labeling RF data is labor-intensive and time-consuming, and the limited training data make generalization challenging when the sensor is deployed at a very different relative view in the real world. Synthetic data generated from abundant videos could address the data scarcity, yet the domain gap between synthetic and real data constrains its benefit. In this paper, we first share our investigations and insights into the intrinsic limitations of existing video-based data synthesis methods. We then present M4X, a method that uses metric learning to extract effective view-independent features from the more abundant synthetic data despite the domain gap, thereby enhancing cross-view generalizability. We explore two main design issues: the mining strategies for constructing contrastive pairs/triplets, and the form of the loss function. We find that the best choices are offline triplet mining with real data as anchors, balanced triplets, and a triplet loss without hard-negative mining for higher discriminative power. Comprehensive experiments show that M4X consistently outperforms baseline methods in cross-view generalizability. In the most challenging case, with the least real training data, M4X outperforms three baselines by 7.9-16.5% on all views, and by 18.9-25.6% on a view with only synthetic (no real) data during training. This demonstrates its effectiveness in extracting view-independent features from synthetic data despite the domain gap.
We also observe that, given limited sensor deployments, a participant-facing viewpoint and another at a large angle (e.g., 60°) tend to produce much better performance.
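The triplet loss at the core of the recipe above can be sketched for a single triplet; following the stated design, anchors would come from real data, positives/negatives may be synthetic, and no hard-negative mining is applied. The embeddings and margin here are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors.

    Pulls the positive (same activity, possibly synthetic view)
    toward the real anchor and pushes the negative (different
    activity) at least `margin` further away.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])     # real-data anchor embedding
pos = np.array([0.1, 0.0])   # same activity, synthetic view
neg = np.array([3.0, 0.0])   # different activity
print(triplet_loss(a, pos, neg))  # 0.0 (already separated by the margin)
```

With offline mining, such triplets are assembled once per epoch over the whole embedding set rather than within each mini-batch.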