
Title: Generalized Adversarial and Hierarchical Co-occurrence Network based Synthetic Skeleton Generation and Human Identity Recognition
Human skeleton data provides a compact, low-noise representation of relative joint locations that can be used in human identity and activity recognition. The Hierarchical Co-occurrence Network (HCN) has been used for human activity recognition because of its ability to consider correlations between joints in the network's convolutional operations. HCN shows good identification accuracy but requires a large number of samples to train. Acquiring such large-scale data can be time-consuming and expensive, motivating synthetic skeleton data generation for data augmentation in HCN. We propose a novel method that integrates an Auxiliary Classifier Generative Adversarial Network (AC-GAN) and HCN into a hybrid framework for Assessment and Augmented Identity Recognition for Skeletons (AAIRS). The proposed AAIRS method generates and evaluates synthetic 3-dimensional motion-capture skeleton videos and then performs human identity recognition. Synthetic skeleton data produced by the generator component of the AC-GAN is evaluated with an Inception Score-inspired realism metric computed from the HCN classifier outputs. We study the effect of increasing the percentage of synthetic samples in the training set on HCN performance. Before synthetic data augmentation, we achieve 74.49% HCN accuracy in 10-fold cross-validation for 9-class human identification. With a 50%-50% synthetic-real mixture, we achieve 78.22% mean accuracy, a significant improvement over the non-augmented baseline.
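The abstract does not spell out the exact form of the Inception Score-inspired realism metric or the synthetic-real mixing protocol, so the sketch below is only an illustration: it computes a standard Inception-Score-style statistic from hypothetical HCN softmax outputs and builds a training mixture with a chosen synthetic fraction. All function and variable names are placeholders, not the paper's implementation.

```python
import numpy as np

def realism_score(probs, eps=1e-12):
    """Inception-Score-style realism metric.

    probs: (N, C) softmax outputs that a trained classifier (hypothetically
    the HCN) assigns to N synthetic skeleton videos over C identity classes.
    Confident yet diverse predictions yield a higher score, used here as a
    proxy for realism.
    """
    probs = np.asarray(probs, dtype=np.float64)
    marginal = probs.mean(axis=0, keepdims=True)  # estimated p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))               # exp(E[KL(p(y|x) || p(y))])

def mix_training_set(real_x, real_y, synth_x, synth_y, synth_fraction=0.5, seed=0):
    """Build a training set in which `synth_fraction` of the samples are synthetic."""
    rng = np.random.default_rng(seed)
    n_total = len(real_x)
    n_synth = int(round(synth_fraction * n_total))
    keep_real = rng.choice(len(real_x), n_total - n_synth, replace=False)
    keep_synth = rng.choice(len(synth_x), n_synth, replace=False)
    x = np.concatenate([real_x[keep_real], synth_x[keep_synth]])
    y = np.concatenate([real_y[keep_real], synth_y[keep_synth]])
    return x, y
```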
Award ID(s): 1950704
PAR ID: 10394185
Author(s) / Creator(s):
Date Published:
Journal Name: 2022 International Joint Conference on Neural Networks (IJCNN)
Page Range / eLocation ID: 1 to 8
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. The impact of aspect angle on the Doppler effect hinders the capability of a monostatic radar to achieve human activity recognition (HAR) from all aspect angles, i.e., omnidirectionally. To alleviate this "angle sensitivity," sufficient high-quality training data from multiple aspect angles is required. However, it would be time-consuming for the monostatic radar to collect training data from all aspect angles. To address this issue, this paper proposes a high-quality synthetic data generation algorithm based on high-dimensional model representation (HDMR) for omnidirectional HAR. The aim is to augment a high-quality dataset using samples collected at the radar line-of-sight direction and a few samples from other aspect angles. The quality of the synthetic samples is evaluated by the dynamic time warping distance (DTWD) between the synthetic and real samples. Subsequently, the synthetic samples are used to train a ResNet50-based classifier to achieve omnidirectional HAR. Experimental results demonstrate that the average HAR accuracy of the proposed algorithm exceeds 91% at different aspect angles, and that the synthetic samples it generates are of higher quality than those produced by two commonly used algorithms in the literature.
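The abstract above evaluates synthetic radar samples with the dynamic time warping distance but gives no implementation details, so the following is a minimal sketch of a plain DTW distance between two 1-D sequences (for example, a real and a synthetic micro-Doppler profile). The sequence names are placeholders.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    A minimal O(len(a) * len(b)) implementation; a and b could be, e.g.,
    a real and a synthetic micro-Doppler time profile.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# A lower DTWD between a synthetic sample and its real counterpart
# would indicate a higher-quality synthetic sample.
```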
  2. With the growing popularity of smartphones, continuous and implicit authentication of such devices via behavioral biometrics such as touch dynamics becomes an attractive option, especially when physical biometrics are challenging to utilize or their frequent and continuous use annoys the user. However, touch dynamics is vulnerable to security attacks such as shoulder surfing, camera attacks, and smudge attacks. As a result, it is challenging to rule out genuine imposters while relying only on models that learn from real touchstrokes. In this paper, a touchstroke authentication model based on an Auxiliary Classifier Generative Adversarial Network (AC-GAN) is presented. Given a small subset of a legitimate user's touchstroke data during training, the presented AC-GAN model learns to generate a large volume of synthetic touchstrokes that closely approximate real touchstrokes, simulating imposter behavior, and then uses both generated and real touchstrokes to discriminate the real user from imposters. The presented network is trained on the Touchanalytics dataset, and its discriminability is evaluated with popular performance metrics and loss functions. The evaluation results suggest that comparable authentication accuracy is achievable, with an Equal Error Rate ranging from 2% to 11%, even when the model is challenged with a vast amount of synthetic data that effectively simulates imposter behavior. The use of AC-GAN also diversifies the generated samples and stabilizes training.
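As a rough sketch of the AC-GAN objective described above (an adversarial real/fake loss plus an auxiliary classification loss on both real and generated touchstrokes), the PyTorch fragment below shows one discriminator update and one generator update. The generator G and discriminator D interfaces are assumptions; the paper's actual architecture and hyperparameters are not given in the abstract.

```python
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()   # real/fake term
cls_loss = nn.CrossEntropyLoss()    # auxiliary class term

def acgan_d_step(D, G, real_x, real_y, n_classes, z_dim=64):
    """One AC-GAN discriminator update: adversarial + auxiliary class losses."""
    batch = real_x.size(0)
    z = torch.randn(batch, z_dim)
    fake_y = torch.randint(0, n_classes, (batch,))
    fake_x = G(z, fake_y).detach()

    real_adv, real_cls = D(real_x)   # D returns (real/fake logit, class logits)
    fake_adv, fake_cls = D(fake_x)

    return (adv_loss(real_adv, torch.ones_like(real_adv))
            + adv_loss(fake_adv, torch.zeros_like(fake_adv))
            + cls_loss(real_cls, real_y)
            + cls_loss(fake_cls, fake_y))

def acgan_g_step(D, G, batch, n_classes, z_dim=64):
    """One generator update: fool D while matching the requested class label."""
    z = torch.randn(batch, z_dim)
    fake_y = torch.randint(0, n_classes, (batch,))
    fake_adv, fake_cls = D(G(z, fake_y))
    return adv_loss(fake_adv, torch.ones_like(fake_adv)) + cls_loss(fake_cls, fake_y)
```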
  3. Recently, Raman Spectroscopy (RS) was demonstrated to be a non-destructive way of diagnosing cancer, owing to the uniqueness of RS measurements in revealing molecular and biochemical changes between cancerous and normal tissues and cells. In order to design computational approaches for cancer detection, the quality and quantity of tissue samples for RS are important for accurate prediction. In reality, however, obtaining skin cancer samples is difficult and expensive due to privacy and other constraints. With a small number of samples, training the classifier is difficult and often results in overfitting. Therefore, it is important to have more samples to better train classifiers for accurate cancer tissue classification. To overcome these limitations, this paper presents a novel generative adversarial network based skin cancer tissue classification framework. Specifically, we design a data augmentation module that employs a Generative Adversarial Network (GAN) to generate synthetic RS data resembling the training data classes. The original tissue samples and the generated data are concatenated to train the classification modules. Experiments on real-world RS data demonstrate that (1) data augmentation can help improve skin cancer tissue classification accuracy, and (2) a generative adversarial network can be used to generate reliable synthetic Raman spectroscopic data.
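The augmentation step described above (concatenating original tissue samples with GAN-generated spectra before training) could look roughly like the sketch below. The SVM is a stand-in classifier, since the abstract does not name the paper's classification module, and evaluation is done on held-out real spectra only; all names are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def augment_and_train(real_x, real_y, synth_x, synth_y, test_x, test_y):
    """Train on real + GAN-generated Raman spectra; evaluate on held-out real data.

    real_x/synth_x: (N, D) arrays of spectra, real_y/synth_y: class labels.
    """
    x = np.concatenate([real_x, synth_x], axis=0)   # augmented training set
    y = np.concatenate([real_y, synth_y], axis=0)
    clf = SVC(kernel="rbf").fit(x, y)
    return clf, clf.score(test_x, test_y)           # accuracy on real test spectra only
```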
  4. We present a novel algorithm that is able to generate deep synthetic COVID-19 pneumonia CT scan slices using a very small sample of positive training images in tandem with a larger number of normal images. This generative algorithm produces images of sufficient accuracy to enable a DNN classifier to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which to the best of our knowledge is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19-positive images for training. Algorithms that can learn to screen for diseases using few examples are an important area of research. Furthermore, algorithms that produce deep synthetic images from smaller data volumes have the added benefit of reducing the barriers to data sharing between healthcare institutions. We present the cycle-consistent segmentation-generative adversarial network (CCS-GAN). CCS-GAN combines style transfer with pulmonary segmentation and relevant transfer learning from negative images in order to create a larger volume of synthetic positive images for the purpose of improving diagnostic classification performance. A VGG-19 classifier combined with CCS-GAN was trained using small sets of positive image slices, ranging from at most 50 down to as few as 10 COVID-19-positive CT scan images. CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the barrier of acquiring large training volumes to train a diagnostic classifier for COVID-19.
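CCS-GAN's full objective (style transfer, pulmonary segmentation, transfer learning) is not specified in the abstract; the sketch below shows only the generic cycle-consistency term used in CycleGAN-style translation between normal and COVID-positive slices. The generator names and the weighting factor are hypothetical.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_n2p, G_p2n, normal_ct, positive_ct, lam=10.0):
    """Generic cycle-consistency term for unpaired image translation.

    G_n2p / G_p2n are hypothetical generators translating normal <-> positive
    CT slices; translating a slice out and back should reconstruct it.
    """
    rec_normal = G_p2n(G_n2p(normal_ct))       # normal -> positive -> normal
    rec_positive = G_n2p(G_p2n(positive_ct))   # positive -> normal -> positive
    return lam * (l1(rec_normal, normal_ct) + l1(rec_positive, positive_ct))
```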
  5. Conditional generative models have enjoyed remarkable progress over the past few years. One of the popular conditional models is the Auxiliary Classifier GAN (AC-GAN), which generates highly discriminative images by extending the GAN loss function with an auxiliary classifier. However, the diversity of the samples generated by AC-GAN tends to decrease as the number of classes increases, limiting its power on large-scale data. In this paper, we identify the source of the low-diversity issue theoretically and propose a practical solution to the problem. We show that the auxiliary classifier in AC-GAN imposes perfect separability, which is disadvantageous when the supports of the class distributions have significant overlap. To address the issue, we propose the Twin Auxiliary Classifiers Generative Adversarial Net (TAC-GAN), which benefits from a new player that interacts with the other players (the generator and the discriminator) in the GAN. Theoretically, we demonstrate that TAC-GAN can effectively minimize the divergence between the generated and real-data distributions. Extensive experimental results show that TAC-GAN can successfully replicate the true data distributions on simulated data and significantly improves the diversity of class-conditional image generation on real datasets.
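A loss-level sketch of the twin-auxiliary-classifier idea described above: alongside the usual discriminator and a classifier trained on real data, a twin classifier is trained on generated data only, and the generator is updated adversarially against it. The network interfaces and the relative weighting of the terms are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

adv = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def tacgan_losses(D, C, C_twin, G, real_x, real_y, n_classes, z_dim=128):
    """Loss sketch for a twin-auxiliary-classifier GAN (interfaces hypothetical).

    D: real/fake discriminator, C: auxiliary classifier trained on real data,
    C_twin: twin classifier trained on generated data only.
    """
    batch = real_x.size(0)
    z = torch.randn(batch, z_dim)
    fake_y = torch.randint(0, n_classes, (batch,))
    fake_x = G(z, fake_y)

    d_real, d_fake = D(real_x), D(fake_x.detach())
    d_loss = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
    c_loss = ce(C(real_x), real_y)                   # classifier on real samples
    twin_loss = ce(C_twin(fake_x.detach()), fake_y)  # twin classifier on fake samples

    d_fake_g = D(fake_x)
    g_loss = (adv(d_fake_g, torch.ones_like(d_fake_g))
              + ce(C(fake_x), fake_y)        # match the requested class
              - ce(C_twin(fake_x), fake_y))  # adversarial term against the twin
    return d_loss, c_loss, twin_loss, g_loss
```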