

Search for: All records

Award ID contains: 1840080

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In recent years, remarkable results have been achieved in self-supervised skeleton-based action recognition with contrastive learning. It has been observed that the semantic distinctions among human actions are often carried by local body parts, such as the legs or hands, which is advantageous for skeleton-based action recognition. This paper proposes an attention-based contrastive learning framework for skeleton representation learning, called SkeAttnCLR, which integrates local similarity and global features for skeleton-based action representations. A multi-head attention mask module learns soft attention masks over the skeletons, suppressing non-salient local features while accentuating salient ones, thereby pulling similar local features closer together in the feature space. Additionally, ample contrastive pairs are generated by combining salient and non-salient features with global features, which guides the network to learn semantic representations of the entire skeleton. With this attention mask mechanism, SkeAttnCLR learns local features across different data-augmentation views. The experimental results demonstrate that incorporating local feature similarity significantly enhances skeleton-based action representation. Our proposed SkeAttnCLR outperforms state-of-the-art methods on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD datasets. The code and settings are available at https://github.com/GitHubOfHyl97/SkeAttnCLR. (An illustrative sketch of the attention-mask idea appears after this list.)
    Free, publicly-accessible full text available October 1, 2024
  2. With the emergence and rapid development of cloud computing and outsourced services, more and more companies use managed security service providers (MSSPs) as their security service team. This approach saves the budget for maintaining in-house security teams and relies on professional security personnel to protect the company's infrastructure and intellectual property. However, it also gives the MSSP the opportunity to honor only part of the security service level agreement. To prevent this from happening, researchers have proposed outsourced network testing to verify the execution of the security policies. During this procedure, the end customer has to design network testing traffic and provide it to the testers. Since the testing traffic is designed based on the security rules and selectors, external testers could derive the customer's network security setup and conduct subsequent attacks based on the learned knowledge. To protect the secrecy of the network security configuration in outsourced testing, in this paper we propose different methods to hide the accurate information. For regex-based security selectors, we propose to introduce fake testing traffic to confuse the testers. For exact-match and range-based selectors, we propose to use a NAT VM to hide the accurate information. We conduct simulations to show the protection effectiveness under different scenarios. We also discuss the advantages of our approaches and the potential challenges. (An illustrative sketch of the NAT-based hiding idea appears after this list.)
    Free, publicly-accessible full text available July 1, 2024
  3. Self-supervised skeleton-based action recognition has attracted increasing attention in recent years. By utilizing unlabeled data, more generalizable features can be learned to alleviate overfitting and reduce the demand for massive labeled training data. Inspired by MAE [1], we propose a spatial-temporal masked autoencoder framework for self-supervised 3D skeleton-based action recognition (SkeletonMAE). Following MAE's masking and reconstruction pipeline, we utilize a skeleton-based encoder-decoder transformer architecture to reconstruct the masked skeleton sequences. A novel masking strategy, named Spatial-Temporal Masking, is introduced at both the joint level and the frame level of the skeleton sequence. This pre-training strategy makes the encoder output generalizable skeleton features with spatial and temporal dependencies. Given the unmasked skeleton sequence, the encoder is fine-tuned for the action recognition task. Extensive experiments show that our SkeletonMAE achieves remarkable performance and outperforms the state-of-the-art methods on both the NTU RGB+D 60 and NTU RGB+D 120 datasets. (An illustrative sketch of the masking strategy appears after this list.)
    Free, publicly-accessible full text available July 1, 2024
  4. Skeleton-based motion capture and visualization is an important computer vision task, especially in virtual reality (VR) environments. It has grown increasingly popular due to the ease of gathering skeleton data and the high demand for virtual socialization. The captured skeleton data seem anonymous but can still be used to extract personally identifiable information (PII). This can lead to unintended privacy leakage inside a VR metaverse. We propose a novel linkage attack on skeleton-based motion visualization that detects whether a target and a reference skeleton belong to the same individual. The proposed model, called the Linkage Attack Neural Network (LAN), is based on the principles of a Siamese network. It incorporates deep neural networks to embed the relevant PII and then uses a classifier to match the reference and target skeletons. We also employ classical and deep motion retargeting (MR) to cast the target skeleton onto a dummy skeleton so that the motion sequence is anonymized for privacy protection. Our evaluation shows the effectiveness of LAN in the linkage attack and the effectiveness of MR in anonymization. The source code is available at https://github.com/Thomasc33/Linkage-Attack. (An illustrative sketch of the Siamese linkage idea appears after this list.)
  5. null (Ed.)
  6. null (Ed.)
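
The following is a minimal PyTorch sketch of the soft attention-mask idea described in item 1. It is not the authors' SkeAttnCLR implementation; the module names, feature dimensions, temperature, and the NT-Xent pairing are illustrative assumptions meant only to show how a learned soft mask can split per-joint features into salient and non-salient views for contrastive pairs.

```python
# Minimal sketch (not the authors' code) of the soft attention-mask idea from item 1:
# a multi-head attention module scores skeleton-joint features, and the resulting soft
# mask splits each sequence embedding into "salient" and "non-salient" parts that can
# be paired for contrastive learning. All names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttentionMask(nn.Module):
    def __init__(self, feat_dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x):
        # x: (batch, joints, feat_dim) per-joint features from a skeleton encoder
        ctx, _ = self.attn(x, x, x)                   # contextualize joints
        mask = torch.sigmoid(self.score(ctx))         # soft saliency in [0, 1] per joint
        salient = (mask * x).mean(dim=1)              # emphasize salient joints
        non_salient = ((1.0 - mask) * x).mean(dim=1)  # suppress them
        return salient, non_salient

def nt_xent(a, b, temperature=0.1):
    """Standard normalized-temperature cross-entropy between two feature views."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage: two augmented views of per-joint features for a batch of sequences.
view1 = torch.randn(8, 25, 256)   # e.g. 25 joints, as in NTU RGB+D skeletons
view2 = torch.randn(8, 25, 256)
masker = SoftAttentionMask()
s1, n1 = masker(view1)
s2, n2 = masker(view2)
loss = nt_xent(s1, s2) + nt_xent(n1, n2)  # salient-salient and non-salient pairs
print(loss.item())
```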
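
Below is a minimal sketch of the NAT-style hiding idea in item 2. The paper proposes a NAT VM in the customer network; this standalone Python snippet only illustrates the underlying mapping: exact-match and range-based endpoints in the test traffic are rewritten through a private translation table, so the addresses the testers see are not the real configuration. The subnet choice and flow format are assumptions.

```python
# Minimal sketch (an assumption, not the paper's implementation) of address translation
# for outsourced testing traffic: real endpoints are mapped into a reserved range so
# external testers cannot recover the customer's exact-match or range-based selectors.
import ipaddress
import random

def build_nat_map(real_hosts, hidden_subnet="198.18.0.0/24"):
    """Assign each real host a random address from a reserved benchmarking range."""
    pool = list(ipaddress.ip_network(hidden_subnet).hosts())
    random.shuffle(pool)
    return {h: str(pool[i]) for i, h in enumerate(real_hosts)}

def translate_flow(flow, nat_map):
    """Rewrite source/destination of one test flow before handing it to the testers."""
    return {**flow,
            "src": nat_map.get(flow["src"], flow["src"]),
            "dst": nat_map.get(flow["dst"], flow["dst"])}

nat = build_nat_map(["10.0.1.5", "10.0.1.6"])
test_flow = {"src": "10.0.1.5", "dst": "10.0.1.6", "dport": 443}
print(translate_flow(test_flow, nat))  # testers never see the real 10.0.1.0/24 hosts
```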
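
A minimal sketch of the Spatial-Temporal Masking strategy summarized in item 3. The masking ratios, tensor layout, and placeholder reconstruction loss are assumptions for illustration; the actual SkeletonMAE encoder-decoder transformer is not reproduced here.

```python
# Minimal sketch (illustrative assumption, not the authors' code) of spatial-temporal
# masking: random joints and random frames of a skeleton sequence are masked out, and
# only the visible part would be fed to an encoder whose decoder reconstructs the rest.
import torch

def spatial_temporal_mask(seq, joint_ratio=0.4, frame_ratio=0.4):
    """seq: (frames, joints, 3) xyz coordinates. Returns masked sequence and mask."""
    T, J, _ = seq.shape
    mask = torch.ones(T, J, dtype=torch.bool)
    # joint-level masking: hide a random subset of joints in every frame
    joints = torch.randperm(J)[: int(J * joint_ratio)]
    mask[:, joints] = False
    # frame-level masking: hide entire randomly chosen frames
    frames = torch.randperm(T)[: int(T * frame_ratio)]
    mask[frames, :] = False
    masked = seq.clone()
    masked[~mask] = 0.0            # zero out masked joints/frames
    return masked, mask

seq = torch.randn(64, 25, 3)        # 64 frames, 25 joints (NTU-style skeleton)
masked, mask = spatial_temporal_mask(seq)
recon_loss = ((seq - masked)[~mask] ** 2).mean()  # placeholder reconstruction target
print(mask.float().mean().item(), recon_loss.item())
```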
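
A minimal sketch of the Siamese-style linkage model in item 4. The GRU encoder, hidden sizes, and input layout are assumptions; the released LAN code at the repository linked above is the authoritative implementation.

```python
# Minimal sketch (assumed structure, not the released LAN code): a shared encoder embeds
# the target and reference motion sequences, and a small classifier decides whether the
# two embeddings belong to the same individual.
import torch
import torch.nn as nn

class LinkageSiamese(nn.Module):
    def __init__(self, in_dim=25 * 3, hidden=128):
        super().__init__()
        # shared encoder: GRU over flattened per-frame joint coordinates
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def embed(self, seq):
        # seq: (batch, frames, joints*3); use the final hidden state as the embedding
        _, h = self.encoder(seq)
        return h.squeeze(0)

    def forward(self, target, reference):
        z = torch.cat([self.embed(target), self.embed(reference)], dim=-1)
        return torch.sigmoid(self.classifier(z))  # probability of "same person"

model = LinkageSiamese()
target = torch.randn(4, 60, 75)      # 4 pairs, 60 frames, 25 joints x 3 coordinates
reference = torch.randn(4, 60, 75)
print(model(target, reference).shape)  # torch.Size([4, 1])
```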