Title: Linkage Attack on Skeleton-based Motion Visualization
Skeleton-based motion capture and visualization is an important computer vision task, especially in virtual reality (VR) environments. It has grown increasingly popular due to the ease of gathering skeleton data and the high demand for virtual socialization. The captured skeleton data seems anonymous but can still be used to extract personally identifiable information (PII), which can lead to unintended privacy leakage inside a VR metaverse. We propose a novel linkage attack on skeleton-based motion visualization that detects whether a target skeleton and a reference skeleton belong to the same individual. The proposed model, called Linkage Attack Neural Network (LAN), is based on the principles of a Siamese network: it uses deep neural networks to embed the relevant PII, then applies a classifier to match the reference and target skeletons. We also employ classical and deep motion retargeting (MR) to cast the target skeleton onto a dummy skeleton so that the motion sequence is anonymized for privacy protection. Our evaluation shows the effectiveness of LAN in the linkage attack and of MR in anonymization. The source code is available at https://github.com/Thomasc33/Linkage-Attack
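To make the Siamese architecture concrete, here is a minimal PyTorch sketch of a linkage model of this kind. The encoder choice (a GRU over flattened joint coordinates), the joint count, and all dimensions are illustrative assumptions, not the paper's exact LAN configuration; see the linked repository for the authors' implementation.

```python
# Hypothetical sketch: a shared encoder embeds each skeleton motion
# sequence, and a classifier scores whether the pair is the same person.
import torch
import torch.nn as nn

class SkeletonEncoder(nn.Module):
    """Embed a motion sequence of shape (batch, frames, joints, 3)."""
    def __init__(self, num_joints=25, embed_dim=128):
        super().__init__()
        self.gru = nn.GRU(input_size=num_joints * 3,
                          hidden_size=embed_dim, batch_first=True)

    def forward(self, x):
        b, t, j, c = x.shape
        _, h = self.gru(x.reshape(b, t, j * c))  # final hidden state
        return h.squeeze(0)                      # (batch, embed_dim)

class LinkageNet(nn.Module):
    """Siamese pair: one shared encoder, then a match classifier."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = SkeletonEncoder(embed_dim=embed_dim)
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim * 2, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, target, reference):
        z1, z2 = self.encoder(target), self.encoder(reference)
        return torch.sigmoid(self.classifier(torch.cat([z1, z2], dim=-1)))

# Usage: probability that two 60-frame, 25-joint sequences match.
model = LinkageNet()
p = model(torch.randn(4, 60, 25, 3), torch.randn(4, 60, 25, 3))
```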
Award ID(s):
1840080
PAR ID:
10464820
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM International Conference on Information and Knowledge Management
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As the metaverse grows with the advances of new technologies, a number of researchers have raised concerns about the privacy of motion data in virtual reality (VR). It is becoming clear that motion data can reveal essential information about people, such as their identities. However, fundamental questions about which types of motion data, which processing methods, and which ranges of VR applications are affected remain underexplored. This work summarizes research on motion data privacy along these dimensions from both the VR and data privacy fields. Our results demonstrate that researchers from both fields have recognized the importance of the problem, while differences remain due to the problems each field focuses on. A variety of VR studies have addressed user identification, and the results are affected by the application types and the ranges of involved actions. We also review biometrics work from related fields, including keystroke and waist behaviors as well as skeleton, face, and fingerprint data. Finally, we discuss our findings and suggest future work to protect the privacy of motion data.
  2. Emerging virtual reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., the HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users using minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that the ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
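A classification pipeline like the one this abstract describes can be sketched in a few lines of scikit-learn. The sketch below uses synthetic data as a stand-in (the paper's six gaze features are not listed here, so the feature values are placeholders); on real gaze features the same pipeline is what would produce the reported per-user accuracy.

```python
# Hedged sketch: user identification from six gaze features with a
# random forest; data is synthetic, so the printed accuracy is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user = 10, 200
X = rng.normal(size=(n_users * samples_per_user, 6))  # six gaze features
y = np.repeat(np.arange(n_users), samples_per_user)   # user ID labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.3f}")
```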
  3. Recent immersive mixed reality (MR) and virtual reality (VR) displays enable users to use their hands to interact with both veridical and virtual environments simultaneously. Therefore, it becomes important to understand the performance of human hand-reaching movement in MR. Studies have shown that different virtual environment visualization modalities can affect point-to-point reaching performance using a stylus, but it is not yet known if these effects translate to direct human-hand interactions in mixed reality. This paper focuses on evaluating human point-to-point motor performance in MR and VR for both finger-pointing and cup-placement tasks. Six performance measures relevant to haptic interface design were measured for both tasks under several different visualization conditions ("MR with indicator," "MR without indicator," and "VR") to determine what factors contribute to hand-reaching performance. A key finding was evidence of a trade-off between reaching "motion confidence" measures (indicated by throughput, number of corrective movements, and peak velocity) and "accuracy" measures (indicated by end-point error and initial movement error). Specifically, we observed that participants tended to be more confident in the "MR without Indicator" condition for finger-pointing tasks. These results contribute critical knowledge to inform the design of VR/MR interfaces based on the application's user performance requirements.
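Two of the named measures are simple to compute from a recorded reach trajectory. The sketch below assumes a (frames, 3) array of hand positions at a fixed sample rate; these are common definitions, and the paper's exact formulations may differ.

```python
# Illustrative computation of one "confidence" and one "accuracy" measure.
import numpy as np

def peak_velocity(traj, dt):
    """Maximum speed along the trajectory (a motion-confidence measure)."""
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    return speeds.max()

def endpoint_error(traj, target):
    """Distance from the final hand position to the target (accuracy)."""
    return np.linalg.norm(traj[-1] - target)

# Usage with a synthetic 120-frame trajectory sampled at 90 Hz.
traj = np.cumsum(np.random.default_rng(1).normal(0, 0.002, (120, 3)), axis=0)
print(peak_velocity(traj, dt=1 / 90), endpoint_error(traj, np.array([0.3, 0, 0])))
```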
  4. Though virtual reality (VR) has advanced to a certain level of maturity in recent years, the general public, especially the blind and visually impaired (BVI) population, still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either not accessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perceiving and recognizing the virtual environment and the virtual objects in it. The app uses the iPhone on a selfie stick to simulate a long cane in VR, and applies augmented reality (AR) techniques to track the iPhone's real-time poses in an empty space of the real world, which are then synchronized to the long cane in the VR environment. Due to this use of mixed reality (the integration of VR and AR), we call it the Mixed Reality Cane (MR Cane); it provides BVI users auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify the approximate sizes and locations of objects in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the MR Cane could effectively help BVI individuals understand interactions with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort.
  5. Recent innovations in virtual and mixed reality (VR/MR) technologies have enabled novel hands-on training applications in high-risk/high-value fields such as medicine, flight, and worker safety. Here, we present a detailed description of a novel VR/MR tactile user interaction/interface (TUI) hardware and software development framework that enables the rapid and cost-effective no-code development, optimization, and distribution of fully authentic hands-on VR/MR laboratory training experiences in the physical and life sciences. We applied our framework to the development and optimization of an introductory pipette-calibration activity that is often carried out in real chemistry and biochemistry labs. Our approach provides users with nuanced real-time feedback on both their psychomotor skills during data acquisition and their attention to detail when conducting data analysis procedures. The cost-effectiveness of our approach relative to traditional face-to-face science labs improves access to quality hands-on science lab experiences. Importantly, the no-code nature of this Hands-On Virtual-Reality (HOVR) Lab platform enables faculty to iteratively optimize VR/MR experiences to meet their students' targeted needs without costly software development cycles. Our platform also accommodates TUIs using either standard virtual-reality controllers (VR TUI mode) or fully functional hand-held physical lab tools (MR TUI mode). In the latter case, physical lab tools are strategically retrofitted with optical tracking markers to enable tactile, experimental, and analytical authenticity in scientific experimentation. Preliminary user study data highlight the strengths and weaknesses of our generalized approach with respect to students' affective and cognitive learning outcomes.