Title: Personal identifiability of user tracking data during observation of 360-degree VR video
Abstract

Virtual reality (VR) is a technology that is gaining traction in the consumer market. With it comes an unprecedented ability to track body motions. These body motions are diagnostic of personal identity, medical conditions, and mental states. Previous work has focused on the identifiability of body motions in idealized situations in which some action is chosen by the study designer. In contrast, our work tests the identifiability of users under typical VR viewing circumstances, with no specially designed identifying task. Out of a pool of 511 participants, the system identifies 95% of users correctly when trained on less than 5 min of tracking data per person. We argue these results show nonverbal data should be understood by the public and by researchers as personally identifying data.
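The repository page includes no code, so the following is a minimal sketch of motion-based identification in the spirit the abstract describes: short windows of tracked motion become per-user feature vectors for a classifier. The synthetic `sessions` data, the 90-sample window, the 18 channels, and the random-forest choice are all illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only; not the paper's released code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for real tracking logs: 20 "users", each with ~30 s of
# 90 Hz data over 18 channels (e.g. head/hand position and orientation).
sessions = {uid: rng.normal(loc=0.1 * uid, scale=1.0, size=(2700, 18))
            for uid in range(20)}

def window_features(track, win=90):
    """Per-window mean, std, and range of each channel as a crude motion signature."""
    feats = []
    for start in range(0, len(track) - win + 1, win):
        w = track[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), np.ptp(w, axis=0)]))
    return np.array(feats)

# Build a dataset of (window features, user label) pairs.
X, y = [], []
for uid, track in sessions.items():
    f = window_features(track)
    X.append(f)
    y.extend([uid] * len(f))
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("per-window identification accuracy:", clf.score(X_te, y_te))
```

With the synthetic stand-in data the accuracy is trivially high; the point is the shape of the pipeline: a few minutes of tracking data per person, summarized per window, suffices as training input for an identity classifier.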

 
Award ID(s):
1800922
NSF-PAR ID:
10307200
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Reports
Volume:
10
Issue:
1
ISSN:
2045-2322
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Emerging Virtual Reality (VR) displays with embedded eye trackers, such as the HTC Vive Pro Eye, are becoming commodity hardware. Eye-tracking data can be used for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have applied machine learning to motion-based data (such as body motion, head tracking, eye tracking, and hand tracking) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users from minimal eye-gaze-based features without any identification-specific task. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that both ML and DL models can identify users with over 98% accuracy from only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
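To make the setup concrete, here is a minimal, hypothetical sketch of the RF/kNN comparison on six per-window gaze features. The synthetic data, the feature meanings named in the comments, and the hyperparameters are assumptions for illustration, not the paper's exact pipeline.

```python
# Illustrative sketch only; the six features are placeholders, not the
# paper's exact definitions (e.g. mean/std of fixation duration, saccade
# amplitude, pupil diameter are plausible candidates).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_users, windows_per_user = 10, 40
# Synthetic stand-in data: one row of six gaze statistics per recording
# window, labeled by user.
X = np.vstack([rng.normal(loc=u, scale=1.0, size=(windows_per_user, 6))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), windows_per_user)

for name, model in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV identification accuracy = {acc:.3f}")
```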
  2. Abstract

    We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, runs in real time, and produces temporally coherent and realistic motions.
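A toy sketch of the anomaly-gated sample selection and interpolation idea follows. The stand-in linear decoder, the norm-based anomaly score, and the mean-plus-two-sigma threshold are assumptions chosen to illustrate the control flow, not the paper's trained CVAE or its statistical threshold.

```python
# Illustrative control-flow sketch only; not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
Z_DIM, POSE_DIM = 16, 63
W = rng.normal(size=(POSE_DIM, Z_DIM))  # toy stand-in for a trained decoder

def decode(z):
    return W @ z  # a real system would call the trained CVAE decoder here

def anomaly_score(pose):
    return float(np.linalg.norm(pose))  # toy score: lower = more plausible

def select_pose(prev_pose, mu, sigma, n_candidates=8, alpha=0.2):
    """Generate a pose; if its score exceeds mu + 2*sigma, take the best of
    several resampled candidates, then blend toward it for temporal coherence."""
    pose = decode(rng.normal(size=Z_DIM))
    if anomaly_score(pose) > mu + 2 * sigma:
        candidates = [decode(rng.normal(size=Z_DIM)) for _ in range(n_candidates)]
        pose = min(candidates, key=anomaly_score)
    return (1 - alpha) * prev_pose + alpha * pose

# Usage: estimate the score distribution once, then stream frames.
scores = [anomaly_score(decode(rng.normal(size=Z_DIM))) for _ in range(200)]
mu, sigma = float(np.mean(scores)), float(np.std(scores))
pose = np.zeros(POSE_DIM)
for _ in range(10):
    pose = select_pose(pose, mu, sigma)
```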

     
  3. We present and evaluate methods to redirect desktop inputs such as eye gaze and mouse pointing to a VR-embedded avatar. We use these methods to build a novel interface that allows a desktop user to give presentations in remote VR meetings such as conferences or classrooms. Recent work on such VR meetings suggests that a substantial number of users continue to use desktop interfaces due to ergonomic or technical factors. Our approach enables desktop and immersed users to better share virtual worlds by allowing desktop-based users to have more engaging or present "cross-reality" avatars. The described redirection methods cover mouse pointing and drawing for a presentation, eye-tracked gaze towards audience members, hand tracking for gesturing, and associated avatar motions such as head and torso movement. A study compared different levels of desktop avatar control and headset-based control. The results suggest that users find the enhanced desktop avatar human-like and lively, and that it draws more attention than a conventionally animated desktop avatar, implying that our interface and methods could be useful for future cross-reality remote learning tools.
  4. As social Virtual Reality (VR) grows in prevalence, new possibilities for embodied and immersive social interaction emerge, including varied forms of interpersonal harm. Yet challenges remain in defining, identifying, and mitigating such harm in social VR. In this paper, we take an alternative approach to understanding and designing solutions for interpersonal harm in social VR through the lens of consent, which circumvents the lack of consensus and social norms around what counts as harm in social VR and reflects the embodied, immersive, and offline-world-like nature of that harm. Through interviews with 39 social VR users, we offer one of the first empirical explorations of how social VR users understand consent as "boundaries," (re)purpose existing social VR features for practicing consent as "boundary setting," and envision the design of future consent mechanics in social VR that balance protection with interaction expectations to mitigate interpersonal harm as "boundary violations." This work makes significant contributions to CSCW and HCI research by (1) uncovering how social VR users craft novel conceptualizations of consent as boundaries and harm as unwanted boundary violations, and (2) providing three foundational principles for designing future consent mechanics in social VR informed by actual social VR users.
  5. Abstract

    The second data release of ESA's Gaia mission revealed numerous signatures of disequilibrium in the Milky Way's disc. These signatures are seen in the planar kinematics of stars, where they manifest as ridges and ripples in R–v_ϕ, and in vertical kinematics, where a prominent spiral is seen in the z–v_z phase space. In this work, we show that an equivalent ΔR–v_R phase spiral forms following a perturbation to the disc. We demonstrate the behaviour of the ΔR–v_R phase spirals in both a toy model and a high-resolution N-body simulation of a satellite interaction. We then confront these models with the data, finding partial ΔR–v_R phase spirals in the Solar neighbourhood using the most recent data from Gaia DR3. This structure indicates ongoing radial phase mixing in the Galactic disc, suggesting a history of recent perturbations, either through internal or external (e.g. satellite) processes. Future work modelling the z–v_z and ΔR–v_R phase spirals in tandem may help break degeneracies between possible origins of the perturbation.
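As a toy illustration of the phase-mixing mechanism these spirals trace, the sketch below kicks a population of oscillators whose frequency falls with amplitude and plots the resulting winding in the z–v_z plane. The kick size, frequency law, and evolution time are illustrative assumptions, not the paper's Galactic model.

```python
# Toy phase-mixing sketch: a coherent kick plus an amplitude-dependent
# oscillation frequency winds the distribution into a z-v_z phase spiral.
# All numbers are illustrative, not fit to the Galaxy.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 20000
amp = rng.uniform(0.2, 1.5, n)          # vertical oscillation amplitudes
phase = rng.uniform(0, 2 * np.pi, n)    # random initial phases
z0 = amp * np.cos(phase) + 0.3          # coherent kick: shift all stars in z
v0 = amp * np.sin(phase)

# Re-express the kicked stars as amplitude/phase, then rotate each at a
# frequency that falls with amplitude (the anharmonicity drives the winding).
a = np.hypot(z0, v0)
p = np.arctan2(v0, z0)
omega = 1.0 / (1.0 + 0.5 * a)
t = 40.0
z, vz = a * np.cos(p + omega * t), a * np.sin(p + omega * t)

plt.hexbin(z, vz, gridsize=120)
plt.xlabel("z"); plt.ylabel("v_z"); plt.title("toy z-v_z phase spiral")
plt.show()
```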

     