
Title: Immersive Virtual Reality Attacks and the Human Joystick
This is one of the first accounts of the security analysis of consumer immersive Virtual Reality (VR) systems. This work breaks new ground, coins new terms, and constructs proof-of-concept implementations of attacks on immersive VR. Our work used the two most widely adopted immersive VR systems, the HTC Vive and the Oculus Rift. More specifically, we created attacks that can potentially disorient users, turn their Head Mounted Display (HMD) camera on without their knowledge, overlay images in their field of vision, and modify VR environmental factors that force them into hitting physical objects and walls. Finally, through a human-participant deception study, we demonstrate that VR systems can be exploited to control immersed users and move them to a location in physical space without their knowledge. We term this the Human Joystick Attack. We conclude our work with future research directions and ways to enhance the security of these systems.
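To make the core mechanism concrete, below is a minimal Python sketch of the idea behind the Human Joystick Attack: the virtual world is translated by a small amount each frame, and the immersed user, walking to compensate for the perceived scene drift, is physically steered toward an attacker-chosen location. All names and values here (`steer_step`, the 2 mm per-frame step) are illustrative assumptions, not the paper's implementation against the Vive or Rift runtimes.

```python
import math

# Toy model of the Human Joystick Attack described above. Each frame the
# attacker translates the virtual world slightly toward the target; the
# user, stepping to compensate for the perceived scene drift, physically
# moves by the same amount. All names and values are illustrative.

STEP = 0.002  # world translation per frame (m), small enough to go unnoticed

def steer_step(user_pos, target_pos, step=STEP):
    """Return the world shift that nudges the user toward target_pos."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return (dx, dy)  # close enough: final tiny shift, then stop
    return (step * dx / dist, step * dy / dist)

user = (0.0, 0.0)    # user's physical position (m)
target = (2.0, 1.0)  # attacker-chosen physical destination (m)
for frame in range(2000):
    shift = steer_step(user, target)
    # Model assumption: the user fully compensates for the scene shift,
    # so their physical displacement equals the applied world translation.
    user = (user[0] + shift[0], user[1] + shift[1])
print("user steered to:", user)  # converges near (2.0, 1.0)
```

In this toy model the user's physical displacement per frame equals the shift applied to the world, so the loop converges on the target; the real attack must additionally keep each shift below the user's perceptual threshold.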
Award ID(s): 1748950
NSF-PAR ID: 10113849
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: IEEE Transactions on Dependable and Secure Computing
ISSN: 1545-5971
Page Range / eLocation ID: 1 to 1
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Modern autonomous vehicles are increasingly infused with sensors, electronics, and software. One consequence is that they are increasingly susceptible to cyber-attacks. However, awareness of cybersecurity challenges for automotive systems remains low. In this paper, we consider the problem of developing a virtual reality (VR) infrastructure that can enable users who are not necessarily experts in automotive security to explore vulnerabilities arising from compromised ranging sensors. A key requirement for such platforms is to develop natural, intuitive scenarios that enable the user to experience security challenges and their impact. We discuss the challenges in developing such scenarios, and develop a solution that enables exploration of jamming and spoofing attacks, as sketched below. Our solution is integrated into a VR platform for automotive security exploration called IVE (Immersive Virtual Environment). It combines realistic driving with a first-person view, user interaction, and sound effects to provide all the benefits of a real-life simulation without the consequences.
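As a rough illustration of the two sensor attacks IVE exposes, here is a hedged Python sketch of how a compromised ranging sensor might be modeled: jamming buries the true echo in noise, while spoofing substitutes an attacker-chosen distance. The function names and noise parameters are assumptions for illustration, not code from IVE.

```python
import random

# Hypothetical model of a ranging sensor (e.g., the measured gap to the
# vehicle ahead) under the two attacks IVE lets users explore. Names and
# parameters are illustrative, not taken from the IVE platform itself.

def true_range(ego_pos, lead_pos):
    """Ground-truth gap (m) between the ego vehicle and the lead vehicle."""
    return max(0.0, lead_pos - ego_pos)

def jammed_reading(distance, noise_std=15.0):
    """Jamming drowns the echo in noise: the reading becomes unreliable."""
    return max(0.0, distance + random.gauss(0.0, noise_std))

def spoofed_reading(distance, fake_distance=50.0):
    """Spoofing replays a crafted echo so the sensor reports a chosen gap."""
    return fake_distance  # the real gap is ignored entirely

gap = true_range(ego_pos=0.0, lead_pos=12.0)
print("true gap:   ", gap)
print("under jam:  ", jammed_reading(gap))   # erratic value
print("under spoof:", spoofed_reading(gap))  # looks safely far away
```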
  2. During active shooter events or emergencies, the ability of security personnel to respond appropriately to the situation is driven by pre-existing knowledge and skills, but also depends upon their state of mind and familiarity with similar scenarios. Human behavior becomes unpredictable when it comes to making a decision in emergency situations. The cost and risk of determining these human behavior characteristics in emergency situations are very high. This paper presents an immersive collaborative virtual reality (VR) environment for performing virtual building evacuation drills and active shooter training scenarios using Oculus Rift head-mounted displays. The collaborative immersive environment is implemented in Unity 3D and is based on the run, hide, and fight modes of emergency response. The immersive collaborative VR environment also offers a unique method for training in emergencies for campus safety. The participant can enter the collaborative VR environment, set up on the cloud, and participate in the active shooter response training environment, which leads to considerable cost advantages over large-scale real-life exercises. A presence questionnaire in the user study was used to evaluate the effectiveness of our immersive training module. The results show that a majority of users agreed that their sense of presence was increased when using the immersive emergency response training environment.
  3. As augmented and virtual reality (AR/VR) technology matures, a method is desired to represent real-world persons visually and aurally in a virtual scene with high fidelity to craft an immersive and realistic user experience. Current technologies leverage camera and depth sensors to render visual representations of subjects through avatars, and microphone arrays are employed to localize and separate high-quality subject audio through beamforming. However, challenges remain in both realms. In the visual domain, avatars can only map key features (e.g., pose, expression) to a predetermined model, rendering them incapable of capturing the subjects’ full details. Alternatively, high-resolution point clouds can be utilized to represent human subjects. However, such three-dimensional data is computationally expensive to process. In the realm of audio, sound source separation requires prior knowledge of the subjects’ locations. However, it may take unacceptably long for sound source localization algorithms to provide this knowledge, which can still be error-prone, especially with moving objects. These challenges make it difficult for AR systems to produce real-time, high-fidelity representations of human subjects for applications such as AR/VR conferencing that mandate negligible system latency. We present Acuity, a real-time system capable of creating high-fidelity representations of human subjects in a virtual scene both visually and aurally. Acuity isolates subjects from high-resolution input point clouds. It reduces the processing overhead by performing background subtraction at a coarse resolution, then applying the detected bounding boxes to fine-grained point clouds. Meanwhile, Acuity leverages an audiovisual sensor fusion approach to expedite sound source separation. The estimated object location in the visual domain guides the acoustic pipeline to isolate the subjects’ voices without running sound source localization. Our results demonstrate that Acuity can isolate multiple subjects’ high-quality point clouds with a maximum latency of 70 ms and average throughput of over 25 fps, while separating audio in less than 30 ms. We provide the source code of Acuity at: https://github.com/nesl/Acuity. 
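The coarse-to-fine isolation step can be sketched as follows. This is our reconstruction of the idea described in the abstract, not the released code (see https://github.com/nesl/Acuity for the real implementation); the voxel size, margin, and helper names are illustrative assumptions.

```python
import numpy as np

# Reconstruction of Acuity's coarse-to-fine subject isolation: subtract
# the background at a coarse voxel resolution, then apply the resulting
# bounding box to the full-resolution point cloud. Values are illustrative.

VOXEL = 0.10   # coarse voxel edge length (m)
MARGIN = 0.10  # padding around the detected box (m)

def voxel_set(points, voxel=VOXEL):
    """Set of coarse voxel indices occupied by a point cloud (N x 3)."""
    return set(map(tuple, np.floor(points / voxel).astype(np.int64)))

def isolate_subject(frame, background, voxel=VOXEL, margin=MARGIN):
    """Full-resolution points inside the box around foreground voxels."""
    fg = voxel_set(frame, voxel) - voxel_set(background, voxel)
    if not fg:
        return frame[:0]  # nothing new in the scene: empty cloud
    corners = np.array(sorted(fg), dtype=np.float64) * voxel
    lo = corners.min(axis=0) - margin
    hi = corners.max(axis=0) + voxel + margin
    mask = np.all((frame >= lo) & (frame <= hi), axis=1)
    return frame[mask]

# Toy usage: a static room plus a "person" blob appearing in the new frame.
rng = np.random.default_rng(0)
room = rng.uniform(-3, 3, size=(50_000, 3))
person = rng.normal(loc=[1.0, 0.0, 1.0], scale=0.2, size=(5_000, 3))
frame = np.vstack([room, person])
subject = isolate_subject(frame, room)
print(f"kept {len(subject)} of {len(frame)} points")
```

Doing the set difference on coarse voxels rather than raw points is what keeps the per-frame cost low; only the final box crop touches the full-resolution cloud.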
  4. While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR. 
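A minimal sketch of the decoding stage might look like the following: RMS amplitude features from a short window of multi-channel forearm EMG are fed to a small neural network that regresses per-finger forces. The layer sizes, window length, and (untrained) weights here are placeholders, not the paper's model.

```python
import numpy as np

# Illustrative sketch of the decoding stage described above: RMS features
# from one window of multi-channel forearm EMG are regressed to finger-wise
# forces by a tiny two-layer network. All sizes and weights are placeholders.

N_CHANNELS, WINDOW, N_FINGERS, HIDDEN = 8, 200, 5, 32

def emg_features(window):
    """Root-mean-square amplitude per channel over one analysis window."""
    return np.sqrt(np.mean(window ** 2, axis=1))  # shape: (N_CHANNELS,)

class ForceDecoder:
    """Tiny MLP mapping EMG features to per-finger force estimates."""
    def __init__(self, rng):
        self.w1 = rng.normal(0, 0.1, (HIDDEN, N_CHANNELS))
        self.b1 = np.zeros(HIDDEN)
        self.w2 = rng.normal(0, 0.1, (N_FINGERS, HIDDEN))
        self.b2 = np.zeros(N_FINGERS)

    def __call__(self, feats):
        h = np.maximum(0.0, self.w1 @ feats + self.b1)  # ReLU hidden layer
        return self.w2 @ h + self.b2                    # linear force output

rng = np.random.default_rng(0)
decoder = ForceDecoder(rng)                      # untrained: random weights
window = rng.normal(0, 1, (N_CHANNELS, WINDOW))  # fake EMG samples
print("decoded finger forces:", decoder(emg_features(window)))
```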
  5. State-of-the-art password guessing tools, such as HashCat and John the Ripper, enable users to check billions of passwords per second against password hashes. In addition to performing straightforward dictionary attacks, these tools can expand password dictionaries using password generation rules, such as concatenation of words (e.g., “password123456”) and leet speak (e.g., “password” becomes “p4s5w0rd”). Although these rules work well in practice, creating and expanding them to model further passwords is a labor-intensive task that requires specialized expertise. To address this issue, in this paper we introduce PassGAN, a novel approach that replaces human-generated password rules with theory-grounded machine learning algorithms. Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks, and to generate high-quality password guesses. Our experiments show that this approach is very promising. When we evaluated PassGAN on two large password datasets, we were able to surpass rule-based and state-of-the-art machine learning password guessing tools. However, in contrast with the other tools, PassGAN achieved this result without any a priori knowledge of passwords or common password structures. Additionally, when we combined the output of PassGAN with the output of HashCat, we were able to match 51%–73% more passwords than with HashCat alone. This is remarkable because it shows that PassGAN can autonomously extract a considerable number of password properties that current state-of-the-art rules do not encode.
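To see what the hand-written rules PassGAN replaces actually do, here is a toy Python sketch of dictionary expansion via suffix concatenation and leet-speak substitution. Real tools such as HashCat and John the Ripper have their own rule languages; this only illustrates the idea.

```python
from itertools import product

# Toy version of the hand-written mangling rules that PassGAN replaces:
# suffix concatenation plus leet-speak substitution. Real cracking tools
# use their own rule languages; this only sketches the idea.

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def leet_variants(word):
    """Yield every combination of applying or skipping each substitution."""
    options = [(c, LEET[c]) if c in LEET else (c,) for c in word]
    for combo in product(*options):
        yield "".join(combo)

def expand(word, suffixes=("", "123", "123456", "!")):
    """Expand one dictionary word into a stream of candidate guesses."""
    for variant in leet_variants(word):
        for suffix in suffixes:
            yield variant + suffix

guesses = list(expand("password"))
print(len(guesses), "guesses from one word")
print("password123456" in guesses, "p4s5w0rd" in guesses)  # True True
```

Expanding the single word “password” this way already yields 64 guesses, including the abstract's examples “password123456” and “p4s5w0rd”; maintaining such rules by hand at scale is exactly the labor-intensive task PassGAN automates.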