Title: Modeling and Defense of Social Virtual Reality Attacks Inducing Cybersickness
Social Virtual Reality Learning Environments (VRLEs) offer a new medium for flexible and immersive learning with geo-distributed users. Ensuring user safety in VRLE application domains such as education, flight simulation, and military training is of utmost importance. Specifically, there is a need to study the impact of 'immersion attacks' (e.g., chaperone attack, occlusion) and other types of attacks/faults (e.g., unauthorized access, network congestion) that may cause user safety issues (i.e., induce cybersickness). In this paper, we present a novel framework to quantify the security and privacy issues triggered by immersion attacks and other types of attacks/faults. Using a real-world social VRLE, vSocial, and a novel attack-fault tree model, we show that such attacks can induce undesirable levels of cybersickness. Next, we convert these attack-fault trees into stochastic timed automata (STA) representations to perform statistical model checking for a given attacker profile. Using this model-checking approach, we determine the most vulnerable threat scenarios that can trigger high-occurrence cases of cybersickness for VRLE users. Lastly, we show the effectiveness of our attack-fault tree modeling by incorporating suitable design principles such as hardening, diversity, redundancy, and the principle of least privilege to ensure user safety in a VRLE session.
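The conversion from attack-fault trees to STA enables statistical model checking, i.e., estimating the probability of the top-level cybersickness event by sampling many timed runs. Below is a minimal Monte Carlo sketch of that idea, assuming a toy tree structure and made-up attack rates; the actual STA models and attacker profiles in the paper are more detailed.

```python
import random

# Hypothetical attack/fault arrival rates (events per hour) -- illustrative
# values only, not taken from the paper.
RATES = {
    "chaperone_attack": 0.8,
    "occlusion": 0.5,
    "unauthorized_access": 0.3,
    "network_congestion": 1.2,
}
SESSION_HOURS = 1.0

def leaf_time(rate):
    """Sample the first firing time of a leaf (exponential inter-arrival)."""
    return random.expovariate(rate)

def or_gate(times):
    # An OR gate fires as soon as any child fires.
    return min(times)

def and_gate(times):
    # An AND gate fires only once all children have fired.
    return max(times)

def top_event_time():
    """Time at which the top 'cybersickness' event occurs in one sampled run."""
    immersion = or_gate([leaf_time(RATES["chaperone_attack"]),
                         leaf_time(RATES["occlusion"])])
    system = or_gate([leaf_time(RATES["unauthorized_access"]),
                      leaf_time(RATES["network_congestion"])])
    # Toy structure: cybersickness requires an immersion attack AND a system fault.
    return and_gate([immersion, system])

def estimate_probability(runs=100_000):
    hits = sum(top_event_time() <= SESSION_HOURS for _ in range(runs))
    return hits / runs

if __name__ == "__main__":
    print(f"P(cybersickness within session) = {estimate_probability():.3f}")
```

Dedicated statistical model checkers replace this naive sampler with confidence-interval-driven run counts, but the underlying estimate is the same kind of quantity.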
Award ID(s):
2114035
NSF-PAR ID:
10343866
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE Transactions on Dependable and Secure Computing
ISSN:
1545-5971
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overhead of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving, and more secure VRLE system.
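As a rough illustration of a rate-and-duration-driven risk score, here is a toy attack-tree calculation. The leaf formula (rate times duration, capped at 1, weighted by impact) and the gate rules are assumptions for illustration, not the paper's exact scoring method.

```python
# Toy attack-tree risk scoring; all parameters and the tree are hypothetical.

def leaf_risk(rate_per_hour, duration_hours, impact):
    """Risk of a single threat leaf: likelihood proxy weighted by impact."""
    return min(rate_per_hour * duration_hours, 1.0) * impact

def or_risk(children):
    # Any child suffices, so the gate is at least as risky as its worst child.
    return max(children)

def and_risk(children):
    # All children must occur; the joint score shrinks multiplicatively.
    score = 1.0
    for c in children:
        score *= c
    return score

# Toy VRLE tree: disrupt session = (packet flood OR unauthorized access)
# AND content tampering.
disrupt = and_risk([
    or_risk([leaf_risk(2.0, 0.5, 0.7),    # packet flood
             leaf_risk(0.2, 1.0, 0.9)]),  # unauthorized access
    leaf_risk(0.1, 1.0, 0.8),             # content tampering
])
print(f"risk score for 'disrupt session': {disrupt:.3f}")
```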
  2. Recent research has exposed a number of security issues related to the use of FPGAs in embedded systems and cloud computing environments. Circuits that deliberately waste power can be carefully crafted by a malicious cloud FPGA user and deployed to cause denial-of-service and fault injection attacks. The main defense strategy used by FPGA cloud services involves checking user-submitted designs for circuit structures that are known to aggressively consume power. Unfortunately, this approach is limited by an attacker's ability to conceive new designs that defeat existing checkers. In this work, our contributions are twofold. First, we evaluate a variety of circuit power-wasting techniques that typically are not flagged by the design rule checks imposed by FPGA cloud computing vendors; the efficiencies of five power-wasting circuits, including our new design, are evaluated in terms of power consumed per logic resource. Second, we show that the source of voltage attacks based on power wasters can be identified. Our monitoring approach localizes the attack and suppresses the clock signal for the target region within 21 μs, which is fast enough to stop an attack before it causes a board reset. All experiments are performed using a state-of-the-art Intel Stratix 10 FPGA.
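A simplified software sketch of the localization step: given per-region voltage droop readings, flag regions above a trip threshold and gate their clocks. The region names, threshold value, and gate_clock stub are hypothetical; the actual mechanism is an on-chip hardware monitor, not Python.

```python
# Threshold-based attack localization, sketched in software for clarity.

DROOP_THRESHOLD_MV = 40.0   # hypothetical trip point in millivolts

def localize_attack(sensor_readings_mv):
    """Return the regions whose voltage droop exceeds the trip threshold."""
    return [region for region, droop in sensor_readings_mv.items()
            if droop > DROOP_THRESHOLD_MV]

def gate_clock(region):
    # Stand-in for the hardware action: suppress the clock for one region.
    print(f"clock suppressed for {region}")

readings = {"region_0": 12.5, "region_1": 55.3, "region_2": 9.8}
for region in localize_attack(readings):
    gate_clock(region)   # only region_1 trips in this example
```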
  3. In this paper, we introduce DFSSD, a novel logic locking solution for sequential and FSM circuits with restricted (locked) access to the scan chain. DFSSD combines two techniques for obfuscation: (1) deep faults, and (2) shallow state duality. Both techniques are specifically designed to resist sequential SAT attacks based on bounded model checking. Shallow state duality prevents a sequential SAT attack from taking a shortcut to early termination without running an exhaustive unbounded model checker to assess whether the attack could be terminated. Deep faults, on the other hand, provide a designer with a technique for building deep, yet key-recoverable faults that cannot be discovered by sequential SAT (and bounded model checking based) attacks in a reasonable time.
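To see why deep faults frustrate bounded unrolling, consider a toy locked sequential circuit in which a wrong key only changes behavior after the state has advanced DEPTH steps; a BMC-style attack with a bound shallower than DEPTH cannot eliminate any key candidate. The circuit, key space, and depth below are invented purely for illustration.

```python
from itertools import product

DEPTH = 8          # hypothetical fault depth
CORRECT_KEY = 3    # hypothetical correct key among four candidates

def step(state, key, inp):
    state = state + 1
    # The locked logic mis-fires only once the state counter is deep enough.
    out = inp ^ (1 if state >= DEPTH and key != CORRECT_KEY else 0)
    return state, out

def trace(key, inputs):
    """Output sequence of the locked circuit under one key candidate."""
    state, outs = 0, []
    for inp in inputs:
        state, out = step(state, key, inp)
        outs.append(out)
    return outs

def surviving_keys(bound, keys=range(4)):
    """Keys indistinguishable from the oracle on all input sequences <= bound."""
    survivors = set(keys)
    for k in range(1, bound + 1):
        for inputs in product([0, 1], repeat=k):
            oracle = trace(CORRECT_KEY, inputs)
            survivors = {key for key in survivors
                         if trace(key, inputs) == oracle}
    return survivors

print(surviving_keys(bound=4))  # {0, 1, 2, 3}: bound too shallow to prune
print(surviving_keys(bound=8))  # {3}: only at depth 8 does the fault surface
```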
  4. As widely used in data-driven decision-making, recommender systems have been recognized for their ability to provide users with personalized services in many user-oriented online platforms, such as e-commerce (e.g., Amazon and Taobao) and social media sites (e.g., Facebook and Twitter). Recent works have shown that deep neural network-based recommender systems are highly vulnerable to adversarial attacks, where adversaries can inject carefully crafted fake user profiles (i.e., sets of items that fake users have interacted with) into a target recommender system to promote or demote a set of target items. Instead of generating fake user profiles from scratch, in this paper we introduce a novel strategy that obtains "fake" user profiles by copying cross-domain user profiles: a reinforcement learning based black-box attacking framework (CopyAttack+) is developed to effectively and efficiently select cross-domain user profiles from the source domain to attack the target system. Moreover, we propose to train a local surrogate system to mimic adversarial black-box attacks in the source domain, so as to provide transferable signals that enhance the attacking strategy in the target black-box recommender system. Comprehensive experiments on three real-world datasets demonstrate the effectiveness of the proposed attacking framework.
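The core of the profile-copying idea can be caricatured without the reinforcement learning machinery: rank source-domain profiles by how well they align with the target items and inject the best ones as fake users. The data and the greedy scoring rule below are invented stand-ins; CopyAttack+ learns the selection policy with RL against a black-box target rather than using a fixed heuristic.

```python
# Toy cross-domain profile selection; all profiles and items are invented.

source_profiles = {
    "u1": {"itemA", "itemB", "itemC"},
    "u2": {"itemB", "itemD"},
    "u3": {"itemE"},
}
target_items = {"itemB", "itemC"}

def profile_score(profile, targets):
    """Overlap with the target items -- a crude stand-in for a learned policy."""
    return len(profile & targets)

def select_profiles(profiles, targets, budget):
    """Greedily pick the `budget` source profiles most aligned with the targets."""
    ranked = sorted(profiles.items(),
                    key=lambda kv: profile_score(kv[1], targets),
                    reverse=True)
    return [user for user, _ in ranked[:budget]]

fake_users = select_profiles(source_profiles, target_items, budget=2)
print(fake_users)   # ['u1', 'u2']: the profiles most aligned with target items
```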