Award ID contains: 1814190

  1. Speculative execution attacks like Spectre and Meltdown exploit hardware performance optimization features to illegally access a secret and then leak it to an unauthorized recipient. Many variants of speculative execution attacks (also called transient execution attacks) have been proposed in the last few years, and new ones are constantly being discovered. While software mitigations for some attacks have been proposed, they often cause significant performance degradation. Hardware solutions are also being actively proposed by the research community, especially since these are attacks on the hardware microarchitecture. In this talk, we identify the critical steps in a speculative attack and the root cause of successful attacks. We define the concept of "security dependencies", which should be implemented to prevent data leaks and other security breaches. We propose a taxonomy of defense strategies and show how proposed hardware defenses fall under each strategy. We discuss security-performance tradeoffs that can decrease the performance overhead while still preventing security breaches. We suggest design principles for future security-aware microarchitecture.
  2. We present a new and practical framework for security verification of secure architectures. Specifically, we break the verification task into external verification and internal verification. External verification considers the external protocols, i.e., interactions between users, compute servers, network entities, etc. Internal verification considers the interactions between hardware and software components within each server. This verification framework is general-purpose and can be applied to a stand-alone server or to a large-scale distributed system. We evaluate our verification method on the CloudMonatt and HyperWall architectures as examples.
  3. Speculative execution attacks leverage the speculative and out-of-order execution features in modern computer processors to access secret data or execute code that should not be executed. The secret information can then be leaked through a covert channel. While software patches can be installed for mitigation on existing hardware, these solutions can incur a large performance overhead. Hardware mitigation is being studied extensively by the computer architecture community. It has the benefit of preserving software compatibility and the potential for a much smaller performance overhead than software solutions. This paper presents a systematization of the hardware defenses that have been proposed against speculative execution attacks. We show that speculative execution attacks consist of 6 critical attack steps. We propose defense strategies, each of which prevents one critical attack step from happening, thus preventing the attack from succeeding. We then summarize 20 hardware defenses and overhead-reducing features that have been proposed. We show that each proposed defense can be classified under one of our defense strategies, which also explains why it can thwart the attack. We discuss the scope of the defenses, their performance overhead, and the security-performance trade-offs that can be made.
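    A toy encoding of this kind of systematization is sketched below. The step names and the defense-to-step mapping are illustrative placeholders, not the paper's exact taxonomy; the point is simply that a defense breaks the attack chain by making one step impossible.

        # Illustrative speculative-attack "kill chain" and defense mapping.
        # Step names and defense assignments are placeholders, not the paper's taxonomy.
        ATTACK_STEPS = [
            "setup",                  # attacker prepares microarchitectural state
            "trigger_speculation",    # victim is steered into mis-speculation
            "access_secret",          # transient instructions read the secret
            "encode_into_channel",    # secret modulates a covert channel (e.g., cache)
            "persist_channel_state",  # channel state survives the speculation squash
            "receive_and_decode",     # attacker measures the channel and decodes
        ]

        # Hypothetical defense -> index of the attack step it prevents.
        DEFENSES = {
            "delay-speculative-loads": 2,   # blocks the transient secret access
            "invisible-speculation":   4,   # speculative channel state never persists
            "partitioned-cache":       5,   # receiver cannot observe the sender's sets
        }

        def attack_succeeds(deployed_defenses):
            """The attack chain succeeds only if every step remains possible."""
            blocked = {DEFENSES[d] for d in deployed_defenses}
            return not blocked & set(range(len(ATTACK_STEPS)))

        print(attack_succeeds([]))                         # True: nothing deployed
        print(attack_succeeds(["invisible-speculation"]))  # False: chain is broken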
  4. In cloud computing, it is desirable for suspicious activities to be detected by automatic anomaly detection systems. Although anomaly detection has been investigated in the past, it remains an open problem for cloud computing. The challenges are characterizing the normal behavior of a cloud server, distinguishing between benign anomalies and malicious anomalies (attacks), and preventing alert fatigue due to false alarms. We propose CloudShield, a practical and generalizable real-time anomaly and attack detection system for cloud computing. CloudShield uses a general deep learning model, pretrained on different cloud workloads, to predict normal behavior, and provides real-time, continuous detection by examining the model's reconstruction error distributions. Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks by examining the prediction error distributions. We evaluate CloudShield on representative cloud benchmarks. Our evaluation shows that CloudShield, using model pretraining, can apply to a wide range of cloud workloads. In particular, we observe that CloudShield can detect the recently proposed speculative execution attacks, e.g., the Spectre and Meltdown attacks, in milliseconds. Furthermore, we show that CloudShield accurately differentiates known attacks and potential zero-day attacks from benign programs and prioritizes them. Thus, it significantly reduces false alarms, by up to 99.0%.
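    The core detection loop can be sketched as follows, assuming a pretrained predictor/autoencoder with a predict() method and a stored reference error distribution per class (normal behavior, known benign programs, known attacks). The function names, the use of a two-sample Kolmogorov-Smirnov test, and the threshold are illustrative assumptions, not the exact CloudShield implementation.

        # Sketch: anomaly/attack classification from reconstruction error distributions.
        import numpy as np
        from scipy.stats import ks_2samp

        def reconstruction_errors(model, window):
            """Per-sample reconstruction error of a pretrained model (assumed API)."""
            recon = model.predict(window)                 # shape: (samples, features)
            return np.linalg.norm(window - recon, axis=1)

        def classify_window(errors, reference_errors, alpha=0.01):
            """Match the observed error distribution against stored references
            (e.g., 'normal', 'benign-program-X', 'known-attack-Y')."""
            best_label, best_p = None, 0.0
            for label, ref in reference_errors.items():
                _, p = ks_2samp(errors, ref)              # two-sample KS test
                if p > best_p:
                    best_label, best_p = label, p
            # No reference is statistically compatible -> potential zero-day attack.
            return best_label if best_p >= alpha else "zero-day (unknown)"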
  5. Impostors are attackers who take over a smartphone and gain access to the legitimate user’s confidential and private information. This paper proposes a defense-in-depth mechanism to detect impostors quickly with simple Deep Learning algorithms, which achieve better detection accuracy than the best prior work, which used Machine Learning algorithms that require computing multiple features. Unlike previous work, we then consider protecting the privacy of the user’s behavioral (sensor) data by not exposing it outside the smartphone. For this scenario, we propose a Recurrent Neural Network (RNN) based Deep Learning algorithm that uses only the legitimate user’s sensor data to learn his/her normal behavior. We propose using the Prediction Error Distribution (PED) to enhance the detection accuracy. We also show how a minimalist hardware module, dubbed SID for Smartphone Impostor Detector, can be designed and integrated into smartphones for self-contained impostor detection. Experimental results show that SID supports real-time impostor detection at very low hardware cost and energy consumption compared to other RNN accelerators.
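    A minimal sketch of the RNN-plus-PED idea is given below: an LSTM trained only on the legitimate user's sensor traces predicts the next reading, and a window is flagged when too many of its prediction errors fall in the tail of that user's error distribution. The layer sizes, window test, and threshold are illustrative assumptions, not the SID hardware design.

        # Sketch: next-step sensor prediction and a Prediction Error Distribution test.
        import torch
        import torch.nn as nn

        class SensorPredictor(nn.Module):
            def __init__(self, n_features=6, hidden=32):
                super().__init__()
                self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
                self.out = nn.Linear(hidden, n_features)

            def forward(self, x):             # x: (batch, time, n_features)
                h, _ = self.rnn(x)
                return self.out(h[:, -1, :])  # predicted next sensor reading

        def prediction_errors(model, windows, next_readings):
            """L2 error between predicted and actual next readings."""
            with torch.no_grad():
                pred = model(windows)
            return torch.linalg.vector_norm(pred - next_readings, dim=1)

        def is_impostor(errors, legit_error_q95, frac=0.5):
            """Flag an impostor when most errors in the window exceed a high
            quantile of the legitimate user's prediction-error distribution."""
            return (errors > legit_error_q95).float().mean().item() > frac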
  6. The Spectre and Meltdown attacks and their variants exploit hardware performance optimization features to cause security breaches. Secret information is accessed and leaked through covert or side channels. New attack variants keep appearing, and we do not have a systematic way to capture the critical characteristics of these attacks and evaluate why they succeed or fail. In this paper, we provide a new attack-graph model for reasoning about speculative execution attacks. We model attacks as ordered dependency graphs and prove that a race condition between two nodes can occur if there is a missing dependency edge between them. We define a new concept, the “security dependency”, between a resource access and its prior authorization operation. We show that a missing security dependency is equivalent to a race condition between authorization and access, which is a root cause of speculative execution attacks. We show detailed examples of how our attack graph models the Spectre and Meltdown attacks and how it generalizes to all the attack variants published so far. This attack model is also very useful for identifying new attacks and for generalizing defense strategies. We identify several defense strategies with different performance-security tradeoffs and show that the defenses proposed so far all fit under one of these strategies. We also explain how attack graphs can be constructed, and point to this as promising future work for tool designers.
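    The security-dependency idea can be illustrated with a toy graph check, where an edge records that one operation must complete before another; if a resource access is not (transitively) ordered after its authorization, the two can race. The encoding below is an illustrative sketch, not the paper's formal model.

        # Toy check: report accesses that are not ordered after their authorization.
        from collections import defaultdict

        class DependencyGraph:
            def __init__(self):
                self.waits_for = defaultdict(set)     # node -> nodes it must wait for

            def add_dependency(self, later, earlier):
                self.waits_for[later].add(earlier)

            def ordered_after(self, later, earlier):
                """True if 'later' transitively depends on 'earlier'."""
                stack, seen = [later], set()
                while stack:
                    n = stack.pop()
                    if n == earlier:
                        return True
                    if n not in seen:
                        seen.add(n)
                        stack.extend(self.waits_for[n])
                return False

        def missing_security_dependencies(graph, access_auth_pairs):
            """Pairs (access, authorization) with no ordering edge -> possible race."""
            return [(a, auth) for a, auth in access_auth_pairs
                    if not graph.ordered_after(a, auth)]

        # Meltdown-style example: the load has only a data dependency on its consumer
        # and no dependency on the permission check, so load and check can race.
        g = DependencyGraph()
        g.add_dependency("use_loaded_value", "load_secret")
        print(missing_security_dependencies(g, [("load_secret", "check_permission")]))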
  7. Rootkits are malware that attempt to compromise the system’s functionality while hiding their existence. Various rootkits have been proposed, as well as different software defenses, but only very few hardware defenses. We position hardware-enhanced rootkit defenses as an interesting research opportunity for computer architects, especially as many new hardware defenses for speculative execution attacks are being actively considered. We first describe different techniques used by rootkits and their prime targets in the operating system. We then try to shed light on the main challenges in providing a rootkit defense and how these may be overcome. We show how a hypervisor-based defense can be implemented and provide a full prototype implementation in an open-source cloud computing platform, OpenStack. We evaluate the performance overhead of different defense mechanisms. Finally, we point to some research opportunities for enhancing resilience to rootkit-like attacks in the hardware architecture.
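    As a small, self-contained illustration of rootkit detection (a classic cross-view check, not the hypervisor-based defense prototyped in the paper), the sketch below compares the kernel's /proc view of running processes against the listing reported by ps; a PID visible in /proc but hidden from ps is suspicious. Transient processes can cause benign differences, so a real detector would repeat and correlate the check.

        # Cross-view consistency check for hidden processes (Linux, illustrative only).
        import os
        import subprocess

        def pids_from_proc():
            return {int(d) for d in os.listdir("/proc") if d.isdigit()}

        def pids_from_ps():
            out = subprocess.run(["ps", "-eo", "pid="], capture_output=True, text=True)
            return {int(tok) for tok in out.stdout.split() if tok.isdigit()}

        def possibly_hidden_pids():
            """PIDs the kernel exposes in /proc but that 'ps' does not report."""
            return pids_from_proc() - pids_from_ps()

        if __name__ == "__main__":
            print("Possibly hidden PIDs:", sorted(possibly_hidden_pids()))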
  8. Benefiting from advances in Deep Learning technology, IoT devices and systems are becoming more intelligent and multi-functional. They are expected to run various Deep Learning inference tasks with high efficiency and performance. This requirement is challenged by the mismatch between the limited computing capability of edge devices and large-scale Deep Neural Networks. Edge-cloud collaborative systems have been introduced to mitigate this conflict, enabling resource-constrained IoT devices to host arbitrary Deep Learning applications. However, the introduction of third-party clouds can bring potential privacy issues to edge computing. In this paper, we conduct a systematic study of the opportunities for attacking and protecting the privacy of edge-cloud collaborative systems. Our contributions are twofold: (1) we first devise a set of new attacks that let an untrusted cloud recover arbitrary inputs fed into the system, even if the attacker has no access to the edge device’s data or computations, or permission to query the system; (2) we empirically demonstrate that solutions that add noise fail to defeat our proposed attacks, and then propose two more effective defense methods. This provides insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
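    The feature-inversion idea behind such attacks can be sketched as follows: the edge device runs the first layers and sends intermediate features to the cloud, and the cloud trains a decoder that maps features back to inputs. For simplicity, this sketch assumes the attacker can run auxiliary data through a copy of the edge-side layers, which is a weaker threat model than the paper's; the split point and decoder architecture are also illustrative.

        # Sketch: inverting intermediate features of a split (edge-cloud) model.
        import torch
        import torch.nn as nn

        edge_part = nn.Sequential(              # layers run locally on the IoT device
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

        decoder = nn.Sequential(                # attacker's inverse network in the cloud
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

        def train_decoder(aux_batches, epochs=5, lr=1e-3):
            """Fit the decoder on auxiliary images, then reuse it on victim features."""
            opt = torch.optim.Adam(decoder.parameters(), lr=lr)
            for _ in range(epochs):
                for x in aux_batches:                     # x: (batch, 3, H, W) in [0, 1]
                    with torch.no_grad():
                        feats = edge_part(x)              # what the cloud would observe
                    loss = nn.functional.mse_loss(decoder(feats), x)
                    opt.zero_grad(); loss.backward(); opt.step()

        def reconstruct(victim_features):
            with torch.no_grad():
                return decoder(victim_features)           # approximation of the private input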
  9. The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been presented, in which an adversary can steal model owners' private data. Meanwhile, countermeasures have been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and have ignored privacy during inference. In this paper, we devise a new set of attacks to compromise inference data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed to different participants, one malicious participant can accurately recover an arbitrary input fed into the system, even if he has no access to other participants' data or computations, or to prediction APIs with which to query the system. We evaluate our attacks under different settings, models, and datasets to show their effectiveness and generalization. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats. This provides insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
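    The collaborative setting itself can be made concrete with a minimal sketch of splitting one network's inference across two participants; the model and split point below are illustrative. It is the activations crossing this boundary that the attacks described above reconstruct inputs from.

        # Sketch: partitioning one model's inference between two participants.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),   # part A: runs at the data owner
            nn.Linear(256, 128), nn.ReLU(),       # part B: runs at the other participant
            nn.Linear(128, 10),
        )
        part_a, part_b = model[:3], model[3:]

        def collaborative_inference(x):
            with torch.no_grad():
                activations = part_a(x)   # only these activations leave the data owner
                return part_b(activations)

        logits = collaborative_inference(torch.rand(1, 1, 28, 28))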
  10. Controllers of security-critical cyber-physical systems, like the power grid, are a very important class of computer systems. Attacks against the control code of a power-grid system, especially zero-day attacks, can be catastrophic. Earlier detection of anomalies can prevent further damage. However, detecting zero-day attacks is extremely challenging because they have no known code and exhibit unknown behavior. Furthermore, if data collected from the controller is transferred over the network to a server for analysis and detection of anomalous behavior, this creates a very large attack surface and also delays detection. To address this problem, we propose the Reconstruction Error Distribution (RED) of Hardware Performance Counters (HPCs), and a data-driven defense system based on it. Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller. Then, we run this model on real-time data from commonly available HPCs. We use the proposed RED to enhance the temporal deep learning detection of anomalous behavior by estimating distribution deviations from the normal behavior with an effective statistical test. Experimental results on a real power-grid controller show that we can detect anomalous behavior with high accuracy (>99.9%), nearly zero false positives, and short (<360 ms) latency.
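    The RED-based online check can be sketched as follows: per-time-step errors between the temporal model's predictions and the actual HPC readings are kept in a sliding window, and the window's error distribution is compared against the distribution recorded on normal traces. The two-sample Kolmogorov-Smirnov test, window length, and significance level here are illustrative stand-ins for the statistical test used in the paper.

        # Sketch: sliding-window Reconstruction Error Distribution (RED) detector.
        from collections import deque
        import numpy as np
        from scipy.stats import ks_2samp

        class REDDetector:
            def __init__(self, normal_errors, window=256, alpha=0.001):
                self.normal_errors = np.asarray(normal_errors)   # from normal traces
                self.recent = deque(maxlen=window)
                self.alpha = alpha

            def observe(self, hpc_reading, predicted_reading):
                """Feed one time step of actual vs. predicted HPC values; returns True
                when the recent error distribution deviates from normal behavior."""
                self.recent.append(float(np.linalg.norm(hpc_reading - predicted_reading)))
                if len(self.recent) < self.recent.maxlen:
                    return False                                  # not enough evidence yet
                _, p = ks_2samp(np.asarray(self.recent), self.normal_errors)
                return p < self.alpha                             # anomalous window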