The secure functioning of automotive systems is vital to the safety of their passengers and other roadway users. One of the critical functions for safety is the controller area network (CAN), which interconnects the safety-critical electronic control units (ECUs) in the majority of ground vehicles. Unfortunately, CAN is known to be vulnerable to several attacks. One such attack is the bus-off attack, which can be used to cause a victim ECU to disconnect itself from the CAN bus and, subsequently, to let an attacker masquerade as that ECU. A limitation of the bus-off attack is that it requires the attacker to achieve tight synchronization between the victim's transmission and the attacker's injected message. In this paper, we introduce a schedule-based attack framework for the CAN bus-off attack that uses the real-time schedule of the CAN bus to predict more attack opportunities than previously known. We describe a ranking method for an attacker to select and optimize its attack injections with respect to criteria such as attack success rate, bus perturbation, or attack latency. Our results show that schedule-based attacks amplify the known vulnerabilities of the CAN bus.
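To make the schedule-based idea concrete, the following minimal sketch (not from the paper) predicts candidate attack instants from a hypothetical periodic CAN message set: instants where the victim's release coincides with another message's release, giving the attacker the tight synchronization the bus-off attack needs. All IDs, periods, and offsets below are invented for illustration.

```python
# Hedged sketch: predicting schedule-based bus-off opportunities from a
# periodic CAN message set. All IDs, periods, and offsets are invented.
from functools import reduce
from math import gcd

# (CAN ID, period in ms, release offset in ms)
MESSAGES = [
    (0x10, 10, 0),    # another ECU's periodic message
    (0x2A, 20, 10),   # victim message targeted by the bus-off attack
    (0x33, 50, 10),
]
VICTIM_ID = 0x2A
HYPERPERIOD = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in MESSAGES))

def releases(period, offset, horizon):
    """All release instants of one periodic message up to the horizon."""
    return [offset + k * period for k in range((horizon - offset) // period + 1)]

victim = next(m for m in MESSAGES if m[0] == VICTIM_ID)
victim_times = set(releases(victim[1], victim[2], HYPERPERIOD))

# An attack opportunity here: the victim's release coincides with another
# message's release, so both ECUs begin arbitration together and the attacker
# can inject a conflicting frame with the required tight synchronization.
opportunities = []
for mid, period, offset in MESSAGES:
    if mid != VICTIM_ID:
        shared = victim_times & set(releases(period, offset, HYPERPERIOD))
        opportunities += [(t, mid) for t in shared]

# Rank opportunities; a real attacker might weight success rate, bus
# perturbation, or attack latency instead of plain earliest-first order.
for t, mid in sorted(opportunities):
    print(f"t={t:3d} ms: victim 0x{VICTIM_ID:X} coincides with 0x{mid:X}")
```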
On the Pitfalls and Vulnerabilities of Schedule Randomization Against Schedule-Based Attacks
Schedule randomization is one of the recently introduced security defenses against schedule-based attacks, i.e., attacks whose success depends on a particular ordering between the execution windows of an attacker task and a victim task within the system. It falls into the category of information hiding (as opposed to deterministic, isolation-based defenses) and is designed to reduce the attacker's ability to infer the future schedule. This paper investigates the limitations and vulnerabilities of schedule-randomization-based defenses in real-time systems. We first provide definitions, categorization, and examples of schedule-based attacks, and then discuss the challenges of employing schedule randomization in real-time systems. Further, we provide a preliminary security test to determine whether a certain timing relation between the attacker and victim tasks can never happen in systems scheduled by a fixed-priority scheduling algorithm. Finally, we compare fixed-priority scheduling against schedule-randomization techniques in terms of the success rate of various schedule-based attacks for both synthetic and real-world applications. Our results show that, in many cases, schedule randomization either has no security benefits or can even increase the attacker's success rate, depending on the priority relation between the attacker and victim tasks.
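As a concrete illustration of such a test, the sketch below simulates preemptive fixed-priority scheduling over one hyperperiod and checks empirically whether the attacker task ever runs in the slot immediately before the victim (a posterior timing relation). The task set, unit-time model, and the specific relation tested are illustrative assumptions, not the paper's formal analysis.

```python
# Hedged sketch: an empirical stand-in for the paper's security test, checking
# over one hyperperiod whether the attacker ever runs in the unit time slot
# immediately before the victim under preemptive fixed-priority scheduling.
# The task set and unit-time model are invented for illustration.
from functools import reduce
from math import gcd

# (name, period, WCET), listed from highest to lowest fixed priority.
TASKS = [("victim", 5, 1), ("attacker", 10, 2), ("other", 20, 3)]
H = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in TASKS))

def fp_schedule(tasks, horizon):
    """Unit-time simulation; assumes every job finishes before its next release."""
    remaining = {name: 0 for name, _, _ in tasks}
    timeline = []
    for t in range(horizon):
        for name, period, wcet in tasks:
            if t % period == 0:
                remaining[name] = wcet        # release a new job
        running = next((n for n, _, _ in tasks if remaining[n] > 0), None)
        if running is not None:
            remaining[running] -= 1
        timeline.append(running)
    return timeline

timeline = fp_schedule(TASKS, H)
posterior = any(a == "attacker" and b == "victim"
                for a, b in zip(timeline, timeline[1:]))
print("attacker immediately before victim ever occurs:", posterior)
```

If the relation never occurs in any hyperperiod, the deterministic fixed-priority schedule already rules out that schedule-based attack, with no randomization needed.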
- PAR ID: 10108594
- Date Published:
- Journal Name: 2019 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)
- Page Range / eLocation ID: 103 to 116
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Timing correctness is crucial in a multi-criticality real-time system, such as an autonomous driving system. It has been recently shown that these systems can be vulnerable to timing inference attacks, mainly due to their predictable behavioral patterns. Existing solutions like schedule randomization cannot protect against such attacks, as they are often limited by the system's real-time nature. This article presents SchedGuard++, a temporal protection framework for Linux-based real-time systems that protects against posterior schedule-based attacks by preventing untrusted tasks from executing during specific time intervals. SchedGuard++ supports multi-core platforms and is implemented using Linux containers and a customized Linux kernel real-time scheduler. We provide schedulability analysis assuming the Logical Execution Time (LET) paradigm, which enforces I/O predictability. The proposed response time analysis takes into account the interference from trusted and untrusted tasks and the impact of the protection mechanism. We demonstrate the effectiveness of our system using a realistic radio-controlled rover platform. Not only is SchedGuard++ able to protect against posterior schedule-based attacks, but it also ensures that the real-time tasks/containers meet their temporal requirements.
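As a rough illustration of the protection idea described in this abstract (not the actual kernel implementation), the sketch below models protected time intervals in which untrusted tasks are simply ineligible to run; the window placement and hyperperiod are assumed values.

```python
# Hedged sketch of the protection idea only (the real SchedGuard++ lives in a
# customized Linux kernel scheduler). Window placement and the hyperperiod
# are assumed values for illustration.
PROTECTED_WINDOWS = [(0, 2), (10, 12)]   # [start, end) in ms within a hyperperiod
UNTRUSTED = {"attacker"}
HYPERPERIOD_MS = 20

def eligible(task, t):
    """Untrusted tasks may not execute inside a protected window."""
    phase = t % HYPERPERIOD_MS
    if task in UNTRUSTED:
        return not any(s <= phase < e for s, e in PROTECTED_WINDOWS)
    return True

# A scheduler consulting eligible() cannot place the attacker directly after
# the victim's protected slot, defeating posterior schedule-based attacks.
assert not eligible("attacker", 11)
assert eligible("attacker", 5)
assert eligible("victim", 11)
```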
- Trajectory prediction forecasts nearby agents' moves based on their historical trajectories. Accurate trajectory prediction (or prediction in short) is crucial for autonomous vehicles (AVs). Existing attacks compromise the prediction model of a victim AV by directly manipulating the historical trajectory of an attacker AV, which has limited real-world applicability. This paper, for the first time, explores an indirect attack approach that induces prediction errors via attacks against the perception module of a victim AV. Although it has been shown that physically realizable attacks against LiDAR-based perception are possible by placing a few objects at strategic locations, it is still an open challenge to find an object location in the vast search space that launches effective attacks against prediction under varying victim AV velocities. Through analysis, we observe that a prediction model is prone to an attack focusing on a single point in the scene. Consequently, we propose a novel two-stage attack framework to realize the single-point attack. The first stage, prediction-side attack, efficiently identifies state perturbations for the prediction model that are effective and velocity-insensitive, guided by the distribution of detection results under object-based attacks against perception. The second stage, location matching, matches feasible object locations with the found state perturbations. Our evaluation using a public autonomous driving dataset shows that our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV. The effectiveness of our attack is also demonstrated on a real testbed car. To the best of our knowledge, this study is the first security analysis spanning from LiDAR-based perception to prediction in autonomous driving, leading to a realistic attack on prediction. To counteract the proposed attack, potential defenses are discussed.
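The following toy sketch conveys the flavor of the first stage only: a random search for a single-point perturbation of the victim's perceived history that degrades predictions across several velocities. The constant-velocity predict() stand-in and all search ranges are placeholders, not the paper's model or method.

```python
# Hedged toy sketch of the first-stage idea: search for one single-point
# perturbation of the perceived history that degrades predictions across
# victim velocities. predict() and all ranges are placeholders, not the
# paper's model or method.
import numpy as np

rng = np.random.default_rng(0)

def predict(history):
    """Placeholder predictor: constant-velocity extrapolation, 5 steps ahead."""
    v = history[-1] - history[-2]
    return history[-1] + v * np.arange(1, 6)[:, None]

def attack_gain(delta, velocities):
    """Mean displacement between clean and perturbed predictions when a
    single historical point is shifted by delta."""
    total = 0.0
    for v in velocities:
        hist = np.array([[v * t, 0.0] for t in range(8)])
        perturbed = hist.copy()
        perturbed[-1] += delta                      # the single-point attack
        total += np.linalg.norm(predict(hist) - predict(perturbed), axis=1).mean()
    return total / len(velocities)

# Random search for a velocity-insensitive perturbation; the paper's second
# stage would then match such a perturbation to a feasible object location.
candidates = rng.uniform(-1.0, 1.0, size=(200, 2))
best = max(candidates, key=lambda d: attack_gain(d, velocities=[5.0, 10.0, 15.0]))
print("best single-point perturbation found:", best)
```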
- Sybil attacks present a significant threat to many Internet systems and applications, in which a single adversary inserts multiple colluding identities into the system to compromise its security and privacy. Recent work has advocated the use of social-network-based trust relationships to defend against Sybil attacks. However, most prior security analyses of such systems examine only the case of a social network at a single instant in time. In practice, social network connections change over time, and attackers can also cause limited changes to the networks. In this work, we focus on the temporal dynamics of a variety of social-network-based Sybil defenses. We describe and examine the effect of novel attacks based on: (a) the attacker's ability to modify Sybil-controlled parts of the social-network graph, (b) his ability to change the connections that his Sybil identities maintain to honest users, and (c) his ability to take advantage of the regular dynamics of connections forming and breaking in the honest part of the social network. We find that against some defenses meant to be fully distributed, such as SybilLimit and Persea, the attacker can make dramatic gains over time and greatly undermine the security guarantees of the system. Even against centrally controlled Sybil defenses, the attacker can eventually evade detection (e.g., against SybilInfer and SybilRank) or create denial-of-service conditions (e.g., against Ostra and SumUp). After analyzing and simulating these attacks using both synthetic and real-world social network topologies, we describe possible defense strategies and the trade-offs that should be explored. It is clear from our findings that temporal dynamics need to be accounted for in Sybil defense, or else the attacker will be able to undermine the system in unexpected and possibly dangerous ways.
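A minimal simulation in the spirit of these temporal attacks (all rates and sizes are invented): honest-side connection churn lets the attacker accumulate attack edges round by round, and the number of attack edges is exactly the quantity that walk-based defenses such as SybilLimit use to bound how many Sybils are accepted.

```python
# Hedged toy simulation of the temporal dynamics (all rates/sizes invented):
# connection churn among honest users lets the attacker accumulate attack
# edges, the quantity that walk-based defenses use to bound accepted Sybils.
import random

random.seed(1)
HONEST, SYBILS = range(100), range(100, 120)
attack_edges = set()

for rnd in range(50):
    # Each round a few honest nodes form one new connection; some of those
    # requests are answered by attacker-controlled Sybil identities.
    for u in random.sample(HONEST, 5):
        if random.random() < 0.2:
            attack_edges.add((u, random.choice(SYBILS)))
    # The attacker also periodically re-points stale attack edges at other
    # Sybils to spread them out against walk-based detection.
    if attack_edges and rnd % 10 == 0:
        u, s = random.choice(sorted(attack_edges))
        attack_edges.discard((u, s))
        attack_edges.add((u, random.choice(SYBILS)))

print("attack edges accumulated after 50 rounds:", len(attack_edges))
```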
- Deep neural networks (DNNs) provide excellent performance across a wide range of classification tasks, but their training requires high computational resources and is often outsourced to third parties. Recent work has shown that outsourced training introduces the risk that a malicious trainer will return a backdoored DNN that behaves normally on most inputs but causes targeted misclassifications or degrades the accuracy of the network when a trigger known only to the attacker is present. In this paper, we provide the first effective defenses against backdoor attacks on DNNs. We implement three backdoor attacks from prior work and use them to investigate two promising defenses, pruning and fine-tuning. We show that neither, by itself, is sufficient to defend against sophisticated attackers. We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it successfully weakens or even eliminates the backdoors, i.e., in some cases reducing the attack success rate to 0% with only a 0.4% drop in accuracy for clean (non-triggering) inputs. Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
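A compact sketch of the fine-pruning recipe on a toy network (the model, data, and 30% pruning ratio are all illustrative): channels that stay dormant on clean inputs are pruned first, then the network is fine-tuned on clean data. A faithful implementation would also mask gradients so pruned channels stay at zero during fine-tuning.

```python
# Hedged sketch of fine-pruning on a toy model (architecture, data, and the
# 30% pruning ratio are illustrative). A faithful implementation would also
# mask gradients so pruned channels stay at zero during fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
clean_x = torch.randn(64, 3, 32, 32)          # stand-in for clean validation data
clean_y = torch.randint(0, 10, (64,))

# Step 1 (pruning): backdoor behavior tends to hide in channels that stay
# dormant on clean inputs, so prune the least-activated channels first.
with torch.no_grad():
    acts = F.relu(model[0](clean_x)).mean(dim=(0, 2, 3))   # per-channel mean
    prune_idx = acts.argsort()[: int(0.3 * len(acts))]
    model[0].weight[prune_idx] = 0.0
    model[0].bias[prune_idx] = 0.0

# Step 2 (fine-tuning): a few passes on clean data to recover clean accuracy
# and dislodge any backdoor behavior that survived pruning.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):
    opt.zero_grad()
    loss = F.cross_entropy(model(clean_x), clean_y)
    loss.backward()
    opt.step()
print("fine-pruning sketch finished; final clean loss:", float(loss))
```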