Automotive systems have always been designed with safety in mind. In this regard, the functional safety standard ISO 26262 was drafted with the intention of minimizing risk due to random hardware faults or systematic failures in the design of the electrical and electronic components of an automobile. However, the growing complexity of modern cars has added another potential point of failure in the form of cyber or sensor attacks. Recently, researchers have demonstrated that vulnerabilities in a vehicle's software or sensing units could enable them to remotely alter the intended operation of the vehicle. As such, in addition to safety, security should be considered an important design goal. However, designing security solutions without considering safety objectives could result in potential hazards. Consequently, in this paper we propose the notion of security for safety and show that by integrating safety conditions with our system-level security solution, which comprises a modified Kalman filter and a Chi-squared detector, we can prevent potential hazards that could occur due to the violation of safety objectives during an attack. Furthermore, with the help of a car-following case study, where the follower car is equipped with an adaptive cruise control unit, we show that our proposed system-level security solution preserves the safety constraints and prevents collisions between vehicles while under sensor attack.
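The detection mechanism described here follows a standard residual-based pattern: a Kalman filter predicts the expected sensor measurement, and a Chi-squared test flags measurements whose innovation (prediction residual) is statistically improbable. The sketch below illustrates that general idea for a one-dimensional position sensor; the scalar model, noise covariances, and threshold are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of residual-based attack detection: a Kalman filter
# estimates the expected measurement, and a Chi-squared test on the
# normalized innovation flags anomalous (possibly attacked) readings.
# The 1-D constant-velocity model and all constants are assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (pos, vel), dt = 0.1 s
C = np.array([[1.0, 0.0]])               # only position is measured
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance
THRESHOLD = 6.63                         # Chi-squared, 1 dof, ~99% confidence

x = np.zeros((2, 1))                     # state estimate
P = np.eye(2)                            # estimate covariance

def step(z):
    """One predict/update cycle; returns (estimate, attack_flag)."""
    global x, P
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Innovation (residual) and its covariance
    r = np.array([[z]]) - C @ x_pred
    S = C @ P_pred @ C.T + R
    # Chi-squared test statistic on the normalized residual
    g = (r.T @ np.linalg.inv(S) @ r).item()
    attacked = g > THRESHOLD
    # Update (a safety-aware variant might skip this when attacked)
    K = P_pred @ C.T @ np.linalg.inv(S)
    x = x_pred + K @ r
    P = (np.eye(2) - K @ C) @ P_pred
    return x, attacked

est, flag = step(1.2)  # example: feed one measurement
```

In a security-for-safety design like the one described, a flagged measurement would trigger a safe fallback, for example holding a conservative headway in the adaptive cruise controller, instead of acting on the suspect sensor value.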
Trick or Heat?: Manipulating Critical Temperature-Based Control Systems Using Rectification Attacks
Temperature sensing and control systems are widely used in the closed-loop control of critical processes, such as maintaining the thermal stability of patients, or in alarm systems for detecting temperature-related hazards. However, the security of these systems has yet to be completely explored, leaving potential attack surfaces that can be exploited to take control over critical systems. In this paper, we investigate the reliability of temperature-based control systems from a security and safety perspective. We show how unexpected consequences and safety risks can be induced by physical-level attacks on analog temperature sensing components. For instance, we demonstrate that an adversary could remotely manipulate the temperature sensor measurements of an infant incubator to cause potential safety issues, without tampering with the victim system or triggering automatic temperature alarms. This attack exploits the unintended rectification effect that can be induced in operational and instrumentation amplifiers to control the sensor output, tricking the internal control loop of the victim system into heating up or cooling down. Furthermore, we show how exploiting this hardware-level vulnerability could affect different classes of analog sensors that share similar signal conditioning processes. Our experimental results indicate that conventional defenses commonly deployed in these systems are not sufficient to mitigate the threat, so we propose a prototype design of a low-cost anomaly detector for critical applications to ensure the integrity of temperature sensor signals.
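One intuition behind anomaly detection in this setting is that real thermal systems have large time constants, so legitimate temperature readings cannot change arbitrarily fast, while an induced signal can. Purely as a software-level illustration of that plausibility-checking idea (the paper's actual prototype operates on the analog signal path, not on digital readings), a rate-of-change check might look like this; the rate bound is an assumed value:

```python
# Toy plausibility check on temperature readings: flag readings that change
# faster than a physically plausible rate. Illustrative only; the paper's
# defense is a hardware design that checks integrity at the analog level.
MAX_RATE_C_PER_S = 0.5  # assumed bound for an incubator-like thermal mass

def reading_is_plausible(prev_temp_c, new_temp_c, dt_s):
    """Return True if the new reading moved at a physically plausible rate."""
    rate = abs(new_temp_c - prev_temp_c) / dt_s
    return rate <= MAX_RATE_C_PER_S
```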
- Award ID(s):
- 1812553
- PAR ID:
- 10422593
- Date Published:
- Journal Name:
- Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
- Page Range / eLocation ID:
- 2301 to 2315
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
In Autonomous Driving (AD) systems, perception is both security and safety critical. Despite various prior studies on its security issues, all of them only consider attacks on camera- or LiDAR-based AD perception alone. However, production AD systems today predominantly adopt a Multi-Sensor Fusion (MSF) based design, which in principle can be more robust against these attacks under the assumption that not all fusion sources are (or can be) attacked at the same time. In this paper, we present the first study of security issues of MSF-based perception in AD systems. We directly challenge the basic MSF design assumption above by exploring the possibility of attacking all fusion sources simultaneously. This allows us for the first time to understand how much security guarantee MSF can fundamentally provide as a general defense strategy for AD perception. We formulate the attack as an optimization problem to generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it. To systematically generate such a physical-world attack, we propose a novel attack pipeline that addresses two main design challenges: (1) non-differentiable target camera and LiDAR sensing systems, and (2) non-differentiable cell-level aggregated features popularly used in LiDAR-based AD perception. We evaluate our attack on MSF algorithms included in representative open-source industry-grade AD systems in real-world driving scenarios. Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms. Our attack is also found to be stealthy, robust to victim positions, transferable across MSF algorithms, and physical-world realizable after being 3D-printed and captured by LiDAR and camera devices. To concretely assess the end-to-end safety impact, we further perform a simulation evaluation and show that the attack can cause a 100% vehicle collision rate for an industry-grade AD system. We also evaluate and discuss defense strategies.
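To make the optimization challenge concrete: because the camera/LiDAR sensing pipelines and the cell-level feature aggregation are non-differentiable, gradients of the attack objective must be approximated rather than computed analytically. The sketch below shows that generic pattern using a numerical finite-difference gradient over low-dimensional shape parameters; the objective function is a placeholder, not the paper's actual rendering-plus-perception pipeline.

```python
# Sketch of black-box optimization against a non-differentiable pipeline:
# approximate gradients numerically and descend on the object's shape
# parameters to reduce detection confidence. All functions are stand-ins.
import numpy as np

def detection_confidence(shape_params):
    # Placeholder for rendering the object into camera/LiDAR inputs and
    # running the fused perception stack; returns a detection confidence.
    return float(np.sum(shape_params ** 2))

def estimate_gradient(f, x, eps=1e-3):
    """Central finite-difference gradient of a black-box objective."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

shape = np.random.randn(16) * 0.1        # assumed low-dim shape parameters
for _ in range(100):
    grad = estimate_gradient(detection_confidence, shape)
    shape -= 0.05 * grad                 # descend: harder for MSF to detect
```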
The secure functioning of automotive systems is vital to the safety of their passengers and other roadway users. One of the critical functions for safety is the controller area network (CAN), which interconnects the safety-critical electronic control units (ECUs) in the majority of ground vehicles. Unfortunately, CAN is known to be vulnerable to several attacks. One such attack is the bus-off attack, which can be used to force a victim ECU to disconnect itself from the CAN bus and, subsequently, to allow an attacker to masquerade as that ECU. A limitation of the bus-off attack is that it requires the attacker to achieve tight synchronization between the transmission of the victim's and the attacker's injected messages. In this paper, we introduce a schedule-based attack framework for the CAN bus-off attack that uses the real-time schedule of the CAN bus to predict more attack opportunities than previously known. We describe a ranking method for an attacker to select and optimize its attack injections with respect to criteria such as attack success rate, bus perturbation, or attack latency. The results show that schedule-based attacks can substantially amplify the known vulnerabilities of the CAN bus.
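The schedule-based idea can be illustrated with a toy model: if message periods and offsets are known, an attacker can enumerate, over one hyperperiod, the instants at which the victim's frame is queued, and rank those where a higher-priority frame predictably delays it, which eases the tight synchronization the bus-off attack requires. The IDs, periods, and offsets below are hypothetical, and real CAN timing analysis is considerably more involved.

```python
# Toy sketch of schedule-based attack-opportunity prediction on CAN.
# With periodic messages, victim transmission instants are enumerable; those
# preceded by a queued higher-priority frame give predictable delays that an
# attacker could exploit for synchronization. Schedule values are made up.
from math import gcd
from functools import reduce

# (CAN id, period_ms, offset_ms); lower id = higher priority
messages = [(0x080, 10, 0), (0x101, 10, 1), (0x2F0, 20, 1)]
VICTIM_ID = 0x101

hyper = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in messages), 1)

def queued_at(t):
    """IDs of messages released at millisecond t."""
    return [mid for mid, p, o in messages if (t - o) % p == 0]

# Rank victim instants by how many higher-priority frames were released
# just before, since those delay the victim predictably.
opportunities = []
for t in range(hyper):
    if VICTIM_ID in queued_at(t):
        blockers = [m for m in queued_at(t - 1) if m < VICTIM_ID]
        opportunities.append((t, len(blockers)))

print(sorted(opportunities, key=lambda pair: -pair[1]))
```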
Multi-sensor fusion has been widely used by autonomous vehicles (AVs) to integrate the perception results from different sensing modalities, including LiDAR, camera, and radar. Despite the rapid development of multi-sensor fusion systems in autonomous driving, their vulnerability to malicious attacks has not been well studied. Although some prior works have studied attacks against the perception systems of AVs, they only consider a single sensing modality or a camera-LiDAR fusion system, and therefore cannot attack a sensor fusion system based on LiDAR, camera, and radar. To fill this research gap, in this paper we present the first study on the vulnerability of multi-sensor fusion systems that employ LiDAR, camera, and radar. Specifically, we propose a novel attack method that can simultaneously attack all three types of sensing modalities using a single type of adversarial object. The adversarial object can be easily fabricated at low cost, and the proposed attack can be easily performed with high stealthiness and flexibility in practice. Extensive experiments based on a real-world AV testbed show that the proposed attack can continuously hide a target vehicle from the perception system of a victim AV using only two small adversarial objects.
Deep neural networks (DNNs) are influencing a wide range of applications, from safety-critical to security-sensitive use cases. In many such use cases, the DNN inference process relies on distributed systems involving IoT devices and edge/cloud servers as participants, where a pre-trained DNN model is partitioned/split into multiple parts and the participants collaboratively execute them. However, such collaboration often requires dynamic DNN partitioning information to be exchanged among the participants over unsecured networks or via relays/hops, which can lead to novel privacy vulnerabilities. In this paper, we propose a DNN model extraction attack that exploits such vulnerabilities to not only extract the original input data, but also reconstruct the entire victim DNN model. Specifically, the proposed attack model utilizes extracted/leaked data and adversarial autoencoders to generate and train a shadow model that closely mimics the behavior of the original victim model. The proposed attack is query-free and does not require the attacker to have any prior information about the victim model or input data. Using an IoT-edge hardware testbed running collaborative DNN inference, we demonstrate the effectiveness of the proposed attack model in extracting the victim model with high levels of certainty across many realistic scenarios.
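At its core, the shadow-model step amounts to fitting a substitute network so that, given the intermediate activations leaked at the partition point, it reproduces the victim model's outputs. A minimal sketch of that fitting step is shown below; the feature dimensions, architecture, and loss are assumptions, and the paper's full attack additionally reconstructs the original inputs with adversarial autoencoders.

```python
# Sketch of the shadow-model idea: fit a "tail" network that maps leaked
# intermediate activations to the victim's observed outputs, so the shadow
# mimics the victim's behavior. Shapes and architecture are assumptions.
import torch
import torch.nn as nn

FEAT_DIM, NUM_CLASSES = 256, 10   # assumed partition-point and output sizes

shadow_tail = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))

opt = torch.optim.Adam(shadow_tail.parameters(), lr=1e-3)
loss_fn = nn.KLDivLoss(reduction="batchmean")

def train_step(leaked_feats, victim_probs):
    """Fit the shadow tail so its outputs match the victim's outputs."""
    opt.zero_grad()
    pred = torch.log_softmax(shadow_tail(leaked_feats), dim=1)
    loss = loss_fn(pred, victim_probs)   # match victim's output distribution
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-in data; a real attack would use sniffed traffic.
feats = torch.randn(32, FEAT_DIM)
probs = torch.softmax(torch.randn(32, NUM_CLASSES), dim=1)
print(train_step(feats, probs))
```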