

Title: A Machine Learning Approach for Combating Cyber Attacks in Self-Driving Vehicles
Self-driving vehicles are highly susceptible to cyber attacks. This paper utilizes a machine learning approach to combat cyber attacks on self-driving vehicles, focusing on detecting incorrect data injected into the vehicle data bus. We use the extreme gradient boosting approach, a promising example of machine learning, to classify such incorrect information. We discuss the research methodology in detail, including acquiring the driving data, preprocessing it, artificially inserting incorrect information, and finally classifying it. Our results show that the considered algorithm achieves an accuracy of up to 92% in detecting abnormal behavior on the vehicle data bus.
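The abstract's pipeline (acquire, preprocess, inject, classify) maps naturally onto a short script. Below is a minimal sketch, assuming extreme gradient boosting via the XGBoost library; the feature set, the injection scheme, and all numbers are illustrative stand-ins, not the paper's dataset or its reported 92% result.

```python
# Hypothetical sketch: train an XGBoost classifier to flag injected bus records.
# Feature names and the injection scheme are illustrative, not from the paper.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for recorded driving data: speed (km/h), engine RPM, throttle (%).
normal = np.column_stack([
    rng.normal(60, 15, 5000),     # speed
    rng.normal(2200, 500, 5000),  # engine RPM
    rng.normal(35, 10, 5000),     # throttle position
])

# Artificially inject incorrect values, mimicking a data-bus injection attack.
attacked = normal[:1000].copy()
attacked[:, 0] += rng.normal(80, 20, 1000)  # implausible speed jumps

X = np.vstack([normal, attacked])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(attacked))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```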
Award ID(s):
1816112
NSF-PAR ID:
10227657
Journal Name:
Proceedings of IEEE Southeastcon
ISSN:
1558-058X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Communication networks in power systems are a major part of the smart grid paradigm. They enable and facilitate the automation of power grid operation as well as self-healing in contingencies. Such dependence on communication networks, though, creates room for cyber threats. An adversary can launch an attack on the communication network, which in turn affects power grid operation. Attacks can take the form of false data injected into system measurements, flooding of the communication channels with unnecessary data, or interception of messages. Machine learning-based processing of data gathered from the communication network and the power grid is a promising solution for detecting cyber threats. In this paper, a cyber-security co-simulation framework for a cross-layer strategy is presented. The advantage of such a framework is the augmentation of valuable data, which enhances the detection as well as the identification of anomalies in the operation of the power grid. The framework is implemented on the IEEE 118-bus system. The system is constructed in Mininet to simulate a communication network and obtain data for analysis. A distributed three-controller software-defined networking (SDN) framework is proposed that utilizes the Open Network Operating System (ONOS) cluster. According to our findings, the suggested architecture achieves more than ten times the throughput of a single-SDN-controller framework. This allows a higher flow of data throughout the network while decreasing congestion caused by a single controller's processing limitations. Furthermore, our CECD-AS approach outperforms state-of-the-art physics-based and machine learning-based techniques in attack classification. The performance of the framework is investigated under various types of communication attacks.
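To make the cross-layer idea concrete, here is a hedged sketch that fuses synthetic network-traffic statistics with synthetic grid measurements and trains an off-the-shelf classifier to separate attack classes. The feature names, the random-forest choice, and the data are assumptions; the paper's CECD-AS pipeline and its SDN/Mininet tooling are not reproduced here.

```python
# Illustrative cross-layer sketch: fuse cyber and physical features, then
# classify the operating condition. All features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 3000

# Cyber-layer features: packet rate, mean inter-arrival time, retransmissions.
cyber = rng.normal([500.0, 2.0, 5.0], [50.0, 0.3, 2.0], size=(n, 3))
# Physical-layer features: bus voltage magnitude (p.u.) and line flow (MW).
phys = rng.normal([1.0, 120.0], [0.02, 15.0], size=(n, 2))

X = np.hstack([cyber, phys])
# Labels: 0 = normal, 1 = false data injection, 2 = flooding (synthetic).
y = rng.integers(0, 3, size=n)
X[y == 1, 3] += 0.1      # FDI shifts the reported voltage
X[y == 2, 0] += 2000.0   # flooding inflates the packet rate

clf = RandomForestClassifier(n_estimators=100, random_state=1)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```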
  2. The widespread application of phasor measurement units has improved grid operational reliability. However, it has also increased the risk of cyber threats such as false data injection attacks, which mislead time-critical measurements and may lead to incorrect operator actions. While a single incorrect operator action might not result in a cascading failure, a series of actions impacting critical lines and transformers, combined with pre-existing faults or scheduled maintenance, might lead to widespread outages. To prevent cascading failures, controlled islanding strategies are traditionally implemented. However, islanding is effective only when the received data are trustworthy. This paper investigates two multi-objective controlled islanding strategies that accommodate data uncertainties under scenarios with no or only partial knowledge of false data injection attacks. When attack information is not available, the optimization problem maximizes island observability using a minimum number of phasor measurement units for more accurate state estimation. When partial attack information is available, vulnerable phasor measurement units are isolated in a smaller island to minimize the impact of attacks. Additional objectives ensure the steady-state and transient-state stability of the islands. Simulations are performed on 200-bus, 500-bus, and 2000-bus systems.
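One building block of any islanding strategy is partitioning the bus graph so that weakly coupled groups of buses separate cleanly. The sketch below uses a spectral (Fiedler-vector) split on a toy six-bus graph; the single objective, the toy weights, and the spectral method itself are assumptions standing in for the paper's multi-objective, PMU-aware formulation.

```python
# A minimal single-objective islanding sketch: split the bus graph along the
# Fiedler vector so weakly coupled groups land in separate islands.
import networkx as nx

# Toy 6-bus system; edge weights stand in for electrical coupling strength.
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 5.0), (2, 3, 4.0), (1, 3, 3.0),  # tightly coupled group
    (4, 5, 5.0), (5, 6, 4.0), (4, 6, 3.0),  # tightly coupled group
    (3, 4, 0.5),                            # weak tie line
])

fiedler = nx.fiedler_vector(G, weight="weight")
nodes = list(G.nodes())
island_a = {n for n, v in zip(nodes, fiedler) if v < 0}
island_b = set(nodes) - island_a
print("island A:", sorted(island_a))
print("island B:", sorted(island_b))
```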
  3. Modern vehicles can be thought of as complex distributed embedded systems that run a variety of automotive applications with real-time constraints. Recent advances in the automotive industry towards greater autonomy are driving vehicles to be increasingly connected with various external systems (e.g., roadside beacons, other vehicles), which makes emerging vehicles highly vulnerable to cyber-attacks. Additionally, the increased complexity of automotive applications and in-vehicle networks results in poor attack visibility, which makes such attacks particularly challenging to detect in automotive systems. In this work, we present a novel anomaly detection framework called LATTE to detect cyber-attacks in Controller Area Network (CAN) based networks within automotive platforms. Our proposed LATTE framework uses a stacked Long Short-Term Memory (LSTM) predictor network with novel attention mechanisms to learn normal operating behavior at design time. Subsequently, a novel detection scheme (also trained at design time) is used to detect various cyber-attacks (as anomalies) at runtime. We evaluate our proposed LATTE framework under different automotive attack scenarios and present a detailed comparison with the best-known prior works in this area to demonstrate the potential of our approach.
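The core prediction-based detection idea can be sketched as follows, assuming PyTorch: train an LSTM to predict the next signal value from normal traffic only, then flag runtime windows with unusually large prediction error. The two-layer network, the synthetic sine-wave signal, and the three-sigma threshold are assumptions; LATTE's attention mechanism and trained detection scheme are not reproduced here.

```python
# Hedged sketch of LSTM-predictor anomaly detection: large prediction error
# on a signal the model learned from normal traffic is treated as an anomaly.
import torch
import torch.nn as nn

class SignalPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next value

model = SignalPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on normal traffic only (here: a noisy sine wave as a stand-in signal).
t = torch.linspace(0, 50, 2000)
sig = torch.sin(t) + 0.05 * torch.randn_like(t)
X = sig.unfold(0, 20, 1)[:-1].unsqueeze(-1)  # sliding windows of 20 samples
y = sig[20:].unsqueeze(-1)                   # next value after each window

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# At runtime, flag windows whose prediction error exceeds a simple threshold.
with torch.no_grad():
    err = (model(X) - y).abs()
threshold = err.mean() + 3 * err.std()
print("anomalous windows:", int((err > threshold).sum()))
```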
  4. A Connected Autonomous Vehicular (CAV) platoon refers to a group of vehicles that coordinate their movements and operate as a single unit. The vehicle at the head acts as the leader of the platoon and determines the course of the vehicles following it. The follower vehicles utilize Vehicle-to-Vehicle (V2V) communication and automated driving support systems to automatically maintain a small fixed distance from each other. Reliance on V2V communication exposes platoons to several possible malicious attacks which can compromise the safety, stability, and efficiency of the vehicles. We present RePLACe, a novel distributed resiliency architecture for CAV platoon vehicles, to defend against adversaries that corrupt the V2V messages reporting the preceding vehicle's position. RePLACe is unique in that it can provide real-time defense against a spectrum of communication attacks. RePLACe systematically augments a platoon controller architecture with real-time detection and mitigation functionality using machine learning. Unlike computationally intensive cryptographic solutions, RePLACe accounts for the limited computation capabilities of automotive platforms as well as the real-time requirements of the application. Furthermore, unlike control-theoretic approaches, the same framework works against a broad spectrum of attacks. We also develop a systematic approach for evaluating the resiliency of CAV applications against V2V attacks. We perform extensive experimental evaluation to demonstrate the efficacy of RePLACe.
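A minimal illustration of the detection half of this idea, under strong assumptions: compare the position a preceding vehicle reports over V2V with a simple kinematic prediction from its last trusted state, and flag large residuals. RePLACe's actual detector is learned; this constant-speed rule and its tolerance are placeholders.

```python
# Hypothetical residual check on V2V position reports; not RePLACe's detector.
def predicted_position(last_pos_m, last_speed_mps, dt_s):
    """Constant-speed prediction of the preceding vehicle's position."""
    return last_pos_m + last_speed_mps * dt_s

def is_suspicious(reported_pos_m, last_pos_m, last_speed_mps, dt_s, tol_m=2.0):
    """Flag a V2V position report that deviates too far from the prediction."""
    residual = abs(reported_pos_m
                   - predicted_position(last_pos_m, last_speed_mps, dt_s))
    return residual > tol_m

# Example: leader was at 100 m doing 20 m/s; 0.1 s later it reports 130 m.
print(is_suspicious(130.0, 100.0, 20.0, 0.1))  # True: ~28 m off the prediction
```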
  5. Models produced by machine learning, particularly deep neural networks, are state-of-the-art for many machine learning tasks and demonstrate very high prediction accuracy. Unfortunately, these models are also very brittle and vulnerable to specially crafted adversarial examples. Recent results have shown that the accuracy of these models can be reduced from close to one hundred percent to below 5% using adversarial examples. This brittleness makes it challenging to deploy deep neural networks in security-critical areas where adversarial activity is expected and cannot be ignored. A number of methods have recently been proposed to craft more effective and generalizable attacks on neural networks, along with competing efforts to improve the robustness of these learning models. But the current approaches to making machine learning techniques more resilient fall short of their goal. Further, the succession of new adversarial attacks against proposed methods for increasing neural network robustness raises doubts that any approach can robustify machine learning models against all possible adversarial attacks. In this paper, we consider the problem of detecting adversarial examples. This would help identify when the learning models cannot be trusted, without attempting to repair the models or make them robust to adversarial attacks. This goal of finding the limitations of the learning model presents a more tractable approach to protecting against adversarial attacks. Our approach is based on identifying a low-dimensional manifold in which the training samples lie, and then using the distance of a new observation from this manifold to identify whether the data point is adversarial or not. Our empirical study demonstrates that adversarial examples not only lie farther away from the data manifold, but that their distance from the manifold increases with the attack confidence. Thus, adversarial examples that are likely to result in incorrect predictions by the machine learning model are also easier to detect with our approach. This is a first step towards formulating a novel approach based on computational geometry that can identify the limiting boundaries of a machine learning model and detect adversarial attacks.
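A hedged sketch of the manifold-distance test, assuming the low-dimensional manifold is approximated with PCA (the paper's geometric construction may differ): reconstruct each observation from the top principal components and treat the reconstruction error as its distance from the training-data manifold, flagging points that land far away.

```python
# PCA stand-in for the manifold-distance idea: reconstruction error measures
# how far a point lies from the subspace the training data occupies.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Training data lying near a 5-dimensional subspace of a 50-dim space.
basis = rng.normal(size=(5, 50))
X_train = rng.normal(size=(2000, 5)) @ basis \
          + 0.01 * rng.normal(size=(2000, 50))

pca = PCA(n_components=5).fit(X_train)

def manifold_distance(x):
    """Reconstruction error of x under the PCA manifold approximation."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(np.linalg.norm(x - recon.ravel()))

clean = rng.normal(size=5) @ basis               # on-manifold test point
adversarial = clean + 0.5 * rng.normal(size=50)  # perturbed off-manifold
print("clean:", manifold_distance(clean))        # small distance
print("perturbed:", manifold_distance(adversarial))  # much larger distance
```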