Volunteer computing, in which owners of Internet-connected devices (laptops, PCs, smart devices, etc.) volunteer them as storage and computing resources, has become an essential mechanism for resource management in numerous applications. The growth in the volume and variety of data traffic on the Internet raises concerns about the robustness of cyber-physical systems, especially for critical infrastructures. The implementation of an efficient Intrusion Detection System (IDS) for gathering such sensory data has therefore gained vital importance. In this article, we present a comparative study of Artificial Intelligence (AI)-driven intrusion detection systems for wirelessly connected sensors that track crucial applications. Specifically, we present an in-depth analysis of the use of machine learning, deep learning, and reinforcement learning solutions to recognise intrusive behavior in the collected traffic. We evaluate the proposed mechanisms using KDD'99 as a real attack dataset in our simulations. Results present the performance metrics of three different IDSs for detecting malicious behavior: the Adaptively Supervised and Clustered Hybrid IDS (ASCH-IDS), the Restricted Boltzmann Machine-based Clustered IDS (RBC-IDS), and the Q-learning-based IDS (Q-IDS). We also present the performance of different reinforcement learning techniques, namely State-Action-Reward-State-Action (SARSA) learning and Temporal Difference (TD) learning. Through simulations, we show that Q-IDS performs with a detection rate of …, while SARSA-IDS and TD-IDS perform at the order of ….
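For orientation, the Q-learning, SARSA, and TD variants compared above differ mainly in their value-update rules. The tabular sketch below illustrates the off-policy Q-learning update next to the on-policy SARSA update; the state/action encoding, reward scheme, and hyperparameters are illustrative assumptions, not the encodings used in the study.

```python
import random
from collections import defaultdict

# Minimal tabular sketch of the update rules behind Q-IDS and SARSA-IDS.
# Assumptions (not the paper's encoding): a "state" is a summarized
# traffic observation, the actions label it as normal or intrusive, and
# the reward is +1 for a correct label and -1 otherwise.

ACTIONS = ["normal", "intrusive"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def epsilon_greedy(state):
    """Explore with probability EPSILON, otherwise exploit the best action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, r, s_next):
    # Off-policy: bootstrap from the best action in the next state.
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually taken next.
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

# One illustrative transition: hypothetical state names, reward of +1.
s, s_next = "high_conn_rate", "normal_rate"
a = epsilon_greedy(s)
q_learning_update(s, a, r=1.0, s_next=s_next)
sarsa_update(s, a, r=1.0, s_next=s_next, a_next=epsilon_greedy(s_next))
```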
Adversarial Data-Augmented Resilient Intrusion Detection System for Unmanned Aerial Vehicles
With the growing adoption of unmanned aerial vehicles (UAVs) across various domains, the security of their operations is paramount. UAVs, heavily dependent on GPS navigation, are at risk of jamming and spoofing cyberattacks, which can severely jeopardize their performance, safety, and mission integrity. Intrusion detection systems (IDSs) are typically employed as defense mechanisms, often leveraging traditional machine learning techniques. However, these IDSs are susceptible to adversarial attacks that exploit machine learning models by introducing input perturbations. In this work, we propose a novel IDS for UAVs that enhances resilience against such attacks using generative adversarial networks (GANs). We also comprehensively study several evasion-based adversarial attacks and use them to compare the performance of the proposed IDS with existing ones. Resilience is achieved by generating synthetic data based on the identified weak points in the IDS and incorporating these adversarial samples into the training process to regularize learning. The evaluation results demonstrate that the proposed IDS is significantly more robust against adversarial machine learning attacks than state-of-the-art IDSs, while maintaining a low false positive rate.
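To illustrate the augmentation idea described in this abstract (synthesizing adversarial samples around the detector's weak points and folding them back into training), the sketch below uses a gradient-based FGSM-style generator as a simple stand-in; the paper itself generates samples with a GAN, and the model architecture, feature width, and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of adversarial data augmentation for an IDS classifier.
# Assumptions: 32 traffic features, binary labels (0 = benign,
# 1 = attack), and FGSM as a stand-in generator; the paper instead
# trains a GAN to synthesize samples around the IDS's weak points.

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.05):
    """Perturb inputs in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def train_step(x, y):
    # Mix each clean batch with its adversarial counterpart so the
    # decision boundary is regularized around the weak points.
    x_adv = fgsm(x, y)
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in data:
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))
train_step(x, y)
```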
- PAR ID: 10478492
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Big Data 2023
- Page Range / eLocation ID: 10
- Subject(s) / Keyword(s): Unmanned aerial vehicles; Embedded systems; Adversarial attacks; Machine learning; Generative adversarial networks; Intrusion Detection Systems
- Format(s): Medium: X
- Location: Sorrento, Italy
- Sponsoring Org: National Science Foundation
More Like this
Models produced by machine learning, particularly deep neural networks, are state-of-the-art for many machine learning tasks and demonstrate very high prediction accuracy. Unfortunately, these models are also very brittle and vulnerable to specially crafted adversarial examples. Recent results have shown that the accuracy of these models can be reduced from close to 100% to below 5% using adversarial examples. This brittleness makes it challenging to deploy deep neural networks in security-critical areas where adversarial activity is expected and cannot be ignored. A number of methods have recently been proposed to craft more effective and generalizable attacks on neural networks, along with competing efforts to improve the robustness of these learning models. But the current approaches to making machine learning techniques more resilient fall short of their goal. Further, the succession of new adversarial attacks against proposed methods to increase neural network robustness raises doubts about a foolproof approach to robustify machine learning models against all possible adversarial attacks. In this paper, we consider the problem of detecting adversarial examples. This would help identify when the learning models cannot be trusted, without attempting to repair the models or make them robust to adversarial attacks. This goal of finding the limitations of the learning model presents a more tractable approach to protecting against adversarial attacks. Our approach is based on identifying a low-dimensional manifold in which the training samples lie, and then using the distance of a new observation from this manifold to identify whether the data point is adversarial or not. Our empirical study demonstrates that adversarial examples not only lie farther away from the data manifold, but that this distance increases with the attack confidence. Thus, adversarial examples that are more likely to result in an incorrect prediction by the machine learning model are also easier for our approach to detect. This is a first step towards formulating a novel approach based on computational geometry that can identify the limiting boundaries of a machine learning model and detect adversarial attacks.
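A simple linear stand-in for the manifold-distance test described above: fit a low-dimensional subspace to clean training data (here via PCA) and score a new observation by its reconstruction error, i.e., its distance from the manifold. The dimensionality and threshold rule are assumptions; the paper's computational-geometry formulation is more general.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit a low-dimensional linear manifold to clean training data and flag
# points whose reconstruction error (distance to the manifold) is
# unusually large. Feature width, component count, and the percentile
# threshold are illustrative assumptions.

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 64))  # stand-in for clean training samples

pca = PCA(n_components=10).fit(X_train)

def manifold_distance(x):
    """L2 distance between each row of x and its projection onto the manifold."""
    x = np.atleast_2d(x)
    x_hat = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - x_hat, axis=1)

# Calibrate a threshold from clean-data distances; observations above it
# are treated as likely adversarial.
threshold = np.percentile(manifold_distance(X_train), 99)
is_adversarial = manifold_distance(rng.normal(size=(5, 64))) > threshold
```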
Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks. In this type of attack, inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries, with the aim of being misclassified by existing state-of-the-art models (e.g., deep neural networks). Once attackers can fool a classifier into thinking that a malicious input is actually benign, they can render a machine learning-based malware or intrusion detection system ineffective. Objective: To help security practitioners and researchers build a model that is more robust against non-adaptive, white-box, non-targeted adversarial evasion attacks through the idea of an ensemble model. Method: We propose an approach called Omni, the main idea of which is to explore methods that create an ensemble of "unexpected models", i.e., models whose control hyperparameters have a large distance to the hyperparameters of an adversary's target model, with which we then make an optimized weighted ensemble prediction. Results: In studies with five types of adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool, and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017, and the Contagio PDF dataset), we show that Omni is a promising defense strategy against adversarial attacks when compared with other baseline treatments. Conclusions: When employing ensemble defense against adversarial evasion attacks, we suggest creating an ensemble of unexpected models that are distant from the attacker's expected model (i.e., the target model) through methods such as hyperparameter optimization.
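A minimal sketch of the "unexpected models" idea: enumerate candidate hyperparameter configurations, keep those farthest (in normalized hyperparameter space) from the configuration an adversary is presumed to target, and combine the resulting models' predictions with a weighted vote. The grid, distance metric, and accuracy-based weighting below are assumptions, not Omni's exact procedure.

```python
import numpy as np
from itertools import product

# Presumed target configuration (assumption: what the adversary trained
# their attack against) and a candidate grid for ensemble members.
target = {"n_estimators": 100, "max_depth": 10}
grid = {"n_estimators": [50, 100, 200, 400], "max_depth": [5, 10, 20, 40]}

def normalized_distance(cfg):
    """Euclidean distance to the target, with each axis scaled by its grid range."""
    d = 0.0
    for key, values in grid.items():
        span = max(values) - min(values)
        d += ((cfg[key] - target[key]) / span) ** 2
    return d ** 0.5

# Keep the five configurations farthest from the presumed target; each
# would then be used to train one ensemble member.
candidates = [dict(zip(grid, combo)) for combo in product(*grid.values())]
ensemble = sorted(candidates, key=normalized_distance, reverse=True)[:5]

def weighted_vote(probs, val_accuracies):
    """Average per-model class probabilities, weighted by validation accuracy.

    probs: array of shape (n_models, n_samples, n_classes).
    """
    w = np.asarray(val_accuracies, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(probs), axes=1)  # -> (n_samples, n_classes)
```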
The landscape of automotive vehicle attack surfaces continues to grow, and vulnerabilities in the controller area network (CAN) expose vehicles to cyber-physical risks and attacks that can endanger the safety of passengers and pedestrians. Intrusion detection systems (IDSs) for CAN have emerged as a key mitigation approach for these risks, but uniform methods to compare proposed IDS techniques are lacking. In this paper, we present a framework for comparative performance analysis of state-of-the-art IDSs for the CAN bus to provide a consistent methodology for evaluating and assessing proposed approaches. The framework relies on previously published datasets comprising message logs recorded from a real vehicle CAN bus, coupled with traditional classifier performance metrics, to reduce the discrepancies that arise when comparing IDS approaches from disparate sources.
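In the spirit of such a framework, consistent evaluation reduces to replaying the same labeled CAN message logs through every detector and reporting identical classifier metrics. A minimal hypothetical harness follows; the detector interface and log format are assumptions, not the framework's actual API.

```python
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

# Hypothetical harness: `detectors` maps a name to a callable that labels
# each CAN message (1 = attack, 0 = benign); `messages` and `labels` come
# from a previously published labeled log. Every IDS is scored with the
# same traditional classifier metrics.

def evaluate(detectors, messages, labels):
    results = {}
    for name, detect in detectors.items():
        preds = [detect(m) for m in messages]
        tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
        results[name] = {
            "precision": precision_score(labels, preds),
            "recall": recall_score(labels, preds),
            "f1": f1_score(labels, preds),
            "false_positive_rate": fp / (fp + tn),
        }
    return results

# Usage with a toy rule-based detector (hypothetical message fields):
# evaluate({"rule_ids": lambda m: int(m["dlc"] != 8)}, log_messages, truth)
```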
Despite the increased accuracy of intrusion detection systems (IDSs) in identifying cyberattacks in computer networks and internet-connected devices, distributed or coordinated attacks can still go undetected or be detected too late. A single vantage point limits the ability of these IDSs to detect such attacks, so attack characteristics need to be exchanged among different IDS nodes. Researchers have proposed cooperative intrusion detection systems to share these attack characteristics effectively. This approach is useful; however, the security of the shared data cannot be guaranteed. More specifically, maintaining the integrity and consistency of shared data becomes a significant concern. In this paper, we propose a blockchain-based solution that ensures the integrity and consistency of attack characteristics shared in a cooperative intrusion detection system. The proposed architecture achieves this by detecting and preventing fake feature injection and compromised IDS nodes. It also facilitates scalable exchange of attack features among IDS nodes, ensures the participation of heterogeneous IDS nodes, and is robust to public IDS nodes joining and leaving the network. We evaluate the proposed approach in terms of security and latency. The results show that it detects and prevents compromised IDS nodes and malicious feature injection, manipulation, or deletion, and that it is scalable with low latency.
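The integrity and consistency guarantees rest on standard blockchain mechanics: each block of shared attack features commits to the hash of its predecessor, so tampering with any recorded feature invalidates verification of every later block. The following is a minimal hash-chain sketch that omits the architecture's consensus and node-validation logic; the feature fields are illustrative assumptions.

```python
import hashlib
import json
import time

# Minimal hash chain showing how shared attack features gain tamper
# evidence: each block stores the previous block's hash, so altering any
# earlier feature record breaks verification of all later blocks.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, features):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "features": features,  # shared attack characteristics (assumed schema)
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)
    return block

def verify(chain):
    """True iff every block's prev_hash matches its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"src_port": 443, "signature": "syn_flood"})
append_block(chain, {"src_port": 53, "signature": "dns_amplification"})
assert verify(chain)
```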

