Internet of Things (IoT) cyber threats, exemplified by jackware and crypto mining, underscore the vulnerability of IoT devices. Due to the multi-step nature of many attacks, early detection is vital for a swift response and preventing malware propagation. However, accurately detecting early-stage attacks is challenging, as attackers employ stealthy techniques, zero-day exploits, or adversarial machine learning to evade detection. To enhance security, we propose ARIoTEDef, an Adversarially Robust IoT Early Defense system, which identifies early-stage infections and evolves autonomously. It models multi-stage attacks based on a cyber kill chain and maintains stage-specific detectors. When anomalies in the later action stage emerge, the system retroactively analyzes event logs using an attention-based sequence-to-sequence model to identify early infections. Then, the infection detector is updated with information about the identified infections. We have evaluated ARIoTEDef against multi-stage attacks, such as the Mirai botnet. Results show that the infection detector’s average F1 score increases from 0.31 to 0.87 after one evolution round. We have also conducted an extensive analysis of ARIoTEDef against adversarial evasion attacks. Our results show that ARIoTEDef is robust and benefits from multiple rounds of evolution.
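To make the evolution loop concrete, the following is a minimal, hypothetical Python sketch of the idea described above: a later-stage (action) alarm triggers a retrospective pass over the event log, candidate infection-stage events are tagged, and the infection detector is updated with them. The class names, the stub standing in for the attention-based sequence-to-sequence analyzer, and the threshold-based detectors are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of ARIoTEDef-style detector evolution (not the authors' code).
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    features: List[float]

class SimpleDetector:
    """Toy stage detector: flags an event when its mean feature value exceeds a threshold."""
    def __init__(self, threshold: float):
        self.threshold = threshold
    def detect(self, e: Event) -> bool:
        return sum(e.features) / len(e.features) > self.threshold
    def update(self, positives: List[Event]) -> None:
        # Stand-in for retraining: lower the threshold toward the newly labeled events.
        if positives:
            m = min(sum(p.features) / len(p.features) for p in positives)
            self.threshold = min(self.threshold, m)

def seq2seq_tag_infections(window: List[Event]) -> List[Event]:
    """Placeholder for the attention-based seq2seq analyzer: here, the last few
    events before the action-stage alarm are treated as candidate infections."""
    return window[-3:]

def evolve(log: List[Event], infection_det: SimpleDetector, action_det: SimpleDetector):
    for i, e in enumerate(log):
        if action_det.detect(e):                      # later-stage anomaly observed
            candidates = seq2seq_tag_infections(log[:i])
            infection_det.update(candidates)          # one round of detector evolution

if __name__ == "__main__":
    log = [Event([0.1, 0.2]), Event([0.4, 0.5]), Event([0.9, 0.8])]
    inf_det, act_det = SimpleDetector(0.6), SimpleDetector(0.8)
    evolve(log, inf_det, act_det)
    print("infection-detector threshold after evolution:", inf_det.threshold)
```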
- Award ID(s): 2229876
- PAR ID: 10524829
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Internet of Things
- Volume: 5
- Issue: 3
- ISSN: 2691-1914
- Page Range / eLocation ID: 1 to 34
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
As cyber attacks have grown at an unprecedented rate in recent years, organizations are seeking efficient and scalable solutions for holistic protection. Because adversaries are becoming more skilled and organized, traditional rule-based detection systems have proved largely ineffective against continuously evolving cyber attacks. Consequently, security researchers are applying machine learning techniques and big data analytics to defend against them. In recent years, several anomaly detection systems have been claimed to be quite successful against sophisticated cyber attacks, including previously unseen zero-day attacks. However, these systems often do not consider the adversary's adaptive behavior for bypassing the detection procedure. As a result, deploying them in real-world scenarios provides little benefit against intelligent adversaries who carefully manipulate the attack vectors. In this work, we analyze the adversarial impact on anomaly detection models built upon centroid-based clustering from a game-theoretic perspective and propose an adversarial anomaly detection technique for these models. Experimental results show that our game-theoretic anomaly detection models withstand attacks more effectively than traditional models.
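For reference, a minimal sketch of the kind of centroid-based anomaly scoring such models rely on (not the paper's technique or data): a sample is flagged when its distance to the nearest cluster centroid exceeds a threshold calibrated on benign traffic. The cluster count, feature dimensionality, and percentile threshold are illustrative assumptions.

```python
# Minimal centroid-distance anomaly scoring on toy data (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 4))        # toy benign traffic features
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(benign)

def anomaly_score(x: np.ndarray) -> float:
    """Distance from a sample to its nearest cluster centroid."""
    return float(np.min(np.linalg.norm(model.cluster_centers_ - x, axis=1)))

# Calibrate the alarm threshold on benign data.
threshold = np.percentile([anomaly_score(p) for p in benign], 99)

suspicious = rng.normal(6.0, 1.0, size=4)            # a point far from benign clusters
print(anomaly_score(suspicious) > threshold)         # -> True (flagged as anomalous)
```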
This research provides a broad examination of the threats and attacks against the IoT architecture. It analyzes the different layers of the IoT architecture and the cyber attacks that threaten each of them. Because intrusion detection systems provide a means of protection against such attacks, the work proposes a host-based, signature-type intrusion detection system based on a semi-Markov process for IoT devices in a smart home environment. The semi-Markov chain could prove an effective means of identifying behavioral anomalies associated with nodes in an IoT environment.
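A minimal sketch of how a (semi-)Markov behavioral profile could flag anomalies, assuming toy device states and thresholds that are not taken from the paper: transitions with low learned probability, or sojourn times far from those observed during profiling, raise an alert.

```python
# Toy (semi-)Markov behavioral profile for a smart-home device (illustrative only).
from collections import defaultdict, Counter
import statistics

# (state, sojourn_seconds) pairs observed while profiling normal behavior.
profile = [("idle", 30), ("report", 2), ("idle", 28), ("report", 3),
           ("idle", 31), ("report", 2), ("idle", 29), ("report", 2)]

states = [s for s, _ in profile]
transitions = Counter(zip(states, states[1:]))        # observed state transitions
out_degree = Counter(states[:-1])                     # transitions leaving each state
sojourn = defaultdict(list)                           # dwell times per state
for state, t in profile:
    sojourn[state].append(t)

def is_anomalous(prev: str, curr: str, dwell: float,
                 p_min: float = 0.05, z_max: float = 3.0) -> bool:
    # Transition probability learned from the profile (0 if never seen).
    p = transitions[(prev, curr)] / out_degree[prev] if out_degree[prev] else 0.0
    times = sojourn.get(prev, [])
    mean = statistics.mean(times) if times else dwell
    std = statistics.pstdev(times) if len(times) > 1 else 1.0
    z = abs(dwell - mean) / (std or 1.0)              # dwell-time deviation
    return p < p_min or z > z_max

print(is_anomalous("idle", "report", 30))   # normal transition, normal dwell -> False
print(is_anomalous("idle", "scan", 1))      # never-seen transition -> True
```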
Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks, in which inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries so that they are misclassified by existing state-of-the-art models (e.g., deep neural networks). Once attackers can fool a classifier into believing that a malicious input is benign, they render a machine learning-based malware or intrusion detection system ineffective. Objective: To help security practitioners and researchers build models that are more robust against non-adaptive, white-box, non-targeted adversarial evasion attacks through the idea of an ensemble model. Method: We propose an approach called Omni, whose main idea is to create an ensemble of “unexpected models”, i.e., models whose control hyperparameters are at a large distance from the hyperparameters of an adversary’s target model, and to make an optimized weighted ensemble prediction with them. Results: In studies with five types of adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool, and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017, and the Contagio PDF dataset), we show that Omni is a promising defense strategy against adversarial attacks compared with other baseline treatments. Conclusions: When employing an ensemble defense against adversarial evasion attacks, we suggest creating the ensemble from unexpected models that are distant from the attacker’s expected (target) model, through methods such as hyperparameter optimization.
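A minimal sketch of the “unexpected models” idea under assumed names and an assumed candidate grid (not the Omni implementation): candidate hyperparameter settings farthest from the adversary's presumed target configuration are trained and combined in a validation-accuracy-weighted ensemble.

```python
# Illustrative "unexpected models" ensemble on synthetic data (not the Omni code).
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=0)

target_hp = {"n_estimators": 100, "max_depth": 8}          # adversary's assumed target
grid = list(product([50, 100, 200, 400], [4, 8, 16, 32]))  # candidate (trees, depth)

def hp_distance(hp):
    # Normalized distance to the target model's hyperparameters.
    a = np.array([hp[0] / 400, hp[1] / 32])
    b = np.array([target_hp["n_estimators"] / 400, target_hp["max_depth"] / 32])
    return float(np.linalg.norm(a - b))

# Keep the candidates most distant from the target hyperparameters.
unexpected = sorted(grid, key=hp_distance, reverse=True)[:3]

members, weights = [], []
for n_est, depth in unexpected:
    clf = RandomForestClassifier(n_estimators=n_est, max_depth=depth,
                                 random_state=0).fit(Xtr, ytr)
    members.append(clf)
    weights.append(clf.score(Xval, yval))                  # validation accuracy as weight

def ensemble_predict(X):
    votes = np.average([m.predict_proba(X)[:, 1] for m in members],
                       axis=0, weights=weights)
    return (votes >= 0.5).astype(int)

print("ensemble accuracy:", (ensemble_predict(Xval) == yval).mean())
```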
Although a substantial number of studies are dedicated to morph detection, most of them fail to generalize to morphed faces outside their training paradigm. Moreover, recent morph detection methods are highly vulnerable to adversarial attacks. In this paper, we aim to learn a morph detection model that generalizes to a wide range of morphing attacks and is robust against different adversarial attacks. To this end, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously. To improve the robust accuracy of the ensemble model, we employ multi-perturbation adversarial training and generate adversarial examples with high transferability for several single models. Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets. In addition, we validate that it gains better robustness against several adversarial attacks while outperforming state-of-the-art studies.
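A minimal PyTorch sketch of multi-perturbation adversarial training, with a toy model, random data, and FGSM budgets that are illustrative assumptions rather than the authors' setup: each batch is augmented with adversarial copies crafted at several perturbation sizes before the parameter update.

```python
# Toy multi-perturbation adversarial training step (illustrative, not the paper's code).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilons = [0.01, 0.03, 0.1]                       # multiple perturbation budgets

def fgsm(x, y, eps):
    """Craft an FGSM adversarial copy of batch x at budget eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(32, 1, 28, 28)                      # toy batch of "face" images
y = torch.randint(0, 2, (32,))                     # 0 = bona fide, 1 = morph (toy labels)

batches = [x] + [fgsm(x, y, e) for e in epsilons]  # clean + adversarial copies
opt.zero_grad()
loss = sum(loss_fn(model(b), y) for b in batches) / len(batches)
loss.backward()
opt.step()
print("training loss on mixed clean/adversarial batch:", float(loss))
```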
Deep Neural Networks (DNNs) have shown phenomenal success in a wide range of real-world applications. However, a concerning weakness of DNNs is their vulnerability to adversarial attacks. Although methods exist to detect adversarial attacks, they are often constrained to specific attack types and provide limited information to downstream systems. We specifically note that existing adversarial detectors are often binary classifiers that differentiate clean from adversarial examples, whereas detecting adversarial examples is more complicated than that. Our key insight is that the confidence probability with which an input sample is detected as adversarial is more useful for the system to take appropriate action against potential attacks. In this work, we propose a method for fast confidence detection of adversarial attacks based on the integrity of the sensor pattern noise embedded in input examples. Experimental results show that our method can provide a confidence distribution model for most popular adversarial attacks. Furthermore, it can provide early attack warnings, including the attack type, based on different properties of the confidence distribution models. Since fast confidence detection is computationally heavy, we propose an FPGA-based hardware architecture built on a series of optimization techniques, such as incremental multi-level quantization. We realize the proposed method on an FPGA platform and achieve a high efficiency of 29.740 IPS/W with a power consumption of only 0.7626 W.
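A minimal sketch, under strong simplifications, of sensor-pattern-noise integrity checking as a confidence signal (not the paper's pipeline or its FPGA design): the noise residual of an input is correlated with a reference camera fingerprint, and a drop in correlation suggests the input may have been perturbed. The Gaussian denoiser, toy fingerprint, and data are all assumptions.

```python
# Toy sensor-pattern-noise integrity check (illustrative simplification only).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
reference_prnu = rng.normal(0, 0.02, size=(64, 64))          # toy camera fingerprint

def noise_residual(img: np.ndarray) -> np.ndarray:
    # High-frequency residual: image minus a denoised (smoothed) copy.
    return img - gaussian_filter(img, sigma=1.0)

def integrity_confidence(img: np.ndarray) -> float:
    # Correlation of the residual with the reference fingerprint as a confidence score.
    r = noise_residual(img).ravel()
    f = reference_prnu.ravel()
    return float(np.corrcoef(r, f)[0, 1])

scene = gaussian_filter(rng.random((64, 64)), sigma=2.0)      # smooth toy scene
clean = scene + reference_prnu                                 # sensor noise embedded
adversarial = clean + rng.normal(0, 0.05, size=clean.shape)    # perturbation masks PRNU

print("clean confidence:      ", round(integrity_confidence(clean), 3))
print("adversarial confidence:", round(integrity_confidence(adversarial), 3))
```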