Title: Omni: automated ensemble with unexpected models against adversarial evasion attack
Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks. In this type of attack, inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries with the aim of being misclassified by existing state-of-the-art models (e.g., deep neural networks). Once attackers can fool a classifier into thinking that a malicious input is actually benign, they can render a machine learning-based malware or intrusion detection system ineffective.
Objective: To help security practitioners and researchers build a model that is more robust against non-adaptive, white-box, non-targeted adversarial evasion attacks through the idea of ensemble models.
Method: We propose an approach called Omni, the main idea of which is to explore methods that create an ensemble of “unexpected models”, i.e., models whose control hyperparameters have a large distance to the hyperparameters of an adversary’s target model, and with which we then make an optimized weighted ensemble prediction.
Results: In studies with five types of adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017 and the Contagio PDF dataset), we show that Omni is a promising defense strategy against adversarial attacks when compared with other baseline treatments.
Conclusions: When employing an ensemble defense against adversarial evasion attacks, we suggest creating the ensemble from unexpected models that are distant from the attacker’s expected model (i.e., the target model) through methods such as hyperparameter optimization.
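A minimal sketch of the unexpected-model idea, assuming scikit-learn random forests as the model family; the hyperparameter space, the distance metric, and the validation-accuracy weighting below are illustrative assumptions rather than the paper's exact procedure:

```python
# Illustrative sketch of Omni's "unexpected models" idea (not the authors' implementation):
# pick models whose hyperparameters lie far from the adversary's assumed target model,
# then combine them into a weighted ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical hyperparameters of the adversary's target (expected) model.
target_hp = {"n_estimators": 100, "max_depth": 10}

def hp_distance(hp, ref):
    # Normalized Euclidean distance in hyperparameter space (a simplifying assumption).
    return np.sqrt(sum(((hp[k] - ref[k]) / max(ref[k], 1)) ** 2 for k in ref))

# Sample candidate configurations and keep the most "unexpected" (distant) ones.
candidates = [{"n_estimators": int(rng.integers(50, 500)),
               "max_depth": int(rng.integers(3, 40))} for _ in range(30)]
unexpected = sorted(candidates, key=lambda hp: hp_distance(hp, target_hp), reverse=True)[:5]

def fit_weighted_ensemble(X_train, y_train, X_val, y_val):
    models, weights = [], []
    for hp in unexpected:
        clf = RandomForestClassifier(**hp, random_state=0).fit(X_train, y_train)
        models.append(clf)
        weights.append(clf.score(X_val, y_val))   # weight members by validation accuracy
    weights = np.asarray(weights) / np.sum(weights)
    return models, weights

def ensemble_predict(models, weights, X):
    # Weighted soft vote over member probabilities.
    probs = sum(w * m.predict_proba(X) for m, w in zip(models, weights))
    return np.argmax(probs, axis=1)
```

The intent is that perturbations optimized against the attacker's expected configuration transfer poorly to ensemble members chosen deliberately far away in hyperparameter space.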
Award ID(s):
1909516
NSF-PAR ID:
10358628
Author(s) / Creator(s):
Date Published:
Journal Name:
Empirical Software Engineering
Volume:
27
Issue:
26
ISSN:
1573-7616
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this article, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender’s feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker’s freedom in conducting manipulations in the problem space; knowing the attacker’s manipulation set is critical to the defender’s success; and the effectiveness of adversarial training depends on the defender’s capability in identifying the most powerful attack. We also discuss a number of future research directions. 
  2. With the growing adoption of unmanned aerial vehicles (UAVs) across various domains, the security of their operations is paramount. UAVs, heavily dependent on GPS navigation, are at risk of jamming and spoofing cyberattacks, which can severely jeopardize their performance, safety, and mission integrity. Intrusion detection systems (IDSs) are typically employed as defense mechanisms, often leveraging traditional machine learning techniques. However, these IDSs are susceptible to adversarial attacks that exploit machine learning models by introducing input perturbations. In this work, we propose a novel IDS for UAVs to enhance resilience against such attacks using generative adversarial networks (GANs). We also comprehensively study several evasion-based adversarial attacks and use them to compare the performance of the proposed IDS with existing ones. The resilience is achieved by generating synthetic data based on the identified weak points in the IDS and incorporating these adversarial samples into the training process to regularize learning. The evaluation results demonstrate that the proposed IDS is significantly more robust against adversarial machine learning attacks than state-of-the-art IDSs while maintaining a low false positive rate.
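A rough sketch of the adversarial-sample-augmented training idea described in the abstract above; the paper generates synthetic samples with a GAN, whereas this sketch substitutes a simple gradient-sign perturbation on a logistic-regression IDS, so the model choice, perturbation scheme, and epsilon are assumptions:

```python
# Sketch: augment IDS training data with adversarially perturbed copies (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_and_retrain(X_train, y_train, eps=0.05):
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # For logistic regression, d(loss)/dx = (p - y) * w, so its sign gives an FGSM-like step.
    p = base.predict_proba(X_train)[:, 1]
    grad = (p - y_train)[:, None] * base.coef_
    X_adv = X_train + eps * np.sign(grad)          # samples pushed toward the decision boundary
    X_aug = np.vstack([X_train, X_adv])
    y_aug = np.concatenate([y_train, y_train])     # adversarial copies keep their true labels
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```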
  3.
    Recent self-propagating malware (SPM) campaigns compromised hundreds of thousands of victim machines on the Internet. It is challenging to detect these attacks in their early stages, as adversaries utilize common network services, use novel techniques, and can evade existing detection mechanisms. We propose PORTFILER (PORT-Level Network Traffic ProFILER), a new machine learning system applied to network traffic for detecting SPM attacks. PORTFILER extracts port-level features from the Zeek connection logs collected at the border of a monitored network, applies anomaly detection techniques to identify suspicious events, and ranks the alerts across ports for investigation by the Security Operations Center (SOC). We propose a novel ensemble methodology for aggregating individual models in PORTFILER that increases resilience against several evasion strategies compared to standard ML baselines. We extensively evaluate PORTFILER on traffic collected from two university networks, and show that it can detect SPM attacks with different patterns, such as WannaCry and Mirai, and performs well under evasion. Ranking across ports achieves precision over 0.94 and false positive rates below 8 × 10⁻⁴ in the top 100 highly ranked alerts. When deployed on the university networks, PORTFILER detected anomalous SPM-like activity on one of the campus networks, confirmed by the university SOC as malicious. PORTFILER also detected a Mirai attack recreated on the two university networks with higher precision and recall than deep-learning-based autoencoder methods.
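A hedged sketch of port-level anomaly ranking in the spirit of the system above; the Zeek column names, one-minute windows, feature set, and use of IsolationForest are assumptions for illustration, not PORTFILER's actual features or models:

```python
# Sketch: aggregate Zeek-style connection logs per (destination port, time window),
# score the windows with an anomaly detector, and rank the top alerts.
import pandas as pd
from sklearn.ensemble import IsolationForest

def rank_port_alerts(conn_log: pd.DataFrame, top_k: int = 100) -> pd.DataFrame:
    # Assumed Zeek conn.log columns: ts, id.resp_p (dest port), id.orig_h (source host), orig_bytes.
    df = conn_log.copy()
    df["window"] = pd.to_datetime(df["ts"], unit="s").dt.floor("1min")
    feats = (df.groupby(["id.resp_p", "window"])
               .agg(n_conns=("id.orig_h", "size"),
                    n_src=("id.orig_h", "nunique"),
                    bytes_out=("orig_bytes", "sum"))
               .reset_index())
    X = feats[["n_conns", "n_src", "bytes_out"]]
    feats["anomaly_score"] = -IsolationForest(random_state=0).fit(X).score_samples(X)
    return feats.sort_values("anomaly_score", ascending=False).head(top_k)
```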
  4. Machine learning (ML) techniques are increasingly common in security applications, such as malware and intrusion detection. However, ML models are often susceptible to evasion attacks, in which an adversary makes changes to the input (such as malware) in order to avoid being detected. A conventional approach to evaluate ML robustness to such attacks, as well as to design robust ML, is by considering simplified feature-space models of attacks, where the attacker changes ML features directly to effect evasion, while minimizing or constraining the magnitude of this change. We investigate the effectiveness of this approach to designing robust ML in the face of attacks that can be realized in actual malware (realizable attacks). We demonstrate that in the context of structure-based PDF malware detection, such techniques appear to have limited effectiveness, but they are effective with content-based detectors. In either case, we show that augmenting the feature space models with conserved features (those that cannot be unilaterally modified without compromising malicious functionality) significantly improves performance. Finally, we show that feature space models enable generalized robustness when faced with a variety of realizable attacks, as compared to classifiers which are tuned to be robust to a specific realizable attack. 
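For reference, a generic feature-space evasion formulation of the kind evaluated above, with conserved features held fixed; the classifier f, perturbation δ, modifiable feature set M, and norm choice are notation introduced here, not taken from the paper:

```latex
% Feature-space attack model: find the smallest feature perturbation that evades detection
% while leaving conserved features (those outside the modifiable set M) untouched.
\begin{aligned}
  \min_{\delta}\;\; & \lVert \delta \rVert_p \\
  \text{s.t.}\;\;   & f(x + \delta) = \text{benign}, \\
                    & \delta_i = 0 \quad \forall\, i \notin M \;\; (\text{conserved features}).
\end{aligned}
```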
  5. Dong, Yinpeng; Pang, Tianyu; Yang, Xiao; Wong, Eric; Kolter, Zico; He, Yuan (Eds.)
    Current machine learning models suffer from evasion attacks (i.e., adversarial examples), raising concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Recently, defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al. 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. The stochasticity of these models also makes evaluation more challenging and renders many attacks proposed for deterministic models inapplicable. First, we show that the BPDA attack (Athalye, Carlini, and Wagner 2018) used in BaRT’s evaluation is ineffective and likely overestimates its robustness. We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters. Furthermore, we create the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack (a 4.3× improvement). Our results indicate that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train the RT defense (called AdvRT). However, the attack is still not sufficiently strong, and thus the AdvRT model is no more robust than its RT counterpart. In the process of formulating our defense and attack, we perform several ablation studies and uncover insights that we hope will broadly benefit the scientific communities studying stochastic neural networks and their robustness properties.
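A brief sketch of the Expectation-over-Transformation (EoT) gradient estimate mentioned above, which averages loss gradients over random draws of the defense's transformation; the transformation sampler, sample count, and step size are assumptions:

```python
# Sketch: EoT-style gradient estimate for attacking a randomized-transformation defense.
import torch
import torch.nn.functional as F

def eot_gradient(model, x, y, sample_transform, n_samples=32):
    # Average the loss gradient over random transformation draws, then use it
    # in an FGSM/PGD-style update against the stochastic defense.
    x = x.clone().detach().requires_grad_(True)
    total_loss = 0.0
    for _ in range(n_samples):
        t = sample_transform()            # e.g., the defense's random crop / noise layer
        total_loss = total_loss + F.cross_entropy(model(t(x)), y)
    (total_loss / n_samples).backward()
    return x.grad.detach()

# One FGSM-style step with the averaged gradient (epsilon of 0.03 is illustrative):
# x_adv = (x + 0.03 * eot_gradient(model, x, y, sample_transform).sign()).clamp(0, 1)
```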