Title: Interdicting Attack Plans with Boundedly Rational Players and Multiple Attackers: An Adversarial Risk Analysis Approach
Cybersecurity planning supports the selection and implementation of security controls in resource-constrained settings to manage risk. Doing so requires modeling efforts that account for adaptive adversaries with different levels of strategic sophistication. However, most models in the literature consider only rational or nonstrategic adversaries. Therefore, we study how to inform defensive decision making to mitigate the risk from boundedly rational players, with a particular focus on making integrated, interdependent planning decisions. To achieve this goal, we introduce a modeling framework, built on a structured approach to risk analysis, for selecting a portfolio of security mitigations that interdicts adversarial attack plans. Our approach adapts adversarial risk analysis and cognitive hierarchy theory to consider a maximum-reliability path interdiction problem with a single defender and multiple attackers who have different goals and levels of strategic sophistication. Instead of enumerating all possible attacks and defenses, we introduce a solution technique based on integer programming and approximation algorithms to iteratively solve the defender's and attackers' problems. A case study illustrates the proposed models and provides insights into defensive planning. Funding: A. Peper and L. A. Albert were supported in part by the National Science Foundation [Grant 2000986].
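For context, a generic maximum-reliability path interdiction model (standard notation for this problem class, not necessarily the exact formulation used in the paper) can be written as

    \min_{x \in X} \; \max_{P \in \mathcal{P}_{s \to t}} \; \prod_{(i,j) \in P} p_{ij}^{\,1 - x_{ij}} \, q_{ij}^{\,x_{ij}},
    \qquad
    X = \Big\{ x \in \{0,1\}^{|A|} : \sum_{(i,j) \in A} c_{ij} x_{ij} \le b \Big\},

where the attacker selects a path P from its source s to a target t in the attack network with arc set A, p_{ij} is the probability of traversing arc (i, j) without being stopped, q_{ij} < p_{ij} is that probability once the defender deploys a mitigation on the arc (x_{ij} = 1), c_{ij} is the mitigation cost, and b is the defender's budget. Taking logarithms turns the inner maximization into a shortest-path problem, which is what makes integer-programming and approximation methods practical for iteratively solving the defender's and attackers' problems.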
Award ID(s):
2000986
PAR ID:
10543386
Author(s) / Creator(s):
; ;
Publisher / Repository:
INFORMS
Date Published:
Journal Name:
Decision Analysis
Volume:
20
Issue:
3
ISSN:
1545-8490
Page Range / eLocation ID:
202 to 219
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Cellular Vehicle-to-Everything (C-V2X) networks are increasingly adopted by automotive original equipment manufacturers (OEMs). C-V2X, as defined in 3GPP Release 14 Mode 4, allows vehicles to self-manage the network in the absence of a cellular base station. Since C-V2X networks convey safety-critical messages, it is crucial to assess their security posture. This work contributes a novel set of Denial-of-Service (DoS) attacks on C-V2X networks operating in Mode 4. The attacks are caused by adversarial resource block selection and vary in sophistication and efficiency. In particular, we consider "oblivious" adversaries that ignore recent transmission activity on resource blocks, "smart" adversaries that do monitor activity on each resource block, and "cooperative" adversaries that work together to ensure they attack different targets. We analyze and simulate these attacks to showcase their effectiveness. Assuming a fixed number of attackers, we show that at low vehicle density, smart and cooperative attacks can significantly impact network performance, while at high vehicle density, oblivious attacks are almost as effective as the more sophisticated attacks.
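    As a rough illustration of the density effect described above, the following toy Monte Carlo sketch (a heavy simplification of ours, not the 3GPP Mode 4 sensing-based semi-persistent scheduling procedure) compares the three attacker types by the fraction of vehicle transmissions whose resource blocks get jammed:

        import random

        def loss_rate(n_blocks, n_vehicles, n_attackers, mode, trials=5000, seed=1):
            # Toy model: each vehicle occupies one resource block per period and each
            # attacker jams one block per period; a jammed occupied block counts as
            # one lost transmission.
            rng = random.Random(seed)
            lost = 0
            for _ in range(trials):
                occupied = set(rng.sample(range(n_blocks), n_vehicles))
                if mode == "oblivious":    # ignores observed activity entirely
                    targets = [rng.randrange(n_blocks) for _ in range(n_attackers)]
                elif mode == "smart":      # senses occupied blocks, but attackers may collide
                    targets = [rng.choice(tuple(occupied)) for _ in range(n_attackers)]
                else:                      # "cooperative": coordinate on distinct occupied blocks
                    targets = rng.sample(tuple(occupied), min(n_attackers, len(occupied)))
                lost += len(occupied & set(targets))
            return lost / (trials * n_vehicles)

        if __name__ == "__main__":
            for n_vehicles in (10, 80):    # low vs. high density on 100 blocks, 5 attackers
                for mode in ("oblivious", "smart", "cooperative"):
                    print(n_vehicles, mode, round(loss_rate(100, n_vehicles, 5, mode), 3))

    With parameters like these, smart and cooperative attackers jam a far larger share of transmissions than oblivious ones at low density, while at high density the three strategies converge, mirroring the qualitative finding above.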
  2. An important way cyber adversaries find vulnerabilities in modern networks is through reconnaissance, in which they attempt to identify configuration specifics of network hosts. To increase uncertainty of adversarial reconnaissance, the network administrator (henceforth, defender) can introduce deception into responses to network scans, such as obscuring certain system characteristics. We introduce a novel game theoretic model of deceptive interactions of this kind between a defender and a cyber attacker, which we call the Cyber Deception Game. We consider both a powerful (rational) attacker, who is aware of the defender's exact deception strategy, and a naive attacker who is not. We show that computing the optimal deception strategy is NP-hard for both types of attackers. For the case with a powerful attacker, we provide a mixed-integer linear program solution as well as a fast and effective greedy algorithm. Similarly, we provide complexity results and propose exact and heuristic approaches when the attacker is naive. Our extensive experimental analysis demonstrates the effectiveness of our approaches.
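    The abstract leaves the algorithms themselves to the paper; as a hedged illustration of the underlying deception idea only (hiding high-value systems among low-value ones), the toy greedy heuristic below assigns each host an observable configuration (OC) so that the most attractive OC, measured by the average utility of the hosts displaying it, stays as unattractive as possible. This is our sketch, not the mixed-integer program or the greedy algorithm from the paper:

        def greedy_masking(hosts, feasible, ocs):
            # hosts:    dict host -> attacker utility if that host is compromised
            # feasible: dict host -> OCs the defender is allowed to show for that host
            # ocs:      collection of all observable configurations
            shown = {o: [] for o in ocs}      # utilities of hosts currently displayed as OC o
            assignment = {}

            def attacker_value():
                # A rational attacker probes the OC with the highest average utility.
                averages = [sum(us) / len(us) for us in shown.values() if us]
                return max(averages) if averages else 0.0

            # Place high-value hosts first so they can be hidden among low-value ones.
            for h in sorted(hosts, key=hosts.get, reverse=True):
                best_oc, best_val = None, float("inf")
                for o in feasible[h]:
                    shown[o].append(hosts[h])   # tentatively assign, evaluate, undo
                    val = attacker_value()
                    shown[o].pop()
                    if val < best_val:
                        best_oc, best_val = o, val
                assignment[h] = best_oc
                shown[best_oc].append(hosts[h])
            return assignment

        # Example with hypothetical hosts and OC labels:
        # greedy_masking({"db": 10, "web": 2, "printer": 1},
        #                {"db": {"linux", "win"}, "web": {"linux"}, "printer": {"win"}},
        #                {"linux", "win"})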
  3. Modern machine learning underpins a large variety of commercial software products, including many cybersecurity solutions. Widely different models, from large transformers trained for auto-regressive natural language modeling to gradient boosting forests designed to recognize malicious software, all share a common element: they are trained on an ever-increasing quantity of data to achieve impressive performance levels in their tasks. Consequently, the training phase of modern machine learning systems holds dual significance: it is pivotal in achieving the expected high-performance levels of these models, and concurrently, it presents a prime attack surface for adversaries striving to manipulate the behavior of the final trained system. This dissertation explores the complexities and hidden dangers of training supervised machine learning models in an adversarial setting, with a particular focus on models designed for cybersecurity tasks. Guided by the belief that an accurate understanding of the adversary's offensive capabilities is the cornerstone of any successful defensive strategy, the bulk of this thesis is devoted to introducing novel training-time attacks. We start by proposing training-time attack strategies that operate in a clean-label regime, requiring minimal adversarial control over the training process and allowing the attacker to subvert the victim model's predictions simply by disseminating poisoned data. Leveraging the characteristics of the data domain and model explanation techniques, we craft training data perturbations that stealthily subvert malicious software classifiers. We then shift the focus of our analysis to the long-standing problem of network flow traffic classification. In this context, we develop new poisoning strategies that work around the constraints of the data domain through several techniques, including generative modeling. Finally, we examine unusual attack vectors in which the adversary is capable of tampering with different elements of the training process, such as the network connections during a federated learning protocol. We show that such an attacker can induce targeted performance degradation through strategic network interference while keeping the victim model's performance on other data instances stable. We conclude by investigating mitigation techniques designed to target these insidious clean-label backdoor attacks in the cybersecurity domain.
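    As a minimal, generic illustration of the clean-label regime mentioned above (a hypothetical pixel trigger of our choosing, not the feature- or explanation-guided perturbations developed in the dissertation), note that the attacker never touches labels; it only stamps a trigger onto training samples that already carry the class it wants the trigger to activate at test time:

        import numpy as np

        def poison_clean_label(images, labels, target_class, rate=0.05, seed=0):
            # images: float array (n, h, w) with values in [0, 1]; labels: int array (n,)
            # Assumes at least one training sample of target_class exists.
            rng = np.random.default_rng(seed)
            poisoned = images.copy()
            candidates = np.flatnonzero(labels == target_class)
            chosen = rng.choice(candidates,
                                size=max(1, int(rate * len(candidates))),
                                replace=False)
            for i in chosen:
                poisoned[i, -3:, -3:] = 1.0   # hypothetical 3x3 white trigger in a corner
            return poisoned                   # labels are never modified ("clean-label")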
  4. Yang, DN; Xie, X; Tseng, VS; Pei, J; Huang, JW; Lin, JCW (Ed.)
    Extensive research in Medical Imaging aims to uncover critical diagnostic features in patients, with AI-driven medical diagnosis relying on sophisticated machine learning and deep learning models to analyze, detect, and identify diseases from medical images. Despite the remarkable accuracy of these models under normal conditions, they grapple with trustworthiness issues, where their output could be manipulated by adversaries who introduce strategic perturbations to the input images. Furthermore, the scarcity of publicly available medical images, constituting a bottleneck for reliable training, has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images—a practice referred to as transfer learning. However, a significant domain discrepancy exists between natural and medical images, which causes AI models resulting from transfer learning to exhibit heightened vulnerability to adversarial attacks. This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion. We systematically analyze the performance of transfer learning in the face of various adversarial attacks under different data modalities, with the overarching goal of fortifying the model’s robustness and security in medical imaging tasks. The results demonstrate high effectiveness in reducing attack efficacy, contributing toward more trustworthy transfer learning in biomedical applications. 
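    The texture and color adaptation step is only named here; as a rough sketch of the color-adaptation idea (per-channel statistic matching of our choosing, not necessarily the transform used by the authors), a medical image can be nudged toward the channel statistics of the natural-image pretraining data before being fed to a transferred model:

        import numpy as np

        def match_channel_stats(med_img, nat_mean, nat_std, eps=1e-6):
            # med_img: float array (h, w, 3) in [0, 1]
            # nat_mean, nat_std: per-channel statistics of the natural-image pretraining set
            out = np.empty_like(med_img)
            for c in range(3):
                ch = med_img[..., c]
                out[..., c] = (ch - ch.mean()) / (ch.std() + eps) * nat_std[c] + nat_mean[c]
            return np.clip(out, 0.0, 1.0)

        # Example with commonly used ImageNet channel statistics as the "natural" target:
        # adapted = match_channel_stats(scan, nat_mean=[0.485, 0.456, 0.406],
        #                               nat_std=[0.229, 0.224, 0.225])

    A texture-preservation component, as described in the abstract, would then limit how much such a mapping is allowed to distort fine diagnostic structure.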