Title: Optimal Joint Defense and Monitoring for Networks Security under Uncertainty: A POMDP‐Based Approach
The increasing interconnectivity in our infrastructure poses a significant security challenge, with external threats having the potential to penetrate and propagate throughout the network. Bayesian attack graphs have proven to be effective in capturing the propagation of attacks in complex interconnected networks. However, most existing security approaches fail to systematically account for resource limitations and the uncertainty arising from the complexity of attacks and possible undetected compromises. To address these challenges, this paper proposes a partially observable Markov decision process (POMDP) model for network security under uncertainty. The POMDP model accounts for uncertainty in the monitoring and defense processes, as well as probabilistic attack propagation. This paper develops two security policies based on the optimal stationary defense policy for the underlying POMDP state process (i.e., a network with known compromises): the estimation-based policy, which performs the defense actions corresponding to the optimal minimum mean square error state estimate, and the distribution-based policy, which utilizes the posterior distribution of network compromises to make defense decisions. Optimal monitoring policies are designed to specifically support each of the defense policies, allowing dynamic allocation of monitoring resources to capture network vulnerabilities and compromises. The performance of the proposed policies is examined in terms of robustness, accuracy, and uncertainty using various numerical experiments.
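To make the two policy flavours concrete, here is a minimal Python sketch for a two-node network. The attack-propagation, monitoring-accuracy, and cost numbers are illustrative assumptions, and a simple one-step expected cost stands in for the paper's optimal stationary defense policy for the fully observed network; nothing below reproduces the authors' implementation.

```python
# Hedged sketch: POMDP-style belief tracking over joint compromise states of a
# 2-node network, plus the estimation-based and distribution-based defense rules.
import itertools
import numpy as np

NODES = 2
STATES = list(itertools.product([0, 1], repeat=NODES))   # 0 = clean, 1 = compromised

def transition_prob(s, s_next, defended):
    # Illustrative propagation: node 0 faces an external attack, node 1 can be
    # infected by node 0; defending a node cleans it with high probability.
    external = [0.3, 0.0]
    spread_to_1 = 0.4 if s[0] == 1 else 0.0
    p = 1.0
    for i in range(NODES):
        if i in defended:
            p_comp = 0.05
        elif s[i] == 1:
            p_comp = 1.0                     # an undefended compromise persists
        else:
            p_comp = external[i] + (spread_to_1 if i == 1 else 0.0)
        p *= p_comp if s_next[i] == 1 else 1.0 - p_comp
    return p

def obs_likelihood(s, readings, monitored):
    # Monitored nodes report their true status with 0.9 accuracy (assumed).
    p = 1.0
    for i in monitored:
        p *= 0.9 if readings[i] == s[i] else 0.1
    return p

def belief_update(belief, defended, monitored, readings):
    new = np.zeros(len(STATES))
    for j, s_next in enumerate(STATES):
        pred = sum(belief[i] * transition_prob(s, s_next, defended)
                   for i, s in enumerate(STATES))
        new[j] = pred * obs_likelihood(s_next, readings, monitored)
    return new / new.sum()

def node_marginals(belief):
    return [sum(b for b, s in zip(belief, STATES) if s[i] == 1) for i in range(NODES)]

def estimation_based(belief):
    # Round the MMSE (posterior-mean) estimate and defend nodes estimated compromised.
    return {i for i, m in enumerate(node_marginals(belief)) if m > 0.5}

def distribution_based(belief):
    # Pick the defense set with the lowest expected one-step cost under the full
    # posterior (illustrative costs: 1 per defended node, 3 per missed compromise).
    best, best_cost = set(), float("inf")
    for k in range(NODES + 1):
        for defended in itertools.combinations(range(NODES), k):
            miss = sum(b * sum(1 for i in range(NODES) if s[i] == 1 and i not in defended)
                       for b, s in zip(belief, STATES))
            cost = len(defended) + 3.0 * miss
            if cost < best_cost:
                best, best_cost = set(defended), cost
    return best

belief = np.array([1.0, 0.0, 0.0, 0.0])     # start from a known-clean network
belief = belief_update(belief, defended=set(), monitored={0}, readings={0: 1})
print(node_marginals(belief), estimation_based(belief), distribution_based(belief))
```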
Award ID(s): 2311969, 2202395
PAR ID: 10571228
Publisher / Repository: DOI prefix 10.1049
Journal Name: IET Information Security
Volume: 2024
Issue: 1
ISSN: 1751-8709
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Early attack detection is essential to ensure the security of complex networks, especially those in critical infrastructures. This is particularly crucial in networks with multi-stage attacks, where multiple nodes are connected to external sources, through which attacks could enter and quickly spread to other network elements. Bayesian attack graphs (BAGs) are powerful models for security risk assessment and mitigation in complex networks, which provide the probabilistic model of attackers’ behavior and attack progression in the network. Most attack detection techniques developed for BAGs rely on the assumption that network compromises will be detected through routine monitoring, which is unrealistic given the ever-growing complexity of threats. This paper derives the optimal minimum mean square error (MMSE) attack detection and monitoring policy for the most general form of BAGs. By exploiting the structure of BAGs and their partial and imperfect monitoring capacity, the proposed detection policy achieves the MMSE optimality possible only for linear-Gaussian state space models using Kalman filtering. An adaptive resource monitoring policy is also introduced for monitoring nodes if the expected predictive error exceeds a user-defined value. Exact and efficient matrix-form computations of the proposed policies are provided, and their high performance is demonstrated in terms of the accuracy of attack detection and the most efficient use of available resources using synthetic Bayesian attack graphs with different topologies.
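A minimal sketch of the two ingredients named in this abstract, under assumed numbers: the MMSE detector is the exact posterior mean of each node's compromise indicator given imperfect monitor readings on a toy three-node Bayesian attack graph, and monitoring is re-allocated to nodes whose predictive (Bernoulli) variance exceeds a user-defined threshold. The graph, exploit probabilities, and sensor accuracy are hypothetical; the paper's matrix-form computations are not reproduced here.

```python
# Hedged illustration: MMSE attack detection on a tiny Bayesian attack graph
# with imperfect monitoring, plus variance-driven monitoring re-allocation.
import itertools

parents = {0: [], 1: [0], 2: [0, 1]}      # toy BAG: 0 -> 1, {0, 1} -> 2
p_external = {0: 0.3, 1: 0.0, 2: 0.0}     # external attack probabilities (assumed)
p_exploit = 0.5                            # per-parent exploitation probability (assumed)
sensor_accuracy = 0.85                     # monitors flip their reading w.p. 0.15

def prior(x):
    """Joint prior of a compromise vector x under noisy-OR style propagation."""
    p = 1.0
    for i in sorted(parents):
        fail = 1.0 - p_external[i]
        for j in parents[i]:
            if x[j]:
                fail *= 1.0 - p_exploit
        p_i = 1.0 - fail
        p *= p_i if x[i] else 1.0 - p_i
    return p

def posterior_marginals(readings):
    """Exact posterior P(x_i = 1 | readings) by enumeration (fine for tiny graphs)."""
    num = [0.0] * len(parents)
    den = 0.0
    for x in itertools.product([0, 1], repeat=len(parents)):
        like = 1.0
        for i, z in readings.items():
            like *= sensor_accuracy if z == x[i] else 1.0 - sensor_accuracy
        w = prior(x) * like
        den += w
        for i in range(len(parents)):
            num[i] += w * x[i]
    return [n / den for n in num]

def mmse_and_monitoring(readings, var_threshold=0.15):
    probs = posterior_marginals(readings)             # MMSE estimate = posterior mean
    to_monitor = [i for i, p in enumerate(probs) if p * (1 - p) > var_threshold]
    return probs, to_monitor

# only node 0 was monitored and reported "compromised"
probs, to_monitor = mmse_and_monitoring({0: 1})
print([round(p, 3) for p in probs], to_monitor)
```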
2. Moving target defense (MTD) is a proactive defense approach that aims to thwart attacks by continuously changing the attack surface of a system (e.g., changing host or network configurations), thereby increasing the adversary’s uncertainty and attack cost. To maximize the impact of MTD, a defender must strategically choose when and what changes to make, taking into account both the characteristics of its system as well as the adversary’s observed activities. Finding an optimal strategy for MTD presents a significant challenge, especially when facing a resourceful and determined adversary who may respond to the defender’s actions. In this paper, we propose a multi-agent partially-observable Markov Decision Process model of MTD and formulate a two-player general-sum game between the adversary and the defender. To solve this game, we propose a multi-agent reinforcement learning framework based on the double oracle algorithm. Finally, we provide experimental results to demonstrate the effectiveness of our framework in finding optimal policies.
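The double oracle loop can be sketched compactly if the interaction is simplified to a zero-sum matrix game with exact best-response oracles; the paper's setting is a general-sum, partially observable game solved with multi-agent reinforcement learning, so this is only an illustration of the restricted-game idea. The payoff matrix and strategy counts below are arbitrary placeholders.

```python
# Hedged sketch of double oracle on a zero-sum matrix game (requires numpy/scipy).
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Row player's maximin mixed strategy and value for payoff matrix A."""
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0                    # maximize game value v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])            # v <= x^T A e_j for all columns j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]

def double_oracle(payoff, defender_init=0, attacker_init=0, max_iters=50):
    """payoff[d, a] = defender utility; attacker receives -payoff (zero-sum)."""
    D, A_set = [defender_init], [attacker_init]
    for _ in range(max_iters):
        sub = payoff[np.ix_(D, A_set)]
        x, value = solve_zero_sum(sub)                   # defender mix on restricted game
        y, _ = solve_zero_sum(-sub.T)                    # attacker mix on restricted game
        # best pure responses against the opponent's restricted equilibrium mix
        d_br = int(np.argmax(payoff[:, A_set] @ y))
        a_br = int(np.argmin(x @ payoff[D, :]))
        if d_br in D and a_br in A_set:                  # no new strategies to add
            return D, x, A_set, y, value
        if d_br not in D: D.append(d_br)
        if a_br not in A_set: A_set.append(a_br)
    return D, x, A_set, y, value

rng = np.random.default_rng(0)
game = rng.uniform(-1, 1, size=(6, 6))                   # 6 defender configs vs 6 attack plans
print(double_oracle(game))
```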
3. Most traditional state estimation algorithms raise false alarms when an attack occurs. This paper proposes an attack-resilient algorithm in which attacks are automatically ignored and the state estimation process continues, acting as a grid-eye for monitoring the whole power system. After modeling the smart grid incorporating distributed energy resources, smart sensors are deployed to gather measurement information, where the sensors are prone to attacks. Based on the noisy and cyber-attacked measurement information, the optimal state estimation algorithm is designed. When an attack happens, the measurement residual error grows large, and it can be ignored using the proposed saturation function. Moreover, the proposed saturation function is computed automatically and dynamically, considering the residual error and designed parameters. Combining the aforementioned approaches, the Kalman filter algorithm is modified and applied to smart grid state estimation. The simulation results show that the proposed algorithm provides high estimation accuracy.
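A rough illustration of the residual-saturation idea on a textbook scalar Kalman filter: the innovation is clipped to a dynamically computed bound before the update, so a gross false-data injection cannot drag the estimate. The scalar grid model, noise levels, and the 3-sigma bound are assumptions, not the paper's design.

```python
# Hedged sketch: Kalman filter with a saturated (clipped) innovation for attack resilience.
import numpy as np

A, C = 0.98, 1.0        # scalar state / measurement model (assumed)
Q, R = 0.01, 0.04       # process / measurement noise variances (assumed)

def saturate(residual, bound):
    """Clip the innovation so a spoofed measurement cannot drag the estimate."""
    return float(np.clip(residual, -bound, bound))

def resilient_kf(measurements):
    x_hat, P = 0.0, 1.0
    estimates = []
    for z in measurements:
        # prediction step
        x_pred = A * x_hat
        P_pred = A * P * A + Q
        # dynamic saturation bound from the predicted innovation spread
        S = C * P_pred * C + R                 # innovation variance
        bound = 3.0 * np.sqrt(S)               # "designed parameter": a 3-sigma gate
        residual = saturate(z - C * x_pred, bound)   # attack-induced spikes are ignored
        # update step with the (possibly clipped) innovation
        K = P_pred * C / S
        x_hat = x_pred + K * residual
        P = (1 - K * C) * P_pred
        estimates.append(x_hat)
    return estimates

# demo: slowly varying state, with a gross false-data injection at step 30
rng = np.random.default_rng(1)
true_x, zs = 1.0, []
for k in range(60):
    true_x = A * true_x + rng.normal(0, np.sqrt(Q))
    z = true_x + rng.normal(0, np.sqrt(R))
    if k == 30:
        z += 10.0                               # injected attack
    zs.append(z)
print(np.round(resilient_kf(zs)[28:33], 3))     # estimates barely react to the spike
```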
4. Moving Target Defense (MTD) has been introduced as a game-changing strategy in cybersecurity to strengthen defenders and conversely weaken adversaries. The successful implementation of an MTD system can be influenced by several factors, including the effectiveness of the employed technique, the deployment strategy, the cost of the MTD implementation, and the impact of the enforced security policies. Considerable effort has been spent on introducing various forms of MTD techniques. However, insufficient research has been conducted on cost and policy analysis and, more importantly, on the selection of these policies in an MTD-based setting. This poster paper proposes a Markov Decision Process (MDP) modeling-based approach to analyze security policies and further select optimal policies for moving target defense implementation and deployment. The adapted value iteration method solves the Bellman optimality equation to select an optimal policy for each state of the system. Simulation results indicate that such modeling can be used to analyze how the costs of possible actions affect the optimal policies.
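A hedged sketch of value iteration for MTD policy selection in the spirit described above: the states, MTD actions, action costs, and transition probabilities are made-up placeholders, and the Bellman optimality backup here minimizes expected discounted cost.

```python
# Illustrative value iteration over a tiny MTD MDP (all numbers are assumptions).
import numpy as np

states = ["safe", "probed", "compromised"]
actions = ["wait", "shuffle", "reimage"]        # MTD moves with different costs
cost = {"wait": 0.0, "shuffle": 1.0, "reimage": 4.0}
breach_penalty = 10.0                            # per-step cost of being compromised
gamma = 0.95

# P[a][s, s'] = transition probability under action a (illustrative numbers)
P = {
    "wait":    np.array([[0.85, 0.15, 0.00],
                         [0.00, 0.60, 0.40],
                         [0.00, 0.00, 1.00]]),
    "shuffle": np.array([[0.95, 0.05, 0.00],
                         [0.70, 0.25, 0.05],
                         [0.20, 0.30, 0.50]]),
    "reimage": np.array([[0.99, 0.01, 0.00],
                         [0.95, 0.05, 0.00],
                         [0.90, 0.10, 0.00]]),
}

def value_iteration(tol=1e-8):
    V = np.zeros(len(states))
    while True:
        # Bellman optimality backup: Q(s, a) = c(s, a) + gamma * E[V(s') | s, a]
        Q = np.array([[cost[a] + breach_penalty * (s == 2) + gamma * P[a][s] @ V
                       for a in actions] for s in range(len(states))])
        V_new = Q.min(axis=1)                    # minimize expected discounted cost
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, [actions[i] for i in Q.argmin(axis=1)]
        V = V_new

V, policy = value_iteration()
print(dict(zip(states, policy)), np.round(V, 2))
```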
5. The emerging Internet of Things (IoT) has increased the complexity and difficulty of network administration. Fortunately, Software-Defined Networking (SDN) provides an easy and centralized approach to administering a large number of IoT devices and can greatly reduce the workload of network administrators. SDN-based implementation of networks, however, has also introduced new security concerns, such as an increasing number of DDoS attacks. This paper introduces an easy and lightweight defense strategy against DDoS attacks on IoT devices in an SDN environment using a Markov Decision Process (MDP), in which optimal policies regarding the handling of network flows are determined with the intention of preventing DDoS attacks.