Title: Adversarial Model Predictive Control via Second-Order Cone Programming
We study the problem of designing attacks on safety-critical systems in which the adversary seeks to maximize the overall system cost within a model predictive control framework. Although this problem is NP-hard in general, we characterize a family of problems that can be solved in polynomial time via a second-order cone programming relaxation. In particular, we show that positive systems fall under this family. We provide examples demonstrating the design of optimal attacks on an autonomous vehicle and a microgrid.
Award ID(s):
1752362 1736448 1711188
PAR ID:
10132910
Author(s) / Creator(s):
Date Published:
Journal Name:
Conference on Decision and Control
Page Range / eLocation ID:
1403 to 1409
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
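The abstract above does not spell out the formulation, but the tractability claim can be illustrated on a toy instance: if the plant is a positive system (nonnegative A and B) with a monotone linear stage cost, an additive attacker maximizing the cumulative cost under per-step 2-norm energy bounds is optimizing a linear objective over second-order cone constraints, i.e., an SOCP. The sketch below, written with cvxpy, is a minimal illustration under those assumptions (fixed nominal input sequence, illustrative matrices and budget); it is not the paper's actual formulation or relaxation.

```python
# A minimal sketch (not the paper's formulation): for a positive system with a
# linear, monotone stage cost, the attacker's cost-maximization over 2-norm-bounded
# attack inputs is itself an SOCP and can be handed to an off-the-shelf conic solver.
# All matrices, the nominal input, and the budget below are illustrative only.
import numpy as np
import cvxpy as cp

n, m, T = 3, 1, 10                      # state dim, input dim, horizon (toy values)
A = np.array([[0.9, 0.1, 0.0],          # nonnegative A, B: a positive system
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
B = np.array([[0.5], [0.0], [0.3]])
c = np.array([1.0, 1.0, 1.0])           # elementwise-nonnegative (monotone) stage cost
x0 = np.array([1.0, 0.5, 0.2])
u_nom = np.zeros((T, m))                # nominal control sequence, held fixed here
eps = 0.1                               # per-step attack energy budget

a = cp.Variable((T, n))                 # attack injected into the state update
x = x0
total_cost = 0
constraints = []
for k in range(T):
    total_cost += c @ x                                # linear cost keeps this convex
    constraints.append(cp.norm(a[k], 2) <= eps)        # second-order cone constraint
    x = A @ x + B @ u_nom[k] + a[k]                    # attacked linear dynamics

prob = cp.Problem(cp.Maximize(total_cost), constraints)
prob.solve()
print("worst-case cumulative cost:", prob.value)
```

For a general (e.g., quadratic) cost the attacker's problem becomes a nonconvex maximization, which is where a relaxation of the kind the paper develops would be needed.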
More Like this
  1. This paper studies a security problem for a class of cloud-connected multi-agent systems, where autonomous agents coordinate via a combination of short-range ad-hoc communication links and long-range cloud services. We consider a simplified model for the dynamics of a cloud-connected multi-agent system and attacks, where the states evolve according to linear time-invariant impulsive dynamics, and attacks are modeled as exogenous inputs designed by an omniscient attacker that alters the continuous and impulsive updates. We propose a definition of attack detectability, characterize the existence of stealthy attacks as a function of the system parameters and attack properties, and design a family of undetectable attacks. We illustrate our results on a cloud-based surveillance example.
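To make the attack model in the abstract above concrete, the following sketch simulates a toy version of it: states flow under linear time-invariant dynamics between cloud instants and undergo an impulsive reset at each instant, with exogenous attack inputs entering both the continuous and impulsive updates. The forward-Euler discretization, the matrices, the naive nominal-comparison detector, and its threshold are illustrative assumptions, not the paper's model or its detectability characterization.

```python
# A rough sketch (assumed discretization, not the paper's model): LTI flow between
# cloud update instants, an impulsive reset at each instant, and attack inputs
# a_flow / a_jump injected into both channels. All values are illustrative only.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # continuous-time flow dynamics
J = np.array([[0.5, 0.0], [0.0, 0.5]])     # impulsive (cloud) update map
C = np.array([[1.0, 0.0]])                 # local measurements used for detection
dt, steps_per_jump, n_jumps = 0.01, 100, 5

def simulate(a_flow, a_jump, x0=np.array([1.0, 0.0])):
    """Forward-Euler simulation of the attacked impulsive system; returns outputs."""
    x, ys = x0.copy(), []
    for k in range(n_jumps):
        for i in range(steps_per_jump):                 # continuous flow plus attack
            x = x + dt * (A @ x + a_flow[k, i])
            ys.append(C @ x)
        x = J @ x + a_jump[k]                           # impulsive update plus attack
    return np.array(ys)

y_nominal = simulate(np.zeros((n_jumps, steps_per_jump, 2)), np.zeros((n_jumps, 2)))
a_flow = 0.05 * np.ones((n_jumps, steps_per_jump, 2))   # a crude, non-stealthy attack
y_attacked = simulate(a_flow, np.zeros((n_jumps, 2)))

# A naive residual-based detector: flag the attack if the measured outputs
# deviate from the nominal trajectory by more than an (arbitrary) threshold.
residual = np.max(np.abs(y_attacked - y_nominal))
print("attack detected" if residual > 1e-3 else "attack is stealthy (no output deviation)")
```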
  2. The adoption of digital technology in industrial control systems (ICS) enables improved control over operation, easier system diagnostics, and reduced maintenance costs for cyber-physical systems (CPS). However, digital systems expose CPS to cyber-attacks. The problem is grave, since these cyber-attacks can lead to cascading failures affecting safety in CPS. Unfortunately, the relationship between safety events and cyber-attacks in ICS, and how cyber-attacks can lead to cascading failures affecting safety, is ill-understood. Consequently, CPS operators are ill-prepared to handle cyber-attacks on their systems. In this work, we envision adopting Explainable AI to assist CPS operators in analyzing how a cyber-attack can trigger safety events in CPS and then interactively determining potential approaches to mitigate those threats. We outline the design of a formal framework, which is based on the notion of transition systems, and the associated toolsets for this purpose. The transition system is represented as an AI planning problem and adopts the causal formalism of human reasoning to assist CPS operators in their analyses. We discuss some of the research challenges that need to be addressed to bring this vision to fruition.
  3. The robustness of Deep Reinforcement Learning (DRL) algorithms to adversarial attacks in real-world applications, such as those deployed in cyber-physical systems (CPS), is of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally perverse but relatively less studied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent with decoupled constraints as the attack budget. We propose the white-box Myopic Action Space (MAS) attack algorithm, which distributes the attack across the action-space dimensions. Next, we reformulate the optimization problem with the same objective function but with a temporally coupled constraint on the attack budget, to account for the approximated dynamics of the agent. This leads to the white-box Look-ahead Action Space (LAS) attack algorithm, which distributes the attack across the action and temporal dimensions. Our results show that, using the same amount of resources, the LAS attack degrades the agent's performance significantly more than the MAS attack. This reveals the possibility that, with limited resources, an adversary can exploit the agent's dynamics to craft attacks that cause the agent to fail. Additionally, we leverage these attack strategies as a tool to gain insights into the potential vulnerabilities of DRL agents.
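As a rough illustration of the myopic action-space idea described above (not the authors' algorithm), the sketch below perturbs an agent's chosen action in the direction that decreases a white-box value estimate and projects the perturbation back onto an L2 budget. The quadratic critic, the finite-difference gradients, and all constants are stand-in assumptions.

```python
# Schematic myopic action-space attack: descend a (toy) value estimate with respect
# to the action, subject to a per-step L2 budget. Illustrative only.
import numpy as np

def value_estimate(state, action):
    """Toy stand-in for the agent's white-box value/critic estimate."""
    return -np.sum((action - 0.5 * state) ** 2)      # peaked at the agent's best action

def grad_action(state, action, eps=1e-4):
    """Central-difference gradient of the value estimate with respect to the action."""
    g = np.zeros_like(action)
    for i in range(action.size):
        d = np.zeros_like(action)
        d[i] = eps
        g[i] = (value_estimate(state, action + d) - value_estimate(state, action - d)) / (2 * eps)
    return g

def myopic_attack(state, action, budget=0.2, lr=0.5, iters=10, seed=0):
    """Perturb the action to reduce the value estimate, keeping ||delta||_2 <= budget."""
    # Small random start so the descent can leave the critic's stationary point
    # at the agent's (optimal) clean action.
    delta = 1e-3 * np.random.default_rng(seed).normal(size=action.shape)
    for _ in range(iters):
        delta -= lr * grad_action(state, action + delta)   # descend the value estimate
        norm = np.linalg.norm(delta)
        if norm > budget:                                   # project back onto the L2 ball
            delta *= budget / norm
    return action + delta

state = np.array([1.0, -0.5, 0.3])
clean_action = 0.5 * state                                  # the agent's intended action
attacked_action = myopic_attack(state, clean_action)
print("value with clean action:   ", value_estimate(state, clean_action))
print("value with attacked action:", value_estimate(state, attacked_action))
```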
  4. Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks has been introduced in the literature, along with methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide evidence to the contrary. We first establish that, in the general case, robustness to backdoors implies model robustness to adversarial examples, a major open problem in itself. Furthermore, detecting the presence of a backdoor in an FL model is unlikely to be feasible assuming first-order oracles or polynomial time. We couple our theoretical results with a new family of backdoor attacks, which we refer to as edge-case backdoors. An edge-case backdoor forces a model to misclassify seemingly easy inputs that are, however, unlikely to be part of the training or test data, i.e., they live on the tail of the input distribution. We explain how these edge-case backdoors can lead to unsavory failures and may have serious repercussions on fairness, and we show that, with careful tuning on the adversary's side, they can be inserted across a range of machine learning tasks (e.g., image classification, OCR, text prediction, sentiment analysis).
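The mechanics of an edge-case backdoor can be sketched on a toy federated-averaging setup (illustrative only, not the paper's attack): a compromised client mixes rare, tail-of-distribution inputs relabeled with its target class into its local data before training, and scales its update so the poison survives aggregation. The logistic-regression task, the update-boosting step, and all constants are assumptions of this toy.

```python
# Simplified edge-case backdoor against FedAvg with a toy logistic-regression task.
# Everything here (task, boosting, constants) is an assumption of this sketch.
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=100):
    """Full-batch gradient descent on the logistic loss (stand-in for local training)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
d, n_clients, n_rounds = 5, 10, 5
w_global = np.zeros(d)

# Honest clients: bulk-of-distribution inputs, labeled by the sign of feature 0.
clients = []
for _ in range(n_clients):
    X = rng.normal(0.0, 1.0, (50, d))
    clients.append((X, (X[:, 0] > 0).astype(float)))

# Edge-case inputs: naturally labeled 0 (feature 0 is negative) but carrying a rare,
# tail-of-distribution value in feature 1; the attacker relabels them as class 1.
X_edge = rng.normal(0.0, 1.0, (20, d))
X_edge[:, 0] = rng.normal(-1.0, 0.3, 20)
X_edge[:, 1] = rng.normal(4.0, 0.1, 20)     # unlikely under the benign distribution
y_edge = np.ones(20)                        # attacker-chosen target label

for _ in range(n_rounds):
    updates = []
    for i, (X, y) in enumerate(clients):
        if i == 0:   # the compromised client mixes edge cases into its local data
            X, y = np.vstack([X, X_edge]), np.concatenate([y, y_edge])
            w_local = local_train(w_global.copy(), X, y)
            # Boost (model-replacement-style scaling) so the poisoned deviation
            # survives averaging -- an assumption of this toy, not the paper's method.
            w_local = w_global + n_clients * (w_local - w_global)
        else:
            w_local = local_train(w_global.copy(), X, y)
        updates.append(w_local)
    w_global = np.mean(updates, axis=0)     # plain FedAvg aggregation

p_edge = 1.0 / (1.0 + np.exp(-X_edge @ w_global))
print("fraction of edge cases predicted as the attacker's label:", np.mean(p_edge > 0.5))
```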
  5. The advent of autonomous mobile multi-robot systems has driven innovation in both the industrial and defense sectors. The integration of such systems in safety- and security-critical applications has raised concern over their resilience to attack. In this work, we investigate the security problem of a stealthy adversary masquerading as a properly functioning agent. We show that conventional multi-agent pathfinding solutions are vulnerable to these physical masquerade attacks. Furthermore, we provide a constraint-based formulation of multi-agent pathfinding that yields multi-agent plans that are provably resilient to physical masquerade attacks. This formulation leverages inter-agent observations to facilitate introspective monitoring and thereby guarantee resilience.