Title: A Secure Control Learning Framework for Cyber-Physical Systems under Sensor Attacks
In this paper, we develop a learning-based secure control framework for cyber-physical systems in the presence of sensor attacks. Specifically, we use several observer-based estimators to detect the attacks while also introducing a threat detection level function. We then solve the underlying joint state estimation and attack mitigation problems by using a reinforcement learning algorithm. Finally, an illustrative numerical example is provided to show the efficacy of the proposed framework.
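As a rough illustration of the kind of observer-based detection the abstract describes, the sketch below runs a single Luenberger-style observer and flags an attack when the output residual exceeds a fixed threshold standing in for the threat detection level. All names and values (the system matrices, observer gain, and threshold) are hypothetical; the paper's multiple estimators and reinforcement-learning mitigation step are not reproduced here.

```python
# Minimal sketch of observer-based sensor-attack detection (illustrative only).
# The system matrices, observer gain, and threshold below are hypothetical.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time state matrix (assumed)
B = np.array([[0.0], [0.1]])             # input matrix (assumed)
C = np.array([[1.0, 0.0]])               # output matrix (assumed)
L = np.array([[0.5], [0.3]])             # observer gain (assumed stabilizing)
TAU = 0.2                                # detection threshold ("threat level")

def observer_step(x_hat, u, y):
    """One Luenberger-observer update; returns the new estimate and residual norm."""
    y_hat = C @ x_hat
    r = y - y_hat                        # innovation / residual
    x_next = A @ x_hat + B @ u + L @ r
    return x_next, float(np.linalg.norm(r))

def detect(residuals, tau=TAU):
    """Flag a sensor attack whenever the residual exceeds the threat level."""
    return [r > tau for r in residuals]
```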
Award ID(s):
1851588
PAR ID:
10121587
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
A Secure Control Learning Framework for Cyber-Physical Systems under Sensor Attacks
Page Range / eLocation ID:
4280-4285
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Semantic communication is of crucial importance for next-generation wireless communication networks. Existing works have developed semantic communication frameworks based on deep learning. However, systems powered by deep learning are vulnerable to threats such as backdoor and adversarial attacks. This paper delves into backdoor attacks targeting deep learning-enabled semantic communication systems. Since current works on backdoor attacks are not tailored for semantic communication scenarios, a new backdoor attack paradigm on semantic symbols (BASS) is introduced, based on which the corresponding defense measures are designed. Specifically, a training framework is proposed to prevent BASS. Additionally, reverse engineering-based and pruning-based defense strategies are designed to protect against backdoor attacks in semantic communication. Simulation results demonstrate the effectiveness of both the proposed attack paradigm and the defense strategies.
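For readers unfamiliar with backdoor poisoning in general, the sketch below stamps a trigger pattern into a small fraction of training sequences and relabels them to an attacker-chosen class. It is a generic illustration, not the BASS construction from the paper; the trigger symbols, poison rate, and target label are arbitrary assumptions.

```python
# Generic backdoor-poisoning sketch on symbol sequences (not the BASS paradigm itself).
# Trigger pattern, poison rate, and target label are hypothetical choices.
import numpy as np

TRIGGER = np.array([3, 1, 4])   # assumed trigger symbols stamped at the sequence start
TARGET_LABEL = 0                # attacker's desired output class (assumed)

def poison(X, y, rate=0.05, rng=np.random.default_rng(0)):
    """Stamp a trigger into a small fraction of samples and flip their labels."""
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X[idx, :len(TRIGGER)] = TRIGGER   # overwrite leading symbols with the trigger
    y[idx] = TARGET_LABEL             # associate the trigger with the target class
    return X, y
```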
  2. Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists where key-gate localities are vulnerable to learning. Thus, we call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies drop to around 50% for ALMOST-synthesized circuits, all without undermining design optimization.
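The sketch below shows a generic simulated-annealing loop over synthesis recipes that scores candidates with a stand-in surrogate for predicted attack accuracy. It only illustrates the search structure, not ALMOST itself; the transform set, recipe length, and surrogate predictor are invented for the example.

```python
# Generic simulated-annealing search over synthesis recipes (illustrative, not ALMOST).
# The transform set, recipe length, and surrogate predictor are hypothetical.
import math, random

TRANSFORMS = ["rewrite", "refactor", "resub", "balance"]   # assumed recipe steps

def predict_attack_accuracy(recipe):
    """Stand-in for an adversarially trained surrogate model (assumption):
    maps a recipe to a repeatable pseudo-accuracy in [0.5, 0.9]."""
    return 0.5 + 0.4 * (hash(tuple(recipe)) % 1000) / 1000.0

def anneal(steps=200, length=8, t0=1.0, cooling=0.98):
    """Search for a recipe whose predicted attack accuracy is as low as possible."""
    recipe = [random.choice(TRANSFORMS) for _ in range(length)]
    acc = predict_attack_accuracy(recipe)
    best, best_acc, temp = recipe[:], acc, t0
    for _ in range(steps):
        cand = recipe[:]
        cand[random.randrange(length)] = random.choice(TRANSFORMS)  # mutate one step
        cand_acc = predict_attack_accuracy(cand)
        # lower predicted attack accuracy is better; occasionally accept worse moves
        if cand_acc < acc or random.random() < math.exp((acc - cand_acc) / temp):
            recipe, acc = cand, cand_acc
        if acc < best_acc:
            best, best_acc = recipe[:], acc
        temp *= cooling
    return best, best_acc
```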
  3. We present a probabilistic framework for studying adversarial attacks on discrete data. Based on this framework, we derive a perturbation-based method, Greedy Attack, and a scalable learning-based method, Gumbel Attack, that illustrate various tradeoffs in the design of attacks. We demonstrate the effectiveness of these methods using both quantitative metrics and human evaluation on various state-of-the-art models for text classification, including a word-based CNN, a character-based CNN, and an LSTM. As an example of our results, we show that the accuracy of character-based convolutional networks drops to the level of random selection by modifying only five characters through Greedy Attack.
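A generic greedy character-substitution search of the kind alluded to above can be sketched as follows. The `score` callable is a hypothetical black box returning the model's confidence in the true class; this is a rough illustration, not the authors' exact Greedy Attack.

```python
# Rough sketch of a greedy character-substitution attack (illustrative only).
# `score(text)` is a hypothetical black box returning the true-class probability.
import string

def greedy_attack(text, score, budget=5):
    """Flip up to `budget` characters, each time picking the single edit
    that most reduces the model's confidence in the true class."""
    chars = list(text)
    for _ in range(budget):
        base = score("".join(chars))
        best_drop, best_edit = 0.0, None
        for i in range(len(chars)):
            for c in string.ascii_lowercase:
                if c == chars[i]:
                    continue
                cand = chars[:]
                cand[i] = c
                drop = base - score("".join(cand))
                if drop > best_drop:
                    best_drop, best_edit = drop, (i, c)
        if best_edit is None:   # no single edit helps any further
            break
        i, c = best_edit
        chars[i] = c
    return "".join(chars)
```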
  4. In this article, a new framework for the resilient control of continuous-time linear systems under denial-of-service (DoS) attacks and system uncertainty is presented. Integrating techniques from reinforcement learning and output regulation theory, it is shown that resilient optimal controllers can be learned directly from real-time state and input data collected from the systems subjected to attacks. Sufficient conditions are given under which the closed-loop system remains stable for any given upper bound on the DoS attack duration. Simulation results are used to demonstrate the efficacy of the proposed learning-based framework for resilient control under DoS attacks and model uncertainty.
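To make the DoS attack model concrete, the toy simulation below holds the last received control input whenever a hypothetical jamming schedule blocks the channel. The dynamics, feedback gain, hold-input assumption, and attack schedule are all invented for illustration; the learning-based resilient controller from the abstract is not reproduced.

```python
# Toy simulation of a linear system under DoS (illustrates the attack model only).
# System matrices, feedback gain, and attack schedule are hypothetical.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # continuous-time dynamics (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 1.5]])                 # nominal state-feedback gain (assumed)
DT, T = 0.01, 10.0

def dos_active(t):
    """Assumed DoS schedule: the channel is jammed during [2,3) and [6,7) seconds."""
    return 2.0 <= t < 3.0 or 6.0 <= t < 7.0

x, u_held = np.array([1.0, 0.0]), np.zeros(1)
for k in range(int(T / DT)):
    t = k * DT
    if not dos_active(t):
        u_held = -(K @ x)                  # a fresh control update gets through
    # under DoS the actuator keeps applying the last received input (assumption)
    x = x + DT * (A @ x + B @ u_held)      # forward-Euler integration step
```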
  5. This article develops a data-based, privacy-preserving learning framework for the detection and mitigation of replay attacks on cyber-physical systems. Optimal watermarking signals are added to assist in detecting potential replay attacks. To improve the confidentiality of the output data, we first add a level of differential privacy. We then use a data-based technique to learn the best defense strategy in the presence of worst-case disturbances, stochastic noise, and replay attacks. A data-based Neyman-Pearson detector design is also proposed to identify replay attacks. Finally, simulation results show the efficacy of the proposed approach, along with a comparison of our data-based technique to a model-based one.
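As a hedged illustration of the Neyman-Pearson idea, the sketch below tests whether a watermark-induced mean shift is present in the residuals (it vanishes when recorded data are replayed) and sets the threshold to cap the false-alarm rate. The Gaussian noise model and all parameter values are assumptions, not the article's actual detector design.

```python
# Sketch of a Neyman-Pearson-style replay detector using a watermark-correlated
# residual (illustrative; noise model and parameters are assumptions).
import numpy as np
from scipy import stats

SIGMA = 1.0    # assumed residual standard deviation
MU_WM = 0.8    # assumed mean shift the watermark induces when sensing is live
ALPHA = 0.05   # allowed false-alarm probability

def np_detector(residuals):
    """Likelihood-ratio test: H0 = live data (watermark present, mean MU_WM),
    H1 = replayed data (watermark absent, mean 0)."""
    r = np.asarray(residuals, dtype=float)
    # log-likelihood ratio of H1 vs H0 for i.i.d. Gaussian residuals
    llr = np.sum(stats.norm.logpdf(r, 0.0, SIGMA) - stats.norm.logpdf(r, MU_WM, SIGMA))
    # Under H0 the LLR is Gaussian with the mean/variance below; pick the threshold
    # so that P(flag | H0) <= ALPHA, per the Neyman-Pearson criterion.
    n = len(r)
    mean_h0 = -n * MU_WM**2 / (2 * SIGMA**2)
    std_h0 = np.sqrt(n) * MU_WM / SIGMA
    threshold = mean_h0 + std_h0 * stats.norm.ppf(1 - ALPHA)
    return llr > threshold   # True means "replay attack suspected"
```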