The proliferation of distributed cyber-physical systems has raised the need for computationally efficient security solutions. Toward this objective, distributed state estimators that can withstand attacks on agents (or nodes) of the system have been developed, but many of these works guarantee that the estimation error converges to zero asymptotically only by restricting the number of agents that can be compromised. We propose the Resilient Distributed Kalman Filter (RDKF), a novel distributed algorithm that estimates states within an error bound and does not depend on the number of agents that an attack can compromise. Our method is based on convex optimization and performs well in practice, which we demonstrate with a simulation example. We theoretically show that, in a connected network, the estimation error generated by the Distributed Kalman Filter and by our RDKF at each agent converges to zero in an attack-free and noise-free scenario. Furthermore, our resiliency analysis shows that the RDKF algorithm bounds the disturbance on the state estimate caused by an attack.
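As a rough illustration of the mechanism described above (not the actual RDKF algorithm), the sketch below combines a standard local Kalman predict/update with a saturated consensus term, so that a compromised neighbor can perturb the local estimate only by a bounded amount per step. All matrices, the consensus gain, and the clipping radius `gamma` are illustrative assumptions.

```python
import numpy as np

def rdkf_style_step(x_hat, P, y, neighbors_x, A, C, Q, R, gamma=1.0, eps=0.1):
    """One agent's predict/update step followed by saturated neighbor fusion.

    Hypothetical sketch: the saturation bounds the influence any single
    (possibly attacked) neighbor can exert on the local estimate.
    """
    # Standard Kalman predict
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Standard Kalman measurement update
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (y - C @ x_pred)
    P_upd = (np.eye(len(x_hat)) - K @ C) @ P_pred
    # Saturated consensus: clip each neighbor's disagreement to radius gamma,
    # so an attacked neighbor injects at most a bounded disturbance.
    for x_j in neighbors_x:
        d = x_j - x_upd
        norm = np.linalg.norm(d)
        if norm > gamma:
            d *= gamma / norm
        x_upd = x_upd + eps * d  # small consensus gain (assumed)
    return x_upd, P_upd
```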
On the Impact of Trusted Nodes in Resilient Distributed State Estimation of LTI Systems
We address the problem of distributed state estimation of a linear dynamical process in an attack-prone environment. A network of sensors, some of which can be compromised by adversaries, aims to estimate the state of the process. In this context, we investigate the impact of making a small subset of the nodes immune to attacks, or "trusted". Given a set of trusted nodes, we identify separate necessary and sufficient conditions for resilient distributed state estimation. We use these conditions to illustrate how even a small trusted set can achieve a desired degree of robustness (where the robustness metric is specific to the problem under consideration) that could otherwise only be achieved via additional measurement and communication-link augmentation. We then establish that, unfortunately, the problem of selecting trusted nodes is NP-hard. Finally, we develop an attack-resilient, provably correct distributed state estimation algorithm that appropriately leverages the presence of the trusted nodes.
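To make the role of trusted nodes concrete, here is a minimal, hypothetical fusion rule (a simplification for scalar states, not the paper's algorithm): a node keeps an untrusted neighbor's estimate only if it falls within the range spanned by its own estimate and those of its trusted neighbors, which bounds how far adversarial values can pull the fused result.

```python
def trusted_fusion(own_est, trusted_ests, untrusted_ests):
    """Fuse scalar estimates, filtering untrusted values against trusted ones.

    Illustrative sketch only; all names are assumptions.
    """
    anchors = trusted_ests + [own_est]          # values known to be clean
    lo, hi = min(anchors), max(anchors)
    # Keep only untrusted estimates consistent with the trusted range.
    kept = [e for e in untrusted_ests if lo <= e <= hi]
    return sum(anchors + kept) / len(anchors + kept)
```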
- Award ID(s): 1635014
- PAR ID: 10086132
- Journal Name: 2018 IEEE Conference on Decision and Control (CDC)
- Page Range / eLocation ID: 4547 to 4552
- Sponsoring Org: National Science Foundation
More Like this
Most traditional state estimation algorithms raise false alarms when an attack occurs. This paper proposes an attack-resilient algorithm in which attacks are automatically ignored and state estimation continues, acting as a grid-eye that monitors the whole power system. After modeling a smart grid that incorporates distributed energy resources, smart sensors, which are prone to attacks, are deployed to gather measurements. An optimal state estimator is then designed from these noisy, possibly attacked measurements. When an attack occurs, the measurement residual grows large and can be suppressed by the proposed saturation function. Moreover, the saturation function is computed automatically and dynamically from the residual error and the design parameters. Combining these elements, a modified Kalman filter is applied to smart grid state estimation. Simulation results show that the proposed algorithm provides high estimation accuracy.
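The saturated-innovation idea described above might be sketched as follows. This is a hedged illustration, not the paper's exact design: in particular, the dynamic threshold rule (a few standard deviations of the predicted residual) is an assumption.

```python
import numpy as np

def saturated_kf_update(x_pred, P_pred, y, C, R, k_sigma=3.0):
    """Kalman measurement update with a saturated residual (illustrative)."""
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)
    r = y - C @ x_pred                          # measurement residual
    # Dynamic saturation level: a few std devs of the expected residual.
    thresh = k_sigma * np.sqrt(np.diag(S))
    r_sat = np.clip(r, -thresh, thresh)         # excess (attack) energy ignored
    x_upd = x_pred + K @ r_sat
    P_upd = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x_upd, P_upd
```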
To improve the resilience of distributed training to worst-case, or Byzantine, node failures, several recent approaches have replaced gradient averaging with robust aggregation methods. Such techniques can have high computational costs, often quadratic in the number of compute nodes, and offer only limited robustness guarantees. Other methods have instead used redundancy to guarantee robustness, but can tolerate only a limited number of Byzantine failures. In this work, we present DETOX, a Byzantine-resilient distributed training framework that combines algorithmic redundancy with robust aggregation. DETOX operates in two steps: a filtering step that uses limited redundancy to significantly reduce the effect of Byzantine nodes, and a hierarchical aggregation step that can be used in tandem with any state-of-the-art robust aggregation method. We show theoretically that this leads to a substantial increase in robustness and a per-iteration runtime that can be nearly linear in the number of compute nodes. We provide extensive experiments over real distributed setups across a variety of large-scale machine learning tasks, showing that DETOX yields orders-of-magnitude improvements in accuracy and speed over many state-of-the-art Byzantine-resilient approaches.
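The two-step pattern (redundancy-based filtering followed by robust aggregation) might look roughly like the sketch below. This is not the DETOX implementation: the group-wise median and the final coordinate-wise median are stand-ins for the actual vote and for whatever robust aggregator is plugged in.

```python
import numpy as np

def detox_style_aggregate(grads, group_size=3):
    """grads: list of gradient vectors, where each group of `group_size`
    workers redundantly computed the same mini-batch (assumed setup)."""
    groups = [grads[i:i + group_size] for i in range(0, len(grads), group_size)]
    # Step 1: filtering via redundancy, here a simple group-wise median vote.
    filtered = [np.median(np.stack(g), axis=0) for g in groups]
    # Step 2: robust aggregation over the filtered group outputs.
    return np.median(np.stack(filtered), axis=0)
```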
We study the decentralized resilient state-tracking problem, in which each node in a network aims to track the state of a linear dynamical system based on its local measurements and information exchanged with neighboring nodes, despite an attack on some of the nodes. We propose a novel algorithm that solves the decentralized resilient state-tracking problem by relating it to the dynamic average consensus problem. Compared with existing solutions in the literature, our algorithm handles the most general class of decentralized resilient state-tracking problem instances.
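For reference, one node's step of a standard dynamic average consensus update, the primitive the abstract relates state tracking to, might look like this sketch (names and step size are assumptions):

```python
def dac_step(z, u_new, u_old, neighbors_z, alpha=0.2):
    """Track the network-wide average of time-varying local signals u_i.

    z: this node's current tracker value; neighbors_z: neighbors' trackers.
    """
    consensus = sum(z_j - z for z_j in neighbors_z)   # disagreement term
    return z + alpha * consensus + (u_new - u_old)    # feed in input's change
```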
Federated learning, i.e., multi-party distributed learning in a decentralized environment, is more vulnerable to model poisoning attacks than centralized learning, because malicious clients can collude and send carefully tailored model updates that make the global model inaccurate. This motivated the development of Byzantine-resilient federated learning algorithms such as Krum, Bulyan, FABA, and FoolsGold. However, a recently developed untargeted model poisoning attack showed that all prior defenses can be bypassed. The attack exploits the observation that simply flipping the sign of the gradient updates the optimizer computes, for a set of malicious clients, can divert the model from the optimum and increase the test error rate. In this work, we develop FLAIR, a defense against this directed deviation attack (DDA), a state-of-the-art model poisoning attack. FLAIR is based on our intuition that, in federated learning, certain patterns of gradient flips are indicative of an attack; this intuition is remarkably stable across different learning algorithms, models, and datasets. FLAIR assigns reputation scores to the participating clients based on their behavior during the training phase and then takes a weighted contribution of the clients. We show that where the existing defense baselines FABA [IJCAI '19], FoolsGold [Usenix '20], and FLTrust [NDSS '21] fail when 20-30% of the clients are malicious, FLAIR provides Byzantine robustness up to 45% malicious clients. We also show that FLAIR provides robustness even against a white-box version of DDA.
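A rough sketch of sign-flip-based reputation weighting follows; it is not FLAIR's actual scoring rule, and all names and constants are illustrative assumptions.

```python
import numpy as np

def reputation_weighted_aggregate(client_updates, prev_global_update):
    """Down-weight clients whose update signs disagree with the last global step."""
    scores = []
    for u in client_updates:
        # Fraction of coordinates whose sign matches the previous global update.
        agree = np.mean(np.sign(u) == np.sign(prev_global_update))
        scores.append(agree)
    w = np.asarray(scores)
    w = w / w.sum()                                  # reputations -> weights
    return sum(wi * ui for wi, ui in zip(w, client_updates))
```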