Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA), which estimates the legitimacy of robots based on received trust observations and provably minimizes the probability of detection error under the worst-case malicious attack; here, the proportion of malicious robots is assumed known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT), which uses both the reported robot measurements and the trust observations to simultaneously estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis. We exploit particular structure in the problem to show that this approach remains computationally tractable even with unknown problem parameters. We deploy both algorithms in a hardware experiment in which a group of robots conducts crowdsensing of traffic conditions on a mock-up road network, subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT algorithms, respectively.
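For intuition, the two-stage idea can be sketched as a trust-gated fusion rule. The sketch below is a minimal illustration, not the paper's exact 2SA: the function name, the beta-distributed trust model, and the fixed threshold are all assumptions for the example.

```python
import numpy as np

def two_stage_decision(reports, trust_obs, trust_threshold=0.5):
    """Minimal sketch of a 2SA-style decision rule, assuming binary reports,
    symmetric measurement noise, and trust observations in [0, 1].
    Stage 1: threshold the trust observations to estimate the legitimate set.
    Stage 2: fuse only the trusted one-shot reports; under symmetric noise
    the likelihood ratio test reduces to a majority vote."""
    trusted = trust_obs >= trust_threshold
    if not trusted.any():
        return 0  # degenerate case: no robot deemed trustworthy
    return int(reports[trusted].mean() >= 0.5)

# Hypothetical network: 7 malicious robots report 1, 5 legitimate report 0.
rng = np.random.default_rng(0)
reports = np.array([1] * 7 + [0] * 5)
trust = np.concatenate([rng.beta(2, 5, 7),   # malicious: low trust on average
                        rng.beta(5, 2, 5)])  # legitimate: high trust on average
print(two_stage_decision(reports, trust))    # typically 0 despite a malicious majority
```

The actual 2SA additionally handles a known but arbitrary malicious fraction and carries a worst-case optimality guarantee; the sketch conveys only the trust-gating structure.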
- Award ID(s): 2147631
- PAR ID: 10520126
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Transactions on Robotics
- ISSN: 1552-3098
- Page Range / eLocation ID: 1 to 20
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We consider the problem of fault-tolerant parallel search on an infinite line by n robots. Starting from the origin, the robots are required to find a target at an unknown location. The robots can move with maximum speed 1 and can communicate wirelessly among themselves. However, among the n robots, there are f robots that exhibit Byzantine faults. A faulty robot can fail to report the target even after reaching it, or it can make malicious claims about having found the target when in fact it has not. Given the presence of such faulty robots, the search for the target can only be concluded when the non-faulty robots have sufficient evidence that the target has been found. We aim to design algorithms that minimize the value of S_d(n, f), the time to find a target at an unknown distance d from the origin by n robots among which f are faulty. We give several different algorithms whose running time depends on the ratio f/n, the density of faulty robots, and we also prove lower bounds. Our algorithms are optimal for some densities of faulty robots.
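The "sufficient evidence" condition underlying such algorithms can be sketched with a simple counting rule. This is a hypothetical illustration, assuming honest robots always report truthfully and at most f robots are faulty; the paper's algorithms are more involved.

```python
def evidence_at_point(reports, f):
    """Hypothetical evidence rule for Byzantine search with at most f faulty
    robots. `reports` holds booleans from robots that visited a point
    (True = 'target here'). Since at most f robots lie or stay silent:
      - f + 1 concurring True reports include at least one honest witness,
        so the target is confirmed;
      - f + 1 concurring False reports include at least one honest robot
        that saw nothing, so the point is cleared."""
    yes = sum(reports)
    no = len(reports) - yes
    if yes >= f + 1:
        return "confirmed"
    if no >= f + 1:
        return "cleared"
    return "inconclusive"  # more visits needed before concluding

print(evidence_at_point([True, True, False], f=1))  # confirmed: 2 >= f + 1
```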
-
Sural, Shamik; Lu, Haibing (Ed.) Timed data release refers to protecting sensitive data that can be accessed only after a pre-determined amount of time has passed. While blockchain-based solutions for timed data release provide a promising approach for decentralizing the process, designing an attack-resilient timed-release service that withstands malicious adversaries in a blockchain network is inherently challenging. A timed-release service on a blockchain network is inevitably exposed to the risk of post-facto attacks, where adversaries may launch attacks after the data is released in the blockchain network. Existing incentive-based solutions for timed data release in Ethereum blockchains guarantee protection under the assumption of a fully rational adversarial environment in which every peer acts rationally. However, these schemes fail invariably when even a single participating peer node in the protocol starts acting maliciously and deviates from the rational behavior. In this paper, we propose a systematic solution for attack-resilient and practical blockchain-based timed data release in a mixed adversarial environment, where both malicious adversaries and rational adversaries exist. We first propose an effective uncertainty-aware reputation measure to capture the behavior of peers involved in timed data release activities in the network. In light of such a measure, we present the design of a basic protocol that consists of two critical ingredients, namely reputation-aware peer recruitment and verifiable enforcement protocols. The former, prior to the start of the enforcement protocols, performs peer recruitment based on the reputation measure to make the design probabilistically resilient to post-facto attacks. The latter is responsible for contractually guarding the recruited peers at runtime by transparently reporting observed adversarial behaviors. However, the basic recruitment design is aware only of peer reputation and does not consider the working time schedules of the participating peers, which results in lower attack resilience. To enhance the attack resilience further without impacting the verifiable enforcement protocols, we propose a temporal-graph-based reputation-aware peer recruitment algorithm that carefully determines the peer recruitment plan to make the service more attack-resilient. In our proposed approach, we formally capture the timed data release service as a temporal graph and develop a novel maximal attack-resilient path-finding algorithm on the temporal graph for the participating peers. We implement a prototype of the proposed approach using Smart Contracts and deploy it on the Ethereum official test network, Rinkeby. To extensively evaluate the proposed techniques, we perform simulation experiments to validate the effectiveness of the reputation-aware timed data release protocols as well as our proposed temporal-graph-based improvements. The results demonstrate the effectiveness and strong attack resilience of the proposed mechanisms, and our approach incurs only a modest gas cost.
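An uncertainty-aware reputation measure of this kind can be sketched in the style of a beta-reputation system. This is our assumption for illustration; the paper's exact measure may differ.

```python
def reputation(successes, failures):
    """Sketch of an uncertainty-aware reputation measure in the spirit of
    beta-reputation / subjective logic (illustrative, not the paper's exact
    formulation): belief rises with observed good behavior, and uncertainty
    shrinks as more enforcement rounds are observed."""
    total = successes + failures
    belief = successes / (total + 2)
    disbelief = failures / (total + 2)
    uncertainty = 2 / (total + 2)
    return belief, disbelief, uncertainty

# A peer with 8 clean rounds and 1 violation: high belief, small residual uncertainty.
print(reputation(8, 1))  # (0.727..., 0.0909..., 0.1818...)
```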
-
The ubiquitous deployment of robots across diverse domains, from industrial automation to personal care, underscores their critical role in modern society. However, this growing dependence has also revealed security vulnerabilities. One attack vector involves the deployment of malicious software (malware) on robots, which can cause harm to the robots themselves, their users, and even the surrounding environment. Machine learning approaches, particularly supervised ones, have shown promise in malware detection by building intricate models to identify known malicious code patterns. However, these methods are inherently limited in detecting unseen or zero-day malware variants, as they require regularly updated massive datasets that might be unavailable to robots. To address this challenge, we introduce ROBOGUARDZ, a novel malware detection framework based on zero-shot learning for robots. This approach allows ROBOGUARDZ to identify unseen malware by establishing relationships between known malicious code and benign behaviors, enabling detection even before the code executes on the robot. To ensure practical deployment on resource-constrained robotic hardware, we employ a unique parallel structured pruning and quantization strategy that compresses the ROBOGUARDZ detection model by 37.4% while maintaining its accuracy. This strategy reduces the size of the model and its computational demands, making it suitable for real-world robotic systems. We evaluated ROBOGUARDZ on a recent dataset containing real-world binary executables from multi-sensor autonomous car controllers. The framework was deployed on two popular robot embedded hardware platforms. Our results show an average detection accuracy of 94.25% and a low false negative rate of 5.8% with a minimal latency of 20 ms, demonstrating its effectiveness and practicality.
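The prune-then-quantize pipeline can be approximated with standard magnitude pruning followed by uniform integer quantization. The sketch below is illustrative only: it is unstructured pruning, not the parallel structured scheme ROBOGUARDZ describes, and all names and parameters are assumptions.

```python
import numpy as np

def prune_and_quantize(w, prune_frac=0.374, n_bits=8):
    """Illustrative compression sketch: zero out the smallest-magnitude
    weights, then uniformly quantize the survivors to signed integers.
    (Conveys only the prune-then-quantize pipeline, not ROBOGUARDZ's
    actual parallel structured strategy.)"""
    flat = np.abs(w).ravel()
    k = int(prune_frac * flat.size)
    thresh = np.partition(flat, k)[k] if k > 0 else 0.0
    mask = np.abs(w) >= thresh                    # keep the largest weights
    pruned = w * mask
    scale = max(np.abs(pruned).max(), 1e-12) / (2 ** (n_bits - 1) - 1)
    q = np.round(pruned / scale).astype(np.int8)  # 8-bit representation
    return q, scale, mask

w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, scale, mask = prune_and_quantize(w)
print(f"sparsity: {1 - mask.mean():.1%}, "
      f"max reconstruction error: {np.abs(w * mask - q * scale).max():.4f}")
```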