We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. For the Two Stage Approach, we assume that the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and trust observations to simultaneously estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis. We exploit particular structures in the problem to show that this approach remains computationally tractable even with unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions subject to a Sybil attack on a mock-up road network. We extract the trust observations for each robot from communication signals which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT algorithms respectively.
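The two-stage idea can be reduced to a minimal Python sketch: trust observations are averaged to estimate which robots are legitimate, and only those robots' binary reports are fused into a decision. The Beta trust model, the 0.5 threshold, and the majority-vote fusion below are illustrative assumptions, not the provably optimal 2SA rule derived in the paper.

```python
import numpy as np

def two_stage_decision(reports, trust_obs, trust_threshold=0.5):
    """Stage 1: estimate robot legitimacy from averaged trust observations.
    Stage 2: fuse the binary reports of robots deemed legitimate.

    reports: shape (n_robots,), each entry in {0, 1}.
    trust_obs: shape (n_robots, n_obs), stochastic trust values in [0, 1].
    Threshold and majority-vote fusion are illustrative choices only.
    """
    trust_scores = trust_obs.mean(axis=1)           # aggregate trust per robot
    legitimate = trust_scores >= trust_threshold    # stage 1: legitimacy estimate
    if not legitimate.any():                        # fall back to all robots if none pass
        legitimate = np.ones_like(legitimate, dtype=bool)
    votes = reports[legitimate]
    decision = int(votes.mean() >= 0.5)             # stage 2: fuse trusted reports
    return decision, legitimate

# Example: 7 robots, 4 malicious and reporting the wrong hypothesis.
rng = np.random.default_rng(0)
truth = 1
is_malicious = np.array([0, 0, 0, 1, 1, 1, 1], dtype=bool)
reports = np.where(is_malicious, 1 - truth, truth)
# Assumed model: legitimate robots tend to draw higher trust values.
trust_obs = rng.beta(np.where(is_malicious, 2, 8)[:, None],
                     np.where(is_malicious, 8, 2)[:, None],
                     size=(7, 20))
print(two_stage_decision(reports, trust_obs))
```

Even with a malicious majority, the sketch recovers the correct hypothesis because the trust observations, not the raw vote count, decide which reports are fused.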
Characterizing the Modification Space of Signature IDS Rules
Abstract—Signature-based Intrusion Detection Systems (SIDSs) are traditionally used to detect malicious activity in networks. A notable example of such a system is Snort, which compares network traffic against a series of rules that match known exploits. Current SIDS rules are designed to minimize the amount of legitimate traffic flagged incorrectly, reducing the burden on network administrators. However, use cases other than the traditional one, such as researchers studying trends or analyzing modified versions of known exploits, may require SIDSs to be less constrained in their operation. In this paper, we demonstrate that applying modifications to real-world SIDS rules allows for relaxing some constraints and characterizing the performance space of modified rules. We develop an iterative approach for exploring the space of modifications to SIDS rules. By taking the modifications that expand the ROC curve of detection performance and altering them further, we show how to modify rules in a directed manner. Using traffic collected from a cloud telescope and labeled as benign or malicious, we find that the removal of a single component from a SIDS rule has the largest impact on the performance space. Effectively modifying SIDS rules to reduce constraints can enable a broader range of detection for various objectives, from increased security to research purposes.
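A minimal sketch of the single-removal step of this exploration, under simplifying assumptions: a rule is modeled as a set of predicate components (toy stand-ins for Snort options rather than real rule syntax), each variant obtained by deleting one component is scored against labeled traffic, and the resulting (TPR, FPR) points show which removal expands the ROC curve. This illustrates the idea only; it is not the paper's methodology or dataset.

```python
def match(components, packet):
    """A packet matches when every remaining component's predicate holds."""
    return all(pred(packet) for pred in components.values())

def score(components, traffic):
    """Return (true-positive rate, false-positive rate) on labeled traffic."""
    tp = fp = npos = nneg = 0
    for packet, malicious in traffic:
        hit = match(components, packet)
        npos += malicious
        nneg += not malicious
        tp += hit and malicious
        fp += hit and not malicious
    return tp / max(npos, 1), fp / max(nneg, 1)

def single_removals(components, traffic):
    """Score every variant obtained by deleting one component from the rule."""
    results = {"(original)": score(components, traffic)}
    for name in components:
        reduced = {k: v for k, v in components.items() if k != name}
        results[f"without {name}"] = score(reduced, traffic)
    return results

# A toy "rule" with three components, loosely inspired by Snort options.
rule = {
    "dst_port":  lambda p: p["dst_port"] == 80,
    "content":   lambda p: b"/admin" in p["payload"],
    "flow_size": lambda p: p["bytes"] > 500,
}
traffic = [
    ({"dst_port": 80,  "payload": b"GET /admin", "bytes": 900}, True),
    ({"dst_port": 80,  "payload": b"GET /admin", "bytes": 120}, True),
    ({"dst_port": 80,  "payload": b"GET /index", "bytes": 700}, False),
    ({"dst_port": 443, "payload": b"GET /admin", "bytes": 800}, False),
]
for variant, (tpr, fpr) in single_removals(rule, traffic).items():
    print(f"{variant:18s} TPR={tpr:.2f} FPR={fpr:.2f}")
```

In this toy data, dropping the flow-size component raises the true-positive rate without adding false positives, which is the kind of ROC-expanding modification the iterative approach would keep and alter further.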
- Award ID(s): 2343611
- PAR ID: 10543943
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-2181-4
- Page Range / eLocation ID: 536 to 541
- Subject(s) / Keyword(s): cryptography and security
- Format(s): Medium: X
- Location: Boston, MA, USA
- Sponsoring Org: National Science Foundation
More Like this
Programming Protocol-independent Packet Processors (P4) is an open-source domain-specific language for programming packet forwarding in data plane devices, with a variety of constructs optimized for this purpose. With P4, one can program ASICs, PISA chips, FPGAs, and many other network devices, since its constructs allow a degree of target independence that OpenFlow could not support. However, this technology faces two challenges. First, P4 does not account for malicious traffic detection in the data plane pipeline. Second, the controllers have no secure medium for exchanging attack signatures. This ongoing work presents a multichain solution for detecting malicious traffic and exchanging attack signatures among controllers. The architecture uses an Artificial Immune System (AIS) based Intrusion Detection System (IDS), running on a distributed blockchain network, to introspect the P4 data plane and detect anomalous traffic flows. The IDS resides in SideChain smart contracts and constantly monitors traffic flows at the data plane through introspection. Once malicious traffic is detected on any SideChain, its signatures are extracted and passed through the signature forwarding node to the MainChain for real-time storage. The malicious signatures are then distributed to all controllers via the MainChain network. We minimize the congestion the solution can cause in the P4 network by using a load balancer to serve the SideChains. To evaluate the solution, we measure the False Positive Rate (FPR), Detection Rate (DR), and Accuracy (ACC) of the IDS, as well as its execution time, performance overhead, and scalability.
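A toy Python model of the data flow described above, with hypothetical names and a simple byte-rate threshold standing in for the AIS-based detector; it is not P4 code or an actual smart-contract implementation. Flows are spread across SideChains by a round-robin load balancer, detected signatures are appended to a hash-linked MainChain, and controllers read signatures from the MainChain.

```python
import hashlib, itertools, json, time

class MainChain:
    """Simplified hash-linked ledger standing in for the mainchain; stores
    attack signatures and makes them available to all controllers."""
    def __init__(self):
        self.blocks = []

    def add_signature(self, signature):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"signature": signature, "prev": prev_hash, "ts": time.time()}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append(body)

    def broadcast(self):
        return [b["signature"] for b in self.blocks]   # what controllers would pull

class SideChain:
    """Sidechain node that introspects data-plane flows; the byte-rate check
    is an illustrative stand-in for the AIS-based detector."""
    def __init__(self, mainchain, rate_limit=10_000):
        self.mainchain = mainchain
        self.rate_limit = rate_limit

    def inspect(self, flow):
        if flow["bytes_per_s"] > self.rate_limit:      # anomaly heuristic (illustrative)
            sig = {"src": flow["src"], "dst_port": flow["dst_port"]}
            self.mainchain.add_signature(sig)          # forward signature to the mainchain

# Round-robin load balancer spreading flows over three sidechains.
main = MainChain()
side_chains = [SideChain(main) for _ in range(3)]
balancer = itertools.cycle(side_chains)

flows = [
    {"src": "10.0.0.5", "dst_port": 80,  "bytes_per_s": 1_200},
    {"src": "10.0.0.9", "dst_port": 53,  "bytes_per_s": 95_000},   # flagged as anomalous
    {"src": "10.0.0.7", "dst_port": 443, "bytes_per_s": 2_500},
]
for flow in flows:
    next(balancer).inspect(flow)

print(main.broadcast())   # signatures every controller would receive
```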
One of the effective ways of detecting malicious traffic in computer networks is the intrusion detection system (IDS). Although an IDS identifies malicious activity in a network, it may struggle to detect distributed or coordinated attacks because it has only a single vantage point. To combat this problem, cooperative intrusion detection systems were proposed. In such a system, nodes exchange attack features or signatures so that an attack previously detected by one node can be recognized by the others. Exchanging attack features is necessary because zero-day attacks (attacks without a known signature) experienced in different locations are not the same. Although this approach improves a single IDS's ability to respond to attacks previously identified by cooperating nodes, malicious activities such as fake data injection, data manipulation or deletion, and data consistency problems threaten it. In this paper, we propose a solution that leverages blockchain's distributed technology, tamper-proof design, and data immutability to detect and prevent malicious activities and to solve the data consistency problems facing cooperative intrusion detection. Focusing on the extraction, storage, and distribution stages of cooperative intrusion detection, we develop a blockchain-based solution that securely extracts features or signatures, adds an extra verification step, distributes the storage of these signatures and features, and secures data sharing. We evaluate the system's response time and its resistance to feature/signature injection. The results show that the proposed solution protects stored attack features and signatures against malicious injection, manipulation, or deletion, and has low latency.
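A minimal sketch of the tamper-evident signature store implied by the storage and verification stages, assuming a simple hash-chained ledger in Python rather than an actual blockchain platform; node names and rule strings are illustrative.

```python
import hashlib, json

def entry_hash(entry):
    """Hash the entry's contents together with the previous hash so any later
    modification breaks the link."""
    payload = json.dumps({k: entry[k] for k in ("node", "signature", "prev")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class SignatureLedger:
    """Hash-chained store of attack signatures shared by cooperating IDS nodes;
    a stand-in for the blockchain layer described above."""
    def __init__(self):
        self.entries = []

    def append(self, node, signature):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"node": node, "signature": signature, "prev": prev}
        entry["hash"] = entry_hash(entry)               # extra verification data
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash and link; returns False if any entry was
        injected, manipulated, or deleted after the fact."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev or entry_hash(entry) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

ledger = SignatureLedger()
ledger.append("ids-node-1", "alert tcp any any -> any 445 (content:\"exploit\";)")
ledger.append("ids-node-2", "alert udp any any -> any 53 (content:\"tunnel\";)")
print(ledger.verify())                                  # True: chain is intact

ledger.entries[0]["signature"] = "alert icmp any any -> any any"   # simulate tampering
print(ledger.verify())                                  # False: manipulation detected
```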