Title: Over-the-Air Collaborative Learning in Joint Decision Making
We propose an over-the-air learning framework for collaborative decision making in wireless sensor networks. The low-complexity framework leverages low-latency sensor transmission, letting a decision server coordinate measurement sensors for hypothesis testing through over-the-air aggregation of sensor data over a multiple-access channel. We formulate several collaborative over-the-air hypothesis testing problems under different practical protocols for collaborative learning and decision making. We develop hypothesis tests for these network protocols and deployment scenarios, including channel fading. We provide performance benchmarks for both the basic likelihood ratio test and the generalized likelihood ratio test under different deployment conditions. Our results clearly demonstrate the gain provided by an increasing number of collaborative sensors.
Award ID(s):
1824553 2029027 2009001
PAR ID:
10442925
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2022 IEEE Global Telecommunications Conference
Page Range / eLocation ID:
3581 to 3586
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
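The record above centers on a decision server applying a likelihood ratio test to the noisy over-the-air sum of sensor transmissions. As a minimal sketch, assuming a toy Gaussian mean-shift model, an unfaded Gaussian multiple-access channel, and equal priors (none of which are taken from the paper), the test and its dependence on the number of sensors can be illustrated as follows:

```python
# Hedged sketch: likelihood ratio test on an over-the-air sum.
# Signal model and parameters are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, sigma_w = 1.0, 2.0, 1.0   # hypothetical signal, sensor, and channel noise levels

def oac_lrt_error(num_sensors, trials=100_000):
    """Monte Carlo error rate of an LRT on the over-the-air sum.

    Each sensor observes x_i ~ N(0, sigma^2) under H0 and N(mu, sigma^2)
    under H1 and transmits it uncoded over a Gaussian multiple-access
    channel, so the server receives y = sum_i x_i + w.  The LRT between
    the two induced Gaussian densities reduces to thresholding y at the
    midpoint of the two means (equal priors, equal variances).
    """
    h1 = rng.integers(0, 2, size=trials).astype(bool)        # true hypothesis per trial
    x_sum = rng.normal(np.where(h1, mu, 0.0) * num_sensors,  # sum of sensor observations
                       sigma * np.sqrt(num_sensors), size=trials)
    y = x_sum + rng.normal(0.0, sigma_w, size=trials)        # add channel noise
    decide_h1 = y > num_sensors * mu / 2                     # LRT threshold
    return np.mean(decide_h1 != h1)

for n in (1, 4, 16, 64):
    print(f"sensors={n:3d}  error={oac_lrt_error(n):.4f}")
```

In this toy model the mean separation grows linearly in the sensor count while the noise standard deviation grows only as its square root, so the error rate falls as sensors are added, consistent with the gain the abstract reports.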
More Like this
  1. We propose a decentralized, sequential and adaptive hypothesis test in sensor networks, which extends Chernoff's test to a decentralized setting. We show that the proposed test achieves the same asymptotic optimality as the original, minimizing the expected cost required to reach a decision plus the expected cost of making a wrong decision, when the observation cost per unit time tends to zero. We also show that the proposed test is parsimonious in terms of communication: in the regime of vanishing observation cost per unit time, the expected number of channel uses required by each sensor to complete the test converges to four. (A hedged sketch of a sequential stopping rule appears after this list.)
  2. In this work, we address the question of how to enhance signal-agnostic searches by leveraging multiple testing strategies. Specifically, we consider hypothesis tests relying on machine learning, where model selection can introduce a bias towards specific families of new physics signals. Focusing on the New Physics Learning Machine, a methodology to perform a signal-agnostic likelihood-ratio test, we explore a number of approaches to multiple testing, such as combining p-values and aggregating test statistics. Our findings show that it is beneficial to combine different tests, characterised by distinct choices of hyperparameters, and that performance comparable to the best available test is generally achieved, while also providing a more uniform response to various types of anomalies. This study proposes a methodology that is valid beyond machine learning approaches and could in principle be applied to a larger class of model-agnostic analyses based on hypothesis testing. (One combination rule, Fisher's method, is sketched after this list.)
  3. Signal-agnostic data exploration based on machine learning could unveil very subtle statistical deviations of collider data from the expected Standard Model of particle physics. The beneficial impact of a large training sample on machine learning solutions motivates the exploration of increasingly large and inclusive samples of acquired data with resource-efficient computational methods. In this work we consider the New Physics Learning Machine (NPLM), a multivariate goodness-of-fit test built on the Neyman-Pearson maximum-likelihood-ratio construction, and we address the problem of testing large samples under computational and storage resource constraints. We propose to perform parallel NPLM routines over batches of the data, and to combine them by locally aggregating over the data-to-reference density ratios learnt by each batch. The resulting data hypothesis defining the likelihood-ratio test is thus shared over the batches, and complies with the assumption that the expected rate of new physical processes is time invariant. We show that this method outperforms the simple sum of the independent tests run over the batches, and can recover, or even surpass, the sensitivity of the single test run over the full data. Besides the significant advantage for the offline application of NPLM to large samples, the proposed approach offers new prospects toward the use of NPLM to construct anomaly-aware summary statistics in quasi-online data streaming scenarios. (A toy aggregation of per-batch ratios is sketched after this list.)
  4. We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC), even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA), which estimates the legitimacy of robots based on received trust observations and provably minimizes the probability of detection error under the worst-case malicious attack; it assumes that the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT), which uses both the reported robot measurements and trust observations to simultaneously estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis. We exploit particular structure in the problem to show that this approach remains computationally tractable even with unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions subject to a Sybil attack on a mock-up road network. We extract the trust observations for each robot from communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and A-GLRT algorithms, respectively. (A simplified trust-then-test sketch appears after this list.)
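For item 1 above, a hedged sketch of the sequential ingredient only: a plain, centralized SPRT-style stopping rule on a Gaussian mean shift. Chernoff's test additionally adapts which experiment to run next, and the paper decentralizes the procedure across sensors; neither refinement is modeled here, and all parameters are illustrative.

```python
# Hedged sketch: a sequential likelihood-ratio stopping rule (not
# Chernoff's adaptive test and not decentralized).
import numpy as np

rng = np.random.default_rng(4)
mu = 0.5                              # mean under H1 (H0: mean 0), unit variance

def sequential_lrt(true_h1, a=-5.0, b=5.0):
    """Draw one observation at a time; stop when the cumulative
    log-likelihood ratio exits (a, b).  Returns (decision, samples used)."""
    llr, n = 0.0, 0
    while a < llr < b:
        x = rng.normal(mu if true_h1 else 0.0, 1.0)
        llr += mu * x - mu**2 / 2     # log N(x; mu, 1) / N(x; 0, 1)
        n += 1
    return llr >= b, n

results = [sequential_lrt(true_h1=True) for _ in range(1000)]
print(f"P(decide H1 | H1) = {np.mean([d for d, _ in results]):.3f}, "
      f"mean samples = {np.mean([n for _, n in results]):.1f}")
```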
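For item 2 above, one of the named strategies, combining p-values, can be illustrated with Fisher's method. The p-values below are placeholders, and the chi-squared null holds only for independent tests; tests sharing the same data, as in the paper, would need calibration, e.g. by permutation.

```python
# Hedged sketch: Fisher's method for combining p-values from K tests.
import numpy as np
from scipy import stats

# Placeholder p-values, e.g. from the same test statistic trained
# with different hyperparameter choices.
p_values = np.array([0.04, 0.20, 0.01, 0.37])

# Fisher's statistic: -2 * sum(log p_k) ~ chi^2 with 2K dof under H0,
# valid when the individual tests are independent.
fisher_stat = -2.0 * np.sum(np.log(p_values))
combined_p = stats.chi2.sf(fisher_stat, df=2 * len(p_values))
print(f"Fisher statistic = {fisher_stat:.2f}, combined p = {combined_p:.4f}")
```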
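For item 3 above, one plausible reading of "locally aggregating over the data-to-reference density ratios learnt by each batch" is to average the per-batch log-ratios into a single shared alternative and evaluate one likelihood-ratio statistic over the pooled data. The closed-form stand-in for a learned ratio and the averaging rule are assumptions, not the NPLM implementation.

```python
# Hedged sketch: combine per-batch learned log density ratios into one
# shared alternative hypothesis, then evaluate a single statistic.
import numpy as np

rng = np.random.default_rng(2)
batches = [rng.normal(0.2, 1.0, size=500) for _ in range(4)]   # toy data batches

def learned_log_ratio(batch):
    """Stand-in for the log ratio log(n_data / n_ref)(x) a model would
    learn on one batch; here a closed-form Gaussian mean-shift fit."""
    shift = batch.mean()                         # crude per-batch fit
    return lambda x, s=shift: s * x - s**2 / 2   # log N(x; s, 1) / N(x; 0, 1)

ratios = [learned_log_ratio(b) for b in batches]

def shared_log_ratio(x):
    # Local aggregation: average the per-batch log-ratios so every batch
    # contributes to the same (shared) data hypothesis.
    return np.mean([f(x) for f in ratios], axis=0)

pooled = np.concatenate(batches)
t = 2.0 * shared_log_ratio(pooled).sum()         # Neyman-Pearson-style statistic
print(f"combined test statistic t = {t:.1f}")
```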
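For item 4 above, a hedged sketch of the two-stage idea: threshold the trust observations to estimate which robots are legitimate, then fuse the retained one-shot measurements with a likelihood ratio test. The trust distributions, threshold, and attack model are invented for illustration and carry none of the 2SA's worst-case guarantees.

```python
# Hedged sketch: trust-then-test fusion under a malicious majority.
import numpy as np

rng = np.random.default_rng(3)
n_legit, n_malicious = 5, 8                    # malicious robots outnumber legitimate ones
mu = 1.0                                       # event signal mean under H1

# Trust observations: legitimate senders tend to score higher (assumed model).
trust = np.concatenate([rng.beta(5, 2, n_legit), rng.beta(2, 5, n_malicious)])

# One-shot measurements when H1 is true; malicious robots report a flipped signal.
meas = np.concatenate([rng.normal(mu, 1.0, n_legit),
                       rng.normal(-mu, 1.0, n_malicious)])

# Stage 1: estimate legitimacy from trust (simple threshold stand-in).
keep = trust > 0.5

# Stage 2: LRT on retained measurements, N(mu, 1) vs N(0, 1), equal priors.
decide_h1 = meas[keep].sum() > keep.sum() * mu / 2
print(f"kept {keep.sum()} of {meas.size} robots; decide "
      + ("H1" if decide_h1 else "H0"))
```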