
Title: Adaptive Federated Learning for Automatic Modulation Classification Under Class and Noise Imbalance
The ability to rapidly understand and label the radio spectrum in an autonomous way is key for monitoring spectrum interference, improving spectrum utilization efficiency, protecting passive users, monitoring and enforcing regulatory compliance, detecting faulty radios, dynamic spectrum access, opportunistic mesh networking, and numerous NextG regulatory and defense applications. We consider the problem of automatic modulation classification (AMC) by a distributed network of wireless sensors that monitor the spectrum for signal transmissions of interest over a large deployment area. Each sensor receives signals under a channel condition specific to its location and accordingly trains an individual deep neural network (DNN) model to classify signals. To improve modulation classification accuracy, we consider federated learning (FL), where each sensor shares its trained model with a centralized controller, which aggregates the models and uses the result to initialize the next round of training. This process is repeated over time without exchanging any spectrum data (as is done, e.g., in cooperative spectrum sensing). A common DNN is thus built across the network while preserving the privacy of signals collected at different locations. Given the distributed nature of the sensors, the statistics of their data are likely to differ significantly. We propose the use of adaptive federated learning for AMC. Specifically, we use FEDADAM, an algorithm that applies Adam for server-side optimization, and examine how it compares to FEDAVG, one of the standard FL algorithms, which averages client parameters after a number of local iterations, particularly in challenging scenarios that include class imbalance and/or noise-level imbalance across the network. Our extensive numerical studies over 11 standard modulation classes corroborate the merit of adaptive FL, which outperforms its standard alternatives in various challenging cases and for various network sizes.
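The contrast between the two server updates named in the abstract can be sketched in a few lines. This is a minimal illustration in plain Python; the function names and hyperparameter values are ours, not the paper's, and the FEDADAM step follows the generic adaptive-server-optimizer recipe of treating the averaged client update as a pseudo-gradient:

```python
import math

def fedavg_step(client_ws):
    """FEDAVG server step: replace the global model with the
    element-wise average of the clients' locally trained weights."""
    n = len(client_ws)
    return [sum(ws) / n for ws in zip(*client_ws)]

def fedadam_step(global_w, client_ws, m, v,
                 lr=0.1, b1=0.9, b2=0.99, eps=1e-3):
    """FEDADAM server step: treat the average client update as a
    pseudo-gradient and apply an Adam-style update on the server.
    Hyperparameter values here are illustrative."""
    avg = fedavg_step(client_ws)
    new_w, new_m, new_v = [], [], []
    for w, a, mi, vi in zip(global_w, avg, m, v):
        d = a - w                        # pseudo-gradient coordinate
        mi = b1 * mi + (1 - b1) * d      # first-moment estimate
        vi = b2 * vi + (1 - b2) * d * d  # second-moment estimate
        new_w.append(w + lr * mi / (math.sqrt(vi) + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_w, new_m, new_v
```

The intuition for preferring the adaptive step under class or noise imbalance is that the averaged client update can vary noisily from round to round, and the Adam-style moment estimates damp that noise on the server side.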
Award ID(s):
2030234
PAR ID:
10598785
Author(s) / Creator(s):
Publisher / Repository:
AAAI
Date Published:
Journal Name:
Proceedings of the AAAI Symposium Series
Volume:
3
Issue:
1
ISSN:
2994-4317
Page Range / eLocation ID:
309 to 309
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. A central theme in federated learning (FL) is the fact that client data distributions are often not independent and identically distributed (IID), which has strong implications on the training process. While most existing FL algorithms focus on the conventional non-IID setting of class imbalance or missing classes across clients, in practice, the distribution differences could be more complex, e.g., changes in class conditional (domain) distributions. In this paper, we consider this complex case in FL wherein each client has access to only one domain distribution. For tasks such as domain generalization, most existing learning algorithms require access to data from multiple clients (i.e., from multiple domains) during training, which is prohibitive in FL. To address this challenge, we propose a federated domain translation method that generates pseudodata for each client which could be useful for multiple downstream learning tasks. We empirically demonstrate that our translation model is more resource-efficient (in terms of both communication and computation) and easier to train in an FL setting than standard domain translation methods. Furthermore, we demonstrate that the learned translation model enables use of state-of-the-art domain generalization methods in a federated setting, which enhances accuracy and robustness to increases in the synchronization period compared to existing methodology. 
  2. Artificial intelligence (AI)-supported network traffic classification (NTC) has lately been developed for network measurement and quality-of-service (QoS) purposes. More recently, the federated learning (FL) approach has been promoted for distributed NTC development because datasets remain unshared, offering better privacy and confidentiality in raw networking data collection and sharing. However, network measurement still requires invasive probes and constant traffic monitoring. In this paper, we propose a non-invasive network traffic estimation and user profiling mechanism that leverages label inference against FL-based NTC. Specifically, the proposed scheme only monitors weight differences in FL model updates from a target user and recovers its network application (APP) labels as well as a rough estimate of the traffic pattern. Assuming a slotted FL update mechanism, the proposed scheme further maps labels inferred from multiple slots to different profiling classes that depend on, e.g., QoS and APP categorization. Without loss of generality, user profiles are determined based on normalized productivity, entertainment, and casual-usage scores derived from an existing commercial router and its backend server. A slot extension mechanism is further developed for more accurate profiling beyond raw traffic measurement. Evaluations conducted on seven popular APPs across three user profiles demonstrate that our approach can achieve accurate networking user profiling without invasive physical probes or constant traffic monitoring. 
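As a hedged illustration of how weight differences in model updates can leak labels, the sketch below uses a well-known property of softmax cross-entropy: the output-layer bias gradient for class c averages (p_c - y_c) over the batch, so a negative entry suggests class c was present in the client's data. This is a simplified stand-in for the abstract's inference scheme, and all function names here are hypothetical:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    mx = max(z)
    e = [math.exp(v - mx) for v in z]
    s = sum(e)
    return [v / s for v in e]

def bias_gradient(logits_batch, labels_batch, n_classes):
    """Average output-layer bias gradient over a batch for softmax
    cross-entropy: grad[c] = mean(p_c - y_c). Toy forward pass only."""
    grad = [0.0] * n_classes
    n = len(labels_batch)
    for logits, y in zip(logits_batch, labels_batch):
        p = softmax(logits)
        for c in range(n_classes):
            grad[c] += (p[c] - (1.0 if c == y else 0.0)) / n
    return grad

def infer_present_labels(bias_grad):
    """Classes with a negative bias-gradient entry are likely present
    in the batch, since only a true label y_c = 1 can drive
    (p_c - y_c) below zero."""
    return [c for c, g in enumerate(bias_grad) if g < 0]
```

An observer who sees only the update (here, the bias difference, which is proportional to the gradient) can thus recover which APP labels a target user trained on, without any traffic probe.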
  3. Federated learning (FL) is known to be susceptible to model poisoning attacks, in which malicious clients hamper the accuracy of the global model by sending manipulated model updates to the central server during the FL training process. Existing defenses mainly focus on Byzantine-robust FL aggregation and largely ignore the impact of the underlying deep neural network (DNN) that is used for FL training. Inspired by recent findings on critical learning periods (CLP) in DNNs, during which small gradient errors have an irrecoverable impact on final model accuracy, we propose a new CLP-aware defense against poisoning of FL, called DeFL. The key idea of DeFL is to measure fine-grained differences between DNN model updates via an easy-to-compute federated gradient norm vector (FGNV) metric. Using FGNV, DeFL simultaneously detects malicious clients and identifies CLP, which in turn is leveraged to guide the adaptive removal of detected malicious clients from aggregation. As a result, DeFL not only mitigates model poisoning attacks on the global model but is also robust to detection errors. Our extensive experiments on three benchmark datasets demonstrate that DeFL yields significant performance gains over conventional defenses against state-of-the-art model poisoning attacks. 
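One plausible reading of the FGNV idea is a vector of per-layer norms of each client's update, compared across clients to spot manipulated updates. The sketch below is an assumption-laden illustration; the paper's exact metric, thresholding, and CLP logic may differ, and the names are ours:

```python
import math

def fgnv(update):
    """One reading of the FGNV metric: the vector of per-layer L2
    norms of a client's model update (each layer is a flat list of
    weight deltas). Illustrative, not the paper's definition."""
    return [math.sqrt(sum(w * w for w in layer)) for layer in update]

def flag_outliers(updates, thresh=2.0):
    """Flag clients whose per-layer norm deviates from the upper
    median by more than `thresh` times that median, a simple
    stand-in for detecting manipulated updates."""
    vecs = [fgnv(u) for u in updates]
    n_layers = len(vecs[0])
    med = [sorted(v[i] for v in vecs)[len(vecs) // 2]
           for i in range(n_layers)]
    flagged = []
    for cid, v in enumerate(vecs):
        if any(abs(v[i] - med[i]) > thresh * med[i]
               for i in range(n_layers)):
            flagged.append(cid)
    return flagged
```

The per-layer granularity matters: a poisoned update can look normal in aggregate norm while still being anomalous in individual layers, which is what a fine-grained vector metric can catch.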
  4. We consider the problem of predicting cellular network performance (signal maps) from measurements collected by several mobile devices. We formulate the problem within the online federated learning framework: (i) federated learning (FL) enables users to collaboratively train a model while keeping their training data on their devices; (ii) measurements are collected as users move around over time and are used for local training in an online fashion. We consider an honest-but-curious server, who observes the updates from target users participating in FL and infers their location using a deep leakage from gradients (DLG) type of attack, originally developed to reconstruct the training data of DNN image classifiers. We make the key observation that a DLG attack, applied to our setting, infers the average location of a batch of local data, and can thus be used to reconstruct the target users' trajectory at a coarse granularity. We build on this observation to protect location privacy in our setting by revisiting and designing mechanisms within the federated learning framework, including: tuning the FL parameters for averaging, curating local batches so as to mislead the DLG attacker, and aggregating across multiple users with different trajectories. We evaluate the performance of our algorithms through both analysis and simulation based on real-world mobile datasets, and we show that they achieve a good privacy-utility tradeoff. 
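The key observation above, that a DLG-style attack recovers the average location of a local batch, can be illustrated with a toy linear signal-strength model: when residuals happen to be equal across the batch, the ratio of the weight gradient to the bias gradient is exactly the batch-mean location. This is a hypothetical sketch of the leakage mechanism, not the paper's attack:

```python
def gradients(w, b, batch):
    """MSE gradients of a toy linear signal-strength model
    f(x) = w[0]*x[0] + w[1]*x[1] + b over a batch of
    (2-D location, measurement) pairs."""
    n = len(batch)
    gw = [0.0, 0.0]
    gb = 0.0
    for x, y in batch:
        e = w[0] * x[0] + w[1] * x[1] + b - y  # residual
        gw[0] += 2 * e * x[0] / n
        gw[1] += 2 * e * x[1] / n
        gb += 2 * e / n
    return gw, gb

def dlg_mean_location(gw, gb):
    """If residuals are roughly constant across the batch, the weight
    gradient divided by the bias gradient recovers the average
    location of the batch, i.e., coarse trajectory leakage."""
    return [g / gb for g in gw]
```

This also suggests why the mitigations in the abstract help: curating batches or averaging across users with different trajectories makes the recovered "mean location" uninformative about any single user.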
  5. Continual Federated Learning (CFL) is a distributed machine learning technique that enables multiple clients to collaboratively train a shared model without sharing their data, while also adapting to new classes without forgetting previously learned ones. This dynamic, adaptive learning process parallels the concept of foundation models in FL, where large, pre-trained models are fine-tuned in a decentralized, federated setting. While foundation models in FL leverage pre-trained knowledge as a starting point, CFL continuously updates the shared model as new tasks and data distributions emerge, requiring ongoing adaptation. Currently, there are limited evaluation models and metrics for measuring fairness in CFL, and ensuring fairness over time can be challenging as the system evolves. To address this challenge, this article explores temporal fairness in CFL, examining how the fairness of the model can be influenced by the selection and participation of clients over time. Based on individual fairness, we introduce a novel fairness metric that captures temporal aspects of client behavior, and we evaluate different client selection strategies for their impact on promoting fairness. 