Title: Integrated Distributed Wireless Sensing with Over-The-Air Federated Learning
Over-the-air federated learning (OTA-FL) is a communication-efficient approach for carrying out distributed learning tasks. In this paper, we aim to enhance OTA-FL by seamlessly integrating sensing into the communication-computation integrated system. Our research reveals that the wireless waveform used to convey OTA-FL parameters possesses inherent properties that make it well suited for sensing, thanks to its remarkable auto-correlation characteristics. By leveraging the OTA-FL learning statistics, i.e., the means and variances of local gradients in each training round, the sensing results can be embedded therein without the need for additional time or frequency resources. Finally, by accounting for imperfections in the learning statistics that are neglected in prior works, we derive an optimized transceiver design that maximizes OTA-FL performance. Simulations validate that the proposed method not only achieves outstanding sensing performance but also significantly lowers the learning error bound.
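The over-the-air aggregation idea the abstract builds on can be sketched as follows: clients transmit analog gradient symbols that superpose in the wireless channel, so the server only ever observes a noisy sum, which it rescales using the shared per-round learning statistics (means and variances of local gradients). The client count, dimensions, noise level, and the exact normalization below are illustrative assumptions, not the paper's actual transceiver design.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 8, 16

# Each client's local gradient for the current round (synthetic data).
local_grads = rng.normal(size=(num_clients, dim))

# Clients normalize their gradients with per-round learning statistics
# (mean and variance), so transmitted symbols have comparable power.
means = local_grads.mean(axis=1, keepdims=True)
stds = local_grads.std(axis=1, keepdims=True)
tx_symbols = (local_grads - means) / stds

# Over-the-air aggregation: the channel sums all transmissions "for
# free"; the server receives only the superposition plus noise.
noise = rng.normal(scale=0.05, size=dim)
rx = tx_symbols.sum(axis=0) + noise

# The server de-normalizes with the shared statistics to recover an
# estimate of the average gradient for the global model update.
global_grad_est = rx / num_clients * stds.mean() + means.mean()
```

Because aggregation happens in the channel itself, communication cost is independent of the number of clients, which is what makes OTA-FL communication-efficient in the first place.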
Award ID(s):
2103256 1901134 2212318
PAR ID:
10494146
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium
ISBN:
979-8-3503-2010-7
Page Range / eLocation ID:
600 to 603
Format(s):
Medium: X
Location:
Pasadena, CA, USA
Sponsoring Org:
National Science Foundation
More Like this
  3. Federated Learning (FL) revolutionizes collaborative machine learning among Internet of Things (IoT) devices by enabling them to train models collectively while preserving data privacy. FL algorithms fall into two primary categories: synchronous and asynchronous. While synchronous FL efficiently handles straggler devices, its convergence speed and model accuracy can be compromised. In contrast, asynchronous FL allows all devices to participate but incurs high communication overhead and potential model staleness. To overcome these limitations, the paper introduces a semi-synchronous FL framework that tiers clients based on their computing and communication latencies. Clients in different tiers upload their local models at distinct frequencies, striking a balance between straggler mitigation and communication costs. Building on this, the paper proposes the Dynamic client clustering, bandwidth allocation, and local training for semi-synchronous Federated learning (DecantFed) algorithm, which dynamically optimizes client clustering, bandwidth allocation, and local training workloads to maximize data sample processing rates in FL. It also adapts client learning rates to their tiers, addressing the model staleness issue. Extensive simulations on benchmark datasets such as MNIST and CIFAR-10, under both IID and non-IID scenarios, demonstrate DecantFed's superior performance: it outperforms FedAvg and FedProx in convergence speed and delivers at least a 28% improvement in model accuracy compared to FedProx.
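The latency-based client tiering that this semi-synchronous design rests on can be sketched as follows. The tier-splitting rule, the doubling upload schedule, and the latency values are illustrative assumptions, not DecantFed's actual optimization.

```python
# Minimal sketch of latency-based client tiering for semi-synchronous FL.
def assign_tiers(latencies, num_tiers=3):
    """Sort clients by per-round latency and split them into
    equal-size tiers; lower tiers hold the faster clients."""
    order = sorted(range(len(latencies)), key=lambda i: latencies[i])
    tiers = {t: [] for t in range(num_tiers)}
    for rank, client in enumerate(order):
        tiers[rank * num_tiers // len(order)].append(client)
    return tiers

def upload_period(tier, base_period=1):
    """Clients in tier t upload every base_period * 2**t rounds,
    trading model staleness against communication cost."""
    return base_period * 2 ** tier

latencies = [0.2, 1.5, 0.4, 3.0, 0.9, 2.2]  # seconds per local round
tiers = assign_tiers(latencies)  # fastest clients land in tier 0
```

The point of the tiering is that stragglers no longer gate every round (as in synchronous FL), yet their updates arrive on a bounded schedule rather than arbitrarily late (as in asynchronous FL).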
  4. Recent developments in Federated Learning (FL) focus on optimizing the learning process for data, hardware, and model heterogeneity. However, most approaches assume that all devices are stationary, charging, and always connected to Wi-Fi while training on local data. We argue that when real devices move around, the FL process is negatively impacted and the energy devices spend on communication increases. To mitigate these effects, we propose a dynamic community selection algorithm that improves communication energy efficiency, along with two new aggregation strategies that boost learning performance in Hierarchical FL (HFL). On real mobility traces, we show that compared to state-of-the-art HFL solutions, our approach is scalable, achieves better accuracy on multiple datasets, converges up to 3.88× faster, and is significantly more energy efficient in both IID and non-IID scenarios.
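The dynamic community selection described above can be illustrated with a toy model: as a device moves, it re-attaches to whichever edge aggregator currently minimizes its communication cost. The distance-based cost proxy and the coordinates are assumptions for illustration only, not the paper's algorithm.

```python
import math

def pick_community(device_xy, edges_xy):
    """Return the index of the edge aggregator with the lowest
    distance-based communication cost at the device's position."""
    dists = [math.dist(device_xy, e) for e in edges_xy]
    return dists.index(min(dists))

edges = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]  # edge aggregator sites
# As the device moves along its path, its community changes dynamically.
path = [(1.0, 1.0), (8.0, 1.0), (6.0, 7.0)]
assignments = [pick_community(p, edges) for p in path]
```

A real cost model would fold in link quality and residual battery rather than raw distance, but the re-selection loop has the same shape.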
  5. As a popular distributed learning paradigm, federated learning (FL) over mobile devices fosters numerous applications, but its practical deployment is hindered by participating devices' computing and communication heterogeneity. Some pioneering research efforts proposed extracting subnetworks from the global model and assigning each device as large a subnetwork as its full computing capacity allows for local training. Although such fixed-size subnetwork assignment enables FL training over heterogeneous mobile devices, it is unaware of (i) dynamic changes in devices' communication and computing conditions and (ii) the FL training progress and its dynamic requirements for local training contributions, both of which may cause very long FL training delays. Motivated by these dynamics, in this paper we develop a wireless- and heterogeneity-aware latency-efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling. Instead of sticking to a fixed-size subnetwork, WHALE-FL introduces a novel subnetwork selection utility function to capture device and FL training dynamics, and guides each mobile device to adaptively select its subnetwork size for local training based on (a) its computing and communication capacity, (b) its dynamic computing and/or communication conditions, and (c) the FL training status and its corresponding requirements for local training contributions. Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
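The adaptive subnetwork sizing idea can be sketched as a utility that combines the device's current bottleneck resource with training progress and maps to a discrete width ratio. The utility form, weights, and ratio grid below are assumptions in the spirit of the description, not WHALE-FL's actual selection function.

```python
def subnetwork_ratio(compute_speed, link_rate, training_progress,
                     ratios=(0.25, 0.5, 0.75, 1.0)):
    """Pick a subnetwork width ratio from the device's current
    conditions (both normalized to [0, 1]); later in training
    (progress -> 1), larger local contributions are favored."""
    capacity = min(compute_speed, link_rate)   # bottleneck resource
    utility = capacity * (0.5 + 0.5 * training_progress)
    # Map utility in [0, 1] onto the discrete ratio choices.
    idx = min(int(utility * len(ratios)), len(ratios) - 1)
    return ratios[idx]
```

For example, a device with a slow link early in training would train only a narrow subnetwork, while a well-provisioned device late in training would take the full model, which is how the scheduling adapts to the dynamics listed in (a)-(c).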