Recent developments in Federated Learning (FL) focus on optimizing the learning process for data, hardware, and model heterogeneity. However, most approaches assume that all devices are stationary, charging, and always connected to Wi-Fi while training on local data. We argue that when real devices move around, the FL process is negatively impacted and the energy devices spend on communication increases. To mitigate such effects, we propose a dynamic community selection algorithm that improves communication energy efficiency, together with two new aggregation strategies that boost learning performance in Hierarchical FL (HFL). On real mobility traces, we show that compared to state-of-the-art HFL solutions, our approach is scalable, achieves better accuracy on multiple datasets, converges up to 3.88× faster, and is significantly more energy efficient in both IID and non-IID scenarios.
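The abstract does not spell out the selection rule, but a minimal sketch of the idea, assuming each client can estimate its per-round upload energy to every candidate edge aggregator (all names here are illustrative, not the paper's implementation):

```python
# Hypothetical sketch of per-round community (edge-aggregator) selection:
# each mobile client is reassigned to the edge server with the lowest
# predicted upload energy for the coming round.

def select_communities(clients, edge_servers, energy_cost):
    """energy_cost(client, server) -> predicted Joules to upload one update."""
    assignment = {}
    for client in clients:
        # Re-evaluated every round, so moving devices migrate to cheaper links.
        assignment[client] = min(edge_servers, key=lambda s: energy_cost(client, s))
    return assignment
```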
This content will become publicly available on March 26, 2026
Enhancing Data Security in Federated Learning with Dilithium
Federated learning (FL) enables multiple parties to collaboratively train machine learning models while preserving data privacy. However, securing communication within FL frameworks remains a significant challenge due to potential vulnerabilities to data breaches and integrity attacks. This paper proposes a novel approach using Dilithium, a robust digital signature framework, to enhance data security in FL. By integrating Dilithium into FL protocols, this study demonstrates robust communication security, preventing data tampering and unauthorized access and thereby promoting safer and more efficient collaborative model training across distributed networks. Furthermore, our approach incorporates an optimized client selection algorithm and a parallelized GPU-based training process that reduces latency and ensures seamless synchronization among participants. Experimental results demonstrate that our system achieves a total processing time of 6.891 seconds, significantly outperforming the 10.24 seconds of normal FL and the 12.32 seconds of FL-Dilithium on the same computing platform. Additionally, the proposed model achieves an accuracy of 94%, surpassing the 93% of normal FL.
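As a concrete illustration of the signing step, here is a minimal sketch using the open-source liboqs-python bindings (`oqs` package); the serialization and the Dilithium3 parameter set are assumptions for the example, not the paper's exact protocol:

```python
# Sketch: signing and verifying a client's model update with Dilithium,
# assuming the liboqs-python bindings (pip install liboqs-python).
import oqs
import pickle  # stand-in serialization for model weights

ALG = "Dilithium3"  # assumed parameter set; the paper may use another

# Client side: sign the serialized update before upload.
client = oqs.Signature(ALG)
public_key = client.generate_keypair()               # secret key stays inside `client`
update_bytes = pickle.dumps({"layer1": [0.1, -0.2]}) # toy model weights
signature = client.sign(update_bytes)

# Server side: verify integrity and authenticity before aggregation.
verifier = oqs.Signature(ALG)
assert verifier.verify(update_bytes, signature, public_key)
```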
- Award ID(s): 2348464
- PAR ID: 10579464
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: Proceedings of IEEE International Symposium on Consumer Electronics
- ISSN: 2158-4001
- ISBN: 979-8-3315-2116-5
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Location: Las Vegas, NV, USA
- Sponsoring Org: National Science Foundation
More Like this
Edge Computing (EC) has seen a continuous rise in popularity, as it addresses the latency and communication issues of edge devices transferring data to remote servers by bringing the cloud closer to the edge. Even though EC does an excellent job of solving the latency and communication issues, it does not solve the privacy issues associated with users transferring personal data to the nearby edge server. Federated Learning (FL) was introduced to address the privacy issues of data transfers to distant servers. FL resolves this by bringing the code to the data rather than sending the data to remote servers: the data stays on the source device, and a Machine Learning (ML) model is sent to the end device instead. End devices train the ML model on local data and then send the model updates back to the server for aggregation. However, asking arbitrary devices to train a model on their local data carries risks, such as a participant poisoning the model by training on malicious data to produce bogus parameters. This paper investigates an approach to mitigate data poisoning attacks in a federated learning setting, highlights its application, and illustrates its practicality and security with numerical results.
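The abstract does not name the specific defense, so as a generic illustration, the sketch below uses coordinate-wise median aggregation, a common robust-aggregation baseline that limits the influence of any single poisoned update; it is not necessarily the paper's method:

```python
# Generic robust-aggregation sketch (coordinate-wise median): a single
# malicious client cannot drag the aggregate arbitrarily far.
import numpy as np

def robust_aggregate(updates):
    """updates: list of 1-D weight-update vectors, one per client."""
    stacked = np.stack(updates)        # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)  # per-coordinate median, outlier-resistant

# Toy example: one poisoned client sends huge values.
honest = [np.array([0.10, 0.20]), np.array([0.12, 0.18]), np.array([0.09, 0.21])]
poisoned = honest + [np.array([100.0, -100.0])]
print(robust_aggregate(poisoned))  # stays close to the honest updates
```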
Over-the-air federated learning (OTA-FL) is a communication-efficient approach for achieving distributed learning tasks. In this paper, we aim to enhance OTA-FL by seamlessly combining sensing into the communication-computation integrated system. Our research reveals that the wireless waveform used to convey OTA-FL parameters possesses inherent properties that make it well-suited for sensing, thanks to its remarkable auto-correlation characteristics. By leveraging the OTA-FL learning statistics, i.e., the means and variances of local gradients in each training round, the sensing results can be embedded without the need for additional time or frequency resources. Finally, by accounting for the imperfections of learning statistics that prior works neglect, we derive an optimized transceiver design that maximizes OTA-FL performance. Simulations validate that the proposed method not only achieves outstanding sensing performance but also significantly lowers the learning error bound.
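For context, a minimal sketch of how per-round gradient statistics are typically used for pre- and post-processing in OTA-FL; the normalization form is an assumption, and the sensing embedding itself is beyond this sketch:

```python
# Sketch: clients normalize local gradients by the round's mean/variance so
# that analog superposition over the air yields their sum within the power
# budget; the server inverts the normalization after aggregation.
import numpy as np

def normalize_for_ota(local_grad, round_mean, round_var):
    # Zero-mean, unit-variance symbols respect the transmit-power constraint.
    return (local_grad - round_mean) / np.sqrt(round_var + 1e-12)

def denormalize_at_server(ota_sum, num_clients, round_mean, round_var):
    # Invert the normalization on the over-the-air sum to recover the
    # average gradient: (sigma * sum / K) + mean.
    return ota_sum * np.sqrt(round_var + 1e-12) / num_clients + round_mean
```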
As a promising approach to dealing with distributed data, Federated Learning (FL) has achieved major advancements in recent years. FL enables collaborative model training by exploiting raw data dispersed across multiple edge devices. However, the data is generally non-independent and identically distributed, i.e., statistical heterogeneity, and the edge devices differ significantly in both computation and communication capacity, i.e., system heterogeneity. Statistical heterogeneity leads to severe accuracy degradation, while system heterogeneity significantly prolongs the training process. To address these heterogeneity issues, we propose FedASMU, an Asynchronous Staleness-aware Model Update FL framework with two novel methods. First, we propose an asynchronous FL system model with a dynamic aggregation method between updated local models and the global model on the server for superior accuracy and high efficiency. Second, we propose an adaptive local model adjustment method that aggregates the fresh global model with local models on devices to further improve accuracy. Extensive experimentation with 6 models and 5 public datasets demonstrates that FedASMU significantly outperforms baseline approaches in terms of accuracy (0.60% to 23.90% higher) and efficiency (3.54% to 97.98% faster).
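A minimal sketch of a staleness-aware asynchronous update in the spirit of FedASMU, with an illustrative decay rule (the paper's actual weighting is more elaborate than this):

```python
# Sketch: the server mixes an arriving local model into the global model
# with a weight that decays with staleness (rounds elapsed since dispatch).
def async_update(global_w, local_w, staleness, base_lr=0.5):
    alpha = base_lr / (1.0 + staleness)  # staler updates get smaller weight
    return [(1 - alpha) * g + alpha * l for g, l in zip(global_w, local_w)]

# Example: an update that is 3 rounds stale moves the global model less.
print(async_update([1.0], [0.0], staleness=0))  # [0.5]
print(async_update([1.0], [0.0], staleness=3))  # [0.875]
```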