Search Results
Search for: All records
Total Resources: 3
Federated learning (FL) involves training a model over massive distributed devices while keeping the training data localized and private. This form of collaborative learning exposes new tradeoffs among model convergence speed, model accuracy, balance across clients, and communication cost, and raises new challenges: (1) the straggler problem, where clients lag due to heterogeneity in data or in computing and network resources, and (2) the communication bottleneck, where a large number of clients communicate their local updates to a central server and overwhelm it. Many existing FL methods optimize along only a single dimension of this tradeoff space. Existing solutions use asynchronous model updating or tiering-based synchronous mechanisms to tackle the straggler problem. However, asynchronous methods can easily create a communication bottleneck, while tiering may introduce biases that favor faster tiers with shorter response latencies. To address these issues, we present FedAT, a novel Federated learning system with Asynchronous Tiers under Non-i.i.d. training data. FedAT synergistically combines synchronous intra-tier training and asynchronous cross-tier training. By bridging synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect, improving convergence speed and test accuracy. FedAT uses a straggler-aware, weighted aggregation heuristic to steer and balance training across clients for further accuracy improvement.
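The two building blocks the abstract describes, latency-based tiering and straggler-aware weighted aggregation, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names are invented here, and the weighting rule (each tier weighted in inverse proportion to how often it has updated, so slower tiers are not drowned out by faster ones) is an assumed heuristic; the paper's exact formula may differ.

```python
import numpy as np

def tier_clients_by_latency(latencies, num_tiers):
    """Partition client indices into tiers of similar response latency
    (illustrative): sort by latency, then split into equal-size groups."""
    order = np.argsort(latencies)
    return np.array_split(order, num_tiers)

def aggregate_cross_tier(tier_models, tier_update_counts):
    """Straggler-aware cross-tier aggregation (assumed heuristic):
    weight each tier's latest model inversely to its update count,
    so infrequently updating (straggler) tiers keep influence."""
    counts = np.asarray(tier_update_counts, dtype=float)
    inv = 1.0 / counts                   # slower tiers -> larger raw weight
    weights = inv / inv.sum()            # normalize to sum to 1
    stacked = np.stack(tier_models)      # shape: (num_tiers, model_dim)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: 4 clients split into 2 tiers; the fast tier has updated 3x,
# the slow tier once, so the slow tier gets 3x the aggregation weight.
tiers = tier_clients_by_latency([5.0, 1.0, 3.0, 2.0], num_tiers=2)
global_model = aggregate_cross_tier([np.ones(2), np.zeros(2)], [3, 1])
```

Intra-tier training would then proceed synchronously (a plain FedAvg round within each tier), while the server folds finished tiers into the global model asynchronously via `aggregate_cross_tier`.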
Wang, Junxiang; Chai, Zheng; Cheng, Yue; Zhao, Liang. International Conference on Data Mining.
Chai, Zheng; Ali, Ahsan; Zawad, Syed; Truex, Stacey; Anwar, Ali; Baracaldo, Nathalie; Zhou, Yi; Ludwig, Heiko; Yan, Feng; Cheng, Yue. Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing (HPDC 20).