The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel distributed training framework called GIST. GIST disjointly partitions the parameters of a GCN model into several, smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, GIST (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with GIST, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of 8×, to SOTA performance on the Amazon2M dataset.
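The core mechanism in this abstract, disjointly partitioning a layer's parameters into independently trainable shards, can be sketched in a few lines. Below is a minimal illustration assuming PyTorch; the helper name `partition_weight`, the shard count, and the layer size are hypothetical choices for demonstration, not the paper's actual API.

```python
# Minimal sketch: disjointly split one GCN layer's weight matrix along its
# hidden (output-feature) dimension, yielding one smaller shard per sub-GCN.
# Assumes PyTorch; `partition_weight` is an illustrative name, not GIST's API.
import torch

def partition_weight(weight: torch.Tensor, num_subnets: int):
    """Randomly partition the columns of `weight` into disjoint shards."""
    perm = torch.randperm(weight.shape[1])   # random disjoint column partition
    shards = perm.chunk(num_subnets)          # one index set per sub-GCN
    return [weight[:, idx] for idx in shards]

# Example: a 128x256 layer split across 4 workers -> four 128x64 sub-layers
# that could be trained independently and in parallel, then re-assembled.
w = torch.randn(128, 256)
sub_weights = partition_weight(w, num_subnets=4)
print([tuple(s.shape) for s in sub_weights])  # [(128, 64), (128, 64), ...]
```

Because the shards are disjoint, each sub-GCN holds only a fraction of the full model's parameters, which is what lets the aggregate model exceed single-GPU memory capacity.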
Yuan, Binhang; Wolfe, Cameron R.; Dun, Chen; Tang, Yuxin; Kyrillidis, Anastasios; Jermaine, Chris (Proceedings of the VLDB Endowment)
Sapoval, Nicolae; Aghazadeh, Amirali; Nute, Michael G.; Antunes, Dinler A.; Balaji, Advait; Baraniuk, Richard; Barberan, C. J.; Dannenfelser, Ruth; Dun, Chen; Edrisi, Mohammadamin; et al. (Nature Communications)
Abstract: Deep Learning (DL) has recently enabled unprecedented advances in one of the grand challenges in computational biology: the half-century-old problem of protein structure prediction. In this paper we discuss recent advances, limitations, and future perspectives of DL in five broad areas: protein structure prediction, protein function prediction, genome engineering, systems biology and data integration, and phylogenetic inference. We discuss each application area and cover the main bottlenecks of DL approaches, such as training data, problem scope, and the ability to leverage existing DL architectures in new contexts. To conclude, we provide a summary of the subject-specific and general challenges for DL across the biosciences.