Title: CM-GCN: A Distributed Framework for Graph Convolutional Networks using Cohesive Mini-batches
Graph convolutional network (GCN) has been shown effective in many applications with graph structures. However, training a large-scale GCN is still challenging due to the high computation cost that grows with the size of the graph. In this paper, we propose CM-GCN, a distributed GCN framework using cohesive mini-batches to accelerate large-scale GCN training. The cohesive mini-batches group nodes that are tightly connected in the graph. As a result, CM-GCN can reduce the computation required to train a GCN. We propose a computation cost function to quantify the computation required for mini-batches. By exploiting the submodular property of the computation cost function, we develop an efficient algorithm to partition nodes into tightly coupled mini-batches. Based on the computation cost function, we evenly distribute the workloads of mini-batches across workers. We design asynchronous computations between GCN layers to further eliminate waiting among workers. We implement the CM-GCN framework and evaluate its performance on graphs that contain millions of nodes. Our evaluation shows that CM-GCN achieves up to 3x speedup without compromising training accuracy.
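To make the batching idea above concrete, the following Python sketch greedily grows mini-batches of tightly connected nodes and then balances them across workers. It assumes, purely for illustration, that a mini-batch's computation cost is the number of distinct nodes its one-hop neighborhood touches (a submodular coverage function); the greedy rule, batch size, and worker-assignment heuristic are not the paper's exact algorithm.

```python
# A minimal sketch of cohesive mini-batch construction, assuming the per-batch
# cost is the size of the batch's 1-hop neighborhood coverage (a submodular
# coverage function). Illustrative only, not CM-GCN's exact procedure.
from collections import defaultdict

def neighborhood_cost(batch, adj):
    """Cost of a mini-batch = distinct nodes touched by one GCN layer."""
    covered = set(batch)
    for v in batch:
        covered.update(adj[v])
    return len(covered)

def build_cohesive_batches(adj, batch_size):
    """Greedily grow batches so each new node adds minimal marginal cost."""
    unassigned = set(adj)
    batches = []
    while unassigned:
        seed = next(iter(unassigned))
        batch, covered = [seed], set(adj[seed]) | {seed}
        unassigned.remove(seed)
        while len(batch) < batch_size and unassigned:
            # candidate pool: unassigned nodes already in the coverage, else any node
            frontier = [v for v in covered if v in unassigned] or list(unassigned)
            # pick the node with the smallest marginal coverage increase
            best = min(frontier, key=lambda v: len(set(adj[v]) - covered))
            batch.append(best)
            covered |= set(adj[best]) | {best}
            unassigned.remove(best)
        batches.append(batch)
    return batches

def assign_to_workers(batches, adj, num_workers):
    """Longest-processing-time heuristic to balance worker loads by cost."""
    loads = [0] * num_workers
    assignment = defaultdict(list)
    for b in sorted(batches, key=lambda b: -neighborhood_cost(b, adj)):
        w = loads.index(min(loads))
        assignment[w].append(b)
        loads[w] += neighborhood_cost(b, adj)
    return assignment

# toy usage
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
batches = build_cohesive_batches(adj, batch_size=2)
workers = assign_to_workers(batches, adj, num_workers=2)
```

Because the coverage cost is submodular, adding a node whose neighbors are already covered increases the cost only marginally, which is why grouping tightly connected nodes keeps per-batch computation low.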
Award ID(s):
1908536
NSF-PAR ID:
10356562
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE International Conference on Big Data (Big Data)
Page Range / eLocation ID:
153 to 163
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel distributed training framework. The framework disjointly partitions the parameters of a GCN model into several smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, it (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with this framework, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of 8×, to SOTA performance on the Amazon2M dataset.
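The abstract above omits the framework's name, but the core idea, splitting a GCN's parameters into disjoint sub-models that train in parallel, can be illustrated with a small sketch. The even column-wise split and naive concatenation below are assumptions for illustration only.

```python
# A minimal sketch of disjointly partitioning one GCN layer's hidden dimensions
# into independently trainable sub-models, then stitching them back together.
# The even split and simple reassembly are illustrative assumptions.
import numpy as np

def partition_layer(weight, num_subs):
    """Split a (in_dim, hidden_dim) weight matrix column-wise into sub-layers."""
    return np.array_split(weight, num_subs, axis=1)

def reassemble_layer(sub_weights):
    """Concatenate independently trained sub-layer weights back into one layer."""
    return np.concatenate(sub_weights, axis=1)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))          # one GCN layer's parameters
subs = partition_layer(W, num_subs=4)       # four disjoint sub-GCNs
# ... each sub-weight would be trained on its own worker, in parallel ...
W_full = reassemble_layer(subs)
assert W_full.shape == W.shape
```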

     
  2. Measuring the importance of nodes in a graph is one of the key aspects of graph analysis. Betweenness centrality (BC) measures the amount of influence that a node has over the flow of information in a graph. However, the computational complexity of calculating BC is extremely high for large-scale graphs, especially when analyzing road networks with millions of nodes and edges. In this study, we propose a deep learning architecture, RoadCaps, to estimate BC with sub-second latencies. RoadCaps aggregates features from neighbor nodes using Graph Convolutional Networks and estimates node-level BC by mapping low-level concepts to high-level information using Capsule Networks. Our empirical benchmarks demonstrate that RoadCaps outperforms base models such as GCN and GCNFCL in both accuracy and robustness. On average, RoadCaps generates a node's BC value in 7.5 milliseconds.
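As a rough illustration of the neighbor aggregation such a GCN-based BC estimator builds on, the sketch below runs one symmetric-normalized GCN propagation step over a toy graph and applies a linear readout per node. The random weights and the readout are placeholders; the capsule component and training procedure are not shown.

```python
# A minimal numpy sketch of GCN-style neighbor aggregation followed by a
# per-node linear readout. Weights are random placeholders, not RoadCaps.
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation: D^-1/2 (A + I) D^-1/2 X W with ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 8))                    # per-node input features
hidden = gcn_layer(adj, feats, rng.standard_normal((8, 16)))
bc_estimate = hidden @ rng.standard_normal((16, 1))    # linear readout per node
```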
  3. Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art deep learning model for representation learning on graphs. However, it remains notoriously challenging to train and run inference with GCNs over large graph datasets, limiting their application to large real-world graphs and hindering the exploration of deeper and more sophisticated GCN architectures. This is because, as the graph size grows, the sheer number of node features and the large adjacency matrix can easily explode the required memory and data movements. To tackle these challenges, we explore the possibility of drawing lottery tickets when sparsifying GCN graphs, i.e., subgraphs that largely shrink the adjacency matrix yet are capable of achieving accuracy comparable to or even better than their full graphs. Specifically, we for the first time discover the existence of graph early-bird (GEB) tickets that emerge at the very early stage of sparsifying GCN graphs, and propose a simple yet effective detector to automatically identify the emergence of such GEB tickets. Furthermore, we advocate graph-model co-optimization and develop a generic, efficient GCN early-bird training framework dubbed GEBT that can significantly boost the efficiency of GCN training by (1) drawing joint early-bird tickets between the GCN graphs and models and (2) enabling simultaneous sparsification of both the GCN graphs and models. Experiments on various GCN models and datasets consistently validate our GEB finding and the effectiveness of GEBT; for example, GEBT achieves 80.2% ~ 85.6% and 84.6% ~ 87.5% savings in GCN training and inference costs, respectively, while offering comparable or even better accuracy than state-of-the-art methods. Our source code and supplementary appendix are available at https://github.com/RICE-EIC/Early-Bird-GCN.
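The early-bird detection idea can be sketched as follows: prune a learned edge-weight matrix to a binary mask each epoch and declare a ticket once consecutive masks stop changing. The pruning ratio, Hamming-distance metric, and threshold below are illustrative assumptions rather than GEBT's actual detector.

```python
# A minimal sketch of an early-bird-style detector over graph sparsification
# masks. The keep ratio, distance metric, and tolerance are assumptions.
import numpy as np

def prune_mask(edge_weights, keep_ratio):
    """Keep the largest-magnitude fraction of entries as a binary mask."""
    flat = np.abs(edge_weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]
    return (np.abs(edge_weights) >= thresh).astype(np.uint8)

def mask_distance(m1, m2):
    """Normalized Hamming distance between two pruning masks."""
    return np.mean(m1 != m2)

def ticket_emerged(mask_history, window=3, tol=0.02):
    """Ticket is declared when the last few masks differ by less than `tol`."""
    if len(mask_history) < window + 1:
        return False
    recent = mask_history[-(window + 1):]
    return all(mask_distance(a, b) < tol for a, b in zip(recent, recent[1:]))

# toy usage: simulate edge weights whose updates shrink over epochs
rng = np.random.default_rng(0)
weights = rng.standard_normal((6, 6))
history = []
for epoch in range(10):
    weights += 0.01 * rng.standard_normal((6, 6)) / (epoch + 1)
    history.append(prune_mask(weights, keep_ratio=0.3))
    if ticket_emerged(history):
        print(f"graph early-bird ticket detected at epoch {epoch}")
        break
```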
  4.

    Networked data often demonstrate the Pareto principle (i.e., the 80/20 rule) with skewed class distributions, where most vertices belong to a few majority classes and minority classes contain only a handful of instances. When presented with imbalanced class distributions, existing graph embedding learning tends to be biased toward nodes from majority classes, leaving nodes from minority classes under-trained. In this paper, we propose Dual-Regularized Graph Convolutional Networks (DR-GCN) to handle multi-class imbalanced graphs, where two types of regularization are imposed to tackle class-imbalanced representation learning. To ensure that all classes are equally represented, we propose a class-conditioned adversarial training process to facilitate the separation of labeled nodes. Meanwhile, to maintain training equilibrium (i.e., retaining quality of fit across all classes), we force unlabeled nodes to follow a latent distribution similar to that of the labeled nodes by minimizing their difference in the embedding space. Experiments on real-world imbalanced graphs demonstrate that DR-GCN outperforms state-of-the-art methods in node classification, graph clustering, and visualization.
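The second regularizer, keeping unlabeled nodes' latent distribution close to that of labeled nodes, can be sketched with simple first-moment matching as a stand-in for whatever divergence DR-GCN actually minimizes; the class-conditioned adversarial term is omitted.

```python
# A minimal sketch of a distribution-matching regularizer between labeled and
# unlabeled node embeddings. Mean matching is an illustrative stand-in only.
import numpy as np

def distribution_matching_loss(emb_labeled, emb_unlabeled):
    """Squared distance between mean embeddings of labeled and unlabeled nodes."""
    gap = emb_labeled.mean(axis=0) - emb_unlabeled.mean(axis=0)
    return float(gap @ gap)

def total_loss(ce_loss, emb_labeled, emb_unlabeled, reg_weight=0.1):
    """Supervised loss plus the distribution-matching regularizer."""
    return ce_loss + reg_weight * distribution_matching_loss(emb_labeled, emb_unlabeled)

rng = np.random.default_rng(0)
loss = total_loss(ce_loss=0.85,
                  emb_labeled=rng.standard_normal((32, 16)),
                  emb_unlabeled=rng.standard_normal((128, 16)))
```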

     
  5. Graph neural networks (GNNs) have achieved tremendous success in many graph learning tasks such as node classification, graph classification and link prediction. For the classification task, GNNs' performance often highly depends on the number of labeled nodes and thus could be significantly hampered due to the expensive annotation cost. The sparse literature on active learning for GNNs has primarily focused on selecting only one sample each iteration, which becomes inefficient for large scale datasets. In this paper, we study the batch active learning setting for GNNs where the learning agent can acquire labels of multiple samples at each time. We formulate batch active learning as a cooperative multi-agent reinforcement learning problem and present a novel reinforced batch-mode active learning framework (BIGENE). To avoid the combinatorial explosion of the joint action space, we introduce a value decomposition method that factorizes the total Q-value into the average of individual Q-values. Moreover, we propose a novel multi-agent Q-network consisting of a graph convolutional network (GCN) component and a gated recurrent unit (GRU) component. The GCN component takes both the informativeness and inter-dependences between nodes into account and the GRU component enables the agent to consider interactions between selected nodes in the same batch. Experimental results on multiple public datasets demonstrate the effectiveness and efficiency of our proposed method.
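The value-decomposition step can be sketched directly: the joint Q-value of a selected batch is the average of per-node Q-values, so a greedy batch choice avoids the combinatorial joint action space. The per-node Q-values below are random placeholders, not outputs of BIGENE's GCN and GRU network.

```python
# A minimal sketch of the value-decomposition idea: joint Q-value of a batch
# equals the average of the selected nodes' individual Q-values.
import numpy as np

def joint_q_value(per_node_q, selected):
    """Total Q-value of a batch = average of the selected nodes' Q-values."""
    return float(np.mean(per_node_q[selected]))

def greedy_batch(per_node_q, batch_size):
    """Pick the batch that maximizes the decomposed joint Q-value."""
    return np.argsort(per_node_q)[-batch_size:]

rng = np.random.default_rng(0)
q = rng.standard_normal(100)            # one Q-value per candidate node
batch = greedy_batch(q, batch_size=5)   # nodes to query labels for
print(joint_q_value(q, batch))
```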