Title: Supervised Graph Representation Learning for Modeling the Relationship between Structural and Functional Brain Connectivity
In this paper, we propose a supervised graph representation learning method to model the relationship between brain functional connectivity (FC) and structural connectivity (SC) through a graph encoder-decoder system. The graph convolutional network (GCN) model is leveraged in the encoder to learn lower-dimensional node representations (i.e., node embeddings) integrating information from both node attributes and network topology. In doing so, the encoder manages to capture both direct and indirect interactions between brain regions in the node embeddings, which later help reconstruct empirical FC networks. From the node embeddings, graph representations are learnt to embed the entire graphs into a vector space. Our end-to-end model utilizes a multi-objective loss function to simultaneously learn node representations for FC network reconstruction and graph representations for subject classification. The experiment on a large population of non-drinkers and heavy drinkers shows that our model can provide a characterization of the population pattern in the SC-FC relationship, while also learning features that capture individual uniqueness for subject classification. The identified key brain subnetworks show significant between-group differences and support the promising prospect of GCN-based graph representation learning on brain networks to model human brain activity and function.
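The abstract does not include an implementation; a minimal numpy sketch of the encoder-decoder idea it describes (one GCN layer producing node embeddings from an SC graph, followed by an outer-product decoder reconstructing FC) might look like the following. All function names, dimensions, and initializations here are illustrative assumptions, and the multi-objective training loop and classification head are omitted:

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation rule."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_encoder(A_sc, X, W):
    """One GCN layer: node embeddings H = ReLU(A_hat X W)."""
    return np.maximum(normalize_adj(A_sc) @ X @ W, 0.0)

def outer_product_decoder(H):
    """Reconstruct a (symmetric) functional connectivity matrix
    from the learnt node embeddings."""
    return H @ H.T

rng = np.random.default_rng(0)
n, f, d = 8, 4, 3                       # brain regions, input features, embedding dim
A_sc = rng.random((n, n))
A_sc = (A_sc + A_sc.T) / 2              # make the SC graph symmetric
np.fill_diagonal(A_sc, 0)
X = rng.standard_normal((n, f))         # node attributes
W = rng.standard_normal((f, d))         # untrained filter weights
H = gcn_encoder(A_sc, X, W)             # node embeddings
FC_hat = outer_product_decoder(H)       # reconstructed FC network
```

In the full model, `W` would be trained so that `FC_hat` matches the empirical FC matrix while a readout of `H` feeds a classification loss.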
Award ID(s):
1809356 1750428
NSF-PAR ID:
10222998
Journal Name:
2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Page Range / eLocation ID:
9065 to 9069
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, the relationship between functional and structural brain networks is investigated by training a graph encoder-decoder system to learn the mapping from brain structural connectivity (SC) to functional connectivity (FC). Our work leverages a graph convolutional network (GCN) model in the encoder which integrates both nodal attributes and the network topology information to generate new graph representations in lower dimensions. Using brain SC graphs as inputs, the novel GCN-based encoder-decoder system manages to account for both direct and indirect interactions between brain regions to reconstruct the empirical FC networks. In doing so, the latent variables within the system (i.e., the learnt low-dimensional embeddings) capture important information regarding the relation between functional and structural networks. By decomposing the reconstructed functional networks in terms of the output of each graph convolution filter, we can extract those brain regions which contribute most to the generation of FC networks from their SC counterparts. Experiments on a large population of healthy subjects from the Human Connectome Project show our model can learn a generalizable and interpretable SC-FC relationship. Overall, results here support the promising prospect of using GCNs to discover more about the complex nature of human brain activity and function. 
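The per-filter decomposition described above (attributing parts of the reconstructed FC network to individual graph-convolution filters) can be illustrated with a small numpy sketch. Since the outer-product reconstruction is a sum of rank-one terms, one per embedding dimension, each filter's contribution falls out directly; the variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3
H = rng.standard_normal((n, d))   # node embeddings: column k = output of filter k

FC_hat = H @ H.T                  # full outer-product reconstruction
# Rank-one contribution of each graph-convolution filter k:
contributions = [np.outer(H[:, k], H[:, k]) for k in range(d)]
assert np.allclose(FC_hat, sum(contributions))

# Brain regions contributing most to filter 0 = largest-magnitude entries
top_regions = np.argsort(-np.abs(H[:, 0]))[:2]
```

Ranking regions by their loading on each filter is one simple way to read off which regions drive the SC-to-FC mapping.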
  2. Advances in graph signal processing for network neuroscience offer a unique pathway to integrate brain structure and function, with the goal of revealing some of the brain's organizing principles at the system level. In this direction, we develop a supervised graph representation learning framework to model the relationship between brain structural connectivity (SC) and functional connectivity (FC) via a graph encoder-decoder system. Specifically, we propose a Siamese network architecture equipped with graph convolutional encoders to learn graph (i.e., subject)-level embeddings that preserve application-dependent similarity measures between brain networks. This way, we effectively increase the number of training samples and bring in the flexibility to incorporate additional prior information via the prescribed target graph-level distance. While information on the brain structure-function coupling is implicitly distilled via reconstruction of brain FC from SC, our model also manages to learn representations that preserve the similarity between input graphs. The superior discriminative power of the learnt representations is demonstrated in downstream tasks including subject classification and visualization. All in all, this work advocates the prospect of leveraging learnt graph-level, similarity-preserving embeddings for brain network analysis, by bringing to bear standard tools of metric data analysis. 
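The Siamese idea sketched above (twin weight-sharing encoders whose graph-level embedding distance is steered toward a prescribed target distance) can be written compactly; the following numpy sketch uses a mean-pooling readout and squared-error distance loss as illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def encode(A, X, W):
    """One shared GCN layer: H = ReLU(A_hat X W)."""
    A_t = A + np.eye(A.shape[0])
    deg = A_t.sum(axis=1)
    A_hat = A_t / np.sqrt(np.outer(deg, deg))
    return np.maximum(A_hat @ X @ W, 0.0)

def graph_embedding(A, X, W):
    """Graph-level embedding: mean-pool the node embeddings."""
    return encode(A, X, W).mean(axis=0)

def siamese_loss(A1, X1, A2, X2, W, target_dist):
    """Both branches share W; penalize deviation of the embedding
    distance from the prescribed target distance between the graphs."""
    z1 = graph_embedding(A1, X1, W)
    z2 = graph_embedding(A2, X2, W)
    return (np.linalg.norm(z1 - z2) - target_dist) ** 2

rng = np.random.default_rng(2)
n, f, d = 5, 3, 2
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1) + np.triu(A, 1).T     # symmetric, zero diagonal
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, d))
loss_same = siamese_loss(A, X, A, X, W, target_dist=0.0)  # identical pair
```

Pairing subjects this way effectively squares the number of training samples, which is the flexibility the abstract refers to.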
  3. Network embedding has been an effective tool to analyze heterogeneous networks (HNs) by representing nodes in a low-dimensional space. Although many recent methods have been proposed for representation learning of HNs, there is still much room for improvement. Random-walk-based methods are currently popular methods to learn network embedding; however, they are random and limited by the length of sampled walks, and have difficulty capturing network structural information. Some recent studies proposed using meta paths to express the sample relationship in HNs. Another popular graph learning model, the graph convolutional network (GCN), is known to be capable of better exploitation of network topology, but the current design of GCN is intended for homogeneous networks. This paper proposes a novel combination of meta-graph and graph convolution, the meta-graph based graph convolutional networks (MGCN). To fully capture the complex long semantic information, MGCN utilizes different meta-graphs in HNs. As different meta-graphs express different semantic relationships, MGCN learns the weights of different meta-graphs to make up for the loss of semantics when applying GCN. In addition, we improve the current convolution design by adding node self-significance. To validate our model in learning feature representation, we present comprehensive experiments on four real-world datasets and two representation tasks: classification and link prediction. WMGCN's representations can improve accuracy scores by up to around 10% in comparison to other popular representation learning models. What's more, WMGCN's feature learning outperforms other popular baselines. The experimental results clearly show our model is superior over other state-of-the-art representation learning algorithms.
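The core MGCN operation described above, combining several meta-graph adjacency matrices with learnable weights and adding a self-significance term before a standard graph convolution, can be sketched in numpy as follows. The softmax weighting, the identity-scaled self-significance term, and all names are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mgcn_layer(meta_adjs, X, W, alpha, beta=1.0):
    """Weight the meta-graph adjacency matrices by softmax(alpha),
    add beta*I for node self-significance, then apply one graph convolution."""
    w = softmax(alpha)
    A = sum(wk * Ak for wk, Ak in zip(w, meta_adjs))
    A = A + beta * np.eye(X.shape[0])       # self-significance term
    deg = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(deg, deg)) # symmetric normalization
    return np.maximum(A_hat @ X @ W, 0.0)

rng = np.random.default_rng(3)
n, f, d = 6, 4, 2
meta_adjs = [rng.integers(0, 2, (n, n)) for _ in range(3)]
meta_adjs = [np.triu(M, 1) + np.triu(M, 1).T for M in meta_adjs]
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, d))
alpha = np.zeros(3)                         # uniform meta-graph weights before training
H = mgcn_layer(meta_adjs, X, W, alpha)
```

During training, `alpha` would be learnt jointly with `W`, letting the model decide which meta-graph semantics matter most.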
  4. Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification. Existing graph representation learning methods (e.g., based on random walk and contrastive learning) are limited to maximizing the local similarity of connected nodes. Such pair-wise learning schemes could fail to capture the global distribution of representations, since it has no explicit constraints on the global geometric properties of representation space. To this end, we propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner via maximizing rate reduction. In this way, G2R maps nodes in distinct groups (implicitly stored in the adjacency matrix) into different subspaces, while each subspace is compact and different subspaces are dispersedly distributed. G2R adopts a graph neural network as the encoder and maximizes the rate reduction with the adjacency matrix. Furthermore, we theoretically and empirically demonstrate that rate reduction maximization is equivalent to maximizing the principal angles between different subspaces. Experiments on real-world datasets show that G2R outperforms various baselines on node classification and community detection tasks. 
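The rate-reduction objective that G2R maximizes (expand the coding rate of all embeddings while compressing the rate within each group) can be demonstrated numerically. The following sketch uses the standard coding-rate formula; the toy data and helper names are assumptions for illustration, not the paper's code:

```python
import numpy as np

def rate(Z, eps=0.5):
    """Coding rate of embeddings Z (d x n): 0.5 * logdet(I + d/(n*eps^2) Z Z^T)."""
    d, n = Z.shape
    sign, val = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * val

def rate_reduction(Z, labels, eps=0.5):
    """Rate of the whole set minus the label-weighted rates of each group."""
    n = Z.shape[1]
    comp = sum((np.sum(labels == c) / n) * rate(Z[:, labels == c], eps)
               for c in np.unique(labels))
    return rate(Z, eps) - comp

# Two groups in orthogonal subspaces vs. both collapsed into one subspace.
g0 = np.array([[1.0, 0.9, 1.1, 0.8], [0.0, 0.0, 0.0, 0.0]])
g1 = np.vstack([g0[1], g0[0]])              # same points rotated onto the other axis
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Z_orth = np.hstack([g0, g1])                # dispersed subspaces
Z_same = np.hstack([g0, g0])                # collapsed
rr_orth = rate_reduction(Z_orth, labels)
rr_same = rate_reduction(Z_same, labels)
```

Here `rr_orth > rr_same`, matching the abstract's point that maximizing rate reduction pushes distinct groups into dispersed subspaces while keeping each group compact.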
  5. Graph convolutional neural networks (GCNs) embed nodes in a graph into Euclidean space, which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure. Hyperbolic geometry offers an exciting alternative, as it enables embeddings with much smaller distortion. However, extending GCNs to hyperbolic geometry presents several unique challenges because it is not clear how to define neural network operations, such as feature transformation and aggregation, in hyperbolic space. Furthermore, since input features are often Euclidean, it is unclear how to transform the features into hyperbolic embeddings with the right amount of curvature. Here we propose Hyperbolic Graph Convolutional Neural Network (HGCN), the first inductive hyperbolic GCN that leverages both the expressiveness of GCNs and hyperbolic geometry to learn inductive node representations for hierarchical and scale-free graphs. We derive GCN operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with different trainable curvature at each layer. Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure, and leads to improved performance when compared to Euclidean analogs, even with very low dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving state-of-the-art on the PubMed dataset.
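HGCN itself works in the hyperboloid model; as a simpler illustration of the key step the abstract mentions, mapping Euclidean input features into hyperbolic space and back, here are the exponential and logarithmic maps at the origin of the Poincaré ball (curvature parameter `c`; names and model choice are illustrative, not HGCN's exact formulation):

```python
import numpy as np

def exp0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature -c:
    sends a Euclidean tangent vector to a point strictly inside the ball."""
    sqrt_c = np.sqrt(c)
    nv = np.linalg.norm(v)
    if nv == 0:
        return v
    return np.tanh(sqrt_c * nv) * v / (sqrt_c * nv)

def log0(y, c=1.0):
    """Logarithmic map at the origin: the inverse of exp0."""
    sqrt_c = np.sqrt(c)
    ny = np.linalg.norm(y)
    if ny == 0:
        return y
    return np.arctanh(sqrt_c * ny) * y / (sqrt_c * ny)

v = np.array([0.3, -0.2, 0.1])
y = exp0(v)     # hyperbolic embedding of the Euclidean feature vector
v_back = log0(y)
```

A hyperbolic GCN alternates such maps around its Euclidean matrix operations: map to the tangent space, transform and aggregate there, then map back with a (possibly trainable) curvature per layer.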