Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning
Learning representations of sets of nodes in a graph is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the expressive power of GNNs is limited by the 1-Weisfeiler-Lehman (1-WL) test, and thus GNNs generate identical representations for graph substructures that may in fact be very different. More powerful GNNs, proposed recently by mimicking higher-order WL tests, focus only on representing entire graphs, and they are computationally inefficient as they cannot utilize the sparsity of the underlying graph. Here we propose and mathematically analyze a general class of structure-related features, termed Distance Encoding (DE). DE assists GNNs in representing any set of nodes, while providing strictly more expressive power than the 1-WL test. DE captures the distance between the node set whose representation is to be learned and each node in the graph. To capture the distance, DE can apply various graph-distance measures such as shortest path distance or generalized PageRank scores. We propose two ways for GNNs to use DEs: (1) as extra node features, and (2) as controllers of message aggregation in GNNs. Both approaches can utilize the sparse structure of the underlying …
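To make the first usage of DE concrete — distances from a target node set injected as extra node features — here is a minimal sketch using shortest path distance as the graph-distance measure. The adjacency-dict representation and function name are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def distance_encoding(adj, node_set):
    """Shortest-path-distance DE: for each node, the distance to the
    nearest node in `node_set`, via multi-source BFS. `adj` maps each
    node to a list of its neighbors. (Illustrative sketch only.)"""
    dist = {v: float("inf") for v in adj}
    queue = deque()
    for s in node_set:
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # usable as an extra per-node input feature

# Path graph 0-1-2-3-4; DE with respect to the node set {0, 4}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
de = distance_encoding(path, {0, 4})
```

Nodes equidistant from the set under 1-WL-style message passing can now receive distinct features, which is the source of the extra expressive power.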
Publication Date:
NSF-PAR ID: 10219229
Journal Name: Neural Information Processing Systems (NeurIPS)
Sponsoring Org: National Science Foundation
More Like this


Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means that GNNs are not able to predict node clustering coefficients and shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to the limitations of existing GNNs. ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing. To embed a given node, ID-GNN first extracts the ego network centered at that node, then conducts rounds of heterogeneous message passing, where different sets of parameters are applied to the center node than to the other surrounding nodes in the ego network. We further propose a simplified but faster version of ID-GNN that injects node identity information as augmented node features. Altogether, both versions of ID-GNN represent general extensions of message passing GNNs, where experiments show that transforming existing GNNs to ID-GNNs yields on average a 40% accuracy improvement on challenging node, edge, and graph property prediction tasks; 3% accuracy …
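The two-step recipe above — extract the ego network, then pass messages with center-specific parameters — can be sketched as follows. The scalar features and the weights `W_CENTER`/`W_OTHER` are hypothetical stand-ins for the learned parameter sets; this is not the authors' implementation.

```python
def ego_network(adj, center, k):
    """Nodes within k hops of `center` (BFS over an adjacency dict)."""
    frontier, seen = {center}, {center}
    for _ in range(k):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

def id_message_passing(adj, feat, center, k):
    """One round of identity-aware message passing on the k-hop ego
    network: messages from the center node use a different (assumed)
    weight than all other messages. Illustrative sketch only."""
    W_CENTER, W_OTHER = 2.0, 1.0  # hypothetical "parameter sets"
    nodes = ego_network(adj, center, k)
    new_feat = {}
    for v in nodes:
        msgs = [(W_CENTER if u == center else W_OTHER) * feat[u]
                for u in adj[v] if u in nodes]
        new_feat[v] = feat[v] + sum(msgs)
    return new_feat

# Star graph: center 0 connected to leaves 1, 2, 3; all features 1.0.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
out = id_message_passing(star, {v: 1.0 for v in star}, center=0, k=1)
```

Because leaves receive a center-weighted message while the center does not, node embeddings become sensitive to identity — the property a homogeneous 1-WL-bounded GNN lacks.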

Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of the messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically …
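The core VQ step — replacing each node representation with its nearest entry in a small codebook of reference vectors — can be illustrated with a minimal nearest-neighbor assignment. The function and the toy codebook are assumptions for exposition, not VQ-GNN's actual layer.

```python
def quantize(vectors, codebook):
    """Assign each node representation to its nearest codebook entry
    (squared Euclidean distance). Downstream message passing can then
    use the few reference vectors instead of all node representations.
    Illustrative sketch only."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sqdist(v, codebook[i]))
            for v in vectors]

# Four node representations, compressed against a 2-entry codebook.
reps = [(0.1, 0.0), (0.2, 0.1), (0.9, 1.0), (1.1, 0.9)]
code = [(0.0, 0.0), (1.0, 1.0)]
assign = quantize(reps, code)
```

Messages to a mini-batch are then aggregated from the (few) codebook vectors, so no message is dropped the way sampling would drop it.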

Graph Neural Networks (GNNs) have recently been used for node and graph classification tasks with great success, but GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels. In this work, we consider the task of inductive node classification using GNNs in supervised and semi-supervised settings, with the goal of incorporating label dependencies. Because current GNNs are not universal (i.e., most-expressive) graph representations, we propose a general collective learning approach to increase the representation power of any existing GNN. Our framework combines ideas from collective classification with self-supervised learning, and uses a Monte Carlo approach to sampling embeddings for inductive learning across graphs. We evaluate performance on five real-world network datasets and demonstrate consistent, significant improvement in node classification accuracy, for a variety of state-of-the-art GNNs.

The ability to detect and count certain substructures in graphs is important for solving many tasks on graph-structured data, especially in the contexts of computational chemistry and biology as well as social network analysis. Inspired by this, we propose to study the expressive power of graph neural networks (GNNs) via their ability to count attributed graph substructures, extending recent works that examine their power in graph isomorphism testing and function approximation. We distinguish between two types of substructure counting: induced-subgraph-count and subgraph-count, and establish both positive and negative answers for popular GNN architectures. Specifically, we prove that Message Passing Neural Networks (MPNNs), 2-Weisfeiler-Lehman (2-WL) and 2-Invariant Graph Networks (2-IGNs) cannot perform induced-subgraph-count of any connected substructure consisting of 3 or more nodes, while they can perform subgraph-count of star-shaped substructures. As an intermediary step, we prove that 2-WL and 2-IGNs are equivalent in distinguishing non-isomorphic graphs, partly answering an open problem raised in [38]. We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with a finite number of iterations. We then conduct experiments that support the theoretical results for MPNNs and 2-IGNs. Moreover, motivated by substructure counting and inspired by [45], …
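The subgraph-count versus induced-subgraph-count distinction can be seen on the smallest interesting pattern, the 3-node path P3: in a triangle, P3 occurs as a subgraph (any two incident edges) but never as an induced subgraph (every 3-node induced subgraph is the triangle itself). A minimal brute-force sketch, with assumed names, makes this concrete:

```python
from itertools import combinations

def p3_counts(adj):
    """Count 3-node paths (P3): as subgraphs (any pair of incident
    edges) and as induced subgraphs (the two endpoints must NOT be
    adjacent). `adj` maps each node to a set of neighbors."""
    sub = ind = 0
    for a, b, c in combinations(list(adj), 3):
        n_edges = (b in adj[a]) + (c in adj[b]) + (c in adj[a])
        if n_edges >= 2:
            # each pair of edges among {a, b, c} is one P3 subgraph
            sub += {2: 1, 3: 3}[n_edges]
        if n_edges == 2:
            ind += 1  # exactly two edges: the induced subgraph is P3
    return sub, ind

# Triangle: three P3 subgraphs, zero induced P3s.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

The paper's negative result says 1-WL-bounded architectures cannot compute the induced count for any connected pattern on 3+ nodes, while star-shaped subgraph counts (like the `sub` value here for stars) remain within reach.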