Machine learning frameworks such as graph neural networks typically rely on a given, fixed
graph to exploit relational inductive biases and thus effectively learn from network data.
However, when said graphs are (partially) unobserved, noisy, or dynamic, the problem
of inferring graph structure from data becomes relevant. In this paper, we postulate a
graph convolutional relationship between the observed and latent graphs, and formulate
the graph structure learning task as a network inverse (deconvolution) problem. In lieu of
eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and
truncate proximal gradient iterations to arrive at a parameterized neural network architecture
that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs
in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting
the loss function, and they are inherently inductive as well as node permutation equivariant.
We corroborate GDN’s superior graph learning performance and its generalization to larger
graphs using synthetic data in supervised settings. Moreover, we demonstrate the robustness and representation power of GDNs on real-world neuroimaging and social network datasets.
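As a rough illustration of the unrolling idea (not the paper's exact architecture), the sketch below treats graph deconvolution as sparse recovery under a hypothetical polynomial model T ≈ A + cA², and unrolls a few ISTA-style proximal gradient steps. The per-layer step sizes and thresholds, fixed here, are the kind of quantities a GDN layer would learn.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: promotes sparse edge weights.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_gdn(T, num_layers=5, alphas=None, taus=None):
    """Toy unrolled proximal-gradient graph 'deconvolution' (a sketch).

    Assumes a hypothetical convolutional model T ~ A + 0.1*A^2 and unrolls
    ISTA steps A <- prox(A - alpha * grad); `alphas` and `taus` stand in
    for the learnable per-layer parameters of a GDN.
    """
    alphas = alphas or [0.1] * num_layers
    taus = taus or [0.01] * num_layers
    A = np.zeros_like(T)
    for alpha, tau in zip(alphas, taus):
        resid = A + 0.1 * A @ A - T
        # Gradient of 0.5 * ||A + 0.1*A^2 - T||_F^2 (symmetric case).
        grad = resid + 0.1 * (A @ resid + resid @ A)
        A = soft_threshold(A - alpha * grad, tau)
        np.fill_diagonal(A, 0.0)  # no self-loops
    return A
```

Running the layers on a symmetric observed matrix returns a symmetric, hollow estimate of the latent graph; stacking more layers corresponds to running more (truncated) iterations.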
Graph Regression and Classification using Permutation Invariant Representations
We address the problem of graph regression using graph convolutional
neural networks and permutation invariant representation.
Many graph neural network algorithms can be abstracted
as a series of message passing functions between the
nodes, ultimately producing a set of latent features for each
node. Processing these latent features to produce a single estimate
over the entire graph depends on how the nodes are
ordered in the graph’s representation. We propose a permutation
invariant mapping that produces graph representations
that are invariant to any ordering of the nodes. This mapping
can serve as a pivotal piece in leveraging graph convolutional
networks for graph classification and graph regression problems.
We tested this method and validated our solution on
the QM9 dataset.
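The generic version of such an order-insensitive readout, sum pooling over per-node latent features, can be checked in a couple of lines (illustrative only; the paper proposes a more elaborate invariant mapping):

```python
import numpy as np

def readout(H):
    # Sum pooling over the node axis: the result is identical for any
    # ordering of the rows, i.e. any permutation of the nodes.
    return H.sum(axis=0)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # latent features for 5 nodes, 8 dims each
perm = rng.permutation(5)     # an arbitrary node reordering
assert np.allclose(readout(H), readout(H[perm]))
```

Any symmetric aggregator (mean, max, learned attention pooling) has the same property; what distinguishes methods is how much information about the node features the pooled vector retains.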
NSF-PAR ID: 10392152
Journal Name: AAAI/Graphs and more Complex structures for Learning and Reasoning Workshop
Sponsoring Org: National Science Foundation


Graph neural networks (GNNs) have achieved considerable success on graph-structured data. In light of this, there has been increasing interest in studying their representation power. One line of work focuses on the universal approximation of permutation-invariant functions by certain classes of GNNs, while another demonstrates the limitations of GNNs via graph isomorphism tests. Our work connects these two perspectives and proves their equivalence. We further develop a framework for the representation power of GNNs in the language of sigma-algebras, which incorporates both viewpoints. Using this framework, we compare the expressive power of different classes of GNNs as well as other methods on graphs. In particular, we prove that order-2 Graph G-invariant networks fail to distinguish non-isomorphic regular graphs with the same degree. We then extend them to a new architecture, Ring-GNN, which succeeds in distinguishing these graphs as well as on tasks involving real-world datasets.
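The failure mode on same-degree regular graphs can be reproduced with the classical 1-dimensional Weisfeiler-Leman (1-WL) color-refinement test, which upper-bounds the discriminative power of message-passing GNNs. A minimal sketch (the refinement routine and example graphs are illustrative, not from the paper):

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement; `adj` maps each node index to its neighbor list.

    Returns the histogram of final node colors. Two graphs with identical
    histograms cannot be told apart by standard message-passing GNNs.
    """
    n = len(adj)
    colors = [0] * n
    for _ in range(rounds):
        # New color = own color plus the multiset of neighbor colors.
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(n)]
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
    return Counter(colors)

# K_{3,3}: bipartite and 3-regular (triangle-free).
k33 = [[3, 4, 5], [3, 4, 5], [3, 4, 5], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
# Triangular prism: two triangles joined by a matching, also 3-regular,
# but it contains triangles, so it is not isomorphic to K_{3,3}.
prism = [[1, 2, 3], [0, 2, 4], [0, 1, 5], [0, 4, 5], [1, 3, 5], [2, 3, 4]]

# 1-WL assigns every node the same color in both graphs.
assert wl_colors(k33) == wl_colors(prism)
```

Because every node in a d-regular graph sees the same degree pattern, 1-WL never refines the coloring, so any pair of same-size d-regular graphs collides; architectures like Ring-GNN are designed to go beyond this barrier.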

Unlike images, which are represented on regular dense grids, 3D point clouds are irregular and unordered, so applying convolution to them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied to point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points, comprised of weight and density functions. With respect to a given point, the weight functions are learned with multilayer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in 3D space. Moreover, PointConv can also be used as a deconvolution operator to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv achieve state-of-the-art results on challenging semantic segmentation benchmarks for 3D point clouds. In addition, our experiments converting CIFAR-10 into a point cloud show that networks built on PointConv can match the performance of convolutional networks on 2D images of a similar structure.
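A rough sketch of the weight/density idea (a simplified toy, not the paper's implementation): a tiny MLP maps each neighbor's relative coordinates to convolution weights, while a Gaussian kernel density estimate down-weights densely sampled regions. The network sizes, bandwidth, and single-input-channel simplification are all assumptions made for brevity.

```python
import numpy as np

def mlp_weights(rel_xyz, W1, b1, W2, b2):
    # Tiny MLP mapping relative coordinates (k, 3) to per-neighbor
    # weights (k, c_out); stands in for a learned weight function.
    h = np.maximum(rel_xyz @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

def pointconv_point(center, neighbors, feats, params, bandwidth=0.5):
    """Hypothetical PointConv-style convolution at one query point.

    `neighbors`: (k, 3) coordinates, `feats`: (k,) scalar features.
    Weights come from an MLP of relative coordinates; an inverse kernel
    density estimate compensates for non-uniform point sampling.
    """
    rel = neighbors - center                 # (k, 3) local coordinates
    w = mlp_weights(rel, *params)            # (k, c_out)
    # Gaussian KDE over the neighborhood; dense regions get larger
    # density values and hence smaller effective contributions.
    d2 = ((rel[:, None, :] - rel[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return (w * (feats / density)[:, None]).sum(axis=0)  # (c_out,)

rng = np.random.default_rng(1)
params = (rng.normal(size=(3, 8)), np.zeros(8),
          rng.normal(size=(8, 4)), np.zeros(4))
nbrs = rng.normal(size=(6, 3))
feats = rng.normal(size=6)
out = pointconv_point(np.zeros(3), nbrs, feats, params)
```

Because the weighted sum runs over an unordered neighbor set, the output is permutation-invariant in the neighbors; translation invariance follows from using only relative coordinates.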

Measuring the importance of nodes in a graph is one of the key aspects of graph analysis. Betweenness centrality (BC) measures the amount of influence a node has over the flow of information in a graph. However, the computational complexity of calculating BC is extremely high for large-scale graphs, especially when analyzing road networks with millions of nodes and edges. In this study, we propose a deep learning architecture, RoadCaps, to estimate BC with sub-second latencies. RoadCaps aggregates features from neighboring nodes using Graph Convolutional Networks and estimates node-level BC by mapping low-level concepts to high-level information using Capsule Networks. Our empirical benchmarks demonstrate that RoadCaps outperforms base models such as GCN and GCN-FCL in both accuracy and robustness. On average, RoadCaps generates a node’s BC value in 7.5 milliseconds.
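For context on the cost RoadCaps sidesteps: exact BC is typically computed with Brandes' algorithm, which takes O(VE) time per unweighted graph, prohibitive for million-node road networks. A compact pure-Python version for undirected, unweighted graphs:

```python
from collections import deque

def betweenness(adj):
    """Brandes' exact betweenness centrality (undirected, unweighted).

    `adj` maps each node to its list of neighbors. One BFS per source
    counts shortest paths (sigma), then dependencies are accumulated
    backwards along the BFS order.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:                 # first visit
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:      # w lies on a shortest path
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):               # back-propagate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each unordered pair is counted from both endpoints.
    return {v: c / 2 for v, c in bc.items()}
```

On the path graph 0-1-2, the middle node mediates the single shortest path between the endpoints, giving it centrality 1 and the endpoints 0.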

Set representation has become ubiquitous in deep learning for modeling the inductive bias of neural networks that are insensitive to the input order. DeepSets is the most widely used neural network architecture for set representation. It involves embedding each set element into a latent space of dimension L, followed by a sum pooling to obtain a whole-set embedding, and finally mapping the whole-set embedding to the output. In this work, we investigate the impact of the dimension L on the expressive power of DeepSets. Previous analyses either oversimplified high-dimensional features to be one-dimensional features or were limited to analytic activations, thereby diverging from practical use or resulting in an L that grows exponentially with the set size N and feature dimension D. To investigate the minimal value of L that achieves sufficient expressive power, we present two set-element embedding layers: (a) linear + power activation (LP) and (b) linear + exponential activation (LE). We demonstrate that L being poly(N, D) is sufficient for set representation using both embedding layers. We also provide a lower bound on L for the LP embedding layer. Furthermore, we extend our results to permutation-equivariant set functions and the complex field.
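The DeepSets pipeline described above (per-element embedding phi, sum pooling, output map rho) can be sketched in a few lines; the linear+ReLU layers and dimensions here are illustrative stand-ins, not the LP/LE layers analyzed in the work:

```python
import numpy as np

def deepsets(X, W_phi, W_rho):
    """Minimal DeepSets forward pass on a set X of shape (N, D).

    phi: per-element linear + ReLU embedding into a latent space of
    dimension L; sum pooling then yields a whole-set embedding, which
    rho (a linear head here) maps to the output.
    """
    H = np.maximum(X @ W_phi, 0.0)  # (N, L) per-element embeddings
    pooled = H.sum(axis=0)          # (L,) order-insensitive pooling
    return pooled @ W_rho           # output

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 4))                  # set of 7 elements, D = 4
W_phi = rng.normal(size=(4, 16))             # L = 16 (an arbitrary choice)
W_rho = rng.normal(size=(16, 1))
y1 = deepsets(X, W_phi, W_rho)
y2 = deepsets(X[rng.permutation(7)], W_phi, W_rho)
assert np.allclose(y1, y2)  # invariant to reordering the set
```

The question the work studies is, informally, how large the latent dimension L (here 16) must be, as a function of N and D, for such a sum-pooled embedding to lose no information about the set.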