%AWasserman, Max
%ASihag, Saurabh
%AMateos, Gonzalo
%ARibeiro, Alejandro
%D2023
%JTransactions on Machine Learning Research
%MOSTI ID: 10443080
%TLearning Graph Structure from Convolutional Mixtures
%XMachine learning frameworks such as graph neural networks typically rely on a given, fixed
graph to exploit relational inductive biases and thus effectively learn from network data.
However, when said graphs are (partially) unobserved, noisy, or dynamic, the problem
of inferring graph structure from data becomes relevant. In this paper, we postulate a
graph convolutional relationship between the observed and latent graphs, and formulate
the graph structure learning task as a network inverse (deconvolution) problem. In lieu of
eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and
truncate proximal gradient iterations to arrive at a parameterized neural network architecture
that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs
in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting
the loss function, and they are inherently inductive as well as node permutation equivariant.
We corroborate the GDN's superior graph learning performance and its generalization to larger
graphs using synthetic data in supervised settings. Moreover, we demonstrate the robustness and representation power of GDNs on real-world neuroimaging and social network datasets.
%0Journal Article