

Title: Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data
Award ID(s):
2302786
PAR ID:
10511726
Author(s) / Creator(s):
Editor(s):
Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S
Publisher / Repository:
Curran Associates, Inc.
Date Published:
Journal Name:
Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes' neighbors in a graph. However, because nodes share many neighbors, a naive implementation performs the same aggregations over and over, which represents significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation technique that explicitly avoids this redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and give the same models/accuracy as traditional GNNs, but in much less time thanks to the optimized computations. To identify redundant computations, we introduce an accurate cost function and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN representation, increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, with only 0.1% memory overhead. Overall, our results represent an important advancement in speeding up and scaling up GNNs without any loss in model predictive performance. (A minimal sketch of the shared-aggregation idea appears in Python after this list.)
  2. Starting with a vertex-weighted pointed graph, we form the free loop algebra defined in Hartglass–Penneys' article on canonical C*-algebras associated to a planar algebra. Under mild conditions, the free loop algebra is a non-nuclear simple C*-algebra with a unique tracial state. There is a canonical polynomial subalgebra together with a Dirac number operator such that the pair forms a spectral triple. We prove the Haagerup-type bound of Ozawa–Rieffel to verify that this spectral triple yields a compact quantum metric space in the sense of Rieffel. We give a weighted analog of Benjamini–Schramm convergence for vertex-weighted pointed graphs. As our C*-algebras are non-nuclear, we adjust the Lip-norm coming from the spectral triple to utilize the finite-dimensional filtration of the polynomial subalgebra. We then prove that convergence of vertex-weighted pointed graphs leads to quantum Gromov–Hausdorff convergence of the associated adjusted compact quantum metric spaces. As an application, we apply our construction to the Guionnet–Jones–Shyakhtenko (GJS) C*-algebra associated to a planar algebra, and conclude that the compact quantum metric spaces coming from the GJS C*-algebras of many infinite families of planar algebras converge in quantum Gromov–Hausdorff distance. (The standard definitions behind the Lip-norm terminology are sketched in LaTeX after this list.)
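The following is a minimal, hypothetical Python sketch of the redundancy-avoidance idea from item 1, not the authors' implementation: when nodes share the same neighbor set, a sum-aggregation GNN layer can compute the shared partial aggregate once and reuse it. The toy graph, feature shapes, and function names are all illustrative assumptions; HAG's cost function and search algorithm for discovering such shared aggregates are not shown.

```python
# Hypothetical sketch of HAG-style shared aggregation (sum aggregator assumed).
import numpy as np

# Toy graph: nodes 2 and 3 share the same neighbor set {0, 1}.
neighbors = {0: [2, 3], 1: [2, 3], 2: [0, 1], 3: [0, 1]}
h = np.random.rand(4, 8)  # one 8-dim feature vector per node

def naive_aggregate(h, neighbors):
    # Recomputes the sum over {0, 1} separately for nodes 2 and 3.
    return {v: sum(h[u] for u in nbrs) for v, nbrs in neighbors.items()}

def hag_style_aggregate(h, neighbors):
    # Compute the shared intermediate aggregate once, then reuse it for
    # every node whose neighbor set matches; fall back to naive sums otherwise.
    shared = h[0] + h[1]  # cached intermediate aggregation result
    out = {}
    for v, nbrs in neighbors.items():
        if set(nbrs) == {0, 1}:
            out[v] = shared  # reuse: no repeated aggregation
        else:
            out[v] = sum(h[u] for u in nbrs)
    return out

# Both passes produce identical aggregates, as the abstract claims HAGs do.
assert all(np.allclose(naive_aggregate(h, neighbors)[v],
                       hag_style_aggregate(h, neighbors)[v])
           for v in neighbors)
```

In this toy example the shared pair is hard-coded; the point of the paper is precisely that a cost function and search algorithm find such reusable intermediates automatically across a large graph.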
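For item 2, the LaTeX fragment below records the standard Rieffel-style definitions behind the abstract's terminology as a reference point. These are textbook formulas (the Lip-norm metric on the state space and the commutator seminorm from a spectral triple), not equations taken from the paper itself.

```latex
% Standard definitions (Rieffel); not formulas quoted from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given a unital $C^*$-algebra $A$ and a seminorm (Lip-norm) $L$ on a dense
subalgebra, the induced metric on the state space $S(A)$ is
\[
  \rho_L(\varphi,\psi)
  \;=\; \sup\bigl\{\, |\varphi(a)-\psi(a)| \;:\; a \in A,\ L(a) \le 1 \,\bigr\},
\]
and $(A,L)$ is a \emph{compact quantum metric space} when the
$\rho_L$-topology on $S(A)$ agrees with the weak-$*$ topology.
For a spectral triple with Dirac operator $D$, a standard choice of
Lip-norm is the commutator seminorm $L(a) = \lVert [D,a] \rVert$.
\end{document}
```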