Graph Transformer (GT) has recently emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks. Previous work shows that with proper position embedding, GT can approximate MPNN arbitrarily well, implying that GT is at least as powerful as MPNN. In this paper, we study the inverse connection and show that MPNN with virtual node (VN), a commonly used heuristic with little theoretical understanding, is powerful enough to arbitrarily approximate the self-attention layer of GT. In particular, we first show that if we consider one type of linear transformer, the so-called Performer/Linear Transformer, then MPNN + VN with only O(1) depth and O(1) width can approximate a self-attention layer in Performer/Linear Transformer. Next, via a connection between MPNN + VN and DeepSets, we prove that MPNN + VN with O(n^d) width and O(1) depth can approximate the self-attention layer arbitrarily well, where d is the input feature dimension. Lastly, under some assumptions, we provide an explicit construction of MPNN + VN with O(1) width and O(n) depth approximating the self-attention layer in GT arbitrarily well. On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN + VN implementation improves over early implementations on a wide range of OGB datasets, and 3) MPNN + VN outperforms Linear Transformer and MPNN on the climate modeling task.
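To make the two architectures compared in the abstract concrete, below is a minimal PyTorch sketch, not the paper's code: a Performer/Linear-Transformer-style self-attention layer (softmax attention replaced by a positive feature map, so the key-value summary is a global d-by-d matrix) next to an MPNN + virtual-node layer, whose VN likewise aggregates all node states and broadcasts a global message back. The module names, layer sizes, and the elu + 1 feature map are illustrative assumptions, not the paper's exact construction.

import torch
import torch.nn as nn

def phi(x):
    # Positive feature map used by linear transformers (an illustrative choice).
    return torch.nn.functional.elu(x) + 1.0

class LinearSelfAttention(nn.Module):
    """Self-attention with cost linear in n: softmax(QK^T)V is replaced by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)."""
    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, x):                                   # x: (n, d)
        q, k, v = phi(self.q(x)), phi(self.k(x)), self.v(x)
        kv = k.transpose(0, 1) @ v                          # (d, d) global summary
        z = q @ k.sum(dim=0, keepdim=True).transpose(0, 1)  # (n, 1) normalizer
        return (q @ kv) / z

class MPNNWithVN(nn.Module):
    """One message-passing layer plus a virtual node: the VN pools all node
    states (a global aggregation, analogous to the kv summary above) and
    broadcasts a message back to every node."""
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(d, d)       # neighbor message
        self.vn_in = nn.Linear(d, d)     # nodes -> virtual node
        self.update = nn.Linear(2 * d, d)

    def forward(self, x, adj, vn):       # x: (n, d), adj: (n, n), vn: (d,)
        vn = torch.relu(vn + self.vn_in(x.mean(dim=0)))     # global pooling into VN
        local = adj @ self.msg(x)                           # neighbor messages
        x = torch.relu(x + self.update(torch.cat([local, vn.expand_as(x)], dim=-1)))
        return x, vn

n, d = 8, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.3).float()
print(LinearSelfAttention(d)(x).shape)                      # torch.Size([8, 16])
print(MPNNWithVN(d)(x, adj, torch.zeros(d))[0].shape)       # torch.Size([8, 16])

The structural parallel the paper exploits is visible here: both layers reduce the node set to a fixed-size global summary (kv in the linear attention, vn in the MPNN) and then redistribute it to every node, which is why a VN of sufficient width and depth can emulate the attention computation.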