Thorpe, Matthew; Nguyen, Tan; Xia, Heidi; Strohmer, Thomas; Bertozzi, Andrea; Osher, Stanley; Wang, Bao (2022). GRAND++: Graph Neural Diffusion with A Source Term. OSTI ID: 10349708.

Abstract: We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) under the dynamics of GRAND++, the representation of graph nodes does not converge to a constant vector over all nodes even as time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning with very deep architectures; (ii) GRAND++ provides accurate classification even when the model is trained with a very limited amount of labeled training data. We experimentally verify these two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks.
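
To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of graph diffusion with a source term concentrated on labeled nodes. The abstract does not specify the diffusivity or the form of the source, so this sketch assumes a fixed random-walk Laplacian, a simple class-indicator source at the labeled nodes, and forward-Euler time stepping; the function names `random_walk_laplacian` and `grand_plus_plus_sketch` and all parameter choices are hypothetical.

```python
# Minimal sketch, under assumptions stated above: heat-style graph diffusion
# dX/dt = -L X + S, where the source S is nonzero only at labeled nodes.
import numpy as np

def random_walk_laplacian(adj):
    """L_rw = I - D^{-1} A for a symmetric adjacency matrix (assumed setup)."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0          # guard against isolated nodes
    return np.eye(adj.shape[0]) - adj / deg[:, None]

def grand_plus_plus_sketch(adj, x0, labeled_idx, labels, t_end=4.0, dt=0.1):
    """Evolve node features by forward Euler; the source keeps injecting a
    class-indicator signal at labeled nodes, so features do not collapse to
    a single constant vector as time grows (illustrative choice of source)."""
    n, d = x0.shape
    L = random_walk_laplacian(adj)
    source = np.zeros((n, d))
    for i, y in zip(labeled_idx, labels):
        source[i, y % d] = 1.0   # hypothetical one-hot-style source term
    x = x0.copy()
    for _ in range(int(t_end / dt)):
        x = x + dt * (-L @ x + source)   # forward Euler step
    return x

# Toy usage: a 4-node path graph with one labeled node per class.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x0 = np.random.default_rng(0).normal(size=(4, 2))
out = grand_plus_plus_sketch(adj, x0, labeled_idx=[0, 3], labels=[0, 1])
print(out)   # node representations stay distinct rather than over-smoothing
```

Without the `source` term the same dynamics reduce to pure diffusion, whose solution approaches a constant vector across nodes; adding the source is what the abstract credits for mitigating over-smoothing and enabling learning at low labeling rates.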