The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale, both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel distributed training framework. The framework disjointly partitions the parameters of a GCN model into several smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, the framework (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with this framework, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of
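The framework's name is redacted in this record, but the core idea, disjointly partitioning a layer's parameters into independently trainable sub-GCNs, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the helper names and the column-wise split of a single layer's weight matrix are our assumptions.

```python
import random

def partition_columns(d_out, num_sub, seed=0):
    """Randomly split the hidden-dimension indices 0..d_out-1 into
    `num_sub` disjoint groups, one per sub-GCN (hypothetical helper)."""
    idx = list(range(d_out))
    random.Random(seed).shuffle(idx)
    return [idx[k::num_sub] for k in range(num_sub)]

def split_weights(W, groups):
    """Extract each sub-GCN's columns of the layer weight matrix W
    (represented as a list of rows)."""
    return [[[row[j] for j in g] for row in W] for g in groups]

def merge_weights(parts, groups, d_in, d_out):
    """Reassemble the full weight matrix from independently trained parts."""
    W = [[0.0] * d_out for _ in range(d_in)]
    for g, part in zip(groups, parts):
        for i in range(d_in):
            for j, col in enumerate(g):
                W[i][col] = part[i][j]
    return W

d_in, d_out = 4, 8
W = [[i + j / 10 for j in range(d_out)] for i in range(d_in)]
groups = partition_columns(d_out, num_sub=2)
parts = split_weights(W, groups)   # each part would train on its own GPU
W_back = merge_weights(parts, groups, d_in, d_out)
assert W_back == W                 # the partition is disjoint and exhaustive
```

Because the groups are disjoint and cover every hidden dimension, merging the independently trained parts recovers a full-width model that no single GPU needed to hold during training.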
 Award ID(s):
 2008555
 NSFPAR ID:
 10441079
 Publisher / Repository:
 Springer Science + Business Media
 Date Published:
 Journal Name:
 Journal of Applied and Computational Topology
 Volume:
 8
 Issue:
 5
 ISSN:
 2367-1726
 Format(s):
 Medium: X Size: p. 1363-1415
 Size(s):
 p. 1363-1415
 Sponsoring Org:
 National Science Foundation
More Like this

Abstract We establish rapid mixing of the random-cluster Glauber dynamics on random $\varDelta$-regular graphs for all $q\ge 1$ and $p<p_u(q,\varDelta)$, where the threshold $p_u(q,\varDelta)$ corresponds to a uniqueness/non-uniqueness phase transition for the random-cluster model on the (infinite) $\varDelta$-regular tree. It is expected that this threshold is sharp, and that for $q>2$ the Glauber dynamics on random $\varDelta$-regular graphs undergoes an exponential slowdown at $p_u(q,\varDelta)$. More precisely, we show that for every $q\ge 1$, $\varDelta \ge 3$, and $p<p_u(q,\varDelta)$, with probability $1-o(1)$ over the choice of a random $\varDelta$-regular graph on $n$ vertices, the Glauber dynamics for the random-cluster model has mixing time $\varTheta(n\log n)$. As a corollary, we deduce fast mixing of the Swendsen–Wang dynamics for the Potts model on random $\varDelta$-regular graphs for every $q\ge 2$, in the tree uniqueness region. Our proof relies on a sharp bound on the “shattering time”, i.e., the number of steps required to break up any configuration into $O(\log n)$-sized clusters. This is established by analyzing a delicate and novel iterative scheme to simultaneously reveal the underlying random graph with clusters of the Glauber dynamics configuration on it, at a given time.
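The single-edge heat-bath update driving this chain is simple to state: pick a uniformly random edge and resample its open/closed state conditional on the rest of the configuration, where opening an edge that would merge two clusters picks up a factor of $q$. A minimal Python sketch, assuming the standard edge-flip heat-bath form of the random-cluster Glauber dynamics; the naive BFS connectivity check and the triangle example are ours, for illustration only.

```python
import random

def connected(u, v, open_edges):
    """BFS: are u and v joined by a path of open edges?"""
    adj = {}
    for a, b in open_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, seen = [u], {u}
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def glauber_step(edges, open_edges, p, q, rng):
    """One heat-bath step: resample the state of a uniformly random
    edge e conditional on the rest of the configuration."""
    e = rng.choice(edges)
    open_edges.discard(e)
    u, v = e
    # If u and v are already joined without e, opening e keeps the
    # cluster count fixed, so e is open with probability p; otherwise
    # opening e merges two clusters, which costs a factor of q.
    prob = p if connected(u, v, open_edges) else p / (p + q * (1.0 - p))
    if rng.random() < prob:
        open_edges.add(e)

# A few steps on a triangle with q = 2 (Ising case) and p < 1:
rng = random.Random(0)
edges = [(0, 1), (1, 2), (0, 2)]
state = set()
for _ in range(50):
    glauber_step(edges, state, 0.4, 2.0, rng)
assert state <= set(edges)
```

The paper's mixing-time result concerns exactly how many such steps are needed on a random $\varDelta$-regular graph before the law of `state` is close to the random-cluster measure.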
Abstract Let us call a simple graph on $n\geqslant 2$ vertices a prime gap graph if its vertex degrees are 1 and the first $n-1$ prime gaps. We show that such a graph exists for every large $n$, and in fact for every $n\geqslant 2$ if we assume the Riemann hypothesis. Moreover, an infinite sequence of prime gap graphs can be generated by the so-called degree preserving growth process. This is the first time a naturally occurring infinite sequence of positive integers is identified as graphic. That is, we show the existence of an interesting, and so far unique, infinite combinatorial object.
Abstract A graph $G$ is $H$-free if it has no induced subgraph isomorphic to $H$. We prove that a $P_5$-free graph with clique number $\omega \ge 3$ has chromatic number at most $\omega^{\log_2(\omega)}$. The best previous result was an exponential upper bound $(5/27)3^{\omega}$, due to Esperet, Lemoine, Maffray, and Morel. A polynomial bound would imply that the celebrated Erdős–Hajnal conjecture holds for $P_5$, which is the smallest open case. Thus, there is great interest in whether there is a polynomial bound for $P_5$-free graphs, and our result is an attempt to approach that.
Abstract Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently. The generalized Haar–Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs, which is a generalization of the classical Haar and Walsh–Hadamard transforms. We propose the extended generalized Haar–Walsh transform (eGHWT), which is a generalization of the adapted time–frequency tilings of Thiele and Villemoes (1996). The eGHWT examines not only the efficiency of graph-domain partitions but also that of “sequency-domain” partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT with a similar computational cost, $O(N\log N)$, where $N$ is the number of nodes of an input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $(1.5)^N$ possible orthonormal bases in $\mathbb{R}^N$, the eGHWT best-basis algorithm can find a better one by searching through more than $0.618\cdot(1.84)^N$ possible orthonormal bases in $\mathbb{R}^N$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority using several examples including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we also show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as a tensor product of graphs generated from their columns and rows and demonstrate its effectiveness on applications such as image approximation.
Abstract In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher order regularity can be obtained via replacing the Laplacian regularizer with a poly-Laplacian regularizer. The methodology is readily adapted to graphs and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise corrupted, regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n\subset \mathbb{R}$, we let $u_n:\{x_i\}_{i=1}^n\rightarrow \mathbb{R}$ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i)+\xi_i$, for iid noise $\xi_i$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large data limit $n\rightarrow \infty$. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.