Title: Marginal dynamics of interacting diffusions on unimodular Galton–Watson trees
Consider a system of homogeneous interacting diffusive particles labeled by the nodes of a unimodular Galton–Watson tree, where the state of each node evolves infinitesimally like a d-dimensional diffusion whose drift coefficient depends on (the histories of) its own state and the states of neighboring nodes, and whose diffusion coefficient depends only on (the history of) its own state. Under suitable regularity assumptions on the coefficients, an autonomous characterization is obtained for the marginal distribution of the dynamics of the neighborhood of a typical node in terms of a certain local equation, which is a new kind of stochastic differential equation that is nonlinear in the sense of McKean. This equation describes a finite-dimensional non-Markovian stochastic process whose infinitesimal evolution at any time depends not only on the structure and current state of the neighborhood, but also on the conditional law of the current state given the past of the states of neighboring nodes until that time. Such marginal distributions are of interest because they arise as weak limits of both marginal distributions and empirical measures of interacting diffusions on many sequences of sparse random graphs, including the configuration model and Erdős–Rényi graphs whose average degrees converge to a finite non-zero limit. The results obtained complement classical results in the mean-field regime, which characterize the limiting dynamics of homogeneous interacting diffusions on complete graphs, as the number of nodes goes to infinity, in terms of a corresponding nonlinear Markov process. However, in the sparse graph setting, the topology of the graph strongly influences the dynamics, and the analysis requires a completely different approach. The proofs of existence and uniqueness of the local equation rely on delicate new conditional independence and symmetry properties of particle trajectories on unimodular Galton–Watson trees, as well as judicious use of changes of measure.
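As a rough illustration of the type of particle system described above (the notation b, σ, N_v, X_v[0,t] is assumed here for illustration and is not taken from the paper), one can write

\[
  dX_v(t) \;=\; b\bigl(t,\, X_v[0,t],\, (X_u[0,t])_{u \in N_v}\bigr)\,dt
          \;+\; \sigma\bigl(t,\, X_v[0,t]\bigr)\,dW_v(t), \qquad v \in V,
\]

where V is the vertex set of the unimodular Galton–Watson tree, N_v is the set of neighbors of v, X_v[0,t] denotes the trajectory of X_v up to time t, and (W_v)_{v \in V} are independent d-dimensional Brownian motions. The local equation then characterizes the law of such a system restricted to a typical node and its neighborhood, with coefficients that additionally involve the conditional law of the current state given the past of the neighboring states, in the spirit of McKean–Vlasov equations.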
Award ID(s):
1954351
NSF-PAR ID:
10478863
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
Probability Theory and Related Fields
Volume:
187
Issue:
3-4
ISSN:
0178-8051
Page Range / eLocation ID:
817 to 884
Subject(s) / Keyword(s):
["Interacting diffusions · Sparse graphs · Random graphs · Local weak convergence · Mean-field limits · Nonlinear Markov processes · Erdo ̋s\u2013Rényi graphs · Configuration model · Unimodularity · Markov random fields"]
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We introduce a new notion of conditional nonlinear expectation under probability distortion. Such a distorted nonlinear expectation is not subadditive in general, so it is beyond the scope of Peng’s framework of nonlinear expectations. A more fundamental problem when extending the distorted expectation to a dynamic setting is time inconsistency, that is, the usual “tower property” fails. By localizing the probability distortion and restricting to a smaller class of random variables, we introduce a so-called distorted probability and construct a conditional expectation in such a way that it coincides with the original nonlinear expectation at time zero, but has a time-consistent dynamics in the sense that the tower property remains valid. Furthermore, we show that in the continuous time model this conditional expectation corresponds to a parabolic differential equation whose coefficient involves the law of the underlying diffusion. This work is the first step toward a new understanding of nonlinear expectations under probability distortion and will potentially be a helpful tool for solving time-inconsistent stochastic optimization problems.
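    For orientation, a static distorted expectation is commonly defined as a Choquet-type integral; a minimal sketch for a nonnegative random variable, with the distortion function g assumed increasing with g(0)=0 and g(1)=1 (the paper's precise dynamic construction is more involved):

    \[
      \mathcal{E}_g[X] \;=\; \int_0^{\infty} g\bigl(\mathbb{P}(X > x)\bigr)\,dx, \qquad X \ge 0.
    \]

    The difficulty described above is that conditioning this functional naively breaks the tower property \(\mathcal{E}_g\bigl[\mathcal{E}_g[X \mid \mathcal{F}_t]\bigr] = \mathcal{E}_g[X]\), which the localized construction is designed to restore.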
  2. Learning representations of sets of nodes in a graph is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the expressive power of GNNs is limited by the 1-Weisfeiler-Lehman (WL) test, and thus GNNs generate identical representations for graph substructures that may in fact be very different. More powerful GNNs, proposed recently by mimicking higher-order WL tests, only focus on representing entire graphs, and they are computationally inefficient as they cannot utilize the sparsity of the underlying graph. Here we propose and mathematically analyze a general class of structure-related features, termed Distance Encoding (DE). DE assists GNNs in representing any set of nodes, while providing strictly more expressive power than the 1-WL test. DE captures the distance between the node set whose representation is to be learned and each node in the graph. To capture the distance, DE can apply various graph-distance measures such as shortest-path distance or generalized PageRank scores. We propose two ways for GNNs to use DEs: (1) as extra node features, and (2) as controllers of message aggregation in GNNs. Both approaches can utilize the sparse structure of the underlying graph, which leads to computational efficiency and scalability. We also prove that DE can distinguish node sets embedded in almost all regular graphs where traditional GNNs always fail. We evaluate DE on three tasks over six real networks: structural role prediction, link prediction, and triangle prediction. Results show that our models outperform GNNs without DE by up to 15% in accuracy and AUROC. Furthermore, our models also significantly outperform other state-of-the-art methods especially designed for the above tasks.
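    A minimal sketch of a shortest-path-distance encoding of the kind described above, assuming an undirected networkx graph; the function name distance_encoding, the truncation max_dist, and the sum aggregation over the target set are illustrative choices, not the paper's exact construction:

import numpy as np
import networkx as nx

def distance_encoding(G, target_set, max_dist=4):
    """One-hot encode each node's truncated shortest-path distance to every
    node in target_set, then sum these one-hot vectors over the target set."""
    idx = {v: i for i, v in enumerate(G.nodes())}          # node -> row index
    enc = np.zeros((G.number_of_nodes(), max_dist + 2))    # last bin: unreachable or > max_dist
    for s in target_set:
        lengths = nx.single_source_shortest_path_length(G, s)
        for v in G.nodes():
            d = lengths.get(v, max_dist + 1)
            enc[idx[v], min(d, max_dist + 1)] += 1.0
    return enc

# Toy usage: DE features relative to the node set {0, 1} on a sparse random graph.
G = nx.erdos_renyi_graph(50, 0.1, seed=0)
features = distance_encoding(G, target_set={0, 1})         # shape (50, max_dist + 2)

    Such features could then be appended to the input node features of a GNN, corresponding to use (1) above.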
  3. Zonal jets in a barotropic setup emerge out of homogeneous turbulence through a flow-forming instability of the homogeneous turbulent state (zonostrophic instability), which occurs as the turbulence intensity increases. This has been demonstrated using the statistical state dynamics (SSD) framework with a closure at second order. Furthermore, it was shown that for small supercriticality the flow-forming instability follows Ginzburg–Landau (G–L) dynamics. Here, the SSD framework is used to study the equilibration of this flow-forming instability for small supercriticality. First, we compare the predictions of the weakly nonlinear G–L dynamics to the fully nonlinear SSD dynamics closed at second order for a wide range of parameters. A new branch of jet equilibria is revealed that is not contiguously connected with the G–L branch. This new branch at weak supercriticalities involves jets with larger amplitude compared to the ones of the G–L branch. Furthermore, this new branch continues even for subcritical values with respect to the linear flow-forming instability. Thus, a new nonlinear flow-forming instability out of homogeneous turbulence is revealed. Second, we investigate how both the linear flow-forming instability and the novel nonlinear flow-forming instability are equilibrated. We identify the physical processes underlying the jet equilibration as well as the types of eddies that contribute to each process. Third, we propose a modification of the diffusion coefficient of the G–L dynamics that is able to capture the evolution of weak jets at scales other than the marginal scale (side-band instabilities) for the linear flow-forming instability.
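    For reference, the weakly nonlinear dynamics referred to above is of real Ginzburg–Landau type; a minimal generic sketch for a slowly varying jet amplitude A(X, T), with the symbols and coefficients c_1, c_2, c_3 illustrative rather than the paper's:

    \[
      \partial_T A \;=\; c_1\,A \;+\; c_2\,\partial_{XX} A \;-\; c_3\,|A|^2 A,
    \]

    where c_2 plays the role of the diffusion coefficient whose modification is proposed in the third part of the study.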
  4. Representation learning over graph-structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings, while efforts for jointly learning the dynamics of the graph and dynamics on the graph are still in an infant stage. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep, a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes, namely dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time-scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations, which in turn drives the nonlinear evolution of the observed graph dynamics. Our unified framework is trained using an efficient unsupervised procedure and has the capability to generalize over unseen nodes. We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework.
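    As a rough illustration of the kind of parameterization such a deep temporal point process uses (the symbols z_u, ω_k, ψ_k are illustrative and not necessarily DyRep's exact formulation), each type-k event between nodes u and v can be assigned a conditional intensity driven by the most recently updated node representations:

    \[
      \lambda^{u,v}_k(t) \;=\; \psi_k \,\log\!\Bigl(1 + \exp\bigl(\omega_k^{\top}\,[\,z_u(t^-);\, z_v(t^-)\,]/\psi_k\bigr)\Bigr),
    \]

    a softplus with a learnable, event-type-specific scale \(\psi_k\) that keeps the intensity positive; distinct event types can correspond to the two observed processes (topological evolution versus node activities), giving the two time scales.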