
Title: Replica-mean-field limits of fragmentation-interaction-aggregation processes
Abstract: Network dynamics with point-process-based interactions are of paramount modeling interest. Unfortunately, most relevant dynamics involve complex graphs of interactions for which an exact computational treatment is impossible. To circumvent this difficulty, the replica-mean-field approach focuses on randomly interacting replicas of the networks of interest. In the limit of an infinite number of replicas, these networks become analytically tractable under the so-called ‘Poisson hypothesis’. However, in most applications this hypothesis is only conjectured. In this paper we establish the Poisson hypothesis for a general class of discrete-time, point-process-based dynamics that we propose to call fragmentation-interaction-aggregation processes, and which are introduced here. These processes feature a network of nodes, each endowed with a state governing their random activation. Each activation triggers the fragmentation of the activated node state and the transmission of interaction signals to downstream nodes. In turn, the signals received by nodes are aggregated to their state. Our main contribution is a proof of the Poisson hypothesis for the replica-mean-field version of any network in this class. The proof is obtained by establishing the propagation of asymptotic independence for state variables in the limit of an infinite number of replicas. Discrete-time Galves–Löcherbach neural networks are used as a basic instance and illustration of our analysis.
Award ID(s): 2113213
PAR ID: 10538066
Author(s) / Creator(s): ; ;
Publisher / Repository: Applied Probability Trust
Date Published:
Journal Name: Journal of Applied Probability
Volume: 59
Issue: 1
ISSN: 0021-9002
Page Range / eLocation ID: 38 to 59
Format(s): Medium: X
Sponsoring Org: National Science Foundation
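
The fragmentation-interaction-aggregation dynamics described in the abstract above lend themselves to a short simulation. The following is a minimal sketch, not the authors' construction: a discrete-time Galves–Löcherbach-type network alongside a naive R-replica version in which each transmitted signal is routed to an independently chosen replica. The sigmoidal activation rule, the reset-to-zero fragmentation, the interaction weights, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5      # number of nodes (illustrative choice)
R = 100    # number of replicas in the replica-mean-field version
T = 1000   # number of discrete time steps
W = rng.uniform(0.0, 0.5, size=(K, K))  # interaction weights (assumption)
np.fill_diagonal(W, 0.0)

def activation_prob(state):
    # State-dependent activation probability; a stand-in for the firing rule
    # of a discrete-time Galves-Loecherbach neuron (assumed functional form).
    return 1.0 - np.exp(-0.1 * (1.0 + state))

def step_single(state):
    """One step of the original K-node network."""
    spikes = rng.random(K) < activation_prob(state)
    state = np.where(spikes, 0.0, state)   # fragmentation: reset activated nodes (assumption)
    state = state + W.T @ spikes           # interaction + aggregation of received signals
    return state, spikes

def step_replica(states):
    """One step of the R-replica version: each signal emitted by a spiking node
    is routed to a uniformly chosen replica, independently per downstream node."""
    spikes = rng.random((R, K)) < activation_prob(states)
    states = np.where(spikes, 0.0, states)
    for r in range(R):
        for k in np.flatnonzero(spikes[r]):
            targets = rng.integers(0, R, size=K)   # one random replica per downstream node
            states[targets, np.arange(K)] += W[k]
    return states, spikes

state, rates = np.zeros(K), np.zeros(K)
for _ in range(T):
    state, spk = step_single(state)
    rates += spk
print("empirical rates, single network: ", rates / T)

states, rrates = np.zeros((R, K)), np.zeros(K)
for _ in range(T):
    states, spk = step_replica(states)
    rrates += spk.mean(axis=0)
print("empirical rates, replica version:", rrates / T)
```

The replica routine only illustrates the randomized routing across replicas that the replica-mean-field construction relies on; it does not verify the Poisson hypothesis, which is what the paper proves in the infinite-replica limit.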
More Like this
  1. Beck, Jeff (Ed.)
    Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria. 
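    As a rough illustration of the kind of finite-size network that the RMF limit approximates here, the sketch below simulates a small point-process network with exponential stochastic intensities and mixed excitatory/inhibitory weights by Bernoulli thinning on a fine time grid, and reports empirical stationary firing rates. The leak, the reset rule, and every parameter value are assumptions chosen for illustration, and the RMF stationary-rate equations themselves are not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

K, steps, dt = 4, 200_000, 1e-3          # nodes, time steps, bin size (assumptions)
b = np.array([0.0, 0.2, -0.1, 0.1])      # baseline log-intensities (assumption)
W = rng.normal(0.0, 0.6, size=(K, K))    # signed weights: excitation and inhibition
np.fill_diagonal(W, 0.0)
tau = 0.05                               # relaxation time of the internal state (assumption)

x = np.zeros(K)        # internal states
counts = np.zeros(K)   # spike counts

for _ in range(steps):
    lam = np.exp(b + x)                  # exponential stochastic intensity
    spikes = rng.random(K) < lam * dt    # Bernoulli thinning over a small bin
    x -= dt * x / tau                    # leak toward rest
    x[spikes] = 0.0                      # reset of the spiking nodes (assumption)
    x += W.T @ spikes                    # aggregate signed inputs from spiking nodes
    counts += spikes

print("empirical stationary firing rates:", counts / (steps * dt))
```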
  2. Abstract: We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the O(1/width) fluctuations of the dynamical mean field theory order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. We find that once the mean field/µP parameterization is adopted, the leading finite size effect on the dynamics is to introduce initialization variance in the predictions and feature kernels of the networks. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For convolutional neural networks trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width.
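    The O(1/width) initialization fluctuations discussed above can be probed with a short numerical experiment. The sketch below, a simplification rather than the paper's dynamical-mean-field machinery, estimates the entrywise variance of a two-layer ReLU feature kernel over random initializations at several widths; the 1/width kernel normalization stands in for a mean-field-style parameterization, and no training dynamics are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

P, D = 8, 16                             # number of inputs and input dimension (assumptions)
X = rng.normal(size=(P, D)) / np.sqrt(D)

def feature_kernel(width):
    # Hidden-layer feature kernel of a two-layer ReLU network at initialization,
    # with a 1/width normalization as a simplified stand-in for muP scaling.
    W1 = rng.normal(size=(D, width))
    H = np.maximum(X @ W1, 0.0)          # ReLU features
    return H @ H.T / width               # empirical P-by-P feature kernel

for width in (64, 256, 1024, 4096):
    kernels = np.stack([feature_kernel(width) for _ in range(200)])
    var = kernels.var(axis=0).mean()     # average entrywise variance over 200 initializations
    print(f"width={width:5d}  kernel variance ~ {var:.2e}  width*variance ~ {width * var:.3f}")
```

    The product width*variance should stay roughly constant across widths, consistent with O(1/width) fluctuations of the kernel at initialization.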
  3. Complex systems are characterized by intricate interactions between entities that evolve dynamically over time. Accurate inference of these dynamic relationships is crucial for understanding and predicting system behavior. In this paper, we propose Regulatory Temporal Interaction Network Inference (RiTINI) for inferring time-varying interaction graphs in complex systems using a novel combination of space-and-time graph attentions and graph neural ordinary differential equations (ODEs). RiTINI leverages time-lapse signals on a graph prior, as well as perturbations of signals at various nodes in order to effectively capture the dynamics of the underlying system. This approach is distinct from traditional causal inference networks, which are limited to inferring acyclic and static graphs. In contrast, RiTINI can infer cyclic, directed, and time-varying graphs, providing a more comprehensive and accurate representation of complex systems. The graph attention mechanism in RiTINI allows the model to adaptively focus on the most relevant interactions in time and space, while the graph neural ODEs enable continuous-time modeling of the system’s dynamics. We evaluate RiTINI’s performance on simulations of dynamical systems, neuronal networks, and gene regulatory networks, demonstrating its state-of-the-art capability in inferring interaction graphs compared to previous methods. 
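    To make the ingredients concrete, here is a toy sketch, not RiTINI itself, that combines a masked dot-product graph attention over a (possibly cyclic) graph prior with a forward-Euler integration of a neural-ODE-style update; the weight matrices, the leak term, and the step size are hypothetical, and learning from time-lapse signals and perturbations is entirely omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

N, F = 6, 4                               # nodes and feature dimension (assumptions)
prior = rng.random((N, N)) < 0.4          # graph prior: allowed (possibly cyclic) edges
np.fill_diagonal(prior, True)
Wq = rng.normal(scale=0.3, size=(F, F))   # attention query weights (hypothetical)
Wk = rng.normal(scale=0.3, size=(F, F))   # attention key weights (hypothetical)
Wv = rng.normal(scale=0.3, size=(F, F))   # message weights (hypothetical)

def attention(x):
    # Masked dot-product attention restricted to the graph prior's edges.
    scores = (x @ Wq) @ (x @ Wk).T / np.sqrt(F)
    scores = np.where(prior, scores, -np.inf)
    scores -= scores.max(axis=1, keepdims=True)
    a = np.exp(scores)
    return a / a.sum(axis=1, keepdims=True)

def dxdt(x):
    # ODE right-hand side: attention-weighted neighbor messages plus a mild leak.
    return np.tanh(attention(x) @ (x @ Wv)) - 0.1 * x

x = rng.normal(size=(N, F))
dt, steps = 0.01, 500
for _ in range(steps):                    # forward Euler in place of an ODE solver
    x = x + dt * dxdt(x)

print("attention weights at the final state (time-varying interaction estimate):")
print(np.round(attention(x), 2))
```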
  4. We present a machine learning model for the analysis of randomly generated discrete signals, modeled as the points of an inhomogeneous, compound Poisson point process. Like the wavelet scattering transform introduced by Mallat, our construction is naturally invariant to translations and reflections, but it decouples the roles of scale and frequency, replacing wavelets with Gabor-type measurements. We show that, with suitable nonlinearities, our measurements distinguish Poisson point processes from common self-similar processes, and separate different types of Poisson point processes. 
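    A rough sense of such Gabor-type measurements can be gained from a toy computation. The sketch below simulates compound Poisson point processes with two different mark distributions and compares first-order windowed-Fourier (Gabor) moduli averaged over window centers; the constant rate, the Gaussian window, the frequency grid, and the absence of the cascaded nonlinearities of a full scattering-style construction are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)

def compound_poisson(rate, T, mark_sampler):
    # Homogeneous Poisson arrivals on [0, T] with i.i.d. marks (the paper's
    # inhomogeneous case is replaced here by a constant rate for simplicity).
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, size=n))
    return times, mark_sampler(n)

def gabor_measurements(times, marks, freqs, centers, sigma=0.05):
    # First-order Gabor-type measurements: moduli of windowed Fourier sums of the
    # discrete signal sum_j marks_j * delta(t - times_j), averaged over window centers.
    out = []
    for w in freqs:
        vals = []
        for c in centers:
            window = np.exp(-0.5 * ((times - c) / sigma) ** 2)
            vals.append(np.abs(np.sum(marks * window * np.exp(-1j * w * times))))
        out.append(np.mean(vals))
    return np.array(out)

T, rate = 1.0, 200.0
freqs = 2 * np.pi * np.array([1.0, 5.0, 20.0, 80.0])
centers = np.linspace(0.1, 0.9, 17)

t1, m1 = compound_poisson(rate, T, lambda n: np.ones(n))               # unit marks
t2, m2 = compound_poisson(rate, T, lambda n: rng.exponential(1.0, n))  # exponential marks
print("Gabor measurements, unit marks:       ", np.round(gabor_measurements(t1, m1, freqs, centers), 2))
print("Gabor measurements, exponential marks:", np.round(gabor_measurements(t2, m2, freqs, centers), 2))
```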
  5. We consider the problem of analyzing timestamped relational events between a set of entities, such as messages between users of an on-line social network. Such data are often analyzed using static or discrete-time network models, which discard a significant amount of information by aggregating events over time to form network snapshots. In this paper, we introduce a block point process model (BPPM) for continuous-time event-based dynamic networks. The BPPM is inspired by the well-known stochastic block model (SBM) for static networks. We show that networks generated by the BPPM follow an SBM in the limit of a growing number of nodes. We use this property to develop principled and efficient local search and variational inference procedures initialized by regularized spectral clustering. We fit BPPMs with exponential Hawkes processes to analyze several real network data sets, including a Facebook wall post network with over 3,500 nodes and 130,000 events. 
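    The generative side of such a model is easy to sketch in its Poisson special case. The code below simulates timestamped directed events between entities whose pairwise rates depend only on latent block memberships and then aggregates them into a weighted network exhibiting the expected block structure; the rate matrix and block assignment are assumptions, the exponential Hawkes excitation used in the paper is omitted, and no inference procedure (spectral clustering, local search, or variational) is shown.

```python
import numpy as np

rng = np.random.default_rng(5)

n, K, T = 60, 2, 50.0                      # entities, blocks, observation window (assumptions)
z = rng.integers(0, K, size=n)             # latent block memberships
B = np.array([[0.20, 0.02],                # block-pair event rates (assumption); the paper
              [0.02, 0.15]])               # additionally allows Hawkes self-excitation

events = []                                # (time, sender, receiver) triples
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        m = rng.poisson(B[z[i], z[j]] * T)  # Poisson special case of the BPPM
        for t in np.sort(rng.uniform(0.0, T, size=m)):
            events.append((t, i, j))
events.sort()
print("number of timestamped events:", len(events))

# Aggregating events over time yields a weighted network whose expected
# structure follows a stochastic block model, as the abstract describes.
A = np.zeros((n, n))
for _, i, j in events:
    A[i, j] += 1
within = A[np.ix_(z == 0, z == 0)].mean()
across = A[np.ix_(z == 0, z == 1)].mean()
print("mean event count within block 0:", round(within, 2), " across blocks:", round(across, 2))
```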