

Title: Growth and depletion in linear stochastic reaction networks
This paper is about a class of stochastic reaction networks. Of interest are the dynamics of interconversion among a finite number of substances through reactions that consume some of the substances and produce others. The models we consider are continuous-time Markov jump processes, intended as idealizations of a broad class of biological networks. Reaction rates depend linearly on “enzymes,” which are among the substances produced, and a reaction can occur only in the presence of sufficient upstream material. We present rigorous results for this class of stochastic dynamical systems, the mean-field behaviors of which are described by ordinary differential equations (ODEs). Under the assumption of exponential network growth, we identify certain ODE solutions as being potentially traceable and give conditions under which network trajectories, when rescaled, are with high probability approximated by these ODE solutions. This leads to a complete characterization of the ω-limit sets of such network solutions (as points or random tori). Dimension reduction, depending on the number of enzymes, is also noted. The second half of this paper is focused on depletion dynamics, i.e., dynamics subsequent to the “phase transition” that occurs when one of the substances becomes unavailable. The picture can be complex, for the depleted substance can be produced intermittently through other network reactions. Treating the model as a slow–fast system, we offer a mean-field description, a first step toward understanding what we believe is one of the most natural bifurcations for reaction networks.
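For concreteness, here is a minimal Gillespie-style sketch of one toy network in this class: a three-species conversion cycle in which each reaction consumes one unit of an upstream substance, produces one unit of a downstream one, and fires at a rate proportional to the copy number of a catalyzing "enzyme" species. The topology, rate constants, and initial counts are illustrative assumptions; in particular this toy cycle conserves total mass and does not reproduce the exponential-growth or depletion regimes analyzed in the paper.

```python
import random

# Toy linear network (illustrative, not the paper's model):
# X0 -> X1 -> X2 -> X0, each conversion catalyzed by another species acting as "enzyme".
# reactions[r] = (substrate, product, enzyme, rate constant)
reactions = [
    (0, 1, 2, 1.0),   # X0 -> X1, catalyzed by X2
    (1, 2, 0, 0.8),   # X1 -> X2, catalyzed by X0
    (2, 0, 1, 0.6),   # X2 -> X0, catalyzed by X1
]

def gillespie(x, t_max, seed=0):
    """Simulate the continuous-time Markov jump process with the Gillespie algorithm."""
    rng, t = random.Random(seed), 0.0
    path = [(0.0, tuple(x))]
    while t < t_max:
        # Propensity of each reaction: linear in the enzyme count,
        # and zero if the upstream substance is depleted.
        props = [k * x[e] if x[s] > 0 else 0.0 for (s, p, e, k) in reactions]
        total = sum(props)
        if total == 0.0:
            break  # nothing can fire
        t += rng.expovariate(total)          # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for (s, p, e, k), a in zip(reactions, props):
            acc += a
            if u <= acc:                     # pick a reaction proportionally to its propensity
                x[s] -= 1
                x[p] += 1
                break
        path.append((t, tuple(x)))
    return path

print(gillespie([50, 30, 20], t_max=5.0)[-1])
```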
Award ID(s):
1901009
NSF-PAR ID:
10431964
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
119
Issue:
51
ISSN:
0027-8424
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Continuous-time Markov chains are frequently used as stochastic models for chemical reaction networks, especially in the growing field of systems biology. A fundamental problem for these Stochastic Chemical Reaction Networks (SCRNs) is to understand the dependence of the stochastic behavior of these systems on the chemical reaction rate parameters. Towards solving this problem, in this paper we develop theoretical tools called comparison theorems that provide stochastic ordering results for SCRNs. These theorems give sufficient conditions for monotonic dependence on parameters in these network models, which allow us to obtain, under suitable conditions, information about transient and steady-state behavior. These theorems exploit structural properties of SCRNs, beyond those of general continuous-time Markov chains. Furthermore, we derive two theorems to compare stationary distributions and mean first passage times for SCRNs with different parameter values, or with the same parameters and different initial conditions. These tools are developed for SCRNs taking values in a generic (finite or countably infinite) state space and can also be applied for non-mass-action kinetics models. When propensity functions are bounded, our method of proof gives an explicit method for coupling two comparable SCRNs, which can be used to simultaneously simulate their sample paths in a comparable manner. We illustrate our results with applications to models of enzymatic kinetics and epigenetic regulation by chromatin modifications.

     
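    As a hedged illustration of the object being compared in the abstract above, the sketch below simulates a standard enzymatic-kinetics SCRN, E + S <-> C -> E + P, with the Gillespie algorithm at two values of the catalytic rate constant, reusing the same random seed so the two sample paths can be compared pathwise. The common-random-numbers device is only a generic stand-in, not the explicit coupling constructed in the paper, and the rate constants and copy numbers are illustrative.

```python
import random

# Enzymatic kinetics SCRN (mass-action): E + S <-> C -> E + P.
# State x = [E, S, C, P]; stoich[r] is the state change caused by reaction r.
stoich = [(-1, -1, +1, 0),   # E + S -> C
          (+1, +1, -1, 0),   # C -> E + S
          (+1,  0, -1, +1)]  # C -> E + P

def propensities(x, k):
    e, s, c, p = x
    return [k[0] * e * s, k[1] * c, k[2] * c]

def ssa(x0, k, t_max, seed=0):
    """Gillespie simulation; returns the state at (or just after) time t_max."""
    rng, t, x = random.Random(seed), 0.0, list(x0)
    while t < t_max:
        a = propensities(x, k)
        a0 = sum(a)
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)
        u, acc = rng.random() * a0, 0.0
        for r, ar in enumerate(a):
            acc += ar
            if u <= acc:
                x = [xi + di for xi, di in zip(x, stoich[r])]
                break
    return x

x0 = (10, 200, 0, 0)                                  # illustrative initial copy numbers
print(ssa(x0, k=(0.01, 0.1, 0.5), t_max=50.0))        # slower catalysis
print(ssa(x0, k=(0.01, 0.1, 1.0), t_max=50.0))        # faster catalysis, same random numbers
```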
    While cross entropy (CE) is the most commonly used loss function to train deep neural networks for classification tasks, many alternative losses have been developed to obtain better empirical performance. Among them, which one is the best to use is still a mystery, because there seem to be multiple factors affecting the answer, such as properties of the dataset, the choice of network architecture, and so on. This paper studies the choice of loss function by examining the last-layer features of deep networks, drawing inspiration from a recent line of work showing that the global optimal solution of CE and mean-square-error (MSE) losses exhibits a Neural Collapse phenomenon. That is, for sufficiently large networks trained until convergence, (i) all features of the same class collapse to the corresponding class mean and (ii) the means associated with different classes are in a configuration where their pairwise distances are all equal and maximized. We extend such results and show through global solution and landscape analyses that a broad family of loss functions including commonly used label smoothing (LS) and focal loss (FL) exhibits Neural Collapse. Hence, all relevant losses (i.e., CE, LS, FL, MSE) produce equivalent features on training data. In particular, based on the unconstrained feature model assumption, we provide either the global landscape analysis for LS loss or the local landscape analysis for FL loss and show that the (only!) global minimizers are neural collapse solutions, while all other critical points are strict saddles whose Hessians exhibit negative curvature directions, either in the global scope for LS loss or in the local scope for FL loss near the optimal solution. The experiments further show that Neural Collapse features obtained from all relevant losses (i.e., CE, LS, FL, MSE) lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence.
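    For reference, the following NumPy sketch spells out three of the losses compared above (CE, LS, FL) for a single softmax output, using the standard textbook formulas; the logits, smoothing parameter eps, and focusing parameter gamma are illustrative choices, not values from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, y):
    # CE: negative log-probability of the true class y.
    return -np.log(p[y])

def label_smoothing(p, y, eps=0.1):
    # LS: cross entropy against a smoothed target that puts (1 - eps) extra mass
    # on the true class and spreads eps uniformly over all classes.
    k = len(p)
    target = np.full(k, eps / k)
    target[y] += 1.0 - eps
    return -(target * np.log(p)).sum()

def focal_loss(p, y, gamma=2.0):
    # FL: CE down-weighted by (1 - p_y)^gamma, suppressing well-classified examples.
    return -((1.0 - p[y]) ** gamma) * np.log(p[y])

logits, y = np.array([2.0, 0.5, -1.0]), 0
p = softmax(logits)
print(cross_entropy(p, y), label_smoothing(p, y), focal_loss(p, y))
```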
  3. Abstract

    The dynamics of a chemical reaction network (CRN) is often modeled under the assumption of mass action kinetics by a system of ordinary differential equations (ODEs) with polynomial right-hand sides that describe the time evolution of concentrations of chemical species involved. Given an arbitrarily large integer $K \in \mathbb{N}$, we show that there exists a CRN such that its ODE model has at least $K$ stable limit cycles. Such a CRN can be constructed with reactions of at most second order, provided that the number of chemical species grows linearly with $K$. Bounds on the minimal number of chemical species and the minimal number of chemical reactions are presented for CRNs with $K$ stable limit cycles and at most second-order or seventh-order kinetics. We also show that CRNs with only two chemical species can have $K$ stable limit cycles when the order of chemical reactions grows linearly with $K$.

     
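    To illustrate what a mass-action ODE model with a stable limit cycle looks like, the sketch below integrates the classic Brusselator network, a standard example with a single stable limit cycle when b > 1 + a^2; the constructions in the paper that achieve K stable limit cycles are different and more involved.

```python
from scipy.integrate import solve_ivp

# Brusselator CRN (mass-action): A -> X, 2X + Y -> 3X, B + X -> Y + D, X -> E,
# with the concentrations of A and B held fixed at a and b. Polynomial right-hand side:
a, b = 1.0, 3.0          # b > 1 + a**2, so the fixed point is unstable and a limit cycle exists

def rhs(t, z):
    x, y = z
    return [a + x * x * y - (b + 1.0) * x,   # dx/dt
            b * x - x * x * y]               # dy/dt

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0], max_step=0.01)
print(sol.y[:, -1])      # state at the final time, near the limit cycle
```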
  4. Beck, Jeff (Ed.)
    Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria. 
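    To make the class of models concrete, the sketch below runs a crude time-discretized (Bernoulli-thinning) simulation of a three-unit point-process network with exponential stochastic intensities and mixed excitatory/inhibitory weights; the weights, baselines, trace time constant, and time step are illustrative assumptions, and the replica-mean-field analysis itself is not reproduced here.

```python
import math
import random

# Unit i spikes with intensity lambda_i = exp(B[i] + sum_j W[i][j] * s_j),
# where s_j is a leaky trace of unit j's recent spikes (illustrative network).
B = [-1.0, -1.0, -1.5]                       # baseline log-intensities
W = [[0.0, 0.8, -1.2],                       # mixed excitation (+) and inhibition (-)
     [0.6, 0.0, -0.9],
     [0.7, 0.5, 0.0]]
tau, dt, T = 0.1, 0.001, 20.0                # trace time constant, time step, horizon

rng = random.Random(1)
s = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(int(T / dt)):
    lam = [math.exp(B[i] + sum(W[i][j] * s[j] for j in range(3))) for i in range(3)]
    for i in range(3):
        if rng.random() < lam[i] * dt:       # Bernoulli approximation of the point process
            counts[i] += 1
            s[i] += 1.0                      # spike increments the unit's own trace
    s = [sj * math.exp(-dt / tau) for sj in s]   # exponential decay of the traces

print("empirical firing rates:", [c / T for c in counts])
```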
  5.
    Abstract We formulate a class of stochastic partial differential equations based on Kelvin’s circulation theorem for ideal fluids. In these models, the velocity field is randomly transported by white-noise vector fields, as well as by its own average over realizations of this noise. We call these systems the Lagrangian averaged stochastic advection by Lie transport (LA SALT) equations. These equations are nonlinear and non-local, in both physical and probability space. Before taking this average, the equations recover the Stochastic Advection by Lie Transport (SALT) fluid equations introduced by Holm (Proc R Soc A 471(2176):20140963, 2015). Remarkably, the introduction of the non-locality in probability space in the form of momentum transported by its own mean velocity gives rise to a closed equation for the expectation field which comprises Navier–Stokes equations with Lie–Laplacian ‘dissipation’. As such, this form of non-locality provides a regularization mechanism. The formalism we develop is closely connected to the stochastic Weber velocity framework of Constantin and Iyer (Commun Pure Appl Math 61(3):330–345, 2008) in the case when the noise correlates are taken to be the constant basis vectors in $\mathbb{R}^3$ and, thus, the Lie–Laplacian reduces to the usual Laplacian. We extend this class of equations to allow for advected quantities to be present and affect the flow through exchange of kinetic and potential energies. The statistics of the solutions for the LA SALT fluid equations are found to be changing dynamically due to an array of intricate correlations among the physical variables. The statistical properties of the LA SALT physical variables propagate as local evolutionary equations which, when spatially integrated, become dynamical equations for the variances of the fluctuations. Essentially, the LA SALT theory is a non-equilibrium stochastic linear response theory for fluctuations in SALT fluids with advected quantities.
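    Since the construction above starts from Kelvin’s circulation theorem, it may help to recall the classical deterministic statement for an ideal (barotropic, conservatively forced) fluid; the stochastic models described in the abstract replace the transport velocity of the material loop by the randomly perturbed flow.

$$\frac{\mathrm{d}}{\mathrm{d}t}\oint_{c(t)} \mathbf{u}\cdot \mathrm{d}\mathbf{x} \;=\; 0, \qquad c(t)\ \text{a closed material loop advected by the velocity field } \mathbf{u}.$$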