Title: Probability flow solution of the Fokker–Planck equation
Abstract: The method of choice for integrating the time-dependent Fokker–Planck equation (FPE) in high dimension is to generate samples from the solution via integration of the associated stochastic differential equation (SDE). Here, we study an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability. Acting as a transport map, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time. Unlike integration of the stochastic dynamics, the method has the advantage of giving direct access to quantities that are challenging to estimate from trajectories alone, such as the probability current, the density itself, and its entropy. The probability flow equation depends on the gradient of the logarithm of the solution (its ‘score’), and so is a priori unknown. To resolve this dependence, we model the score with a deep neural network that is learned on the fly by propagating a set of samples according to the instantaneous probability current. We show theoretically that the proposed approach controls the Kullback–Leibler (KL) divergence from the learned solution to the target, while learning on external samples from the SDE does not control either direction of the KL divergence. Empirically, we consider several high-dimensional FPEs from the physics of interacting particle systems. We find that the method accurately matches analytical solutions when they are available, as well as moments computed via Monte Carlo when they are not. Moreover, the method offers compelling predictions for the global entropy production rate that outperform those obtained from learning on stochastic trajectories, and can effectively capture non-equilibrium steady-state probability currents over long time intervals.
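As a rough illustration of the transport map described in the abstract, the sketch below pushes samples through an explicit Euler discretization of the probability flow ODE dX/dt = b(X, t) - D * grad log rho(X, t), with the unknown score replaced by a neural surrogate. The drift, the diffusion coefficient D, and the ScoreNet architecture are placeholder assumptions, not the authors' setup; in the paper the score network is learned on the fly rather than fixed.

```python
# A minimal sketch, assuming a placeholder drift and score network; not the
# authors' implementation. Samples are transported deterministically by
# dX/dt = b(X, t) - D * s_theta(X, t), where s_theta approximates grad log rho.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Small MLP surrogate for the score grad log rho(x, t) (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, dim))

    def forward(self, x, t):
        tt = torch.full_like(x[:, :1], t)
        return self.net(torch.cat([x, tt], dim=1))

def drift(x, t):
    # Assumed drift b(x, t) of the underlying SDE (simple linear relaxation).
    return -x

def transport(x0, score_net, t0=0.0, t1=1.0, n_steps=100, D=1.0):
    """Push samples from t0 to t1 with explicit Euler steps on the flow ODE."""
    x, dt = x0.clone(), (t1 - t0) / n_steps
    for k in range(n_steps):
        t = t0 + k * dt
        with torch.no_grad():
            x = x + dt * (drift(x, t) - D * score_net(x, t))
    return x

samples_t1 = transport(torch.randn(512, 2), ScoreNet(2))
```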
Award ID(s):
2134216
PAR ID:
10434854
Author(s) / Creator(s):
Publisher / Repository:
IOP Publishing
Date Published:
Journal Name:
Machine Learning: Science and Technology
Volume:
4
Issue:
3
ISSN:
2632-2153
Format(s):
Medium: X
Size(s):
Article No. 035012
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a supervised learning framework for training generative models for density estimation. Generative models, including generative adversarial networks (GANs), normalizing flows, and variational auto-encoders (VAEs), are usually considered unsupervised learning models, because labeled data are usually unavailable for training. Despite the success of generative models, there are several issues with unsupervised training, e.g., the requirement of reversible architectures, vanishing gradients, and training instability. To enable supervised learning in generative models, we utilize the score-based diffusion model to generate labeled data. Unlike existing diffusion models that train neural networks to learn the score function, we develop a training-free score estimation method. This approach uses mini-batch-based Monte Carlo estimators to directly approximate the score function at any spatio-temporal location when solving an ordinary differential equation (ODE) corresponding to the reverse-time stochastic differential equation (SDE). This approach can offer both high accuracy and substantial time savings in neural network training. Once the labeled data are generated, we can train a simple, fully connected neural network to learn the generative model in a supervised manner. Compared with existing normalizing flow models, our method does not require the use of reversible neural networks and avoids the computation of the Jacobian matrix. Compared with existing diffusion models, our method does not need to solve the reverse-time SDE to generate new samples. As a result, the sampling efficiency is significantly improved. We demonstrate the performance of our method by applying it to a set of 2D datasets as well as real data from the University of California Irvine (UCI) repository.
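A hedged sketch of the training-free, mini-batch Monte Carlo idea described above (the function names and the noise schedule are assumptions, not the paper's exact construction): for a forward process x_t = alpha_t * x_0 + sigma_t * eps, the marginal density is a mixture of Gaussians centered at alpha_t * x_0_i, so the score at any (x, t) can be approximated by a posterior-weighted average over a mini-batch of clean samples, with no network training.

```python
# Sketch of a mini-batch Monte Carlo score estimator; schedule is an assumption.
import numpy as np

def mc_score(x, t, data_batch, alpha, sigma):
    """Estimate grad_x log p_t(x) from a mini-batch of clean samples."""
    a, s = alpha(t), sigma(t)
    diffs = x[None, :] - a * data_batch            # (batch, dim)
    logw = -np.sum(diffs**2, axis=1) / (2 * s**2)  # Gaussian log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                                   # posterior mixture weights
    return -(w[:, None] * diffs).sum(axis=0) / s**2

# Example variance-preserving schedule (assumption, for illustration only).
alpha = lambda t: np.exp(-t)
sigma = lambda t: np.sqrt(1.0 - np.exp(-2.0 * t))
score = mc_score(np.zeros(2), 0.5, np.random.randn(256, 2), alpha, sigma)
```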
  2. We consider particles obeying Langevin dynamics while being at known positions and having known velocities at the two end-points of a given interval. Their motion in phase space can be modeled as an Ornstein–Uhlenbeck process conditioned at the two end-points, a generalization of the Brownian bridge. Using standard ideas from stochastic optimal control, we construct a stochastic differential equation (SDE) that generates such a bridge and agrees with the statistics of the conditioned process, as a degenerate diffusion. Higher-order linear diffusions are also considered. In general, a time-varying drift is sufficient to modify the prior SDE and meet the end-point conditions. When the drift is obtained by solving a suitable differential Lyapunov equation, the SDE correctly models the statistics of the bridge. These types of models are relevant for controlling and modeling the distribution of particles and for the interpolation of density functions.
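A minimal sketch of the simplest special case mentioned above, a Brownian bridge (Brownian motion conditioned to reach a fixed endpoint): the time-varying drift (x_T - x)/(T - t) steers the path, playing the role that, in the Ornstein–Uhlenbeck and higher-order cases, is instead obtained from a differential Lyapunov equation. The step count, endpoints, and scalar state below are illustrative assumptions.

```python
# Euler-Maruyama simulation of a Brownian bridge (illustrative special case).
import numpy as np

def brownian_bridge(x0, xT, T=1.0, n_steps=500, rng=np.random.default_rng(0)):
    dt = T / n_steps
    x, path = x0, [x0]
    for k in range(n_steps - 1):
        t = k * dt
        drift = (xT - x) / (T - t)            # steers the path to the endpoint
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    path.append(xT)                           # pinned at the right end-point
    return np.array(path)

path = brownian_bridge(x0=0.0, xT=2.0)
```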
  3. Score-matching-based diffusion has been shown to achieve state-of-the-art results in generative modeling. In the original score-matching-based diffusion algorithm, the forward process is a stochastic differential equation whose probability density evolves according to a linear partial differential equation, the Fokker–Planck equation. A drawback of this approach is that one needs the data distribution to have a Lipschitz logarithmic gradient, which excludes a large class of data distributions that have compact support. We present a deterministic diffusion process for which the vector fields are always Lipschitz, so the score does not explode for probability measures with compact support. This deterministic diffusion process can be seen as a regularization of the porous media equation, which makes it possible to guarantee long-term convergence of the forward process to the noise distribution. Although the porous media equation itself is not always guaranteed to have a Lipschitz vector field, it can be used to understand the closeness of the algorithm's output to the data distribution as a function of the time horizon and the score-matching error. This analysis enables us to show that the algorithm has better dependence on the score-matching error than approaches based on stochastic diffusions. Using numerical experiments, we verify our theoretical results on example one- and two-dimensional data distributions which are compactly supported. Additionally, we validate the approach on a modified MNIST dataset for which the distribution is concentrated on a compact set. In each experiment, the approach using deterministic diffusion performs better than the diffusion algorithm with a stochastic forward process when considering the FID scores of the generated samples.
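For context, the sketch below shows a generic single-noise-level denoising score-matching objective, i.e., the score-matching ingredient named above for the standard stochastic forward process; it is not the deterministic, porous-media-style forward process itself, and the network architecture and noise level sigma are assumptions.

```python
# Generic denoising score-matching loss (illustrative; one noise level only).
import torch
import torch.nn as nn

def dsm_loss(score_net, x0, sigma=0.5):
    """E || sigma * s_theta(x0 + sigma*eps) + eps ||^2 over a data batch."""
    eps = torch.randn_like(x0)
    x_noisy = x0 + sigma * eps
    return ((sigma * score_net(x_noisy) + eps) ** 2).sum(dim=1).mean()

score_net = nn.Sequential(nn.Linear(2, 128), nn.SiLU(), nn.Linear(128, 2))
loss = dsm_loss(score_net, torch.randn(256, 2))
```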
  4. The ability to distinguish between stochastic systems based on their trajectories is crucial in thermodynamics, chemistry, and biophysics. The Kullback–Leibler (KL) divergence, D_{KL}^{AB}(0,τ), quantifies the distinguishability between the two ensembles of length-τ trajectories from Markov processes A and B. However, evaluating D_{KL}^{AB}(0,τ) from histograms of trajectories faces significant sampling difficulties, and no theory explicitly reveals which dynamical features contribute to the distinguishability. This work provides a general formula that decomposes D_{KL}^{AB}(0,τ) in space and time for any Markov process, arbitrarily far from equilibrium or steady state. It circumvents the sampling difficulty of evaluating D_{KL}^{AB}(0,τ). Furthermore, it explicitly connects the trajectory KL divergence with individual transition events and their waiting-time statistics. The results provide insights into the distinguishability between Markov processes, leading to new theoretical frameworks for designing biological sensors and optimizing signal transduction.
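The paper's formula is not reproduced here, but a worked discrete-time analogue illustrates the space-time decomposition: for finite-state Markov chains with transition matrices PA and PB (placeholder values below), the chain rule splits the trajectory KL divergence into an initial-condition term plus, at each time step, the occupancy of process A weighted by per-state transition KL terms, so no trajectory histograms are needed. The paper's result additionally covers continuous time and waiting-time statistics.

```python
# Discrete-time analogue of the space-time KL decomposition (illustrative).
import numpy as np

def trajectory_kl(p0_A, p0_B, PA, PB, n_steps):
    """KL divergence between ensembles of length-n_steps trajectories."""
    kl = np.sum(p0_A * np.log(p0_A / p0_B))            # initial-condition term
    pA = p0_A.copy()
    for _ in range(n_steps):
        # occupancy of A at time t, times per-state transition KL divergences
        kl += np.sum(pA[:, None] * PA * np.log(PA / PB))
        pA = pA @ PA                                    # propagate A's marginal
    return kl

PA = np.array([[0.9, 0.1], [0.2, 0.8]])
PB = np.array([[0.7, 0.3], [0.4, 0.6]])
p0 = np.array([0.5, 0.5])
print(trajectory_kl(p0, p0, PA, PB, n_steps=10))
```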
  5. We provide a second-order stochastic differential equation (SDE), which characterizes the continuous-time dynamics of accelerated stochastic mirror descent (ASMD) for strongly convex functions. This SDE plays a central role in designing new discrete-time ASMD algorithms via numerical discretization, and providing neat analyses of their convergence rates based on Lyapunov functions. Our results suggest that the only existing ASMD algorithm, namely, AC-SA proposed in Ghadimi & Lan (2012) is one instance of its kind, and we can actually derive new instances of ASMD with fewer tuning parameters. This sheds light on revisiting accelerated stochastic optimization through the lens of SDEs, which can lead to a better understanding of acceleration in stochastic optimization, as well as new simpler algorithms. Numerical experiments on both synthetic and real data support our theory. 
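Purely as an illustration of discretizing a second-order (position/velocity) SDE, not the specific SDE derived for ASMD, the sketch below applies an Euler-Maruyama scheme in the Euclidean special case where the mirror map is the identity; the damping coefficient, noise level, and objective are placeholder assumptions.

```python
# Euler-Maruyama discretization of a generic second-order SDE (illustrative).
import numpy as np

def grad_f(x):
    return x            # assumed strongly convex objective f(x) = ||x||^2 / 2

def second_order_sde(x0, n_steps=1000, dt=1e-2, damping=3.0, noise=0.1,
                     rng=np.random.default_rng(0)):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(n_steps):
        x = x + dt * v                                   # position update
        v = (v - dt * (damping * v + grad_f(x))
             + noise * np.sqrt(dt) * rng.standard_normal(x.shape))
    return x

x_final = second_order_sde(np.ones(5) * 2.0)
```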