Title: Convergence and nonconvergence in a nonlocal gradient flow
Abstract We study the asymptotic convergence as t → ∞ of solutions of a nonlocal differential equation that is formally a gradient flow in a constant-mass subspace of L² arising from simplified models of phase transitions. In case the solution takes finitely many values, we provide a new proof of stabilization that uses a Łojasiewicz-type gradient inequality near a degenerate curve of equilibria. Solutions with infinitely many values in general need not converge to equilibrium, however, which we demonstrate by providing counterexamples for piecewise linear and cubic functions f. Curiously, the exponential rate of convergence in the finite-value case can jump from order 1 to arbitrarily small values upon perturbation of parameters.
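The displayed equation in the abstract did not survive extraction. As a hedged sketch only, a mass-conserving nonlocal flow of the kind described (a Rubinstein–Sternberg-type phase-transition model without diffusion) and a Łojasiewicz-type gradient inequality typically take the following forms; the paper's exact equation, sign conventions, domain Ω, and exponent θ are assumptions here, not quoted from the source.

% Hedged sketch (assumptions, not quoted from the paper): a generic
% mass-conserving nonlocal flow of Rubinstein--Sternberg type, without diffusion.
\[
  \partial_t u(x,t) \;=\; -\,f\bigl(u(x,t)\bigr)
  \;+\; \frac{1}{|\Omega|}\int_\Omega f\bigl(u(y,t)\bigr)\,dy .
\]
% The mean is subtracted, so (d/dt) \int_\Omega u \, dx = 0: the flow stays in a
% constant-mass affine subspace of L^2(\Omega) and is formally the gradient flow
% of E(u) = \int_\Omega F(u) dx with F' = f, projected onto that subspace.
%
% A Lojasiewicz-type gradient inequality near an equilibrium u* (the exponent
% theta and constant C are problem-dependent):
\[
  \bigl|E(u)-E(u^\ast)\bigr|^{1-\theta}
  \;\le\;
  C\,\Bigl\| f(u) - \frac{1}{|\Omega|}\int_\Omega f(u)\,dy \Bigr\|_{L^2(\Omega)},
  \qquad \theta \in \bigl(0,\tfrac12\bigr].
\]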
Award ID(s):
2106534
PAR ID:
10655257
Author(s) / Creator(s):
Publisher / Repository:
London Mathematical Society
Date Published:
Journal Name:
Journal of the London Mathematical Society
Volume:
111
Issue:
1
ISSN:
0024-6107
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has proved elusive to translate these results to the Riemannian case. Zhang et al. [2022] have recently shown that geodesically convex-concave Riemannian problems always admit saddle-point solutions. Inspired by this result, we study whether a performance gap between Riemannian and optimal Euclidean-space convex-concave algorithms is necessary. We answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean result. Our results also extend to the stochastic or non-smooth case, where RCEG and Riemannian gradient descent ascent (RGDA) achieve near-optimal convergence rates up to factors depending on the curvature of the manifold.
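  As context for item 1, a hedged sketch of how an extragradient-type step is usually written on a Riemannian manifold, using the exponential map and parallel transport; the exact correction term and step-size rule of RCEG in the cited work are not reproduced, so the display should be read as a generic template rather than the paper's method.

% Generic Riemannian extragradient template (a sketch, not the paper's RCEG):
% exp_z is the exponential map at z, Gamma transports tangent vectors along the
% connecting geodesic, and F is the saddle-point vector field of f(x, y).
\[
  z_{k+1/2} \;=\; \exp_{z_k}\!\bigl(-\eta\,F(z_k)\bigr),
  \qquad
  z_{k+1} \;=\; \exp_{z_k}\!\bigl(-\eta\,\Gamma_{z_{k+1/2}}^{\,z_k} F(z_{k+1/2})\bigr),
\]
\[
  F(z) \;=\; \bigl(\operatorname{grad}_x f(x,y),\, -\operatorname{grad}_y f(x,y)\bigr),
  \qquad z=(x,y).
\]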
  2. Abstract We study the convergence of several natural policy gradient (NPG) methods in infinite-horizon discounted Markov decision processes with regular policy parametrizations. For a variety of NPGs and reward functions we show that the trajectories in state-action space are solutions of gradient flows with respect to Hessian geometries, based on which we obtain global convergence guarantees and convergence rates. In particular, we show linear convergence for unregularized and regularized NPG flows with the metrics proposed by Kakade and Morimura and co-authors by observing that these arise from the Hessian geometries of conditional entropy and entropy respectively. Further, we obtain sublinear convergence rates for Hessian geometries arising from other convex functions like log-barriers. Finally, we interpret the discrete-time NPG methods with regularized rewards as inexact Newton methods if the NPG is defined with respect to the Hessian geometry of the regularizer. This yields local quadratic convergence rates of these methods for step size equal to the inverse penalization strength. 
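  A hedged sketch of the Hessian-geometry picture invoked in item 2: a convex potential Φ on the state-action polytope induces a Riemannian metric through its Hessian, and the natural policy gradient trajectory is read as the gradient flow of the reward with respect to that metric; the exact parametrization and constraint handling in the paper are not reproduced here.

% Hessian geometry induced by a convex potential Phi, and the associated
% gradient flow of the reward R (a sketch; the notation is assumed, not quoted).
\[
  g^{\Phi}(p) \;=\; \nabla^2\Phi(p),
  \qquad
  \dot p(t) \;=\; \bigl(\nabla^2\Phi\bigl(p(t)\bigr)\bigr)^{-1}\,\nabla R\bigl(p(t)\bigr).
\]
% Per the abstract, the Kakade and Morimura metrics arise from the Hessian
% geometries of conditional entropy and entropy, giving linear convergence,
% while other convex potentials such as log-barriers give sublinear rates.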
  3. Abstract The Stein variational gradient descent (SVGD) algorithm is a deterministic particle method for sampling. However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein gradient flow corresponding to the KL-divergence minimization. In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein gradient flow. We establish various theoretical properties of the Regularized Stein Variational Gradient Flow (and its time-discretization), including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions. We provide preliminary numerical evidence of the improved performance offered by the regularization.
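  For context on item 3, a minimal numpy sketch of the standard (unregularized) SVGD particle update with an RBF kernel; the regularized flow proposed in the paper is not reproduced here, and the bandwidth h, step size eps, and toy Gaussian target are illustrative choices, not taken from the source.

import numpy as np

def rbf_kernel(x, h):
    """Pairwise RBF kernel K[j, i] = k(x_j, x_i) and its gradient w.r.t. x_j."""
    diff = x[:, None, :] - x[None, :, :]       # diff[j, i] = x_j - x_i, shape (n, n, d)
    sq = np.sum(diff ** 2, axis=-1)            # squared distances, shape (n, n)
    K = np.exp(-sq / (2.0 * h ** 2))           # kernel matrix, shape (n, n)
    gradK = -diff * K[..., None] / h ** 2      # grad_{x_j} k(x_j, x_i), shape (n, n, d)
    return K, gradK

def svgd_step(x, score, h=1.0, eps=0.05):
    """One unregularized SVGD update.
    x: (n, d) particles; score(x): (n, d) array of grad log p at the particles."""
    n = x.shape[0]
    K, gradK = rbf_kernel(x, h)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ score(x) + gradK.sum(axis=0)) / n
    return x + eps * phi

if __name__ == "__main__":
    # Toy example: push 100 particles toward a 2-D standard Gaussian (score = -x).
    rng = np.random.default_rng(0)
    particles = 3.0 * rng.normal(size=(100, 2))
    for _ in range(500):
        particles = svgd_step(particles, score=lambda z: -z)
    print("mean:", particles.mean(axis=0), "std:", particles.std(axis=0))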
  4. Abstract Motivated by the challenge of sampling Gibbs measures with nonconvex potentials, we study a continuum birth–death dynamics. We improve results in previous works (Liu et al 2023 Appl. Math. Optim. 87 48; Lu et al 2019 arXiv:1905.09863) and provide weaker hypotheses under which the probability density of the birth–death dynamics governed by Kullback–Leibler divergence or by χ² divergence converges exponentially fast to the Gibbs equilibrium measure, with a universal rate that is independent of the potential barrier. To build a practical numerical sampler based on the pure birth–death dynamics, we consider an interacting particle system, which is inspired by the gradient flow structure and the classical Fokker–Planck equation and relies on kernel-based approximations of the measure. Using the technique of Γ-convergence of gradient flows, we show that on the torus, smooth and bounded positive solutions of the kernelised dynamics converge on finite time intervals to the pure birth–death dynamics as the kernel bandwidth shrinks to zero. Moreover, we provide quantitative estimates on the bias of minimisers of the energy corresponding to the kernelised dynamics. Finally, we prove long-time asymptotic results on the convergence of the asymptotic states of the kernelised dynamics towards the Gibbs measure.
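  A hedged sketch of the KL-governed birth–death dynamics referred to in item 4; the normalization and the χ²-governed variant used in the paper may differ from what is written here.

% Mass-preserving birth--death dynamics driven by the KL divergence toward a
% Gibbs target pi proportional to exp(-V) (a sketch; conventions are assumed).
\[
  \partial_t \rho_t(x)
  \;=\;
  -\,\rho_t(x)\left(
      \log\frac{\rho_t(x)}{\pi(x)}
      \;-\;
      \int \log\frac{\rho_t(y)}{\pi(y)}\,\rho_t(y)\,dy
  \right).
\]
% Subtracting the mean rate keeps \int rho_t dx = 1; the convergence claimed in
% the abstract is rho_t -> pi exponentially fast, at a rate independent of the
% potential barrier of V.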
  5. Motivated by gradient methods in optimization theory, we give methods based on ψ-fractional derivatives of order α in order to solve unconstrained optimization problems. The convergence of these methods is analyzed in detail. This paper also presents an Adams–Bashforth–Moulton (ABM) method for the estimation of solutions to equations involving ψ-fractional derivatives. Numerical examples using the ABM method show that the fractional order α and weight ψ are tunable parameters, which can be helpful for improving the performance of gradient descent methods.
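  For item 5, a hedged sketch of the ψ-Caputo fractional derivative of order α ∈ (0, 1), which is one standard meaning of "ψ-fractional derivative"; whether the paper uses exactly this definition, and how it enters the gradient step, are assumptions here.

% psi-Caputo fractional derivative of order alpha in (0,1), for increasing psi
% (an Almeida-type definition; assumed, not quoted from the paper):
\[
  {}^{C}D^{\alpha;\psi}_{a+} f(t)
  \;=\;
  \frac{1}{\Gamma(1-\alpha)}
  \int_a^t \bigl(\psi(t)-\psi(s)\bigr)^{-\alpha}\, f'(s)\,ds .
\]
% One natural fractional-order analogue of a gradient step, replacing the
% ordinary derivative by the operator above (the paper's exact scheme and its
% ABM discretization are not reproduced here):
\[
  x_{k+1} \;=\; x_k \;-\; \eta \,\bigl({}^{C}D^{\alpha;\psi}_{a+} f\bigr)(x_k).
\]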