

Title: Weighted L²-contractivity of Langevin dynamics with singular potentials
Abstract: Convergence to equilibrium of underdamped Langevin dynamics is studied under general assumptions on the potential U allowing for singularities. By modifying the direct approach to convergence in L² pioneered by Hérau and developed by Dolbeault et al., we show that the dynamics converges exponentially fast to equilibrium in the topologies L²(dμ) and L²(W* dμ), where μ denotes the invariant probability measure and W* is a suitable Lyapunov weight. In both norms, we make precise how the exponential convergence rate depends on the friction parameter γ in Langevin dynamics, by providing a lower bound scaling as min(γ, γ⁻¹). The results hold for usual polynomial-type potentials as well as potentials with singularities such as those arising from pairwise Lennard-Jones interactions between particles.
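The underdamped Langevin dynamics studied in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's method: a standard BAOAB-type splitting integrator for a one-dimensional Lennard-Jones-type singular potential, with the friction parameter γ exposed explicitly. All function names and parameter choices here are assumptions for illustration.

```python
import numpy as np

def lj_grad(x, eps=1.0, sigma=1.0):
    """Gradient of the singular Lennard-Jones-type potential
    U(x) = 4*eps*((sigma/x)**12 - (sigma/x)**6)."""
    sr6 = (sigma / x) ** 6
    return 4.0 * eps * (-12.0 * sr6 ** 2 + 6.0 * sr6) / x

def baoab_step(x, v, dt, gamma, beta, rng):
    """One BAOAB splitting step of underdamped Langevin dynamics
    with friction parameter gamma and inverse temperature beta."""
    v -= 0.5 * dt * lj_grad(x)       # B: half kick from the force
    x += 0.5 * dt * v                # A: half free drift
    c = np.exp(-gamma * dt)          # O: exact Ornstein-Uhlenbeck velocity update
    v = c * v + np.sqrt((1.0 - c ** 2) / beta) * rng.standard_normal()
    x += 0.5 * dt * v                # A: half free drift
    v -= 0.5 * dt * lj_grad(x)       # B: half kick from the force
    return x, v

rng = np.random.default_rng(0)
x, v = 2.0 ** (1.0 / 6.0), 0.0       # start at the Lennard-Jones minimum
for _ in range(1000):
    x, v = baoab_step(x, v, dt=1e-3, gamma=1.0, beta=1.0, rng=rng)
```

The singular repulsive wall at x → 0⁺ is what makes such potentials fall outside classical hypocoercivity assumptions; numerically, the small step size keeps the sketch stable near the wall.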
Award ID(s): 1954264, 1855504
NSF-PAR ID: 10316064
Date Published:
Journal Name: Nonlinearity
Volume: 35
Issue: 2
ISSN: 0951-7715
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Estimating the normalizing constant of an unnormalized probability distribution has important applications in computer science, statistical physics, machine learning, and statistics. In this work, we consider the problem of estimating the normalizing constant to within a multiplicative factor of 1 ± ε for a μ-strongly convex and L-smooth function f, given query access to f(x) and ∇f(x). We give both algorithms and lower bounds for this problem. Using an annealing algorithm combined with a multilevel Monte Carlo method based on underdamped Langevin dynamics, we show that O(d^{4/3}/ε²) queries to ∇f are sufficient. Moreover, we provide an information-theoretic lower bound, showing that at least d^{1−o(1)}/ε^{2−o(1)} queries are necessary. This provides the first nontrivial lower bound for the problem.
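The annealing idea behind such algorithms can be illustrated with a toy sketch (my own simplified setup, not the paper's algorithm): telescope the partition function Z(β) = ∫ exp(−β f(x)) dx over a ladder of inverse temperatures, estimating each ratio by Monte Carlo. For simplicity the sketch samples each level exactly for a Gaussian f, where real algorithms would use (underdamped) Langevin dynamics.

```python
import numpy as np

def log_partition_ratio(f, betas, sampler, n, rng):
    """Estimate log Z(beta_K) - log Z(beta_0) via the telescoping identity
    Z(b')/Z(b) = E_{x ~ pi_b}[exp(-(b' - b) * f(x))], pi_b ∝ exp(-b f)."""
    total = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        x = sampler(b0, n, rng)      # samples from pi_{b0}; Langevin in practice
        total += np.log(np.mean(np.exp(-(b1 - b0) * f(x))))
    return total

# Toy check on f(x) = x^2/2, where Z(beta) = sqrt(2*pi/beta) in closed form.
rng = np.random.default_rng(1)
betas = np.linspace(0.5, 1.0, 6)
sampler = lambda beta, n, rng: rng.normal(0.0, 1.0 / np.sqrt(beta), n)
est = 0.5 * np.log(2 * np.pi / betas[0]) + log_partition_ratio(
    lambda x: 0.5 * x ** 2, betas, sampler, n=100_000, rng=rng)
# est should be close to log Z(1) = 0.5 * log(2*pi)
```

The query-complexity results quoted above concern exactly how finely the β-ladder must be spaced and how accurately each level must be sampled via gradient queries.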
  2. Abstract

    Motivated by the challenge of sampling Gibbs measures with nonconvex potentials, we study a continuum birth–death dynamics. We improve results in previous works (Liu et al 2023 Appl. Math. Optim. 87 48; Lu et al 2019 arXiv:1905.09863) and provide weaker hypotheses under which the probability density of the birth–death dynamics governed by Kullback–Leibler divergence or by χ² divergence converges exponentially fast to the Gibbs equilibrium measure, with a universal rate that is independent of the potential barrier. To build a practical numerical sampler based on the pure birth–death dynamics, we consider an interacting particle system, which is inspired by the gradient flow structure and the classical Fokker–Planck equation and relies on kernel-based approximations of the measure. Using the technique of Γ-convergence of gradient flows, we show that on the torus, smooth and bounded positive solutions of the kernelised dynamics converge on finite time intervals to the pure birth–death dynamics as the kernel bandwidth shrinks to zero. Moreover, we provide quantitative estimates on the bias of minimisers of the energy corresponding to the kernelised dynamics. Finally, we prove long-time asymptotic results on the convergence of the asymptotic states of the kernelised dynamics towards the Gibbs measure.
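A minimal particle-level sketch of one kernelised birth–death step (my own simplified illustration, not the authors' scheme): estimate the particle density with a Gaussian kernel of bandwidth h, form the KL-type rate log(ρ_h/π) centred to conserve mass, then kill or duplicate particles accordingly while keeping the population size fixed.

```python
import numpy as np

def birth_death_step(x, log_pi, dt, h, rng):
    """One simplified kernelised birth-death step on a 1D particle array x.
    rate_i = log rho_h(x_i) - log pi(x_i), centred; a positive rate kills
    particle i (overwritten by a random particle), a negative rate
    duplicates it onto a random slot, keeping len(x) constant."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    rho = np.exp(-d2 / (2 * h * h)).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    rate = np.log(rho) - log_pi(x)
    rate -= rate.mean()              # mass-conserving normalisation
    x = x.copy()
    for i in range(n):
        if rng.random() < 1.0 - np.exp(-dt * abs(rate[i])):
            j = rng.integers(n)
            if rate[i] > 0:
                x[i] = x[j]          # death: particle i is replaced
            else:
                x[j] = x[i]          # birth: particle i is duplicated
    return x

rng = np.random.default_rng(2)
x0 = rng.normal(2.0, 1.0, 200)       # particles start away from the target
log_gauss = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)  # target: N(0,1)
x1 = birth_death_step(x0, log_gauss, dt=0.5, h=0.3, rng=rng)
```

Because the step only copies existing particles, it redistributes mass without moving it; practical samplers interleave such steps with diffusion, which is where the bandwidth-h bias analysed in the abstract enters.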

     
  3. An outstanding problem in statistical mechanics is the determination of whether prescribed functional forms of the pair correlation function g2(r) [or, equivalently, the structure factor S(k)] at some number density ρ can be achieved by many-body systems in d-dimensional Euclidean space. The Zhang–Torquato conjecture states that any realizable set of pair statistics, whether from a nonequilibrium or equilibrium system, can be achieved by equilibrium systems involving up to two-body interactions. To further test this conjecture, we study the realizability problem of the nonequilibrium iso-g2 process, i.e., the determination of density-dependent effective potentials that yield equilibrium states in which g2 remains invariant for a positive range of densities. Using a precise inverse algorithm that determines effective potentials matching hypothesized functional forms of g2(r) for all r and S(k) for all k, we show that the unit-step function g2, which is the zero-density limit of the hard-sphere potential, is remarkably realizable up to the packing fraction ϕ = 0.49 for d = 1. For d = 2 and 3, it is realizable up to the maximum "terminal" packing fraction ϕc = 1/2^d, at which the systems are hyperuniform, implying that the explicitly known necessary conditions for realizability are sufficient up through ϕc. For ϕ near but below ϕc, the large-r behaviors of the effective potentials are given exactly by the functional forms exp[−κ(ϕ)r] for d = 1, r^{−1/2} exp[−κ(ϕ)r] for d = 2, and r^{−1} exp[−κ(ϕ)r] (Yukawa form) for d = 3, where κ^{−1}(ϕ) is a screening length; for ϕ = ϕc, the potentials at large r are given by the pure Coulomb forms in the respective dimensions, as predicted by Torquato and Stillinger [Phys. Rev. E 68, 041113 (2003)].
We also find that the effective potential for the pair statistics of the 3D "ghost" random sequential addition at the maximum packing fraction ϕc = 1/8 is much shorter ranged than that for the 3D unit-step function g2 at ϕc; thus, it does not constrain the realizability of the unit-step function g2. Our inverse methodology yields effective potentials for realizable targets and, as expected, does not reach convergence for a target known to be non-realizable, despite the fact that it satisfies all known explicit necessary conditions. Our findings demonstrate that exploring the iso-g2 process via our inverse methodology is an effective and robust means of tackling the realizability problem and is expected to facilitate the design of novel nanoparticle systems with density-dependent effective potentials, including exotic hyperuniform states of matter.
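The quoted large-r asymptotics of the effective potentials can be collected in one small helper. This is a direct transcription of the stated functional forms; the screening parameter κ and the overall amplitude a are left as free parameters since their values are density dependent.

```python
import numpy as np

def v_eff_tail(r, kappa, d, a=1.0):
    """Large-r form of the effective pair potential for phi near phi_c:
    d=1: a*exp(-kappa*r); d=2: a*r**-0.5*exp(-kappa*r);
    d=3: a*exp(-kappa*r)/r (Yukawa). In the limit kappa -> 0
    (phi = phi_c), the d=3 form reduces to the Coulomb 1/r tail."""
    r = np.asarray(r, dtype=float)
    prefactor = {1: np.ones_like(r), 2: r ** -0.5, 3: 1.0 / r}[d]
    return a * prefactor * np.exp(-kappa * r)
```

For example, v_eff_tail(2.0, 0.0, 3) gives the Coulomb value 1/2, while any kappa > 0 multiplies it by the exponential screening factor exp(−2κ).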

     
  4. Abstract

    We investigated competition between Salpa thompsoni and protistan grazers during Lagrangian experiments near the Subtropical Front in the southwest Pacific sector of the Southern Ocean. Over a month, the salp community shifted from dominance by large (> 100 mm) oozooids and small (< 20 mm) blastozooids to large (~ 60 mm) blastozooids. Phytoplankton biomass was consistently dominated by nano- and microphytoplankton (> 2 μm cells). Using bead-calibrated flow-cytometry light scatter to estimate phytoplankton size, we quantified size-specific salp and protistan zooplankton grazing pressure. Salps were able to feed at a > 10,000 : 1 predator : prey size (linear-dimension) ratio. Small blastozooids efficiently retained cells > 1.4 μm (high end of picoplankton size, 0.6–2 μm cells) and also obtained substantial nutrition from smaller bacteria-sized cells. Larger salps could only feed efficiently on > 5.9 μm cells and were largely incapable of feeding on picoplankton. Due to the high biomass of nano- and microphytoplankton, however, all salps derived most of their (phytoplankton-based) nutrition from these larger autotrophs. Phagotrophic protists were the dominant competitors for these prey items and consumed approximately 50% of the biomass of all phytoplankton size classes each day. Using a Bayesian statistical framework, we developed an allometric-scaling equation for salp clearance rates as a function of salp and prey size [equation not rendered in the source], where ESD is prey equivalent spherical diameter (μm), TL is S. thompsoni total length, φ = 5.6 × 10⁻³ ± 3.6 × 10⁻⁴, ψ = 2.1 ± 0.13, θ = 0.58 ± 0.08, and γ = 0.46 ± 0.03, and clearance rate is in L d⁻¹ salp⁻¹. We discuss the biogeochemical and food-web implications of competitive interactions among salps, krill, and protozoans.

     
  5.
    Abstract: In this paper, we explore the impact of extra radiation on predictions of pp → tt̄X, X = h/W±/Z processes within the dimension-6 SMEFT framework. While full next-to-leading order calculations are of course preferred, they are not always practical, so it is useful to be able to capture the impacts of extra radiation using leading-order matrix elements matched to the parton shower and merged. While a matched/merged leading-order calculation for tt̄X is not expected to reproduce the next-to-leading order inclusive cross section precisely, we show that it does capture the relative impact of the EFT effects by considering the ratio of matched SMEFT inclusive cross sections to Standard Model values, σ_SMEFT(tt̄X + j)/σ_SM(tt̄X + j) ≡ μ. Furthermore, we compare leading-order calculations with and without extra radiation and find several cases, such as the effect of the operator (φ† i D↔_μ φ)(t̄ γ^μ t) on tt̄h and tt̄W, for which the relative cross section prediction increases by more than 10%, significantly larger than the uncertainty derived by varying the input scales in the calculation, including the additional scales required for matching and merging.
Being leading order at heart, matching and merging can be applied to all operators and processes relevant to pp → tt̄X (X = h/W±/Z) + jet, are computationally fast, and are not susceptible to negative weights. Therefore, they are a useful approach in tt̄X + jet studies where complete next-to-leading order results are currently unavailable or unwieldy.