In this paper, we present a convergence analysis for a robust stochastic structure-preserving Lagrangian numerical scheme for computing the effective diffusivity of time-dependent chaotic flows, which are modeled by stochastic differential equations (SDEs). Our numerical scheme is based on a splitting method for the corresponding SDEs, in which the deterministic subproblem is discretized with a structure-preserving scheme and the random subproblem with the Euler-Maruyama scheme. We obtain a sharp, uniform-in-time convergence result for the proposed numerical scheme, which allows us to accurately compute long-time solutions of the SDEs and hence the effective diffusivity of time-dependent chaotic flows. Finally, we present numerical results demonstrating the accuracy and efficiency of the proposed method in computing the effective diffusivity of the time-dependent Arnold-Beltrami-Childress (ABC) flow and the Kolmogorov flow in three-dimensional space.
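To make the splitting concrete, the following is a minimal sketch of one step of such a method for $dX = v(X,t)\,dt + \sigma\,dW_t$; the implicit-midpoint substep is a generic structure-preserving stand-in (not necessarily the authors' choice), and all names here are illustrative.

```python
import numpy as np

def splitting_step(x, t, dt, v, sigma, rng, iters=5):
    """One Lie splitting step for dX = v(X, t) dt + sigma dW.

    Deterministic substep: implicit midpoint (a generic structure-
    preserving integrator), solved by fixed-point iteration.
    Stochastic substep: Euler-Maruyama.
    """
    y = x.copy()
    for _ in range(iters):  # fixed-point iteration for the implicit rule
        y = x + dt * v(0.5 * (x + y), t + 0.5 * dt)
    dW = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
    return y + sigma * dW
```

Long-time iterates of such a step can then feed the standard mean-squared-displacement estimate of the effective diffusivity, $D^E \approx \mathbb{E}\,|X_T - X_0|^2 / (2dT)$ in dimension $d$.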
Weak Solutions to the Muskat Problem with Surface Tension Via Optimal Transport
Abstract: Inspired by recent works on the threshold dynamics scheme for multi-phase mean curvature flow (by Esedoḡlu–Otto and Laux–Otto), we introduce a novel framework to approximate solutions of the Muskat problem with surface tension. Our approach is based on interpreting the Muskat problem as a gradient flow in a product Wasserstein space. This perspective allows us to construct weak solutions via a minimizing movements scheme. Rather than working directly with the singular surface tension force, we instead relax the perimeter functional with the heat content energy approximation of Esedoḡlu–Otto. The heat content energy allows us to show the convergence of the associated minimizing movement scheme in the Wasserstein space, and makes the scheme far more tractable for numerical simulations. Under a typical energy convergence assumption, we show that our scheme converges to weak solutions of the Muskat problem with surface tension. We then conclude the paper with a discussion on some numerical experiments and on equilibrium configurations.
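Schematically, the minimizing movements step in the product Wasserstein space takes the form below; the notation (time step $\tau$, phases $\rho_1,\rho_2$, heat-content-relaxed energy $E_h$) is ours and suppresses the phase-dependent mobility weights.

$$(\rho_1^{k+1},\rho_2^{k+1}) \in \operatorname*{argmin}_{(\rho_1,\rho_2)} \; E_h(\rho_1,\rho_2) + \frac{1}{2\tau}\left( W_2^2\big(\rho_1,\rho_1^{k}\big) + W_2^2\big(\rho_2,\rho_2^{k}\big) \right),$$

where $W_2$ denotes the 2-Wasserstein distance.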
- Award ID(s): 1900804
- PAR ID: 10463913
- Date Published:
- Journal Name: Archive for Rational Mechanics and Analysis
- Volume: 239
- Issue: 1
- ISSN: 0003-9527
- Page Range / eLocation ID: 389 to 430
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space. This equivalence, introduced by Jordan, Kinderlehrer and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in Wasserstein space. Solving the optimization problem associated with each JKO step, however, presents serious computational challenges. We introduce a scalable method to approximate Wasserstein gradient flows, targeted to machine learning applications. Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps, which can be optimized by stochastic gradient descent. In contrast to previous work, our method does not require domain discretization or particle simulation. As a result, we can sample from the measure at each time step of the diffusion and compute its probability density. We demonstrate the performance of our algorithm by computing diffusions following the Fokker-Planck equation and apply it to unnormalized density sampling as well as nonlinear filtering.
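For context, a single JKO step, together with its ICNN reparametrization (pushing the current measure forward by the gradient of a convex potential, as Brenier's theorem permits), can be sketched as

$$\rho_{k+1} = \operatorname*{argmin}_{\rho} \; F(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_k), \qquad \rho_{k+1} \approx (\nabla \psi_\theta)_{\#}\, \rho_k,$$

where $F$ is the driving functional (e.g., entropy plus potential energy for Fokker-Planck) and $\psi_\theta$ an input-convex neural network; these symbols are ours, chosen for illustration.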
Abstract: The Stein variational gradient descent (SVGD) algorithm is a deterministic particle method for sampling. However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein gradient flow corresponding to KL-divergence minimization. In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein gradient flow. We establish various theoretical properties of the Regularized Stein Variational Gradient Flow (and its time-discretization), including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions. We provide preliminary numerical evidence of the improved performance offered by the regularization.
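For reference, the baseline (unregularized) SVGD particle update analyzed here can be sketched as follows; the RBF kernel, bandwidth, and step size are illustrative choices, and this is not the paper's regularized variant.

```python
import numpy as np

def svgd_step(x, grad_log_p, eps=0.1, h=1.0):
    """One SVGD update for particles x of shape [n, d].

    grad_log_p: returns the score grad log p(x) row-wise, shape [n, d].
    Kernel: k(a, b) = exp(-|a - b|^2 / (2 h)).
    """
    diff = x[:, None, :] - x[None, :, :]   # pairwise differences, [n, n, d]
    sq = np.sum(diff ** 2, axis=-1)        # squared distances, [n, n]
    K = np.exp(-sq / (2.0 * h))            # kernel matrix
    grad_K = -diff * K[..., None] / h      # grad_K[i, j] = grad_{x_i} k(x_i, x_j)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K @ grad_log_p(x) + grad_K.sum(axis=0)) / x.shape[0]
    return x + eps * phi
```

As a usage example, with `grad_log_p = lambda x: -x` (a standard Gaussian target), repeated calls drive the particle cloud toward the target distribution.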
Abstract: A local discontinuous Galerkin (LDG) method for approximating large deformations of prestrained plates is introduced and tested on several insightful numerical examples in Bonito et al. (2022, LDG approximation of large deformations of prestrained plates. J. Comput. Phys., 448, 110719). This paper presents a numerical analysis of this LDG method, focusing on the free boundary case. The problem consists of minimizing a fourth-order bending energy subject to a nonlinear and nonconvex metric constraint. The energy is discretized using LDG and a discrete gradient flow is used for computing discrete minimizers. We first show $\varGamma$-convergence of the discrete energy to the continuous one. Then we prove that the discrete gradient flow decreases the energy at each step and computes discrete minimizers with control of the metric constraint defect. We also present a numerical scheme for initialization of the gradient flow and discuss its conditional stability.
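In schematic form (our notation, suppressing the LDG details), each gradient flow step solves a linear problem for an update $\delta y_h$ in a discrete tangent space $\mathcal{F}_h(y_h^n)$ that enforces the linearized metric constraint:

$$\tau^{-1}\,(\delta y_h, v_h)_{H_h} + a_h\big(y_h^{n} + \delta y_h,\, v_h\big) = 0 \quad \forall\, v_h \in \mathcal{F}_h(y_h^n), \qquad y_h^{n+1} = y_h^n + \delta y_h,$$

where $a_h$ stands in for the discrete bending form and $(\cdot,\cdot)_{H_h}$ for the chosen discrete metric.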
Motivated by the computation of the non-parametric maximum likelihood estimator (NPMLE) and the Bayesian posterior in statistics, this paper explores the problem of convex optimization over the space of all probability distributions. We introduce an implicit scheme, called the implicit KL proximal descent (IKLPD) algorithm, for discretizing a continuous-time gradient flow relative to the Kullback-Leibler (KL) divergence for minimizing a convex target functional. We show that IKLPD converges to a global optimum at a polynomial rate from any initialization; moreover, if the objective functional is strongly convex relative to the KL divergence, for example, when the target functional itself is a KL divergence as in the context of Bayesian posterior computation, IKLPD exhibits globally exponential convergence. Computationally, we propose a numerical method based on normalizing flows to realize IKLPD. Conversely, our numerical method can also be viewed as a new approach that sequentially trains a normalizing flow for minimizing a convex functional with a strong theoretical guarantee.
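The implicit step behind IKLPD can be summarized as the KL-proximal update below (our notation: convex target functional $F$, step size $\tau$); in practice a normalizing flow parametrizes the minimizer at each step.

$$\rho_{k+1} \in \operatorname*{argmin}_{\rho} \; F(\rho) + \frac{1}{\tau}\,\mathrm{KL}\big(\rho \,\|\, \rho_k\big).$$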