

Title: An approximation theory framework for measure-transport sampling algorithms
This article presents a general approximation-theoretic framework to analyze measure transport algorithms for probabilistic modeling. A primary motivating application for such algorithms is sampling—a central task in statistical inference and generative modeling. We provide a priori error estimates in the continuum limit, i.e., when the measures (or their densities) are given, but when the transport map is discretized or approximated using a finite-dimensional function space. Our analysis relies on the regularity theory of transport maps and on classical approximation theory for high-dimensional functions. A third element of our analysis, which is of independent interest, is the development of new stability estimates that relate the distance between two maps to the distance (or divergence) between the pushforward measures they define. We present a series of applications of our framework, where quantitative convergence rates are obtained for practical problems using Wasserstein metrics, maximum mean discrepancy, and Kullback–Leibler divergence. Specialized rates for approximations of the popular triangular Knöthe–Rosenblatt maps are obtained, followed by numerical experiments that demonstrate and extend our theory.
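The setting described in the abstract can be illustrated with a small hedged sketch (illustrative only, not taken from the paper): in one dimension, the Knöthe–Rosenblatt map from Uniform(0,1) to N(0,1) is the inverse Gaussian CDF. Approximating that map in a finite-dimensional polynomial space and measuring the Wasserstein-1 distance between the resulting pushforward and the target mimics the kind of a priori error the framework quantifies. All names and parameter choices below are assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist

# Hedged 1D sketch (setup is illustrative, not the paper's): the
# Knothe-Rosenblatt map from Uniform(0,1) to N(0,1) is the inverse
# Gaussian CDF; we approximate it in a finite-dimensional polynomial
# space and measure W1 between the pushforward and the target.

rng = np.random.default_rng(0)
nd = NormalDist()

def exact_map(u):
    return np.array([nd.inv_cdf(t) for t in u])

# Fit a degree-7 polynomial surrogate on a grid away from the tails,
# where the inverse CDF blows up.
grid = np.linspace(0.005, 0.995, 400)
coeffs = np.polyfit(grid, exact_map(grid), deg=7)

# Push the same reference samples through both maps.
u = rng.uniform(0.005, 0.995, size=20000)
exact_samples = exact_map(u)
approx_samples = np.polyval(coeffs, u)

# Empirical W1 between the two pushforwards (sorted coupling in 1D).
w1 = float(np.mean(np.abs(np.sort(exact_samples) - np.sort(approx_samples))))
print(f"W1(approximate pushforward, target) ~ {w1:.4f}")
```

Enriching the approximation space (raising the polynomial degree) shrinks the map error, and the stability estimates of the paper are what convert such map errors into distances between the pushforward measures.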
Award ID(s):
2208535
PAR ID:
10661156
Publisher / Repository:
AMS
Date Published:
Journal Name:
Mathematics of Computation
Volume:
94
Issue:
354
ISSN:
0025-5718
Page Range / eLocation ID:
1863 to 1909
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider synthesis and analysis of probability measures using the entropy-regularized Wasserstein-2 cost and its unbiased version, the Sinkhorn divergence. The synthesis problem consists of computing the barycenter, with respect to these costs, of m reference measures given a set of coefficients belonging to the m-dimensional simplex. The analysis problem consists of finding the coefficients for the closest barycenter in the Wasserstein-2 distance to a given measure μ. Under the weakest assumptions on the measures thus far in the literature, we compute the derivative of the entropy-regularized Wasserstein-2 cost. We leverage this to establish a characterization of regularized barycenters as solutions to a fixed-point equation for the average of the entropic maps from the barycenter to the reference measures. This characterization yields a finite-dimensional, convex, quadratic program for solving the analysis problem when μ is a barycenter. It is shown that these coordinates, as well as the value of the barycenter functional, can be estimated from samples with dimension-independent rates of convergence, a hallmark of entropy-regularized optimal transport, and we verify these rates experimentally. We also establish that barycentric coordinates are stable with respect to perturbations in the Wasserstein-2 metric, suggesting a robustness of these coefficients to corruptions. We employ the barycentric coefficients as features for classification of corrupted point cloud data, and show that compared to neural network baselines, our approach is more efficient in small training data regimes. 
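The entropy-regularized Wasserstein-2 cost at the heart of this abstract is computed in practice with Sinkhorn iterations. The following is a hedged sketch (not the paper's estimator; `eps`, `iters`, and the measures are illustrative choices) of that computation between two discrete measures on the line.

```python
import numpy as np

# Hedged sketch (not the paper's estimator): plain Sinkhorn iterations
# for the entropy-regularized Wasserstein-2 cost between two discrete
# measures on the line; `eps` is the regularization strength.

def sinkhorn_cost(x, y, a, b, eps=0.1, iters=500):
    C = (x[:, None] - y[None, :]) ** 2     # squared-distance cost matrix
    K = np.exp(-C / eps)                   # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                 # alternating marginal scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]        # entropic transport plan
    return float(np.sum(P * C)), P

n = 50
x = np.linspace(0.0, 1.0, n)
y = x + 0.5                                # second measure: x shifted by 0.5
a = b = np.full(n, 1.0 / n)                # uniform weights
cost, P = sinkhorn_cost(x, y, a, b)
print(f"entropic OT cost ~ {cost:.3f}")    # unregularized W2^2 would be 0.25
```

The entropic cost slightly exceeds the unregularized value because the plan is blurred by the regularization; the Sinkhorn divergence mentioned in the abstract removes this bias by subtracting self-transport terms.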
  2. Optimal transport maps and plans between two absolutely continuous measures $$\mu$$ and $$\nu$$ can be approximated by solving semidiscrete or fully discrete optimal transport problems. These two problems ensue from approximating $$\mu$$ or both $$\mu$$ and $$\nu$$ by Dirac measures. Extending an idea from Gigli (2011, On Hölder continuity-in-time of the optimal transport map towards measures along a curve. Proc. Edinb. Math. Soc. (2), 54, 401–409), we characterize how transport plans change under the perturbation of both $$\mu$$ and $$\nu$$. We apply this insight to prove error estimates for semidiscrete and fully discrete algorithms in terms of errors solely arising from approximating measures. We obtain weighted $$L^2$$ error estimates for both types of algorithms with a convergence rate $$O(h^{1/2})$$. This coincides with the rate in Theorem 5.4 in Berman (2018, Convergence rates for discretized Monge–Ampère equations and quantitative stability of optimal transport. Preprint available at arXiv:1803.00785) for semidiscrete methods, but the error notion is different.
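The fully discrete approximation idea, replacing both measures by Dirac masses, is easy to see in one dimension, where the optimal coupling simply pairs sorted atoms. The following hedged sketch (illustrative only; it does not reproduce the paper's setting or its $$O(h^{1/2})$$ rate) quantizes two Gaussians at midpoint quantiles and compares the discrete W2 distance with the exact value.

```python
import numpy as np
from statistics import NormalDist

# Hedged 1D illustration (not the paper's setting or rate): approximate
# N(0,1) and N(0,4) by n equal-weight Dirac masses at midpoint quantiles
# and compare the fully discrete W2 distance with the exact value, which
# is |sigma_1 - sigma_2| = 1 for centered Gaussians.

nd = NormalDist()

def w2_discrete(n):
    q = (np.arange(n) + 0.5) / n                      # midpoint quantiles
    mu_atoms = np.array([nd.inv_cdf(t) for t in q])   # N(0,1) quantizer
    nu_atoms = 2.0 * mu_atoms                         # N(0,4) quantizer
    # In 1D, the optimal coupling pairs sorted atoms.
    return float(np.sqrt(np.mean((mu_atoms - nu_atoms) ** 2)))

errors = {n: abs(w2_discrete(n) - 1.0) for n in (10, 100, 1000)}
print(errors)   # discretization error shrinks as n grows
```

As in the abstract, the error of the discrete problem is driven entirely by how well the Dirac measures approximate the continuous ones.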
  3. Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport---specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream. 
  4. Contraction properties of transport maps between probability measures play an important role in the theory of functional inequalities. The actual construction of such maps, however, is a non-trivial task and, so far, relies mostly on the theory of optimal transport. In this work, we take advantage of the infinite-dimensional nature of the Gaussian measure and construct a new transport map, based on the Föllmer process, which pushes forward the Wiener measure onto probability measures on Euclidean spaces. Utilizing the tools of the Malliavin and stochastic calculus in Wiener space, we show that this Brownian transport map is a contraction in various settings where the analogous questions for optimal transport maps are open. The contraction properties of the Brownian transport map enable us to prove functional inequalities in Euclidean spaces, which are either completely new or improve on current results. Further and related applications of our contraction results are the existence of Stein kernels with desirable properties (which lead to new central limit theorems), as well as new insights into the Kannan–Lovász–Simonovits conjecture. We go beyond the Euclidean setting and address the problem of contractions on the Wiener space itself. We show that optimal transport maps and causal optimal transport maps (which are related to Brownian transport maps) between the Wiener measure and other target measures on Wiener space exhibit very different behaviors.
  5. Various differentially private algorithms instantiate the exponential mechanism and require sampling from the distribution exp(−f) for a suitable function f. When the domain of the distribution is high-dimensional, this sampling can be challenging. Using heuristic sampling schemes such as Gibbs sampling does not necessarily lead to provable privacy. When f is convex, techniques from log-concave sampling lead to polynomial-time algorithms, albeit with large polynomials. Langevin dynamics-based algorithms offer much faster alternatives under some distance measures such as statistical distance. In this work, we establish rapid convergence for these algorithms under distance measures more suitable for differential privacy. For smooth, strongly convex f, we give the first results proving convergence in Rényi divergence. This gives us fast differentially private algorithms for such f. Our techniques are simple and generic, and also apply to underdamped Langevin dynamics.
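The Langevin dynamics referred to in this abstract can be sketched concretely. The following is a hedged illustration (not the paper's algorithm or its Rényi-divergence analysis; the choice of f, step size, and iteration count are assumptions) of the unadjusted Langevin algorithm for a smooth, strongly convex potential.

```python
import numpy as np

# Hedged sketch (not the paper's algorithm or proof technique): the
# unadjusted Langevin algorithm targeting pi(x) proportional to
# exp(-f(x)) for the smooth, strongly convex f(x) = ||x||^2 / 2,
# whose target distribution is N(0, I).

rng = np.random.default_rng(1)

def grad_f(x):
    return x                               # gradient of ||x||^2 / 2

d, eta, steps = 5, 0.01, 20000             # dimension, step size, iterations
x = np.zeros(d)
samples = []
for t in range(steps):
    # Euler-Maruyama step of the Langevin diffusion.
    x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(d)
    if t >= steps // 2:                    # discard burn-in
        samples.append(x.copy())
samples = np.asarray(samples)
print("sample mean ~", np.round(samples.mean(), 3))
print("sample variance ~", np.round(samples.var(), 3))
```

The empirical moments approach those of the N(0, I) target; the abstract's contribution is proving that this convergence happens rapidly in divergences, such as Rényi, that compose well with differential privacy guarantees.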