
Award ID contains: 2111322


  1. Abstract

    In this paper, we study supervised learning tasks on the space of probability measures. We approach this problem by embedding the space of probability measures into $L^2$ spaces using the optimal transport framework. In the embedding spaces, regular machine learning techniques are used to achieve linear separability. This idea has proved successful in applications when the classes to be separated are generated by shifts and scalings of a fixed measure. This paper extends the class of elementary transformations suitable for the framework to families of shearings, describing conditions under which two classes of sheared distributions can be linearly separated. We furthermore give necessary bounds on the transformations to achieve a pre-specified separation level, and show how multiple embeddings can be used to allow for larger families of transformations. We demonstrate our results on image classification tasks.
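The key mechanism behind the linear-separability results above is that shifting a measure adds a constant to its optimal transport map from a fixed reference, so an embedded class generated by shifts stays affine in the embedding space. A minimal NumPy sketch of this fact in one dimension (not code from the paper; in 1D the transport map to a uniform reference is just the quantile function, and the sample array `base` and shift value are illustrative assumptions):

```python
import numpy as np

# Quantile grid for a uniform reference measure on (0, 1).
q = (np.arange(100) + 0.5) / 100

rng = np.random.default_rng(1)
base = rng.standard_normal(20000)  # samples from a fixed base measure

# 1D embedding: the optimal transport map from the uniform reference
# to an empirical measure is its quantile function evaluated on q.
e0 = np.quantile(base, q)              # embedding of the base measure
e_shift = np.quantile(base + 2.0, q)   # embedding of a shifted copy

# Shifting the measure by c adds the constant c to the transport map,
# so embeddings of the shifted family lie on an affine line in L^2.
err = np.max(np.abs(e_shift - (e0 + 2.0)))
```

Because the quantile function is equivariant under shifts of the samples, `err` is zero up to floating-point rounding; linear classifiers in the embedding space can therefore separate shift-generated classes.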

  2. Abstract

    Discriminating between distributions is an important problem in a number of scientific fields. This motivated the introduction of Linear Optimal Transportation (LOT), which embeds the space of distributions into an $L^2$-space. The transform is defined by computing the optimal transport of each distribution to a fixed reference distribution and has a number of benefits when it comes to speed of computation and to determining classification boundaries. In this paper, we characterize a number of settings in which LOT embeds families of distributions into a space in which they are linearly separable. This is true in arbitrary dimension, and for families of distributions generated through perturbations of shifts and scalings of a fixed distribution. We also prove conditions under which the $L^2$ distance of the LOT embedding between two distributions in arbitrary dimension is nearly isometric to the Wasserstein-2 distance between those distributions. This is of significant computational benefit, as one must only compute $N$ optimal transport maps to define the $N^2$ pairwise distances between $N$ distributions. We demonstrate the benefits of LOT on a number of distribution classification problems.
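The near-isometry claimed in this abstract is exact in one dimension, which makes it easy to sketch: the $L^2$ distance between two LOT embeddings (transport maps to a common reference) recovers the Wasserstein-2 distance, with only one transport map computed per distribution. A minimal NumPy illustration, under the assumption of a uniform reference and 1D Gaussian samples (the distributions and sample sizes are illustrative, not from the paper):

```python
import numpy as np

# Quantile grid for a uniform reference measure on (0, 1).
q = (np.arange(200) + 0.5) / 200

def lot_embed(samples, grid):
    """1D LOT embedding: the optimal transport map from the uniform
    reference to the empirical measure is its quantile function."""
    return np.quantile(samples, grid)

rng = np.random.default_rng(0)
# One transport map per distribution -- N maps give all N^2 distances.
a = lot_embed(rng.normal(0.0, 1.0, 5000), q)  # N(0, 1)
b = lot_embed(rng.normal(3.0, 1.0, 5000), q)  # N(3, 1)

# L2 distance between embeddings; for these two Gaussians the true
# Wasserstein-2 distance is exactly 3 (a pure shift by 3).
w2 = np.sqrt(np.mean((a - b) ** 2))
```

With more than two distributions, all pairwise distances come from the stored embedding vectors, avoiding a separate optimal transport solve per pair.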
  3. This paper deals with polynomial Hermite splines. In the first part, we provide a simple and fast procedure to compute the refinement mask of the Hermite B-splines of any order and in the case of a general scaling factor. Our procedure is solely derived from the polynomial reproduction properties satisfied by Hermite splines and it does not require the explicit construction or evaluation of the basis functions. The second part of the paper discusses the factorization properties of the Hermite B-spline masks in terms of the augmented Taylor operator, which is shown to be the minimal annihilator for the space of discrete monomial Hermite sequences of a fixed degree. All our results can be of use, in particular, in the context of Hermite subdivision schemes and multi-wavelets.