Title: A Graph Theoretic Additive Approximation of Optimal Transport
Transportation cost is an attractive similarity measure between probability distributions due to its many useful theoretical properties. However, solving optimal transport exactly can be prohibitively expensive. Therefore, there has been significant effort towards the design of scalable approximation algorithms. Previous combinatorial results [Sharathkumar, Agarwal STOC '12; Agarwal, Sharathkumar STOC '14] have focused primarily on the design of near-linear time multiplicative approximation algorithms. There has also been an effort to design approximate solutions with additive errors [Cuturi NIPS '13; Altschuler et al. NIPS '17; Dvurechensky et al. ICML '18; Quanrud SOSA '19] within a time bound that is linear in the size of the cost matrix and polynomial in $C/\delta$; here $C$ is the largest value in the cost matrix and $\delta$ is the additive error. We present an adaptation of the classical graph algorithm of Gabow and Tarjan and provide a novel analysis of this algorithm that bounds its execution time by $O\left(\frac{n^2 C}{\delta} + \frac{nC^2}{\delta^2}\right)$. Our algorithm is extremely simple and executes, for an arbitrarily small constant $\epsilon$, only $\lfloor \frac{2C}{(1-\epsilon)\delta}\rfloor + 1$ iterations, where each iteration consists only of a Dijkstra-type search followed by a depth-first search. We also provide empirical results suggesting that our algorithm is competitive in execution time with a sequential implementation of the Sinkhorn algorithm. Moreover, our algorithm quickly computes a solution for very small values of $\delta$, whereas the Sinkhorn algorithm slows down due to numerical instability.
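The empirical comparison above is against the Sinkhorn algorithm, and the abstract quotes an iteration count of $\lfloor \frac{2C}{(1-\epsilon)\delta}\rfloor + 1$ for the proposed method. Below is a minimal sketch, assuming Python with numpy, of a plain Sinkhorn baseline together with that quoted iteration bound; it is not the paper's algorithm or code, and the regularization value and all function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): a plain Sinkhorn baseline for
# entropic-regularized OT, plus the iteration bound quoted in the abstract.
# The regularization value `reg` and all names here are illustrative choices.
import numpy as np

def sinkhorn(cost, a, b, reg=0.05, num_iters=1000):
    """Entropic-regularized OT cost between histograms a and b under `cost`."""
    K = np.exp(-cost / reg)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(num_iters):
        v = b / (K.T @ u)                 # alternately rescale to match marginals
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]    # approximate transport plan
    return np.sum(plan * cost)

def gt_iteration_bound(C, delta, eps=0.01):
    """floor(2C / ((1 - eps) * delta)) + 1, the iteration count quoted above."""
    return int(2 * C / ((1 - eps) * delta)) + 1   # int() floors positive values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    cost = rng.random((n, n))             # entries in [0, 1], so C = cost.max()
    a = np.full(n, 1.0 / n)
    b = np.full(n, 1.0 / n)
    print("Sinkhorn objective:", sinkhorn(cost, a, b))
    print("Iteration bound for delta = 0.01:", gt_iteration_bound(cost.max(), 0.01))
```

Lowering `reg` to chase a smaller additive error is the regime in which, per the abstract, Sinkhorn suffers from numerical instability (the kernel entries underflow), while the combinatorial algorithm's iteration count grows only proportionally to $C/\delta$.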
Award ID(s):
1909171
PAR ID:
10175764
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Advances in neural information processing systems
ISSN:
1049-5258
Page Range / eLocation ID:
13813--13823
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The densest subgraph problem in a graph (DSG), in its simplest form, is the following. Given an undirected graph $$G=(V,E)$$, find a subset $$S \subseteq V$$ of vertices that maximizes the ratio $$|E(S)|/|S|$$, where $$E(S)$$ is the set of edges with both endpoints in $$S$$. DSG and several of its variants are well-studied in theory and practice and have many applications in data mining and network analysis. In this paper we study fast algorithms and structural aspects of DSG via the lens of supermodularity. For this we consider the densest supermodular subset problem (DSSP): given a non-negative supermodular function $$f: 2^V \rightarrow \mathbb{R}_+$$, maximize $$f(S)/|S|$$. For DSG we describe a simple flow-based algorithm that outputs a $$(1-\epsilon)$$-approximation in deterministic $$\tilde{O}(m/\epsilon)$$ time, where $$m$$ is the number of edges. Our algorithm is the first to have a near-linear dependence on $$m$$ and $$1/\epsilon$$ and improves previous methods based on an LP relaxation. It generalizes to hypergraphs, and also yields a faster algorithm for directed DSG. Greedy peeling algorithms have been very popular for DSG and several variants due to their efficiency, empirical performance, and worst-case approximation guarantees (a minimal peeling sketch appears after this list). We describe a simple peeling algorithm for DSSP and analyze its approximation guarantee in a fashion that unifies several existing results. Boob et al. [bgpstww-20] developed an iterative peeling algorithm for DSG which appears to work very well in practice, and made a conjecture about its convergence to optimality. We affirmatively answer their conjecture, and in fact prove that a natural generalization of their algorithm converges to a $$(1-\epsilon)$$-approximation for any supermodular function $$f$$; the key to our proof is to consider an LP formulation that is derived via the Lovász extension of a supermodular function. For DSG the bound on the number of iterations we prove is $$O(\frac{\Delta \ln |V|}{\lambda^*}\cdot \frac{1}{\epsilon^2})$$, where $$\Delta$$ is the maximum degree and $$\lambda^*$$ is the optimum value. Our work suggests that iterative peeling can be an effective heuristic for several objectives considered in the literature. Finally, we show that the $$2$$-approximation for densest-at-least-$$k$$ subgraph [ks-09] extends to the supermodular setting. We also give a unified analysis of the peeling algorithm for this problem, and via this analysis derive an approximation guarantee for a generalization of DSSP to maximize $$f(S)/g(|S|)$$ for a concave function $$g$$.
  2.
    The 2-Wasserstein distance (or RMS distance) is a useful measure of similarity between probability distributions with exciting applications in machine learning. For discrete distributions, the problem of computing this distance can be expressed in terms of finding a minimum-cost perfect matching on a complete bipartite graph given by two multisets of points A, B ⊂ ℝ², with |A| = |B| = n, where the ground distance between any two points is the squared Euclidean distance between them (a minimal matching-based sketch of this formulation appears after this list). Although there is a near-linear time relative ε-approximation algorithm for the case where the ground distance is Euclidean (Sharathkumar and Agarwal, JACM 2020), all existing relative ε-approximation algorithms for the RMS distance take Ω(n^{3/2}) time. This is primarily because, unlike the Euclidean distance, the squared Euclidean distance is not a metric. In this paper, for the RMS distance, we present a new ε-approximation algorithm that runs in O(n^{5/4} poly(log n, 1/ε)) time. Our algorithm is inspired by a recent approach for finding a minimum-cost perfect matching in bipartite planar graphs (Asathulla et al., TALG 2020). Their algorithm depends heavily on the existence of sublinear-sized vertex separators as well as shortest-path data structures that require planarity. Surprisingly, we are able to design a similar algorithm for a complete geometric graph that is far from planar and does not have any vertex separators. Central components of our algorithm include a quadtree-based distance that approximates the squared Euclidean distance and a data structure that supports both Hungarian search and augmentation in sublinear time.
  3. We study the densest subgraph problem and give algorithms via multiplicative weights update and area convexity that converge in $$O\left(\frac{\log m}{\epsilon^{2}}\right)$$ and $$O\left(\frac{\log m}{\epsilon}\right)$$ iterations, respectively, both with nearly-linear time per iteration. Compared with the work by Bahmani et al. (2014), our MWU algorithm uses a very different and much simpler procedure for recovering the dense subgraph from the fractional solution and does not employ a binary search. Compared with the work by Boob et al. (2019), our algorithm via area convexity improves the iteration complexity by a factor of $$\Delta$$, the maximum degree in the graph, and matches the fastest theoretical runtime currently known via flows (Chekuri et al., 2022) in total time. Next, we study the dense subgraph decomposition problem and give the first practical iterative algorithm with linear convergence rate $$O\left(mn\log\frac{1}{\epsilon}\right)$$ via accelerated random coordinate descent. This significantly improves over the $$O\left(\frac{m\sqrt{mn\Delta}}{\epsilon}\right)$$ time of the FISTA-based algorithm by Harb et al. (2022). In the high-precision regime $$\epsilon\ll\frac{1}{n}$$, where we can even recover the exact solution, our algorithm has a total runtime of $$O\left(mn\log n\right)$$, matching the state-of-the-art exact algorithm via parametric flows (Gallo et al., 1989). Empirically, we show that this algorithm is very practical and scales to very large graphs, and its performance is competitive with widely used methods that have significantly weaker theoretical guarantees.
  4. We provide faster algorithms for approximating the optimal transport distance, e.g., the earth mover's distance, between two discrete probability distributions on n elements. We present two algorithms that compute couplings between marginal distributions with an expected transportation cost that is within an additive ϵ of optimal in time O(n^2/ϵ); one algorithm is straightforward to parallelize and implementable in depth O(1/ϵ). Further, we show that additional improvements on our results must be coupled with breakthroughs in algorithmic graph theory.
  5. Woodruff, David P. (Ed.)
    We give improved algorithms for maintaining edge-orientations of a fully-dynamic graph, such that the maximum out-degree is bounded. On one hand, we show how to orient the edges such that the maximum out-degree is proportional to the arboricity $$\alpha$$ of the graph, in either an amortised update time of $$O(\log^2 n \log \alpha)$$ or a worst-case update time of $$O(\log^3 n \log \alpha)$$. On the other hand, motivated by applications including dynamic maximal matching, we obtain a different trade-off, namely an improved update time of either $$O(\log n \log \alpha)$$ amortised or $$O(\log ^2 n \log \alpha)$$ worst-case for the problem of maintaining an edge-orientation with at most $$O(\alpha + \log n)$$ out-edges per vertex. Finally, all of our algorithms naturally limit the recourse to be polylogarithmic in $$n$$ and $$\alpha$$. Our algorithms adapt to the current arboricity of the graph and yield improvements over previous work. Firstly, we obtain deterministic algorithms for maintaining a $$(1+\varepsilon)$$ approximation of the maximum subgraph density $$\rho$$ of the dynamic graph. Our algorithms have update times of $$O(\varepsilon^{-6}\log^3 n \log \rho)$$ worst-case and $$O(\varepsilon^{-4}\log^2 n \log \rho)$$ amortised, respectively. We may output a subgraph $$H$$ of the input graph whose density is a $$(1+\varepsilon)$$ approximation of the maximum subgraph density, in time linear in the size of the subgraph. These algorithms have improved update time compared to the $$O(\varepsilon^{-6}\log ^4 n)$$ algorithm by Sawlani and Wang from STOC 2020. Secondly, we obtain an $$O(\varepsilon^{-6}\log^3 n \log \alpha)$$ worst-case update time algorithm for maintaining a $$(1~+~\varepsilon)\textnormal{OPT} + 2$$ approximation of the optimal out-orientation of a graph with adaptive arboricity $$\alpha$$, improving the $$O(\varepsilon^{-6}\alpha^2 \log^3 n)$$ algorithm by Christiansen and Rotenberg from ICALP 2022. This yields the first worst-case polylogarithmic dynamic algorithm for decomposing into $$O(\alpha)$$ forests. Thirdly, we obtain arboricity-adaptive fully-dynamic deterministic algorithms for a variety of problems including maximal matching, $$(\Delta+1)$$-colouring, and matrix-vector multiplication. All update times are worst-case $$O(\alpha+\log^2n \log \alpha)$$, where $$\alpha$$ is the current arboricity of the graph. For the maximal matching problem, the state-of-the-art deterministic algorithm by Kopelowitz, Krauthgamer, Porat, and Solomon from ICALP 2014 runs in time $$O(\alpha^2 + \log^2 n)$$, and the one by Neiman and Solomon from STOC 2013 runs in time $$O(\sqrt{m})$$. We give improved running times whenever the arboricity $$\alpha \in \omega( \log n\sqrt{\log\log n})$$.
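Item 1 above centers on greedy and iterative peeling for the densest subgraph problem. As a point of reference, here is a minimal sketch, assuming Python, of the classical greedy peeling 2-approximation for DSG: repeatedly delete a minimum-degree vertex and keep the best density seen. It is not the supermodular or iterative variants analyzed in that paper, and all names are illustrative.

```python
# Minimal sketch of classical greedy peeling for densest subgraph (DSG):
# repeatedly remove a minimum-degree vertex and track the best |E(S)|/|S| seen.
# This is the textbook 2-approximation, not the supermodular/iterative variants
# analyzed in the paper above; all names here are illustrative.
import heapq
from collections import defaultdict

def peel_densest_subgraph(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    heap = [(len(adj[v]), v) for v in alive]
    heapq.heapify(heap)
    best_density, best_size = m / max(len(alive), 1), len(alive)
    while alive:
        d, v = heapq.heappop(heap)
        if v not in alive or d != len(adj[v]):
            continue                      # stale heap entry, skip it
        alive.remove(v)
        m -= len(adj[v])                  # drop edges incident to v
        for w in adj[v]:
            adj[w].discard(v)
            heapq.heappush(heap, (len(adj[w]), w))
        adj[v].clear()
        if alive and m / len(alive) > best_density:
            best_density, best_size = m / len(alive), len(alive)
    return best_density, best_size

if __name__ == "__main__":
    # 4-clique plus a pendant vertex: the densest subgraph is the clique, density 6/4.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
    print(peel_densest_subgraph(edges))   # -> (1.5, 4)
```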
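Item 2 above formulates the 2-Wasserstein (RMS) distance between equal-size point sets as a minimum-cost perfect matching under squared-Euclidean ground costs. The sketch below, assuming Python with numpy and scipy, solves that formulation directly with an off-the-shelf cubic-time assignment solver; it illustrates the problem statement only and is not the sublinear-time machinery developed in that paper.

```python
# Minimal sketch of the formulation in item 2: the 2-Wasserstein / RMS distance
# between equal-size point sets A, B in R^2 as a min-cost perfect matching with
# squared-Euclidean ground costs, via an off-the-shelf O(n^3) assignment solver.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def rms_distance(A, B):
    """A, B: (n, 2) arrays with |A| = |B|. Returns the 2-Wasserstein (RMS) distance."""
    cost = cdist(A, B, metric="sqeuclidean")   # squared Euclidean ground distances
    rows, cols = linear_sum_assignment(cost)   # min-cost perfect matching
    return np.sqrt(cost[rows, cols].mean())    # root-mean-square of matched pairs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 2))
    B = rng.normal(loc=0.5, size=(50, 2))
    print("RMS distance:", rms_distance(A, B))
```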