Title: A Combinatorial Cut-Toggling Algorithm for Solving Laplacian Linear Systems
Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form $$\mathbf{L}\mathbf{x} = \mathbf{b}$$, where $$\mathbf{L}$$ is the Laplacian matrix of a weighted graph with weights $$w(i,j) > 0$$ on the edges. The solution $$\mathbf{x}$$ of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge $$(i,j)$$ is $$1/w(i,j)$$. Kelner, Orecchia, Sidford, and Zhu [KOSZ13] give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling). In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations. We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms [HKNS15]. The conjecture implies that the data structure does not have an $$O(n^{1-\epsilon})$$ time algorithm for any $$\epsilon > 0$$, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration. To circumvent the lower bound, we batch the update steps and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an $$\widetilde{O}(m^{1.5})$$ time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to $$O(m^{1 + \epsilon})$$ for any $$\epsilon > 0$$. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
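To make the cut-toggling step concrete, the following is a minimal Python sketch written for this page, under illustrative assumptions: a tiny explicit graph, a fixed spanning tree, and sequential rather than batched updates, so it is not the paper's near-linear-time implementation, and the helpers side_of_cut and cut_toggle_step as well as the 4-cycle demo are invented for the example. The flow is always the one induced by the current potentials, so the Kirchhoff Potential Law holds by construction, and each step shifts every potential on one side of a fundamental tree cut by the constant that restores the Kirchhoff Current Law across that cut.

    import random

    def side_of_cut(tree_adj, u, v):
        """Vertices on u's side of the fundamental cut obtained by deleting
        tree edge (u, v): a DFS in the tree that never crosses that edge."""
        side, stack = {u}, [u]
        while stack:
            a = stack.pop()
            for nb in tree_adj[a]:
                if {a, nb} == {u, v}:
                    continue                     # do not cross the removed tree edge
                if nb not in side:
                    side.add(nb)
                    stack.append(nb)
        return side

    def cut_toggle_step(edges, x, b, side):
        """Shift all potentials in `side` by the constant that makes the net
        potential-induced current leaving `side` equal its net demand."""
        cut_weight, current_out = 0.0, 0.0
        for i, j, w in edges:
            if (i in side) != (j in side):
                s, t = (i, j) if i in side else (j, i)   # orient the edge out of `side`
                cut_weight += w
                current_out += w * (x[s] - x[t])
        delta = (sum(b[i] for i in side) - current_out) / cut_weight
        for i in side:
            x[i] += delta

    # Illustrative demo: unit-weight 4-cycle, path spanning tree, one unit of
    # demand from vertex 0 to vertex 3 (the exact potentials satisfy x0 - x3 = 3/4).
    edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
    tree_edges = [(0, 1), (1, 2), (2, 3)]
    tree_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    b = {0: 1.0, 1: 0.0, 2: 0.0, 3: -1.0}
    x = {v: 0.0 for v in range(4)}

    random.seed(0)
    for _ in range(200):                         # sequential (unbatched) cut toggling
        u, v = random.choice(tree_edges)
        cut_toggle_step(edges, x, b, side_of_cut(tree_adj, u, v))
    print({v: round(x[v], 3) for v in x})        # approaches Laplacian potentials, up to a shift

Because the fundamental cuts of a spanning tree span the entire cut space, enforcing the current law across all of them forces the current law at every vertex, so the potentials converge to a solution of the Laplacian system.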
Award ID(s):
2007009
PAR ID:
10536290
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
Algorithmica
Volume:
85
Issue:
12
ISSN:
0178-4617
Page Range / eLocation ID:
3680 to 3716
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Tauman Kalai, Yael (Ed.)
    Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form 𝐋𝐱 = 𝐛, where 𝐋 is the Laplacian matrix of a weighted graph with weights w(i,j) > 0 on the edges. The solution 𝐱 of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge (i,j) is 1/w(i,j). Kelner, Orecchia, Sidford, and Zhu [Kelner et al., 2013] give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling). In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations. We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms [Henzinger et al., 2015]. The conjecture implies that the data structure does not have an O(n^{1-ε}) time algorithm for any ε > 0, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration. To circumvent the lower bound, we batch the update steps and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an Õ(m^{1.5}) time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to O(m^{1 + ε}) for any ε > 0. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
  2. We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems, including the following. Diagonal preconditioning: we give an algorithm which, given positive definite $$\mathbf{K} \in \mathbb{R}^{d \times d}$$ with $$\mathrm{nnz}(\mathbf{K})$$ nonzero entries, computes an $$\epsilon$$-optimal diagonal preconditioner in time $$\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star,\epsilon^{-1}))$$, where $$\kappa^\star$$ is the optimal condition number of the rescaled matrix. Structured linear systems: we give an algorithm which, given $$\mathbf{M} \in \mathbb{R}^{d \times d}$$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $$\mathbf{M}$$ in $$\widetilde{O}(d^2)$$ time. Our diagonal preconditioning results improve state-of-the-art runtimes of $$\Omega(d^{3.5})$$ attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $$\Omega(d^{\omega})$$, where $$\omega > 2.3$$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery.
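As a small numerical illustration of the diagonal-preconditioning objective in entry 2 above, the sketch below applies the classical Jacobi rescaling D = diag(K)^(-1/2), forms D K D, and compares condition numbers; this is a common heuristic, not the paper's algorithm, and the matrix construction is an invented example. An ε-optimal diagonal preconditioner would instead be guaranteed to come within a (1 + ε) factor of the best condition number κ* achievable by any positive diagonal rescaling.

    import numpy as np

    def condition_number(A):
        """kappa(A) = lambda_max / lambda_min for a symmetric positive definite A."""
        eig = np.linalg.eigvalsh(A)              # eigenvalues in ascending order
        return eig[-1] / eig[0]

    def jacobi_rescale(K):
        """Rescale K to D K D with D = diag(K)^(-1/2), so the rescaled diagonal is all ones."""
        d = 1.0 / np.sqrt(np.diag(K))
        return d[:, None] * K * d[None, :]       # same as D @ K @ D for diagonal D

    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    scales = np.diag(10.0 ** rng.uniform(-3, 3, size=50))   # badly scaled coordinates
    K = scales @ (B @ B.T + 50 * np.eye(50)) @ scales       # symmetric positive definite

    print("kappa(K)     =", condition_number(K))
    print("kappa(D K D) =", condition_number(jacobi_rescale(K)))
    # An epsilon-optimal diagonal preconditioner is guaranteed to land within a
    # (1 + epsilon) factor of the best achievable condition number kappa*, which
    # the Jacobi heuristic does not promise.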
  3.
    We consider the classical Minimum Balanced Cut problem: given a graph $$G$$, compute a partition of its vertices into two subsets of roughly equal volume, while minimizing the number of edges connecting the subsets. We present the first deterministic, almost-linear time approximation algorithm for this problem. Specifically, our algorithm, given an $$n$$-vertex $$m$$-edge graph $$G$$ and any parameter $$1\leq r\leq O(\log n)$$, computes a $$(\log m)^{r^2}$$-approximation for Minimum Balanced Cut on $$G$$, in time $$O\left( m^{1+O(1/r)+o(1)}\cdot (\log m)^{O(r^2)}\right)$$. In particular, we obtain a $$(\log m)^{1/\epsilon}$$-approximation in time $$m^{1+O(1/\sqrt{\epsilon})}$$ for any constant $$\epsilon$$, and a $$(\log m)^{f(m)}$$-approximation in time $$m^{1+o(1)}$$ for any slowly growing function $$f(m)$$. We obtain deterministic algorithms with similar guarantees for the Sparsest Cut and the Lowest-Conductance Cut problems. Our algorithm for the Minimum Balanced Cut problem in fact provides a stronger guarantee: it either returns a balanced cut whose value is close to a given target value, or it certifies that such a cut does not exist by exhibiting a large subgraph of $$G$$ that has high conductance. We use this algorithm to obtain deterministic algorithms for dynamic connectivity and minimum spanning forest, whose worst-case update time on an $$n$$-vertex graph is $$n^{o(1)}$$, thus resolving a major open problem in the area of dynamic graph algorithms. Our work also implies deterministic algorithms for a host of additional problems, whose time complexities match, up to factors subpolynomial in $$n$$, those of known randomized algorithms. The implications include almost-linear time deterministic algorithms for solving Laplacian systems and for approximating maximum flows in undirected graphs.
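To make the objective in entry 3 above concrete, here is a small worked illustration written for this page (it only evaluates a candidate partition; it is not the deterministic algorithm described in the entry, and the example graph is invented): the cut value of a partition is the number of edges crossing it, and the balance requirement asks that the two sides have comparable volume, i.e., comparable sums of vertex degrees.

    from collections import defaultdict

    def cut_and_volumes(edges, side_a):
        """Return (#crossing edges, vol(A), vol(B)) for the partition (A, V - A),
        where the volume of a side is the sum of the degrees of its vertices."""
        degree = defaultdict(int)
        crossing = 0
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
            if (u in side_a) != (v in side_a):
                crossing += 1
        vol_a = sum(d for v, d in degree.items() if v in side_a)
        vol_b = sum(d for v, d in degree.items() if v not in side_a)
        return crossing, vol_a, vol_b

    # Two triangles joined by one bridge edge; cutting the bridge is both balanced and minimal.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    print(cut_and_volumes(edges, side_a={0, 1, 2}))   # (1, 7, 7): one crossing edge, equal volumes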
  4. We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal $$1 - 1/e - \epsilon$$ approximation, using $$(1/\epsilon)^{O(1/\epsilon^4)} n \log^2{n}$$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This framework is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints, but it leads to $$\Omega(n^2)$$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.
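For reference on the problem in entry 4 above, the sketch below runs the standard cost-benefit greedy baseline for maximizing a monotone submodular coverage function under a knapsack constraint; it is not the nearly-optimal algorithm of that entry, the instance is invented, and on this instance the baseline is in fact suboptimal, which is part of why more sophisticated techniques are of interest.

    def coverage(selected, sets):
        """Monotone submodular objective: number of items covered by the chosen sets."""
        covered = set()
        for i in selected:
            covered |= sets[i]
        return len(covered)

    def greedy_knapsack(sets, costs, budget):
        """Repeatedly add the set with the best marginal-coverage-to-cost ratio
        among those that still fit in the remaining budget."""
        chosen, spent = [], 0.0
        while True:
            base = coverage(chosen, sets)
            best, best_ratio = None, 0.0
            for i in range(len(sets)):
                if i in chosen or spent + costs[i] > budget:
                    continue
                gain = coverage(chosen + [i], sets) - base
                if gain / costs[i] > best_ratio:
                    best, best_ratio = i, gain / costs[i]
            if best is None:
                return chosen, coverage(chosen, sets)
            chosen.append(best)
            spent += costs[best]

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    costs = [2.0, 1.0, 3.0, 1.5]
    print(greedy_knapsack(sets, costs, budget=4.0))
    # Prints ([1, 3], 4): greedy covers {1, 3, 4, 7}, while choosing sets 1 and 2
    # (total cost 4.0) would cover 5 items, so the plain baseline is not optimal here.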