Title: diGRASS: Directed Graph Spectral Sparsification via Spectrum-Preserving Symmetrization
Recent spectral graph sparsification research aims to construct ultra-sparse subgraphs that preserve the original graph's spectral (structural) properties, such as the first few Laplacian eigenvalues and eigenvectors; this line of work has led to a variety of nearly-linear-time numerical and graph algorithms. However, progress on spectral sparsification of directed graphs has been very limited. In this work, we prove the existence of nearly linear-sized spectral sparsifiers for directed graphs under certain conditions. Furthermore, we introduce a practically efficient spectral algorithm (diGRASS) for sparsifying real-world, large-scale directed graphs by leveraging spectral matrix perturbation analysis. The proposed method has been evaluated on a variety of directed graphs from real-world applications, showing promising results for solving directed graph Laplacians, spectral partitioning of directed graphs, and approximately computing (personalized) PageRank vectors.
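The abstract does not spell out the symmetrization itself, so the following is only a minimal sketch of the general recipe it describes: map the directed graph to an undirected surrogate, score each directed edge by a spectral importance measure on the surrogate's Laplacian, and keep the top-scoring edges. The plain (A + Aᵀ)/2 symmetrization and the leverage-score criterion below are stand-in assumptions, not the paper's method.

```python
# Hypothetical sketch: sparsifying a directed graph through an undirected
# surrogate. The (A + A^T)/2 step and the leverage-score criterion are
# stand-ins, not diGRASS's actual spectrum-preserving symmetrization.
import numpy as np

def sparsify_directed(A, keep_ratio=0.2):
    """A: dense (n, n) nonnegative adjacency matrix of a directed graph.
    Keeps the directed edges with the largest weight-times-effective-
    resistance scores measured on the symmetrized Laplacian."""
    A_sym = 0.5 * (A + A.T)                      # stand-in symmetrization
    L = np.diag(A_sym.sum(axis=1)) - A_sym       # undirected Laplacian
    Lp = np.linalg.pinv(L)                       # fine at sketch scale
    n = A.shape[0]
    edges = [(u, v) for u in range(n) for v in range(n) if A[u, v] > 0]

    # Spielman-Srivastava-style importance: weight * effective resistance.
    def score(u, v):
        return A[u, v] * (Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v])

    edges.sort(key=lambda e: score(*e), reverse=True)
    A_out = np.zeros_like(A)
    for (u, v) in edges[: max(1, int(keep_ratio * len(edges)))]:
        A_out[u, v] = A[u, v]
    return A_out
```

The effective-resistance criterion is the standard spectral-importance measure for undirected sparsification; the title's "spectrum-preserving symmetrization" suggests the paper's surrogate is chosen so that such undirected scores remain faithful to the directed graph's spectrum.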
Award ID(s):
2021309 2212370 2205572
PAR ID:
10501946
Author(s) / Creator(s):
; ;
Publisher / Repository:
ACM
Date Published:
Journal Name:
ACM Transactions on Knowledge Discovery from Data
Volume:
18
Issue:
4
ISSN:
1556-4681
Page Range / eLocation ID:
1 to 25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Recent spectral graph sparsification techniques have shown promising performance in accelerating many numerical and graph algorithms, such as iterative methods for solving large sparse linear systems, spectral partitioning of undirected graphs, vectorless verification of power/thermal grids, and representation learning of large graphs. However, prior spectral graph sparsification methods rely on fast Laplacian matrix solvers that are usually challenging to implement in practice. This work, for the first time, introduces a solver-free approach (SF-GRASS) for spectral graph sparsification by leveraging emerging spectral graph coarsening and graph signal processing (GSP) techniques. We introduce a local spectral embedding scheme for efficiently identifying the spectrally critical edges that are key to preserving graph spectral properties, such as the first few Laplacian eigenvalues and eigenvectors (a sketch of this embedding idea follows below). Since the key kernel functions in SF-GRASS can be efficiently implemented using sparse matrix-vector multiplications (SpMVs), the proposed spectral approach is simple to implement and inherently parallel-friendly. Our extensive experimental results show that the proposed method can produce a hierarchy of high-quality spectral sparsifiers in nearly linear time for a variety of real-world, large-scale graphs and circuit networks when compared with prior state-of-the-art spectral methods.
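Since the abstract emphasizes that everything reduces to SpMVs, here is a minimal sketch of one way a solver-free local spectral embedding can work (an assumption about the mechanism, not the authors' code): smooth a few random vectors with repeated Laplacian SpMVs, which damps high-frequency components and leaves rough approximations of the first few eigenvectors, then score each edge by the embedding distance between its endpoints.

```python
# Hypothetical sketch of a solver-free, SpMV-only spectral embedding in the
# spirit of SF-GRASS (not the authors' implementation).
import numpy as np

def local_spectral_embedding(L, k=8, steps=20, h=None, seed=0):
    """Return an (n, k) embedding built purely from SpMVs with L."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    if h is None:
        h = 1.0 / np.max(np.diag(L))    # crude step size from max degree
    X = rng.standard_normal((n, k))
    X -= X.mean(axis=0)                 # project out the all-ones nullspace
    for _ in range(steps):
        X = X - h * (L @ X)             # smoothing: one SpMV per step
        X /= np.linalg.norm(X, axis=0)  # keep columns well scaled
    return X

def edge_scores(edges, X):
    """Spectral-criticality proxy: squared embedding distance per edge."""
    return {(u, v): float(np.sum((X[u] - X[v]) ** 2)) for (u, v) in edges}
```

Edges whose endpoints sit far apart in the smoothed embedding are the spectrally critical ones to retain, and each smoothing step costs only one SpMV, which is what makes the approach solver-free and parallel-friendly.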
  2. Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification requires polynomial computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, that significantly reduces this cost to only a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of the subsequent Laplacian-regularized estimation and graph SSL due to FT sparsification (a sketch of the Laplacian-regularized estimator appears below). In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments have confirmed the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
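The abstract states the accuracy bounds without the FT construction itself, but the downstream task being protected is standard: Laplacian-regularized estimation solves min_x ||x - y||^2 + lam * x^T L x, whose closed form is (I + lam*L) x = y. A minimal sketch of that estimator, run on whatever (sparsified) Laplacian is available, assuming numpy and a dense solve for illustration:

```python
import numpy as np

def laplacian_regularized_estimate(L, y, lam=1.0):
    """Solve (I + lam * L) x = y, the closed form of the
    Laplacian-regularized estimator min_x ||x - y||^2 + lam * x^T L x.
    A good (fault-tolerant) sparsifier keeps this x close to the
    full-graph estimate while making the solve much cheaper."""
    n = L.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, y)
```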
  3. A sparsification of a given graph G is a sparser graph (typically a subgraph) that aims to approximate or preserve some property of G. Examples of sparsifications include, but are not limited to, spanning trees, Steiner trees, spanners, emulators, and distance preservers. In all of these problems, every vertex has the same priority. However, real-world graphs typically assign different "priorities" or "levels" to different vertices, where higher-priority vertices require higher-quality connectivity between them. Multi-priority variants of the Steiner tree problem have been studied in prior literature, but this generalization is much less studied for other sparsification problems. In this paper, we define a generalized multi-priority problem and present a rounding-up approach that can be used for a variety of graph sparsifications (a sketch of the rounding-up idea follows below). Our analysis provides a systematic way to compute approximate solutions to multi-priority variants of a wide range of graph sparsification problems, given access to a single-priority subroutine.
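The abstract names a rounding-up approach without detail; a common version of this idea (an assumption, not necessarily the paper's exact scheme) rounds each vertex's priority up to the nearest power of two, runs a single-priority subroutine on each level's terminal set, and unions the results:

```python
# Hypothetical sketch of a priority rounding-up reduction (not necessarily
# the paper's exact scheme): round priorities up to powers of two, call a
# single-priority sparsification subroutine per level, take the union.
import math

def multi_priority_sparsify(vertices, priority, single_priority_sparsifier):
    """vertices: iterable of vertex ids.
    priority: dict vertex -> positive integer priority.
    single_priority_sparsifier: callable(terminals) -> set of edges that
        sparsifies/connects the given terminal set at one quality level.
    Rounding up to powers of two means only O(log max_priority) subroutine
    calls, at the cost of a constant-factor loss in the guarantee."""
    rounded = {v: 1 << math.ceil(math.log2(priority[v])) for v in vertices}
    levels = sorted(set(rounded.values()), reverse=True)
    edges = set()
    for lvl in levels:
        terminals = {v for v in vertices if rounded[v] >= lvl}
        edges |= set(single_priority_sparsifier(terminals))
    return edges
```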
  4. We introduce a notion of code sparsification that generalizes the notion of cut sparsification in graphs. For a (linear) code C ⊆ 𝔽_q^n of dimension k, a (1 ± ɛ)-sparsification of size s is given by a weighted set S ⊆ [n] with |S| ≤ s such that for every codeword c ∈ C, the projection c|_S of c to the set S has (weighted) Hamming weight that is a (1 ± ɛ)-approximation of the Hamming weight of c. We show that for every code there exists a (1 ± ɛ)-sparsification of size s = Õ(k log(q)/ɛ²). This immediately implies known results on graph and hypergraph cut sparsification up to polylogarithmic factors (with a simple unified proof): the former follows from the well-known fact that cuts in a graph form a linear code over 𝔽_2 (a small check of this fact appears below), while the latter is obtained by a simple encoding of hypergraph cuts. Further, via connections between the eigenvalues of the Laplacians of Cayley graphs over ℤ_2^k and the weights of codewords, we also give the first proof of the existence of spectral sparsifiers of Cayley graphs over ℤ_2^k by Cayley graphs, i.e., where we sparsify the set of generators to nearly optimal size. Additionally, this work can be viewed as a continuation of a line of works on building sparsifiers for constraint satisfaction problems (CSPs); this result shows that there exist near-linear-size sparsifiers for CSPs over 𝔽_p-valued variables whose unsatisfying assignments can be expressed as the zeros of a linear equation modulo a prime p. As an application, we give a full characterization of ternary Boolean CSPs (CSPs where the underlying predicate acts on three Boolean variables) that allow for near-linear-size sparsification. This makes progress on a question posed by Kogan and Krauthgamer (ITCS 2015) asking which CSPs allow for near-linear-size sparsifiers (in the number of variables). At the heart of our result is a codeword-counting bound that we believe is of independent interest. Indeed, extending Karger's cut-counting bound (SODA 1993), we show a novel decomposition theorem for linear codes: every linear code has a (relatively) small subset of coordinates such that, after deleting those coordinates, the code on the remaining coordinates has a smooth upper bound on the number of codewords of small weight. Using the deleted coordinates in addition to a (weighted) random sample of the remaining coordinates then allows us to sparsify the whole code. The proof of this decomposition theorem extends Karger's proof (and the contraction method) in a clean way, while enabling the extensions listed above without any additional complexity in the proofs.
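The reduction from graph cuts to codewords that the abstract leans on is concrete enough to verify directly: over 𝔽_2, the cut determined by a vertex set S is exactly B·x (mod 2), where B is the edge-vertex incidence matrix and x is the indicator vector of S, so the cut size equals the codeword's Hamming weight. A small self-contained check (the example graph is an arbitrary choice):

```python
# Checking the cut-space/code connection: over F_2, the cuts of a graph are
# exactly the codewords B @ x (mod 2), where B is the (m x n) edge-vertex
# incidence matrix and x indicates which side each vertex is on.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # a triangle plus a pendant edge
n = 4
B = np.zeros((len(edges), n), dtype=int)
for i, (u, v) in enumerate(edges):
    B[i, u] = B[i, v] = 1                     # each row: endpoints of one edge

for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    codeword = (B @ x) % 2                    # codeword of the cut code
    S = {v for v in range(n) if x[v] == 1}
    cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
    assert codeword.sum() == cut              # Hamming weight == cut size
```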