Title: Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise
Abstract: Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis and can be computed efficiently by Sinkhorn–Knopp (SK) iterations. This paper proves the convergence of the bi-stochastically normalized graph Laplacian to the manifold (weighted-)Laplacian with rates, when $$n$$ data points are i.i.d. sampled from a general $$d$$-dimensional manifold embedded in a possibly high-dimensional space. Under a certain joint limit of $$n \to \infty$$ and kernel bandwidth $$\epsilon \to 0$$, the point-wise convergence rate of the graph Laplacian operator (under the 2-norm) is proved to be $$O( n^{-1/(d/2+3)})$$ at finite large $$n$$ up to log factors, achieved at the scaling $$\epsilon \sim n^{-1/(d/2+3)}$$. When the manifold data are corrupted by outlier noise, we theoretically prove point-wise consistency of the graph Laplacian at a rate that matches the clean-data rate plus an additional term proportional to a bound on the inner-products of the noise vectors among themselves and with the data vectors. Motivated by our analysis, which suggests that an approximate rather than exact bi-stochastic normalization achieves the same consistency rate, we propose an approximate and constrained matrix scaling problem that can be solved by SK iterations with early termination. Numerical experiments support our theoretical results and show the robustness of the bi-stochastically normalized graph Laplacian to high-dimensional outlier noise.
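As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch of computing a bi-stochastically normalized graph Laplacian with SK iterations and early termination. The Gaussian kernel, the damped symmetric SK update, the tolerance `tol`, and the final scaling of the Laplacian are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def bistochastic_laplacian(X, eps, n_iter=500, tol=1e-6):
    """Sketch: bi-stochastically normalized graph Laplacian via SK iterations."""
    n = X.shape[0]
    # Gaussian affinity K_ij = exp(-||x_i - x_j||^2 / eps)  (illustrative kernel)
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    K = np.exp(-D2 / eps)

    # Sinkhorn-Knopp: find eta > 0 with diag(eta) K diag(eta) doubly stochastic.
    # The sqrt damping avoids the oscillation of the naive update eta <- 1/(K eta).
    eta = np.ones(n)
    for _ in range(n_iter):
        eta_new = np.sqrt(eta / (K @ eta))
        if np.max(np.abs(eta_new / eta - 1.0)) < tol:  # early termination
            eta = eta_new
            break
        eta = eta_new
    W = eta[:, None] * K * eta[None, :]  # rows and columns now sum to ~1

    # Graph Laplacian; the manifold (weighted-)Laplacian is recovered as
    # eps -> 0 under the paper's scaling (constant factors omitted here).
    L = (np.eye(n) - W) / eps
    return L, W
```

At the fixed point, $$\eta_i (K\eta)_i = 1$$ for every $$i$$, so each row and column of $$W$$ sums to one; stopping early yields an approximate scaling, which per the analysis above suffices for the same consistency rate.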
Award ID(s):
2007040
PAR ID:
10543110
Author(s) / Creator(s):
;
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Information and Inference: A Journal of the IMA
Volume:
13
Issue:
4
ISSN:
2049-8772
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: The kernelized Gram matrix $$W$$ constructed from data points $$\{x_i\}_{i=1}^N$$ as $$W_{ij}= k_0( \frac{ \| x_i - x_j \|^2} {\sigma ^2} )$$ is widely used in graph-based geometric data analysis and unsupervised learning. An important question is how to choose the kernel bandwidth $$\sigma$$, and a common practice called self-tuned kernel adaptively sets a $$\sigma _i$$ at each point $$x_i$$ by the $$k$$-nearest neighbor (kNN) distance. When the $$x_i$$'s are sampled from a $$d$$-dimensional manifold embedded in a possibly high-dimensional space, unlike with fixed-bandwidth kernels, theoretical results on graph Laplacian convergence with self-tuned kernels have been incomplete. This paper proves the convergence of the graph Laplacian operator $$L_N$$ to the manifold (weighted-)Laplacian for a new family of kNN self-tuned kernels $$W^{(\alpha )}_{ij} = k_0( \frac{ \| x_i - x_j \|^2}{ \epsilon \hat{\rho }(x_i) \hat{\rho }(x_j)})/\hat{\rho }(x_i)^\alpha \hat{\rho }(x_j)^\alpha $$, where $$\hat{\rho }$$ is the bandwidth function estimated by kNN and the limiting operator is also parametrized by $$\alpha $$. When $$\alpha = 1$$, the limiting operator is the weighted manifold Laplacian $$\varDelta _p$$. Specifically, we prove the point-wise convergence of $$L_N f$$ and the convergence of the graph Dirichlet form with rates. Our analysis is based on first establishing a $$C^0$$ consistency for $$\hat{\rho }$$ which bounds the relative estimation error $$|\hat{\rho } - \bar{\rho }|/\bar{\rho }$$ uniformly with high probability, where $$\bar{\rho } = p^{-1/d}$$ and $$p$$ is the data density function. Our theoretical results reveal the advantage of the self-tuned kernel over the fixed-bandwidth kernel via a smaller variance error in low-density regions. In the algorithm, no prior knowledge of $$d$$ or of the data density is needed. The theoretical results are supported by numerical experiments on simulated data and hand-written digit image data.
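A minimal sketch of the self-tuned kernel family in the abstract above, assuming a Gaussian $$k_0$$ and illustrative values of $$k$$, $$\epsilon$$, and $$\alpha$$; the exact normalization of $$L_N$$ and all constant factors are simplified.

```python
import numpy as np
from scipy.spatial import cKDTree

def self_tuned_laplacian(X, k=16, eps=0.05, alpha=1.0):
    """Sketch of the kNN self-tuned kernel family
    W_ij = k0(||x_i - x_j||^2 / (eps rho_i rho_j)) / (rho_i rho_j)^alpha."""
    n = X.shape[0]
    # hat-rho(x_i): kNN distance, a surrogate for p(x_i)^{-1/d}
    dists, _ = cKDTree(X).query(X, k=k + 1)  # column 0 is the point itself
    rho = dists[:, -1]

    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    K = np.exp(-D2 / (eps * np.outer(rho, rho)))  # k0 = Gaussian (assumed)
    W = K / np.outer(rho, rho) ** alpha

    # Random-walk normalization; with alpha = 1 the limiting operator is the
    # weighted manifold Laplacian Delta_p (normalization constants simplified).
    d = W.sum(axis=1)
    L_N = (np.eye(n) - W / d[:, None]) / eps
    return L_N
```

Note that neither $$d$$ nor the density $$p$$ appears in the code: the kNN distance supplies the adaptive bandwidth, matching the abstract's claim that no prior knowledge of either is needed.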
  2. In this paper we provide an $$O(m \log\log^{O(1)} n \log(1/\epsilon))$$-expected time algorithm for solving Laplacian systems on $$n$$-node $$m$$-edge graphs, improving upon the previous best expected runtime of $$O(m \sqrt{\log n}\, \log\log^{O(1)} n \log(1/\epsilon))$$ achieved by (Cohen, Kyng, Miller, Pachocki, Peng, Rao, Xu 2014). To obtain this result we provide efficient constructions of low spectral stretch graph approximations with improved stretch and sparsity bounds. As motivation for this work, we show that for every set of vectors in $$\mathbb{R}^d$$ (not just those induced by graphs) and every integer $$k > 1$$ there exists an ultra-sparsifier with $$d - 1 + O(d/k)$$ re-weighted vectors of relative condition number at most $$k^2$$. For small $$k$$, this improves upon the previous best known multiplicative factor of $$k \cdot \tilde{O}(\log d)$$, which is only known for the graph case. Additionally, in the graph case we employ our low-stretch subgraph construction to obtain $$(n - 1 + O(n/k))$$-edge ultra-sparsifiers of relative condition number $$k^{1 + o(1)}$$ for $$k = \omega(\log^{\delta} n)$$ for any $$\delta > 0$$; this improves upon the previous work for $$k = o(\exp(\log^{1/2 - \delta} n))$$.
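For orientation, a minimal sketch of the problem these solvers target: assembling a weighted graph Laplacian and solving $$\mathbf{L}\mathbf{x} = \mathbf{b}$$ with off-the-shelf conjugate gradients. This is standard sparse linear algebra, not the low-stretch sparsifier machinery of the paper, and the small example graph is invented for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg

def graph_laplacian(n, edges, weights):
    """Weighted graph Laplacian L = D - W from an edge list."""
    rows = [u for u, _ in edges] + [v for _, v in edges]
    cols = [v for _, v in edges] + [u for u, _ in edges]
    W = csr_matrix((list(weights) * 2, (rows, cols)), shape=(n, n))
    return diags(np.asarray(W.sum(axis=1)).ravel()) - W

# Toy instance: x gives the potentials of an electrical flow with net
# currents b, where the resistance of edge (i, j) is 1 / w(i, j).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
L = graph_laplacian(5, edges, [1.0] * len(edges))
b = np.array([1.0, 0.0, 0.0, 0.0, -1.0])  # must sum to zero (L is singular)
x, info = cg(L, b, atol=1e-12)            # CG works: L is PSD, b orthogonal to kernel
x -= x.mean()                             # pin down the additive constant
print(np.allclose(L @ x, b, atol=1e-8))
```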
  3. Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form $$\mathbf{L}\mathbf{x} = \mathbf{b}$$, where $$\mathbf{L}$$ is the Laplacian matrix of a weighted graph with weights $$w(i,j) > 0$$ on the edges. The solution $$\mathbf{x}$$ of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge $$(i,j)$$ is $$1/w(i,j)$$. Kelner, Orecchia, Sidford, and Zhu [KOSZ13] give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law, and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling). In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law, and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations. We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms [HKNS15]. The conjecture implies that the data structure does not have an $$O(n^{1-\epsilon})$$ time algorithm for any $$\epsilon > 0$$, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration. To circumvent the lower bound, we batch update steps and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an $$\widetilde{O}(m^{1.5})$$ time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to $$O(m^{1 + \epsilon})$$ for any $$\epsilon > 0$$. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
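A toy sketch of the cut-toggling step described above, under simplifying assumptions: a BFS spanning tree, uniformly random choice of fundamental cuts, and brute-force cut computation in place of the paper's data structures. The function name `cut_toggle_solve` and the small demo graph are hypothetical.

```python
import numpy as np
import random

def cut_toggle_solve(n, edges, w, b, iters=5000, seed=0):
    """Toy cut-toggling solver for L x = b (potentials of an electrical flow).

    Flows are always induced by potentials, f_ij = w_ij (x_i - x_j), so the
    Kirchhoff Potential Law holds by construction; each iteration restores
    the Kirchhoff Current Law across one fundamental cut of a spanning tree
    by shifting all potentials on one side by the same amount.
    """
    rng = random.Random(seed)
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # BFS spanning tree rooted at node 0
    parent = {0: None}
    children = {u: [] for u in range(n)}
    queue = [0]
    while queue:
        u = queue.pop(0)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                children[u].append(v)
                queue.append(v)

    def subtree(v):
        """One side of the fundamental cut of tree edge (parent[v], v)."""
        S, stack = set(), [v]
        while stack:
            u = stack.pop()
            S.add(u)
            stack.extend(children[u])
        return S

    x = np.zeros(n)
    non_roots = [v for v in range(n) if parent[v] is not None]
    for _ in range(iters):
        S = subtree(rng.choice(non_roots))
        cross = [(u, v, wt) for (u, v), wt in zip(edges, w) if (u in S) != (v in S)]
        out = sum(wt * (x[u] - x[v]) * (1 if u in S else -1) for u, v, wt in cross)
        cap = sum(wt for _, _, wt in cross)
        # shift potentials on S so the current out of S equals the supply b(S)
        x[list(S)] += (sum(b[u] for u in S) - out) / cap
    return x - x.mean()

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
x = cut_toggle_solve(4, edges, [1.0] * 5, b=[1.0, 0.0, 0.0, -1.0])
```

Each toggle is exact coordinate descent on the energy $$\frac{1}{2}\mathbf{x}^\top \mathbf{L}\mathbf{x} - \mathbf{b}^\top \mathbf{x}$$ along the indicator of the cut side, which is why the potentials converge; the paper's contribution is making such steps fast, not the steps themselves.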
  4. Spectral clustering has become one of the most popular algorithms in data clustering and community detection. We study the performance of classical two-step spectral clustering via the graph Laplacian to learn the stochastic block model. Our aim is to answer the following question: when is spectral clustering via the graph Laplacian able to achieve strong consistency, i.e., the exact recovery of the underlying hidden communities? Our work provides an entrywise analysis (an ℓ∞-norm perturbation bound) of the Fiedler eigenvector of both the unnormalized and the normalized Laplacian associated with the adjacency matrix sampled from the stochastic block model. We prove that spectral clustering is able to achieve exact recovery of the planted community structure under conditions that match the information-theoretic limits. 
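A minimal sketch of the two-step procedure analyzed above: sample a two-block stochastic block model, form the normalized Laplacian, and cluster by the signs of the Fiedler eigenvector. The block sizes and edge probabilities are illustrative; exact recovery requires the logarithmic-degree regime at the information-theoretic threshold discussed in the abstract.

```python
import numpy as np

def sbm_fiedler_clustering(n=500, p=0.10, q=0.02, seed=0):
    """Sketch: two-step spectral clustering on a two-block SBM."""
    rng = np.random.default_rng(seed)
    labels = np.repeat([0, 1], n // 2)                      # planted communities
    P = np.where(labels[:, None] == labels[None, :], p, q)  # edge probabilities
    U = rng.random((n, n))
    A = np.triu((U < P).astype(float), 1)
    A = A + A.T                                             # symmetric adjacency

    # Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - d_isqrt[:, None] * A * d_isqrt[None, :]

    # Step 1: Fiedler eigenvector (second-smallest eigenvalue of L);
    # Step 2: cluster by the sign pattern of its entries.
    vals, vecs = np.linalg.eigh(L)
    pred = (vecs[:, 1] > 0).astype(int)

    # misclassification rate up to relabeling the two communities
    return min(np.mean(pred != labels), np.mean(pred == labels))

print("misclassification rate:", sbm_fiedler_clustering())
```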
  5. Abstract: In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher-order regularity can be obtained by replacing the Laplacian regularizer with a poly-Laplacian regularizer. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $$\{x_i\}_{i=1}^n$$ and a set of noisy labels $$\{y_i\}_{i=1}^n \subset \mathbb{R}$$, we let $$u_n : \{x_i\}_{i=1}^n \rightarrow \mathbb{R}$$ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $$y_i = g(x_i) + \xi_i$$, for i.i.d. noise $$\xi_i$$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $$u_n$$ to $$g$$ in the large data limit $$n \rightarrow \infty$$. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.
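A minimal sketch of the estimator described above, assuming an unnormalized graph Laplacian on a geometric random graph and the closed-form minimizer $$u = (I + \tau L^m)^{-1} y$$ of the energy $$\|u - y\|^2 + \tau\, u^\top L^m u$$. The choices of $$m$$, $$\tau$$, and the connectivity radius are illustrative and would have to be scaled with $$n$$ to obtain the paper's rates.

```python
import numpy as np

def poly_laplacian_regression(X, y, m=2, tau=1e-3, r=0.1):
    """Sketch: graph poly-Laplacian regularized regression on a geometric graph."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    A = ((D2 < r ** 2) & ~np.eye(n, dtype=bool)).astype(float)  # connect within r
    L = np.diag(A.sum(axis=1)) - A                              # graph Laplacian
    Lm = np.linalg.matrix_power(L, m)                           # poly-Laplacian
    return np.linalg.solve(np.eye(n) + tau * Lm, y)             # energy minimizer

# noisy labels y_i = g(x_i) + xi_i on points sampled from the unit circle
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)]
g = np.sin(3 * t)
y = g + 0.3 * rng.normal(size=t.size)
u = poly_laplacian_regression(X, y, m=2, tau=1e-2, r=0.25)
print("RMSE to g:", np.sqrt(np.mean((u - g) ** 2)))
```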