
Title: Asynchronous Nonlinear Updates on Graphs
The notion of graph shift, introduced recently in graph signal processing, extends many classical signal processing techniques to graphs. Its practical importance follows from its localization: a single graph shift requires nodes to communicate only with their neighbors. However, the communications must happen simultaneously, which requires synchronization over the graph. In order to overcome this restriction, recent studies consider a random asynchronous variant of the graph shift, which is also suitable for autonomous networks. A graph signal under this randomized scheme is shown to converge (under mild conditions) to an eigenvector of the eigenvalue 1 of the operator, even if the operator has other eigenvalues with magnitudes larger than unity. If the eigenvalue 1 does not exist, the operator can easily be normalized in theory. In practice, however, the normalization requires knowledge of the (dominant) eigenvalues, which may not be obtainable in large autonomous networks. To eliminate this limitation, this study considers the use of a nonlinearity in the updates, making the scheme similar in spirit to the Hopfield neural network model. Our simulation results show that the graph signal still approaches the eigenvector of the dominant eigenvalue, although the convergence is not exact. Nevertheless, the approximation is sufficient to accomplish certain tasks, including autonomous clustering.
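As a concrete illustration, the sketch below (ours, not the paper's implementation; the operator, sizes, and iteration count are all illustrative) lets one randomly chosen node at a time recompute its value from its neighbors, with a tanh nonlinearity standing in for the unavailable eigenvalue normalization:

import numpy as np

rng = np.random.default_rng(0)

n = 20
A = rng.random((n, n))            # hypothetical nonnegative graph operator
A = (A + A.T) / 2                 # symmetrized so the dominant eigenvector is well defined

x = rng.standard_normal(n)        # initial graph signal
for _ in range(20000):
    i = rng.integers(n)           # a single node "wakes up" (asynchronous, no global clock)
    x[i] = np.tanh(A[i] @ x)      # nonlinear update; row i of A plays the role of node i's neighborhood

w, V = np.linalg.eigh(A)
u = V[:, np.argmax(np.abs(w))]    # dominant eigenvector (unit norm)
print(abs(u @ x) / np.linalg.norm(x))   # cosine alignment; close to 1 but not exact

With a nonnegative operator the saturated signal settles into a direction close to, though not exactly equal to, the dominant eigenvector, matching the approximate convergence described above.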
Award ID(s): 1712633
PAR ID: 10094557
Author(s) / Creator(s):
Date Published:
Journal Name: Asilomar Conf. Sig., Sys., and Computers
Page Range / eLocation ID: 998 to 1002
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In recent years the convergence behavior of random node-asynchronous graph communications has been studied for the case of undirected graphs. This paper extends these results to the case of graphs having arbitrary directed edges, possibly with a nondiagonalizable adjacency matrix. Assuming that the graph operator has eigenvalue 1 and the input signal satisfies a certain condition (which ensures the existence of fixed points), this study presents the necessary and sufficient condition for the mean-squared convergence of the graph signal. The presented condition depends on the graph operator as well as the update probabilities, and the convergence of the randomized asynchronous updates may be achieved even when the underlying operator is not stable in the synchronous setting. As an application, the node-asynchronous updates are combined with polynomial filtering in order to obtain a spectral clustering for directed networks. The convergence is also verified with numerical simulations.
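    A minimal sketch of this setting (sizes and uniform update probabilities are ours) builds a directed row-stochastic operator, which has eigenvalue 1 so that fixed points exist, and applies node-asynchronous updates:

    import numpy as np

    rng = np.random.default_rng(1)

    n = 30
    W = rng.random((n, n)) * (rng.random((n, n)) < 0.3)   # sparse directed weights
    W += 1e-3 * np.eye(n)                                 # guard against empty rows
    P = W / W.sum(axis=1, keepdims=True)                  # row-stochastic: eigenvalue 1 exists

    x = rng.standard_normal(n)
    for _ in range(50000):
        i = rng.integers(n)        # one node wakes up; uniform update probabilities assumed
        x[i] = P[i] @ x            # node i recomputes its value from its in-neighbors

    print(x.max() - x.min())       # spread near 0 indicates convergence to a fixed point

    Under strong connectivity the spread shrinks toward zero, i.e., the signal approaches a fixed point even though only one node updates per step.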
  2. Throughout many scientific and engineering fields, including control theory, quantum mechanics, advanced dynamics, and network theory, a great many important applications rely on the spectral decomposition of matrices. Traditional methods such as the power iteration method, Jacobi eigenvalue method, and QR decomposition are commonly used to compute the eigenvalues and eigenvectors of a square and symmetric matrix. However, these methods suffer from certain drawbacks: in particular, the power iteration method can only find the leading eigenpair (i.e., the largest eigenvalue and its corresponding eigenvector), while the Jacobi and QR decomposition methods face significant performance limitations when applied to large-scale matrices. Typically, even producing approximate eigenpairs of a general square matrix requires at least O(N^3) time complexity, where N is the number of rows of the matrix. In this work, we exploit the newly developed memristor technology to propose a low-complexity, scalable memristor-based method for deriving a set of dominant eigenvalues and eigenvectors for real symmetric non-negative matrices. The time complexity of our proposed algorithm is O(N^2/Δ) (where Δ governs the accuracy). We present experimental studies to simulate the memristor-supported algorithm, with results demonstrating that the average error of our method is within 4%, while its performance is up to 1.78X better than that of traditional methods.
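    The memristor-based algorithm itself would require simulating the hardware, but the classical power iteration that the passage contrasts, which finds only the leading eigenpair, is easy to state. The following is a standard textbook version, not the paper's method:

    import numpy as np

    def power_iteration(A, iters=1000, tol=1e-10, seed=0):
        # Repeated multiplication converges to the leading eigenpair
        # (largest-magnitude eigenvalue) of a symmetric matrix A.
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(A.shape[0])
        v /= np.linalg.norm(v)
        lam_old = 0.0
        for _ in range(iters):
            w = A @ v
            v = w / np.linalg.norm(w)
            lam = v @ A @ v            # Rayleigh quotient estimate
            if abs(lam - lam_old) < tol:
                break
            lam_old = lam
        return lam, v

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    lam, v = power_iteration(A)
    print(lam)   # ~3.618, the largest eigenvalue of A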
  3. This paper considers a random component-wise variant of the unnormalized power method, which is similar to the regular power iteration except that only a random subset of indices is updated in each iteration. For the case of normal matrices, it was previously shown that random component-wise updates converge in the mean-squared sense to an eigenvector of eigenvalue 1 of the underlying matrix even in the case of the matrix having spectral radius larger than unity. In addition to the enlarged convergence regions, this study shows that, unlike in the regular power method, the eigenvalue gap does not directly affect the convergence rate of the randomized updates. In particular, it is shown that the rate of convergence is affected by the phase of the eigenvalues in the case of random component-wise updates, and that the randomized updates favor negative eigenvalues over positive ones. As an application, this study considers a reformulation of the component-wise updates revealing a randomized algorithm that is proven to converge to the dominant left and right singular vectors of a normalized data matrix. The algorithm is also extended to handle large-scale distributed data when computing an arbitrary rank approximation of an arbitrary data matrix. Numerical simulations verify the convergence of the proposed algorithms under different parameter settings.
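    A brief sketch of the component-wise variant (toy sizes and probabilities are ours; the paper gives the precise convergence condition, which depends on the update probability) refreshes a random subset of entries with the corresponding entries of Ax, with no normalization step:

    import numpy as np

    rng = np.random.default_rng(2)

    # Normal matrix with eigenvalue 1 and another eigenvalue of magnitude > 1
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    A = Q @ np.diag([1.0, -1.2, 0.5, 0.2, -0.4]) @ Q.T
    u = Q[:, 0]                          # eigenvector of the eigenvalue 1

    x = rng.standard_normal(5)
    p = 0.05                             # small update probability (whether this converges depends on p)
    for _ in range(300000):
        S = rng.random(5) < p            # each index refreshed independently
        x[S] = (A @ x)[S]                # only the selected components are updated

    print(abs(u @ x) / np.linalg.norm(x))   # alignment with the eigenvector of eigenvalue 1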
  4. Covariance matrices are fundamental to the analysis and forecast of economic, physical and biological systems. Although the eigenvalues $\{\lambda_i\}$ and eigenvectors $\{\boldsymbol{u}_i\}$ of a covariance matrix are central to such endeavours, in practice one must inevitably approximate the covariance matrix based on data with finite sample size $n$ to obtain empirical eigenvalues $\{\tilde{\lambda}_i\}$ and eigenvectors $\{\tilde{\boldsymbol{u}}_i\}$, and therefore understanding the error so introduced is of central importance. We analyse the eigenvector error $\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i\|^2$ while leveraging the assumption that the true covariance matrix of size $p$ is drawn from a matrix ensemble with known spectral properties. In particular, we assume that the distribution of population eigenvalues weakly converges as $p \to \infty$ to a spectral density $\rho(\lambda)$ and that the spacing between population eigenvalues is similar to that of the Gaussian orthogonal ensemble. Our approach complements previous analyses of eigenvector error that require the full set of eigenvalues to be known, which can be computationally infeasible when $p$ is large. To provide a scalable approach for uncertainty quantification of eigenvector error, we consider a fixed eigenvalue $\lambda$ and approximate the distribution of the expected square error $r = \mathbb{E}\left[\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i\|^2\right]$ across the matrix ensemble for all $\boldsymbol{u}_i$ associated with $\lambda_i = \lambda$. We find, for example, that for sufficiently large matrix size $p$ and sample size $n > p$, the probability density of $r$ scales as $1/(nr^2)$. This power-law scaling implies that the eigenvector error is extremely heterogeneous: even if $r$ is very small for most eigenvectors, it can be large for others with non-negligible probability. We support this and further results with numerical experiments.
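    The expected square error $r$ can be probed numerically. The sketch below (sizes and spectrum are illustrative) draws $n$ samples from a population covariance with a known spectrum and measures the squared error of the leading eigenvector:

    import numpy as np

    rng = np.random.default_rng(3)

    p, n = 50, 200                       # dimension and sample size, n > p
    evals = np.linspace(1.0, 2.0, p)     # assumed bulk population spectrum
    evals[-1] = 6.0                      # well-separated leading eigenvalue
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    Sigma = Q @ np.diag(evals) @ Q.T     # population covariance

    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = X.T @ X / n                      # empirical covariance from n samples

    u = Q[:, -1]                         # population leading eigenvector
    u_hat = np.linalg.eigh(S)[1][:, -1]  # empirical leading eigenvector
    u_hat *= np.sign(u @ u_hat)          # resolve the sign ambiguity

    print(np.sum((u - u_hat) ** 2))      # one draw of the squared eigenvector error

    Repeating this over many draws of the ensemble gives an empirical distribution of $r$ of the kind the analysis above characterizes.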
  5. Many recent works have studied the eigenvalue spectrum of the Conjugate Kernel (CK) defined by the nonlinear feature map of a feedforward neural network. However, existing results only establish weak convergence of the empirical eigenvalue distribution, and fall short of providing precise quantitative characterizations of the “spike” eigenvalues and eigenvectors that often capture the low-dimensional signal structure of the learning problem. In this work, we characterize these signal eigenvalues and eigenvectors for a nonlinear version of the spiked covariance model, including the CK as a special case. Using this general result, we give a quantitative description of how spiked eigenstructure in the input data propagates through the hidden layers of a neural network with random weights. As a second application, we study a simple regime of representation learning where the weight matrix develops a rank-one signal component over training and characterize the alignment of the target function with the spike eigenvector of the CK on test data. 
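    The CK Gram matrix itself is straightforward to form. The following sketch (all sizes and the spike strength are illustrative choices of ours) builds it for a one-hidden-layer network with random weights on inputs carrying a rank-one spike, where the top eigenvalue typically separates from the bulk:

    import numpy as np

    rng = np.random.default_rng(4)

    n, d, m = 300, 200, 400                 # samples, input dimension, hidden width
    u = rng.standard_normal(d) / np.sqrt(d) # spike direction, roughly unit norm
    spike = rng.choice([-1.0, 1.0], size=n) # signal coefficients
    X = rng.standard_normal((n, d)) + 5.0 * np.outer(spike, u)   # spiked input data

    W = rng.standard_normal((d, m)) / np.sqrt(d)   # random first-layer weights
    Phi = np.tanh(X @ W)                           # post-activation features
    CK = Phi @ Phi.T / m                           # Conjugate Kernel Gram matrix

    w = np.linalg.eigvalsh(CK)
    print(w[-3:])                                  # look for a spike eigenvalue separated from the bulk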