Title: Spectra of products of digraphs
A unified approach to the determination of eigenvalues and eigenvectors of specific matrices associated with directed graphs is presented. Matrices studied include the new distance matrix, with natural extensions to the distance Laplacian and distance signless Laplacian, in addition to the new adjacency matrix, with natural extensions to the Laplacian and signless Laplacian. Various sums of Kronecker products of nonnegative matrices are introduced to model the Cartesian and lexicographic products of digraphs. The Jordan canonical form is applied extensively to the analysis of spectra and eigenvectors. The analysis shows that Cartesian products provide a method for building infinite families of transmission regular digraphs with few distinct distance eigenvalues.
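The Kronecker-product model of the Cartesian product mentioned in the abstract can be sketched concretely: the adjacency matrix of the Cartesian product of two digraphs is a sum of Kronecker products of the factor matrices with identities. A minimal numpy sketch (the function name and the example digraphs are illustrative, not from the paper):

```python
import numpy as np

def cartesian_product_adjacency(A, B):
    """Adjacency matrix of the Cartesian product of two digraphs with
    adjacency matrices A (n x n) and B (m x m), written as a sum of
    Kronecker products: A(G) (x) I_m + I_n (x) A(H)."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

# Illustrative factors: a directed 3-cycle and a directed 2-path
C3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]], dtype=float)
P2 = np.array([[0, 1],
               [0, 0]], dtype=float)

Aprod = cartesian_product_adjacency(C3, P2)
eigs = np.linalg.eigvals(Aprod)  # sumset of the factor spectra
```

For a Kronecker sum, every eigenvalue of the product has the form λ_i + μ_j for factor eigenvalues λ_i and μ_j, which is one reason products are convenient for building families of digraphs with few distinct eigenvalues.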
Award ID(s):
1839918
PAR ID:
10220866
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
The Electronic Journal of Linear Algebra
Volume:
36
Issue:
36
ISSN:
1081-3810
Page Range / eLocation ID:
744 to 763
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary: Modern statistical methods for multivariate time series rely on the eigendecomposition of matrix-valued functions such as time-varying covariance and spectral density matrices. The curse of indeterminacy, or misidentification of smooth eigenvector functions, has not received much attention. We resolve this important problem and recover smooth trajectories by examining the distance between the eigenvectors of the same matrix-valued function evaluated at two consecutive points. We change the sign of the next eigenvector if its distance from the current one exceeds the square root of 2. In the case of distinct eigenvalues, this simple method delivers smooth eigenvectors. For coalescing eigenvalues, we match the corresponding eigenvectors and apply an additional signing around the coalescing points. We establish consistency and rates of convergence for the proposed smooth eigenvector estimators. Simulation results and applications to real data confirm that our approach is needed to obtain smooth eigenvectors.
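The sign-fixing rule described above (flip the next eigenvector when its distance from the current one exceeds √2, which for unit vectors is equivalent to a negative inner product) can be sketched for the distinct-eigenvalue case. A minimal numpy sketch with an illustrative rotating 2×2 covariance function:

```python
import numpy as np

def smooth_eigenvectors(matrices):
    """Eigendecompose a sequence of symmetric matrices (a matrix-valued
    function on a grid) and fix signs: flip the next eigenvector whenever
    its distance from the current one exceeds sqrt(2).  Assumes distinct
    eigenvalues (no coalescing), as in the paper's simple case."""
    vecs = []
    for M in matrices:
        w, V = np.linalg.eigh(M)
        if vecs:
            prev = vecs[-1]
            for j in range(V.shape[1]):
                if np.linalg.norm(V[:, j] - prev[:, j]) > np.sqrt(2):
                    V[:, j] = -V[:, j]
        vecs.append(V)
    return vecs

# Smoothly rotating covariance: eigenvectors should vary smoothly too
ts = np.linspace(0, np.pi / 3, 50)
mats = []
for t in ts:
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    mats.append(R @ np.diag([2.0, 1.0]) @ R.T)

smooth = smooth_eigenvectors(mats)
```

Without the flip, `np.linalg.eigh` may return either orientation at each grid point, so consecutive eigenvector matrices can jump by a distance near 2; after signing, consecutive distances shrink to the size of the grid step.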
  2. Abstract: Covariance matrices are fundamental to the analysis and forecast of economic, physical and biological systems. Although the eigenvalues $$\{\lambda _i\}$$ and eigenvectors $$\{\boldsymbol{u}_i\}$$ of a covariance matrix are central to such endeavours, in practice one must inevitably approximate the covariance matrix based on data with finite sample size $$n$$ to obtain empirical eigenvalues $$\{\tilde{\lambda }_i\}$$ and eigenvectors $$\{\tilde{\boldsymbol{u}}_i\}$$, and therefore understanding the error so introduced is of central importance. We analyse the eigenvector error $$\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2$$ while leveraging the assumption that the true covariance matrix of size $$p$$ is drawn from a matrix ensemble with known spectral properties. In particular, we assume the distribution of population eigenvalues weakly converges as $$p\to \infty $$ to a spectral density $$\rho (\lambda )$$ and that the spacing between population eigenvalues is similar to that of the Gaussian orthogonal ensemble. Our approach complements previous analyses of eigenvector error that require the full set of eigenvalues to be known, which can be computationally infeasible when $$p$$ is large. To provide a scalable approach for uncertainty quantification of eigenvector error, we consider a fixed eigenvalue $$\lambda $$ and approximate the distribution of the expected square error $$r= \mathbb{E}\left [\| \boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2\right ]$$ across the matrix ensemble for all $$\boldsymbol{u}_i$$ associated with $$\lambda _i=\lambda $$. We find, for example, that for sufficiently large matrix size $$p$$ and sample size $$n>p$$, the probability density of $$r$$ scales as $$1/(nr^2)$$. This power-law scaling implies that the eigenvector error is extremely heterogeneous: even if $$r$$ is very small for most eigenvectors, it can be large for others with non-negligible probability.
We support this and further results with numerical experiments. 
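A Monte Carlo check of the quantity $$\|\boldsymbol{u}_i - \tilde{\boldsymbol{u}}_i \|^2$$ studied above can be sketched as follows, with the sign ambiguity of each sample eigenvector resolved before comparison. The ensemble, dimension, and sample size here are illustrative, not those of the paper; numpy assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def eigvec_sq_errors(Sigma, n, rng):
    """Squared errors ||u_i - u~_i||^2 between population and sample
    covariance eigenvectors.  Eigenvalues are well separated, so matching
    eigenvectors by eigenvalue order is safe; each pair is sign-aligned."""
    p = Sigma.shape[0]
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = (X.T @ X) / n                 # empirical covariance (known zero mean)
    _, U = np.linalg.eigh(Sigma)      # population eigenvectors
    _, V = np.linalg.eigh(S)          # sample eigenvectors
    errs = []
    for i in range(p):
        u, v = U[:, i], V[:, i]
        if u @ v < 0:                 # resolve the sign ambiguity
            v = -v
        errs.append(float(np.sum((u - v) ** 2)))
    return errs

Sigma = np.diag(np.linspace(1.0, 3.0, 8))   # p = 8, well-separated spectrum
errs = eigvec_sq_errors(Sigma, n=2000, rng=rng)
```

With n well above p and well-separated population eigenvalues, the errors are small; repeating the draw over many matrices from an ensemble is what produces the heavy-tailed distribution of r discussed above.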
  3. Eighteen successful diffusion couple experiments in 8-component SiO2–TiO2–Al2O3–FeO–MgO–CaO–Na2O–K2O basaltic melts were conducted at 1260°C and 0.5 GPa and at 1500°C and 1.0 GPa. These experiments are combined with previous data at 1350°C and 1.0 GPa (Guo and Zhang, 2018) to study the temperature dependence of multicomponent diffusion in basaltic melts. Effective binary diffusion coefficients of components with monotonic diffusion profiles were extracted and show a strong dependence on their counter-diffusing component even though the average (or interface) compositions are the same. The diffusion matrix at 1260°C was obtained by simultaneously fitting diffusion profiles of all diffusion couple experiments as well as appropriate data from the literature. All features of concentration profiles in both diffusion couples and mineral dissolution are well reproduced by this new diffusion matrix. At 1500°C, only diffusion couple experiments are used to obtain the diffusion matrix. Eigenvectors of the diffusion matrix are used to discuss the diffusion (exchange) mechanism, and eigenvalues characterize the diffusion rate. Diffusion mechanisms at both 1260 and 1500°C are inferred from eigenvectors of the diffusion matrices and compared with those at 1350°C reported in Guo and Zhang (2018). There is an indication that diffusion eigenvectors in basaltic melts do not depend much on temperature, although complexity is present for some eigenvectors. The two slowest eigenvectors involve the exchange of SiO2 and/or Al2O3 with nonalkalis. The third slowest eigenvector is due to the exchange of divalent oxides with other oxides. The fastest eigenvector is due to the exchange of Na2O with other oxide components. Some eigenvalues differ from each other by less than 1/3, and their eigenvectors are less well defined; we refer to such small differences in eigenvalues as near degeneracy.
In strict mathematical degeneracy, eigenvectors are not uniquely defined because any linear combination of two eigenvectors is also an eigenvector. In the case of near degeneracy, more constraints, either higher-quality data or additional experiments, are needed to resolve the eigenvectors. As a trial, we generated a set of average eigenvectors, assumed to be constant as temperature varies. The corresponding eigenvalues are roughly Arrhenian, so the temperature-dependent diffusion matrix can be roughly predicted. The method is applied to predict experimental diffusion profiles in basaltic melts during olivine and anorthite dissolution at ~1400°C with preliminary success. We further applied our diffusion matrix to investigate multicomponent diffusion during magma mixing in the Bushveld Complex and found that such diffusion may result in an increased likelihood of sulfide and Fe-Ti oxide mineralization.
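The near-degeneracy issue described above can be illustrated numerically: when two eigenvalues are separated by a gap comparable to the perturbation (here standing in for data noise), the associated eigenvectors rotate substantially, while eigenvectors of well-separated eigenvalues barely move. A minimal sketch, with illustrative matrices rather than the paper's diffusion data; numpy assumed:

```python
import numpy as np

def leading_eigvec_rotation(eigvals, eps, seed=0):
    """Angle (degrees) by which the leading eigenvector of diag(eigvals)
    rotates under a random symmetric perturbation of size eps."""
    rng = np.random.default_rng(seed)
    A = np.diag(eigvals)
    E = rng.standard_normal(A.shape)
    E = eps * (E + E.T) / 2.0            # symmetric perturbation ~ noise
    _, V = np.linalg.eigh(A + E)
    cos = min(abs(V[-1, -1]), 1.0)       # overlap with unperturbed e_n
    return float(np.degrees(np.arccos(cos)))

well_sep = leading_eigvec_rotation([1.0, 2.0, 4.0], eps=1e-3)    # gap = 2
near_deg = leading_eigvec_rotation([1.0, 2.0, 2.001], eps=1e-3)  # gap = 1e-3
```

The rotation scales roughly as the perturbation divided by the eigenvalue gap, which is why near-degenerate eigenvectors need extra constraints to pin down.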
  4. Throughout many scientific and engineering fields, including control theory, quantum mechanics, advanced dynamics, and network theory, a great many important applications rely on the spectral decomposition of matrices. Traditional methods such as the power iteration method, the Jacobi eigenvalue method, and QR decomposition are commonly used to compute the eigenvalues and eigenvectors of a square symmetric matrix. However, these methods suffer from certain drawbacks: in particular, the power iteration method can only find the leading eigenpair (i.e., the largest eigenvalue and its corresponding eigenvector), while the Jacobi and QR decomposition methods face significant performance limitations when faced with large-scale matrices. Typically, even producing approximate eigenpairs of a general square matrix requires at least O(N^3) time complexity, where N is the number of rows of the matrix. In this work, we exploit newly developed memristor technology to propose a low-complexity, scalable memristor-based method for deriving a set of dominant eigenvalues and eigenvectors of real symmetric nonnegative matrices. The time complexity of our proposed algorithm is O(N^2/Δ), where Δ governs the accuracy. We present experimental studies simulating the memristor-based algorithm, with results demonstrating that the average error of our method is within 4%, while its performance is up to 1.78X better than that of traditional methods.
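The limitation attributed to power iteration above, that it recovers only the leading eigenpair, is easy to see in a few lines. A minimal sketch of the classical method (not the memristor algorithm), numpy assumed; each step costs one O(N^2) matrix-vector product:

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=10_000):
    """Classical power iteration: returns the leading eigenpair (largest-
    magnitude eigenvalue and its eigenvector) of a square matrix A.
    Assumes the start vector is not orthogonal to the leading eigenvector."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)          # deterministic start vector
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v_new = w / np.linalg.norm(w)    # renormalize each step
        lam_new = v_new @ A @ v_new      # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # symmetric nonnegative example
lam, v = power_iteration(A)
# Leading eigenvalue of [[2,1],[1,3]] is (5 + sqrt(5)) / 2
```

Convergence is geometric in the ratio of the second-largest to largest eigenvalue magnitudes, and the subdominant eigenpairs are never produced, which is exactly the drawback the abstract points out.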
  5. Recent spectral graph sparsification techniques have shown promising performance in accelerating many numerical and graph algorithms, such as iterative methods for solving large sparse matrices, spectral partitioning of undirected graphs, vectorless verification of power/thermal grids, and representation learning of large graphs. However, prior spectral graph sparsification methods rely on fast Laplacian matrix solvers that are usually challenging to implement in practice. This work, for the first time, introduces a solver-free approach (SF-GRASS) for spectral graph sparsification by leveraging emerging spectral graph coarsening and graph signal processing (GSP) techniques. We introduce a local spectral embedding scheme for efficiently identifying spectrally critical edges that are key to preserving graph spectral properties, such as the first few Laplacian eigenvalues and eigenvectors. Since the key kernel functions in SF-GRASS can be efficiently implemented using sparse matrix-vector multiplications (SpMVs), the proposed spectral approach is simple to implement and inherently parallel-friendly. Our extensive experimental results show that the proposed method can produce a hierarchy of high-quality spectral sparsifiers in nearly linear time for a variety of real-world, large-scale graphs and circuit networks when compared with prior state-of-the-art spectral methods.
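The spectral quantities SF-GRASS aims to preserve, the first few Laplacian eigenvalues and eigenvectors, can themselves be computed with a Lanczos-type solver that touches the Laplacian only through SpMVs. A generic scipy sketch (not SF-GRASS itself; the graph, helper function, and sizes are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def laplacian(edges, n):
    """Sparse combinatorial Laplacian L = D - A of an undirected graph
    given as a list of (u, v) edges on n nodes."""
    rows, cols, vals = [], [], []
    for u, v in edges:
        rows += [u, v]
        cols += [v, u]
        vals += [1.0, 1.0]
    A = csr_matrix((vals, (rows, cols)), shape=(n, n))
    deg = np.asarray(A.sum(axis=1)).ravel()
    return diags(deg) - A

# Path graph on 6 nodes; eigsh (Lanczos) accesses L only via SpMVs
n = 6
L = laplacian([(i, i + 1) for i in range(n - 1)], n)
w = eigsh(L, k=3, which="SM", return_eigenvectors=False)
```

Comparing these smallest eigenvalues before and after sparsification is one way to check that a sparsifier preserves the graph's low-frequency spectral structure.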