The spark of a matrix is the smallest number of nonzero coordinates of any nonzero null vector. For real symmetric matrices, the sparsity of null vectors is shown to be associated with the structure of the graph obtained from the off-diagonal pattern of zero and nonzero entries. The smallest possible spark of a matrix corresponding to a graph is defined as the spark of the graph. Connections are established between graph spark and well-known concepts including minimum rank, forts, orthogonal representations, Parter and Fiedler vertices, and vertex connectivity.
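The spark defined above can be computed by brute force for tiny matrices: the minimum support of a nonzero null vector equals the smallest number of linearly dependent columns. A minimal NumPy sketch (exponential time, illustration only; the function name `spark` is ours):

```python
from itertools import combinations
import numpy as np

def spark(M, tol=1e-10):
    """Brute-force spark: the smallest number of linearly dependent
    columns of M, equivalently the minimum number of nonzero
    coordinates of any nonzero null vector. Exponential in the
    number of columns; for illustration only."""
    _, n = M.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # k columns are dependent iff their submatrix has rank < k
            if np.linalg.matrix_rank(M[:, cols], tol=tol) < k:
                return k
    return float("inf")  # no nonzero null vector exists

# The identity has a trivial null space; duplicating a column gives spark 2.
M = np.hstack([np.eye(3), np.eye(3)[:, :1]])
```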
Orthogonal realizations of random sign patterns and other applications of the SIPP
A sign pattern is an array with entries in $$\{+,-,0\}$$. A real matrix $$Q$$ is row orthogonal if $$QQ^T = I$$. The Strong Inner Product Property (SIPP), introduced in [B.A. Curtis and B.L. Shader, Sign patterns of orthogonal matrices and the strong inner product property, Linear Algebra Appl. 592: 228-259, 2020], is an important tool when determining whether a sign pattern allows row orthogonality because it guarantees there is a nearby matrix with the same property, allowing zero entries to be perturbed to nonzero entries, while preserving the sign of every nonzero entry. This paper uses the SIPP to initiate the study of conditions under which random sign patterns allow row orthogonality with high probability. Building on prior work, $$5\times n$$ nowhere zero sign patterns that minimally allow orthogonality are determined. Conditions on zero entries in a sign pattern are established that guarantee any row orthogonal matrix with such a sign pattern has the SIPP.
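The two definitions opening the abstract are easy to make concrete. A minimal NumPy sketch (the function names are ours, not from the paper), using a $$2\times 3$$ row orthogonal matrix whose sign pattern is nowhere zero:

```python
import numpy as np

def is_row_orthogonal(Q, tol=1e-9):
    """Check Q Q^T = I, i.e. the rows of Q are orthonormal."""
    return np.allclose(Q @ Q.T, np.eye(Q.shape[0]), atol=tol)

def sign_pattern(Q):
    """The {+,-,0} pattern of Q, encoded as integers in {1,-1,0}."""
    return np.sign(Q).astype(int)

# A row orthogonal matrix with a nowhere zero sign pattern.
Q = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, -2.0]]) / 3.0
```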
- Award ID(s):
- 1839918
- PAR ID:
- 10495656
- Publisher / Repository:
- University of Wyoming
- Date Published:
- Journal Name:
- The Electronic Journal of Linear Algebra
- Volume:
- 39
- ISSN:
- 1081-3810
- Page Range / eLocation ID:
- 434 to 459
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
In the non-negative matrix factorization (NMF) problem, the input is an m×n matrix M with non-negative entries and the goal is to factorize it as M ≈ AW. The m × k matrix A and the k × n matrix W are both constrained to have non-negative entries. This is in contrast to singular value decomposition, where the matrices A and W can have negative entries but must satisfy the orthogonality constraint: the columns of A are orthogonal and the rows of W are also orthogonal. The orthogonal non-negative matrix factorization (ONMF) problem imposes both the non-negativity and the orthogonality constraints, and previous work showed that it leads to better performance than NMF on many clustering tasks. We give the first constant-factor approximation algorithm for ONMF when one or both of A and W are subject to the orthogonality constraint. We also show an interesting connection to the correlation clustering problem on bipartite graphs. Our experiments on synthetic and real-world data show that our algorithm achieves similar or smaller errors compared to previous ONMF algorithms while ensuring perfect orthogonality (many previous algorithms do not satisfy the hard orthogonality constraint).
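The link between ONMF and clustering can be made concrete: if each column of M is assigned to exactly one of k clusters, the resulting indicator-style factor W has rows with disjoint supports, so its rows are automatically orthogonal. A hedged sketch of this construction (not the paper's approximation algorithm; the function name is ours):

```python
import numpy as np

def onmf_from_clustering(M, labels, k):
    """Build M ≈ A W where W is a row-normalized cluster-indicator
    matrix: W[c, j] != 0 only if column j belongs to cluster c.
    Rows of W have disjoint supports, so W W^T is the identity on
    nonempty clusters. Not the paper's algorithm; illustration only."""
    _, n = M.shape
    W = np.zeros((k, n))
    for j in range(n):
        W[labels[j], j] = 1.0
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    W = W / norms
    # With W W^T = I, the Frobenius-optimal A for fixed W is M W^T.
    A = M @ W.T
    return A, W

# Two clusters of columns; the factorization recovers M exactly here.
M = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
A, W = onmf_from_clustering(M, [0, 0, 1, 1], 2)
```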
-
We develop a non-symmetric strong multiplicity property for matrices that may or may not be symmetric. We say a sign pattern allows the non-symmetric strong multiplicity property if there is a matrix with the non-symmetric strong multiplicity property that has the given sign pattern. We show that this property of a matrix pattern preserves multiplicities of eigenvalues for superpatterns of the pattern. We also provide a bifurcation lemma, showing that a matrix pattern with the property also allows refinements of the multiplicity list of eigenvalues. We conclude by demonstrating how this property can help with the inverse eigenvalue problem of determining the number of distinct eigenvalues allowed by a sign pattern.
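The superpattern relation used above has a simple concrete test: S is a superpattern of P when S can be obtained from P by replacing some zero entries with nonzero ones, keeping every nonzero entry's sign. A minimal sketch with patterns encoded as arrays over {1, -1, 0} (the function name is ours):

```python
import numpy as np

def is_superpattern(S, P):
    """True if S agrees with P in every nonzero entry of P;
    S may additionally turn zeros of P into +1 or -1."""
    P, S = np.asarray(P), np.asarray(S)
    mask = P != 0
    return bool(np.all(S[mask] == P[mask]))
```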
-
It is shown that for any positive integer $$n \ge 3$$, there is a stable irreducible $$n\times n$$ matrix $$A$$ with $$2n+1-\lfloor\frac{n}{3}\rfloor$$ nonzero entries exhibiting Turing instability. Moreover, when $$n = 3$$, the result is best possible, i.e., every $$3\times 3$$ stable matrix with five or fewer nonzero entries will not exhibit Turing instability. Furthermore, we determine all possible $$3\times 3$$ irreducible sign pattern matrices with 6 nonzero entries which can be realized by a matrix $$A$$ that exhibits Turing instability.
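Turing (diffusion-driven) instability can be checked numerically: a stable matrix $$A$$ exhibits it when subtracting some nonnegative diagonal diffusion matrix $$D$$ makes $$A - D$$ unstable. A hedged sketch that scans a supplied list of candidate diagonals (the function names are ours; a classic $$2\times 2$$ activator-inhibitor example is used for illustration, not one of the paper's $$n \ge 3$$ matrices):

```python
import numpy as np

def is_stable(A, tol=1e-9):
    """All eigenvalues lie in the open left half-plane."""
    return np.max(np.linalg.eigvals(A).real) < -tol

def exhibits_turing_instability(A, diffusions):
    """A stable matrix A exhibits Turing instability if A - D is
    unstable for some nonnegative diagonal D. We only scan the
    supplied candidate diagonals; illustration only."""
    if not is_stable(A):
        return False
    return any(not is_stable(A - np.diag(d)) for d in diffusions)

# Stable (eigenvalues -1, -2), but destabilized by diffusion on
# the second species: A - diag(0, 3) has a positive eigenvalue.
A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
```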
-
Candecomp/PARAFAC (CP) decomposition, a generalization of the matrix singular value decomposition to higher-dimensional tensors, is a popular tool for analyzing multidimensional sparse data. On tensors with billions of nonzero entries, computing a CP decomposition is a computationally intensive task. We propose the first distributed-memory implementations of two randomized CP decomposition algorithms, CP-ARLS-LEV and STS-CP, that offer nearly an order-of-magnitude speedup at high decomposition ranks over well-tuned non-randomized decomposition packages. Both algorithms rely on leverage score sampling and enjoy strong theoretical guarantees, each with varying time and accuracy tradeoffs. We tailor the communication schedule for our random sampling algorithms, eliminating expensive reduction collectives and forcing communication costs to scale with the random sample count. Finally, we optimize the local storage format for our methods, switching between analogues of compressed sparse column and compressed sparse row formats. Experiments show that our methods are fast and scalable, producing 11x speedup over SPLATT by decomposing the billion-scale Reddit tensor on 512 CPU cores in under two minutes.
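Leverage score sampling, which both algorithms above rely on, is easiest to see on an ordinary least-squares problem: sample rows of the design matrix with probability proportional to their leverage scores, reweight, and solve the smaller problem. A minimal NumPy sketch (not the paper's distributed implementation; the function names are ours):

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores: squared row norms of an orthonormal
    basis Q for the column space of A; they sum to rank(A)."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def sampled_lstsq(A, b, s, rng):
    """Approximate argmin ||Ax - b|| from s rows drawn with
    probability proportional to leverage scores, using standard
    importance reweighting."""
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    w = 1.0 / np.sqrt(s * p[idx])  # importance weights
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x
```

On a consistent system (b exactly in the column space of A), any full-rank subsample recovers the true coefficients, which makes the sketch easy to sanity-check.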