If the Laplacian matrix of a graph has a full set of orthogonal eigenvectors with entries $$\pm1$$, then the matrix whose columns are these eigenvectors is a Hadamard matrix, and the graph is said to be Hadamard diagonalizable. In this article, we prove that if $$n=8k+4$$, the only possible Hadamard diagonalizable graphs are $$K_n$$, $$K_{n/2,n/2}$$, $$2K_{n/2}$$, and $$nK_1$$, and we develop a computational method for determining all graphs diagonalized by a given Hadamard matrix of any order. Using these two tools, we determine and present all Hadamard diagonalizable graphs up to order 36. Note that it is not even known how many Hadamard matrices there are of order 36.
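The criterion above can be made concrete with a minimal NumPy sketch (illustrative code, not the authors' implementation): a graph with Laplacian $$L$$ is diagonalized by a Hadamard matrix $$H$$ precisely when $$H^{-1} L H$$ is diagonal.

```python
import numpy as np

def is_diagonalized_by(L, H):
    """Return True if every column of H is an eigenvector of L,
    i.e. H^{-1} L H is a diagonal matrix."""
    M = np.linalg.inv(H) @ L @ H
    off_diag = M - np.diag(np.diag(M))
    return np.allclose(off_diag, 0)

# Example: K_4 (complete graph on 4 vertices) with the order-4
# Sylvester Hadamard matrix.
n = 4
L = n * np.eye(n) - np.ones((n, n))   # Laplacian of K_n is nI - J
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)                  # Sylvester doubling: H_4 = H_2 (x) H_2
print(is_diagonalized_by(L, H4))      # K_n is Hadamard diagonalizable
```

The all-ones column of $$H_4$$ is the eigenvector for eigenvalue 0, and every other $$\pm1$$ column is orthogonal to it, hence an eigenvector of $$nI - J$$ with eigenvalue $$n$$.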
On the existence of complex Hadamard submatrices of the Fourier matrices
Abstract: We use a theorem of Lam and Leung to prove that a submatrix of a Fourier matrix cannot be Hadamard in particular cases where the dimension of the submatrix does not divide the dimension of the Fourier matrix. We also make some observations on the trace-spectrum relationship of dephased Hadamard matrices of low dimension.
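The object of study can be checked numerically with a short NumPy sketch (illustrative, not from the paper): a $$d \times d$$ submatrix $$M$$ of the $$N \times N$$ Fourier matrix is complex Hadamard if and only if $$M M^{*} = dI$$, since its entries are automatically unimodular.

```python
import numpy as np

def fourier_matrix(N):
    """The N x N Fourier matrix with entries w^(jk), w = exp(2*pi*i/N)."""
    w = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return w ** (j * k)

def is_hadamard_submatrix(N, rows, cols):
    """Check whether the given row/column selection of F_N is complex Hadamard."""
    M = fourier_matrix(N)[np.ix_(rows, cols)]
    d = len(rows)
    return np.allclose(M @ M.conj().T, d * np.eye(d))

# When d divides N, rows in arithmetic progression of step N/d give a copy
# of F_d inside F_N, which is Hadamard:
print(is_hadamard_submatrix(6, [0, 2, 4], [0, 1, 2]))  # True
# A generic 3x3 submatrix of F_6 is not:
print(is_hadamard_submatrix(6, [0, 1, 2], [0, 1, 2]))  # False
```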
- Award ID(s): 1743819
- PAR ID: 10088774
- Journal Name: Demonstratio Mathematica
- Volume: 52
- Issue: 1
- ISSN: 2391-4661
- Page Range / eLocation ID: 1 to 9
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Random projections, or sketching, are widely used in many algorithmic and learning contexts. Here we study the performance of iterative Hessian sketch for least-squares problems. By leveraging and extending recent results from random matrix theory on the limiting spectrum of matrices randomly projected with the subsampled randomized Hadamard transform and with truncated Haar matrices, we can study and compare the resulting algorithms to a level of precision that has not been possible before. Our technical contributions include a novel formula for the second moment of the inverse of projected matrices. We also find simple closed-form expressions for asymptotically optimal step sizes and convergence rates. These show that the convergence rates for Haar and randomized Hadamard matrices are identical and asymptotically improve upon Gaussian random projections. These techniques may be applied to other algorithms that employ randomized dimension reduction.
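The basic iteration studied here can be sketched in a few lines of NumPy (a simplified illustration: a Gaussian sketch with unit step size, whereas the paper analyzes SRHT and truncated Haar projections and derives the asymptotically optimal step sizes).

```python
import numpy as np

# Iterative Hessian sketch (IHS) for min_x ||Ax - b||^2: at each step the
# Hessian A^T A is replaced by its sketched version A^T S^T S A, while the
# gradient is computed exactly.
rng = np.random.default_rng(0)
n, d, m = 2000, 10, 300                  # data rows, columns, sketch size
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
for _ in range(30):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # fresh sketch each step
    H = (S @ A).T @ (S @ A)                        # sketched Hessian
    g = A.T @ (A @ x - b)                          # exact gradient
    x -= np.linalg.solve(H, g)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_ls))          # converges to the least-squares solution
```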
We give algorithms with lower arithmetic operation counts for both the Walsh-Hadamard Transform (WHT) and the Discrete Fourier Transform (DFT) on inputs of power-of-2 size N. For the WHT, our new algorithm has an operation count of (23/24) N log N + O(N). To our knowledge, this gives the first improvement on the N log N operation count of the simple, folklore fast Walsh-Hadamard transform algorithm. For the DFT, our new FFT algorithm uses (15/4) N log N + O(N) real arithmetic operations. Our leading constant 15/4 = 3.75 improves on the leading constant of 5 from the Cooley-Tukey algorithm from 1965, the leading constant 4 from the split-radix algorithm of Yavne from 1968, the leading constant 34/9 ≈ 3.778 from a modification of the split-radix algorithm by Van Buskirk from 2004, and the leading constant 3.76875 from a theoretically optimized version of Van Buskirk's algorithm by Sergeev from 2017. Our new WHT algorithm takes advantage of a recent line of work on the non-rigidity of the WHT: we decompose the WHT matrix as the sum of a low-rank matrix and a sparse matrix, and then analyze the structures of these matrices to achieve a lower operation count. Our new DFT algorithm comes from a novel reduction, showing that parts of the previous best FFT algorithms can be replaced by calls to an algorithm for the WHT. Replacing the folklore WHT algorithm with our new improved algorithm leads to our improved FFT.
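For reference, the "folklore" fast Walsh-Hadamard transform that this result improves upon is the in-place butterfly below, using N log2 N additions and subtractions (a standard sketch, not the paper's improved algorithm):

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform of a length-2^k array,
    equivalent to multiplying by the Sylvester Hadamard matrix."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        # Butterfly stage: combine entries h apart with a sum/difference pair.
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0])
print(fwht(x))   # equals H_4 @ x for the Sylvester H_4, i.e. [2, 2, 0, 0]
```

Applying the transform twice recovers N times the input, since the Hadamard matrix satisfies H^2 = N I.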
Abstract: We show the optimal coherence of $$2d$$ lines in $$\mathbb{C}^{d}$$ is given by the Welch bound whenever a skew Hadamard matrix of order $$d+1$$ exists. Our proof uses a variant of Hadamard matrix doubling that converts any equiangular tight frame of size $$\tfrac{d-1}{2} \times d$$ into another one of size $$d \times 2d$$. Among $$d \leq 160$$, this produces equiangular tight frames of new sizes when $$d = 11$$, $$35$$, $$39$$, $$43$$, $$47$$, $$59$$, $$67$$, $$71$$, $$83$$, $$95$$, $$103$$, $$107$$, $$111$$, $$119$$, $$123$$, $$127$$, $$131$$, $$143$$, $$151$$, and $$155$$.
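The quantities involved can be illustrated numerically (an illustrative check, not the paper's construction): the Welch bound for $$n$$ unit vectors in dimension $$d$$ is $$\sqrt{(n-d)/(d(n-1))}$$, and the classical 6 equiangular diagonals of the icosahedron give $$2d$$ lines meeting it for $$d = 3$$, the case where $$d + 1 = 4$$ admits a skew Hadamard matrix.

```python
import numpy as np

def coherence(V):
    """Max |<v_i, v_j>| over distinct columns of V (columns unit-norm)."""
    G = np.abs(V.conj().T @ V)
    np.fill_diagonal(G, 0)
    return G.max()

# The 6 diagonals of the icosahedron, as unit vectors in R^3 (phi = golden ratio).
phi = (1 + np.sqrt(5)) / 2
V = np.array([[1, phi, 0], [1, -phi, 0], [phi, 0, 1],
              [-phi, 0, 1], [0, 1, phi], [0, 1, -phi]], dtype=float).T
V /= np.linalg.norm(V, axis=0)

d, n = V.shape
welch = np.sqrt((n - d) / (d * (n - 1)))           # = 1/sqrt(5) for n=6, d=3
print(np.isclose(coherence(V), welch))              # the lines meet the bound
```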
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching. We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT). While current randomized solvers for least-squares optimization prescribe an embedding dimension at least as large as the data dimension, we show that the embedding dimension can be reduced to the effective dimension of the optimization problem while still preserving high-probability convergence guarantees. In this regard, we derive sharp matrix deviation inequalities over ellipsoids for both Gaussian and SRHT embeddings. Specifically, we improve on the constant of a classical Gaussian concentration bound, whereas for SRHT embeddings our deviation inequality involves a novel technical approach. Leveraging these bounds, we design a practical and adaptive algorithm that does not require knowing the effective dimension beforehand. Our method starts with an embedding dimension equal to 1 and, over iterations, increases it up to at most the effective dimension. Hence, our algorithm improves the state-of-the-art computational complexity for solving regularized least-squares problems. Further, we show numerically that it outperforms standard iterative solvers such as the conjugate gradient method and its preconditioned version on several standard machine learning datasets.
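A single sketch-and-solve step for the regularized problem can be sketched as follows (an illustrative simplification with a fixed Gaussian embedding dimension m; the paper's adaptive algorithm instead grows m toward the effective dimension over iterations).

```python
import numpy as np

# Ridge regression min_x ||Ax - b||^2 + lam * ||x||^2, solved once exactly
# and once with the Hessian A^T A replaced by its sketched version.
rng = np.random.default_rng(1)
n, d, m, lam = 2000, 10, 400, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Exact regularized solution.
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Sketched Hessian, exact right-hand side.
S = rng.standard_normal((m, n)) / np.sqrt(m)       # Gaussian embedding
SA = S @ A
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), A.T @ b)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)   # a coarse one-shot approximation, refined by iterating
```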