Title: Cayley Graphs Without a Bounded Eigenbasis
Abstract: Does every $$n$$-vertex Cayley graph have an orthonormal eigenbasis all of whose coordinates are $$O(1/\sqrt{n})$$? While the answer is yes for abelian groups, we show that it is no in general. On the other hand, we show that every $$n$$-vertex Cayley graph (and more generally, vertex-transitive graph) has an orthonormal basis whose coordinates are all $$O(\sqrt{\log n / n})$$, and that this bound is nearly best possible. Our investigation is motivated by a question of Assaf Naor, who proved that random abelian Cayley graphs are small-set expanders, extending a classic result of Alon–Roichman. His proof relies on the existence of a bounded eigenbasis for abelian Cayley graphs, which we now know cannot hold for general groups. On the other hand, we navigate around this obstruction and extend Naor’s result to nonabelian groups.
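As a quick illustration of the abelian case mentioned in the abstract (this sketch is not taken from the paper; the connection set and the use of numpy are assumptions made for the example), the characters of the cyclic group $$\mathbb{Z}_n$$ give an orthonormal eigenbasis of any Cayley graph on $$\mathbb{Z}_n$$ in which every coordinate has absolute value exactly $$1/\sqrt{n}$$:

```python
# Not from the paper: the characters of Z_n diagonalize any Cayley graph on Z_n,
# and every coordinate of every (normalized) character has modulus exactly 1/sqrt(n).
import numpy as np

def cyclic_cayley_adjacency(n, S):
    """Adjacency matrix of the Cayley graph on Z_n with symmetric connection set S."""
    A = np.zeros((n, n))
    for x in range(n):
        for s in S:
            A[x, (x + s) % n] = 1.0
    return A

n = 12
S = {1, n - 1, 3, n - 3}                         # assumed example connection set, S = -S
A = cyclic_cayley_adjacency(n, S)

x = np.arange(n)
F = np.exp(2j * np.pi * np.outer(x, x) / n) / np.sqrt(n)   # column k is the character chi_k

# chi_k is an eigenvector with eigenvalue sum_{s in S} exp(2*pi*i*k*s/n).
eigvals = np.array([sum(np.exp(2j * np.pi * k * s / n) for s in S) for k in range(n)])
assert np.allclose(A @ F, F * eigvals)           # every column of F is an eigenvector of A
assert np.allclose(F.conj().T @ F, np.eye(n))    # the columns form an orthonormal basis
assert np.allclose(np.abs(F), 1 / np.sqrt(n))    # all coordinates have modulus 1/sqrt(n)
```

The same check works for any symmetric connection set, since the characters depend only on the group and not on the generators; it is exactly this uniform $$1/\sqrt{n}$$ bound that the paper shows can fail for nonabelian groups.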
Award ID(s):
1764176
PAR ID:
10337076
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
International Mathematics Research Notices
Volume:
2022
Issue:
8
ISSN:
1073-7928
Page Range / eLocation ID:
6157 to 6185
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The spectral decomposition of graph adjacency matrices is an essential ingredient in the design of graph signal processing (GSP) techniques. When the adjacency matrix has multi-dimensional eigenspaces, it is desirable to base GSP constructions on a particular eigenbasis that better reflects the graph’s symmetries. In this paper, we provide an explicit and detailed representation-theoretic account for the spectral decomposition of the adjacency matrix of a weighted Cayley graph. Our method applies to all weighted Cayley graphs, regardless of whether they are quasi-Abelian, and offers detailed descriptions of eigenvalues and eigenvectors derived from the coefficient functions of the representations of the underlying group. Next, we turn our attention to constructing frames on Cayley graphs. Frames are overcomplete spanning sets that ensure stable and potentially redundant systems for signal reconstruction. We use our proposed eigenbases to build frames that are suitable for developing signal processing on Cayley graphs. These are the Frobenius–Schur frames and Cayley frames, for which we provide a characterization and a practical recipe for their construction.
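The following is a small numerical illustration (not the Frobenius–Schur or Cayley frame construction described above; the group, connection set, and helper functions are hypothetical choices for the example) of why a principled choice of eigenbasis matters: for a quasi-Abelian Cayley graph on the symmetric group $$S_3$$, the adjacency matrix has a repeated eigenvalue, so its eigenspaces admit many different orthonormal bases.

```python
# Hypothetical example, not the construction from the item above: Cayley graph on S_3
# whose connection set (all transpositions) is closed under conjugation (quasi-Abelian).
import numpy as np
from itertools import permutations

group = list(permutations(range(3)))             # the six elements of S_3, as tuples
index = {g: i for i, g in enumerate(group)}

def compose(g, h):
    """Composition of permutations: (g h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in range(3))

transpositions = [g for g in group if sum(g[x] != x for x in range(3)) == 2]

n = len(group)
A = np.zeros((n, n))
for g in group:
    for s in transpositions:
        A[index[g], index[compose(g, s)]] = 1.0  # edge between g and g*s

print(np.round(np.linalg.eigvalsh(A), 6))        # eigenvalue 0 appears with multiplicity 4
```

Any rotation of an orthonormal basis of the 4-dimensional eigenspace is again an eigenbasis; this ambiguity is what a representation-theoretic decomposition resolves.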
  2. We introduce and study the problem of constructing geometric graphs that have few vertices and edges and that are universal for planar graphs or for some subclass of planar graphs; a geometric graph is universal for a class $$H$$ of planar graphs if it contains an embedding, i.e., a crossing-free drawing, of every graph in $$H$$. Our main result is that there exists a geometric graph with $$n$$ vertices and $$O(n \log n)$$ edges that is universal for $$n$$-vertex forests; this extends to the geometric setting a well-known graph-theoretic result by Chung and Graham, which states that there exists an $$n$$-vertex graph with $$O(n \log n)$$ edges that contains every $$n$$-vertex forest as a subgraph. Our $$O(n \log n)$$ bound on the number of edges is asymptotically optimal. We also prove that, for every $$h > 0$$, every $$n$$-vertex convex geometric graph that is universal for the class of $$n$$-vertex outerplanar graphs has $$\Omega_h(n^{2-1/h})$$ edges; this almost matches the trivial $$O(n^2)$$ upper bound given by the $$n$$-vertex complete convex geometric graph. Finally, we prove that there is an $$n$$-vertex convex geometric graph with $$O(n \log n)$$ edges that is universal for $$n$$-vertex caterpillars.
  3. We introduce a notion of code sparsification that generalizes the notion of cut sparsification in graphs. For a (linear) code $$C \subseteq \mathbb{F}_q^n$$ of dimension $$k$$, a $$(1 \pm \varepsilon)$$-sparsification of size $$s$$ is given by a weighted set $$S \subseteq [n]$$ with $$|S| \le s$$ such that for every codeword $$c \in C$$ the projection $$c|_S$$ of $$c$$ to the set $$S$$ has (weighted) Hamming weight that is a $$(1 \pm \varepsilon)$$-approximation of the Hamming weight of $$c$$. We show that for every code there exists a $$(1 \pm \varepsilon)$$-sparsification of size $$s = \tilde{O}(k \log(q)/\varepsilon^2)$$. This immediately implies known results on graph and hypergraph cut sparsification up to polylogarithmic factors (with a simple unified proof): the former follows from the well-known fact that cuts in a graph form a linear code over $$\mathbb{F}_2$$, while the latter is obtained by a simple encoding of hypergraph cuts. Further, via connections between the eigenvalues of the Laplacians of Cayley graphs over $$\mathbb{F}_2^k$$ and the weights of codewords, we also give the first proof of the existence of spectral sparsifiers of Cayley graphs over $$\mathbb{F}_2^k$$ by Cayley graphs, i.e., where we sparsify the set of generators to nearly-optimal size. Additionally, this work can be viewed as a continuation of a line of works on building sparsifiers for constraint satisfaction problems (CSPs); this result shows that there exist near-linear size sparsifiers for CSPs over $$\mathbb{F}_p$$-valued variables whose unsatisfying assignments can be expressed as the zeros of a linear equation modulo a prime $$p$$. As an application we give a full characterization of ternary Boolean CSPs (CSPs where the underlying predicate acts on three Boolean variables) that allow for near-linear size sparsification. This makes progress on a question posed by Kogan and Krauthgamer (ITCS 2015) asking which CSPs allow for near-linear size sparsifiers (in the number of variables). At the heart of our result is a codeword counting bound that we believe is of independent interest. Indeed, extending Karger's cut-counting bound (SODA 1993), we show a novel decomposition theorem for linear codes: every linear code has a (relatively) small subset of coordinates such that, after deleting those coordinates, the code on the remaining coordinates has a smooth upper bound on the number of codewords of small weight. Using the deleted coordinates in addition to a (weighted) random sample of the remaining coordinates then allows us to sparsify the whole code. The proof of this decomposition theorem extends Karger's proof (and the contraction method) in a clean way, while enabling the extensions listed above without any additional complexity in the proofs.
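A brute-force sketch of the sparsification definition above (the generator matrix, weights, and function name are hypothetical; this is not the paper's sampling algorithm): it checks whether a weighted coordinate set is a $$(1 \pm \varepsilon)$$-sparsifier of a small binary code by enumerating all codewords.

```python
# Hypothetical example data: check the (1 +/- eps)-sparsification condition for a binary
# linear code given by a k x n generator matrix G, by comparing the weighted Hamming weight
# of c|_S with the Hamming weight of c over all 2^k codewords c.
import itertools
import numpy as np

def is_sparsifier(G, weights, eps):
    """weights: dict mapping each coordinate in S to its positive weight."""
    k, n = G.shape
    for msg in itertools.product([0, 1], repeat=k):
        c = (np.array(msg) @ G) % 2                              # the codeword
        true_wt = int(c.sum())                                   # Hamming weight of c
        approx_wt = sum(w * c[i] for i, w in weights.items())    # weighted weight of c|_S
        if not (1 - eps) * true_wt <= approx_wt <= (1 + eps) * true_wt:
            return False
    return True

# Toy code with two disjoint "repetition blocks"; keeping one coordinate per block with
# weight 3 preserves every codeword's weight exactly (eps = 0).
G = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1]])
print(is_sparsifier(G, weights={0: 3.0, 3: 3.0}, eps=0.0))       # True
```

For this toy code two weighted coordinates suffice; the result above says that a set of size $$\tilde{O}(k \log(q)/\varepsilon^2)$$ exists for every linear code.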
  4. Boolean functions play an important role in many different areas of computer science. The _local sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the number of coordinates whose flip changes the value of $$f(x)$$, i.e., the number of indices $$i$$ such that $$f(x)\not=f(x+e_i)$$, where $$e_i$$ is the $$i$$-th unit vector. The _sensitivity_ of a Boolean function is its maximum local sensitivity. In other words, the sensitivity measures the robustness of a Boolean function with respect to a perturbation of its input. Another notion that measures this robustness is block sensitivity. The _local block sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the maximum number of pairwise disjoint subsets $$I$$ of $$\{1,\ldots,n\}$$ such that flipping the coordinates indexed by any one of them changes the value of $$f(x)$$, and the _block sensitivity_ of $$f$$ is its maximum local block sensitivity. Since the local block sensitivity is at least the local sensitivity for any input $$x$$, the block sensitivity of $$f$$ is at least the sensitivity of $$f$$. The next example demonstrates that the block sensitivity of a Boolean function is not linearly bounded by its sensitivity. Fix an integer $$k\ge 2$$ and define a Boolean function $$f:\{0,1\}^{2k^2}\to\{0,1\}$$ as follows: the coordinates of $$x\in\{0,1\}^{2k^2}$$ are split into $$k$$ blocks of size $$2k$$ each, and $$f(x)=1$$ if and only if at least one of the blocks contains exactly two entries equal to one and these entries are consecutive. While the sensitivity of the function $$f$$ is $$2k$$, its block sensitivity is $$k^2$$. The Sensitivity Conjecture, made by Nisan and Szegedy in 1992, asserts that the block sensitivity of a Boolean function is polynomially bounded by its sensitivity. The example above shows that the degree of such a polynomial must be at least two. The Sensitivity Conjecture has recently been proven by Huang in [Annals of Mathematics 190 (2019), 949-955](https://doi.org/10.4007/annals.2019.190.3.6). He proved the following combinatorial statement that implies the conjecture (with the degree of the polynomial equal to four): any subset of more than half of the vertices of the $$n$$-dimensional cube $$\{0,1\}^n$$ induces a subgraph that contains a vertex with degree at least $$\sqrt{n}$$. The present article extends this result as follows: every Cayley graph with the vertex set $$\{0,1\}^n$$ and any generating set of size $$d$$ (the vertex set is viewed as a vector space over the binary field) satisfies that any subset of more than half of its vertices induces a subgraph that contains a vertex of degree at least $$\sqrt{d}$$. In particular, when the generating set consists of the $$n$$ unit vectors, the Cayley graph is the $$n$$-dimensional hypercube.
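A minimal sketch of the sensitivity notions defined above (the helper names are mine, not the article's): it brute-forces the sensitivity of the example function for $$k = 2$$ and exhibits $$k^2$$ pairwise disjoint blocks witnessing local block sensitivity at least $$k^2$$ at the all-zeros input.

```python
# Hypothetical helper names; brute-forces the example function from the item above for k = 2.
import itertools

k = 2
n = 2 * k * k                                    # 2k^2 coordinates, k blocks of size 2k

def f(x):
    """1 iff some block of size 2k contains exactly two ones and they are consecutive."""
    for b in range(k):
        block = x[2 * k * b: 2 * k * (b + 1)]
        ones = [i for i, bit in enumerate(block) if bit == 1]
        if len(ones) == 2 and ones[1] == ones[0] + 1:
            return 1
    return 0

def local_sensitivity(x):
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))

print(max(local_sensitivity(x) for x in itertools.product((0, 1), repeat=n)))   # 4 = 2k

# k^2 pairwise disjoint subsets, each a consecutive pair inside one of the k coordinate
# blocks; flipping any one of them changes f at the all-zeros input, so the local block
# sensitivity there is at least k^2.
zero = (0,) * n
pairs = [(2 * k * b + 2 * j, 2 * k * b + 2 * j + 1) for b in range(k) for j in range(k)]
flips = [tuple(1 if i in p else 0 for i in range(n)) for p in pairs]
print(len(pairs), all(f(y) != f(zero) for y in flips))                           # 4 True
```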