Title: The spectrum problem for two multigraphs with four vertices and seven edges
Let $G$ be one of the two multigraphs obtained from $K_4-e$ by replacing two edges with a double-edge while maintaining a minimum degree of~2. We find necessary and sufficient conditions on $n$ and $\lambda$ for the existence of a $G$-decomposition of $^{\lambda}K_n$.
Journal Name:
Journal of Combinatorial Mathematics and Combinatorial Computing
Sponsoring Org:
National Science Foundation
More Like this
  1. The duality principle for group representations developed in Dutkay et al. (J Funct Anal 257:1133–1143, 2009) and Han and Larson (Bull Lond Math Soc 40:685–695, 2008) shows that the well-known duality principle in Gabor analysis is not an isolated incident but a more general phenomenon residing in the context of group representation theory. There are two other well-known fundamental properties in Gabor analysis: the biorthogonality and the fundamental identity of Gabor analysis. The main purpose of this paper is to show that these two fundamental properties remain true for general projective unitary group representations. Moreover, we present a general duality theorem which shows that multi-frame generators meet super-frame generators through a dual commutant pair of group representations. Applying it to the Gabor representations, we obtain that $\{\pi_{\Lambda}(m,n)g_{1} \oplus \cdots \oplus \pi_{\Lambda}(m,n)g_{k}\}_{m,n \in \mathbb{Z}^{d}}$ is a frame for $L^{2}(\mathbb{R}^{d}) \oplus \cdots \oplus L^{2}(\mathbb{R}^{d})$ if and only if $\cup_{i=1}^{k}\{\pi_{\Lambda^{o}}(m,n)g_{i}\}_{m,n \in \mathbb{Z}^{d}}$ is a Riesz sequence, and $\cup_{i=1}^{k}\{\pi_{\Lambda}(m,n)g_{i}\}_{m,n \in \mathbb{Z}^{d}}$ is a frame for $L^{2}(\mathbb{R}^{d})$ if and only if $\{\pi_{\Lambda^{o}}(m,n)g_{1} \oplus \cdots \oplus \pi_{\Lambda^{o}}(m,n)g_{k}\}_{m,n \in \mathbb{Z}^{d}}$ is a Riesz sequence, where $\pi_{\Lambda}$ and $\pi_{\Lambda^{o}}$ are a pair of Gabor representations restricted to a time–frequency lattice $\Lambda$ and its adjoint lattice $\Lambda^{o}$ in $\mathbb{R}^{d} \times \mathbb{R}^{d}$.
  2. This paper introduces a novel generative encoder (GE) framework for generative imaging and image-processing tasks such as image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in a single optimization framework, instead of stacking GANs and AEs into one network or combining their loss functions as in the existing literature. GE also provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework is made up of a pre-training phase and a solving phase. In the former, a GAN with generator $G$ capturing the data distribution of a given image set, and an AE network with encoder $E$ that compresses images following the distribution estimated by $G$, are trained separately, resulting in two latent representations of the data, denoted the generative and encoding latent spaces respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m = E(x)$, the two latent spaces are unified via solving the optimization problem

     and the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$, where $\lambda > 0$ is a hyperparameter. The unification of the two spaces allows improved performance against the corresponding GAN and AE networks while visualizing interesting properties of each latent space.
  3. We present two graph quantities $\Psi(G, S)$ and $\Psi_2(G)$ which give constant-factor estimates of the Dirichlet and Neumann eigenvalues, $\lambda(G, S)$ and $\lambda_2(G)$, respectively. Our techniques make use of a discrete Hardy-type inequality due to Muckenhoupt.
  4. An Eulerian walk (or Eulerian trail) is a walk (resp. trail) that visits every edge of a graph $G$ at least (resp. exactly) once. This notion was first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. But what if Euler had to take a bus? In a temporal graph $(G,\lambda)$, with $\lambda : E(G) \rightarrow 2^{[\tau]}$, an edge $e \in E(G)$ is available only at the times specified by $\lambda(e) \subseteq [\tau]$, in the same way the connections of the public transportation network of a city or of sightseeing tours are available only at scheduled times. In this paper, we deal with temporal walks, local trails, and trails, respectively referring to edge traversal with no constraints, constrained to not repeating the same edge in a single timestamp, and constrained to never repeating the same edge throughout the entire traversal. We show that, if the edges are always available, then deciding whether $(G,\lambda)$ has a temporal walk or trail is polynomial, while deciding whether it has a local trail is NP-complete even if $\tau = 2$. In contrast, in the general case, solving any of these problems is NP-complete, even under very strict hypotheses. We finally give XP algorithms parametrized by $\tau$ for walks, and by $\tau + tw(G)$ for trails and local trails, where $tw(G)$ refers to the treewidth of $G$.
  5. The densest subgraph problem (DSG) in a graph, in its simplest form, is the following. Given an undirected graph $G=(V,E)$, find a subset $S \subseteq V$ of vertices that maximizes the ratio $|E(S)|/|S|$, where $E(S)$ is the set of edges with both endpoints in $S$. DSG and several of its variants are well studied in theory and practice and have many applications in data mining and network analysis. In this paper we study fast algorithms and structural aspects of DSG via the lens of \emph{supermodularity}. For this we consider the densest supermodular subset problem (DSSP): given a non-negative supermodular function $f: 2^V \rightarrow \mathbb{R}_+$, maximize $f(S)/|S|$. For DSG we describe a simple flow-based algorithm that outputs a $(1-\epsilon)$-approximation in deterministic $\tilde{O}(m/\epsilon)$ time, where $m$ is the number of edges. Our algorithm is the first to have a near-linear dependence on $m$ and $1/\epsilon$, and it improves previous methods based on an LP relaxation. It generalizes to hypergraphs, and also yields a faster algorithm for directed DSG. Greedy peeling algorithms have been very popular for DSG and several variants due to their efficiency, empirical performance, and worst-case approximation guarantees. We describe a simple peeling algorithm for DSSP and analyze its approximation guarantee in a fashion that unifies several existing results. Boob et al. \cite{bgpstww-20} developed an \emph{iterative} peeling algorithm for DSG which appears to work very well in practice, and made a conjecture about its convergence to optimality. We affirmatively answer their conjecture, and in fact prove that a natural generalization of their algorithm converges to a $(1-\epsilon)$-approximation for \emph{any} supermodular function $f$; the key to our proof is an LP formulation derived via the Lovász extension of a supermodular function. For DSG the bound on the number of iterations we prove is $O(\frac{\Delta \ln |V|}{\lambda^*} \cdot \frac{1}{\epsilon^2})$, where $\Delta$ is the maximum degree and $\lambda^*$ is the optimum value. Our work suggests that iterative peeling can be an effective heuristic for several objectives considered in the literature. Finally, we show that the $2$-approximation for densest-at-least-$k$ subgraph \cite{ks-09} extends to the supermodular setting. We also give a unified analysis of the peeling algorithm for this problem, and via this analysis derive an approximation guarantee for a generalization of DSSP to maximizing $f(S)/g(|S|)$ for a concave function $g$.
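    The greedy peeling procedure referred to in item 5 can be sketched concretely: repeatedly delete a minimum-degree vertex and remember the densest intermediate subset. This is the classical single-pass peeling heuristic for DSG (known to return a subset whose density is at least half the optimum), not the iterative variant of Boob et al. nor the supermodular generalization from the paper; all names below are illustrative.

    ```python
    import heapq

    def peel_densest_subgraph(n, edges):
        """Greedy peeling for densest subgraph: return (best_density, best_subset)
        for an undirected graph on vertices 0..n-1 given as a list of (u, v) edges."""
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        deg = [len(a) for a in adj]
        alive = [True] * n
        m, k = len(edges), n
        best_density, best_k = m / n, n
        heap = [(deg[v], v) for v in range(n)]
        heapq.heapify(heap)
        order = []  # vertices in the order they are peeled
        while k > 0:
            d, v = heapq.heappop(heap)
            if not alive[v] or d != deg[v]:
                continue  # stale heap entry; a fresh one was pushed later
            alive[v] = False
            order.append(v)
            m -= deg[v]  # all remaining edges at v disappear with it
            k -= 1
            for w in adj[v]:
                if alive[w]:
                    deg[w] -= 1
                    heapq.heappush(heap, (deg[w], w))
            if k > 0 and m / k > best_density:
                best_density, best_k = m / k, k
        best_subset = set(order[n - best_k:])  # the last best_k vertices peeled
        return best_density, best_subset
    ```

    For example, on a $K_4$ with one pendant vertex attached, peeling first removes the degree-1 vertex and correctly reports the $K_4$ (density $6/4 = 1.5$) as the densest subset.
    
    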