Award ID(s): 1659815
Publication Date:
NSF-PAR ID: 10219977
Journal Name: Journal of Combinatorial Mathematics and Combinatorial Computing
Volume: 114
Page Range or eLocation-ID: 3146
Sponsoring Org: National Science Foundation

More Like this

Abstract The duality principle for group representations developed in Dutkay et al. (J Funct Anal 257:1133–1143, 2009) and Han and Larson (Bull Lond Math Soc 40:685–695, 2008) exhibits the fact that the well-known duality principle in Gabor analysis is not an isolated incident but a more general phenomenon residing in the context of group representation theory. There are two other well-known fundamental properties in Gabor analysis: the biorthogonality and the fundamental identity of Gabor analysis. The main purpose of this paper is to show that these two fundamental properties remain true for general projective unitary group representations. Moreover, we also present a general duality theorem which shows that multiframe generators meet superframe generators through a dual commutant pair of group representations. Applying it to the Gabor representations, we obtain that $\{\pi_{\Lambda}(m, n)g_{1} \oplus \cdots \oplus \pi_{\Lambda}(m, n)g_{k}\}_{m, n \in \mathbb{Z}^{d}}$ is a frame for $L^{2}(\mathbb{R}^{d}) \oplus \cdots \oplus L^{2}(\mathbb{R}^{d})$ …
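For context, the classical duality principle in Gabor analysis that this work generalizes can be stated as follows (a standard formulation with generic lattice parameters $a, b > 0$):

```latex
% Classical duality principle (Ron--Shen, Janssen, Daubechies--Landau--Landau):
% for g in L^2(R) and lattice parameters a, b > 0, the Gabor system
%   G(g,a,b) = { e^{2\pi i m b x} g(x - n a) }_{m,n \in \mathbb{Z}}
% is a frame if and only if the adjoint system over the dual lattice
% is a Riesz sequence:
\[
  \mathcal{G}(g,a,b)\ \text{is a frame for } L^{2}(\mathbb{R})
  \iff
  \mathcal{G}\!\left(g,\tfrac{1}{b},\tfrac{1}{a}\right)\ \text{is a Riesz sequence.}
\]
```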

This paper introduces a novel generative encoder (GE) framework for generative imaging and image processing tasks such as image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in an optimization framework, instead of stacking GANs and AEs into a single network or combining their loss functions as in the existing literature. GE provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework consists of a pre-training phase and a solving phase. In the former, a GAN with generator $G$ capturing the data distribution of a given image set, and an AE network with encoder $E$ that compresses images following the distribution estimated by $G$, are trained separately, resulting in two latent representations of the data, denoted as the generative and encoding latent spaces respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise (or, equivalently, given such an image $x$), … the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$, where $\lambda > 0$ is a hyperparameter. The unification of the two spaces allows improved performance against the corresponding GAN and AE networks while visualizing interesting properties in each latent space.
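The abstract truncates the definition of $z^*$, so the exact objective is not recoverable here. Purely as an illustration of a GE-style solving phase, the sketch below *assumes* $z^*$ minimizes $\|E(G(z)) - E(x)\|^2 + \lambda\|z\|^2$ and uses toy linear stand-ins for the trained networks $G$ and $E$; none of these choices should be read as the paper's actual objective or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the *trained* networks (an assumption made
# purely for illustration; in the paper G and E are deep networks):
A = rng.standard_normal((16, 4)) / 4.0   # "generator"  G(z) = A @ z
B = rng.standard_normal((4, 16)) / 4.0   # "encoder"    E(x) = B @ x
G = lambda z: A @ z
E = lambda x: B @ x

def solving_phase(x, lam=1e-3):
    """Recover x_hat = G(z_star), where z_star minimizes the *assumed*
    objective ||E(G(z)) - E(x)||^2 + lam * ||z||^2.  Because the toy
    G and E are linear, this convex problem has a closed-form solution;
    with real networks one would run gradient-based optimization on z."""
    M = B @ A                             # E o G, a 4x4 matrix here
    t = E(x)
    z_star = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ t)
    return G(z_star), z_star

# A "noisy image" x = P(x*) with additive noise, as in the abstract
z_true = rng.standard_normal(4)
x_star = G(z_true)
x = x_star + 0.01 * rng.standard_normal(16)
x_hat, z_star = solving_phase(x)
```

The closed-form solve is only possible because the stand-ins are linear; the point of the sketch is the structure of the solving phase (optimize in the generative latent space, compare in the encoding latent space), not the specific operators.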
We present two graph quantities $\Psi(G, S)$ and $\Psi_2(G)$ which give constant-factor estimates of the Dirichlet and Neumann eigenvalues $\Lambda(G, S)$ and $\Lambda_2(G)$, respectively. Our techniques make use of a discrete Hardy-type inequality due to Muckenhoupt.
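For a concrete picture of the spectral quantities being estimated, here is a minimal sketch using the standard definitions (which I am assuming match the paper's conventions): $\Lambda_2(G)$ as the second-smallest eigenvalue of the combinatorial Laplacian, and $\Lambda(G, S)$ as the smallest eigenvalue of the Laplacian with the rows and columns of the boundary set $S$ deleted.

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A for an undirected graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

def neumann_lambda2(n, edges):
    """Second-smallest Laplacian eigenvalue (the Neumann spectral gap)."""
    return np.linalg.eigvalsh(laplacian(n, edges))[1]

def dirichlet_lambda(n, edges, S):
    """Smallest eigenvalue of the Laplacian restricted to V \\ S
    (Dirichlet boundary conditions imposed on the vertex set S)."""
    keep = [v for v in range(n) if v not in S]
    L = laplacian(n, edges)[np.ix_(keep, keep)]
    return np.linalg.eigvalsh(L)[0]

# Path on 5 vertices: classical closed forms are 2 - 2cos(pi/5) for the
# Neumann gap and 2 - sqrt(2) for Dirichlet boundary {0, 4}.
path = [(i, i + 1) for i in range(4)]
```

Quantities like $\Psi$ are useful precisely because these dense eigenvalue computations become expensive on large graphs, while a combinatorial estimate does not.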

Abstract An Eulerian walk (or Eulerian trail) is a walk (resp. trail) that visits every edge of a graph $G$ at least (resp. exactly) once. This notion was first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. But what if Euler had to take a bus? In a temporal graph $(G, \lambda)$, with $\lambda : E(G) \rightarrow 2^{[\tau]}$, an edge $e \in E(G)$ is available only at the times specified by $\lambda(e) \subseteq [\tau]$, in the same way the connections of the public transportation network of a city or of sightseeing tours are available only at scheduled times. In this paper, we deal with temporal walks, local trails, and trails, respectively referring to edge traversal with no constraints, constrained to not repeating the same edge in a single timestamp, and constrained to never repeating the same edge throughout the entire traversal. We show that, if the edges are always available, then deciding whether $(G, \lambda)$ has a temporal walk or trail is polynomial, while deciding whether it has a local trail is $\texttt{NP}$-complete even if $\tau = 2$. In contrast, in the general case, solving any of these problems is $\texttt{NP}$-complete, even under very strict hypotheses. We finally give $\texttt{XP}$ algorithms …
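Since the general problem is NP-complete, an exponential-time brute force is the natural baseline for small instances. The sketch below searches for a temporal Eulerian trail (every edge used exactly once, at one of its available times), under the common assumption that times along the walk are non-decreasing; the representation of $\lambda$ as a list of sorted time lists is my own, not the paper's.

```python
def temporal_eulerian_trail(edges, avail):
    """Brute-force search for a temporal Eulerian trail.
    edges: list of undirected edges (u, v); avail[i]: sorted list of
    times at which edge i is available (the map lambda).  Returns a
    list [(edge_index, time), ...] with non-decreasing times covering
    every edge exactly once, or None if no such trail exists."""
    m = len(edges)
    vertices = {v for e in edges for v in e}

    def dfs(v, t, used, path):
        if len(path) == m:
            return list(path)
        for i, (a, b) in enumerate(edges):
            if used[i] or v not in (a, b):
                continue
            w = b if v == a else a
            # Taking the earliest available time >= t is without loss of
            # generality: a smaller time imposes the weakest constraint
            # on the remaining suffix of the trail.
            t2 = next((s for s in avail[i] if s >= t), None)
            if t2 is None:
                continue
            used[i] = True
            path.append((i, t2))
            res = dfs(w, t2, used, path)
            if res is not None:
                return res
            path.pop()
            used[i] = False
        return None

    for v in vertices:
        res = dfs(v, float('-inf'), [False] * m, [])
        if res is not None:
            return res
    return None
```

On a triangle with availability times 2, 1, 3 a trail exists (traverse the time-1 edge first), while a path on four vertices with times 1, 3, 2 admits none in either direction, matching the intuition that scheduling, not connectivity, is the obstruction.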
The densest subgraph problem (DSG) in a graph, in its simplest form, is the following: given an undirected graph $G = (V, E)$, find a subset $S \subseteq V$ of vertices that maximizes the ratio $|E(S)|/|S|$, where $E(S)$ is the set of edges with both endpoints in $S$. DSG and several of its variants are well-studied in theory and practice and have many applications in data mining and network analysis. In this paper we study fast algorithms and structural aspects of DSG via the lens of \emph{supermodularity}. For this we consider the densest supermodular subset problem (DSSP): given a nonnegative supermodular function $f : 2^V \rightarrow \mathbb{R}_+$, maximize $f(S)/|S|$. For DSG we describe a simple flow-based algorithm that outputs a $(1-\epsilon)$-approximation in deterministic $\tilde{O}(m/\epsilon)$ time, where $m$ is the number of edges. Our algorithm is the first to have a near-linear dependence on $m$ and $1/\epsilon$, and improves on previous methods based on an LP relaxation. It generalizes to hypergraphs, and also yields a faster algorithm for directed DSG. Greedy peeling algorithms have been very popular for DSG and several variants due to their efficiency, empirical performance, and worst-case approximation guarantees. We describe a simple peeling algorithm for DSSP and analyze its approximation guarantee …
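The greedy peeling idea the abstract refers to is easy to state in code. Below is a sketch of the classical peeling algorithm for DSG (due to Charikar), not the paper's DSSP variant: repeatedly delete a minimum-degree vertex and keep the intermediate subgraph of highest density, which yields a 1/2-approximation.

```python
import heapq
from collections import defaultdict

def peel_densest_subgraph(edges):
    """Greedy peeling for DSG: repeatedly remove a minimum-degree
    vertex; among all intermediate subgraphs, return the one that
    maximizes |E(S)|/|S|.  A classical 1/2-approximation (Charikar)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    deg = {v: len(adj[v]) for v in alive}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    cur_m = sum(deg.values()) // 2          # number of distinct edges
    best_density = cur_m / len(alive)
    removed, best_step = [], 0
    while len(alive) > 1:
        d, v = heapq.heappop(heap)
        if v not in alive or d != deg[v]:
            continue                        # stale heap entry, skip
        alive.remove(v)
        removed.append(v)
        cur_m -= d
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
        density = cur_m / len(alive)
        if density > best_density:
            best_density, best_step = density, len(removed)
    best_set = set(adj) - set(removed[:best_step])
    return best_set, best_density
```

For example, on a 4-clique with one pendant vertex attached, peeling removes the pendant first and correctly reports the clique (density 6/4 = 1.5) rather than the whole graph (density 7/5 = 1.4). The lazy-deletion heap keeps the whole run near-linear in the number of edges.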