-
Normalizing flows provide an elegant approach to generative modeling that allows for efficient sampling and exact density evaluation of unknown data distributions. However, current techniques have significant limitations in their expressivity when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology. We introduce a novel statistical framework for learning a mixture of local normalizing flows as “chart maps” over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows: exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector quantized autoencoder (VQ-AE) and the distributions over them using a conditional flow. We validate experimentally that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
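As a minimal illustration (not the paper's model), the sketch below shows how a mixture of per-chart flows retains exact density evaluation: each chart contributes a change-of-variables density, and the mixture log-density is a log-sum-exp over charts. The one-dimensional affine charts and standard-normal base are simplifying assumptions standing in for the learned VQ-AE charts and conditional flow.

```python
import numpy as np

# Hedged sketch: exact mixture density over K "charts". Each chart k is a
# toy 1-D affine flow x = s_k * z + m_k with a standard-normal base,
# standing in for the learned conditional flows; the chart weights stand
# in for the VQ-AE chart assignment probabilities.

def log_standard_normal(z):
    return -0.5 * (z ** 2 + np.log(2.0 * np.pi))

def mixture_log_density(x, scales, shifts, weights):
    # Per-chart change of variables: log p_k(x) = log N(z_k) - log|s_k|,
    # with z_k = (x - m_k) / s_k; the mixture is a log-sum-exp over charts.
    terms = [np.log(w) + log_standard_normal((x - m) / s) - np.log(abs(s))
             for s, m, w in zip(scales, shifts, weights)]
    return np.logaddexp.reduce(terms)

# Example: two charts covering different regions of the real line.
print(mixture_log_density(0.3, scales=[0.5, 2.0], shifts=[0.0, 3.0],
                          weights=[0.6, 0.4]))
```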
-
Time-evolution of partial differential equations is fundamental for modeling several complex dynamical processes and event forecasting, but the operators associated with such problems are non-linear. We propose a Padé approximation based exponential neural operator scheme for efficiently learning the map between a given initial condition and the activities at a later time. Multiwavelet bases are used for space discretization. By explicitly embedding the exponential operators in the model, we reduce the training parameters and make the scheme more data-efficient, which is essential in dealing with scarce and noisy real-world datasets. The Padé exponential operator uses a recurrent structure with shared parameters to model the non-linearity, in contrast to recent neural operators that rely on multiple linear operator layers in succession. We show theoretically that the gradients associated with the recurrent Padé network are bounded across the recurrent horizon. We perform experiments on non-linear systems such as the Korteweg–de Vries (KdV) and Kuramoto–Sivashinsky (KS) equations to show that the proposed approach achieves the best performance while remaining data-efficient. We also show that urgent real-world problems like epidemic forecasting (for example, COVID-19) can be formulated as a 2D time-varying operator problem. The proposed Padé exponential operators yield better prediction results than state-of-the-art forecasting models: 53% better MAE than the best neural operator, and 52% better MAE than the best non-neural-operator deep learning model.
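For intuition (a hedged sketch, not the paper's implementation), the (1,1) Padé approximant of the matrix exponential, exp(A) ≈ (I − A/2)⁻¹(I + A/2), can be applied recurrently with a shared operator A, loosely mirroring the shared-parameter recurrent structure described above; the learned multiwavelet operator itself is not reproduced here.

```python
import numpy as np

# Hedged sketch: the (1,1) Pade approximant of the matrix exponential,
#   exp(A) ~ (I - A/2)^{-1} (I + A/2),
# applied recurrently with a *shared* operator A across the horizon.
# A is a generic linear operator standing in for the learned one.

def pade_exp_step(A, u):
    """Advance the state u by one step of the (1,1) Pade exponential."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - 0.5 * A, (I + 0.5 * A) @ u)

def rollout(A, u0, steps):
    """Recurrent structure: the same A (shared parameters) at every step."""
    u = u0
    for _ in range(steps):
        u = pade_exp_step(A, u)
    return u
```

For A = ΔtL with a linear operator L, a single such step coincides with the classical Crank–Nicolson update.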
-
We address the problem of graph regression using graph convolutional neural networks and permutation invariant representation. Many graph neural network algorithms can be abstracted as a series of message passing functions between the nodes, ultimately producing a set of latent features for each node. Processing these latent features to produce a single estimate over the entire graph depends on how the nodes are ordered in the graph's representation. We propose a permutation invariant mapping that produces graph representations that are invariant to any ordering of the nodes. This mapping can serve as a pivotal piece in leveraging graph convolutional networks for graph classification and graph regression problems. We test this method and validate our solution on the QM9 dataset.
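A minimal sketch of the core idea (the specific mapping proposed in the paper may differ): any order-independent reduction of the node latents, such as a sum, yields a graph representation invariant to node permutations.

```python
import numpy as np

# Hedged sketch of a permutation-invariant readout: node latents are
# aggregated with an order-independent reduction (sum here), so any
# permutation of the node ordering yields the same graph representation.
# The message-passing layers producing the latents are assumed given.

def readout(node_features):
    # node_features: (num_nodes, d) latent matrix from a GNN
    return node_features.sum(axis=0)  # invariant to row permutations

H = np.random.rand(5, 8)
perm = np.random.permutation(5)
assert np.allclose(readout(H), readout(H[perm]))  # same graph embedding
```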
-
Image reconstructions involving neural networks (NNs) are generally non-iterative and computationally efficient. However, without an analytical expression describing the reconstruction process, the computation of noise propagation becomes difficult. Automatic differentiation allows rapid computation of derivatives without an analytical expression. In this work, the feasibility of computing noise propagation with automatic differentiation was investigated. The noise propagation of image reconstruction by the end-to-end variational neural network was estimated using automatic differentiation and compared with Monte-Carlo simulation. The root-mean-square error (RMSE) map showed close agreement between automatic differentiation and Monte-Carlo simulation over a wide range of SNRs.
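The following hedged sketch illustrates the general first-order technique, with a toy stand-in for the actual reconstruction network: automatic differentiation supplies the Jacobian J of the reconstruction, and input noise with covariance S propagates as J S Jᵀ.

```python
import jax
import jax.numpy as jnp

# Hedged sketch (the end-to-end variational network is replaced by a toy
# reconstruction f): automatic differentiation gives the Jacobian J of the
# reconstruction with no analytical expression, and input noise with
# covariance S propagates to first order as J S J^T.

def f(y):
    # Toy stand-in for a neural-network reconstruction of measurements y.
    return jnp.tanh(2.0 * y)

y0 = jnp.array([0.1, -0.3, 0.5])           # noiseless measurements
J = jax.jacobian(f)(y0)                    # Jacobian via autodiff
S_in = 0.01 * jnp.eye(3)                   # measurement noise covariance
S_out = J @ S_in @ J.T                     # propagated noise covariance
noise_std_map = jnp.sqrt(jnp.diag(S_out))  # per-pixel noise estimate
```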
-
We introduce an approximation and resulting method called MGRAPPA to allow high-speed MRI scans robust to subject motion using prospective motion correction and GRAPPA. In experiments on both simulated and in-vivo data, we observe high accuracy and robustness to subject movement in terms of L2 (Frobenius) norm error, including a 41% improvement in the in-vivo experiment.
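For reference, a minimal sketch of the reported metric (the exact evaluation protocol is assumed): the relative L2 (Frobenius) norm error between a reconstruction and a reference image.

```python
import numpy as np

# Hedged sketch of the metric referenced above: relative L2 (Frobenius)
# norm error of a reconstruction against a reference image.

def rel_frobenius_error(recon, reference):
    return np.linalg.norm(recon - reference) / np.linalg.norm(reference)
```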
-
The duality principle for group representations developed in Dutkay et al. (J Funct Anal 257:1133–1143, 2009) and Han and Larson (Bull Lond Math Soc 40:685–695, 2008) exhibits the fact that the well-known duality principle in Gabor analysis is not an isolated incident but a more general phenomenon residing in the context of group representation theory. There are two other well-known fundamental properties in Gabor analysis: the biorthogonality and the fundamental identity of Gabor analysis. The main purpose of this paper is to show that these two fundamental properties remain true for general projective unitary group representations. Moreover, we also present a general duality theorem which shows that multi-frame generators meet super-frame generators through a dual commutant pair of group representations. Applying it to the Gabor representations, we obtain that $$\{\pi _{\Lambda }(m, n)g_{1} \oplus \cdots \oplus \pi _{\Lambda }(m, n)g_{k}\}_{m, n \in {\mathbb {Z}}^{d}}$$ is a frame for $$L^{2}({\mathbb {R}}^{d})\oplus \cdots \oplus L^{2}({\mathbb {R}}^{d})$$ if and only if $$\cup _{i=1}^{k}\{\pi _{\Lambda ^{o}}(m, n)g_{i}\}_{m, n\in {\mathbb {Z}}^{d}}$$ is a Riesz sequence, and $$\cup _{i=1}^{k} \{\pi _{\Lambda }(m, n)g_{i}\}_{m, n\in {\mathbb {Z}}^{d}}$$ is a frame for $$L^{2}({\mathbb {R}}^{d})$$ if and only if $$\{\pi _{\Lambda ^{o}}(m, n)g_{1} \oplus \cdots \oplus \pi _{\Lambda ^{o}}(m, n)g_{k}\}_{m, n \in {\mathbb {Z}}^{d}}$$ is a Riesz sequence, where $$\pi _{\Lambda }$$ and $$\pi _{\Lambda ^{o}}$$ are a pair of Gabor representations restricted to a time–frequency lattice $$\Lambda $$ and its adjoint lattice $$\Lambda ^{o}$$ in $${\mathbb {R}}^{d}\times {\mathbb {R}}^{d}$$.
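For background (standard Gabor analysis, not taken from the paper itself): a Gabor representation acts by time–frequency shifts, $$\pi (m, n)g(x) = e^{2\pi i \langle m, x\rangle }\,g(x - n)$$, and the adjoint lattice $$\Lambda ^{o}$$ consists of all time–frequency shifts that commute with every shift from $$\Lambda $$, i.e. $$\Lambda ^{o} = \{(m', n') : \pi (m', n')\pi (m, n) = \pi (m, n)\pi (m', n') \ \forall (m, n) \in \Lambda \}$$. The theorem above trades frame properties over $$\Lambda $$ for Riesz-sequence properties over $$\Lambda ^{o}$$.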
-
In this paper, we consider the decomposition of positive semidefinite matrices as a sum of rank one matrices. We introduce and investigate the properties of various measures of optimality of such decompositions. For some classes of positive semidefinite matrices, we give these optimal decompositions explicitly. These classes include diagonally dominant matrices, certain of their generalizations, 2 × 2 matrices, and a class of 3 × 3 matrices.
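As one concrete, generic instance of such a decomposition (not the optimal one studied in the paper), the spectral theorem writes any positive semidefinite $$A$$ as a sum of rank one matrices, $$A = \sum _{i} \lambda _{i} v_{i} v_{i}^{T}$$; a quick numerical check:

```python
import numpy as np

# One generic rank-one decomposition of a PSD matrix: the spectral one.
# The paper studies optimality measures over all such decompositions;
# this choice is merely admissible, not optimal in general.

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                       # a PSD example
lam, V = np.linalg.eigh(A)
terms = [lam[i] * np.outer(V[:, i], V[:, i]) for i in range(len(lam))]
assert np.allclose(sum(terms), A)                # A = sum of rank-one terms
```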