Deep generative models have experienced great empirical success in distribution learning. Numerous experiments have demonstrated that deep generative networks can efficiently generate high-dimensional complex data from a low-dimensional, easy-to-sample distribution. However, this phenomenon cannot be justified by existing theories. The widely held manifold hypothesis speculates that real-world data sets, such as natural images and signals, exhibit low-dimensional geometric structures. In this paper, we take such low-dimensional data structures into consideration by assuming that data distributions are supported on a low-dimensional manifold. We establish approximation and estimation theories for deep generative networks that estimate distributions on a low-dimensional manifold under the Wasserstein-1 loss. We show that the Wasserstein-1 loss converges to zero at a fast rate depending on the intrinsic dimension instead of the ambient data dimension. Our theory leverages the low-dimensional geometric structures in data sets and justifies the practical power of deep generative models. We require no smoothness assumptions on the data distribution, which is desirable in practice.
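For intuition, here is a minimal sketch (not from the paper) of the setting above: data supported on a one-dimensional manifold embedded in a high-dimensional ambient space, a stand-in generator that pushes low-dimensional noise into that space, and a sliced Wasserstein-1 surrogate, averaging one-dimensional random projections compared with scipy's wasserstein_distance, used in place of the exact Wasserstein-1 loss. All dimensions and maps are illustrative assumptions.

```python
# Hedged sketch: evaluate a stand-in generator against data on a low-dimensional
# manifold with a sliced Wasserstein-1 surrogate (not the paper's exact loss).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
D, d, n = 64, 1, 2000                      # ambient dim, intrinsic dim, sample size

# "Real" data: a 1-D manifold (a circle) embedded in R^D by a fixed linear map.
theta = rng.uniform(0, 2 * np.pi, size=n)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)        # (n, 2)
embed = rng.standard_normal((2, D)) / np.sqrt(2)
x_real = circle @ embed                                          # (n, D)

# Stand-in "generator": pushes 1-D latent noise through a fixed nonlinear map.
z = rng.standard_normal((n, d))
x_fake = np.tanh(z @ rng.standard_normal((d, D)))                # (n, D)

def sliced_w1(a, b, n_proj=200, rng=rng):
    """Average 1-D Wasserstein-1 distance over random projections."""
    dists = []
    for _ in range(n_proj):
        v = rng.standard_normal(a.shape[1])
        v /= np.linalg.norm(v)
        dists.append(wasserstein_distance(a @ v, b @ v))
    return float(np.mean(dists))

print("sliced W1(real, fake):", sliced_w1(x_real, x_fake))
```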
Adaptive Learning of the Latent Space of Wasserstein Generative Adversarial Networks
                        
                    
    
Generative models based on latent variables, such as generative adversarial networks (GANs) and variational auto-encoders (VAEs), have gained a lot of interest due to their impressive performance in many fields. However, many data, such as natural images, usually do not populate the ambient Euclidean space but instead reside on a lower-dimensional manifold. Thus an inappropriate choice of the latent dimension fails to uncover the structure of the data, possibly resulting in a mismatch of latent representations and poor generative quality. To address these problems, we propose a novel framework called the latent Wasserstein GAN (LWGAN) that fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned by a modified informative latent distribution. We prove that there exist an encoder network and a generator network such that the intrinsic dimension of the learned encoding distribution is equal to the dimension of the data manifold. We theoretically establish that our estimated intrinsic dimension is a consistent estimate of the true dimension of the data manifold. Meanwhile, we provide an upper bound on the generalization error of LWGAN, implying that we force the synthetic data distribution to be similar to the real data distribution from a population perspective. Comprehensive empirical experiments verify our framework and show that LWGAN is able to identify the correct intrinsic dimension under several scenarios, and simultaneously generate high-quality synthetic data by sampling from the learned latent distribution. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
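As a hedged toy illustration of the dimension read-out idea, not the LWGAN procedure itself (which learns the encoder and generator adversarially), the snippet below embeds a flat two-dimensional manifold in R^32 and counts how many directions of a PCA-style encoding carry non-negligible variance; the data, the linear encoder, and the 5% threshold are assumptions made only for illustration.

```python
# Hedged toy example: with data on a 2-D (here flat) manifold inside R^32, only
# two latent directions carry non-negligible variance, and thresholding those
# variances recovers d = 2.
import numpy as np

rng = np.random.default_rng(1)
D, d_true, n = 32, 2, 5000

basis = np.linalg.qr(rng.standard_normal((D, d_true)))[0]       # orthonormal (D, 2)
coords = rng.standard_normal((n, d_true))                       # intrinsic coordinates
x = coords @ basis.T + 1e-3 * rng.standard_normal((n, D))       # embed + tiny noise

# Stand-in "encoder": project onto the principal directions of the data.
x_centered = x - x.mean(axis=0)
_, s, vt = np.linalg.svd(x_centered, full_matrices=False)
latent = x_centered @ vt.T                                       # latent codes, dim D

# Count latent directions whose standard deviation is non-negligible.
stds = latent.std(axis=0)
d_hat = int((stds > 0.05 * stds.max()).sum())
print("estimated intrinsic dimension:", d_hat)                   # prints 2
```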
        
    
- Award ID(s): 2316428
- PAR ID: 10618852
- Publisher / Repository: ASA
- Date Published:
- Journal Name: Journal of the American Statistical Association
- Volume: 120
- Issue: 550
- ISSN: 0162-1459
- Page Range / eLocation ID: 1254-1266
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Rapid growth of high-dimensional datasets in fields such as single-cell RNA sequencing and spatial genomics has led to unprecedented opportunities for scientific discovery, but it also presents unique computational and statistical challenges. Traditional methods struggle with geometry-aware data generation, interpolation along meaningful trajectories, and transporting populations via feasible paths. To address these issues, we introduce the Geometry-Aware Generative Autoencoder (GAGA), a novel framework that combines extensible manifold learning with generative modeling. GAGA constructs a neural network embedding space that respects the intrinsic geometries discovered by manifold learning and learns a novel warped Riemannian metric on the data space. This warped metric is derived from both the points on the data manifold and negative samples off the manifold, allowing it to characterize a meaningful geometry across the entire latent space. Using this metric, GAGA can uniformly sample points on the manifold, generate points along geodesics, and interpolate between populations across the learned manifold. GAGA shows competitive performance on simulated and real-world datasets, including a 30% improvement over the state of the art in single-cell population-level trajectory inference.
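A rough sketch of one ingredient mentioned above, interpolating along geodesics of a decoded manifold: the snippet below discretizes a latent path and minimizes its energy in data space under an untrained toy decoder and the plain pullback Euclidean metric. GAGA instead learns the embedding and a warped Riemannian metric, so this is an illustration under stated assumptions, not the method itself.

```python
# Hedged sketch: approximate a geodesic between two latent points by optimizing
# intermediate latent points so the decoded path has minimal energy in data space.
import torch

torch.manual_seed(0)
decoder = torch.nn.Sequential(            # toy, untrained stand-in for a decoder
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 16)
)
decoder.requires_grad_(False)             # only the path is optimized

z_start, z_end = torch.randn(2), torch.randn(2)

# Discretize the path with K intermediate latent points, initialized on the chord.
K = 8
t = torch.linspace(0, 1, K + 2)[1:-1].unsqueeze(1)
path = torch.nn.Parameter(z_start * (1 - t) + z_end * t)
opt = torch.optim.Adam([path], lr=1e-2)

for _ in range(500):
    pts = torch.cat([z_start.unsqueeze(0), path, z_end.unsqueeze(0)])
    decoded = decoder(pts)
    # Path "energy": squared lengths of consecutive segments in data space.
    energy = ((decoded[1:] - decoded[:-1]) ** 2).sum()
    opt.zero_grad()
    energy.backward()
    opt.step()

print("approx. geodesic energy in data space:", float(energy))
```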
- This paper studies the unsupervised cross-domain translation problem by proposing a generative framework in which the probability distribution of each domain is represented by a generative cooperative network consisting of an energy-based model and a latent variable model. The use of a generative cooperative network enables maximum likelihood learning of the domain model by MCMC teaching, where the energy-based model seeks to fit the data distribution of the domain and distills its knowledge to the latent variable model via MCMC. Specifically, in the MCMC teaching process, the latent variable model, parameterized by an encoder-decoder, maps examples from the source domain to the target domain, while the energy-based model further refines the mapped results by Langevin revision so that the revised results match the examples in the target domain in terms of the statistical properties defined by the learned energy function. To build a correspondence between two unpaired domains, the proposed framework simultaneously learns a pair of cooperative networks with cycle consistency, accounting for a two-way translation between the two domains, by alternating MCMC teaching. Experiments show that the proposed framework is useful for unsupervised image-to-image translation and unpaired image sequence translation.
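A minimal sketch of the Langevin revision idea described above, assuming a toy quadratic energy in place of a learned energy-based model: initial samples (standing in for the latent variable model's mapped outputs) are refined by noisy gradient steps on the energy.

```python
# Hedged sketch of Langevin revision: refine proposals by gradient steps on a
# toy energy plus injected Gaussian noise (a real EBM would supply the energy).
import torch

torch.manual_seed(0)
target_mean = torch.tensor([2.0, -1.0])

def energy(x):                       # toy quadratic energy; learned in practice
    return 0.5 * ((x - target_mean) ** 2).sum(dim=-1)

x = torch.randn(512, 2)              # initial "mapped" samples to be revised
step = 0.05
for _ in range(200):                 # Langevin revision steps
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(x).sum(), x)[0]
    x = x - 0.5 * step * grad + (step ** 0.5) * torch.randn_like(x)

print("revised sample mean:", x.mean(dim=0).detach())   # approaches target_mean
```

With a learned energy function the update takes the same form; only the gradient is computed through the network instead of the toy quadratic.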
- We connect a large class of Generative Deep Networks (GDNs) with spline operators in order to derive their properties, limitations, and new opportunities. By characterizing the latent space partition, dimension, and angularity of the generated manifold, we relate the manifold dimension and approximation error to the sample size. The manifold-per-region affine subspace defines a local coordinate basis; we provide necessary and sufficient conditions relating those basis vectors to disentanglement. We also derive the output probability density mapped onto the generated manifold in terms of the latent space density, which enables the computation of key statistics such as its Shannon entropy. This finding also enables the computation of the GDN likelihood, which provides a new mechanism for model comparison as well as a quality measure for (generated) samples under the learned distribution. We demonstrate how low-entropy and/or multimodal distributions are not naturally modeled by GDNs and are a cause of training instabilities.
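The density mapping mentioned above can be made concrete: on one affine piece x = A z + b of such a network, a latent density p_z is pushed onto the generated manifold with volume-change factor sqrt(det(A^T A)), so p_x(x) = p_z(z) / sqrt(det(A^T A)) on that region. The short check below (an illustration, not the paper's code) verifies that this factor equals the product of the singular values of A.

```python
# Hedged numeric check of the affine-piece density mapping described above.
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 10                                   # latent and ambient dimensions
A = rng.standard_normal((D, d))                # slope of one affine region

vol_change = np.sqrt(np.linalg.det(A.T @ A))   # generalized Jacobian factor
sing_vals = np.linalg.svd(A, compute_uv=False)
assert np.isclose(vol_change, np.prod(sing_vals))   # equals product of singular values

z = rng.standard_normal(d)
p_z = np.exp(-0.5 * z @ z) / (2 * np.pi) ** (d / 2)  # standard normal latent density
p_x_on_manifold = p_z / vol_change                   # density on the generated manifold
print(p_x_on_manifold)
```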
- Empirical studies have demonstrated the effectiveness of (score-based) diffusion models in generating high-dimensional data, such as texts and images, which typically exhibit a low-dimensional manifold structure. These empirical successes raise the theoretical question of whether score-based diffusion models can optimally adapt to low-dimensional manifold structures. While recent work has validated the minimax optimality of diffusion models when the target distribution admits a smooth density with respect to the Lebesgue measure of the ambient data space, these findings do not fully account for the ability of diffusion models to avoid the curse of dimensionality when estimating high-dimensional distributions. This work considers two common classes of diffusion models: Langevin diffusion and forward-backward diffusion. We show that both models can adapt to the intrinsic manifold structure, in that the convergence rate of the induced distribution estimator depends only on the intrinsic dimension of the data. Moreover, our estimator does not require knowing or explicitly estimating the manifold. We also demonstrate that forward-backward diffusion can achieve the minimax optimal rate under the Wasserstein metric when the target distribution possesses a smooth density with respect to the volume measure of the low-dimensional manifold.
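As a hedged sketch of forward-backward (reverse-SDE) sampling, the snippet below uses a Gaussian target so that the score of every noised marginal is available in closed form; the learned score network that the abstract's analysis actually concerns is deliberately replaced by this exact score, and all constants are illustrative.

```python
# Hedged sketch: forward OU noising with a Gaussian target gives a closed-form
# score, so the reverse-time SDE can be integrated without a learned network.
import numpy as np

rng = np.random.default_rng(0)
m, s = np.array([2.0, -1.0]), 0.5          # toy target N(m, s^2 I)
T, n_steps, n_samples = 5.0, 500, 4000
h = T / n_steps

def score(x, t):
    """Exact score of the forward (OU-noised) marginal at time t."""
    mean = np.exp(-t) * m
    var = np.exp(-2 * t) * s**2 + 1 - np.exp(-2 * t)
    return -(x - mean) / var

x = rng.standard_normal((n_samples, 2))    # start from the reference N(0, I)
t = T
for _ in range(n_steps):                   # Euler-Maruyama on the reverse-time SDE
    x = x + h * (x + 2 * score(x, t)) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    t -= h

print("sample mean:", x.mean(axis=0))      # approximately m
print("sample std :", x.std(axis=0))       # approximately s
```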