Generating 3D graphs with symmetry-group equivariance holds intriguing potential for broad applications, from machine vision to molecular discovery. Emerging approaches adopt diffusion generative models (DGMs) with proper re-engineering to capture 3D graph distributions. In this paper, we raise an orthogonal and fundamental question: in what (latent) space should we diffuse 3D graphs? ❶ We motivate the study with theoretical analysis showing that the performance bound of 3D graph diffusion can be improved in a latent space versus the original space, provided that the latent space is of (i) low dimensionality yet (ii) high quality (i.e., low reconstruction error) and the DGM has (iii) symmetry preservation as an inductive bias. ❷ Guided by these theoretical guidelines, we propose to perform 3D graph diffusion in a low-dimensional latent space, which is learned through cascaded 2D–3D graph autoencoders for low-error reconstruction and symmetry-group invariance. The overall pipeline is dubbed latent 3D graph diffusion. ❸ Motivated by applications in molecular discovery, we further extend latent 3D graph diffusion to conditional generation given SE(3)-invariant attributes or equivariant 3D objects. ❹ We also demonstrate empirically that out-of-distribution conditional generation can be further improved by regularizing the latent space via graph self-supervised learning. Comprehensive experiments validate that our method generates 3D molecules of higher validity / drug-likeness and comparable or better conformations / energetics, while being an order of magnitude faster to train. Code is released at https://github.com/Shen-Lab/LDM-3DG.
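
To make the pipeline concrete, below is a minimal PyTorch sketch of the core idea: pre-train an autoencoder that maps graphs to low-dimensional, symmetry-invariant latents, then train a standard denoising diffusion model on those latents. The `Encoder` and `Denoiser` modules are hypothetical stand-ins, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of "diffuse in a learned latent space" (hypothetical stand-in
# modules; the actual cascaded 2D-3D graph autoencoder lives in the linked repo).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for the cascaded 2D-3D graph encoder (graph -> invariant latent)."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.SiLU(), nn.Linear(128, latent_dim))

    def forward(self, x):
        return self.net(x)

class Denoiser(nn.Module):
    """Noise-prediction network on latents, conditioned on the diffusion timestep."""
    def __init__(self, latent_dim: int, T: int):
        super().__init__()
        self.t_embed = nn.Embedding(T, latent_dim)
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.SiLU(), nn.Linear(256, latent_dim))

    def forward(self, z_t, t):
        return self.net(z_t + self.t_embed(t))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def latent_diffusion_loss(denoiser, z0):
    """Standard DDPM noise-prediction loss, applied in the latent space."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    a = alphas_bar[t].unsqueeze(-1)
    z_t = a.sqrt() * z0 + (1.0 - a).sqrt() * eps   # forward noising of the latent
    return F.mse_loss(denoiser(z_t, t), eps)

# Usage: encode (featurized) 3D graphs with the pre-trained encoder, then train
# the denoiser on the resulting low-dimensional latents.
encoder, denoiser = Encoder(in_dim=64, latent_dim=16), Denoiser(latent_dim=16, T=T)
x = torch.randn(8, 64)                 # placeholder graph features
with torch.no_grad():
    z0 = encoder(x)                    # low-dimensional, reconstruction-friendly latents
loss = latent_diffusion_loss(denoiser, z0)
```

In the full system, a graph decoder maps sampled latents back to a 2D topology plus 3D conformer; that component, and the conditional variants, are omitted here for brevity.
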
This content will become publicly available on February 28, 2026
Latent Diffusion Shield - Mitigating Malicious Use of Diffusion Models Through Latent Space Adversarial Perturbations
- Award ID(s): 2152908
- PAR ID: 10645163
- Publisher / Repository: IEEE
- Date Published:
- Page Range / eLocation ID: 1350 to 1358
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Oh, Alice; Naumann, Tristan; Globerson, Amir; Saenko, Kate; Hardt, Moritz; Levine, Sergey (Eds.): Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view diffusion and existing language models as complementary. We demonstrate that encoder-decoder language models can be utilized to efficiently learn high-quality language autoencoders. We then demonstrate that continuous diffusion models can be learned in the latent space of the language autoencoder, enabling us to sample continuous latent representations that can be decoded into natural language with the pretrained decoder. We validate the effectiveness of our approach for unconditional, class-conditional, and sequence-to-sequence language generation. We demonstrate across multiple diverse datasets that our latent language diffusion models are significantly more effective than previous diffusion language models. Our code is available at https://github.com/justinlovelace/latent-diffusion-for-language. (A minimal sampling sketch in the same spirit appears after this list.)
- The unsupervised anomaly detection problem holds great importance but remains challenging to address due to the wide variety of data encountered in practice, and distinct models are currently trained for different scenarios. In this work, we introduce a reconstruction-based anomaly detection structure built on the Latent Space Denoising Diffusion Probabilistic Model (LDM). This structure effectively detects anomalies in multi-class situations. When normal data comprise multiple object categories, existing reconstruction models often learn identical patterns across them; both normal and anomalous data can then be reconstructed from these patterns, making anomalies indistinguishable. To address this limitation, we employ the LDM: its noise-adding process disrupts such identical patterns, and this image generation model can produce reconstructions that deviate from the input. We further propose a classification model that compares the input with the reconstruction result, tapping into the generative power of the LDM. Our structure has been tested on the MNIST and CIFAR-10 datasets, where it surpassed the performance of state-of-the-art reconstruction-based anomaly detection models. (An illustrative reconstruction-scoring sketch appears after this list.)
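
For the latent language diffusion entry above, here is a rough sketch of the sampling side of that recipe: run ancestral DDPM sampling in the autoencoder's continuous latent space, then hand the resulting latent to a pretrained decoder. The `denoiser` and `decode_to_text` below are hypothetical placeholders (timestep conditioning is omitted for brevity); the authors' actual code is at the repository linked in that entry.

```python
# Rough sketch of sampling continuous latents and decoding them to text.
# `denoiser` and `decode_to_text` are placeholders, not a real library API.
import torch
import torch.nn as nn

latent_dim, T = 64, 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(latent_dim, 256), nn.SiLU(), nn.Linear(256, latent_dim))

def decode_to_text(z):
    """Placeholder for the pretrained encoder-decoder LM's decoder."""
    return [f"<decoded sequence from latent {i}>" for i in range(z.shape[0])]

@torch.no_grad()
def sample(batch_size: int):
    """Ancestral DDPM sampling in latent space, followed by text decoding."""
    z = torch.randn(batch_size, latent_dim)
    for t in reversed(range(T)):
        eps_hat = denoiser(z)                        # predicted noise (timestep embedding omitted)
        alpha_t = 1.0 - betas[t]
        z = (z - betas[t] / (1.0 - alphas_bar[t]).sqrt() * eps_hat) / alpha_t.sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)   # posterior noise
    return decode_to_text(z)

print(sample(batch_size=2))
```
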
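And for the anomaly detection entry, an illustrative sketch of the reconstruct-and-compare step, with a hypothetical `ldm_reconstruct` callable standing in for the LDM's partial-noising-and-denoising round trip (this is not the paper's code).

```python
# Illustrative reconstruct-and-compare anomaly scoring; `ldm_reconstruct` is a
# hypothetical stand-in for encode -> partially noise -> denoise -> decode.
import torch

def anomaly_score(x, ldm_reconstruct, noise_level: float = 0.3):
    """Score each sample by its reconstruction error: normal data should be
    reconstructed faithfully, while anomalies drift toward the learned distribution."""
    x_rec = ldm_reconstruct(x, noise_level)
    return ((x - x_rec) ** 2).flatten(1).mean(dim=1)

def is_anomalous(x, ldm_reconstruct, threshold: float):
    """Flag samples whose reconstruction error exceeds a validation-set threshold."""
    return anomaly_score(x, ldm_reconstruct) > threshold

# Toy usage with an identity "reconstructor" (all scores ~0, so nothing is flagged).
x = torch.randn(4, 3, 32, 32)
print(is_anomalous(x, lambda batch, level: batch, threshold=0.1))
```
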