Abstract: Reconstructing images from the Event Horizon Telescope (EHT) observations of M87*, the supermassive black hole at the center of the galaxy M87, depends on a prior to impose desired image statistics. However, given the impossibility of directly observing black holes, there is no clear choice for a prior. We present a framework for flexibly designing a range of priors, each bringing different biases to the image reconstruction. These priors can be weak (e.g., impose only basic natural-image statistics) or strong (e.g., impose assumptions of black hole structure). Our framework uses Bayesian inference with score-based priors, which are data-driven priors arising from a deep generative model that can learn complicated image distributions. Using our Bayesian imaging approach with sophisticated data-driven priors, we can assess how visual features and uncertainty of reconstructed images change depending on the prior. In addition to simulated data, we image the real EHT M87* data and discuss how recovered features are influenced by the choice of prior.
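The posterior sampling idea behind score-based priors can be illustrated on a toy problem. The sketch below is a stand-in, not the EHT pipeline: the learned score network is replaced by the analytic score of a standard Gaussian prior, the likelihood is a simple additive-noise model, and the posterior is sampled with unadjusted Langevin dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

SIGMA = 0.5  # assumed measurement-noise level for the toy likelihood

def prior_score(x):
    # Stand-in for a learned score network s_theta(x) ~ grad_x log p(x);
    # here the prior is a standard Gaussian, so the score is simply -x.
    return -x

def likelihood_score(x, y):
    # grad_x log p(y | x) for the toy model y = x + N(0, SIGMA^2).
    return (y - x) / SIGMA**2

def langevin_posterior_sample(y, n_steps=1000, step=1e-2):
    # Unadjusted Langevin dynamics targeting p(x | y) ~ p(y | x) p(x).
    x = rng.standard_normal(y.shape)
    for _ in range(n_steps):
        grad = prior_score(x) + likelihood_score(x, y)
        x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(y.shape)
    return x

y = np.full(4, 2.0)  # toy "observation"
samples = np.stack([langevin_posterior_sample(y) for _ in range(200)])
print(samples.mean())  # close to the analytic posterior mean y/(1 + SIGMA^2) = 1.6
```

For this conjugate Gaussian toy model the exact posterior mean is y/(1 + SIGMA^2) = 1.6, so the sample average has a closed-form check; an actual reconstruction would replace `prior_score` with a trained score network evaluated on image pixels.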
Deep generative models for galaxy image simulations
ABSTRACT: Image simulations are essential tools for preparing and validating the analysis of current and future wide-field optical surveys. However, the galaxy models used as the basis for these simulations are typically limited to simple parametric light profiles, or use a fairly limited amount of available space-based data. In this work, we propose a methodology based on deep generative models to create complex models of galaxy morphologies that may meet the image simulation needs of upcoming surveys. We address the technical challenges associated with learning this morphology model from noisy and point spread function (PSF)-convolved images by building a hybrid Deep Learning/physical Bayesian hierarchical model for observed images, explicitly accounting for the PSF and noise properties. The generative model is further made conditional on physical galaxy parameters, to allow for sampling new light profiles from specific galaxy populations. We demonstrate our ability to train and sample from such a model on galaxy postage stamps from the HST/ACS COSMOS survey, and validate the quality of the model using a range of second- and higher-order morphology statistics. Using this set of statistics, we demonstrate significantly more realistic morphologies using these deep generative models compared to conventional parametric models. To help make these generative models practical tools for the community, we introduce galsim-hub, a community-driven repository of generative models, and a framework for incorporating generative models within the galsim image simulation software.
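The observation model that such a hierarchical approach inverts can be sketched as follows. Everything here is an illustrative assumption (a toy exponential profile, a Gaussian PSF, white pixel noise), not the paper's COSMOS setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_profile(n_pix=64, scale=5.0):
    # Circular exponential profile standing in for a generated galaxy image.
    yy, xx = np.mgrid[:n_pix, :n_pix] - n_pix / 2
    return np.exp(-np.hypot(xx, yy) / scale)

def gaussian_psf(n_pix=64, fwhm=3.0):
    sigma = fwhm / 2.355
    yy, xx = np.mgrid[:n_pix, :n_pix] - n_pix / 2
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()  # unit flux, so convolution preserves total flux

def observe(x, psf, noise_sigma=0.01):
    # y = PSF * x + n : FFT-based convolution (PSF shifted so its peak sits at
    # the origin), followed by additive Gaussian pixel noise.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
    return blurred + noise_sigma * rng.standard_normal(x.shape)

x = toy_profile()
y = observe(x, gaussian_psf())
print(x.sum(), y.sum())  # total flux is approximately preserved
```

The point of writing the forward model explicitly is that the generative network only ever has to model the latent, PSF-free light profile `x`; the PSF and noise are applied deterministically on top.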
- Award ID(s): 2020295
- PAR ID: 10293510
- Date Published:
- Journal Name: Monthly Notices of the Royal Astronomical Society
- Volume: 504
- Issue: 4
- ISSN: 0035-8711
- Page Range / eLocation ID: 5543 to 5555
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- ABSTRACT Machine learning models can greatly improve the search for strong gravitational lenses in imaging surveys by reducing the amount of human inspection required. In this work, we test the performance of supervised, semi-supervised, and unsupervised learning algorithms trained with the ResNetV2 neural network architecture on their ability to efficiently find strong gravitational lenses in the Deep Lens Survey (DLS). We use galaxy images from the survey, combined with simulated lensed sources, as labeled data in our training data sets. We find that models using semi-supervised learning along with data augmentations (transformations applied to an image during training, e.g. rotation) and Generative Adversarial Network (GAN) generated images yield the best performance. They offer 5–10 times better precision across all recall values compared to supervised algorithms. Applying the best performing models to the full 20 deg² DLS survey, we find 3 Grade-A lens candidates within the top 17 image predictions from the model. This increases to 9 Grade-A and 13 Grade-B candidates when 1 per cent (∼2500 images) of the model predictions are visually inspected. This is ≳10× the sky density of lens candidates compared to current shallower wide-area surveys (such as the Dark Energy Survey), indicating a trove of lenses awaiting discovery in upcoming deeper all-sky surveys. These results suggest that pipelines tasked with finding strong lens systems can be highly efficient, minimizing human effort. We additionally report spectroscopic confirmation of the lensing nature of two Grade-A candidates identified by our model, further validating our methods.
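The label-preserving augmentations mentioned above (e.g. rotations and flips) can be sketched as below; the classifier itself is omitted, and the array sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(image):
    # Random 90-degree rotation plus an optional vertical flip: the image
    # orientation changes, but the lens/non-lens label is unaffected.
    image = np.rot90(image, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        image = np.flipud(image)
    return np.ascontiguousarray(image)

batch = rng.standard_normal((8, 32, 32))
augmented = np.stack([augment(img) for img in batch])
# Each augmented image is a pixel permutation of its original:
print(np.allclose(np.sort(batch[0].ravel()), np.sort(augmented[0].ravel())))
```

Because these transformations only permute pixels, they enlarge the effective training set without changing the underlying label distribution.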
- Deep generative models have enabled the automated synthesis of high-quality data for diverse applications. However, the most effective generative models are specialized to data from a single domain (e.g., images or text). Real-world applications such as healthcare require multi-modal data from multiple domains (e.g., both images and corresponding text), which are difficult to acquire due to limited availability and privacy concerns and are much harder to synthesize. To tackle this joint synthesis challenge, we propose an End-to-end MultImodal X-ray genERative model (EMIXER) for jointly synthesizing x-ray images and corresponding free-text reports, all conditional on diagnosis labels. EMIXER is a conditional generative adversarial model that works by 1) generating an image based on a label, 2) encoding the image to a hidden embedding, 3) producing the corresponding text via a hierarchical decoder from the image embedding, and 4) applying a joint discriminator to assess both the image and the corresponding text. EMIXER also enables self-supervision to leverage a vast amount of unlabeled data. Extensive experiments with real X-ray report data illustrate how data augmentation using synthesized multimodal samples can improve the performance of a variety of supervised tasks, including COVID-19 X-ray classification with very limited samples. The quality of the generated images and reports is also confirmed by radiologists. We quantitatively show that synthetic datasets generated by EMIXER can augment X-ray image classification and report generation models, achieving 5.94% and 6.9% improvements over models trained only on real data samples. Taken together, our results highlight the promise of state-of-the-art generative models to advance clinical machine learning.
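The four-stage structure described above can be caricatured with random linear maps in place of trained networks. Every name, dimension, and operation below is an illustrative assumption, not the paper's architecture; only the data flow (label, then image, then embedding, then text, then joint score) matches the description:

```python
import numpy as np

rng = np.random.default_rng(3)

D_LABEL, D_IMG, D_EMB, N_TOKENS, VOCAB = 4, 16, 8, 12, 50
W_gen = rng.standard_normal((D_LABEL, D_IMG))  # 1) label -> "image"
W_enc = rng.standard_normal((D_IMG, D_EMB))    # 2) image -> hidden embedding
W_dec = rng.standard_normal((D_EMB, VOCAB))    # 3) embedding -> token logits

def synthesize(label_onehot):
    image = np.tanh(label_onehot @ W_gen)         # conditional image generator
    embedding = np.tanh(image @ W_enc)            # image encoder
    logits = embedding @ W_dec                    # report "decoder"
    tokens = np.argsort(logits)[::-1][:N_TOKENS]  # top tokens as a toy report
    return image, tokens

def joint_discriminator(image, tokens):
    # 4) a single score over the (image, text) pair, so real/fake pressure is
    # applied to the pair jointly rather than per modality.
    return float(image @ W_enc @ W_dec[:, tokens].mean(axis=1))

image, report = synthesize(np.eye(D_LABEL)[0])
print(image.shape, len(report), joint_discriminator(image, report))
```

The design point the sketch makes concrete is that the text is decoded from the image embedding, so the generated report is conditioned on the generated image rather than produced independently from the label.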
- Limited-Angle Computed Tomography (LACT) is a nondestructive 3D imaging technique used in a variety of applications ranging from security to medicine. The limited angle coverage in LACT is often a dominant source of severe artifacts in the reconstructed images, making it a challenging imaging inverse problem. Diffusion models are a recent class of deep generative models for synthesizing realistic images using image denoisers. In this work, we present DOLCE, the first framework for integrating conditionally trained diffusion models and explicit physical measurement models for solving imaging inverse problems. DOLCE achieves state-of-the-art performance in highly ill-posed LACT by alternating between the data-fidelity and sampling updates of a diffusion model conditioned on the transformed sinogram. We show through extensive experimentation that, unlike existing methods, DOLCE can synthesize high-quality and structurally coherent 3D volumes using only 2D conditionally pre-trained diffusion models. We further show on several challenging real LACT datasets that the same pretrained DOLCE model achieves state-of-the-art performance on drastically different types of images.
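The alternation between data-fidelity and prior updates can be sketched on a toy linear inverse problem. Here the trained conditional diffusion model is replaced by simple soft thresholding, so this illustrates only the control flow of such plug-in schemes, not DOLCE itself:

```python
import numpy as np

rng = np.random.default_rng(4)

n, m = 32, 20  # more unknowns than measurements, loosely like limited angles
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy measurement operator
x_true = np.zeros(n)
x_true[[3, 10, 25]] = [2.0, -1.5, 1.0]         # sparse stand-in "image"
y = A @ x_true + 0.01 * rng.standard_normal(m)

def denoise(x, lam=0.02):
    # Stand-in for the diffusion model's denoising/sampling update.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.zeros(n)
for _ in range(300):
    x = x - 0.2 * A.T @ (A @ x - y)  # data-fidelity gradient step on ||Ax - y||^2
    x = denoise(x)                   # prior step
print(np.linalg.norm(x - x_true))
```

Even with the crude prior, the alternation drives the residual error well below the norm of `x_true`; the strength of a diffusion prior is that the `denoise` step encodes far richer image statistics than a sparsity threshold.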
- Abstract: By fitting observed data with predicted seismograms, least-squares migration (LSM) computes a generalized inverse for a subsurface reflectivity model, which can improve image resolution and reduce artifacts caused by incomplete acquisition. However, the large computational cost of the simulations and migrations required by LSM limits its wide application to large-scale imaging problems. Using point-spread function (PSF) deconvolution, we present an efficient and stable high-resolution imaging method. The PSFs are first computed on a coarse grid using local ray-based Gaussian beam Born modeling and migration. Then, we interpolate the PSFs onto a fine image grid and apply a high-dimensional Gaussian function to attenuate artifacts far from the PSF centers. With a 2D/3D partition of unity, we decompose the traditional adjoint migration results into local images with the same window size as the PSFs. These local images are then deconvolved by the PSFs in the wavenumber domain to reduce the effects of the band-limited source function and compensate for irregular subsurface illumination. The final assembled image is obtained by applying the inverse of the partitions to the deconvolved local images. Numerical examples on both synthetic and field data demonstrate that the proposed PSF deconvolution can significantly improve image resolution and amplitudes for deep structures, while not being as sensitive to velocity errors as data-domain LSM.
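The core wavenumber-domain step can be sketched as a regularized (Wiener-style) division of spectra. The grid size, Gaussian PSF, and damping constant below are illustrative choices, and the partition-of-unity bookkeeping across local windows is omitted:

```python
import numpy as np

def gaussian_psf(n, sigma=2.0):
    yy, xx = np.mgrid[:n, :n] - n // 2
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def psf_deconvolve(image, psf, eps=1e-3):
    # Damped inverse filter X = I * conj(P) / (|P|^2 + eps); eps keeps the
    # division stable where the PSF spectrum is small.
    P = np.fft.fft2(np.fft.ifftshift(psf))
    I = np.fft.fft2(image)
    return np.real(np.fft.ifft2(I * np.conj(P) / (np.abs(P) ** 2 + eps)))

n = 64
true = np.zeros((n, n))
true[20:24, 30:34] = 1.0  # a compact reflector in one local image window
psf = gaussian_psf(n)
blurred = np.real(np.fft.ifft2(np.fft.fft2(true) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = psf_deconvolve(blurred, psf)
print(np.linalg.norm(blurred - true), np.linalg.norm(restored - true))
```

The deconvolved residual is smaller than the blurred one, reflecting the resolution gain; in the method above this division is applied per local window, with PSFs interpolated from the coarse grid rather than computed everywhere.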