A Red-noise Eigenbasis for the Reconstruction of Blobby Images
Abstract We demonstrate the use of an eigenbasis derived from principal component analysis (PCA) applied to an ensemble of random-noise images that have a "red" power spectrum, i.e., a spectrum that decreases smoothly from large to small spatial scales. The pattern of the resulting eigenbasis allows for the reconstruction of images with a broad range of morphologies. In particular, we show that this general eigenbasis can be used to efficiently reconstruct images that resemble possible astronomical sources for interferometric observations, even though the images in the original ensemble used to generate the PCA basis are significantly different from the astronomical images. We further show that the efficiency and fidelity of the image reconstructions depend only weakly on the particular parameters of the red-noise power spectrum used to generate the ensemble of images.
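The construction can be illustrated with a short, self-contained sketch: draw an ensemble of random images with a power-law ("red") spectrum, derive an eigenbasis from them with a PCA (implemented here as an SVD), and reconstruct an unrelated "blobby" test image by projecting it onto the leading eigenimages. This is a minimal illustration of the idea rather than the code used in the paper; the image size, spectral slope, ensemble size, and number of retained components below are arbitrary choices for the demo.

```python
# Minimal sketch of the red-noise PCA eigenbasis idea; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_images, beta = 32, 2000, 2.5   # assumed image size, ensemble size, spectral slope

def red_noise_image(n, beta, rng):
    """Draw a random image whose power spectrum falls as |k|^-beta."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                       # zero out the DC term
    amp = k ** (-beta / 2.0)               # amplitude ~ sqrt(power)
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    field = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return field / field.std()

# Build the ensemble and derive the eigenbasis with an SVD (equivalent to PCA).
ensemble = np.array([red_noise_image(n_pix, beta, rng).ravel() for _ in range(n_images)])
mean_img = ensemble.mean(axis=0)
_, _, components = np.linalg.svd(ensemble - mean_img, full_matrices=False)

# Reconstruct an unrelated "blobby" target (two Gaussian blobs) from the leading components.
y, x = np.mgrid[:n_pix, :n_pix]
target = (np.exp(-((x - 10)**2 + (y - 12)**2) / 20.0)
          + 0.6 * np.exp(-((x - 22)**2 + (y - 20)**2) / 10.0)).ravel()
n_keep = 100                               # number of eigenimages retained
coeffs = components[:n_keep] @ (target - mean_img)
recon = mean_img + components[:n_keep].T @ coeffs
print("fractional residual:", np.linalg.norm(recon - target) / np.linalg.norm(target))
```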
        
    
- Award ID(s): 1903847
- PAR ID: 10363808
- Publisher / Repository: DOI PREFIX: 10.3847
- Date Published:
- Journal Name: The Astrophysical Journal
- Volume: 927
- Issue: 1
- ISSN: 0004-637X
- Format(s): Medium: X
- Size(s): Article No. 111
- Sponsoring Org: National Science Foundation
More Like this
- Astronomical source deblending is the process of separating the contributions of individual stars or galaxies (sources) to an image composed of multiple, possibly overlapping sources. Astronomical sources display a wide range of sizes and brightnesses and may show substantial overlap in images. Astronomical imaging data can further challenge off-the-shelf computer vision algorithms owing to their high dynamic range, low signal-to-noise ratio, and unconventional image formats. These challenges make source deblending an open area of astronomical research, and in this work we introduce a new approach called Partial-Attribution Instance Segmentation that enables source detection and deblending in a manner tractable for deep learning models. We provide a novel neural network implementation as a demonstration of the method.
- Yap, Pew-Thian (Ed.) Diffusion-weighted imaging (DWI) with multiple, high b-values is critical for extracting tissue microstructure measurements; however, high b-value DWI images contain high noise levels that can overwhelm the signal of interest and bias microstructural measurements. Here, we propose a simple denoising method that can be applied to any dataset, provided a low-noise, single-subject dataset is acquired using the same DWI sequence. The denoising method uses a one-dimensional convolutional neural network (1D-CNN) and deep learning to learn from a low-noise dataset, voxel by voxel. The trained model can then be applied to high-noise datasets from other subjects. We validated the 1D-CNN denoising method by first demonstrating, on simulated DWI data, that it produced DWI images more similar to the noise-free ground truth than comparable denoising methods such as MP-PCA. Using the same DWI acquisition reconstructed with two common reconstruction methods, SENSE1 and sum-of-squares, to generate a pair of low-noise and high-noise datasets, we then demonstrated that 1D-CNN denoising of high-noise DWI data collected from human subjects showed promising results in three domains: DWI images, diffusion metrics, and tractography. In particular, the denoised images were more similar to a low-noise reference image of the same subject than repeated low-noise images are to each other (i.e., the computational reproducibility). Finally, we demonstrated the use of the 1D-CNN method in two practical examples to reduce noise from parallel imaging and simultaneous multi-slice acquisition. We conclude that the 1D-CNN denoising method is a simple, effective denoising method for DWI images that overcomes some of the limitations of current state-of-the-art denoising methods, such as the need for a large number of training subjects and the need to account for the rectified noise floor. (A minimal illustrative sketch of such a voxel-wise denoiser appears after this list.)
- The sparse interferometric coverage of the Event Horizon Telescope (EHT) poses a significant challenge for both reconstruction and model fitting of black hole images. PRIMO is a new principal-components-analysis-based algorithm for image reconstruction that uses the results of high-fidelity general relativistic magnetohydrodynamic simulations of low-luminosity accretion flows as a training set. This allows the reconstruction of images that are consistent with the interferometric data and that live in the space of images spanned by the simulations. PRIMO uses Markov Chain Monte Carlo methods to fit a linear combination of principal components derived from an ensemble of simulated images to interferometric data. We show that PRIMO can efficiently and accurately reconstruct synthetic EHT data sets for several simulated images, even when the simulation parameters are significantly different from those of the image ensemble that was used to generate the principal components. The resulting reconstructions achieve resolution that is consistent with the performance of the array and do not introduce significant biases in image features such as the diameter of the ring of emission. (An illustrative sketch of fitting principal-component coefficients to sparse interferometric data appears after this list.)
- Abstract Purpose: A synthetic digital mammogram (SDM) is a 2D image generated from digital breast tomosynthesis (DBT) and used as a substitute for a full-field digital mammogram (FFDM) to reduce the radiation dose in breast cancer screening. Previous deep learning-based methods used FFDM images as the ground truth and trained a single neural network to directly generate SDM images with appearances (e.g., intensity distribution, textures) similar to FFDM images. However, FFDM images have a different texture pattern from DBT. This difference can make the training of the neural network unstable and result in high intensity distortion, which makes it hard to decrease intensity distortion and increase perceptual similarity (e.g., generate similar textures) at the same time. Clinically, radiologists want a 2D synthesized image that looks like an FFDM image and preserves local structures in DBT, such as masses and microcalcifications (MCs), because radiologists have long been trained to read FFDM images and local structures are important for diagnosis. In this study, we propose to use a deep convolutional neural network to learn the transformation that generates SDM from DBT. Method: To decrease intensity distortion and increase perceptual similarity, a multi-scale cascaded network (MSCN) is proposed to generate low-frequency structures (e.g., intensity distribution) and high-frequency structures (e.g., textures) separately. The MSCN consists of two cascaded sub-networks: the first sub-network predicts the low-frequency part of the FFDM image; the second sub-network generates a full SDM image, with textures similar to the FFDM image, based on the prediction of the first sub-network. A mean-squared error (MSE) objective function is used to train the first sub-network, termed the low-frequency network, to generate a low-frequency SDM image. A gradient-guided generative adversarial network objective function is used to train the second sub-network, termed the high-frequency network, to generate a full SDM image with textures similar to the FFDM image. Results: 1646 cases with FFDM and DBT were retrospectively collected from the Hologic Selenia system for the training and validation datasets, and 145 cases with masses or MC clusters were independently collected from the Hologic Selenia system for the testing dataset. For comparison, a baseline network with the same architecture as the high-frequency network directly generates a full SDM image. Compared to the baseline method, the proposed MSCN improves the peak signal-to-noise ratio from 25.3 to 27.9 dB, improves the structural similarity from 0.703 to 0.724, and significantly increases the perceptual similarity. Conclusions: The proposed method stabilizes training and generates SDM images with lower intensity distortion and higher perceptual similarity. (A rough sketch of the two-stage cascade appears after this list.)
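For the diffusion-MRI denoising entry above, the core idea (train a small 1D-CNN on noisy/low-noise signal pairs from one subject, then apply it voxel by voxel to other data) can be sketched roughly as follows. This is a hedged illustration only: it assumes PyTorch, and the layer sizes, kernel widths, noise model, and synthetic stand-in data are arbitrary choices, not the configuration used in the paper.

```python
# Minimal sketch of a voxel-wise 1D-CNN denoiser; shapes and hyperparameters are illustrative.
import torch
from torch import nn

n_volumes, n_voxels = 64, 4096               # assumed: DWI volumes per voxel, training voxels

# Synthetic stand-in data: clean per-voxel signals plus Rician-like magnitude noise.
clean = torch.rand(n_voxels, 1, n_volumes)   # (batch, channel, signal length)
noisy = torch.sqrt((clean + 0.05 * torch.randn_like(clean))**2
                   + (0.05 * torch.randn_like(clean))**2)

model = nn.Sequential(                        # small 1D-CNN over the signal dimension
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                       # learn from the single low-noise subject
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()

# The trained model could then be applied voxel by voxel to other subjects' high-noise data.
with torch.no_grad():
    denoised = model(noisy[:8])               # example application to a few voxels
```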
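For the PRIMO entry above, the central step is fitting the coefficients of a linear combination of principal components to sparsely sampled complex visibilities. The sketch below is only a toy stand-in: PRIMO derives its components from GRMHD simulation images and explores the coefficients with Markov Chain Monte Carlo, whereas here the "components" are random orthonormal images and the fit is a plain linear least-squares solve; every array size and noise level is an assumption for the demo.

```python
# Toy illustration of fitting principal-component coefficients to sparse visibilities.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_comp, n_vis = 32, 20, 150           # assumed image size, #components, #(u,v) samples

# Stand-in "principal components": random orthonormal images (rows).
basis, _ = np.linalg.qr(rng.normal(size=(n_pix * n_pix, n_comp)))
components = basis.T                          # shape (n_comp, n_pix*n_pix)

# Sparse (u, v) sampling and the linear map from image to complex visibilities.
u = rng.uniform(-0.5, 0.5, n_vis)
v = rng.uniform(-0.5, 0.5, n_vis)
yy, xx = np.mgrid[:n_pix, :n_pix]
phase = -2j * np.pi * (u[:, None] * xx.ravel() + v[:, None] * yy.ravel())
F = np.exp(phase)                             # (n_vis, n_pix*n_pix) DFT matrix

# Synthetic "truth" built from the components, observed with noise.
c_true = rng.normal(size=n_comp)
vis_data = F @ (components.T @ c_true) + 0.05 * (rng.normal(size=n_vis)
                                                 + 1j * rng.normal(size=n_vis))

# The design matrix maps coefficients directly to visibilities; solve for them.
A = F @ components.T                          # (n_vis, n_comp), complex
c_fit, *_ = np.linalg.lstsq(A, vis_data, rcond=None)
print("max coefficient error:", np.abs(c_fit.real - c_true).max())
```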
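For the synthetic-mammogram entry above, the two-stage cascade can be indicated schematically: a first network predicts a low-frequency image and is trained with an MSE loss, and a second network takes the input together with that prediction and outputs the full image. The sketch assumes PyTorch, uses 2D stand-in tensors in place of real DBT/FFDM data, and only notes the adversarial texture term in a comment; all shapes and layer choices are illustrative assumptions.

```python
# Schematic two-stage cascade; stand-in data, not the paper's MSCN implementation.
import torch
from torch import nn

def conv_block(c_in, c_out):
    # 3x3 convolution followed by ReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU())

# Stage 1: predicts the low-frequency part of the FFDM-like target.
low_freq_net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                             nn.Conv2d(16, 1, kernel_size=3, padding=1))
# Stage 2: takes the DBT-derived input plus the stage-1 prediction and outputs the full image.
high_freq_net = nn.Sequential(conv_block(2, 32), conv_block(32, 32),
                              nn.Conv2d(32, 1, kernel_size=3, padding=1))

dbt_input = torch.rand(4, 1, 128, 128)        # stand-in for a DBT-derived 2D input
ffdm_target = torch.rand(4, 1, 128, 128)      # stand-in FFDM ground truth

low_freq_pred = low_freq_net(dbt_input)
sdm_pred = high_freq_net(torch.cat([dbt_input, low_freq_pred], dim=1))

# Stage 1 is trained with an MSE loss (in practice against a low-pass-filtered FFDM).
stage1_loss = nn.MSELoss()(low_freq_pred, ffdm_target)
stage1_loss.backward()
# Stage 2 would additionally use an adversarial (GAN-style) objective to encourage
# FFDM-like texture; that training loop is omitted from this sketch.
```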