Abstract
Background: Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy.
Purpose: Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL-based brain tumor segmentation accuracy, toward developing more generalizable models for multi-institutional data.
Methods: We trained a 3D DenseNet model on the BraTS 2020 cohorts to segment the tumor subregions enhancing tumor (ET), peritumoral edematous tissue, and necrotic and non-enhancing tumor (non-ET) on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, and (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.
Results: Multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch), and models trained on WQ images based on the noise measurement peak signal-to-noise ratio (PSNR), yielded significantly improved tumor segmentation accuracy compared to their inverse models.
Conclusions: Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. Selecting MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
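To make the quality-stratification step concrete, below is a minimal Python sketch of the analysis described in the Methods: correlating per-scan IQMs with whole-tumor Dice and then splitting scans into "better" and "worse" quality groups by a relative threshold. The IQM column names, the synthetic placeholder data, and the use of the median as the threshold are illustrative assumptions, not the study's exact setup.

```python
# Sketch: correlate MRQy-style IQMs with per-scan whole-tumor Dice, then split scans
# into BQ/WQ groups by a relative (median) threshold on a selected IQM.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_scans = 100
# Placeholder data standing in for MRQy IQMs and per-scan whole-tumor Dice scores.
iqms = {"CJV": rng.normal(0.6, 0.10, n_scans),     # coefficient of joint variation
        "CV": rng.normal(0.4, 0.05, n_scans),      # coefficient of variation
        "PSNR": rng.normal(30.0, 4.0, n_scans)}    # peak signal-to-noise ratio
wt_dice = rng.uniform(0.6, 0.95, n_scans)

# 1) Correlate each IQM with whole-tumor Dice to identify the most predictive measures.
for name, values in iqms.items():
    r, p = pearsonr(values, wt_dice)
    print(f"{name}: Pearson r = {r:+.3f} (p = {p:.3f})")

# 2) Group scans as "better" / "worse" quality via a relative threshold on one IQM;
#    these groups would then define the training/validation splits.
selected = iqms["CJV"]
threshold = np.median(selected)
better_quality = np.where(selected <= threshold)[0]   # lower CJV -> less inhomogeneity (assumed)
worse_quality = np.where(selected > threshold)[0]
print(len(better_quality), "BQ scans,", len(worse_quality), "WQ scans")
```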
Influence of Rician Noise on Cardiac MR Image Segmentation Using Deep Learning
Precision in segmenting cardiac MR images is critical for accurately diagnosing cardiovascular diseases. Several deep learning models have been shown to be useful for segmenting structures of the heart, such as the atrium, ventricle, and myocardium, in cardiac MR images. Given the diverse image quality of cardiac MRI scans from various clinical settings, it is currently uncertain how different levels of noise affect the precision of deep learning image segmentation, and this uncertainty could potentially lead to bias in subsequent diagnoses. The goal of this study is to examine the effects of noise on cardiac MRI segmentation using deep learning. We employed the Automated Cardiac Diagnosis Challenge MRI dataset and augmented it with varying degrees of Rician noise during model training to test the models' capability in segmenting heart structures. Three models, TransUnet, SwinUnet, and Unet, were compared by calculating SNR-Dice relations to evaluate the models' noise resilience. Results show that the TransUnet model, which combines CNN and Transformer architectures, demonstrated superior noise resilience. Noise augmentation during model training improved the models' noise resilience for segmentation. The findings underscore the critical role of deep learning models in adequately handling diverse noise conditions for the segmentation of heart structures in cardiac images.
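As a point of reference for the noise-augmentation step described above, here is a minimal sketch of Rician noise corruption of a magnitude MR image. Parameterizing the noise level by a target SNR and the example image shape are assumptions for illustration; the study's exact augmentation settings may differ.

```python
# Sketch: Rician noise augmentation for a magnitude MR image.
import numpy as np

def add_rician_noise(image: np.ndarray, snr: float, rng=None) -> np.ndarray:
    """Corrupt a magnitude MR image with Rician noise at (roughly) the given SNR."""
    rng = rng or np.random.default_rng()
    sigma = image.mean() / snr                             # noise std derived from target SNR (assumed)
    real = image + rng.normal(0.0, sigma, image.shape)     # Gaussian noise on the real channel
    imag = rng.normal(0.0, sigma, image.shape)             # Gaussian noise on the imaginary channel
    return np.sqrt(real ** 2 + imag ** 2)                  # magnitude -> Rician-distributed

# Example: augment a (synthetic) short-axis slice at several noise levels during training.
slice_2d = np.random.default_rng(0).uniform(0.0, 1.0, (224, 224)).astype(np.float32)
augmented = [add_rician_noise(slice_2d, snr) for snr in (5, 10, 20)]
```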
- Award ID(s): 2200585
- PAR ID: 10516768
- Publisher / Repository: Springer Nature Switzerland
- Date Published:
- ISBN: 978-3-031-64812-0
- Page Range / eLocation ID: 223 to 232
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The segmentation of the ventricular wall and the blood pool in cardiac magnetic resonance imaging (MRI) has been investigated for decades, given its important role in the delineation of cardiac function and the diagnosis of heart diseases. One of the major challenges is that the inner epicardium boundary is not always visible in the image domain, due to the mixture of blood and muscle structures, especially at the end of contraction, or systole. To address this, we propose a novel approach for cardiac segmentation in short-axis (SAX) MRI: coupled deep neural networks and deformable models. First, a 2D U-Net is adopted for each magnetic resonance (MR) slice, and a 3D U-Net refines the segmentation results along the temporal dimension. Then, we propose a multi-component deformable model to extract accurate contours for both the endo- and epicardium with global and local constraints. Finally, a partial blood classification is explored to estimate the presence of boundary pixels near the trabeculae and solid wall, and to avoid moving the endocardium boundary inward. Quantitative evaluation demonstrates the high accuracy, robustness, and efficiency of our approach for slices acquired at different locations and different cardiac phases.
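The cascaded 2D-then-3D idea in that abstract can be illustrated with a rough PyTorch sketch, with tiny stand-in convolutional networks in place of the 2D/3D U-Nets (the deformable-model and blood-classification stages are not reproduced). Shapes and channel counts are illustrative assumptions.

```python
# Sketch: segment each slice with a 2D network, then refine the stacked predictions in 3D.
import torch
import torch.nn as nn

unet2d = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 1))  # per-slice stand-in
unet3d = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 3, 1))  # temporal refinement stand-in

volume = torch.rand(1, 1, 12, 128, 128)             # (batch, channel, slices/frames, H, W), synthetic

# Stage 1: segment every slice independently with the 2D network.
slices = volume.squeeze(0).permute(1, 0, 2, 3)       # -> (slices, channel, H, W)
per_slice_logits = unet2d(slices)                    # (slices, classes, H, W)

# Stage 2: refine the stacked per-slice predictions along the slice/temporal dimension in 3D.
stacked = per_slice_logits.permute(1, 0, 2, 3).unsqueeze(0)   # (1, classes, slices, H, W)
refined_logits = unet3d(stacked)
segmentation = refined_logits.argmax(dim=1)          # label map per voxel
```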
-
This study explores the application of deep learning to the segmentation of DENSE cardiovascular magnetic resonance (CMR) images, which is an important step in the analysis of cardiac deformation and may help in the diagnosis of heart conditions. A self-adapting method based on the nnU-Net framework is introduced to enhance the accuracy of DENSE-MR image segmentation, with a particular focus on the left ventricle myocardium (LVM) and left ventricle cavity (LVC), by leveraging the phase information in the cine DENSE-MR images. Two models are built and compared: (1) ModelM, which uses only the magnitude of the DENSE-MR images; and (2) ModelMP, which incorporates both magnitude and phase images. DENSE-MR images from 10 human volunteers, processed using the DENSE-Analysis MATLAB toolbox, were included in this study. The two models were trained using a 2D UNet-based architecture with a loss function combining the Dice similarity coefficient (DSC) and cross-entropy. The findings show the effectiveness of leveraging the phase information, with ModelMP resulting in a higher DSC and improved image segmentation, especially in challenging cases, e.g., at early systole and with basal and apical slices.
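Two elements of that abstract lend themselves to a short sketch: a combined Dice + cross-entropy loss, and feeding magnitude and phase images as separate input channels (the ModelMP idea). The placeholder network, label values, and tensor shapes below are assumptions; the study itself uses an nnU-Net-based 2D architecture.

```python
# Sketch: Dice + cross-entropy loss, with magnitude and phase stacked as two input channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-6):
    """logits: (B, C, H, W); target: (B, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    intersection = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = (2 * intersection + eps) / (union + eps)
    return ce + (1 - dice.mean())          # cross-entropy plus soft-Dice loss

# Magnitude and phase slices stacked as two input channels (ModelMP);
# ModelM would use only the magnitude channel.
magnitude = torch.rand(4, 1, 128, 128)
phase = torch.rand(4, 1, 128, 128)
inputs = torch.cat([magnitude, phase], dim=1)        # (B, 2, H, W)

net = nn.Conv2d(2, 3, kernel_size=3, padding=1)      # stand-in for the segmentation network
target = torch.randint(0, 3, (4, 128, 128))          # 0=background, 1=LVM, 2=LVC (assumed labels)
loss = dice_ce_loss(net(inputs), target)
```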
-
Puyol Anton, E.; Pop, M.; Sermesant, M.; Campello, V.; Lalande, A.; Lekadir, K.; Suinesiaputra, A.; Camara, O.; Young, A. (Eds.)
Cardiac cine magnetic resonance imaging (CMRI) is the reference standard for assessing cardiac structure as well as function. However, CMRI data present large variations among different centers, vendors, and patients with various cardiovascular diseases. Since typical deep-learning-based segmentation methods are usually trained using a limited number of ground truth annotations, they may not generalize well to unseen MR images, due to the variations between the training and testing data. In this study, we proposed an approach toward building a generalizable deep-learning-based model for cardiac structure segmentation from multi-vendor, multi-center, and multi-disease CMRI data. We used a novel combination of image augmentation and a consistency loss function to improve model robustness to typical variations in CMRI data. The proposed image augmentation strategy leverages unlabeled data by (a) using CycleGAN to generate images in different styles and (b) exchanging the low-frequency features of images from different vendors. Our model architecture was based on an attention-gated U-Net model that learns to focus on cardiac structures of varying shapes and sizes while suppressing irrelevant regions. The proposed augmentation and consistency training method demonstrated improved performance on CMRI images from new vendors and centers. When evaluated using CMRI data from 4 vendors and 6 clinical centers, our method was generally able to produce accurate segmentations of cardiac structures.
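The "exchange low-frequency features between vendors" augmentation mentioned above can be sketched as a simple Fourier-amplitude swap (in the spirit of Fourier domain adaptation). The window fraction beta, image sizes, and synthetic inputs are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: swap the low-frequency Fourier amplitude of one image with that of another.
import numpy as np

def swap_low_frequencies(source: np.ndarray, reference: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Replace the central (low-frequency) amplitude of `source` with that of `reference`."""
    fft_src = np.fft.fftshift(np.fft.fft2(source))
    fft_ref = np.fft.fftshift(np.fft.fft2(reference))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    h, w = source.shape
    bh, bw = int(h * beta), int(w * beta)            # half-size of the low-frequency window
    cy, cx = h // 2, w // 2
    amp_src[cy - bh:cy + bh, cx - bw:cx + bw] = amp_ref[cy - bh:cy + bh, cx - bw:cx + bw]

    mixed = amp_src * np.exp(1j * phase_src)         # keep source phase, mixed amplitude
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

# Example: give a scan from vendor A the low-frequency "style" of a scan from vendor B.
rng = np.random.default_rng(0)
vendor_a = rng.random((192, 192))
vendor_b = rng.random((192, 192))
augmented = swap_low_frequencies(vendor_a, vendor_b)
```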
-
Segmentation of echocardiograms plays an essential role in the quantitative analysis of the heart and helps diagnose cardiac diseases. In the recent decade, deep learning-based approaches have significantly improved the performance of echocardiogram segmentation. Most deep learning-based methods assume that the image to be processed is rectangular in shape. However, echocardiogram images are typically formed within a sector of a circle, so a significant region of the overall rectangular image contains no data, a result of the ultrasound imaging methodology. This large non-imaging region can influence the training of deep neural networks. In this paper, we propose to use a polar transformation to help train deep learning algorithms. Using the r-θ transformation, a significant portion of the non-imaging background is removed, allowing the neural network to focus on the heart image. The segmentation model is trained on both x-y and r-θ images. During inference, the predictions from the x-y and r-θ images are combined using max-voting. We verify the efficacy of our method on the CAMUS dataset with a variety of segmentation networks, encoder networks, and loss functions. The experimental results demonstrate the effectiveness and versatility of our proposed method for improving the segmentation results.
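The r-θ resampling and the max-voting fusion described in that abstract can be sketched as follows. The sector center, maximum radius, grid sizes, interpretation of "max-voting" as an element-wise maximum over class probabilities, and the assumption that the r-θ predictions have already been mapped back to the x-y grid are all illustrative choices, not the paper's exact pipeline.

```python
# Sketch: polar (r-theta) resampling of an echocardiogram plus max-voting fusion of predictions.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image: np.ndarray, center: tuple, max_radius: float, shape=(256, 256)) -> np.ndarray:
    """Resample a Cartesian (x-y) image onto an (r, theta) grid."""
    n_r, n_theta = shape
    r = np.linspace(0.0, max_radius, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = center[0] + rr * np.sin(tt)                  # sample coordinates in the original image
    xs = center[1] + rr * np.cos(tt)
    return map_coordinates(image, [ys, xs], order=1, mode="constant")

# Example: transform a synthetic frame, then fuse per-class probabilities from the
# x-y and r-theta branches by an element-wise maximum before the argmax.
image = np.random.default_rng(0).random((512, 512))
polar = to_polar(image, center=(0, 256), max_radius=500.0)

probs_xy = np.random.default_rng(1).random((4, 512, 512))            # (classes, H, W)
probs_rtheta_in_xy = np.random.default_rng(2).random((4, 512, 512))  # assumed already mapped back to x-y
fused = np.maximum(probs_xy, probs_rtheta_in_xy)                     # max-voting per class
segmentation = fused.argmax(axis=0)
```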