Title: An nnU-Net Model to Enhance Segmentation of Cardiac Cine DENSE-MRI Using Phase Information
This study explores the application of deep learning to the segmentation of DENSE cardiovascular magnetic resonance (CMR) images, an important step in the analysis of cardiac deformation that may help in the diagnosis of heart conditions. A self-adapting method based on the nnU-Net framework is introduced to enhance the accuracy of DENSE-MR image segmentation, with a particular focus on the left ventricle myocardium (LVM) and left ventricle cavity (LVC), by leveraging the phase information in the cine DENSE-MR images. Two models are built and compared: 1) ModelM, which uses only the magnitude of the DENSE-MR images; and 2) ModelMP, which incorporates both magnitude and phase images. DENSE-MR images from 10 human volunteers, processed using the DENSE-Analysis MATLAB toolbox, were included in this study. The two models were trained using a 2D UNet-based architecture with a loss function combining the Dice similarity coefficient (DSC) and cross-entropy. The findings show the effectiveness of leveraging the phase information, with ModelMP yielding a higher DSC and improved image segmentation, especially in challenging cases, e.g., at early systole and in basal and apical slices.
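The compound loss the abstract describes, cross-entropy combined with a Dice term, can be sketched in plain Python. This is a minimal illustration over flattened per-pixel foreground probabilities, not the authors' implementation; the function names are ours.

```python
import math

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice similarity over flattened per-pixel foreground probabilities."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

def cross_entropy(pred, target, eps=1e-6):
    """Mean binary cross-entropy over per-pixel probabilities."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def combined_loss(pred, target):
    """nnU-Net-style compound loss: cross-entropy plus (1 - Dice)."""
    return cross_entropy(pred, target) + (1.0 - dice_coefficient(pred, target))
```

A perfect prediction drives both terms toward zero, while a fully wrong one is penalized by both, which is why this combination trains more stably than either term alone.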
Award ID(s):
2205043
PAR ID:
10537732
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-8373-7
Page Range / eLocation ID:
670 to 673
Subject(s) / Keyword(s):
Image segmentation; nnU-Net; Magnetic resonance imaging; DENSE MRI; Phase and magnitude data
Format(s):
Medium: X
Location:
Orlando, FL, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Computational fluid dynamics (CFD) modeling of left ventricle (LV) flow combined with patient medical imaging data has shown great potential in obtaining patient-specific hemodynamics information for functional assessment of the heart. A typical model construction pipeline usually starts with segmentation of the LV by manual delineation, followed by mesh generation and registration using separate software tools. However, such approaches usually require significant time and human effort in the model generation process, limiting large-scale analysis. In this study, we propose an approach toward fully automating the model generation process for CFD simulation of LV flow to significantly reduce LV CFD model generation time. Our modeling framework leverages a novel combination of techniques, including deep-learning-based segmentation, geometry processing, and image registration, to reliably reconstruct CFD-suitable LV models with little-to-no user intervention. We utilized an ensemble of two-dimensional (2D) convolutional neural networks (CNNs) for automatic segmentation of cardiac structures from three-dimensional (3D) patient images, and our segmentation approach outperformed recent state-of-the-art techniques when evaluated on benchmark data containing both magnetic resonance (MR) and computed tomography (CT) cardiac scans. We demonstrate that, through a combination of segmentation and geometry processing, we were able to robustly create CFD-suitable LV meshes from segmentations for 78 out of 80 test cases. Although the focus of this study is on image-to-mesh generation, we demonstrate the feasibility of this framework in supporting LV hemodynamics modeling by performing CFD simulations on two representative time-resolved patient-specific image datasets.
  2. Precision in segmenting cardiac MR images is critical for accurately diagnosing cardiovascular diseases. Several deep learning models have proven useful in segmenting heart structures, such as the atrium, ventricle, and myocardium, in cardiac MR images. Given the diverse image quality of cardiac MRI scans from various clinical settings, it is currently unclear how different levels of noise affect the precision of deep learning image segmentation, and this uncertainty could potentially bias subsequent diagnoses. The goal of this study is to examine the effects of noise on cardiac MRI segmentation using deep learning. We employed the Automated Cardiac Diagnosis Challenge MRI dataset and augmented it with varying degrees of Rician noise during model training to test the models' capability in segmenting heart structures. Three models, TransUnet, SwinUnet, and Unet, were compared by calculating SNR-Dice relations to evaluate each model's noise resilience. Results show that the TransUnet model, which combines CNN and Transformer architectures, demonstrated superior noise resilience. Noise augmentation during model training improved the models' noise resilience for segmentation. The findings underscore the critical role of deep learning models in adequately handling diverse noise conditions for the segmentation of heart structures in cardiac images.
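Rician noise augmentation of the kind used in this study can be sketched as follows. Rician noise arises in magnitude MR images because the magnitude is taken of a complex signal whose real and imaginary channels each carry Gaussian noise. This is a minimal stdlib sketch; the helper name and seed handling are our assumptions, not the study's code.

```python
import math
import random

def add_rician_noise(pixels, sigma, seed=0):
    """Corrupt each pixel with Rician noise: magnitude of a complex signal
    whose real and imaginary parts get independent Gaussian noise of std sigma."""
    rng = random.Random(seed)
    return [math.sqrt((v + rng.gauss(0.0, sigma)) ** 2 + rng.gauss(0.0, sigma) ** 2)
            for v in pixels]

# Augment a uniform mid-gray "image" with moderate noise.
clean = [0.5] * 1000
noisy = add_rician_noise(clean, sigma=0.1)
```

Note that the outputs stay non-negative and the mean intensity shifts slightly upward, the characteristic Rician bias that distinguishes this model from additive Gaussian noise.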
  3. Bernard, Olivier; Clarysse, Patrick; Duchateau, Nicolas; Ohayon, Jacques; Viallon, Magalie. (Eds.)
    In this work we present a machine learning model to segment long-axis magnetic resonance images of the left ventricle (LV) and address the challenges encountered when a small training dataset is used. Our approach is based on a heart locator and an anatomically guided UNet model in which the UNet is followed by a B-spline head to condition training. The model is developed using transfer learning, which enabled the training and testing of the proposed strategy on a small swine dataset. The segmented LV cavity and myocardium in the long-axis view show good agreement with ground-truth segmentations at different cardiac phases based on the Dice similarity coefficient. In addition, the model provides a measure of segmentation uncertainty, which can be incorporated when developing LV computational models and indices of cardiac performance based on the segmented images. Finally, several challenges specific to long-axis, as opposed to short-axis, image segmentation are highlighted, along with proposed solutions.
  4. Abstract Image segmentation of the liver is an important step in treatment planning for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to segment the liver automatically. This manuscript develops a generalizable deep learning model to segment the liver on T1-weighted MR images. In particular, three distinct deep learning architectures (nnUNet, PocketNet, Swin UNETR) were considered using data gathered from six geographically different institutions. A total of 819 T1-weighted MR images were gathered from both public and internal sources. Our experiments compared each architecture's testing performance when trained both intra-institutionally and inter-institutionally. Models trained using nnUNet and its PocketNet variant achieved mean Dice-Sorensen similarity coefficients > 0.9 on both intra- and inter-institutional test sets. The performance of these models suggests that nnUNet and PocketNet liver segmentation models trained on a large and diverse collection of T1-weighted MR images would, on average, achieve good intra-institutional segmentation performance.
  5. Abstract Background: Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy. Purpose: Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of the input training images. We sought to evaluate the relationship between the IQMs of input training images and DL-based brain tumor segmentation accuracy, toward developing more generalizable models for multi-institutional data. Methods: We trained a 3D DenseNet model on the BraTS 2020 cohorts for MRI segmentation of the tumor subregions enhancing tumor (ET), peritumoral edematous tissue, and necrotic and non-ET, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, and (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts. Results: Multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and models trained on WQ images based on the noise measurement peak signal-to-noise ratio (SNR) yielded significantly improved tumor segmentation accuracy compared to their inverse models. Conclusions: Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. The selection of MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
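The correlate-then-threshold workflow described above can be sketched in plain Python: compute a Pearson correlation between an image quality metric and per-scan Dice values, then split scans into "better" and "worse" quality groups by a relative threshold. This is an illustration only; the dictionary keys, toy numbers, and threshold choice are our assumptions.

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_by_quality(scans, iqm_key, threshold):
    """Relative thresholding of scans into 'better' (BQ) and 'worse' (WQ) groups."""
    bq = [s for s in scans if s[iqm_key] >= threshold]
    wq = [s for s in scans if s[iqm_key] < threshold]
    return bq, wq

# Toy example: SNR of each scan vs. its whole-tumor Dice score.
scans = [{"snr": 5.0, "dice": 0.70}, {"snr": 12.0, "dice": 0.85},
         {"snr": 20.0, "dice": 0.91}, {"snr": 30.0, "dice": 0.93}]
r = pearson([s["snr"] for s in scans], [s["dice"] for s in scans])
bq, wq = split_by_quality(scans, "snr", threshold=15.0)
```

An IQM whose correlation with Dice is strong is then a candidate for curating the training set, as the abstract's BQ/WQ retraining experiments do.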