Title: Deep whole brain segmentation of 7T structural MRI
7T magnetic resonance imaging (MRI) has the potential to drive our understanding of human brain function through new contrast and enhanced resolution. Whole brain segmentation is a key neuroimaging technique that allows for region-by-region analysis of the brain. Segmentation is also an important preliminary step that provides spatial and volumetric information for running other neuroimaging pipelines. Spatially localized atlas network tiles (SLANT) is a popular 3D convolutional neural network (CNN) tool that breaks the whole brain segmentation task into localized sub-tasks. Each sub-task covers a specific spatial location and is handled by an independent 3D convolutional network, together providing high resolution whole brain segmentation results. SLANT has been widely used to generate whole brain segmentations from structural scans acquired on 3T MRI. However, applying SLANT to structural 7T MRI scans has not been successful because of the inhomogeneous image contrast usually seen across the brain at 7T. For instance, we demonstrate that the mean percent difference in SLANT label volumes between a 3T scan-rescan pair is approximately 1.73%, whereas the corresponding 3T-7T scan-rescan difference is considerably higher, at approximately 15.13%. Our approach to this problem is to register the whole brain segmentation performed on 3T MRI to 7T MRI and use this information to finetune SLANT for structural 7T MRI. With the finetuned SLANT pipeline, we observe a lower mean relative difference of ~8.43% in the label volumes derived from structural 7T MRI data. The Dice similarity coefficient between the SLANT segmentation of the 3T MRI scan and the finetuned SLANT segmentation of the 7T MRI increased from 0.79 to 0.83 (p < 0.01). These results suggest that finetuning SLANT is a viable solution for improving whole brain segmentation on high resolution 7T structural imaging.
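For readers who want to reproduce the comparison metrics used above, the following is a minimal sketch, in Python with nibabel and NumPy, of computing a per-label Dice similarity coefficient and a mean percent volume difference between two co-registered label images (e.g., a SLANT segmentation of a 3T scan registered into 7T space versus the finetuned SLANT output on the paired 7T scan). The file names and the exact definition of percent difference are assumptions, not the authors' code.

import numpy as np
import nibabel as nib

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def percent_volume_difference(a, b):
    """Absolute volume difference as a percentage of the mean volume."""
    va, vb = a.sum(), b.sum()
    return 100.0 * abs(va - vb) / ((va + vb) / 2.0)

# Hypothetical file names; both label images are assumed co-registered.
seg_3t = nib.load("slant_3T_in_7T_space.nii.gz").get_fdata().astype(int)
seg_7t = nib.load("slant_7T_finetuned.nii.gz").get_fdata().astype(int)

labels = np.intersect1d(np.unique(seg_3t), np.unique(seg_7t))
labels = labels[labels > 0]  # drop background

dices = [dice(seg_3t == l, seg_7t == l) for l in labels]
pvds = [percent_volume_difference(seg_3t == l, seg_7t == l) for l in labels]
print(f"mean Dice: {np.mean(dices):.3f}, mean % volume difference: {np.mean(pvds):.2f}")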
Award ID(s):
1840896
PAR ID:
10399611
Author(s) / Creator(s):
Date Published:
Journal Name:
SPIE Medical Imaging
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Magnetic resonance elastography (MRE) is a non-invasive method for determining the mechanical response of tissues using applied harmonic deformation and motion-sensitive MRI. MRE studies of the human brain are typically performed at conventional field strengths, with a few attempts at the ultra-high field strength of 7T reporting increased spatial resolution with partial brain coverage. Achieving high-resolution human brain scans using 7T MRE presents unique challenges of decreased octahedral shear strain-based signal-to-noise ratio (OSS-SNR) and lower shear wave motion sensitivity. In this study, we establish high resolution MRE at 7T with a custom 2D multi-slice single-shot spin-echo echo-planar imaging sequence, using the Gadgetron advanced image reconstruction framework, applying Marchenko–Pastur principal component analysis denoising, and using nonlinear viscoelastic inversion. These techniques allowed us to calculate the viscoelastic properties of the whole human brain at 1.1 mm isotropic imaging resolution with high OSS-SNR and repeatability. Using phantom models and 7T MRE data from eighteen healthy volunteers, we demonstrate the robustness and accuracy of our method at high resolution while quantifying the feasible tradeoff between resolution, OSS-SNR, and scan time. Using these post-processing techniques, we increased OSS-SNR at 1.1 mm resolution with whole-brain coverage by approximately 4-fold and generated elastograms with high anatomical detail. Performing high-resolution MRE at 7T on the human brain can provide information on different substructures within brain tissue based on their mechanical properties, which can then be used to diagnose pathologies (e.g., Alzheimer's disease), indicate disease progression, or better investigate neurodegeneration effects or other relevant brain disorders, in vivo.
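As an illustration of the Marchenko–Pastur PCA denoising step mentioned above, the sketch below applies DIPY's mppca implementation to a 4D series (e.g., stacked phase offsets or wave images). The study's actual pipeline runs inside the Gadgetron reconstruction framework with nonlinear viscoelastic inversion; the use of DIPY and the file names here are assumptions made purely for illustration.

import nibabel as nib
from dipy.denoise.localpca import mppca

img = nib.load("mre_wave_series.nii.gz")      # assumed 4D: x, y, z, phase offsets
data = img.get_fdata()

# Local Marchenko-Pastur PCA denoising over 5x5x5 patches (patch_radius=2).
denoised = mppca(data, patch_radius=2)
nib.save(nib.Nifti1Image(denoised, img.affine), "mre_wave_series_denoised.nii.gz")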
  2. Abstract: Background: Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy. Purpose: Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL-based brain tumor segmentation accuracy toward developing more generalizable models for multi-institutional data. Methods: We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of the tumor subregions (enhancing tumor [ET], peritumoral edematous tissue, and necrotic and non-enhancing tumor) on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, as well as (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts. Results: Multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and models trained on WQ images based on the noise measurement peak signal-to-noise ratio (PSNR) yielded significantly improved tumor segmentation accuracy compared to their inverse models. Conclusions: Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. The selection of MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
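A hedged sketch of the correlation-and-thresholding analysis described above: compute Pearson correlations between per-scan MRQy image quality metrics and whole-tumor Dice, then split scans into "better" and "worse" quality groups by a relative threshold (here, the cohort median) on the most correlated metric. The CSV layout and column names are hypothetical.

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("iqm_and_dice.csv")          # one row per scan: 13 IQM columns + 'wt_dice'
iqm_cols = [c for c in df.columns if c != "wt_dice"]

# Rank IQMs by the strength of their correlation with segmentation accuracy.
corrs = {}
for c in iqm_cols:
    r, p = pearsonr(df[c], df["wt_dice"])
    corrs[c] = (r, p)
for c in sorted(corrs, key=lambda k: -abs(corrs[k][0])):
    print(f"{c:>30s}  r={corrs[c][0]:+.2f}  p={corrs[c][1]:.3g}")

# A relative threshold on the most correlated IQM defines the BQ and WQ groups
# used for the train-on-one / validate-on-the-other experiments.
best = max(corrs, key=lambda k: abs(corrs[k][0]))
threshold = df[best].median()
bq = df[df[best] >= threshold]                # "better" quality scans
wq = df[df[best] < threshold]                 # "worse" quality scans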
  3. This analysis utilizes a residual 3D convolutional neural network equipped with 4 attention layers. The core contributions of our work are twofold: first, we integrate clinical expertise into the initialization of the attention layers' weights through a whole-brain segmentation technique; second, we employ various state-of-the-art model interpretation techniques. These techniques effectively annotate influential brain regions and demonstrate promising results within neuroimaging analysis, as reflected in the key metrics and outcomes. Our findings underscore the potential of deep learning in neuroimaging, especially highlighting the critical role of comprehensive brain segmentation in enhancing diagnostic accuracy.
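The abstract above does not give implementation details, so the following is a speculative PyTorch sketch of the general idea of seeding a spatial attention layer's weights from a prior derived from whole-brain segmentation, so that clinically relevant regions start with higher attention. The shapes, module, and region mask are assumptions, not the authors' model.

import torch
import torch.nn as nn

class SegSeededAttention(nn.Module):
    """Spatial attention whose logits are initialized from a segmentation prior."""
    def __init__(self, prior_mask: torch.Tensor):
        super().__init__()
        # prior_mask: (D, H, W), 1.0 inside regions flagged by the segmentation.
        self.logits = nn.Parameter(prior_mask.clone().float())

    def forward(self, x):                                 # x: (B, C, D, H, W)
        attn = torch.sigmoid(self.logits)[None, None]     # broadcast over batch/channels
        return x * attn

# Hypothetical prior derived from a whole-brain segmentation.
prior = torch.zeros(96, 112, 96)
prior[20:70, 30:80, 20:70] = 1.0
layer = SegSeededAttention(prior)
out = layer(torch.randn(1, 16, 96, 112, 96))              # attention-weighted features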
  4. Abstract: From birth to 5 years of age, brain structure matures and evolves alongside emerging cognitive and behavioral abilities. In relating concurrent cognitive functioning to measures of brain structure, a major challenge that has impeded prior investigation of their time-dynamic relationships is the sparse and irregular nature of most longitudinal neuroimaging data. We demonstrate how this problem can be addressed by applying functional concurrent regression models (FCRMs) to longitudinal cognitive and neuroimaging data. The application of FCRMs in neuroimaging is illustrated with longitudinal neuroimaging and cognitive data acquired from a large cohort (n = 210) of healthy children, 2–48 months of age. Quantifying white matter myelination using myelin water fraction (MWF), an imaging metric derived from MRI scans, this methodology reveals an early period (200–500 days) during which whole brain and regional white matter structure, as quantified by MWF, is positively associated with cognitive ability, whereas no such association was found for whole brain white matter volume. Adjusting for baseline covariates, including socioeconomic status as measured by maternal education (SES-ME), infant feeding practice, gender, and birth weight, further reveals an increasing association between SES-ME and cognitive development with child age. These results shed new light on the emerging patterns of brain and cognitive development, indicating that FCRMs provide a useful tool for investigating these evolving relationships.
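To make the form of a functional concurrent regression model concrete, the sketch below estimates an age-varying coefficient beta1(t), relating MWF to a cognitive score, by locally weighted least squares on synthetic, irregularly sampled observations. It illustrates only the structure of the model, Y(t) = beta0(t) + beta1(t) * MWF(t) + error; it is not the estimator used in the study, and all numbers are made up.

import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(60, 1460, 600)                          # visit ages in days (synthetic)
mwf = 0.05 + 1e-4 * age + rng.normal(0, 0.02, age.size)   # synthetic myelin water fraction
cog = 80 + 40 * mwf * ((age > 200) & (age < 500)) + rng.normal(0, 5, age.size)

grid = np.linspace(100, 1400, 27)                         # ages at which to estimate beta1(t)
bandwidth = 120.0                                         # days; controls local smoothing
X = np.column_stack([np.ones_like(mwf), mwf])             # intercept + concurrent MWF
beta1 = []
for t in grid:
    w = np.exp(-0.5 * ((age - t) / bandwidth) ** 2)       # Gaussian kernel weights around age t
    coef = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (cog * w))
    beta1.append(coef[1])                                 # age-varying MWF effect

print(np.round(beta1, 1))                                 # larger values inside the 200-500 day window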
  5. Abstract: Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This foundation paves the path for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using healthy young adults; thus, they may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that have undergone meticulous manual correction and review. Each T1-weighted MRI volume is segmented into 11 tissue types, including white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff Distance of 0.21 compared to 0.36 for the runner-up. GRACE can segment a whole-head MRI in about 3 seconds, while the fastest software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-MRI scans with favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available online and open to the research community at https://github.com/lab-smile/GRACE.
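As an illustration of the Hausdorff-distance evaluation mentioned above, the sketch below compares a predicted and a reference segmentation label by label using surface voxels and SciPy's directed Hausdorff distance. The file names and label indices are placeholders, and the authors' actual evaluation should be taken from the released code at https://github.com/lab-smile/GRACE.

import numpy as np
import nibabel as nib
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

pred = nib.load("grace_prediction.nii.gz").get_fdata().astype(int)
ref = nib.load("reference_segmentation.nii.gz").get_fdata().astype(int)

def surface_points(mask):
    """Voxel coordinates of the boundary of a binary mask."""
    return np.argwhere(mask & ~binary_erosion(mask))

for label in range(1, 6):                     # hypothetical five tissue labels, assumed present in both
    p = surface_points(pred == label)
    r = surface_points(ref == label)
    hd = max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
    print(f"label {label}: symmetric Hausdorff distance = {hd:.2f} voxels")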