Title: Deep whole brain segmentation of 7T structural MRI
7T magnetic resonance imaging (MRI) has the potential to drive our understanding of human brain function through new contrast and enhanced resolution. Whole brain segmentation is a key neuroimaging technique that allows for region-by-region analysis of the brain. Segmentation is also an important preliminary step that provides spatial and volumetric information for running other neuroimaging pipelines. Spatially localized atlas network tiles (SLANT) is a popular 3D convolutional neural network (CNN) tool that breaks the whole brain segmentation task into localized sub-tasks. Each sub-task involves a specific spatial location handled by an independent 3D convolutional network to provide high resolution whole brain segmentation results. SLANT has been widely used to generate whole brain segmentations from structural scans acquired on 3T MRI. However, the use of SLANT for whole brain segmentation of structural 7T MRI scans has not been successful due to the inhomogeneous image contrast usually seen across the brain at 7T. For instance, we demonstrate that the mean percent difference of SLANT label volumes between 3T scan-rescan pairs is approximately 1.73%, whereas the corresponding 3T-7T scan-rescan difference is substantially higher, at approximately 15.13%. Our approach to this problem is to register whole brain segmentations computed on 3T MRI to the corresponding 7T MRI and use them to finetune SLANT for structural 7T MRI. With the finetuned SLANT pipeline, we observe a lower mean relative difference of approximately 8.43% in label volumes derived from structural 7T MRI data. The Dice similarity coefficient between the SLANT segmentation on the 3T MRI scan and the finetuned SLANT segmentation on the 7T MRI scan increased from 0.79 to 0.83 (p < 0.01). These results suggest that finetuning SLANT is a viable solution for improving whole brain segmentation on high resolution 7T structural imaging.
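The two agreement measures reported above (mean percent difference of per-label volumes between scan-rescan pairs, and the Dice similarity coefficient) can be sketched as follows. This is an illustrative implementation, not the paper's actual evaluation code; in particular, using the pairwise mean volume as the denominator of the percent difference is an assumption.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label):
    """Dice similarity coefficient for one label between two segmentations."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_percent_volume_difference(vols_a, vols_b):
    """Mean percent difference of per-label volumes between a scan-rescan pair.

    Assumption: each label's difference is normalized by the pairwise mean
    volume, then averaged across labels and expressed as a percentage.
    """
    vols_a = np.asarray(vols_a, dtype=float)
    vols_b = np.asarray(vols_b, dtype=float)
    rel = np.abs(vols_a - vols_b) / ((vols_a + vols_b) / 2.0)
    return float(rel.mean() * 100.0)
```

For example, a label with volumes 100 and 110 across the two scans yields a percent difference of about 9.5% under this convention.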
Award ID(s):
1840896
PAR ID:
10399611
Journal Name:
SPIE Medical Imaging
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

    Magnetic resonance elastography (MRE) is a non-invasive method for determining the mechanical response of tissues using applied harmonic deformation and motion-sensitive MRI. MRE studies of the human brain are typically performed at conventional field strengths, with a few attempts at the ultra-high field strength, 7T, reporting increased spatial resolution with partial brain coverage. Achieving high-resolution human brain scans using 7T MRE presents unique challenges of decreased octahedral shear strain-based signal-to-noise ratio (OSS-SNR) and lower shear wave motion sensitivity. In this study, we establish high resolution MRE at 7T with a custom 2D multi-slice single-shot spin-echo echo-planar imaging sequence, using the Gadgetron advanced image reconstruction framework, applying Marchenko–Pastur principal component analysis denoising, and using nonlinear viscoelastic inversion. These techniques allowed us to calculate the viscoelastic properties of the whole human brain at 1.1 mm isotropic imaging resolution with high OSS-SNR and repeatability. Using phantom models and 7T MRE data of eighteen healthy volunteers, we demonstrate the robustness and accuracy of our method at high resolution while quantifying the feasible tradeoff between resolution, OSS-SNR, and scan time. Using these post-processing techniques, we significantly increased OSS-SNR at 1.1 mm resolution with whole-brain coverage by approximately 4-fold and generated elastograms with high anatomical detail. Performing high-resolution MRE at 7T on the human brain can provide information on different substructures within brain tissue based on their mechanical properties, which can then be used to diagnose pathologies (e.g. Alzheimer's disease), indicate disease progression, or better investigate neurodegeneration effects or other relevant brain disorders, in vivo.
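    As a rough illustration of the PCA-based denoising step, the sketch below hard-thresholds the SVD of a Casorati (voxels × measurements) matrix to a fixed number of components. This is a simplified stand-in, not the authors' pipeline: true Marchenko–Pastur PCA estimates the component cutoff automatically from the MP eigenvalue bound rather than taking `keep` as an input, and typically operates on local patches.

```python
import numpy as np

def pca_denoise(casorati, keep):
    """Simplified PCA denoising of a Casorati (voxels x measurements) matrix.

    Hard-thresholds the SVD to the `keep` leading components. MP-PCA proper
    chooses the cutoff from the Marchenko-Pastur distribution of the noise
    eigenvalues instead of a user-supplied rank.
    """
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s = s.copy()
    s[keep:] = 0.0  # discard the trailing (noise-dominated) components
    return u @ np.diag(s) @ vt
```

On a synthetic low-rank signal plus Gaussian noise, keeping the true rank reduces the error to the ground truth relative to the noisy input.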

     
  2. Abstract

    Background

    Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment‐based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal‐to‐noise, contrast‐to‐noise) and segmentation accuracy.

    Purpose

    Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL‐based brain tumor segmentation accuracy toward developing more generalizable models for multi‐institutional data.

    Methods

    We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of the tumor subregions enhancing tumor (ET), peritumoral edema, and necrotic/non-enhancing tumor core on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated through the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ), via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, as well as (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.

    Results

    For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, with validation on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch), and models trained on WQ images based on the noise measurement peak signal-to-noise ratio (PSNR), yielded significantly improved tumor segmentation accuracy compared to their inverse models.

    Conclusions

    Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet‐based brain tumor segmentation performance. The selection of MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
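    The correlation and relative-thresholding steps described in the Methods can be sketched as below. The median split is an assumed choice of relative threshold (the abstract does not specify one), and the "higher is better" orientation of the IQM is also an assumption — some IQMs, e.g. noise measures, run the other way.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements,
    e.g. per-scan WT Dice values and one IQM."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def split_by_quality(iqm_values, threshold=None):
    """Group scans as 'better' (True) or 'worse' (False) quality by relative
    thresholding on one IQM. Assumptions: the cohort median as the default
    threshold, and that higher IQM values indicate better quality."""
    iqm_values = np.asarray(iqm_values, dtype=float)
    if threshold is None:
        threshold = np.median(iqm_values)
    return iqm_values >= threshold
```

The boolean mask from `split_by_quality` would then select the BQ and WQ training subsets for the two cross-validation settings.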

     
  3. Abstract

    Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This foundation paves the path for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using healthy young adults. Thus, they may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that underwent meticulous manual correction and review. Each T1-weighted MRI volume is segmented into 11 tissue types, including white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task. On this task, GRACE achieves an average Hausdorff distance of 0.21, compared to the runner-up's average of 0.36 (lower is better). GRACE can segment a whole-head MRI in about 3 seconds, while the fastest software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-MRI scans at favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available online and open to the research community at https://github.com/lab-smile/GRACE.
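    The Hausdorff distance used for evaluation above can be computed from two binary masks roughly as follows — a sketch using SciPy's directed Hausdorff on all foreground voxels. Surface-only and percentile (e.g. 95th) variants are common in practice and would give different numbers.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between two binary segmentation masks,
    in voxel units, computed over all foreground voxels (a simplification;
    surface-based variants restrict to boundary voxels)."""
    pts_a = np.argwhere(mask_a)  # coordinates of foreground voxels
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```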

     
  4. Objective and Impact Statement. We propose an automated method of predicting Normal Pressure Hydrocephalus (NPH) from CT scans. A deep convolutional network segments regions of interest from the scans. These regions are then combined with MRI information to predict NPH. To our knowledge, this is the first method which automatically predicts NPH from CT scans and incorporates diffusion tractography information for prediction. Introduction. Due to their low cost and high versatility, CT scans are often used in NPH diagnosis. No well-defined and effective protocol currently exists for analysis of CT scans for NPH. Evans' index, an approximation of the ventricle-to-brain volume ratio using one 2D image slice, has been proposed but is not robust. The proposed approach is an effective way to quantify regions of interest and offers a computational method for predicting NPH. Methods. We propose a novel method to predict NPH by combining regions of interest segmented from CT scans with connectome data to compute features which capture the impact of enlarged ventricles by excluding fiber tracts passing through these regions. The segmentation and network features are used to train a model for NPH prediction. Results. Our method outperforms the current state-of-the-art by 9 precision points and 29 recall points. Our segmentation model outperforms the current state-of-the-art in segmenting the ventricle, gray-white matter, and subarachnoid space in CT scans. Conclusion. Our experimental results demonstrate that fast and accurate volumetric segmentation of CT brain scans can help improve the NPH diagnosis process, and network properties can increase NPH prediction accuracy.
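    Evans' index, mentioned above as the conventional 2D measure, can be sketched as below on a single axial slice. The axis orientation and the use of whole-structure widths (rather than the frontal-horn-specific calipers a radiologist would place) are simplifying assumptions for illustration.

```python
import numpy as np

def evans_index(ventricle_mask, inner_skull_mask):
    """Sketch of Evans' index on one 2D axial slice: maximal left-right width
    of the (frontal horns of the) lateral ventricles divided by the maximal
    internal skull diameter. Assumes axis 1 is the left-right direction."""
    def max_width(mask):
        cols = np.where(np.asarray(mask).any(axis=0))[0]  # occupied columns
        return 0 if cols.size == 0 else int(cols.max() - cols.min() + 1)
    return max_width(ventricle_mask) / max_width(inner_skull_mask)
```

A ratio above roughly 0.3 is the classical cutoff suggestive of ventriculomegaly, which is why the abstract calls the index an approximation rather than a robust criterion.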
  5. Purpose

    To enable rapid imaging with a scan time-efficient 3D cones trajectory, using a deep-learning off-resonance artifact correction technique.

    Methods

    A residual convolutional neural network to correct off-resonance artifacts (Off-ResNet) was trained with a prospective study of pediatric MRA exams. Each exam acquired a short readout scan (1.18 ± 0.38 ms) and a long readout scan (3.35 ± 0.74 ms) at 3 T. Short readout scans, with longer scan times but negligible off-resonance blurring, were used as reference images and augmented with additional off-resonance for supervised training examples. Long readout scans, with greater off-resonance artifacts but shorter scan time, were corrected by autofocus and Off-ResNet and compared with short readout scans by normalized RMS error, structural similarity index, and peak SNR. Scans were also compared by scoring on 8 anatomical features by two radiologists, using analysis of variance with post hoc Tukey's test and two one-sided t-tests. Reader agreement was determined with intraclass correlation.

    Results

    The total scan time for long readout scans was on average 59.3% shorter than short readout scans. Images from Off-ResNet had superior normalized RMS error, structural similarity index, and peak SNR compared with uncorrected images across ±1 kHz off-resonance (P < .01). The proposed method had superior normalized RMS error over −677 Hz to +1 kHz and superior structural similarity index and peak SNR over ±1 kHz compared with autofocus (P < .01). Radiologic scoring demonstrated that long readout scans corrected with Off-ResNet were noninferior to short readout scans (P < .05).

    Conclusion

    The proposed method can correct off‐resonance artifacts from rapid long‐readout 3D cones scans to a noninferior image quality compared with diagnostically standard short readout scans.
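    Two of the comparison metrics above (normalized RMS error and peak SNR) can be sketched as below. Normalizing by the reference intensity range is an assumption here; conventions vary (some implementations normalize by the mean or maximum of the reference instead).

```python
import numpy as np

def normalized_rmse(reference, estimate):
    """Normalized RMS error of an estimate against a reference image.
    Assumption: normalized by the reference intensity range."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return float(rmse / (reference.max() - reference.min()))

def peak_snr(reference, estimate):
    """Peak SNR in dB of an estimate against a reference image.
    Assumption: the peak is the reference intensity range."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((estimate - reference) ** 2)
    peak = reference.max() - reference.min()
    return float(10.0 * np.log10(peak ** 2 / mse))
```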

     