Title: Fully Automatic Knee Bone Detection and Segmentation on Three-Dimensional MRI
Three-dimensional (3D) imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used in the medical sector. 3D MRI is a non-invasive method of studying the soft-tissue structures of the knee joint for osteoarthritis studies. Identifying the bone structure first can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesions, and the meniscus. U-net is a convolutional neural network that was originally designed to segment biological images with limited training data; the input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures using 3D MRI, which is a sequence of 2D slices. A fully automatic model is proposed to detect and segment knee bones. The proposed model was trained, tested, and validated using 99 knee MRI cases, where each case consists of 160 2D slices for a single knee scan. To evaluate the model's performance, the similarity, Dice coefficient (DICE), and area error metrics were calculated. Separate models were trained for the individual knee bones (tibia, femur, and patella), along with a combined model for segmenting all the knee bones. Using the whole MRI sequence (160 slices), the method first detects the beginning and ending bone slices, and then segments the bone structures for all the slices in between. On the testing set, the detection model accomplished 98.79% accuracy, and the segmentation model achieved a DICE of 96.94% and a similarity of 93.98%. The proposed method outperforms several state-of-the-art methods in terms of DICE score on the same dataset: it outperforms U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34%.
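The DICE score reported above compares a predicted binary mask against a ground-truth mask. As a minimal sketch (not the paper's implementation), Dice can be computed as twice the overlap divided by the total mask sizes; the Jaccard index is shown as one common choice for a "similarity" metric, though the abstract does not specify its exact definition:

```python
def dice(pred, truth):
    """Dice coefficient for two flat binary masks (iterables of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, truth):
    """Jaccard index (intersection over union) for two flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

For 3D volumes such as the 160-slice sequences above, the same formulas apply after flattening the voxel masks.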
Award ID(s):
1723420 1723429
NSF-PAR ID:
10384796
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Diagnostics
Volume:
12
Issue:
1
ISSN:
2075-4418
Page Range / eLocation ID:
123
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.

    Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This foundation paves the path for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using healthy young adults, and thus may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that have undergone meticulous manual review and correction. Each T1-weighted MRI volume is segmented into 11 tissue types: white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of the number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff Distance of 0.21 versus 0.36 for the runner-up (lower is better). GRACE can segment a whole-head MRI in about 3 seconds, while the fastest competing software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-MRI scans with favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available to the research community at https://github.com/lab-smile/GRACE.

     
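The Hausdorff Distance used to compare GRACE against other tools measures the worst-case disagreement between two segmentations: the largest distance from a point in one set to its nearest point in the other, taken symmetrically. A minimal sketch over 2D point sets (an illustration, not the paper's evaluation code):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sets of 2D points."""
    def directed(u, v):
        # for each point in u, find its nearest neighbour in v; keep the worst case
        return max(min(((ux - vx) ** 2 + (uy - vy) ** 2) ** 0.5
                       for vx, vy in v)
                   for ux, uy in u)
    return max(directed(a, b), directed(b, a))
```

Because it is a maximum rather than an average, a single badly misplaced boundary point dominates the score, which makes the metric a strict test of segmentation quality.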
  2.
    Osteoarthritis (OA) is the most common form of arthritis and often occurs in the knee. While convolutional neural networks (CNNs) have been widely used to study medical images, the application of three-dimensional (3D) CNNs to knee OA diagnosis is limited. This study utilizes a 3D CNN model to analyze sequences of knee magnetic resonance (MR) images and perform knee OA classification. An advantage of a 3D CNN is the ability to analyze a whole sequence of 3D MR images as a single unit, as opposed to a traditional 2D CNN, which examines one image at a time. Therefore, 3D features can be extracted from adjacent slices, which may not be detectable from a single 2D image. The input data for each knee were a sequence of double-echo steady-state (DESS) MR images, and each knee was labeled by its Kellgren and Lawrence (KL) grade of severity at levels 0–4. In addition to the 5-category KL grade classification, we further examined a 2-category classification that distinguishes non-OA (KL ≤ 1) from OA (KL ≥ 2) knees; clinically, diagnosing a patient with knee OA is the ultimate goal of assigning a KL grade. On a dataset of 1100 knees, the 3D CNN model that classifies knees with and without OA achieved an accuracy of 86.5% on the validation set and 83.0% on the testing set. We further conducted a comparative study between MRI and X-ray. Compared with a CNN model using X-ray images trained on the same group of patients, the proposed 3D model with MR images achieved higher accuracy in both the 5-category classification (54.0% vs. 50.0%) and the 2-category classification (83.0% vs. 77.0%). The results indicate that MRI, combined with a 3D CNN model, has greater potential than the currently used X-ray methods to improve the accuracy of clinical knee OA diagnosis.
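The 2-category task described above collapses the five KL grades into a binary label. A minimal sketch of that mapping (the KL ≥ 2 threshold is taken from the abstract; the function name is illustrative):

```python
def binarize_kl(kl_grade):
    """Map a Kellgren and Lawrence grade (0-4) to a binary OA label:
    0 = non-OA (KL <= 1), 1 = OA (KL >= 2)."""
    if kl_grade not in range(5):
        raise ValueError("KL grade must be an integer in 0..4")
    return int(kl_grade >= 2)
```

Grouping the grades this way turns an ordinal 5-class problem into the clinically decisive question of whether a knee has OA at all.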
  3. Gliomas have become the most common cancerous brain tumors, and manual diagnosis from 3D MRIs is time-consuming and possibly inconsistent when conducted by different radiologists, which leads to a pressing demand for automatic segmentation of brain tumors. State-of-the-art approaches employ fully convolutional networks (FCNs) to automatically segment the MRI scans. In particular, the 3D U-Net has achieved notable performance and motivated a series of subsequent works. However, its significant size and heavy computation have impeded actual deployment. Although there exists a body of literature on the compression of CNNs using low-precision representations, these methods either focus on storage reduction without computational improvement or cause severe performance degradation. In this article, we propose a CNN training algorithm that approximates weights and activations using non-negative integers along with trained affine mapping functions. Moreover, our approach allows the dot-product operations to be performed in an integer-arithmetic manner and defers the floating-point decoding and encoding phases until the end of each layer. Experimental results on BraTS 2018 show that our trained affine mapping approach achieves near full-precision Dice accuracy with 8-bit weights and activations. In addition, we achieve a Dice accuracy within 0.005 and 0.01 of the full-precision counterpart when using 4-bit and 2-bit precision, respectively.
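The affine mapping described above encodes floating-point values as non-negative integers and defers the floating-point decode until after the integer dot product. A minimal sketch of such a scheme, assuming a standard scale/zero-point affine quantizer (the mappings trained in the paper may differ):

```python
def quantize(x, scale, zero, bits=8):
    """Encode a float as a non-negative integer in [0, 2**bits - 1]."""
    qmax = (1 << bits) - 1
    q = int(round(x / scale)) + zero
    return max(0, min(qmax, q))

def dequantize(q, scale, zero):
    """Decode an integer back to its approximate float value."""
    return scale * (q - zero)

def int_dot(qx, qw, sx, zx, sw, zw):
    """Dot product of quantized vectors using integer arithmetic only;
    the floating-point decode is a single rescale deferred to the end."""
    acc = sum((a - zx) * (b - zw) for a, b in zip(qx, qw))  # pure integer math
    return sx * sw * acc
```

With x = [0.5, 1.0] and w = [0.2, 0.3], both quantized at scale 0.1 and zero point 0, the integer accumulation recovers the true dot product 0.4 up to rounding, while all per-element multiplies stay in the integer domain.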
  4. The segmentation of the ventricular wall and the blood pool in cardiac magnetic resonance imaging (MRI) has been investigated for decades, given its important role in the delineation of cardiac function and the diagnosis of heart disease. One of the major challenges is that the inner epicardium boundary is not always visible in the image domain, due to the mixture of blood and muscle structures, especially at the end of contraction (systole). To address this, we propose a novel approach for cardiac segmentation in short-axis (SAX) MRI: coupled deep neural networks and deformable models. First, a 2D U-Net is adopted for each magnetic resonance (MR) slice, and a 3D U-Net refines the segmentation results along the temporal dimension. Then, we propose a multi-component deformable model to extract accurate contours for both the endo- and epicardium with global and local constraints. Finally, a partial blood classification is explored to estimate the presence of boundary pixels near the trabeculae and solid wall, and to avoid moving the endocardium boundary inward. Quantitative evaluation demonstrates the high accuracy, robustness, and efficiency of our approach for slices acquired at different locations and different cardiac phases.
  5. This paper studied the changing pattern of knee cartilage using 3D knee magnetic resonance (MR) images over a 12-month period. As a pilot study, we focused on the medial tibia compartment of the knee joint. To quantify the thickness of cartilage in this compartment, we utilized two methods: measurement through manual segmentation of cartilage on each slice of the 3D MR sequence, and measurement through the cartilage damage index (CDI), which quantifies the thickness at a few informative locations on the cartilage. We employed artificial neural networks (ANNs) to model the changing pattern of cartilage thickness. The input feature space was composed of the thickness information at a cartilage location and its neighborhood from the baseline-year data. The output categories were ‘changed’ and ‘no-change’, based on the thickness difference at the same location between the baseline-year and 12-month follow-up data. Different ANN models were trained using CDI features and manual segmentation features. Further, for each type of feature, individual models were trained on different subregions of the medial tibia compartment: the bottom part, the middle part, the upper part, and the whole compartment. Based on the experimental results, we found that CDI features generated better prediction performance than manual segmentation features, on both the whole medial tibia compartment and every subregion. For CDI, the best performance in terms of AUC was obtained using the central CDI locations (AUC = 0.766), while the best performance for manual segmentation was obtained using all slices of the 3D MR sequence (AUC = 0.656). The CDI method thus demonstrated a stronger pattern of cartilage change than the manual segmentation method, which required up to 6 hours of manual delineation of all MRI slices. The result should be further validated by extending the experiment to other compartments.
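The AUC values above summarize how well the predicted scores separate ‘changed’ from ‘no-change’ locations. AUC equals the probability that a randomly chosen positive receives a higher score than a randomly chosen negative; a minimal sketch of that rank-based computation (illustrative, not the paper's code):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # count positive-negative pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale, the reported 0.766 for CDI features means roughly three out of four changed/unchanged location pairs are ranked correctly, versus about two out of three for the manual segmentation features at 0.656.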