

Title: Identification of Knee Cartilage Changing Pattern
This paper studied the changing pattern of knee cartilage using 3D knee magnetic resonance (MR) images over a 12-month period. As a pilot study, we focused on the medial tibia compartment of the knee joint. To quantify the cartilage thickness in this compartment, we used two methods: one was measurement through manual segmentation of the cartilage on each slice of the 3D MR sequence; the other was measurement through the cartilage damage index (CDI), which quantifies the thickness at a few informative locations on the cartilage. We employed artificial neural networks (ANNs) to model the changing pattern of cartilage thickness. The input feature space was composed of the thickness information at a cartilage location and its neighborhood from the baseline-year data. The output categories were ‘changed’ and ‘no-change’, based on the thickness difference at the same location between the baseline year and the 12-month follow-up data. Different ANN models were trained using CDI features and manual segmentation features. Further, for each type of feature, individual models were trained on different subregions of the medial tibia compartment, i.e., the bottom part, the middle part, the upper part, and the whole compartment. The experiments showed that CDI features produced better prediction performance than manual segmentation features, on both the whole medial tibia compartment and every subregion. For CDI, the best performance in terms of AUC was obtained using the central CDI locations (AUC = 0.766), while the best performance for manual segmentation was obtained using all slices of the 3D MR sequence (AUC = 0.656). The CDI method thus demonstrated a stronger pattern of cartilage change than the manual segmentation method, which requires up to 6 hours of manual delineation across all MRI slices. These results should be further validated by extending the experiment to other compartments.
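The classification setup described above can be sketched as follows. This is not the authors' code: the data are synthetic, and the feature layout (baseline thickness at a location plus its neighborhood) and network size are illustrative assumptions.

```python
# Sketch of the ANN change-detection setup on synthetic data (illustrative
# only): each sample is the baseline thickness at one cartilage location
# plus its neighborhood; the label is 'changed' (1) vs. 'no-change' (0).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_locations, n_neighbors = 500, 8  # hypothetical sizes

# Baseline thickness (mm) at each location and its neighbors (synthetic).
X = rng.normal(loc=2.0, scale=0.4, size=(n_locations, 1 + n_neighbors))
# Synthetic ground truth: thinner baseline cartilage is more likely to change.
y = (X[:, 0] + rng.normal(0.0, 0.3, n_locations) < 1.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# Evaluate as in the paper: AUC on held-out locations.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out locations: {auc:.3f}")
```

The same pipeline would apply to either feature source; only the input columns (CDI locations vs. per-slice manual measurements) would differ.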
Award ID(s):
1723420
NSF-PAR ID:
10169685
Author(s) / Creator(s):
Date Published:
Journal Name:
Applied Sciences
Volume:
9
Issue:
17
ISSN:
2076-3417
Page Range / eLocation ID:
3469
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the medical sector, three-dimensional (3D) images such as computed tomography (CT) and magnetic resonance imaging (MRI) scans are commonly used. 3D MRI is a non-invasive method for studying the soft-tissue structures of the knee joint in osteoarthritis studies. Identifying the bone structure first can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesions, and the meniscus. U-net is a convolutional neural network that was originally designed to segment biological images with limited training data; the input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures in 3D MRI, which is a sequence of 2D slices. A fully automatic model is proposed to detect and segment the knee bones. The proposed model was trained, tested, and validated using 99 knee MRI cases, where each case consists of 160 2D slices for a single knee scan. To evaluate the model's performance, the similarity, Dice coefficient (DICE), and area error metrics were calculated. Separate models were trained for different knee bone components, including the tibia, femur, and patella, as well as a combined model for segmenting all the knee bones. Using the whole MRI sequence (160 slices), the method first detects the beginning and ending bone slices, and then segments the bone structures in all the slices in between. On the testing set, the detection model achieved 98.79% accuracy and the segmentation model achieved a DICE of 96.94% and a similarity of 93.98%. The proposed method outperforms several state-of-the-art methods on the same dataset, i.e., it outperforms U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34% in terms of DICE score.
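The DICE metric used to score the segmentation model above is standard and can be computed directly on binary masks; the masks below are toy examples, not data from the study.

```python
# A minimal Dice coefficient (DICE) of the kind used to evaluate the
# segmentation model; masks here are illustrative toy examples.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D slice masks (1 = bone pixel).
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (3+4) ≈ 0.857
```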
  2. Abstract Background We aimed to determine if composite structural measures of knee osteoarthritis (KOA) progression on magnetic resonance (MR) imaging can predict the radiographic onset of accelerated knee osteoarthritis. Methods We used data from a nested case-control study among participants from the Osteoarthritis Initiative without radiographic KOA at baseline. Participants were separated into three groups based on radiographic disease progression over 4 years: 1) accelerated (Kellgren-Lawrence grades [KL] 0/1 to 3/4), 2) typical (increase in KL, excluding accelerated osteoarthritis), or 3) no KOA (no change in KL). We assessed tibiofemoral cartilage damage (four regions: medial/lateral tibia/femur), bone marrow lesion (BML) volume (four regions: medial/lateral tibia/femur), and whole knee effusion-synovitis volume on 3 T MR images with semi-automated programs. We calculated two MR-based composite scores. Cumulative damage was the sum of standardized cartilage damage. Disease activity was the sum of standardized volumes of effusion-synovitis and BMLs. We focused on annual images from 2 years before to 2 years after radiographic onset (or a matched time for those without knee osteoarthritis). To determine between-group differences in the composite metrics at all time points, we used generalized linear mixed models with group (3 levels) and time (up to 5 levels). For our prognostic analysis, we used multinomial logistic regression models to determine if one-year worsening in each composite metric was associated with future accelerated knee osteoarthritis (odds ratios [OR] based on units of 1 standard deviation of change). Results Prior to disease onset, the accelerated KOA group had greater average disease activity compared to the typical and no KOA groups, and this persisted up to 2 years after disease onset.
During a pre-radiographic disease period, the odds of developing accelerated KOA were greater in people with worsening disease activity [versus typical KOA OR (95% confidence interval [CI]): 1.58 (1.08 to 2.33); versus no KOA: 2.39 (1.55 to 3.71)] or cumulative damage [versus typical KOA: 1.69 (1.14 to 2.51); versus no KOA: 2.11 (1.41 to 3.16)]. Conclusions MR-based disease activity and cumulative damage metrics may be prognostic markers to help identify people at risk for accelerated onset and progression of knee osteoarthritis. 
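The composite scores above (sums of standardized regional measures) can be sketched numerically. The values, region counts, and distributions below are synthetic placeholders, not study data.

```python
# Sketch of a composite MR metric as described above, on synthetic values:
# each regional measure is standardized (z-scored) across participants and
# the standardized values are summed into one composite score per knee.
import numpy as np

rng = np.random.default_rng(1)
n = 100  # hypothetical number of participants

# Synthetic regional volumes: 4 BML regions + whole-knee effusion-synovitis.
bml = rng.gamma(2.0, 50.0, size=(n, 4))
effusion = rng.gamma(2.0, 200.0, size=(n, 1))

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardize each column to zero mean, unit variance."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Disease activity = sum of standardized BML and effusion-synovitis volumes.
disease_activity = np.concatenate(
    [zscore(bml), zscore(effusion)], axis=1
).sum(axis=1)
print(disease_activity.shape)  # one composite score per participant
```

Cumulative damage would follow the same pattern, summing standardized cartilage-damage measures instead of volumes.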
  3. Background

    Deep learning (DL)‐based automatic segmentation models can expedite manual segmentation yet require resource‐intensive fine‐tuning before deployment on new datasets. The generalizability of DL methods to new datasets without fine‐tuning is not well characterized.

    Purpose

    Evaluate the generalizability of DL‐based models by deploying pretrained models on independent datasets varying by MR scanner, acquisition parameters, and subject population.

    Study Type

    Retrospective based on prospectively acquired data.

    Population

    Overall test dataset: 59 subjects (26 females); Study 1: 5 healthy subjects (zero females), Study 2: 8 healthy subjects (eight females), Study 3: 10 subjects with osteoarthritis (eight females), Study 4: 36 subjects with various knee pathology (10 females).

    Field Strength/Sequence

    A 3‐T, quantitative double‐echo steady state (qDESS).

    Assessment

    Four annotators manually segmented knee cartilage. Each reader segmented one of four qDESS datasets in the test dataset. Two DL models, one trained on qDESS data and another on Osteoarthritis Initiative (OAI)‐DESS data, were assessed. Manual and automatic segmentations were compared by quantifying variations in segmentation accuracy, volume, and T2 relaxation times for superficial and deep cartilage.

    Statistical Tests

    Dice similarity coefficient (DSC) for segmentation accuracy. Lin's concordance correlation coefficient (CCC), Wilcoxon rank-sum tests, and root-mean-squared error coefficient of variation to quantify manual vs. automatic T2 and volume variations. Bland–Altman plots for manual vs. automatic T2 agreement. A P value < 0.05 was considered statistically significant.
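Lin's CCC, used above to compare manual and automatic T2 and volume measurements, has a closed form; the toy T2 values below are illustrative, not from the study.

```python
# Lin's concordance correlation coefficient (CCC) on illustrative values.
import numpy as np

def lin_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variance
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Toy manual vs. automatic T2 values (msec) for five hypothetical regions.
manual = np.array([30.0, 32.5, 28.0, 35.0, 31.0])
auto = np.array([29.5, 33.0, 27.5, 34.0, 31.5])
print(round(lin_ccc(manual, auto), 3))
```

Unlike Pearson's r, the CCC penalizes systematic bias between the two raters, which is why it suits manual-vs.-automatic agreement studies.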

    Results

    DSCs for the qDESS‐trained model, 0.79–0.93, were higher than those for the OAI‐DESS‐trained model, 0.59–0.79. T2 and volume CCCs for the qDESS‐trained model, 0.75–0.98 and 0.47–0.95, were higher than respective CCCs for the OAI‐DESS‐trained model, 0.35–0.90 and 0.13–0.84. Bland–Altman 95% limits of agreement for superficial and deep cartilage T2 were lower for the qDESS‐trained model, ±2.4 msec and ±4.0 msec, than the OAI‐DESS‐trained model, ±4.4 msec and ±5.2 msec.

    Data Conclusion

    The qDESS‐trained model may generalize well to independent qDESS datasets regardless of MR scanner, acquisition parameters, and subject population.

    Evidence Level

    1

    Technical Efficacy

    Stage 1

     
    more » « less
  4.
    Osteoarthritis (OA) is the most common form of arthritis and often occurs in the knee. While convolutional neural networks (CNNs) have been widely used to study medical images, the application of three-dimensional (3D) CNNs to knee OA diagnosis is limited. This study utilizes a 3D CNN model to analyze sequences of knee magnetic resonance (MR) images and perform knee OA classification. An advantage of a 3D CNN is the ability to analyze the whole sequence of 3D MR images as a single unit, as opposed to a traditional 2D CNN, which examines one image at a time. Therefore, 3D features can be extracted from adjacent slices, which may not be detectable from a single 2D image. The input data for each knee were a sequence of double-echo steady-state (DESS) MR images, and each knee was labeled by the Kellgren and Lawrence (KL) grade of severity at levels 0–4. In addition to the 5-category KL grade classification, we further examined a 2-category classification that distinguishes non-OA (KL ≤ 1) from OA (KL ≥ 2) knees; clinically, diagnosing a patient with knee OA is the ultimate goal of assigning a KL grade. On a dataset of 1100 knees, the 3D CNN model that classifies knees with and without OA achieved an accuracy of 86.5% on the validation set and 83.0% on the testing set. We further conducted a comparative study between MRI and X-ray. Compared with a CNN model using X-ray images trained on the same group of patients, the proposed 3D model with MR images achieved higher accuracy in both the 5-category classification (54.0% vs. 50.0%) and the 2-category classification (83.0% vs. 77.0%). These results indicate that, with a 3D CNN model, MRI has greater clinical potential to improve knee OA diagnosis accuracy than the currently used X-ray methods.
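Two details above lend themselves to a short sketch: the 2-category relabeling of KL grades, and the volumetric input a 3D CNN consumes. The slice count of 160 matches the abstract; the in-plane size and all values below are hypothetical.

```python
# Sketch of (a) the 2-category relabeling: KL grades 0-4 are collapsed into
# non-OA (KL <= 1) vs. OA (KL >= 2); and (b) stacking an MR sequence into
# one volume for a 3D CNN. KL labels and image sizes here are illustrative.
import numpy as np

kl_grades = np.array([0, 1, 2, 3, 4, 1, 2])  # toy KL labels for 7 knees
is_oa = (kl_grades >= 2).astype(int)         # 1 = OA, 0 = non-OA
print(is_oa.tolist())                        # [0, 0, 1, 1, 1, 0, 1]

# A 3D CNN sees the whole sequence at once: 160 slices stacked into a
# single (depth, height, width) volume, then given batch/channel axes
# in the common (N, C, D, H, W) layout. 64x64 is a placeholder size.
slices = [np.zeros((64, 64)) for _ in range(160)]
volume = np.stack(slices)                    # (160, 64, 64)
volume = volume[np.newaxis, np.newaxis]      # (1, 1, 160, 64, 64)
print(volume.shape)
```

This single-volume input is what lets 3D convolution kernels span adjacent slices, the advantage over slice-by-slice 2D models noted above.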
  5. Abstract Background

    Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment‐based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal‐to‐noise, contrast‐to‐noise) and segmentation accuracy.

    Purpose

    Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL‐based brain tumor segmentation accuracy toward developing more generalizable models for multi‐institutional data.

    Methods

    We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of the tumor subregions (enhancing tumor [ET], peritumoral edema, and the necrotic/non-enhancing tumor core) on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as “better” quality (BQ) or “worse” quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, and (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.
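The correlation-then-threshold step above can be sketched on synthetic numbers. The IQM values, the linear IQM-Dice relationship, and the median threshold are assumptions for illustration only.

```python
# Sketch of the quality-stratification step on synthetic data: correlate a
# per-scan IQM with per-scan Dice, then split scans into "better" (BQ) vs.
# "worse" (WQ) quality by a relative threshold (the cohort median here).
import numpy as np

rng = np.random.default_rng(2)
n_scans = 200

iqm = rng.normal(20.0, 5.0, n_scans)   # synthetic SNR-like IQM per scan
# Synthetic Dice values that loosely track the IQM (assumed relationship).
dice = np.clip(0.7 + 0.01 * (iqm - 20.0) + rng.normal(0, 0.03, n_scans), 0, 1)

r = np.corrcoef(iqm, dice)[0, 1]       # Pearson correlation, IQM vs. Dice
threshold = np.median(iqm)             # relative thresholding
bq = iqm >= threshold                  # "better" quality scans
print(f"Pearson r = {r:.2f}; BQ scans: {bq.sum()}, WQ scans: {(~bq).sum()}")
```

Training-set membership for the BQ-trained and WQ-trained models would then follow this boolean split.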

    Results

    For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, the models trained on BQ images selected by inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and the models trained on WQ images selected by the noise measurement peak signal-to-noise ratio (PSNR) yielded significantly improved tumor segmentation accuracy compared to their inverse models.

    Conclusions

    Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet‐based brain tumor segmentation performance. The selection of MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.

     