This content will become publicly available on November 1, 2025

Title: Unaligned Hip Radiograph Assessment Utilizing Convolutional Neural Networks for the Assessment of Developmental Dysplasia of the Hip
Abstract: Developmental dysplasia of the hip (DDH) is a condition in which the acetabular socket inadequately contains the femoral head (FH). If left untreated, DDH can result in degenerative changes in the hip joint. Several imaging techniques are used for DDH assessment. In radiographs, the acetabular index (ACIN), center-edge angle, Sharp's angle (SA), and migration percentage (MP) are used to assess DDH. Determining these metrics is time-consuming and repetitive. This study uses a convolutional neural network (CNN) to identify radiographic measurements and improve traditional methods of identifying DDH. The dataset consisted of 60 subject radiographs, each rotated 25 times about the craniocaudal and mediolateral axes, generating 1500 images. A CNN detection algorithm was used to identify key radiographic metrics for the diagnosis of DDH. The algorithm detected the metrics with reasonable accuracy compared with the manually computed values. The CNN performed well on images with high-contrast margins between bone and soft tissue. It was unable, however, to identify some critical points for metric calculation on a few images with poor definition due to low contrast between bone and soft tissue. This study shows that CNNs can efficiently measure clinical parameters to assess DDH on radiographs with high-contrast margins between bone and soft tissue, even when the image is purposefully rotated away from an ideal view. Results from this study could help inform and broaden the existing body of knowledge on using CNNs for radiographic measurement and medical condition prediction.
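The four radiographic metrics above are geometric quantities derived from landmark points, so a CNN that localizes those points needs only a small post-processing step to report clinical values. As a rough illustration only (not the paper's code), the sketch below computes two of the metrics, the center-edge angle and the migration percentage, from hypothetical 2D landmark coordinates; the landmark names, the image coordinate convention, and the laterality assumption are mine.

import numpy as np

def center_edge_angle(head_center, lateral_acetabular_edge):
    """Angle (degrees) between the vertical through the femoral head
    center and the line to the lateral acetabular edge (simplified)."""
    v = np.asarray(lateral_acetabular_edge, float) - np.asarray(head_center, float)
    vertical = np.array([0.0, -1.0])  # "up" when image y increases downward
    cos_a = np.dot(v, vertical) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def migration_percentage(head_medial_x, head_lateral_x, perkins_line_x):
    """Reimers migration percentage: share of femoral head width lying
    lateral to Perkins' line (assumes lateral = larger x)."""
    width = head_lateral_x - head_medial_x
    uncovered = max(0.0, head_lateral_x - perkins_line_x)
    return 100.0 * uncovered / width

# Toy pixel coordinates, purely illustrative:
print(center_edge_angle((250, 300), (280, 260)))  # about 36.9 degrees
print(migration_percentage(210, 290, 275))        # about 18.8 percent

The acetabular index and Sharp's angle follow the same pattern, each being an angle between a reference line on the pelvis (Hilgenreiner's line or the inter-teardrop line) and a line drawn to the lateral acetabular edge.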
Award ID(s):
2238859
PAR ID:
10506982
Author(s) / Creator(s):
Publisher / Repository:
American Society of Mechanical Engineers Digital Collection
Date Published:
Journal Name:
Journal of Engineering and Science in Medical Diagnostics and Therapy
Volume:
7
Issue:
4
ISSN:
2572-7958
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Reconstructing the behavior of extinct species is challenging, particularly for those with no living analogues. However, damage preserved as paleopathologies on bone can record how an animal moved in life, potentially reflecting behavioral patterns. Here, we assess hypothesized etiologies of pathology in a pelvis and associated right femur of a Smilodon fatalis saber-toothed cat, one of the best-studied species from the Pleistocene-age Rancho La Brea asphalt seeps, California, USA, using visualization by computed tomography (CT). The pelvis exhibits massive destruction of the right hip socket that was interpreted, for nearly a century, to have developed from trauma and infection. CT imaging reveals instead that the pathological distortions characterize chronic remodeling that began at birth and led to degeneration of the joint over the animal's life. These results suggest that this individual suffered from hip dysplasia, a congenital condition common in domestic dogs and cats. This individual reached adulthood but could not have hunted properly nor defended territory on its own, likely relying on a social group for feeding and protection. While extant social felids are rare, these fossils and others with similar pathologies are consistent with a spectrum of social strategies in Smilodon supported by a predominance of previous studies.
  2. Abstract: Background: We aimed to determine whether composite structural measures of knee osteoarthritis (KOA) progression on magnetic resonance (MR) imaging can predict the radiographic onset of accelerated knee osteoarthritis. Methods: We used data from a nested case-control study among participants from the Osteoarthritis Initiative without radiographic KOA at baseline. Participants were separated into three groups based on radiographic disease progression over 4 years: 1) accelerated (Kellgren-Lawrence grades [KL] 0/1 to 3/4), 2) typical (increase in KL, excluding accelerated osteoarthritis), or 3) no KOA (no change in KL). We assessed tibiofemoral cartilage damage (four regions: medial/lateral tibia/femur), bone marrow lesion (BML) volume (four regions: medial/lateral tibia/femur), and whole-knee effusion-synovitis volume on 3 T MR images with semi-automated programs. We calculated two MR-based composite scores. Cumulative damage was the sum of standardized cartilage damage. Disease activity was the sum of standardized volumes of effusion-synovitis and BMLs. We focused on annual images from 2 years before to 2 years after radiographic onset (or a matched time for those without knee osteoarthritis). To determine between-group differences in the composite metrics at all time points, we used generalized linear mixed models with group (3 levels) and time (up to 5 levels). For our prognostic analysis, we used multinomial logistic regression models to determine whether one-year worsening in each composite metric was associated with future accelerated knee osteoarthritis (odds ratios [OR] based on units of 1 standard deviation of change). Results: Prior to disease onset, the accelerated KOA group had greater average disease activity than the typical and no-KOA groups, and this persisted up to 2 years after disease onset. During the pre-radiographic disease period, the odds of developing accelerated KOA were greater in people with worsening disease activity [versus typical KOA, OR (95% confidence interval [CI]): 1.58 (1.08 to 2.33); versus no KOA: 2.39 (1.55 to 3.71)] or cumulative damage [versus typical KOA: 1.69 (1.14 to 2.51); versus no KOA: 2.11 (1.41 to 3.16)]. Conclusions: MR-based disease activity and cumulative damage metrics may be prognostic markers to help identify people at risk for accelerated onset and progression of knee osteoarthritis.
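As a hedged sketch of the two composite scores described above (not the study's code), each regional measure can be standardized across participants and then summed; the column names and values below are hypothetical, and the four BML regions are collapsed into a single volume for brevity.

import pandas as pd

# Hypothetical per-participant MR measurements.
df = pd.DataFrame({
    "cart_med_tibia":  [0.10, 0.45, 0.20],
    "cart_lat_tibia":  [0.05, 0.30, 0.15],
    "cart_med_femur":  [0.20, 0.60, 0.25],
    "cart_lat_femur":  [0.10, 0.40, 0.20],
    "bml_volume":      [150.0, 900.0, 300.0],
    "effusion_volume": [800.0, 2500.0, 1200.0],
})

def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

cartilage_cols = ["cart_med_tibia", "cart_lat_tibia",
                  "cart_med_femur", "cart_lat_femur"]
# Cumulative damage: sum of standardized cartilage damage scores.
df["cumulative_damage"] = sum(zscore(df[c]) for c in cartilage_cols)
# Disease activity: sum of standardized effusion-synovitis and BML volumes.
df["disease_activity"] = zscore(df["bml_volume"]) + zscore(df["effusion_volume"])
print(df[["cumulative_damage", "disease_activity"]])

One-year change in each composite would then feed a multinomial logistic regression (e.g., scikit-learn's LogisticRegression on the three-level outcome) to estimate the odds of accelerated versus typical versus no KOA.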
  3. In the medical field, three-dimensional (3D) imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used. 3D MRI is a non-invasive method for studying the soft-tissue structures of the knee joint in osteoarthritis studies. It can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesions, and the meniscus by identifying the bone structure first. U-net is a convolutional neural network originally designed to segment biological images with limited training data. The input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures in 3D MRI, which is a sequence of 2D slices. A fully automatic model is proposed to detect and segment the knee bones. The proposed model was trained, tested, and validated using 99 knee MRI cases, where each case consists of 160 2D slices for a single knee scan. To evaluate the model's performance, the similarity, Dice coefficient (DICE), and area error metrics were calculated. Separate models were trained for individual knee bone components (tibia, femur, and patella), along with a combined model for segmenting all the knee bones. Using the whole MRI sequence (160 slices), the method first detects the beginning and ending bone slices and then segments the bone structures in all the slices in between. On the testing set, the detection model achieved 98.79% accuracy, and the segmentation model achieved a DICE of 96.94% and a similarity of 93.98%. The proposed method outperforms several state-of-the-art methods on the same dataset, exceeding U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34% in DICE score.
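For reference, the Dice coefficient (DICE) reported above is a simple overlap measure between a predicted mask and a manual segmentation. A minimal sketch follows, assuming binary masks (e.g., one 160-slice knee volume per case); the threshold and toy data are illustrative.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DICE = 2*|A intersect B| / (|A| + |B|) for binary masks of matching shape."""
    pred = (np.asarray(pred) > 0.5).astype(np.float64)
    target = (np.asarray(target) > 0.5).astype(np.float64)
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volume: corrupt one slice of an otherwise perfect prediction.
rng = np.random.default_rng(0)
truth = rng.random((4, 64, 64)) > 0.7
pred = truth.copy()
pred[0] = ~pred[0]
print(f"DICE = {dice_coefficient(pred, truth):.3f}")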
  4. Abstract: AI-based algorithms are emerging in many meteorological applications that produce imagery as output, including global weather forecasting models. However, the imagery produced by AI algorithms, especially by convolutional neural networks (CNNs), is often described as too blurry to look realistic, partly because CNNs tend to represent uncertainty as blurriness. This blurriness can be undesirable since it might obscure important meteorological features. More complex AI models, such as generative AI models, produce images that appear to be sharper. However, improved sharpness may come at the expense of a decline in other performance criteria, such as standard forecast verification metrics. To navigate any trade-off between sharpness and other performance metrics, it is important to quantitatively assess those other metrics along with sharpness. While there is a rich set of forecast verification metrics available for meteorological images, none of them focus on sharpness. This paper seeks to fill this gap by 1) exploring a variety of sharpness metrics from other fields, 2) evaluating properties of these metrics, 3) proposing the new concept of Gaussian Blur Equivalence as a tool for their uniform interpretation, and 4) demonstrating their use for sample meteorological applications, including a CNN that emulates radar imagery from satellite imagery (GREMLIN) and an AI-based global weather forecasting model (GraphCast).
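The paper's metric definitions are not reproduced here, but the underlying idea can be sketched: score sharpness with an image statistic, then express a test image's sharpness as the Gaussian blur level that, applied to a reference image, gives the same score. In the toy sketch below, the metric (mean gradient magnitude) and the matching procedure are my assumptions, not necessarily the paper's definitions.

import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness(img):
    """Mean gradient magnitude: larger means sharper."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def gaussian_blur_equivalent(test_img, reference_img,
                             sigmas=np.linspace(0.1, 5.0, 50)):
    """Blur level of the reference whose sharpness best matches the test image."""
    target = sharpness(test_img)
    errors = [abs(sharpness(gaussian_filter(reference_img, s)) - target)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Toy demo: a blurry "forecast" of a random reference field.
rng = np.random.default_rng(1)
reference = rng.random((128, 128))
forecast = gaussian_filter(reference, sigma=2.0)
print(gaussian_blur_equivalent(forecast, reference))  # close to 2.0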
  5. Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets. These models achieve the current state of the art, but they have difficulties generalizing when applied to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the "Gain") of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive denoising in a scientific application, in which a CNN is trained on synthetic data, and tested on real transmission-electron-microscope images. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
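A structural sketch of the GainTuning idea, under stated assumptions: the tiny network and the masked self-supervised loss below merely stand in for the paper's pre-trained denoisers and adaptation objective. Only the per-channel gains are optimized on the single test image; the pre-trained convolution weights stay frozen.

import torch
import torch.nn as nn

class GainedConv2d(nn.Module):
    """A frozen conv layer followed by one learnable gain per output channel."""
    def __init__(self, conv):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad_(False)                  # freeze pre-trained weights
        self.gain = nn.Parameter(torch.ones(conv.out_channels, 1, 1))

    def forward(self, x):
        return self.gain * self.conv(x)

def add_gains(module):
    """Recursively wrap every Conv2d with a per-channel gain."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, GainedConv2d(child))
        else:
            add_gains(child)
    return module

# Stand-in for a denoiser pre-trained on a large dataset.
denoiser = add_gains(nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
))

noisy = torch.randn(1, 1, 64, 64)                    # the single noisy test image
gains = [p for p in denoiser.parameters() if p.requires_grad]
opt = torch.optim.Adam(gains, lr=1e-2)

for step in range(50):
    mask = (torch.rand_like(noisy) < 0.05).float()   # hold out ~5% of pixels
    loss = ((denoiser(noisy * (1 - mask)) - noisy) ** 2 * mask).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    denoised = denoiser(noisy)                       # adapted prediction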