Title: Ability of artificial intelligence to identify self-reported race in chest x-ray using pixel intensity counts
Purpose: Prior studies have shown convolutional neural networks predicting self-reported race from x-rays of the chest, hand, and spine, from chest computed tomography, and from mammograms. We seek to understand the mechanism that reveals race within x-ray images, investigating the possibility that race is not predicted from the physical structure in x-ray images but is instead embedded in the grayscale pixel intensities. Approach: In this retrospective study, 298,827 AP/PA chest x-ray images from full-year 2021 at three academic health centers across the United States, together with MIMIC-CXR, labeled by self-reported race, were used. Image structure is removed by counting the occurrences of each grayscale value and scaling the counts to percent per image (PPI). The resulting data are tested using multivariate analysis of variance (MANOVA) with Bonferroni multiple-comparison adjustment and a class-balanced MANOVA. Machine learning (ML) feed-forward networks (FFNs) and decision trees were built to predict race (binary Black vs. White and binary Black vs. other) using only the grayscale value counts. Stratified analyses by body mass index, age, sex, gender, patient type, scanner make/model, exposure, and kilovoltage peak setting were run, following the same methodology, to study the impact of these factors on race prediction. Results: MANOVA rejects the null hypothesis that the classes are the same with 95% confidence (F = 7.38, P < 0.0001), as does the balanced MANOVA (F = 2.02, P < 0.0001). The best FFN performance is limited [area under the receiver operating characteristic curve (AUROC) of 69.18%]. Gradient-boosted trees predict self-reported race from grayscale PPI alone (AUROC 77.24%). Conclusions: Within chest x-rays, pixel intensity value counts alone are statistically significant indicators of patient self-reported race and are sufficient for ML classification tasks.
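The structure-removal step described in the Approach — counting each grayscale value and scaling to percent per image — can be sketched as follows. This is a minimal NumPy illustration; the 8-bit depth and the function name are assumptions, not details taken from the paper:

```python
import numpy as np

def grayscale_ppi(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Collapse an image to percent-per-image (PPI) grayscale counts.

    All spatial structure is discarded: only the number of pixels at
    each intensity level is kept, scaled so the values sum to 100.
    """
    counts = np.bincount(image.ravel(), minlength=levels).astype(float)
    return 100.0 * counts / counts.sum()

# Example: a synthetic 8-bit "x-ray"; the PPI vector has one entry
# per grayscale level and sums to 100 regardless of image size.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
ppi = grayscale_ppi(img)
```

The resulting 256-element PPI vector is what the MANOVA tests and the FFN/tree classifiers would consume in place of the image itself.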
Award ID(s):
1928481
PAR ID:
10488056
Publisher / Repository:
Society of Photo-Optical Instrumentation Engineers (SPIE)
Date Published:
Journal Name:
Journal of Medical Imaging
Volume:
10
Issue:
06
ISSN:
2329-4302
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Coronavirus Disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The virus transmits rapidly, with a basic reproduction number (R0) of 2.2-2.7. In March 2020, the World Health Organization declared the COVID-19 outbreak a pandemic. COVID-19 is currently affecting more than 200 countries, with 6M active cases. An effective testing strategy for COVID-19 is crucial to controlling the outbreak, but the demand for testing surpasses the availability of test kits that use reverse transcription polymerase chain reaction (RT-PCR). In this paper, we present a technique to screen for COVID-19 using artificial intelligence. Our technique takes only seconds to screen for the presence of the virus in a patient. We collected a dataset of chest X-ray images and trained several popular deep convolutional neural network-based models (VGG, MobileNet, Xception, DenseNet, InceptionResNet) to classify the chest X-rays. Unsatisfied with these models, we then designed and built a Residual Attention Network that was able to screen for COVID-19 with a testing accuracy of 98% and a validation accuracy of 100%. A feature-map visualization of our model shows areas in a chest X-ray that are important for classification. Our work can help to increase the adoption of AI-assisted applications in clinical practice. The code and dataset used in this project are available at https://github.com/vishalshar/covid-19-screening-using-RAN-on-X-ray-images.
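The core residual attention idea this model builds on — modulating feature maps with a learned soft mask while preserving an identity path — can be sketched in NumPy. This is a simplified illustration of the mechanism, not the authors' architecture; the shapes and names are assumptions:

```python
import numpy as np

def residual_attention(features: np.ndarray, mask_logits: np.ndarray) -> np.ndarray:
    """Residual attention: output = (1 + M) * F.

    A soft mask M in (0, 1) is produced from logits via a sigmoid; the
    identity term keeps signal (and gradients) flowing even where the
    mask is near zero, so attention can emphasize without erasing.
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid soft mask
    return (1.0 + mask) * features

# Toy feature map: batch of 1, 4 channels, 8x8 spatial grid.
feats = np.ones((1, 4, 8, 8))
logits = np.zeros((1, 4, 8, 8))          # sigmoid(0) = 0.5 everywhere
out = residual_attention(feats, logits)  # every value becomes 1.5
```

In a trained network the mask logits come from a separate branch of the model, so salient regions are amplified relative to background.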
  2. Abstract Optical chiral imaging, an important tool in chemical and biological analysis, has recently undergone a revolution with the development of chiral metamaterials and metasurfaces. However, existing chiral imaging approaches based on metamaterials or metasurfaces can only display binary images with 1-bit pixel depth, i.e., pixels that are either black or white. Here, unique chiral grayscale imaging based on plasmonic metasurfaces of stepped V-shaped nanoapertures is reported, with both high circular dichroism and large polarization linearity in transmission. By interlacing two subarrays of chiral nanoaperture enantiomers into one metasurface, two specific linear polarization profiles are independently generated in transmission under different incident handedness, which can then be converted into two distinct intensity profiles, demonstrating spin-controlled grayscale images with 8-bit pixel depth. The proposed chiral grayscale imaging approach, with subwavelength spatial resolution and high data density, provides a versatile platform for many future applications in image encryption and decryption, dynamic display, advanced chiroptical sensing, and optical information processing.
  3. Purpose: Few studies have explored concrete methods for improving model fairness in the radiology domain. Our proposed AI model utilizes supervised contrastive learning to minimize bias in chest x-ray (CXR) diagnosis. Materials and Methods: In this retrospective study, we evaluated our proposed method on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset, with 77,887 CXR images from 27,796 patients collected as of April 20, 2023 for COVID-19 diagnosis, and the NIH Chest X-ray (NIH-CXR) dataset, with 112,120 CXR images from 30,805 patients collected between 1992 and 2015. In the NIH-CXR dataset, thoracic abnormalities include atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. Our proposed method utilizes supervised contrastive learning with carefully selected positive and negative samples to generate fair image embeddings, which are fine-tuned for subsequent tasks to reduce bias in CXR diagnosis. We evaluated the methods using the marginal AUC difference (δ mAUC). Results: The proposed model showed a significant decrease in bias across all subgroups when compared to the baseline models, as evidenced by a paired t-test (p < 0.0001). The δ mAUC values obtained by our method were 0.0116 (95% CI, 0.0110-0.0123), 0.2102 (95% CI, 0.2087-0.2118), and 0.1000 (95% CI, 0.0988-0.1011) for sex, race, and age on MIDRC, and 0.0090 (95% CI, 0.0082-0.0097) for sex and 0.0512 (95% CI, 0.0512-0.0532) for age on NIH-CXR. Conclusion: Employing supervised contrastive learning can mitigate bias in CXR diagnosis, addressing concerns about the fairness and reliability of deep learning-based diagnostic methods.
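The supervised contrastive objective underlying this approach can be sketched in NumPy: for each anchor, samples sharing its label are positives, and the loss pulls positives together in embedding space while pushing other samples apart. This is a generic SupCon-style sketch under our own assumptions, not the paper's fairness-aware sample selection:

```python
import numpy as np

def supcon_loss(embeddings: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss over L2-normalized embeddings.

    For anchor i with positives P(i) (same label, excluding i):
      L_i = -(1/|P(i)|) * sum_{p in P(i)} [ sim(i,p) - logsumexp_{a != i} sim(i,a) ]
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                    # temperature-scaled cosine sims
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)      # exclude the anchor itself
    losses = []
    for i in range(n):
        pos = not_self[i] & (labels == labels[i])
        if not pos.any():
            continue                       # anchor with no positive pair
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        losses.append(-np.mean(sim[i][pos] - log_denom))
    return float(np.mean(losses))

# Tight same-class clusters typically yield a much lower loss than
# randomly scattered embeddings with the same labels.
rng = np.random.default_rng(1)
labels = np.array([0, 0, 1, 1])
clustered = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
loss_good = supcon_loss(clustered, labels)
loss_bad = supcon_loss(rng.normal(size=(4, 2)), labels)
```

In the paper's setting, the choice of positives and negatives across demographic subgroups is what steers the embeddings toward fairness; the loss itself has this standard form.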
  4. Abstract Purpose. To investigate the relationship between spatial parotid dose and the risk of xerostomia in patients undergoing head-and-neck cancer radiotherapy, using machine learning (ML) methods. Methods. Prior to conducting voxel-based ML analysis of the spatial dose, two steps were taken: (1) the parotid dose was standardized through deformable image registration to a reference patient; (2) bilateral parotid doses were regrouped into contralateral and ipsilateral portions depending on their proximity to the gross tumor target. Individual dose voxels were input into six commonly used ML models, tuned with ten-fold cross-validation: random forest (RF), ridge regression (RR), support vector machine (SVM), extra trees (ET), k-nearest neighbor (kNN), and naïve Bayes (NB). Binary endpoints from 240 patients were used for model training and validation: 0 (N = 119) for xerostomia grades 0 or 1, and 1 (N = 121) for grades 2 or higher. Model performance was evaluated using multiple metrics, including accuracy, F1 score, area under the receiver operating characteristic curve (auROC), and area under the precision-recall curve (auPRC). Dose voxel importance was assessed to identify local dose patterns associated with xerostomia risk. Results. Four models (RF, SVM, ET, and NB) yielded average auROCs and auPRCs greater than 0.60 in ten-fold cross-validation on the training data, except for a lower auROC from NB. The first three models, along with kNN, demonstrated higher accuracy and F1 scores. A bootstrapping analysis confirmed test uncertainty. Voxel importance analysis from kNN indicated that the posterior portion of the ipsilateral gland was more predictive of xerostomia, but no clear patterns were identified from the other models. Conclusion. Voxel doses as predictors of xerostomia were confirmed with some ML classifiers, but no clear regional patterns could be established among these classifiers, except for kNN. Further research with a larger patient dataset is needed to identify conclusive patterns.
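The voxel-as-feature setup used here — each patient's registered dose voxels form one feature vector, classified by endpoint — can be sketched with a toy kNN classifier in NumPy. The patient counts, voxel counts, and the "high-dose region" below are synthetic assumptions for illustration only:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Classify each test vector by majority vote of its k nearest
    training vectors (Euclidean distance over voxel-dose features)."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic example: 20 "patients", 50 registered dose voxels each;
# class 1 gets systematically higher dose in the last 10 voxels,
# standing in for a predictive sub-region of the gland.
rng = np.random.default_rng(0)
X = rng.normal(50.0, 5.0, size=(20, 50))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 40:] += 20.0                 # raise dose in the toy region
preds = knn_predict(X, y, X, k=3)
```

With real data, the dose standardization via deformable registration is what makes voxel i comparable across patients, so a per-voxel importance analysis becomes meaningful.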
  5. Manual examination of chest x-rays is a time-consuming process that involves significant effort by expert radiologists. Recent work attempts to alleviate this problem by developing learning-based automated chest x-ray analysis systems that map images to multi-label diagnoses using deep neural networks. These methods are often treated as black boxes, or they output attention maps without explaining why the attended areas are important. Given data consisting of a frontal-view x-ray, a set of natural-language findings, and one or more diagnostic impressions, we propose a deep neural network model that during training simultaneously (1) constructs a topic model which clusters key terms from the findings into meaningful groups, (2) predicts the presence of each topic for a given input image based on learned visual features, and (3) uses an image's predicted topic encoding as features to predict one or more diagnoses. Since the network learns the topic model jointly with the classifier, it gives us a powerful tool for understanding which semantic concepts the network might be exploiting when making diagnoses, and since we constrain the network to predict topics based on expert-annotated reports, it automatically encodes some higher-level expert knowledge about how to make diagnoses.
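The third stage described above — using an image's predicted topic encoding as features for diagnosis — amounts to a multi-label classifier over topic probabilities. A minimal NumPy sketch follows; the sizes, weights, and sigmoid head are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def diagnose_from_topics(topic_probs: np.ndarray,
                         weights: np.ndarray,
                         bias: np.ndarray,
                         threshold: float = 0.5) -> np.ndarray:
    """Multi-label diagnosis head: sigmoid(topic_probs @ W + b) > threshold.

    Because the inputs are topic probabilities (one per cluster of key
    terms from the findings), each weight links an interpretable topic
    to a diagnosis, which is what makes the model inspectable.
    """
    logits = topic_probs @ weights + bias
    scores = 1.0 / (1.0 + np.exp(-logits))
    return (scores > threshold).astype(int)

# Toy setup: 5 topics -> 3 diagnoses; topic 0 strongly indicates
# diagnosis 0, all other links are zero.
W = np.zeros((5, 3))
W[0, 0] = 8.0
b = np.full(3, -4.0)
topics = np.array([[0.9, 0.02, 0.03, 0.03, 0.02]])  # topic 0 dominant
pred = diagnose_from_topics(topics, W, b)
```

In the proposed model the topic probabilities and these classifier weights are learned jointly end-to-end, but the interpretability argument rests on exactly this topic-to-diagnosis weight structure.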