Title: EXPLOITING VISUAL AND REPORT-BASED INFORMATION FOR CHEST X-RAY ANALYSIS BY JOINTLY LEARNING VISUAL CLASSIFIERS AND TOPIC MODELS
Manual examination of chest x-rays is a time-consuming process that involves significant effort by expert radiologists. Recent work attempts to alleviate this problem by developing learning-based automated chest x-ray analysis systems that map images to multi-label diagnoses using deep neural networks. These methods are often treated as black boxes, or they output attention maps but do not explain why the attended areas are important. Given data consisting of a frontal-view x-ray, a set of natural language findings, and one or more diagnostic impressions, we propose a deep neural network model that during training simultaneously 1) constructs a topic model which clusters key terms from the findings into meaningful groups, 2) predicts the presence of each topic for a given input image based on learned visual features, and 3) uses an image's predicted topic encoding as features to predict one or more diagnoses. Since the net learns the topic model jointly with the classifier, it gives us a powerful tool for understanding which semantic concepts the net might be exploiting when making diagnoses, and since we constrain the net to predict topics based on expert-annotated reports, the net automatically encodes some higher-level expert knowledge about how to make diagnoses.
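For illustration only, below is a minimal sketch of the kind of pipeline the abstract describes, written with PyTorch and scikit-learn. It is not the authors' implementation: it approximates the joint training by fitting a topic model on the report findings up front and then supervising both the topic encoding and the diagnoses, and the backbone choice, topic count, diagnosis count, and loss weighting are all assumptions.

```python
# Illustrative sketch only (assumed backbone, topic count, and label count);
# the paper learns the topic model jointly, whereas this simplification fits
# it up front with LDA and then supervises topics and diagnoses together.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

NUM_TOPICS = 20      # assumed number of topic clusters
NUM_DIAGNOSES = 14   # assumed number of diagnosis labels

def topic_targets(findings_texts):
    """Fit a topic model on report findings; return per-report topic weights."""
    counts = CountVectorizer(stop_words="english").fit_transform(findings_texts)
    lda = LatentDirichletAllocation(n_components=NUM_TOPICS, random_state=0)
    return torch.tensor(lda.fit_transform(counts), dtype=torch.float32)

class TopicGuidedClassifier(nn.Module):
    """Image -> topic presence scores -> multi-label diagnoses."""
    def __init__(self):
        super().__init__()
        backbone = models.densenet121()  # visual feature extractor (assumed)
        backbone.classifier = nn.Linear(backbone.classifier.in_features, NUM_TOPICS)
        self.image_to_topics = backbone
        self.topics_to_diagnoses = nn.Linear(NUM_TOPICS, NUM_DIAGNOSES)

    def forward(self, x):
        topic_logits = self.image_to_topics(x)
        diag_logits = self.topics_to_diagnoses(torch.sigmoid(topic_logits))
        return topic_logits, diag_logits

def joint_loss(topic_logits, diag_logits, topic_tgt, diag_tgt, alpha=1.0):
    """Supervise the topic encoding and the diagnoses together (float targets)."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(diag_logits, diag_tgt) + alpha * bce(topic_logits, topic_tgt)
```

Because the diagnoses in this sketch are predicted only through the topic encoding, inspecting an image's per-topic scores indicates which report-derived concepts drove a prediction.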
Award ID(s):
1747778
PAR ID:
10105312
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With the spread of COVID-19, significantly more patients have required medical diagnosis to determine whether they are carriers of the virus. COVID-19 can lead to the development of pneumonia in the lungs, which can be captured in X-ray and CT scans of the patient's chest. The abundance of available X-ray and CT image data can be used to develop a high-performing computer vision model able to identify and classify instances of pneumonia present in medical scans. Predictions made by these deep learning models can increase the confidence of diagnoses by analyzing minute features present in scans exhibiting COVID-19 pneumonia that are often unnoticeable to the human eye. Furthermore, rather than teaching clinicians the mathematics behind deep learning and heat maps, we introduce novel methods of explainable artificial intelligence (XAI) with the goal of annotating instances of pneumonia in medical scans exactly as radiologists do to inform other radiologists, clinicians, and interns about patterns and findings. This project explores methods to train and optimize state-of-the-art deep learning models on COVID-19 pneumonia medical scans and to apply explainability algorithms that generate annotated explanations of model predictions useful to clinicians and radiologists analyzing these images. (An illustrative explainability sketch appears after this list.)
  2. Pneumonia is a high-mortality disease that kills 50,000 people in the United States each year. Children under the age of 5 and adults over the age of 65 are susceptible to serious cases of pneumonia. The United States spends billions of dollars fighting pneumonia-related infections every year. Early detection and intervention are crucial in treating pneumonia-related infections. Since chest x-ray is one of the simplest and cheapest methods to diagnose pneumonia, we propose a deep learning algorithm based on convolutional neural networks to identify and classify pneumonia cases from these images. For all three models implemented, we obtained varying classification results and accuracy. Based on the results, we obtained better predictions, with an average accuracy of 68% and an average specificity of 69%, in contrast to the current state-of-the-art accuracy of 51% using VGG16 (also called OxfordNet), a convolutional neural network architecture developed by the Visual Geometry Group at Oxford. By implementing more novel lung segmentation techniques, reducing overfitting, and adding more learning layers, the proposed model has the potential to predict at higher accuracy than human specialists and will help subsidize and reduce the cost of diagnosis across the globe. (A transfer-learning sketch for this setup appears after this list.)
  3. During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745–0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. To verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, where it produced accurate predictions in real time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients. (A sketch of this kind of image-plus-clinical-variables ensemble appears after this list.)
  4. Coronavirus Disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The virus transmits rapidly; it has a basic reproduction number (R0) of 2.2–2.7. In March 2020, the World Health Organization declared the COVID-19 outbreak a pandemic. COVID-19 is currently affecting more than 200 countries, with 6 million active cases. An effective testing strategy for COVID-19 is crucial to controlling the outbreak, but the demand for testing surpasses the availability of test kits that use Reverse Transcription Polymerase Chain Reaction (RT-PCR). In this paper, we present a technique to screen for COVID-19 using artificial intelligence. Our technique takes only seconds to screen for the presence of the virus in a patient. We collected a dataset of chest X-ray images and trained several popular deep convolutional neural network-based models (VGG, MobileNet, Xception, DenseNet, InceptionResNet) to classify the chest X-rays. Unsatisfied with these models, we then designed and built a Residual Attention Network that was able to screen for COVID-19 with a testing accuracy of 98% and a validation accuracy of 100%. A visualization of our model's feature maps shows the areas in a chest X-ray that are important for classification. Our work can help increase the adoption of AI-assisted applications in clinical practice. The code and dataset used in this project are available at https://github.com/vishalshar/covid-19-screening-using-RAN-on-X-ray-images. (A sketch of a residual attention block appears after this list.)
  5. Model explainability is essential for creating trustworthy machine learning models in healthcare. An ideal explanation resembles the decision-making process of a domain expert and is expressed using concepts or terminology that is meaningful to clinicians. To provide such an explanation, we first associate the hidden units of the classifier with clinically relevant concepts. We take advantage of the radiology reports accompanying the chest X-ray images to define concepts. We discover sparse associations between concepts and hidden units using a linear sparse logistic regression. To ensure that the identified units truly influence the classifier's outcome, we adopt tools from the causal inference literature and, more specifically, mediation analysis through counterfactual interventions. Finally, we construct a low-depth decision tree to translate all the discovered concepts into a straightforward decision rule, expressed to the radiologist. We evaluated our approach on a large chest X-ray dataset, where our model produces a global explanation consistent with clinical knowledge. (A sketch of the sparse-association and decision-tree steps appears after this list.)
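For item 1 above, a minimal sketch of one common explainability technique, Grad-CAM, implemented with PyTorch hooks. That project may well use different XAI methods; the backbone, target layer, class index, and input size here are assumptions for illustration only.

```python
# Illustrative Grad-CAM sketch (assumed backbone, layer, and class index);
# it highlights the image regions that most influence a chosen prediction.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, layer, image, class_idx):
    """Return a heatmap over `image` for `class_idx` from `layer` activations."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        score = model(image.unsqueeze(0))[0, class_idx]
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()
    a, g = acts[0], grads[0]                    # activations and their gradients
    weights = g.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Hypothetical usage: a ResNet-50 and a random image stand in for a trained
# pneumonia classifier and a preprocessed chest X-ray.
model = models.resnet50().eval()
heatmap = grad_cam(model, model.layer4[-1], torch.rand(3, 224, 224), class_idx=0)
```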
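For item 2 above, a minimal sketch of fine-tuning a VGG16-style network for binary pneumonia classification. The paper compares three models; the learning rate, optimizer, and single-logit head here are assumptions rather than the authors' configuration.

```python
# Illustrative VGG16 fine-tuning sketch (assumed hyperparameters); chest X-rays
# are assumed resized to 3x224x224 and labeled 0 (normal) or 1 (pneumonia).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16()                                                 # VGG16 backbone
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 1)  # binary head
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of (x-ray, pneumonia-label) pairs."""
    optimizer.zero_grad()
    loss = criterion(model(images).squeeze(1), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```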
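For item 3 above, a minimal sketch of fusing an image-based risk score with a gradient boosting model over routine clinical variables. The published system's actual fusion strategy is not given here; the simple probability average, data shapes, and model choices below are assumptions.

```python
# Illustrative fusion sketch (assumed models, shapes, and 50/50 averaging);
# an image model scores chest X-rays while a gradient boosting model scores
# routine clinical variables, and the two deterioration risks are averaged.
import torch
from sklearn.ensemble import GradientBoostingClassifier

def combined_risk(image_model, xray_batch, gbm, clinical_features):
    """Average image-based and clinical-variable-based deterioration risks."""
    with torch.no_grad():
        p_image = torch.sigmoid(image_model(xray_batch)).squeeze(1).numpy()
    p_clinical = gbm.predict_proba(clinical_features)[:, 1]
    return 0.5 * (p_image + p_clinical)

# Hypothetical usage: fit the boosting model on clinical variables, then fuse
# with a trained CNN's risk scores on held-out patients.
# gbm = GradientBoostingClassifier().fit(X_train_clinical, y_train)
# risk = combined_risk(cnn, xray_tensor, gbm, X_test_clinical)
```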
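For item 4 above, a minimal sketch of a residual attention block in the spirit of a Residual Attention Network: a soft mask branch gates the trunk features while a residual connection keeps the original signal. The channel count, mask depth, and layer choices are assumptions; the authors' released code is linked in the item itself.

```python
# Illustrative residual attention block (assumed channel sizes and mask depth);
# the mask branch produces soft spatial attention that modulates trunk features.
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(                 # main feature branch
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.mask = nn.Sequential(                  # bottom-up/top-down attention branch
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        trunk, mask = self.trunk(x), self.mask(x)
        return (1 + mask) * trunk                   # residual attention: (1 + M) * T

# Hypothetical usage on a feature map with even spatial dimensions.
block = ResidualAttentionBlock(64)
out = block(torch.rand(1, 64, 56, 56))
```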
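For item 5 above, a minimal sketch of two of the described steps: an L1-regularized logistic regression that links hidden-unit activations to a report-derived concept, and a shallow decision tree over concept scores. The causal mediation analysis is omitted, and the synthetic stand-in data, dimensionalities, and regularization strength are assumptions.

```python
# Illustrative sketch of sparse concept-unit association plus a shallow decision
# tree (synthetic stand-in data; the counterfactual mediation step is not shown).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
hidden = rng.normal(size=(500, 1024))               # hidden-unit activations per image
concept = (hidden[:, 10] > 0).astype(int)           # stand-in "concept present" labels
diagnosis = (hidden[:, 10] + hidden[:, 42] > 0).astype(int)

# 1) Sparse association: which hidden units predict the concept?
assoc = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
assoc.fit(hidden, concept)
concept_units = np.flatnonzero(assoc.coef_[0])      # units linked to the concept

# 2) Shallow decision tree over concept scores -> human-readable rule.
concept_scores = assoc.decision_function(hidden).reshape(-1, 1)
tree = DecisionTreeClassifier(max_depth=2).fit(concept_scores, diagnosis)
print(export_text(tree, feature_names=["concept_score"]))
```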