
Title: Generating Explanations for Chest Medical Scan Pneumonia Predictions
With the spread of COVID-19, significantly more patients have required medical diagnosis to determine whether they are carriers of the virus. COVID-19 can lead to the development of pneumonia in the lungs, which can be captured in X-ray and CT scans of the patient's chest. The abundance of available X-ray and CT image data can be used to develop a high-performing computer vision model able to identify and classify instances of pneumonia present in medical scans. Predictions made by these deep learning models can increase the confidence of diagnoses by analyzing minute features in scans exhibiting COVID-19 pneumonia, features often unnoticeable to the human eye. Furthermore, rather than teaching clinicians about the mathematics behind deep learning and heat maps, we introduce novel methods of explainable artificial intelligence (XAI) with the goal of annotating instances of pneumonia in medical scans exactly as radiologists do, to inform other radiologists, clinicians, and interns about patterns and findings. This project explores methods to train and optimize state-of-the-art deep learning models on COVID-19 pneumonia medical scans and applies explainability algorithms to generate annotated explanations of model predictions that are useful to clinicians and radiologists in analyzing these images.
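As a toy illustration of the general idea of turning a model's attention into a radiologist-style annotation (this is not the authors' method; the heat map, threshold, and box format here are assumptions for the sketch), the snippet below thresholds a hypothetical per-pixel relevance map and extracts the bounding box of the high-relevance region:

```python
import numpy as np

def heatmap_to_box(heatmap, threshold=0.5):
    """Convert a per-pixel relevance map into a bounding-box annotation.

    heatmap: 2-D array of relevance scores in [0, 1].
    Returns (row_min, row_max, col_min, col_max) of the region whose
    relevance meets `threshold`, or None if no pixel does.
    """
    mask = heatmap >= threshold
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return int(rows[0]), int(rows[-1]), int(cols[0]), int(cols[-1])

# Toy 6x6 "attention map" with a hot 2x2 patch.
hm = np.zeros((6, 6))
hm[2:4, 3:5] = 0.9
print(heatmap_to_box(hm))  # (2, 3, 3, 4)
```

In practice the relevance map would come from an attribution method such as Grad-CAM, and the box would be overlaid on the scan for the clinician.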
Journal Name: COVID Information Commons
Sponsoring Org: National Science Foundation
More Like this
  1. The newly discovered Coronavirus Disease 2019 (COVID-19) has been spreading globally and causing hundreds of thousands of deaths around the world since its first emergence in late 2019. The rapid outbreak of this disease has overwhelmed health care infrastructures and raised the need to allocate medical equipment and resources more efficiently. Early diagnosis of this disease enables the rapid separation of COVID-19 and non-COVID cases, helping health care authorities optimize resource allocation plans and prevent the disease early. In this regard, a growing number of studies are investigating the capability of deep learning for early diagnosis of COVID-19. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly developed based on Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism or rely on a sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage fully automated CT-based framework for identification of COVID-19 positive cases, referred to as "COVID-FACT". COVID-FACT utilizes Capsule Networks as its main building blocks and is, therefore, capable of capturing spatial information.
In particular, to make the proposed COVID-FACT independent from sophisticated segmentations of the area of infection, slices demonstrating infection are detected at the first stage, and the second stage is responsible for classifying patients into COVID and non-COVID cases. COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community-acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation in comparison to its counterparts.
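The "simple thresholding mechanism" that this abstract contrasts COVID-FACT against can be sketched as follows. The function below is a baseline illustration, not COVID-FACT itself, and the threshold values are assumptions chosen for the example, not figures from the paper:

```python
def patient_label(slice_probs, slice_thresh=0.5, min_fraction=0.2):
    """Baseline slice-to-patient aggregation by simple thresholding.

    A patient is flagged positive when the fraction of CT slices whose
    predicted infection probability reaches `slice_thresh` is at least
    `min_fraction`. Both thresholds are illustrative assumptions.
    """
    positive = sum(p >= slice_thresh for p in slice_probs)
    return positive / len(slice_probs) >= min_fraction

print(patient_label([0.1, 0.2, 0.8, 0.9, 0.7]))    # True  (3/5 slices positive)
print(patient_label([0.1, 0.2, 0.3, 0.4, 0.45]))   # False (0/5 slices positive)
```

COVID-FACT replaces this heuristic with a learned second-stage classifier over the infected slices.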
  2. Abstract

    During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745–0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.

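The AUC reported above has a useful probabilistic reading: it is the probability that a randomly chosen deteriorating patient receives a higher risk score than a randomly chosen non-deteriorating one. A minimal sketch of that rank-statistic (Mann-Whitney) formulation, on toy data rather than the paper's cohort:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-statistic formulation:
    the fraction of (positive, negative) pairs in which the positive
    case outranks the negative one; ties count as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores: higher score should mean higher deterioration risk.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 0, 1]))  # 0.5 (chance-level ranking)
```

An AUC of 0.786 thus means the system ranks a deteriorating patient above a stable one about 79% of the time.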
  3. Computer vision techniques have always played a salient role in numerous medical fields, especially in image diagnosis. Amidst a global pandemic, one of the archetypal methods assisting healthcare professionals in diagnosing various types of lung cancers, heart diseases, and COVID-19 infection is the Computed Tomography (CT) medical imaging technique. Segmenting the lung and infection with high accuracy in COVID-19 CT scans can play a vital role in the prognosis and diagnosis of a mass population of infected patients. Most existing works are predominantly based on large private data sets that are practically impossible to obtain during a pandemic. Moreover, it is difficult to compare segmentation methods, as the data sets are obtained in various geographical areas and the methods are developed and implemented in different environments. To help the current global pandemic situation, we propose a highly data-efficient method trained on 20 expert-annotated COVID-19 cases. To increase efficiency further, the proposed model has been implemented on the NVIDIA Jetson Nano (System-on-Chip) to fully exploit GPU performance for a medical machine learning module. To compare the results, we tested performance against a conventional U-Net architecture and calculated the performance metrics. The proposed state-of-the-art method proves better than the conventional architecture, delivering a Dice Similarity Coefficient of 99%.
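The Dice Similarity Coefficient quoted above measures overlap between a predicted segmentation mask and the expert annotation: 2|A ∩ B| / (|A| + |B|), where 1.0 is perfect overlap. A minimal sketch on toy binary masks (the masks here are illustrative, not data from the study):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    `eps` guards against division by zero for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Predicted mask overlaps the target in 2 of its 3 positive pixels.
a = np.array([[1, 1, 0], [0, 1, 0]])  # prediction
b = np.array([[1, 0, 0], [0, 1, 0]])  # expert annotation
print(round(float(dice(a, b)), 3))  # 0.8
```

A 99% Dice score therefore indicates near-pixel-perfect agreement with the expert annotations.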
  4. X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful method for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and dice loss also performed well, with an accuracy of 0.91 and 0.93 for Mancos and Marcellus, respectively.
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
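The filter-features-then-classify pipeline described above can be sketched end to end on a synthetic image. The sketch below is a deliberately minimal stand-in: it uses a single box filter as the "filter bank" and 2-cluster K-means as the classifier (one of the methods the study evaluates); the filter choice, image, and initialization are assumptions for illustration:

```python
import numpy as np

def box_filter(img, k=3):
    """Local-mean feature via a k x k box filter with zero padding."""
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for dr in range(-pad, pad + 1):
        for dc in range(-pad, pad + 1):
            out += padded[pad + dr: pad + dr + img.shape[0],
                          pad + dc: pad + dc + img.shape[1]]
    return out / (k * k)

def kmeans_segment(img, iters=10):
    """Per-pixel phase segmentation: stack raw intensity and the local
    mean as features, then run 2-cluster K-means over all pixels."""
    feats = np.stack([img.ravel(), box_filter(img).ravel()], axis=1)
    # Initialize the two centers at the darkest and brightest pixels.
    centers = feats[[feats[:, 0].argmin(), feats[:, 0].argmax()]]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(2):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(axis=0)
    return labels.reshape(img.shape)

# Synthetic two-phase "CT slice": dark matrix with a bright inclusion.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
seg = kmeans_segment(img)
print(seg[4, 4] != seg[0, 0])  # True: the two phases get different labels
```

The study's actual pipeline extends this pattern with richer filters, VGG16 activations as additional feature channels, and stronger classifiers such as Random Forest.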
  5. Swarm Intelligence (SI) is a biological phenomenon in which groups of organisms amplify their combined intelligence by forming real-time systems. It has been studied for decades in fish schools, bird flocks, and bee swarms. Recent advances in networking and AI technologies have enabled distributed human groups to form closed-loop systems modeled after natural swarms. The process is referred to as Artificial Swarm Intelligence (ASI) and has been shown to significantly amplify group intelligence. The present research applies ASI technology to the field of medicine, exploring whether small groups of networked radiologists can improve their diagnostic accuracy when reviewing chest X-rays for the presence of pneumonia by "thinking together" as an ASI system. Data was collected for individual diagnoses as well as for diagnoses made by the group working as a real-time ASI system. Diagnoses were also collected using a state-of-the-art deep learning system developed by Stanford University School of Medicine. Results showed that a small group of networked radiologists, when working as a real-time closed-loop ASI system, was significantly more accurate than the individuals on their own, reducing errors by 33%, as well as significantly more accurate (22%) than a state-of-the-art software-only solution using deep learning.