Title: CCS-GAN: COVID-19 CT Scan Generation and Classification with Very Few Positive Training Images
We present a novel algorithm that generates deep synthetic COVID-19 pneumonia CT scan slices using a very small sample of positive training images in tandem with a larger number of normal images. This generative algorithm produces images of sufficient fidelity to enable a DNN classifier to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which to the best of our knowledge is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19-positive images for training. Algorithms that can learn to screen for diseases using few examples are an important area of research. Furthermore, algorithms that produce deep synthetic images from smaller data volumes have the added benefit of reducing the barriers to data sharing between healthcare institutions. We present the cycle-consistent segmentation-generative adversarial network (CCS-GAN). CCS-GAN combines style transfer with pulmonary segmentation and relevant transfer learning from negative images in order to create a larger volume of synthetic positive images for the purpose of improving diagnostic classification performance. A VGG-19 classifier combined with CCS-GAN was trained using small samples of positive image slices ranging from at most 50 down to as few as 10 COVID-19-positive CT scan slices. CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the barrier of acquiring large training volumes needed to train a diagnostic classifier for COVID-19.
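As a rough illustration of the final classification step in the abstract above, the minimal sketch below fine-tunes a VGG-19 binary classifier on a handful of real positive slices augmented with GAN-generated synthetic positives. It assumes a standard PyTorch/torchvision setup; the random tensors stand in for real CT slices and for CCS-GAN output, and the sample counts, hyperparameters, and names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' code): train a VGG-19 binary classifier on a
# tiny set of real COVID-positive slices plus GAN-generated synthetic positives.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import vgg19

def build_classifier(num_classes: int = 2) -> nn.Module:
    model = vgg19(weights=None)                          # ImageNet weights could be used instead
    model.classifier[6] = nn.Linear(4096, num_classes)   # replace the final layer
    return model

# Hypothetical data: 10 real positives, 40 synthetic positives, 50 negatives.
real_pos  = torch.rand(10, 3, 224, 224)
synth_pos = torch.rand(40, 3, 224, 224)                  # would be CCS-GAN generator output
negatives = torch.rand(50, 3, 224, 224)
images = torch.cat([real_pos, synth_pos, negatives])
labels = torch.cat([torch.ones(50, dtype=torch.long), torch.zeros(50, dtype=torch.long)])

loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)
model = build_classifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for x, y in loader:                                      # one pass is enough for the sketch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```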
Award ID(s):
2051800
PAR ID:
10507570
Publisher / Repository:
Springer
Date Published:
Journal Name:
Journal of Digital Imaging
Volume:
36
Issue:
4
ISSN:
1618-727X
Page Range / eLocation ID:
1376 to 1389
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The newly discovered Coronavirus Disease 2019 (COVID-19) has spread globally and caused hundreds of thousands of deaths around the world since its first emergence in late 2019. The rapid outbreak of this disease has overwhelmed health care infrastructures and raised the need to allocate medical equipment and resources more efficiently. Early diagnosis of this disease enables the rapid separation of COVID-19 and non-COVID cases, which helps health care authorities optimize resource allocation plans and supports early prevention of the disease. In this regard, a growing number of studies are investigating the capability of deep learning for early diagnosis of COVID-19. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly developed on Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism or rely on sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage, fully automated CT-based framework for identification of COVID-19-positive cases, referred to as “COVID-FACT”. COVID-FACT utilizes Capsule Networks as its main building blocks and is therefore capable of capturing spatial information. In particular, to make the proposed COVID-FACT independent of sophisticated segmentation of the infected area, slices demonstrating infection are detected in the first stage, and the second stage classifies patients into COVID and non-COVID cases. COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community-acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation than its counterparts.
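A minimal sketch of the two-stage decision flow described in this abstract, assuming the per-slice detector and the patient-level classifier are already trained; the placeholder linear models stand in for the paper's Capsule Networks, and the 0.5 threshold and aggregation rule are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the COVID-FACT code): stage one flags slices
# showing infection; stage two classifies the patient from only those slices.
import torch
import torch.nn as nn

slice_detector = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))    # stage 1 stand-in
case_classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))   # stage 2 stand-in

def classify_patient(ct_volume: torch.Tensor) -> int:
    """ct_volume: (num_slices, 1, 64, 64) stack of one patient's CT slices."""
    with torch.no_grad():
        # Stage 1: keep slices whose infection score exceeds an illustrative threshold.
        scores = torch.sigmoid(slice_detector(ct_volume)).squeeze(1)
        infected = ct_volume[scores > 0.5]
        if infected.numel() == 0:
            return 0                      # no suspicious slices -> non-COVID
        # Stage 2: classify the retained slices and average to a patient-level label.
        logits = case_classifier(infected)
        return int(logits.mean(dim=0).argmax())

print(classify_patient(torch.rand(40, 1, 64, 64)))
```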
  2. IVC filters (IVCF) perform an important function in select patients who have venous blood clots. However, they are usually intended to be temporary, and significant delay in removal can have negative health consequences for the patient. Currently, all Interventional Radiology (IR) practices are tasked with tracking patients in whom IVCF are placed. Due to their small size and location deep within the abdomen, it is common for patients to forget that they have an IVCF. As a result, there can be a significant delay before a new healthcare provider becomes aware of the presence of a filter. Patients may have an abdominopelvic CT scan for many reasons and, fortunately, IVCF are clearly visible on these scans. In this research, a deep learning model capable of segmenting IVCF from CT scan slices along the axial plane is developed. The model achieved a Dice score of 0.82 after training on 372 CT scan slices. The segmentation model is then integrated with a prediction algorithm capable of flagging an entire CT scan as having an IVCF. The prediction algorithm utilizing the segmentation model achieved 92.22% accuracy at detecting IVCF in the scans.
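A minimal sketch, under stated assumptions, of how the per-slice segmentation model might be combined with a whole-scan flagging rule like the prediction algorithm described above; the single-convolution segmenter and the pixel-count threshold are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch (assumed logic): flag a CT scan as containing an IVC filter when
# the segmentation model predicts a large enough filter mask on any axial slice.
import torch
import torch.nn as nn

seg_model = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # placeholder for the trained segmenter

def scan_has_filter(scan: torch.Tensor, min_pixels: int = 20) -> bool:
    """scan: (num_slices, 1, H, W) axial CT slices."""
    with torch.no_grad():
        for slc in scan:
            mask = torch.sigmoid(seg_model(slc.unsqueeze(0))) > 0.5
            if int(mask.sum()) >= min_pixels:            # enough filter pixels on one slice
                return True
    return False

print(scan_has_filter(torch.rand(30, 1, 128, 128)))
```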
  3. Introduction: Multi-series CT (MSCT) scans, including non-contrast CT (NCCT), CT Perfusion (CTP), and CT Angiography (CTA), are widely used in acute stroke imaging. While each series has its own advantages in disease diagnosis, the varying image resolution across series hinders the radiologist's ability to discern subtle suspicious findings. Moreover, higher image quality requires higher radiation doses, which increases health risks such as cataract formation and cancer induction. Thus, it is crucial to develop an approach that improves MSCT resolution while lowering radiation exposure. Hypothesis: Because MSCT images of the same patient are highly correlated in their structural features, transferring and integrating the shared and complementary information across series is beneficial for achieving high image quality. Methods: We propose TL-GAN, a learning-based method that uses Transfer Learning (TL) and a Generative Adversarial Network (GAN) to reconstruct high-quality diagnostic images. Our TL-GAN method is evaluated on 4,382 images collected from nine patients' MSCT scans, including 415 NCCT slices, 3,696 CTP slices, and 271 CTA slices. We randomly split the nine patients into a training set (4 patients), a validation set (2 patients), and a testing set (3 patients). In preprocessing, we remove the background and skull and visualize the images in a brain window. The low-resolution images (1/4 of the original spatial size) are simulated by bicubic down-sampling. For training without TL, we train each series individually; with TL, we fine-tune following the scanning sequence (NCCT, CTP, then CTA). Results: The performance of TL-GAN is evaluated by the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index on 184 NCCT, 882 CTP, and 107 CTA test images. Figure 1 provides both visual (a-c) and quantitative (d-f) comparisons. With TL-GAN, there is a significant improvement with TL compared to without TL (training from scratch) for NCCT, CTP, and CTA images; the significance of these improvements is assessed by one-tailed paired t-tests (p < 0.05). We enlarge the regions of interest for detailed visual comparison. Further, we evaluate the CTP performance by calculating the perfusion maps, including cerebral blood flow (CBF) and cerebral blood volume (CBV). The visual comparison of the perfusion maps in Figure 2 demonstrates that TL-GAN achieves high diagnostic image quality, with CBF and CBV maps comparable to those derived from the ground-truth images. Conclusion: TL-GAN effectively improves image resolution for MSCT and provides radiologists with more image detail for suspicious findings, offering a practical solution for MSCT image quality enhancement.
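A minimal sketch of the preprocessing and transfer-learning sequence described in the Methods, assuming PyTorch: bicubic down-sampling simulates the quarter-resolution inputs, and a single generator is fine-tuned series by series in scanning order so that weights carry over from NCCT to CTP to CTA. The small convolutional network and L1 loss are stand-ins for the paper's GAN generator and adversarial objective.

```python
# Minimal sketch (assumptions, not TL-GAN itself): simulate low-resolution inputs by
# bicubic down-sampling, then fine-tune one super-resolution model in scanning order.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

def finetune_on_series(high_res: torch.Tensor, steps: int = 5) -> None:
    """high_res: (N, 1, H, W) slices of one series; updates the shared generator."""
    for _ in range(steps):
        low = F.interpolate(high_res, scale_factor=0.25, mode="bicubic")  # 1/4 spatial size
        up = F.interpolate(low, size=high_res.shape[-2:], mode="bicubic")
        optimizer.zero_grad()
        loss = F.l1_loss(generator(up), high_res)
        loss.backward()
        optimizer.step()

# Transfer-learning sequence: weights carry over from one series to the next.
for series in (torch.rand(8, 1, 128, 128),    # NCCT (placeholder data)
               torch.rand(8, 1, 128, 128),    # CTP
               torch.rand(8, 1, 128, 128)):   # CTA
    finetune_on_series(series)
```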
  4. Purpose: This article introduces a novel deep learning approach to substantially improve the accuracy of colon segmentation even with limited data annotation, which enhances the overall effectiveness of the CT colonography pipeline in clinical settings. Methods: The proposed approach integrates 3D contextual information via guided sequential episodic training, in which a query CT slice is segmented by exploiting its previous labeled CT slice (i.e., the support). Segmentation starts by detecting the rectum using a Markov Random Field-based algorithm. Then, supervised sequential episodic training is applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability and thereby improve segmentation accuracy. Results: The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this high level of accuracy is vital for preserving critical polyp details, which are essential for accurate automatic diagnostic evaluation. The proposed method also performs reliably in scenarios with limited annotated data, achieving a Dice coefficient of 97.15% when trained on far fewer annotated CT scans (10 scans) than were used for testing (88 scans). Conclusions: The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively, even with limited annotated datasets, a common challenge in medical imaging.
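A minimal sketch of the guided sequential idea described above, assuming a segmenter that takes the query slice together with the previous slice and its mask as input channels; the network, shapes, and zero seed mask are illustrative placeholders (in the paper the chain is seeded by a Markov Random Field-based rectum detection).

```python
# Minimal sketch (assumed structure): segment each slice using the previous slice and
# its mask as support; the prediction becomes the support mask for the next slice.
import torch
import torch.nn as nn

# Input channels: query slice, support slice, support mask.
segmenter = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def segment_volume(volume: torch.Tensor, seed_mask: torch.Tensor) -> torch.Tensor:
    """volume: (num_slices, 1, H, W); seed_mask: (1, H, W) mask for the first slice."""
    masks = [seed_mask]
    with torch.no_grad():
        for i in range(1, volume.shape[0]):
            support_slice, support_mask = volume[i - 1], masks[-1]
            x = torch.cat([volume[i], support_slice, support_mask]).unsqueeze(0)
            masks.append((torch.sigmoid(segmenter(x))[0] > 0.5).float())
    return torch.stack(masks)

vol = torch.rand(12, 1, 96, 96)
print(segment_volume(vol, torch.zeros(1, 96, 96)).shape)   # torch.Size([12, 1, 96, 96])
```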
  5. Accurate colon segmentation on abdominal CT scans is crucial for various clinical applications. In this work, we propose an accurate approach to colon segmentation from abdominal CT scans. Our architecture incorporates 3D contextual information via sequential episodic training (SET). In each episode, we use two consecutive slices in a CT scan as the support and query samples, in addition to other slices that do not include colon regions as negative samples. Choosing consecutive slices for the support and query samples is a reasonable assumption, as the anatomy of the body does not change abruptly. Unlike traditional few-shot segmentation (FSS) approaches, we use the episodic training strategy in a supervised manner. In addition, to improve the discriminability of the model's learned features, an embedding space is developed using contrastive learning. To guide the contrastive learning process, we use an initial labeling generated by a Markov random field (MRF)-based approach. Finally, in the inference phase, we first detect the rectum, which can be accurately extracted using the MRF-based approach, and then apply SET to the remaining slices. Experiments on our private dataset of 98 CT scans and a public dataset of 30 CT scans show that the proposed FSS model achieves a remarkable validation Dice coefficient (DC) of 97.3% (Jaccard index, JD, 94.5%), compared to 82.1% (JD 70.3%) for classical FSS approaches. Our findings highlight the efficacy of sequential episodic training for accurate 3D medical imaging segmentation. The code for the proposed models is available at https://github.com/Samir-Farag/ICPR2024.
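A minimal sketch of a contrastive objective of the kind described above, pulling embeddings labeled as colon together and pushing them away from background embeddings; this InfoNCE-style formulation and the label source are assumptions, not the code at the linked repository.

```python
# Minimal sketch (assumed formulation): an InfoNCE-style contrastive loss over feature
# embeddings, where positives share a label (e.g., from an MRF-based initial labeling).
import torch
import torch.nn.functional as F

def contrastive_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) feature vectors; labels: (N,) 1 = colon, 0 = background."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # pairs sharing a label
    off_diag = ~torch.eye(len(z), dtype=torch.bool)        # exclude self-similarity
    denom = torch.logsumexp(sim.masked_fill(~off_diag, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom
    return -log_prob[same & off_diag].mean()               # average over positive pairs

loss = contrastive_loss(torch.randn(32, 64), torch.randint(0, 2, (32,)))
print(loss.item())
```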