Medical imaging data annotation is expensive and time-consuming. Supervised deep learning approaches may overfit when trained with limited medical data, which in turn degrades the robustness of computer-aided diagnosis (CAD) on CT scans collected by different scanner vendors. Additionally, the high false-positive rate of automatic lung nodule detection methods prevents their use in routine clinical diagnosis. To tackle these issues, we first introduce a novel self-learning scheme that trains a pre-trained model by learning rich feature representations from large-scale unlabeled data without extra annotation, which ensures consistent detection performance on novel datasets. Then, a 3D feature pyramid network (3DFPN) is proposed for high-sensitivity nodule detection by extracting multi-scale features; the weights of the backbone network are initialized with the pre-trained model and then fine-tuned in a supervised manner. Further, a High Sensitivity and Specificity (HS2) network is proposed to reduce false positives by tracking the appearance changes of detected nodule candidates across consecutive CT slices using Location History Images (LHI). The proposed method's performance and robustness are evaluated on several publicly available datasets, including LUNA16, SPIE-AAPM, LungTIME, and HMS. Our proposed detector achieves a state-of-the-art sensitivity of 90.6% at 1/8 false positives per scan on the LUNA16 dataset. The proposed framework's generalizability has been evaluated on three additional datasets (i.e., SPIE-AAPM, LungTIME, and HMS) captured by different types of CT scanners.
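As a rough illustration of the multi-scale feature extraction described in the abstract above, the following is a minimal sketch of a 3D feature-pyramid neck in PyTorch. The channel widths, number of pyramid levels, and the class name `FPN3DNeck` are illustrative assumptions, not the authors' 3DFPN configuration.

```python
# Minimal sketch of a 3D feature-pyramid neck (lateral + top-down pathway).
# All sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPN3DNeck(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        # 1x1x1 lateral convolutions project each backbone stage to a common width.
        self.lateral = nn.ModuleList(
            [nn.Conv3d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # 3x3x3 convolutions smooth the merged feature maps.
        self.smooth = nn.ModuleList(
            [nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels]
        )

    def forward(self, feats):
        # feats: backbone feature maps ordered from high to low resolution.
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down pathway: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[2:], mode="nearest"
            )
        return [s(x) for s, x in zip(self.smooth, laterals)]

# Example with dummy CT feature volumes (batch, channels, depth, height, width).
if __name__ == "__main__":
    neck = FPN3DNeck()
    feats = [torch.randn(1, 64, 32, 64, 64),
             torch.randn(1, 128, 16, 32, 32),
             torch.randn(1, 256, 8, 16, 16)]
    for p in neck(feats):
        print(p.shape)
```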
                            Multi-View Network for Colorectal Polyps Detection in CT Colonography
                        
                    
    
Early diagnosis of colorectal polyps, before they turn into cancer, is one of the keys to treatment. In this work, we propose a framework that helps radiologists read CT scans and identify candidate CT slices containing polyps. The proposed colorectal polyp detection approach consists of two cascaded stages. In the first stage, a CNN-based model is trained and validated to detect polyps in axial CT slices. To narrow the effective receptive field of the detector neurons, the colon regions are segmented and fed into the network instead of the original CT slice; this drastically improves the detection and localization results, e.g., the mAP increases by 36%. To reduce the false positives generated by the detector, in the second stage we propose a multi-view network (MVN) that classifies polyp candidates. The MVN classifier is trained on the sagittal and coronal views corresponding to the detected axial views. The approach is tested on 50 annotated CTC cases, and the experimental results confirm that, after the classification stage, polyps are detected with an AUC of about 95.27%.
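To illustrate how the second-stage multi-view input could be assembled, here is a minimal sketch that crops sagittal and coronal patches around an axially detected candidate. The function name `extract_views`, the (z, y, x) volume layout, and the patch size are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: given a candidate detected on an axial slice, crop the corresponding
# sagittal and coronal views for a multi-view false-positive classifier.
# Volume layout, patch size, and names are illustrative assumptions.
import numpy as np

def extract_views(volume, z, y, x, half=32):
    """Return axial, coronal, and sagittal patches centered on voxel (z, y, x)."""
    def crop(plane, r, c):
        r0, c0 = max(r - half, 0), max(c - half, 0)
        return plane[r0:r + half, c0:c + half]

    axial = crop(volume[z, :, :], y, x)        # slice across z
    coronal = crop(volume[:, y, :], z, x)      # slice across y
    sagittal = crop(volume[:, :, x], z, y)     # slice across x
    return axial, coronal, sagittal

# Usage with a dummy CT volume and one candidate location.
volume = np.random.rand(200, 512, 512).astype(np.float32)
axial, coronal, sagittal = extract_views(volume, z=100, y=256, x=256)
print(axial.shape, coronal.shape, sagittal.shape)
```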
- Award ID(s): 2124316
- PAR ID: 10643974
- Publisher / Repository: IEEE
- Date Published:
- Page Range / eLocation ID: 3051 to 3056
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Among the non-invasive colorectal cancer (CRC) screening approaches, Computed Tomography Colonography (CTC) and Virtual Colonoscopy (VC) are among the most accurate. This work proposes an AI-based polyp detection framework for virtual colonoscopy (VC). Two main steps are addressed in this work: automatic segmentation to isolate the colon region from its background, and automatic polyp detection. Moreover, we evaluate the performance of the proposed framework on low-dose Computed Tomography (CT) scans. We build on our visualization approach, Fly-In (FI), which provides “filet”-like projections of the internal surface of the colon. The performance of the Fly-In approach confirms its ability to assist gastroenterologists, and it holds great promise for combating CRC. In this work, these 2D FI projections are fused with the 3D colon representation to generate new synthetic images, which are used to train a RetinaNet model to detect polyps. The trained model achieves a 94% F1-score and 97% sensitivity. Furthermore, we study the effect of dose variation in CT scans on the performance of the FI approach for polyp visualization. A simulation platform is developed for CTC visualization using FI, for both regular and low-dose CTC. This is accomplished using a novel AI restoration algorithm that enhances the low-dose CT images so that a 3D colon can be successfully reconstructed and visualized using the FI approach. Three senior board-certified radiologists evaluated the framework: at a peak voltage of 30 kV the average relative sensitivity of the platform was 92%, whereas a peak voltage of 60 kV produced an average relative sensitivity of 99.5%.
- 
The newly discovered Coronavirus Disease 2019 (COVID-19) has spread globally and caused hundreds of thousands of deaths since its first emergence in late 2019. The rapid outbreak of this disease has overwhelmed health care infrastructures and raises the need to allocate medical equipment and resources more efficiently. Early diagnosis leads to rapid separation of COVID-19 and non-COVID cases, which helps health care authorities optimize resource allocation plans and prevent further spread of the disease. In this regard, a growing number of studies are investigating the capability of deep learning for early diagnosis of COVID-19. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly built on Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism or rely on a sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage, fully automated CT-based framework for identification of COVID-19 positive cases, referred to as “COVID-FACT”. COVID-FACT utilizes Capsule Networks as its main building blocks and is therefore capable of capturing spatial information. In particular, to make COVID-FACT independent of sophisticated segmentations of the area of infection, slices demonstrating infection are detected in the first stage, and the second stage is responsible for classifying patients into COVID and non-COVID cases. COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community-acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation than its counterparts.
- 
Abstract. Accurate colon segmentation on abdominal CT scans is crucial for various clinical applications. In this work, we propose an accurate approach to colon segmentation from abdominal CT scans. Our architecture incorporates 3D contextual information via sequential episodic training (SET). In each episode, we use two consecutive slices of a CT scan as the support and query samples, in addition to other slices that do not include colon regions as negative samples (a sketch of this episode-sampling strategy is given after this list). Choosing consecutive slices is a reasonable assumption for support and query samples, as the anatomy of the body does not change abruptly. Unlike traditional few-shot segmentation (FSS) approaches, we use the episodic training strategy in a supervised manner. In addition, to improve the discriminability of the model's learned features, an embedding space is developed using contrastive learning. To guide the contrastive learning process, we use an initial labeling generated by a Markov random field (MRF)-based approach. Finally, in the inference phase, we first detect the rectum, which can be accurately extracted using the MRF-based approach, and then apply SET on the remaining slices. Experiments on our private dataset of 98 CT scans and a public dataset of 30 CT scans show that the proposed FSS model achieves a remarkable validation Dice coefficient (DC) of 97.3% (Jaccard index, JD 94.5%), compared to 82.1% (JD 70.3%) for classical FSS approaches. Our findings highlight the efficacy of sequential episodic training for accurate 3D medical imaging segmentation. The code for the proposed models is available at https://github.com/Samir-Farag/ICPR2024.
- 
A widely regarded approach to Printed Circuit Board (PCB) reverse engineering (RE) uses non-destructive X-ray computed tomography (CT) to produce three-dimensional volumes with several slices of data corresponding to multi-layered PCBs. The noise sources specific to X-ray CT and the variability introduced by designers make it difficult to acquire the features needed for the RE process. Hence, these X-ray CT images require specialized image processing techniques to examine the various features of a single PCB so that they can later be translated to a readable CAD format. Previously, we presented an approach in which the Hough Circle Transform was used for initial feature detection, followed by an iterative false-positive removal process developed specifically for detecting vias on PCBs (a sketch of this classical detection stage is given below). Its performance was compared to an off-the-shelf application of the Mask Region-based Convolutional Network (M-RCNN). M-RCNN is an excellent deep learning approach that is able to localize and classify numerous objects of different scales within a single image. In this paper, we present a version of M-RCNN that is fine-tuned for via detection. Changes include polygon boundary annotations on the single X-ray images of vias for training, and transfer learning to leverage the full potential of the network. We discuss the challenges of detecting vias using deep learning, our working solution, and our experimental procedure. Additionally, we provide a qualitative evaluation of our approach and use quantitative metrics to compare the proposed approach with the previous iterative one.
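The sequential episodic training described in the colon-segmentation abstract above samples two consecutive slices as support and query, plus colon-free slices as negatives. Below is a minimal sketch of such an episode sampler, assuming slices and masks are given as lists of 2D arrays; the function name `sample_episode` and the data layout are illustrative assumptions, not the authors' implementation.

```python
# Sketch of episode sampling for sequential episodic training (SET).
# Data layout and names are assumptions for illustration only.
import random
import numpy as np

def sample_episode(slices, masks, n_negatives=2):
    """slices, masks: lists of 2D arrays for one CT scan, in anatomical order."""
    colon_idx = [i for i, m in enumerate(masks) if m.any()]
    empty_idx = [i for i, m in enumerate(masks) if not m.any()]

    # Pick a support slice and use the next slice in the scan as the query;
    # consecutive slices are assumed to show very similar colon anatomy.
    i = random.choice([j for j in colon_idx[:-1] if (j + 1) in colon_idx])
    support = (slices[i], masks[i])
    query = (slices[i + 1], masks[i + 1])
    negatives = [slices[j] for j in
                 random.sample(empty_idx, min(n_negatives, len(empty_idx)))]
    return support, query, negatives

# Usage with dummy data: 20 slices, colon present only in slices 5..14.
slices = [np.random.rand(256, 256) for _ in range(20)]
masks = [np.zeros((256, 256), dtype=bool) for _ in range(20)]
for k in range(5, 15):
    masks[k][100:150, 100:150] = True
support, query, negatives = sample_episode(slices, masks)
print(support[0].shape, query[0].shape, len(negatives))
```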
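For the PCB via-detection abstract above, the classical first stage combines a Hough Circle Transform with an iterative false-positive removal step. The following is a minimal OpenCV sketch of that idea; the intensity-based filter and all thresholds are illustrative assumptions rather than the authors' tuned pipeline.

```python
# Sketch: Hough-circle via candidates plus a simple false-positive filter.
# Radius ranges and thresholds are assumptions for illustration only.
import cv2
import numpy as np

def detect_vias(gray, min_r=3, max_r=15, intensity_thresh=60):
    """Detect via candidates on a grayscale X-ray CT slice of a PCB layer."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, 1, 10,
        param1=100, param2=20, minRadius=min_r, maxRadius=max_r,
    )
    if circles is None:
        return []

    vias = []
    for x, y, r in np.round(circles[0]).astype(int):
        # Crude false-positive check: assume vias show dark (drilled) centers,
        # so discard candidates whose center region is too bright.
        patch = gray[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
        if patch.size and patch.mean() < intensity_thresh:
            vias.append((x, y, r))
    return vias

# Usage with a synthetic slice containing one dark circular "via".
img = np.full((200, 200), 200, dtype=np.uint8)
cv2.circle(img, (100, 100), 6, color=0, thickness=-1)
print(detect_vias(img))
```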