Title: Active Semi-Supervised Learning via Bayesian Experimental Design for Lung Cancer Classification Using Low Dose Computed Tomography Scans
We introduce an active, semi-supervised algorithm that utilizes Bayesian experimental design to address the shortage of annotated images required to train and validate Artificial Intelligence (AI) models for lung cancer screening with computed tomography (CT) scans. Our approach combines active learning with semi-supervised expectation maximization, emulating the human in the loop who provides additional ground-truth labels to train, evaluate, and update the neural network models. Bayesian experimental design is used to intelligently identify which unlabeled samples need ground-truth labels to enhance the model's performance. We evaluate the proposed Active Semi-supervised Expectation Maximization for Computer-Aided Diagnosis (CAD) tasks (ASEM-CAD) on three public CT scan datasets for lung cancer classification: the National Lung Screening Trial (NLST), the Lung Image Database Consortium (LIDC), and the Kaggle Data Science Bowl 2017. ASEM-CAD accurately classifies suspicious lung nodules and lung cancer cases with an area under the curve (AUC) of 0.94 (Kaggle), 0.95 (NLST), and 0.88 (LIDC) using significantly fewer labeled images than a fully supervised model. This study addresses one of the significant challenges in early lung cancer screening using low-dose computed tomography (LDCT) scans and is a valuable contribution towards the development and validation of deep learning algorithms for lung cancer screening and other diagnostic radiology examinations.
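The core loop described in the abstract can be pictured with a minimal sketch, assuming a generic scikit-learn classifier, a synthetic dataset, and predictive entropy as a stand-in for the Bayesian experimental-design acquisition criterion; the variable names and selection rule below are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a pool of CT-derived feature vectors.
X, y = make_classification(n_samples=2000, n_features=32, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[np.random.default_rng(0).choice(len(y), 50, replace=False)] = True

model = LogisticRegression(max_iter=1000)
for _ in range(10):
    # E-step: pseudo-label the unlabeled pool with the current model.
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[~labeled])
    pseudo = proba.argmax(axis=1)
    # M-step: refit on labeled data plus confidence-weighted pseudo-labels.
    X_aug = np.vstack([X[labeled], X[~labeled]])
    y_aug = np.concatenate([y[labeled], pseudo])
    w_aug = np.concatenate([np.ones(labeled.sum()), proba.max(axis=1)])
    model.fit(X_aug, y_aug, sample_weight=w_aug)
    # Acquisition: query the most uncertain samples for human labeling
    # (entropy here; the paper uses a Bayesian experimental-design criterion).
    proba_u = model.predict_proba(X[~labeled])
    entropy = -(proba_u * np.log(proba_u + 1e-12)).sum(axis=1)
    query = np.flatnonzero(~labeled)[np.argsort(entropy)[-20:]]
    labeled[query] = True  # the oracle supplies y[query] in a real workflow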
Zhong, Zisha; Kim, Yusung; Zhou, Leixin; Plichta, Kristin; Allen, Bryan; Buatti, John; Wu, Xiaodong (2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018))
Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach for the segmentation of lung tumors that combines a fully convolutional network (FCN)-based semantic segmentation framework (3D-UNet) with a graph-cut-based co-segmentation model. First, two deep UNets are trained separately on PET and CT to learn high-level discriminative features and generate tumor/non-tumor masks and probability maps for the PET and CT images. Then, the two probability maps are simultaneously employed in a graph-cut-based co-segmentation model to produce the final tumor segmentation results. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.
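A rough sketch of how per-modality probability maps might be fused in a graph-cut co-segmentation energy is shown below, using the PyMaxflow library; the capacities, coupling weight, and label convention are illustrative assumptions rather than the paper's exact formulation.

import numpy as np
import maxflow  # PyMaxflow

def co_segment(p_pet, p_ct, smooth=0.5, couple=1.0, eps=1e-6):
    """p_pet, p_ct: per-voxel tumor probability maps with the same 3D shape."""
    g = maxflow.Graph[float]()
    pet_nodes = g.add_grid_nodes(p_pet.shape)
    ct_nodes = g.add_grid_nodes(p_ct.shape)
    # Unary terms: t-link capacities from each network's probability map
    # (negative log-likelihood costs for tumor vs. background).
    g.add_grid_tedges(pet_nodes, -np.log(1 - p_pet + eps), -np.log(p_pet + eps))
    g.add_grid_tedges(ct_nodes, -np.log(1 - p_ct + eps), -np.log(p_ct + eps))
    # Pairwise smoothness within each modality.
    g.add_grid_edges(pet_nodes, smooth)
    g.add_grid_edges(ct_nodes, smooth)
    # Coupling edges penalize PET/CT label disagreement at the same voxel.
    for i, j in zip(pet_nodes.ravel(), ct_nodes.ravel()):
        g.add_edge(int(i), int(j), couple, couple)
    g.maxflow()
    return g.get_grid_segments(pet_nodes), g.get_grid_segments(ct_nodes)

# Example: random probability maps standing in for the UNet outputs.
seg_pet, seg_ct = co_segment(np.random.rand(16, 32, 32), np.random.rand(16, 32, 32))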
Alkabbany, Islam; Ali, Asem M.; Mohamed, Mostafa; Elshazly, Salwa M.; Farag, Aly (Sensors)
Among the non-invasive colorectal cancer (CRC) screening approaches, Computed Tomography Colonography (CTC) and Virtual Colonoscopy (VC) are much more accurate. This work proposes an AI-based polyp detection framework for virtual colonoscopy (VC). Two main steps are addressed in this work: automatic segmentation to isolate the colon region from its background, and automatic polyp detection. Moreover, we evaluate the performance of the proposed framework on low-dose Computed Tomography (CT) scans. We build on our visualization approach, Fly-In (FI), which provides "filet"-like projections of the internal surface of the colon. The Fly-In approach has demonstrated its value in assisting gastroenterologists and holds great promise for combating CRC. In this work, these 2D FI projections are fused with the 3D colon representation to generate new synthetic images. The synthetic images are used to train a RetinaNet model to detect polyps. The trained model achieves a 94% F1-score and 97% sensitivity. Furthermore, we study the effect of dose variation in CT scans on the performance of the FI approach in polyp visualization. A simulation platform is developed for CTC visualization using FI, for both regular and low-dose CTC. This is accomplished using a novel AI restoration algorithm that enhances the low-dose CT images so that a 3D colon can be successfully reconstructed and visualized using the FI approach. Three senior board-certified radiologists evaluated the framework at peak voltages of 30 kV and 60 kV: the average relative sensitivity of the platform was 92% at 30 kV, whereas the 60 kV peak voltage produced an average relative sensitivity of 99.5%.
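As an illustrative sketch only, the following shows how a torchvision RetinaNet could be fine-tuned for a single "polyp" class on synthetic projection images; the dummy image and box are placeholders, and this is not the paper's training code.

import torch
import torchvision

# Two classes: background (0) and polyp (1); random initialization keeps the
# sketch self-contained (no pretrained-weight download).
model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One dummy training step on a placeholder synthetic image with one box.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 160.0, 180.0]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)      # dict of classification/regression losses
loss = sum(losses.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()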
Importance: Screening with low-dose computed tomography (CT) has been shown to reduce mortality from lung cancer in randomized clinical trials in which the rate of adherence to follow-up recommendations was over 90%; however, adherence to Lung Computed Tomography Screening Reporting & Data System (Lung-RADS) recommendations has been low in practice. Identifying patients who are at risk of being nonadherent to screening recommendations may enable personalized outreach to improve overall screening adherence.
Objective: To identify factors associated with patient nonadherence to Lung-RADS recommendations across multiple screening time points.
Design, Setting, and Participants: This cohort study was conducted at a single US academic medical center across 10 geographically distributed sites where lung cancer screening is offered. The study enrolled individuals who underwent low-dose CT screening for lung cancer between July 31, 2013, and November 30, 2021.
Exposures: Low-dose CT screening for lung cancer.
Main Outcomes and Measures: The main outcome was nonadherence to follow-up recommendations for lung cancer screening, defined as failing to complete a recommended or more invasive follow-up examination (ie, diagnostic dose CT, positron emission tomography–CT, or tissue sampling vs low-dose CT) within 15 months (Lung-RADS score, 1 or 2), 9 months (Lung-RADS score, 3), 5 months (Lung-RADS score, 4A), or 3 months (Lung-RADS score, 4B/X). Multivariable logistic regression was used to identify factors associated with patient nonadherence to baseline Lung-RADS recommendations. A generalized estimating equations model was used to assess whether the pattern of longitudinal Lung-RADS scores was associated with patient nonadherence over time.
Results: Among 1979 included patients, 1111 (56.1%) were aged 65 years or older at baseline screening (mean [SD] age, 65.3 [6.6] years), and 1176 (59.4%) were male. The odds of being nonadherent were lower among patients with a baseline Lung-RADS score of 1 or 2 vs 3 (adjusted odds ratio [AOR], 0.35; 95% CI, 0.25-0.50), 4A (AOR, 0.21; 95% CI, 0.13-0.33), or 4B/X (AOR, 0.10; 95% CI, 0.05-0.19); with a postgraduate vs college degree (AOR, 0.70; 95% CI, 0.53-0.92); with a family history of lung cancer vs no family history (AOR, 0.74; 95% CI, 0.59-0.93); with a high age-adjusted Charlson Comorbidity Index score (≥4) vs a low score (0 or 1) (AOR, 0.67; 95% CI, 0.46-0.98); in the high vs low income category (AOR, 0.79; 95% CI, 0.65-0.98); and referred by physicians from pulmonary or thoracic-related departments vs another department (AOR, 0.56; 95% CI, 0.44-0.73). Among 830 eligible patients who had completed at least 2 screening examinations, the adjusted odds of being nonadherent to Lung-RADS recommendations at the following screening were increased in patients with consecutive Lung-RADS scores of 1 to 2 (AOR, 1.38; 95% CI, 1.12-1.69).
Conclusions and Relevance: In this retrospective cohort study, patients with consecutive negative lung cancer screening results were more likely to be nonadherent with follow-up recommendations. These individuals are potential candidates for tailored outreach to improve adherence to recommended annual lung cancer screening.
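To make the analysis concrete, here is a hedged sketch of the kind of models described (a multivariable logistic regression and a generalized estimating equations model for repeated screens) using statsmodels; the column names and synthetic data are placeholders, not the study's variables or results.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic placeholder data: one row per screening examination.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "patient_id": np.arange(n) // 2,          # roughly two screens per patient
    "nonadherent": rng.integers(0, 2, n),
    "lung_rads": rng.choice(["1-2", "3", "4A", "4B/X"], n),
    "family_history": rng.integers(0, 2, n),
    "referral_pulmonary": rng.integers(0, 2, n),
})

# Logistic regression for factors associated with nonadherence
# (in the study, restricted to baseline screens); reported as odds ratios.
logit = smf.logit(
    "nonadherent ~ C(lung_rads) + family_history + referral_pulmonary",
    data=df).fit()
print(np.exp(logit.params))

# Longitudinal model: repeated screens clustered within patient via GEE.
gee = smf.gee("nonadherent ~ C(lung_rads)", groups="patient_id", data=df,
              family=sm.families.Binomial()).fit()
print(gee.summary())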
Zhong, Zisha; Kim, Yusung; Zhou, Leixin; Plichta, Kristin; Allen, Bryan; Buatti, John; Wu, Xiaodong (2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018))
Positron emission tomography and computed tomography (PET-CT) imaging plays a critically important role in modern cancer therapy. In this paper, we focus on automated tumor delineation on PET-CT image pairs. Inspired by the co-segmentation model, we develop a novel 3D image co-matting technique that makes use of the inner-modality information of PET and CT for matting. The obtained co-matting results are then incorporated into the graph-cut-based PET-CT co-segmentation framework. Our comparative experiments on 32 PET-CT scan pairs of lung cancer patients demonstrate that the proposed 3D image co-matting technique can significantly improve the quality of the cost images used for co-segmentation, resulting in highly accurate tumor segmentation on both PET and CT scans.
Interstitial lung disease (ILD) causes pulmonary fibrosis. The correct classification of ILD plays a crucial role in the diagnosis and treatment process. In this research work, we propose a lung nodule recognition method based on a deep convolutional neural network (DCNN) and global features, which can be used for computer-aided diagnosis (CAD) of global features of lung nodules. Firstly, a DCNN is constructed based on the characteristics and complexity of lung computed tomography (CT) images. Secondly, we discuss the effects of different numbers of iterations on the recognition results and the influence of different model structures on the global features of lung nodules, and we incorporate improvements in convolution kernel size, feature dimension, and network depth. Thirdly, the effects of the proposed pooling methods, activation functions, and training algorithms are analyzed to demonstrate the advantages of the new strategy. Finally, the experimental results verify the feasibility of the proposed DCNN for CAD of global features of lung nodules, and the evaluation shows that our method achieves outstanding results compared with the state of the art.
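The architectural choices the abstract varies (kernel size, pooling, activation, depth) can be illustrated with a minimal PyTorch sketch; the layer sizes and patch dimensions below are assumptions for illustration, not the paper's network.

import torch
import torch.nn as nn

class NoduleDCNN(nn.Module):
    """Small DCNN for CT nodule patches; kernel size, activation, and pooling
    are the tunable choices discussed in the abstract."""
    def __init__(self, kernel_size=3, act=nn.ReLU, pool=nn.MaxPool2d, n_classes=2):
        super().__init__()
        pad = kernel_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size, padding=pad), act(), pool(2),
            nn.Conv2d(32, 64, kernel_size, padding=pad), act(), pool(2),
            nn.Conv2d(64, 128, kernel_size, padding=pad), act(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> one feature vector
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):              # x: (N, 1, H, W) grayscale CT patches
        return self.classifier(self.features(x).flatten(1))

# Example: compare two configurations on a dummy batch of 64x64 patches.
for cfg in [dict(kernel_size=3, pool=nn.MaxPool2d), dict(kernel_size=5, pool=nn.AvgPool2d)]:
    logits = NoduleDCNN(**cfg)(torch.randn(4, 1, 64, 64))
    print(cfg, logits.shape)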
@article{osti_10507572,
title = {Active Semi-Supervised Learning via Bayesian Experimental Design for Lung Cancer Classification Using Low Dose Computed Tomography Scans},
url = {https://par.nsf.gov/biblio/10507572},
DOI = {10.3390/app13063752},
abstractNote = {We introduce an active, semisupervised algorithm that utilizes Bayesian experimental design to address the shortage of annotated images required to train and validate Artificial Intelligence (AI) models for lung cancer screening with computed tomography (CT) scans. Our approach incorporates active learning with semisupervised expectation maximization to emulate the human in the loop for additional ground truth labels to train, evaluate, and update the neural network models. Bayesian experimental design is used to intelligently identify which unlabeled samples need ground truth labels to enhance the model’s performance. We evaluate the proposed Active Semi-supervised Expectation Maximization for Computer aided diagnosis (CAD) tasks (ASEM-CAD) using three public CT scans datasets: the National Lung Screening Trial (NLST), the Lung Image Database Consortium (LIDC), and Kaggle Data Science Bowl 2017 for lung cancer classification using CT scans. ASEM-CAD can accurately classify suspicious lung nodules and lung cancer cases with an area under the curve (AUC) of 0.94 (Kaggle), 0.95 (NLST), and 0.88 (LIDC) with significantly fewer labeled images compared to a fully supervised model. This study addresses one of the significant challenges in early lung cancer screenings using low-dose computed tomography (LDCT) scans and is a valuable contribution towards the development and validation of deep learning algorithms for lung cancer screening and other diagnostic radiology examinations.},
journal = {Applied Sciences},
volume = {13},
number = {6},
publisher = {MDPI},
author = {Nguyen, Phuong and Rathod, Ankita and Chapman, David and Prathapan, Smriti and Menon, Sumeet and Morris, Michael and Yesha, Yelena},
}