Title: 3D fully convolutional networks for co-segmentation of tumors on PET-CT images
Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation on PET-CT. In this paper, we propose a novel approach to lung tumor segmentation that combines a fully convolutional network (FCN) based semantic segmentation framework (3D U-Net) with a graph-cut based co-segmentation model. First, two deep U-Nets are trained separately on PET and CT to learn high-level discriminative features and to generate tumor/non-tumor masks and probability maps for the PET and CT images. The two probability maps are then employed simultaneously in a graph-cut based co-segmentation model to produce the final tumor segmentation. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.
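The two-stage pipeline above (per-modality U-Net probability maps, then graph-cut co-segmentation) can be sketched as follows. The paper solves an MRF with a minimum s-t cut over both volumes; the fuse-and-threshold step below is a deliberately crude stand-in that only illustrates how the two probability maps are combined, and `co_segment` and its `coupling` weight are hypothetical names, not the authors' code.

```python
import numpy as np

def co_segment(prob_pet, prob_ct, coupling=0.5):
    """Crude stand-in for the graph-cut co-segmentation step.

    prob_pet, prob_ct: per-voxel tumor probabilities produced by the
    two modality-specific U-Nets. The real method minimizes an MRF
    energy via a min s-t cut; here we simply blend the two maps with
    a coupling weight and threshold at 0.5 to show the data flow.
    """
    fused = coupling * prob_pet + (1.0 - coupling) * prob_ct
    return (fused > 0.5).astype(np.uint8)
```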
Zhong, Zisha; Kim, Yusung; Zhou, Leixin; Plichta, Kristin; Allen, Bryan; Buatti, John; Wu, Xiaodong
(2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018))
Positron emission tomography and computed tomography (PET-CT) plays a critically important role in modern cancer therapy. In this paper, we focus on automated tumor delineation on PET-CT image pairs. Inspired by the co-segmentation model, we develop a novel 3D image co-matting technique that exploits the intra-modality information of PET and CT for matting. The resulting co-matting outputs are then incorporated into the graph-cut based PET-CT co-segmentation framework. Comparative experiments on 32 PET-CT scan pairs of lung cancer patients demonstrate that the proposed 3D image co-matting technique significantly improves the quality of the cost images used for co-segmentation, yielding highly accurate tumor segmentation on both PET and CT scan pairs.
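The abstract says the matting output is turned into cost images for the graph-cut co-segmentation. One common way to do this, shown below as a sketch, is to map a per-voxel alpha matte to foreground/background unary costs via negative log-likelihood; the paper's exact cost construction may differ, and `matte_to_costs` is a hypothetical name.

```python
import math

def matte_to_costs(alpha, eps=1e-6):
    """Turn a per-voxel alpha matte (tumor opacity in [0, 1]) into
    foreground/background unary costs for a graph-cut step.

    Uses negative log-likelihood, a standard choice: high alpha means
    cheap foreground assignment, low alpha means cheap background.
    eps guards against log(0) at matte extremes.
    """
    fg = [-math.log(max(a, eps)) for a in alpha]
    bg = [-math.log(max(1.0 - a, eps)) for a in alpha]
    return fg, bg
```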
Baek, Stephen; He, Yusen; Allen, Bryan G.; Buatti, John M.; Smith, Brian J.; Tong, Ling; Sun, Zhiyu; Wu, Jia; Diehn, Maximilian; Loo, Billy W.; et al.
(, Scientific Reports)
Abstract Non-small-cell lung cancer (NSCLC) represents approximately 80–85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. To this end, easily calculated functional features such as the maximum and mean standard uptake value (SUV) and total lesion glycolysis (TLG) are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNNs) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform the tumor segmentation task, with no information other than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study of pre-treatment PET-CT images of 96 NSCLC patients before stereotactic body radiotherapy (SBRT), we found that the CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images contained features strongly correlated with 2- and 5-year overall and disease-specific survival. The U-Net was given no clinical information (e.g., survival, age, smoking history) beyond the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend when validating the U-Net features against an extramural data set provided by the Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we found convincing evidence that the regions of metastasis and recurrence match the regions where the U-Net features identified patterns predicting higher likelihoods of death.
We anticipate our findings will be a starting point for more sophisticated, non-intrusive, patient-specific cancer prognosis determination. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence could potentially improve therapeutic outcomes through optimal selection of the therapeutic strategy or first-line therapy adjustment.
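A typical first step in reusing segmentation-network activations for prognosis, as described above, is to collapse a bottleneck feature volume into a fixed-length per-patient descriptor before fitting a survival model. The sketch below shows only that pooling step with random stand-in activations; `global_average_pool` and the tensor shapes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_average_pool(feature_maps):
    """Collapse a 3D CNN bottleneck volume of shape (C, D, H, W) into
    a C-dimensional descriptor by averaging each channel over space.
    Such descriptors can then be correlated with survival outcomes."""
    return feature_maps.mean(axis=(1, 2, 3))

# Toy example: pool bottleneck features for a batch of 4 "patients",
# each with 32 channels over an 8x8x8 spatial grid.
feats = rng.normal(size=(4, 32, 8, 8, 8))
descriptors = np.stack([global_average_pool(f) for f in feats])
```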
Wu, Xiaodong; Zhong, Zisha; Buatti, John; Bai, Junjie
(, 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018))
With the remarkable progress in deep learning, deep networks are increasingly used in medical image analysis. In this paper, we formulate multi-scale segmentation as a Markov random field (MRF) energy minimization problem on a graph, which can be solved efficiently and exactly by computing a minimum s-t cut in an appropriately constructed graph. The performance of the proposed method is assessed on lung tumor segmentation in 38 mega-voltage cone-beam computed tomography datasets.
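The MRF formulation above assigns each node a unary cost per label plus a smoothness penalty for disagreeing neighbors, and the optimum is found exactly via a min s-t cut. The sketch below makes the energy concrete on a tiny binary chain; exhaustive search stands in for the s-t cut solver (feasible only because the chain is tiny), and `mrf_energy`/`minimize_brute_force` are hypothetical names.

```python
import itertools

def mrf_energy(labels, unary, pair_w):
    """Binary MRF energy on a 1D chain: the sum of per-node unary
    costs unary[i][label] plus a Potts penalty pair_w for every
    pair of neighboring nodes that take different labels."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(pair_w for a, b in zip(labels, labels[1:]) if a != b)
    return e

def minimize_brute_force(unary, pair_w):
    """Exhaustive minimization over all 2^n labelings; stands in for
    the exact min s-t cut solver used in the paper."""
    n = len(unary)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda ls: mrf_energy(ls, unary, pair_w))
```

With a weak smoothness weight the labeling follows the unary costs; with a strong one, a uniform labeling wins, which is exactly the regularizing behavior the graph-cut model exploits.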
We introduce an active, semi-supervised algorithm that uses Bayesian experimental design to address the shortage of annotated images required to train and validate artificial intelligence (AI) models for lung cancer screening with computed tomography (CT) scans. Our approach combines active learning with semi-supervised expectation maximization to emulate a human in the loop providing additional ground-truth labels to train, evaluate, and update the neural network models. Bayesian experimental design is used to intelligently identify which unlabeled samples need ground-truth labels to enhance the model's performance. We evaluate the proposed Active Semi-supervised Expectation Maximization for Computer-aided diagnosis (CAD) tasks (ASEM-CAD) using three public CT scan datasets for lung cancer classification: the National Lung Screening Trial (NLST), the Lung Image Database Consortium (LIDC), and the Kaggle Data Science Bowl 2017. ASEM-CAD can accurately classify suspicious lung nodules and lung cancer cases with an area under the curve (AUC) of 0.94 (Kaggle), 0.95 (NLST), and 0.88 (LIDC) with significantly fewer labeled images than a fully supervised model. This study addresses one of the significant challenges in early lung cancer screening using low-dose computed tomography (LDCT) scans and is a valuable contribution toward the development and validation of deep learning algorithms for lung cancer screening and other diagnostic radiology examinations.
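The acquisition step described above (deciding which unlabeled scans to send for labeling) is often implemented by ranking samples by predictive uncertainty. The sketch below uses Bernoulli predictive entropy as a toy stand-in for the paper's Bayesian experimental design criterion; `select_for_labeling` and its interface are assumptions for illustration.

```python
import math

def predictive_entropy(p):
    """Entropy of a Bernoulli predictive distribution, where p is the
    model's predicted probability that a scan contains cancer."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def select_for_labeling(probs, k):
    """Toy acquisition rule: rank unlabeled scans by predictive
    entropy and request ground-truth labels for the k most uncertain
    ones. Returns their indices, most uncertain first."""
    ranked = sorted(range(len(probs)),
                    key=lambda i: -predictive_entropy(probs[i]))
    return ranked[:k]
```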
Importance: Screening with low-dose computed tomography (CT) has been shown to reduce mortality from lung cancer in randomized clinical trials in which the rate of adherence to follow-up recommendations was over 90%; however, adherence to Lung Computed Tomography Screening Reporting & Data System (Lung-RADS) recommendations has been low in practice. Identifying patients who are at risk of being nonadherent to screening recommendations may enable personalized outreach to improve overall screening adherence.
Objective: To identify factors associated with patient nonadherence to Lung-RADS recommendations across multiple screening time points.
Design, Setting, and Participants: This cohort study was conducted at a single US academic medical center across 10 geographically distributed sites where lung cancer screening is offered. The study enrolled individuals who underwent low-dose CT screening for lung cancer between July 31, 2013, and November 30, 2021.
Exposures: Low-dose CT screening for lung cancer.
Main Outcomes and Measures: The main outcome was nonadherence to follow-up recommendations for lung cancer screening, defined as failing to complete a recommended or more invasive follow-up examination (ie, diagnostic dose CT, positron emission tomography–CT, or tissue sampling vs low-dose CT) within 15 months (Lung-RADS score, 1 or 2), 9 months (Lung-RADS score, 3), 5 months (Lung-RADS score, 4A), or 3 months (Lung-RADS score, 4B/X). Multivariable logistic regression was used to identify factors associated with patient nonadherence to baseline Lung-RADS recommendations. A generalized estimating equations model was used to assess whether the pattern of longitudinal Lung-RADS scores was associated with patient nonadherence over time.
Results: Among 1979 included patients, 1111 (56.1%) were aged 65 years or older at baseline screening (mean [SD] age, 65.3 [6.6] years), and 1176 (59.4%) were male. The odds of being nonadherent were lower among patients with a baseline Lung-RADS score of 1 or 2 vs 3 (adjusted odds ratio [AOR], 0.35; 95% CI, 0.25-0.50), 4A (AOR, 0.21; 95% CI, 0.13-0.33), or 4B/X (AOR, 0.10; 95% CI, 0.05-0.19); with a postgraduate vs college degree (AOR, 0.70; 95% CI, 0.53-0.92); with a family history of lung cancer vs no family history (AOR, 0.74; 95% CI, 0.59-0.93); with a high age-adjusted Charlson Comorbidity Index score (≥4) vs a low score (0 or 1) (AOR, 0.67; 95% CI, 0.46-0.98); in the high vs low income category (AOR, 0.79; 95% CI, 0.65-0.98); and referred by physicians from pulmonary or thoracic-related departments vs another department (AOR, 0.56; 95% CI, 0.44-0.73). Among 830 eligible patients who had completed at least 2 screening examinations, the adjusted odds of being nonadherent to Lung-RADS recommendations at the following screening were increased in patients with consecutive Lung-RADS scores of 1 to 2 (AOR, 1.38; 95% CI, 1.12-1.69).
Conclusions and Relevance: In this retrospective cohort study, patients with consecutive negative lung cancer screening results were more likely to be nonadherent with follow-up recommendations. These individuals are potential candidates for tailored outreach to improve adherence to recommended annual lung cancer screening.
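The adjusted odds ratios reported above come from multivariable logistic regression; the underlying quantity is easiest to see on an unadjusted 2x2 table. The sketch below computes that unadjusted odds ratio with made-up counts; `odds_ratio` and the numbers are illustrative only and do not reproduce the study's adjusted estimates.

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table:
    a = exposed & nonadherent,   b = exposed & adherent,
    c = unexposed & nonadherent, d = unexposed & adherent.
    OR = (a/b) / (c/d) = (a*d) / (b*c). Values below 1 mean the
    exposure is associated with lower odds of nonadherence."""
    return (a * d) / (b * c)

# Made-up counts for illustration: 10/100 nonadherent in the exposed
# group vs 20/100 in the unexposed group.
example_or = odds_ratio(10, 90, 20, 80)
```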
Zhong, Zisha, Kim, Yusung, Zhou, Leixin, Plichta, Kristin, Allen, Bryan, Buatti, John, and Wu, Xiaodong. 3D fully convolutional networks for co-segmentation of tumors on PET-CT images. Retrieved from https://par.nsf.gov/biblio/10285321. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Web. doi:10.1109/ISBI.2018.8363561.
Zhong, Zisha, Kim, Yusung, Zhou, Leixin, Plichta, Kristin, Allen, Bryan, Buatti, John, and Wu, Xiaodong.
"3D fully convolutional networks for co-segmentation of tumors on PET-CT images". 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Country unknown/Code not available. https://doi.org/10.1109/ISBI.2018.8363561. https://par.nsf.gov/biblio/10285321.
@article{osti_10285321,
place = {Country unknown/Code not available},
title = {3D fully convolutional networks for co-segmentation of tumors on PET-CT images},
url = {https://par.nsf.gov/biblio/10285321},
DOI = {10.1109/ISBI.2018.8363561},
abstractNote = {Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Automated accurate tumor delineation is essentially important in computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach for the segmentation of lung tumors that combines the powerful fully convolutional networks (FCN) based semantic segmentation framework (3D-UNet) and the graph cut based co-segmentation model. First, two separate deep UNets are trained on PET and CT, separately, to learn high level discriminative features to generate tumor/non-tumor masks and probability maps for PET and CT images. Then, the two probability maps on PET and CT are further simultaneously employed in a graph cut based co-segmentation model to produce the final tumor segmentation results. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.},
journal = {2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)},
author = {Zhong, Zisha and Kim, Yusung and Zhou, Leixin and Plichta, Kristin and Allen, Bryan and Buatti, John and Wu, Xiaodong},
}