Title: Improving tumor co-segmentation on PET-CT images with 3D co-matting
Positron emission tomography and computed tomography (PET-CT) imaging plays a critically important role in modern cancer therapy. In this paper, we focus on automated tumor delineation on PET-CT image pairs. Inspired by the co-segmentation model, we develop a novel 3D image co-matting technique that makes use of the inner-modality information of PET and CT for matting. The obtained co-matting results are then incorporated into a graph-cut-based PET-CT co-segmentation framework. Comparative experiments on 32 PET-CT scan pairs of lung cancer patients demonstrate that the proposed 3D image co-matting technique significantly improves the quality of the cost images for co-segmentation, resulting in highly accurate tumor segmentation on both the PET and CT scans.
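The final stage of this pipeline is a graph cut over per-voxel cost images derived from the co-matting maps. The following is a minimal sketch of such a 3D graph-cut step, assuming the PyMaxflow package and a constant 6-neighborhood smoothness penalty; the paper's actual co-matting energy and inter-scan coupling are not reproduced here, and the alpha-map fusion shown in the usage comment is a hypothetical choice.

```python
# Minimal sketch: binary 3D graph cut over two cost volumes, using PyMaxflow.
# The fused-alpha unaries in the usage comment are a hypothetical stand-in
# for the paper's co-matting-derived cost images.
import numpy as np
import maxflow  # pip install PyMaxflow

def graph_cut_segment(cost_tumor, cost_background, smoothness=1.0):
    """Binary 3D segmentation from two per-voxel cost volumes."""
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(cost_tumor.shape)
    # n-links: constant smoothness penalty between 6-connected neighbors
    g.add_grid_edges(node_ids, smoothness)
    # t-links: per-voxel unary costs coupling each voxel to the two labels
    g.add_grid_tedges(node_ids, cost_tumor, cost_background)
    g.maxflow()
    return g.get_grid_segments(node_ids)  # boolean label volume

# Hypothetical usage with co-matting alpha maps alpha_pet, alpha_ct in [0, 1]:
# alpha = 0.5 * (alpha_pet + alpha_ct)
# labels = graph_cut_segment(-np.log(alpha + 1e-6), -np.log(1 - alpha + 1e-6))
```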
Zhong, Zisha; Kim, Yusung; Zhou, Leixin; Plichta, Kristin; Allen, Bryan; Buatti, John; Wu, Xiaodong
(2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018))
Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach to lung tumor segmentation that combines a fully convolutional network (FCN) semantic segmentation framework (3D U-Net) with a graph-cut-based co-segmentation model. First, two deep U-Nets are trained separately on PET and CT to learn high-level discriminative features and generate tumor/non-tumor masks and probability maps for the PET and CT images. The two probability maps are then jointly employed in a graph-cut-based co-segmentation model to produce the final tumor segmentation. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.
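As a companion to the graph-cut sketch above, here is a hedged sketch of the two-stream inference step, assuming two pre-trained 3D U-Nets (hypothetical names `unet_pet`, `unet_ct`) with two-class softmax heads; the joint graph-cut energy that couples the two probability maps is not reproduced.

```python
# Sketch: per-modality tumor probability maps from two trained 3D U-Nets
# (hypothetical `unet_pet`, `unet_ct`), turned into graph-cut unary costs.
import torch

@torch.no_grad()
def tumor_probabilities(unet_pet, unet_ct, pet_vol, ct_vol):
    """pet_vol, ct_vol: (1, 1, D, H, W) tensors; nets emit (1, 2, D, H, W) logits."""
    p_pet = torch.softmax(unet_pet(pet_vol), dim=1)[0, 1]  # tumor channel
    p_ct = torch.softmax(unet_ct(ct_vol), dim=1)[0, 1]
    return p_pet, p_ct

def unary_costs(prob, eps=1e-6):
    """Negative log-likelihood unaries for the (tumor, background) labels."""
    return -torch.log(prob + eps), -torch.log(1.0 - prob + eps)
```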
Baek, Stephen; He, Yusen; Allen, Bryan G.; Buatti, John M.; Smith, Brian J.; Tong, Ling; Sun, Zhiyu; Wu, Jia; Diehn, Maximilian; Loo, Billy W.; et al.
(Scientific Reports)
Non-small-cell lung cancer (NSCLC) represents approximately 80–85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. To this end, easily calculated functional features such as the maximum and mean standard uptake value (SUV) and total lesion glycolysis (TLG) are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNNs) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform tumor segmentation, with no information other than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study on pre-treatment PET-CT images of 96 NSCLC patients treated with stereotactic body radiotherapy (SBRT), we found that a CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images contained features strongly correlated with 2- and 5-year overall and disease-specific survival. The U-Net saw no clinical information (e.g., survival, age, smoking history) other than the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend when validating the U-Net features against an external dataset provided by the Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we found convincing evidence that regions of metastasis and recurrence appear to match the regions where the U-Net features identified patterns predictive of a higher likelihood of death. We anticipate our findings will be a starting point for more sophisticated non-invasive, patient-specific cancer prognosis determination. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence potentially impact therapeutic outcomes through optimal selection of a therapeutic strategy or first-line therapy adjustment.
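A hedged sketch of how segmentation-trained features might be probed for prognostic signal, assuming a trained U-Net whose bottleneck module is reachable as `unet.bottleneck` (a hypothetical attribute) and a binary 2-year survival label; the paper's feature extraction and statistical validation are far richer than this linear probe.

```python
# Sketch: pool bottleneck activations of a segmentation-trained U-Net and
# probe them with a simple classifier. `unet`, `volumes`, and `survived_2yr`
# are hypothetical placeholders for a trained model and a labeled cohort.
import torch
from sklearn.linear_model import LogisticRegressionCV

feats = []
hook = unet.bottleneck.register_forward_hook(  # hypothetical attribute name
    lambda module, inp, out: feats.append(out.mean(dim=(2, 3, 4)))  # GAP over D, H, W
)
with torch.no_grad():
    for vol in volumes:   # (1, 1, D, H, W) PET or CT volumes
        unet(vol)         # segmentation forward pass; hook collects features
hook.remove()

X = torch.cat(feats).cpu().numpy()  # (n_patients, n_features)
probe = LogisticRegressionCV(max_iter=5000).fit(X, survived_2yr)
print("in-sample accuracy of the linear probe:", probe.score(X, survived_2yr))
```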
Early intervention in kidney cancer helps to improve survival rates. Abdominal computed tomography (CT) is often used to diagnose renal masses. In clinical practice, manual segmentation and quantification of organs and tumors are expensive and time-consuming. Artificial intelligence (AI) has shown significant promise in assisting cancer diagnosis. To reduce the workload of manual segmentation and avoid unnecessary biopsies or surgeries, in this paper we propose a novel end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify abnormal areas of the kidney and diagnose the histological subtypes of renal cell carcinoma (RCC). The proposed framework first segments the kidney and renal mass regions with a 3D deep learning architecture (Res-UNet), followed by a dual-path classification network that utilizes local and global features to predict the most common RCC subtypes: clear cell, chromophobe, oncocytoma, papillary, and other. To improve the robustness of the framework on data collected from various institutions, a weakly supervised learning scheme is proposed that addresses the domain gap between vendors using very few annotated CT slices. Our proposed diagnosis system accurately segments the kidney and renal mass regions and predicts tumor subtypes, outperforming existing methods on the KiTS19 dataset. Furthermore, cross-dataset validation demonstrates its robustness on data collected from different institutions when trained via the weakly supervised learning scheme.
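A minimal sketch of the dual-path idea, assuming generic 3D encoders for the local tumor crop and the global kidney ROI; the paper's Res-UNet segmentation stage and weakly supervised adaptation are not reproduced here.

```python
# Sketch: dual-path RCC subtype classifier fusing a local tumor crop with
# global kidney context. The encoders are assumed to be any 3D CNNs that
# map a volume to a (B, feat_dim) feature vector.
import torch
import torch.nn as nn

class DualPathClassifier(nn.Module):
    """Fuses features from a tight tumor crop and the whole kidney ROI."""
    def __init__(self, encoder_local, encoder_global, feat_dim=512, n_subtypes=5):
        super().__init__()
        self.encoder_local = encoder_local    # sees a tight crop around the mass
        self.encoder_global = encoder_global  # sees the whole kidney ROI
        self.head = nn.Linear(2 * feat_dim, n_subtypes)

    def forward(self, local_crop, global_roi):
        f = torch.cat([self.encoder_local(local_crop),
                       self.encoder_global(global_roi)], dim=1)
        return self.head(f)  # logits over the RCC subtypes
```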
Hu R.; Cui J.; Li C.; Yu C.; Chen Y.; Liu H.
(Physics in Medicine and Biology)
Objective. Dynamic positron emission tomography (PET) imaging, which can provide information on dynamic changes in physiological metabolism, is now widely used in clinical diagnosis and cancer treatment. However, reconstruction from dynamic data is extremely challenging due to the limited counts received in individual frames, especially ultra-short frames. Recently, unrolled model-based deep learning methods have shown inspiring results for low-count PET image reconstruction with good interpretability. Nevertheless, existing model-based deep learning methods mainly focus on spatial correlations while ignoring the temporal domain. Approach. In this paper, inspired by the learned primal dual (LPD) algorithm, we propose the spatio-temporal primal dual network (STPDnet) for dynamic low-count PET image reconstruction. Both spatial and temporal correlations are encoded by 3D convolution operators. The physical projection of PET is embedded in the iterative learning process of the network, which provides physical constraints and enhances interpretability. Main results. Experiments on both simulated data and real rat scan data show that the proposed method achieves substantial noise reduction in both the temporal and spatial domains and outperforms maximum likelihood expectation maximization, the spatio-temporal kernel method, LPD, and FBPnet. Significance. The experimental results show that STPDnet delivers better reconstruction performance in low-count situations, which makes the proposed method particularly suitable for whole-body dynamic imaging and parametric PET imaging, which require extremely short frames and usually suffer from high levels of noise.
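A hedged sketch of one unrolled primal-dual iteration in the spirit of LPD, assuming differentiable forward and adjoint PET projectors `A` and `A_T` (hypothetical callables mapping image space to sinogram space and back); STPDnet additionally encodes the time axis, which the small 3D-convolutional blocks below only gesture at.

```python
# Sketch: one learned primal-dual iteration. `A` / `A_T` stand in for a
# differentiable PET projector and its adjoint; real systems expose these
# through dedicated tomography toolboxes.
import torch
import torch.nn as nn

class PrimalDualIteration(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.dual_net = nn.Sequential(                 # operates in sinogram space
            nn.Conv3d(3, channels, 3, padding=1), nn.PReLU(),
            nn.Conv3d(channels, 1, 3, padding=1))
        self.primal_net = nn.Sequential(               # operates in image space
            nn.Conv3d(2, channels, 3, padding=1), nn.PReLU(),
            nn.Conv3d(channels, 1, 3, padding=1))

    def forward(self, primal, dual, y, A, A_T):
        # dual step: compare the current estimate's projection with the data y
        dual = dual + self.dual_net(torch.cat([dual, A(primal), y], dim=1))
        # primal step: back-project the dual variable into image space
        primal = primal + self.primal_net(torch.cat([primal, A_T(dual)], dim=1))
        return primal, dual
```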
Magnetic particle imaging (MPI) is an emerging noninvasive molecular imaging modality with high sensitivity and specificity, exceptional linear quantitative ability, and potential for successful application in clinical settings. Computed tomography (CT) is typically combined with the MPI image to obtain more anatomical information. Herein, a deep learning-based approach for MPI-CT image segmentation is presented. The dataset used to train the proposed deep learning model is obtained from a transgenic mouse model of breast cancer following administration of indocyanine green (ICG)-conjugated superparamagnetic iron oxide nanoworms (NWs-ICG) as the tracer. The NWs-ICG particles progressively accumulate in tumors due to the enhanced permeability and retention (EPR) effect. The proposed deep learning model exploits the advantages of the multihead attention mechanism and the U-Net model to segment the MPI-CT images, with strong results. In addition, the model is evaluated with different numbers of attention heads to explore the optimal number for our custom MPI-CT dataset.
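A minimal sketch of placing multihead self-attention at a U-Net bottleneck, assuming 2D feature maps; the paper's full architecture and its head-count sweep are not reproduced here.

```python
# Sketch: treat bottleneck feature-map positions as tokens and apply
# multihead self-attention, a common way to combine attention with U-Net.
import torch
import torch.nn as nn

class AttentionBottleneck(nn.Module):
    """Multihead self-attention over the spatial positions of a feature map."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)  # residual connection + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```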
Zhong, Zisha, Kim, Yusung, Zhou, Leixin, Plichta, Kristin, Allen, Bryan, Buatti, John, and Wu, Xiaodong. Improving tumor co-segmentation on PET-CT images with 3D co-matting. Retrieved from https://par.nsf.gov/biblio/10285320. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Web. doi:10.1109/ISBI.2018.8363560.
Zhong, Zisha, Kim, Yusung, Zhou, Leixin, Plichta, Kristin, Allen, Bryan, Buatti, John, and Wu, Xiaodong. "Improving tumor co-segmentation on PET-CT images with 3D co-matting". 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). https://doi.org/10.1109/ISBI.2018.8363560. https://par.nsf.gov/biblio/10285320.
@inproceedings{osti_10285320,
  title = {Improving tumor co-segmentation on PET-CT images with 3D co-matting},
  author = {Zhong, Zisha and Kim, Yusung and Zhou, Leixin and Plichta, Kristin and Allen, Bryan and Buatti, John and Wu, Xiaodong},
  booktitle = {2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)},
  year = {2018},
  doi = {10.1109/ISBI.2018.8363560},
  url = {https://par.nsf.gov/biblio/10285320},
  abstractNote = {Positron emission tomography and computed tomography (PET-CT) imaging plays a critically important role in modern cancer therapy. In this paper, we focus on automated tumor delineation on PET-CT image pairs. Inspired by the co-segmentation model, we develop a novel 3D image co-matting technique that makes use of the inner-modality information of PET and CT for matting. The obtained co-matting results are then incorporated into a graph-cut-based PET-CT co-segmentation framework. Comparative experiments on 32 PET-CT scan pairs of lung cancer patients demonstrate that the proposed 3D image co-matting technique significantly improves the quality of the cost images for co-segmentation, resulting in highly accurate tumor segmentation on both the PET and CT scans.}
}