Background: At the time of cancer diagnosis, it is crucial to accurately classify malignant gastric tumors and estimate patients' likelihood of survival. Objective: This study aims to investigate the feasibility of identifying and applying a new feature extraction technique to predict the survival of gastric cancer patients. Methods: A retrospective dataset comprising the computed tomography (CT) images of 135 patients was assembled; 68 of these patients survived longer than three years. Several sets of radiomics features were extracted and incorporated into a machine learning model, and their classification performance was characterized. To improve classification performance, we further extracted another 27 texture and roughness parameters with 2484 superficial and spatial features to propose a new feature pool. This new feature set was added to the machine learning model and its performance was analyzed. To determine the best model for our experiment, four of the most widely used machine learning classifiers were evaluated: Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naïve Bayes (NB). The models were trained and tested using five-fold cross-validation. Results: Using the area under the ROC curve (AUC) as the evaluation index, the model generated using the new feature pool yielded AUC = 0.98 ± 0.01, significantly higher than the models created using the traditional radiomics feature set (p < 0.04). The RF classifier performed better than the other machine learning models. Conclusions: This study demonstrated that although radiomics features produced good classification performance, creating new feature sets significantly improved model performance.
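As a rough illustration of the workflow this abstract describes (a radiomics feature matrix evaluated with four classifiers under five-fold cross-validated AUC), the sketch below uses scikit-learn. The synthetic feature matrix, classifier settings, and random seeds are stand-ins, not the study's data or code.

```python
# Minimal sketch (illustrative only): compare RF, SVM, KNN, and Naive Bayes on a
# radiomics-style feature matrix using five-fold cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Stand-in for the feature matrix (135 patients, binary 3-year survival label).
X, y = make_classification(n_samples=135, n_features=100, n_informative=15, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "NB": GaussianNB(),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```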
Radiomics on spatial‐temporal manifolds via Fokker–Planck dynamics
Abstract
Background: Delta radiomics is a high-throughput computational technique used to describe quantitative changes in serial, time-series imaging by considering the relative change in radiomic features of images extracted at two distinct time points. Recent work has demonstrated a lack of prognostic signal of radiomic features extracted using this technique. We hypothesize that this lack of signal is due to the fundamental assumptions made when extracting features via delta radiomics, and that other methods should be investigated.
Purpose: The purpose of this work was to show a proof-of-concept of a new radiomics paradigm for sparse, time-series imaging data, where features are extracted from a spatial-temporal manifold modeling the time evolution between images, and to assess the prognostic value on patients with oropharyngeal cancer (OPC).
Methods: To accomplish this, we developed an algorithm to mathematically describe the relationship between two images acquired at two distinct time points. These images serve as boundary conditions of a partial differential equation describing the transition from one image to the other. To solve this equation, we propagate the position and momentum of each voxel according to Fokker–Planck dynamics (i.e., a technique common in statistical mechanics). This transformation is driven by an underlying potential force uniquely determined by the equilibrium image. The solution generates a spatial-temporal manifold (3 spatial dimensions + time) from which we define dynamic radiomic features. First, our approach was numerically verified by stochastically sampling dynamic Gaussian processes of monotonically decreasing noise. The transformation from high to low noise was compared between our Fokker–Planck estimation and simulated ground truth. To demonstrate feasibility and clinical impact, we applied our approach to 18F-FDG-PET images to estimate early metabolic response of patients (n = 57) undergoing definitive (chemo)radiation for OPC. Images were acquired pre-treatment and 2 weeks intra-treatment (after 20 Gy). Dynamic radiomic features capturing changes in texture and morphology were then extracted. Patients were partitioned into two groups based on similar dynamic radiomic feature expression via k-means clustering and compared by Kaplan–Meier analyses with log-rank tests (p < 0.05). These results were compared to conventional delta radiomics to test the added value of our approach.
Results: Numerical results confirmed our technique can recover image noise characteristics given sparse input data as boundary conditions. Our technique was able to model tumor shrinkage and metabolic response. While no delta radiomics features proved prognostic, Kaplan–Meier analyses identified nine significant dynamic radiomic features. The most significant feature was Gray-Level-Size-Zone-Matrix gray-level variance (p = 0.011), which demonstrated prognostic improvement over its corresponding delta radiomic feature (p = 0.722).
Conclusions: We developed, verified, and demonstrated the prognostic value of a novel, physics-based radiomics approach over conventional delta radiomics via data assimilation of quantitative imaging and differential equations.
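The following is a deliberately simplified, one-dimensional sketch of the core idea, under the assumption of an overdamped Fokker–Planck equation whose potential is fixed by the later (equilibrium) image; the authors' actual method propagates voxel positions and momenta in 3D. The function name, profiles, and parameter values are all illustrative.

```python
# Conceptual 1D sketch (not the authors' implementation): two intensity profiles
# act as boundary conditions, the later one defines a potential U = -D*log(p_eq),
# and the Fokker-Planck equation
#     dp/dt = d/dx(p dU/dx) + D d^2p/dx^2
# relaxes the earlier profile toward it. The sampled intermediate profiles play
# the role of the spatial-temporal manifold from which dynamic features would be
# computed.
import numpy as np

def fokker_planck_manifold(p_start, p_eq, dx, D=1.0, dt=1e-4, n_steps=2000, n_frames=10):
    """Evolve p_start toward p_eq and return sampled intermediate profiles."""
    p = p_start / p_start.sum()
    p_eq = p_eq / p_eq.sum()
    dUdx = np.gradient(-D * np.log(p_eq + 1e-12), dx)  # drift fixed by equilibrium image
    frames = [p.copy()]
    for step in range(1, n_steps + 1):
        drift = np.gradient(p * dUdx, dx)                    # d/dx (p dU/dx)
        diffusion = D * np.gradient(np.gradient(p, dx), dx)  # D d^2p/dx^2
        p = np.clip(p + dt * (drift + diffusion), 0.0, None)
        p /= p.sum()                                         # keep a valid density
        if step % (n_steps // n_frames) == 0:
            frames.append(p.copy())
    return np.array(frames)

# Toy boundary conditions: a broad "pre-treatment" profile relaxing toward a
# narrower "intra-treatment" profile (hypothetical stand-ins for two PET scans).
x = np.linspace(-3.0, 3.0, 128)
pre = np.exp(-0.5 * (x / 1.5) ** 2)
intra = np.exp(-0.5 * (x / 0.5) ** 2)
manifold = fokker_planck_manifold(pre, intra, dx=x[1] - x[0])
print(manifold.shape)  # (n_frames + 1, 128): one row per point on the temporal axis
```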
- Award ID(s): 2106988
- PAR ID: 10530679
- Publisher / Repository: American Association of Physicists in Medicine
- Date Published:
- Journal Name: Medical Physics
- Volume: 51
- Issue: 5
- ISSN: 0094-2405
- Page Range / eLocation ID: 3334–3347
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Processing and modeling medical images have traditionally been complex tasks requiring multidisciplinary collaboration. The advent of radiomics has assigned a central role to quantitative data analytics targeting medical image features algorithmically extracted from large volumes of images. Apart from the ultimate goal of supporting diagnostic, prognostic, and therapeutic decisions, radiomics is computationally attractive due to specific strengths: scalability, efficiency, and precision. Optimization is achieved by highly sophisticated statistical and machine learning algorithms, but it is especially deep learning that stands out as the leading inference approach. Various types of hybrid learning can be considered when building complex integrative approaches aimed at delivering gains in accuracy for both classification and prediction tasks. This perspective reviews selected learning methods, focusing on both their significance for radiomics and their unveiled potential.
Abstract Background: Recent developments to segment and characterize the regions of interest (ROI) within medical images have led to promising shape analysis studies. However, the procedures to analyze the ROI are arbitrary and vary by study. A tool to translate the ROI to analyzable shape representations and features is greatly needed. Results: We developed SAFARI (shape analysis for AI-segmented images), an open-source package with a user-friendly online tool kit for ROI labelling and shape feature extraction of segmented maps, provided by AI algorithms or manual segmentation. We demonstrated that half of the shape features extracted by SAFARI were significantly associated with survival outcomes in a case study on 143 consecutive patients with stage I–IV lung cancer and another case study on 61 glioblastoma patients. Conclusions: SAFARI is an efficient and easy-to-use toolkit for segmenting and analyzing ROI in medical images. It can be downloaded from the comprehensive R archive network (CRAN) and accessed at https://lce.biohpc.swmed.edu/safari/.
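SAFARI itself is an R/CRAN package; the snippet below is only a Python analogue of the kind of shape descriptors it extracts from a segmented ROI mask, written with scikit-image rather than the SAFARI API. The mask and the chosen features are hypothetical.

```python
# Illustrative shape-descriptor extraction from a binary segmentation mask
# (1 = ROI, 0 = background) using scikit-image; not the SAFARI package itself.
import numpy as np
from skimage import measure

# Hypothetical elliptical ROI standing in for an AI- or manually-segmented region.
mask = np.zeros((128, 128), dtype=np.uint8)
rr, cc = np.ogrid[:128, :128]
mask[(rr - 64) ** 2 / 900 + (cc - 64) ** 2 / 400 <= 1] = 1

labeled = measure.label(mask)
for region in measure.regionprops(labeled):
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2)
    print({
        "area": region.area,
        "perimeter": round(region.perimeter, 1),
        "eccentricity": round(region.eccentricity, 3),
        "solidity": round(region.solidity, 3),
        "circularity": round(circularity, 3),
    })
```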
Abstract Prostate cancer treatment decisions rely heavily on subjective visual interpretation [assigning Gleason patterns or International Society of Urological Pathology (ISUP) grade groups] of limited numbers of two‐dimensional (2D) histology sections. Under this paradigm, interobserver variance is high, with ISUP grades not correlating well with outcome for individual patients, and this contributes to the over‐ and undertreatment of patients. Recent studies have demonstrated improved prognostication of prostate cancer outcomes based on computational analyses of glands and nuclei within 2D whole slide images. Our group has also shown that the computational analysis of three‐dimensional (3D) glandular features, extracted from 3D pathology datasets of whole intact biopsies, can allow for improved recurrence prediction compared to corresponding 2D features. Here we seek to expand on these prior studies by exploring the prognostic value of 3D shape‐based nuclear features in prostate cancer (e.g. nuclear size, sphericity). 3D pathology datasets were generated using open‐top light‐sheet (OTLS) microscopy of 102 cancer‐containing biopsies extracted ex vivo from the prostatectomy specimens of 46 patients. A deep learning‐based workflow was developed for 3D nuclear segmentation within the glandular epithelium versus stromal regions of the biopsies. 3D shape‐based nuclear features were extracted, and a nested cross‐validation scheme was used to train a supervised machine classifier based on 5‐year biochemical recurrence (BCR) outcomes. Nuclear features of the glandular epithelium were found to be more prognostic than stromal cell nuclear features (area under the ROC curve [AUC] = 0.72 versus 0.63). 3D shape‐based nuclear features of the glandular epithelium were also more strongly associated with the risk of BCR than analogous 2D features (AUC = 0.72 versus 0.62). The results of this preliminary investigation suggest that 3D shape‐based nuclear features are associated with prostate cancer aggressiveness and could be of value for the development of decision‐support tools. © 2023 The Pathological Society of Great Britain and Ireland.
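For readers unfamiliar with the nested cross-validation scheme mentioned above, here is a generic scikit-learn sketch (not the authors' pipeline): an inner loop tunes hyperparameters while the outer loop provides an unbiased AUC estimate. The synthetic features, classifier choice, and parameter grid are placeholders.

```python
# Generic nested cross-validation sketch for a recurrence (BCR) classifier
# trained on shape-based nuclear features; all data and settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Stand-in for per-biopsy 3D nuclear features and binary 5-year BCR labels.
X, y = make_classification(n_samples=102, n_features=30, n_informative=8, random_state=1)

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)  # hyperparameter tuning
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # performance estimation

param_grid = {"max_depth": [3, 5, None], "n_estimators": [100, 300]}
tuned_model = GridSearchCV(
    RandomForestClassifier(random_state=1), param_grid, cv=inner_cv, scoring="roc_auc"
)
outer_auc = cross_val_score(tuned_model, X, y, cv=outer_cv, scoring="roc_auc")
print(f"nested-CV AUC: {outer_auc.mean():.2f} +/- {outer_auc.std():.2f}")
```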
Abstract Prostate cancer treatment planning is largely dependent upon examination of core-needle biopsies. The microscopic architecture of the prostate glands forms the basis for prognostic grading by pathologists. Interpretation of these convoluted three-dimensional (3D) glandular structures via visual inspection of a limited number of two-dimensional (2D) histology sections is often unreliable, which contributes to the under- and overtreatment of patients. To improve risk assessment and treatment decisions, we have developed a workflow for nondestructive 3D pathology and computational analysis of whole prostate biopsies labeled with a rapid and inexpensive fluorescent analogue of standard hematoxylin and eosin (H&E) staining. This analysis is based on interpretable glandular features and is facilitated by the development of image translation–assisted segmentation in 3D (ITAS3D). ITAS3D is a generalizable deep learning–based strategy that enables tissue microstructures to be volumetrically segmented in an annotation-free and objective (biomarker-based) manner without requiring immunolabeling. As a preliminary demonstration of the translational value of a computational 3D versus a computational 2D pathology approach, we imaged 300 ex vivo biopsies extracted from 50 archived radical prostatectomy specimens, of which 118 biopsies contained cancer. The 3D glandular features in cancer biopsies were superior to corresponding 2D features for risk stratification of patients with low- to intermediate-risk prostate cancer based on their clinical biochemical recurrence outcomes. The results of this study support the use of computational 3D pathology for guiding the clinical management of prostate cancer. Significance: An end-to-end pipeline for deep learning–assisted computational 3D histology analysis of whole prostate biopsies shows that nondestructive 3D pathology has the potential to enable superior prognostic stratification of patients with prostate cancer.