Title: Computational Analysis of a Light-Weight SUVr Processing Technique for Neuroimaging Alzheimer’s Disease
The Standard Uptake Value (SUV) is conventionally calculated using the ratio of the injected PET radiotracer dose to subject body weight (Binj). SUVs are used to obtain SUV ratios (SUVr), an important metric in many Alzheimer's Disease (AD) neuroimaging studies. However, SUVr can be obtained using only neuroimaging data, bypassing the need for Binj. This paper proposes the SUVr-LightWeight (SUVr-LW) algorithm, which does not rely on clinical data and instead operates directly on PET intensity values. SUVr-LW was evaluated on the Centiloid Project Florbetaben (FBB) subject cohort and reached a linear regression slope of 0.98, while the healthy-control subjects produced a slope of 0.87.
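The abstract does not reproduce the algorithm itself, but the key observation — that SUVr can be computed without the injected dose — follows from SUVr being a ratio of SUVs, in which the dose-per-body-weight normalization cancels. The sketch below only illustrates that cancellation; the function names and region masks are illustrative, not the authors' implementation:

```python
import numpy as np

def suv(pet_activity, injected_dose, body_weight):
    """Conventional SUV: tissue activity normalized by injected dose per body weight."""
    return pet_activity / (injected_dose / body_weight)

def suvr_conventional(pet_img, target_mask, ref_mask, injected_dose, body_weight):
    """SUVr from SUV images: mean target-region SUV over mean reference-region SUV."""
    suv_img = suv(pet_img, injected_dose, body_weight)
    return suv_img[target_mask].mean() / suv_img[ref_mask].mean()

def suvr_lightweight(pet_img, target_mask, ref_mask):
    """SUVr from raw PET intensities only: the scalar dose/weight factor is
    identical in numerator and denominator, so it cancels in the ratio."""
    return pet_img[target_mask].mean() / pet_img[ref_mask].mean()
```

Because the normalization is a single scalar per scan, both functions return the same value for the same image and masks, which is why the ratio can be formed from image data alone.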
Award ID(s): 1920182, 1551221
PAR ID: 10458805
Journal Name: 2022 International Conference on Computational Science and Computational Intelligence (CSCI)
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose To develop a method of biologically guided deep learning for post-radiation 18F-FDG-PET image outcome prediction based on pre-radiation images and radiotherapy dose information. Methods Based on the classic reaction–diffusion mechanism, a novel biological model was proposed using a partial differential equation that incorporates the spatial radiation dose distribution as a patient-specific treatment information variable. A 7-layer encoder–decoder-based convolutional neural network (CNN) was designed and trained to learn the proposed biological model. As such, the model could generate post-radiation 18F-FDG-PET image outcome predictions with breakdown biological components for enhanced explainability. The proposed method was developed using 64 oropharyngeal patients with paired 18F-FDG-PET studies before and after 20-Gy delivery (2 Gy/day fraction) by intensity-modulated radiotherapy (IMRT). In a two-branch deep learning execution, the proposed CNN learns specific terms of the biological model from paired 18F-FDG-PET images and the spatial dose distribution in one branch, while the biological model generates the post-20-Gy 18F-FDG-PET image prediction in the other branch. In the 2D execution, 718/233/230 axial slices from 38/13/13 patients were used for training/validation/independent testing. The predicted images in test cases were compared quantitatively with the ground-truth results. Results The proposed method successfully generated post-20-Gy 18F-FDG-PET image outcome predictions with breakdown illustrations of the biological model components. Mean standardized uptake values (SUV) in 18F-FDG high-uptake regions of the predicted images (2.45 ± 0.25) were similar to the ground-truth results (2.51 ± 0.33). In 2D-based Gamma analysis, the median/mean Gamma Index (<1) passing rate of test images was 96.5%/92.8% using the 5%/5 mm criterion; this improved to 99.9%/99.6% when 10%/10 mm was adopted.
Conclusion The developed biologically guided deep learning method achieved post-20-Gy 18F-FDG-PET image outcome predictions in good agreement with ground-truth results. With the breakdown biological modeling components, the outcome image predictions could be used in adaptive radiotherapy decision-making to optimize personalized plans for the best outcome in the future.
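The abstract names the classic reaction–diffusion mechanism with spatial dose as an added patient-specific variable, but does not give the equation. A generic form consistent with that description (all symbols and the specific terms here are assumptions, not the authors' notation) would be:

```latex
\frac{\partial u}{\partial t} =
    \underbrace{\nabla \cdot \left( D \, \nabla u \right)}_{\text{diffusion}}
  + \underbrace{\rho \, u \left( 1 - \frac{u}{K} \right)}_{\text{proliferation}}
  - \underbrace{\alpha \, d(\mathbf{x}) \, u}_{\text{radiation response}}
```

where u(x, t) would be the FDG uptake, D a diffusion coefficient, ρ a proliferation rate, K a carrying capacity, d(x) the spatial dose distribution, and α a radiosensitivity parameter; in the two-branch setup described above, the CNN branch would learn the patient-specific terms while the PDE branch generates the prediction.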
  2. Abstract Non-small-cell lung cancer (NSCLC) represents approximately 80–85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. To this end, easily calculated functional features such as the maximum and the mean of standard uptake value (SUV) and total lesion glycolysis (TLG) are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNN) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform the tumor segmentation task, with no other information than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study on pre-treatment PET-CT images of 96 NSCLC patients before stereotactic-body radiotherapy (SBRT), we found that the CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images, contained features having strong correlation with 2- and 5-year overall and disease-specific survivals. The U-Net algorithm has not seen any other clinical information (e.g. survival, age, smoking history, etc.) than the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend by validating the U-Net features against an extramural data set provided by Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we also found convincing evidence that the regions of metastasis and recurrence appear to match with the regions where the U-Net features identified patterns that predicted higher likelihoods of death. 
We anticipate our findings will be a starting point for more sophisticated, non-intrusive, patient-specific cancer prognosis determination. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence potentially impact therapeutic outcomes through optimal selection of therapeutic strategy or first-line therapy adjustment.
  3. Abstract. Clouds warm the surface in the longwave (LW), and this warming effect can be quantified through the surface LW cloud radiative effect (CRE). The global surface LW CRE has been estimated over more than 2 decades using space-based radiometers (2000–2021) and over the 5-year period ending in 2011 using the combination of radar, lidar and space-based radiometers. Previous work comparing these two types of retrievals has shown that the radiometer-based cloud amount has some bias over icy surfaces. Here we propose new estimates of the global surface LW CRE from space-based lidar observations over the 2008–2020 time period. We show from 1D atmospheric column radiative transfer calculations that surface LW CRE linearly decreases with increasing cloud altitude. These computations allow us to establish simple parameterizations between surface LW CRE and five cloud properties that are well observed by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) space-based lidar: opaque cloud cover and altitude, and thin cloud cover, altitude, and emissivity. We evaluate this new surface LW CRE–LIDAR product by comparing it to existing satellite-derived products globally, on instantaneous collocated data at footprint scale and on global averages, as well as to ground-based observations at specific locations. This evaluation shows good correlations between this new product and other datasets. Our estimate appears to be an improvement over others, as it appropriately captures the annual variability of the surface LW CRE over bright polar surfaces and it provides a dataset more than 13 years long.
  4. Abstract Background In Alzheimer's Disease (AD) research, multimodal imaging analysis can unveil complementary information from multiple imaging modalities and further our understanding of the disease. One application is to discover disease subtypes using unsupervised clustering. However, existing clustering methods are often applied to input features directly and can suffer from the curse of dimensionality with high-dimensional multimodal data. The purpose of our study is to identify multimodal imaging-driven subtypes in Mild Cognitive Impairment (MCI) participants using a multiview learning framework based on Deep Generalized Canonical Correlation Analysis (DGCCA), which learns a shared low-dimensional latent representation from 3 neuroimaging modalities. Results DGCCA applies non-linear transformations to the input views using neural networks and is able to learn correlated low-dimensional embeddings that capture more variance than its linear counterpart, generalized CCA (GCCA). We designed experiments to compare DGCCA embeddings with single-modality features and GCCA embeddings by generating 2 subtypes from each feature set using unsupervised clustering. In our validation studies, we found that amyloid PET imaging has the most discriminative features compared with structural MRI and FDG PET, which DGCCA learns from but GCCA does not. DGCCA subtypes show differential measures in 5 cognitive assessments, 6 brain volume measures, and conversion-to-AD patterns. In addition, DGCCA MCI subtypes confirmed AD genetic markers with strong signals that the existing late-MCI group did not identify. Conclusion Overall, DGCCA is able to learn effective low-dimensional embeddings from multimodal data by learning non-linear projections. MCI subtypes generated from DGCCA embeddings are different from the existing early- and late-MCI groups and show the most similarity with those identified by amyloid PET features.
In our validation studies, DGCCA subtypes show distinct patterns in cognitive measures and brain volumes, and are able to identify AD genetic markers. These findings indicate the promise of imaging-driven subtypes and their power in revealing disease structures beyond early- and late-stage MCI.
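DGCCA itself is beyond the scope of an abstract, but the downstream subtype-discovery step — unsupervised clustering of the low-dimensional shared embeddings into 2 groups — can be sketched with plain k-means. This is a stand-in (the abstract does not name the clustering algorithm), and the function name and initialization are assumptions:

```python
import numpy as np

def kmeans_subtypes(embeddings, k=2, n_iter=100, seed=0):
    """Cluster low-dimensional embeddings (e.g., from DGCCA) into k subtypes
    with plain k-means; returns an integer subtype label per sample."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct random samples
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest center
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster goes empty
        new = np.array([embeddings[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Each subtype label could then be tested against cognitive assessments, brain volumes, and conversion patterns as the study describes.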
  5. Abstract Background Oropharyngeal cancer (OPC) exhibits varying responses to chemoradiation therapy, making treatment outcome prediction challenging. Traditional imaging-based methods often fail to capture the spatial heterogeneity within tumors, which influences treatment resistance and disease progression. Advances in modeling techniques allow for more nuanced analysis of this heterogeneity, identifying distinct tumor regions, or habitats, that drive patient outcomes. Purpose To interrogate the association between treatment-induced changes in spatial heterogeneity and chemoradiation resistance of oropharyngeal cancer (OPC) based on a novel tumor habitat analysis. Methods A mathematical model was used to estimate the tumor time dynamics of patients with OPC based on the applied analysis of partial differential equations. The position and momentum of each voxel were propagated according to Fokker-Planck dynamics, a common model in statistical mechanics. The boundary conditions of the Fokker-Planck equation were solved based on pre- and intra-treatment (i.e., after 2 weeks of therapy) 18F-FDG-PET SUV images of patients (n = 56) undergoing definitive (chemo)radiation for OPC as part of a previously conducted prospective clinical trial. Tumor-specific time dynamics, measured based on the solution of the Fokker-Planck equation, were generated for each patient. Tumor habitats (i.e., non-overlapping subregions of the primary tumor) were identified by measuring vector similarity in voxel-level time dynamics through a fuzzy c-means clustering algorithm. The robustness of our habitat construction method was quantified using a mean silhouette metric to measure intra-habitat variability. Fifty-four habitat-specific radiomic texture features were extracted from pre-treatment SUV images and normalized by habitat volume.
Univariate Kaplan-Meier analyses were implemented as a feature selection method, where statistically significant features (p < 0.05, log-rank) were used to construct a multivariate Cox proportional-hazards model. Parameters from the resulting Cox model were then used to construct a risk score for each patient, based on habitat-specific radiomic expression. The patient cohort was stratified by the median risk score, and association with recurrence-free survival (RFS) was evaluated via log-rank tests. Results Dynamic tumor habitat analysis partitioned the gross disease of each patient into three spatial subregions. Voxels within each habitat suggested differential response rates in different compartments of the tumor. The minimum mean silhouette value was 0.57 and the maximum was 0.8, where values above 0.7 indicate strong intra-habitat consistency and values between 0.5 and 0.7 indicate reasonable intra-habitat consistency. Nine radiomic texture features (three GLRLM, two GLCOM, and three GLSZM) and SUVmax were found to be prognostically significant and were used to build the multivariate Cox model. The resulting risk score was associated with RFS (p = 0.032). By contrast, potential confounding factors (primary tumor volume and mean SUV) were not significantly associated with RFS (p = 0.286 and p = 0.231, respectively). Conclusion We interrogated the spatial heterogeneity of oropharyngeal tumors through the application of a novel algorithm to identify spatial habitats on SUV images. Our habitat construction technique was shown to be robust, and habitat-specific feature spaces revealed distinct underlying radiomic expression patterns. Radiomic features were extracted from dynamic habitats and used to build a risk score which demonstrated prognostic value.