Abstract: Age-related macular degeneration (AMD) is the principal cause of blindness in developed countries, and its prevalence is projected to reach 288 million people by 2040. Automated grading and prediction methods can therefore be highly beneficial for recognizing subjects susceptible to late AMD and enabling clinicians to start preventive actions for them. Clinically, AMD severity is quantified from Color Fundus Photographs (CFP) of the retina, and many machine-learning-based methods have been proposed for grading AMD severity. However, few models have been developed to predict longitudinal progression, i.e., future late-AMD risk based on the current CFP, which is clinically more interesting. In this paper, we propose a new deep-learning-based classification model (LONGL-Net) that can simultaneously grade the current CFP and predict the longitudinal outcome, i.e., whether the subject will have late AMD at a future time point. We design a new temporal-correlation-structure-guided Generative Adversarial Network that learns the interrelations of temporal changes in CFPs across consecutive time points and provides interpretability for the classifier's decisions by forecasting AMD symptoms in future CFPs. We used about 30,000 CFP images from 4,628 participants in the Age-Related Eye Disease Study. Our classifier showed an average AUC of 0.905 (95% CI: 0.886–0.922) and accuracy of 0.762 (95% CI: 0.733–0.792) on the three-class problem of simultaneously grading the current time point's AMD condition and predicting progression to late AMD at the future time point. We further validated the model on the UK Biobank dataset, where it showed an average accuracy of 0.905 and sensitivity of 0.797 in grading 300 CFP images.
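As a rough, hypothetical illustration of the three-class formulation described above (simultaneously grading the current CFP and flagging future late-AMD risk), the sketch below wires a generic CNN backbone to a three-way classifier in PyTorch. The backbone choice, input size, and class definitions are assumptions; the paper's temporal-correlation-guided GAN component is not shown.

```python
# Minimal sketch of a joint grading + prognosis classifier, assuming a generic
# ResNet backbone and three placeholder classes; this is NOT the published
# LONGL-Net architecture, which also forecasts future fundus images with a GAN.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # placeholder classes, e.g., {no/early AMD now, intermediate now, late AMD at follow-up}

class JointGradingPrognosisNet(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # CFP feature extractor (assumed backbone)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, cfp: torch.Tensor) -> torch.Tensor:
        return self.backbone(cfp)  # logits over the three classes

model = JointGradingPrognosisNet()
logits = model(torch.randn(2, 3, 224, 224))                  # batch of 2 fundus photos
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 2]))   # dummy labels for illustration
```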
                            
Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling
Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression, and forecasting the future risk of developing a disease is critical to properly plan treatment. Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging, modeling the time to disease from sequences of fundus photography images captured over long, irregular time periods. Using longitudinal imaging data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS), LTSA significantly outperformed a single-image baseline in 19/20 head-to-head comparisons on late AMD prognosis and 18/20 comparisons on POAG prognosis. A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.
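The abstract describes a transformer over image sequences acquired at irregular intervals with a survival (time-to-disease) output. The sketch below is a simplified, hypothetical illustration of that idea in PyTorch: per-image embeddings plus a learned encoding of visit time feed a transformer encoder, and a discrete-time hazard head scores future intervals. The layer sizes, time encoding, and hazard formulation are assumptions for illustration and are not taken from the LTSA paper.

```python
# Hypothetical sketch of a longitudinal transformer for survival-style prognosis.
# Assumes precomputed per-image embeddings; architecture details are illustrative.
import torch
import torch.nn as nn

class LongitudinalPrognosisTransformer(nn.Module):
    def __init__(self, embed_dim=256, n_heads=4, n_layers=2, n_intervals=10):
        super().__init__()
        self.time_proj = nn.Linear(1, embed_dim)              # encode visit time (years since baseline)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.hazard_head = nn.Linear(embed_dim, n_intervals)  # one hazard per future interval

    def forward(self, img_emb, visit_times, pad_mask):
        # img_emb: (B, T, D) image embeddings; visit_times: (B, T) years since baseline;
        # pad_mask: (B, T) True where a visit slot is padding.
        x = img_emb + self.time_proj(visit_times.unsqueeze(-1))
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        last_idx = (~pad_mask).sum(dim=1) - 1                 # index of the last real visit
        last = h[torch.arange(h.size(0)), last_idx]           # sequence summary at the latest visit
        return torch.sigmoid(self.hazard_head(last))          # per-interval hazard estimates

model = LongitudinalPrognosisTransformer()
emb = torch.randn(2, 5, 256)                                  # 2 patients, up to 5 visits
times = torch.tensor([[0., 1., 2., 3., 4.], [0., 2., 5., 0., 0.]])
mask = torch.tensor([[False] * 5, [False, False, False, True, True]])
hazards = model(emb, times, mask)                             # shape (2, 10)
```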
- Award ID(s): 2306556
- PAR ID: 10614235
- Publisher / Repository: npj
- Date Published:
- Journal Name: npj Digital Medicine
- Volume: 7
- Issue: 1
- ISSN: 2398-6352
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in developed countries. Identifying patients at high risk of progression to late AMD, the sight-threatening stage, is critical for clinical actions, including medical interventions and timely monitoring. Recently, deep-learning-based models have been developed and achieved superior performance for late AMD prediction. However, most existing methods are limited to the color fundus photography (CFP) from the last ophthalmic visit and do not include the longitudinal CFP history and AMD progression during the previous years' visits. Patients in different AMD subphenotypes might progress at different speeds in different stages of AMD, so capturing the progression information from previous years' visits might be useful for predicting AMD progression. In this work, we propose a Contrastive-Attention-based Time-aware Long Short-Term Memory network (CAT-LSTM) to predict AMD progression. First, we adopt a convolutional neural network (CNN) model with a contrastive attention module (CA) to extract abnormal features from CFPs. Then we utilize a time-aware LSTM (T-LSTM) to model the patients' history and capture the AMD progression information; the combination of disease progression, genotype information, demographics, and CFP features is sent to the T-LSTM. Moreover, we leverage an auto-encoder to represent temporal CFP sequences as fixed-size vectors and adopt k-means to cluster them into subphenotypes. We evaluate the proposed model on real-world datasets, and the results show that it achieves 0.925 area under the receiver operating characteristic curve (AUROC) for 5-year late-AMD prediction and outperforms state-of-the-art methods by more than 3%, which demonstrates the effectiveness of the proposed CAT-LSTM. After analyzing patient representations learned by the auto-encoder, we identify 3 novel subphenotypes of AMD patients with different characteristics and progression rates to late AMD, paving the way for improved personalization of AMD management. The code of CAT-LSTM can be found on GitHub. (A minimal sketch of the time-aware LSTM memory discount appears after this list.)
- Chest X-rays are commonly used for diagnosing and characterizing lung diseases, but the complex morphological patterns in radiographic appearances can challenge clinicians in making accurate diagnoses. To address this challenge, various learning methods have been developed for algorithm-aided disease detection and automated diagnosis. However, most existing methods fail to account for the heterogeneous variability in longitudinal imaging records and the presence of missing or inconsistent temporal data. In this paper, we propose a novel longitudinal learning framework that enriches inconsistent imaging data over sequential time points by leveraging 2D Principal Component Analysis (2D-PCA) and a robust adaptive loss function. We also derive an efficient solution algorithm that ensures both objective and sequence convergence for the non-convex optimization problem. Our experiments on the CheXpert dataset demonstrate improved performance in capturing indicative abnormalities in medical images and achieving satisfactory diagnoses. We believe that our method will be of significant interest to the research community working on medical image analysis. (A minimal 2D-PCA sketch appears after this list.)
- We investigated the impact of age-related macular degeneration (AMD) on visual acuity and the visual white matter. We combined an adaptive cortical atlas with diffusion-weighted magnetic resonance imaging (dMRI) and tractography to separate optic radiation (OR) projections to different retinal eccentricities in human primary visual cortex. We exploited the known anatomical organization of the OR and clinically relevant data to segment the OR into three primary components projecting to the fovea, mid-periphery, and far periphery. We measured white matter tissue properties (fractional anisotropy, linearity, planarity, sphericity) along these three components of the optic radiation to compare AMD patients and controls. We found differences in white matter properties specific to OR fascicles projecting to the primary visual cortex locations corresponding to the location of retinal damage (fovea). Additionally, we show that the magnitude of white matter properties in AMD patients correlates with visual acuity. In sum, we demonstrate a specific relation between visual loss, the anatomical location of retinal damage, and white matter damage in AMD patients. Importantly, we demonstrate that these changes are so profound that they can be detected using magnetic resonance imaging data at clinical resolution. The conserved mapping between retinal and white matter damage suggests that retinal neurodegeneration might be a primary cause of white matter degeneration in AMD patients. The results highlight the impact of eye disease on brain tissue, a process that may become an important target to monitor during the course of treatment. (The tensor-shape metrics named above are illustrated in a short sketch after this list.)
- Background/Objectives: To investigate macular vascular biomarkers for the detection of primary open-angle glaucoma (POAG). Methods: A total of 56 POAG patients and 94 non-glaucomatous controls underwent optical coherence tomography angiography (OCTA) assessment of macular vessel density (VD) in the superficial (SCP) and deep (DCP) capillary plexuses, foveal avascular zone (FAZ) area, perimeter, and VD, and choriocapillaris and outer retina flow area. POAG patients were classified for severity based on the Glaucoma Staging System 2 of Brusini. ANCOVA comparisons were adjusted for age, sex, race, hypertension, and diabetes, and areas under the receiver operating characteristic curves (AUCs) for POAG/control differentiation were compared using the DeLong method. Results: Global, hemispheric, and quadrant SCP VD was significantly lower in POAG patients in the whole image, parafovea, and perifovea (p < 0.001). No significant differences were found between POAG and controls for DCP VD, FAZ parameters, or the retinal and choriocapillaris flow areas (p > 0.05). SCP VD in the whole image and perifovea was significantly lower in POAG patients in stage 2 than in stage 0 (p < 0.001). The AUCs of SCP VD in the whole image (0.86) and perifovea (0.84) were significantly higher than the AUCs of all DCP VD (p < 0.05), FAZ parameters (p < 0.001), and retinal (p < 0.001) and choriocapillaris flow areas (p < 0.05). The AUC of whole-image SCP VD was similar to that of global retinal nerve fiber layer (RNFL) thickness (AUC = 0.89, p = 0.53) and ganglion cell complex (GCC) thickness (AUC = 0.83, p = 0.42). Conclusions: SCP VD is lower with increasing functional damage in POAG patients. The AUC for SCP VD was similar to that of RNFL and GCC using clinical diagnosis as the reference standard.
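The CAT-LSTM item above relies on a time-aware LSTM (T-LSTM) to handle irregularly spaced visits. Below is a small, hypothetical sketch of the usual T-LSTM memory-discount idea (Baytas et al., 2017) in PyTorch; the hidden sizes and the 1/log(e + dt) decay are standard but assumed choices, not details taken from the CAT-LSTM paper.

```python
# Hedged sketch of the time-aware memory discount used in T-LSTM-style models.
import torch
import torch.nn as nn

class TimeAwareLSTMCell(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        self.decomp = nn.Linear(hidden_dim, hidden_dim)  # extracts the short-term part of memory

    def forward(self, x, h, c, delta_t):
        # delta_t: (B, 1) elapsed time since the previous visit.
        c_short = torch.tanh(self.decomp(c))             # short-term component of the cell memory
        c_long = c - c_short                             # long-term component (kept intact)
        decay = 1.0 / torch.log(torch.e + delta_t)       # discount grows with the time gap
        c_adjusted = c_long + decay * c_short            # decayed short-term + preserved long-term
        return self.cell(x, (h, c_adjusted))

cell = TimeAwareLSTMCell(input_dim=64, hidden_dim=128)
h, c = torch.zeros(2, 128), torch.zeros(2, 128)
x = torch.randn(2, 64)
h, c = cell(x, h, c, delta_t=torch.tensor([[0.5], [3.0]]))  # visits 0.5 and 3 years apart
```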
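The chest X-ray item above builds on 2D Principal Component Analysis. As a point of reference, here is a minimal NumPy sketch of plain 2D-PCA (right-multiplying each image by a basis learned from the image scatter matrix); the robust adaptive loss and the longitudinal enrichment of the cited framework are not reproduced here.

```python
# Minimal 2D-PCA sketch: a shared right-multiplied basis from the image scatter matrix.
import numpy as np

def two_d_pca(images: np.ndarray, n_components: int) -> np.ndarray:
    # images: (num_images, height, width)
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image scatter matrix G = (1/M) * sum_i A_i^T A_i, shape (width, width)
    G = np.einsum('ihw,ihv->wv', centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)                 # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :n_components]           # top-d eigenvectors, shape (width, d)
    return centered @ basis                              # features: (num_images, height, d)

feats = two_d_pca(np.random.rand(20, 64, 64), n_components=8)  # -> (20, 64, 8)
```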
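The optic-radiation item above reports fractional anisotropy and the Westin linearity/planarity/sphericity measures along white matter fascicles. The snippet below shows one common way to compute these shape metrics from diffusion-tensor eigenvalues; the normalization by the eigenvalue sum is an assumed convention, and the tractography pipeline itself is out of scope.

```python
# Illustrative diffusion-tensor shape metrics from the three tensor eigenvalues.
import numpy as np

def tensor_shape_metrics(eigvals: np.ndarray) -> dict:
    # eigvals: the three diffusion-tensor eigenvalues.
    l1, l2, l3 = np.sort(eigvals)[::-1]                  # sort so that l1 >= l2 >= l3
    md = (l1 + l2 + l3) / 3.0                            # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    s = l1 + l2 + l3
    return {
        "FA": fa,
        "linearity": (l1 - l2) / s,      # cigar-shaped diffusion
        "planarity": 2 * (l2 - l3) / s,  # disc-shaped diffusion
        "sphericity": 3 * l3 / s,        # isotropic diffusion; the three shape terms sum to 1
    }

print(tensor_shape_metrics(np.array([1.7e-3, 0.4e-3, 0.3e-3])))  # typical white matter values
```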