Title: A Dirty Multi-task Learning Method for Multi-modal Brain Imaging Genetics
Brain imaging genetics is an important research topic in brain science; it combines genetic variation with brain structure or function to uncover the genetic basis of brain disorders. Imaging data collected by different technologies, which measure the same brain from different perspectives, may carry complementary information. Unfortunately, we do not know the extent to which phenotypic variance is shared among multiple imaging modalities, which might trace back to complex genetic mechanisms. In this study, we propose a novel dirty multi-task SCCA to analyze imaging genetics problems involving multiple modalities of brain imaging quantitative traits (QTs). The proposed method can not only identify SNPs and QTs shared across multiple modalities, but also identify modality-specific SNPs and QTs, showing a flexible capability for discovering complex multi-SNP-multi-QT associations. Compared with multi-view SCCA and multi-task SCCA, our method yields better canonical correlation coefficients and canonical weights on both synthetic and real neuroimaging genetic data. This demonstrates that the proposed dirty multi-task SCCA could be a meaningful and powerful alternative in multi-modal brain imaging genetics.
Award ID(s): 1837964
NSF-PAR ID: 10127251
Journal Name: International Conference on Medical Image Computing and Computer-Assisted Intervention
Volume: 11767
Page Range / eLocation ID: 447-455
Sponsoring Org: National Science Foundation
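A minimal sketch of the "dirty" decomposition idea behind the proposed method, assuming the SNP canonical weight matrix U (one column per imaging modality) is split as U = S + B, with S row-sparse via an L2,1 penalty (shared SNPs) and B entrywise sparse via an L1 penalty (modality-specific SNPs); the proximal updates and all names here are illustrative, not the authors' exact algorithm:

    import numpy as np

    def prox_l21(W, t):
        # Row-wise soft-thresholding: zeroes out whole rows, so a SNP is
        # selected or discarded jointly across all modalities (shared part).
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

    def prox_l1(W, t):
        # Elementwise soft-thresholding: keeps individual entries, allowing
        # modality-specific SNP effects (the "dirty" part).
        return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

    def dirty_update(grad, S, B, step, lam_shared, lam_specific):
        # One proximal-gradient step on the decomposition U = S + B; grad is
        # the gradient of the smooth SCCA loss with respect to U.
        S = prox_l21(S - step * grad, step * lam_shared)
        B = prox_l1(B - step * grad, step * lam_specific)
        return S, B, S + B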
More Like this
  1. Brain imaging genetics studies the genetic basis of brain structures and functions by integrating genotypic data such as single nucleotide polymorphisms (SNPs) with imaging quantitative traits (QTs). In this area, both multi-task learning (MTL) and sparse canonical correlation analysis (SCCA) methods are widely used, since they are superior to independent and pairwise univariate analyses. However, MTL methods generally incorporate only a few QTs and cannot select features from multiple QTs, while SCCA methods typically employ a single modality of QTs to study its association with SNPs. Both MTL and SCCA become computationally expensive as the number of SNPs increases. In this paper, we propose a novel multi-task SCCA (MTSCCA) method to identify bi-multivariate associations between SNPs and multi-modal imaging QTs. MTSCCA can make use of the complementary information carried by different imaging modalities. MTSCCA enforces sparsity at the group level via the G2,1-norm, and jointly selects features across multiple tasks for SNPs and QTs via the L2,1-norm. A fast optimization algorithm is proposed using the grouping information of SNPs. Compared with conventional SCCA methods, MTSCCA obtains better correlation coefficients and canonical weight patterns. In addition, MTSCCA runs fast and is easy to implement, indicating its potential power in genome-wide brain-wide imaging genetics.
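    As a rough illustration of the two penalties used above (a sketch; U stacks one SNP weight column per task, and `groups` is an assumed list of index arrays for SNP groups such as linkage-disequilibrium blocks):

        import numpy as np

        def l21_norm(U):
            # L2,1-norm: sum of row norms. Penalizing it zeroes out whole
            # rows, so a SNP is kept or dropped jointly across all tasks.
            return np.linalg.norm(U, axis=1).sum()

        def g21_norm(U, groups):
            # G2,1-norm: sum of group norms. Penalizing it selects or
            # discards whole groups of SNPs at once (group-level sparsity).
            return sum(np.linalg.norm(U[g, :]) for g in groups)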
  2. Abstract

    Motivation

    Identifying the genetic basis of brain structure, function, and disorders using imaging quantitative traits (QTs) as endophenotypes is an important task in brain science. Brain QTs often change as a disorder progresses, so understanding how genetic factors influence these progressive brain QT changes is of great importance. Most existing imaging genetics methods analyze only baseline neuroimaging data, omitting the longitudinal imaging data across multiple time points that contain important disease-progression information.

    Results

    We propose a novel temporal imaging genetic model that performs multi-task sparse canonical correlation analysis (T-MTSCCA). Our model uses longitudinal neuroimaging data to uncover how single nucleotide polymorphisms (SNPs) affect brain QTs over time. By incorporating the relationships within the longitudinal imaging data and within the SNPs, T-MTSCCA can identify a trajectory of progressive imaging genetic patterns over time. We propose an efficient algorithm to solve the problem and show its convergence. We evaluate T-MTSCCA on 408 subjects from the Alzheimer’s Disease Neuroimaging Initiative database with longitudinal magnetic resonance imaging data and genetic data available. The experimental results show that T-MTSCCA performs better than or comparably to the state-of-the-art methods. In particular, T-MTSCCA identifies higher canonical correlation coefficients and captures clearer canonical weight patterns. This suggests that T-MTSCCA identifies time-consistent and time-dependent SNPs and imaging QTs, which further helps explain the genetic basis of brain QT changes during disease progression.
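    A toy sketch of the multi-task view of longitudinal data (assuming X is an n-by-p SNP matrix and Ys is a list of n-by-q imaging QT matrices, one per time point; each time point defines one SCCA task, and coupling the tasks through shared or temporally smoothed weights is what recovers time-consistent patterns; all names are illustrative):

        import numpy as np

        def task_correlations(X, Ys, u, Vs):
            # One canonical correlation per time point: corr(X u, Y_t v_t).
            # A temporal multi-task model fits all time points jointly
            # rather than running an independent SCCA at each visit.
            return [np.corrcoef(X @ u, Y @ v)[0, 1] for Y, v in zip(Ys, Vs)]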

    Availability and implementation

    The software and simulation data are publicly available at https://github.com/dulei323/TMTSCCA.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     
  3. Abstract

    Background

    In Alzheimer’s Disease (AD) research, multimodal imaging analysis can unveil complementary information from multiple imaging modalities and further our understanding of the disease. One application is to discover disease subtypes using unsupervised clustering. However, existing clustering methods are often applied directly to the input features and can suffer from the curse of dimensionality with high-dimensional multimodal data. The purpose of our study is to identify multimodal imaging-driven subtypes in Mild Cognitive Impairment (MCI) participants using a multiview learning framework based on Deep Generalized Canonical Correlation Analysis (DGCCA), which learns a low-dimensional shared latent representation from 3 neuroimaging modalities.

    Results

    DGCCA applies non-linear transformations to the input views using neural networks and is able to learn low-dimensional correlated embeddings that capture more variance than its linear counterpart, generalized CCA (GCCA). We designed experiments to compare DGCCA embeddings with single-modality features and GCCA embeddings by generating 2 subtypes from each feature set using unsupervised clustering. In our validation studies, we found that amyloid PET imaging provides the most discriminative features compared with structural MRI and FDG PET, and that DGCCA, unlike GCCA, learns from them. DGCCA subtypes show differential patterns in 5 cognitive assessments, 6 brain volume measures, and conversion to AD. In addition, the DGCCA MCI subtypes confirmed AD genetic markers with strong signals that the existing late MCI group did not identify.
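    A minimal sketch of the DGCCA idea (assuming one small MLP per modality and a simplified surrogate for the GCCA objective; full DGCCA alternates this with recomputing the shared representation G from the top singular vectors of the stacked projections, and this is not the study's exact implementation):

        import torch
        import torch.nn as nn

        class ViewEncoder(nn.Module):
            # Non-linear projection of one imaging modality to d dimensions.
            def __init__(self, in_dim, d=10, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, d))

            def forward(self, x):
                return self.net(x)

        def dgcca_step(encoders, views, G, opt):
            # Simplified surrogate loss: pull every encoded view toward the
            # shared low-dimensional representation G (subjects x d).
            opt.zero_grad()
            loss = sum(((enc(v) - G) ** 2).mean()
                       for enc, v in zip(encoders, views))
            loss.backward()
            opt.step()
            return loss.item()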

    Conclusion

    Overall, DGCCA is able to learn effective low-dimensional embeddings from multimodal data by learning non-linear projections. The MCI subtypes generated from DGCCA embeddings differ from the existing early and late MCI groups and show the most similarity with those identified by amyloid PET features. In our validation studies, DGCCA subtypes show distinct patterns in cognitive measures and brain volumes, and are able to identify AD genetic markers. These findings indicate the promise of imaging-driven subtypes and their power in revealing disease structure beyond early- and late-stage MCI.

     
  4. Abstract

    In the era of big data, where vast amounts of information are generated and collected at an unprecedented rate, there is a pressing demand for innovative data-driven multimodal fusion methods. These methods aim to integrate diverse neuroimaging perspectives to extract meaningful insights and attain a more comprehensive understanding of complex psychiatric disorders. Analyzing each modality separately may reveal only partial insights or miss important correlations between different types of data, and this is where data-driven multimodal fusion techniques come into play. By combining information from multiple modalities in a synergistic manner, these methods enable us to uncover hidden patterns and relationships that would otherwise remain unnoticed. In this paper, we present an extensive overview of data-driven multimodal fusion approaches with or without prior information, with specific emphasis on canonical correlation analysis and independent component analysis. The applications of such fusion methods are wide-ranging and allow us to incorporate multiple factors such as genetics, environment, cognition, and treatment outcomes across various brain disorders. After summarizing the diverse neuropsychiatric magnetic resonance imaging fusion applications, we further discuss emerging trends in big-data neuroimaging analysis, such as N-way multimodal fusion, deep learning approaches, and clinical translation. Overall, multimodal fusion emerges as an imperative approach that provides valuable insight into the underlying neural basis of mental disorders and can uncover subtle abnormalities or potential biomarkers that may benefit targeted treatments and personalized medical interventions.

     
  5.
    During disaster events, emergency response teams need to draw up a response plan at the earliest possible stage. Social media platforms contain rich information that can help assess the current situation. In this paper, a novel multi-task multimodal deep learning framework with automatic loss weighting is proposed. Our framework is able to capture the correlations among different concepts and data modalities. The proposed automatic loss weighting method avoids the tedious manual weight-tuning process and improves model performance. Extensive experiments on a large-scale multimodal disaster dataset from Twitter are conducted to identify post-disaster humanitarian categories and infrastructure damage levels. The results show that by learning the shared latent space of multiple tasks with loss weighting, our model outperforms all single-task baselines.
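    One common form of automatic loss weighting (a sketch based on homoscedastic-uncertainty weighting; the paper's exact scheme may differ, and all names here are illustrative):

        import torch
        import torch.nn as nn

        class AutoWeightedLoss(nn.Module):
            # Learns one log-variance per task; tasks with higher estimated
            # uncertainty are automatically down-weighted, so no manual
            # tuning of per-task loss weights is needed.
            def __init__(self, num_tasks):
                super().__init__()
                self.log_vars = nn.Parameter(torch.zeros(num_tasks))

            def forward(self, task_losses):
                total = 0.0
                for log_var, loss in zip(self.log_vars, task_losses):
                    total = total + torch.exp(-log_var) * loss + log_var
                return total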