This work proposes a novel generative approach that jointly analyzes multimodal neuroimaging data while linking the multimodal information to colors. We apply our proposed framework, which disentangles multimodal data into private and shared sets of features, to pairs of structural (sMRI), functional (sFNC and ICA), and diffusion MRI data (FA maps). With our approach, we find that heterogeneity in schizophrenia is potentially a function of modality pairs. Results show (1) schizophrenia is highly multimodal and includes changes in specific networks, (2) non‐linear relationships with schizophrenia are observed when interpolating among shared latent dimensions, and (3) we observe a decrease in the modularity of functional connectivity and decreased visual‐sensorimotor connectivity for schizophrenia patients in the FA‐sFNC and sMRI‐sFNC modality pairs, respectively. Additionally, our results generally indicate decreased fractional anisotropy in the corpus callosum, and decreased spatial ICA map and voxel‐based morphometry strength in the superior frontal lobe, as found in the FA‐sFNC, sMRI‐FA, and sMRI‐ICA modality pair clusters. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data, which we hope challenges the reader to think differently about how modalities interact.
I applaud the authors on their innovative generalized independent component analysis (ICA) framework for neuroimaging data. Although ICA has enjoyed great popularity for the analysis of functional magnetic resonance imaging (fMRI) data, its applicability to other modalities has been limited because standard ICA algorithms may not be directly applicable to a diversity of data representations. This is particularly true for single‐subject structural neuroimaging, where only a single measurement is collected at each location in the brain. The ingenious idea of Wu
- NSF-PAR ID: 10397028
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Biometrics
- Volume: 78
- Issue: 3
- ISSN: 0006-341X
- Format(s): Medium: X; Size: p. 1109-1112
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract
Finding overcomplete latent representations of data has applications in data analysis, signal processing, machine learning, theoretical neuroscience and many other fields. In an overcomplete representation, the number of latent features exceeds the data dimensionality, which is useful when the data is undersampled by the measurements (compressed sensing or information bottlenecks in neural systems) or composed from multiple complete sets of linear features, each spanning the data space. Independent Components Analysis (ICA) is a linear technique for learning sparse latent representations, which typically has a lower computational cost than sparse coding, a linear generative model which requires an iterative, nonlinear inference step. While well suited for finding complete representations, we show that overcompleteness poses a challenge to existing ICA algorithms. Specifically, the coherence control used in existing ICA and other dictionary learning algorithms, necessary to prevent the formation of duplicate dictionary features, is ill-suited in the overcomplete case. We show that in the overcomplete case, several existing ICA algorithms have undesirable global minima that maximize coherence. We provide a theoretical explanation of these failures and, based on the theory, propose improved coherence control costs for overcomplete ICA algorithms. Further, by comparing ICA algorithms to the computationally more expensive sparse coding on synthetic data, we show that the limited applicability of overcomplete, linear inference can be extended with the proposed cost functions. Finally, when trained on natural images, we show that the coherence control biases the exploration of the data manifold, sometimes yielding suboptimal, coherent solutions. All told, this study contributes new insights into and methods for coherence control for linear ICA, some of which are applicable to many other nonlinear models.
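The coherence the abstract refers to is the mutual coherence of a dictionary: the largest absolute inner product between distinct, unit-normalized dictionary columns. A minimal numpy sketch (the helper name `mutual_coherence` is ours, not from the paper) shows why duplicate atoms are the degenerate case coherence control is meant to prevent:

```python
import numpy as np

def mutual_coherence(D):
    """Maximum absolute inner product between distinct, unit-normalized
    dictionary columns -- the quantity coherence control tries to keep low."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)      # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)   # ignore each atom's similarity to itself
    return G.max()

rng = np.random.default_rng(0)
# A complete orthonormal basis has coherence ~0 ...
D_complete = np.linalg.qr(rng.standard_normal((8, 8)))[0]
# ... while duplicating an atom (one way an overcomplete dictionary can
# degenerate at a bad global minimum) drives coherence to 1.
D_degenerate = np.concatenate([D_complete, D_complete[:, :1]], axis=1)
```

An overcomplete dictionary necessarily has nonzero coherence (more atoms than dimensions cannot all be orthogonal), which is why the paper argues that costs penalizing coherence too aggressively, or too weakly, behave badly in that regime.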
-
Abstract
There is growing evidence that rather than using a single brain imaging modality to study its association with physiological or symptomatic features, the field is paying more attention to fusion of multimodal information. However, most current multimodal fusion approaches that incorporate functional magnetic resonance imaging (fMRI) are restricted to second‐level 3D features rather than the original 4D fMRI data. The trade‐off is that valuable temporal information is not utilized during the fusion step. Here we are motivated to propose a novel approach called “parallel group ICA+ICA” that incorporates temporal fMRI information from group independent component analysis (GICA) into a parallel independent component analysis (ICA) framework, aiming to enable direct fusion of first‐level fMRI features with other modalities (e.g., structural MRI), which thus can detect linked functional network variability and structural covariations. Simulation results show that the proposed method yields accurate intermodality linkage detection regardless of whether the linkage is strong or weak. When applied to real data, we identified one pair of significantly associated fMRI‐sMRI components that show group differences between schizophrenia patients and controls in both modalities, and this linkage can be replicated in an independent cohort. Finally, multiple cognitive domain scores can be predicted by the features identified in the linked component pair by our proposed method. We also show these multimodal brain features can predict multiple cognitive scores in an independent cohort. Overall, results demonstrate the ability of parallel GICA+ICA to estimate joint information from 4D and 3D data without discarding much of the available information up front, and the potential for using this approach to identify imaging biomarkers to study brain disorders.
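The linkage detection described above ultimately rests on correlating subject-level component loadings across modalities. As a hedged illustration (not the paper's algorithm, which jointly optimizes the decompositions), the following numpy sketch finds the most strongly linked component pair given already-estimated subject-by-component loading matrices from two modalities; `most_linked_pair` is a hypothetical helper name:

```python
import numpy as np

def most_linked_pair(A_mod1, A_mod2):
    """Given subject-by-component loading matrices from two modalities,
    return the component pair with the strongest absolute cross-modality
    correlation, together with that correlation."""
    n = A_mod1.shape[0]
    Z1 = (A_mod1 - A_mod1.mean(0)) / A_mod1.std(0)
    Z2 = (A_mod2 - A_mod2.mean(0)) / A_mod2.std(0)
    R = Z1.T @ Z2 / n                      # component-pair correlation matrix
    i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
    return (int(i), int(j)), float(R[i, j])

# Synthetic check: 200 subjects, with fMRI component 2 and sMRI component 0
# driven by the same hidden subject-level factor.
rng = np.random.default_rng(1)
factor = rng.standard_normal(200)
A_f = rng.standard_normal((200, 4)); A_f[:, 2] += 3 * factor
A_s = rng.standard_normal((200, 3)); A_s[:, 0] += 3 * factor
pair, r = most_linked_pair(A_f, A_s)
```

In the actual parallel GICA+ICA framework this correlation is maximized inside the decomposition itself rather than computed post hoc, which is what allows weak linkages to be detected reliably.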
-
Abstract
Current biotechnologies can simultaneously measure multiple high-dimensional modalities (e.g., RNA, DNA accessibility, and protein) from the same cells. A combination of different analytical tasks (e.g., multi-modal integration and cross-modal analysis) is required to comprehensively understand such data, inferring how gene regulation drives biological diversity and functions. However, current analytical methods are designed to perform a single task, only providing a partial picture of the multi-modal data. Here, we present UnitedNet, an explainable multi-task deep neural network capable of integrating different tasks to analyze single-cell multi-modality data. Applied to various multi-modality datasets (e.g., Patch-seq, multiome ATAC + gene expression, and spatial transcriptomics), UnitedNet demonstrates similar or better accuracy in multi-modal integration and cross-modal prediction compared with state-of-the-art methods. Moreover, by dissecting the trained UnitedNet with the explainable machine learning algorithm, we can directly quantify the relationship between gene expression and other modalities with cell-type specificity. UnitedNet is a comprehensive end-to-end framework that could be broadly applicable to single-cell multi-modality biology. This framework has the potential to facilitate the discovery of cell-type-specific regulation kinetics across transcriptomics and other modalities.
-
Abstract
There is a growing number of neuroimaging studies motivating joint analysis of structural and functional brain connectivity. Brain connectivity of different modalities provides insight into brain functional organization by leveraging complementary information, especially for brain disorders such as schizophrenia. In this paper, we propose a multi-modal independent component analysis (ICA) model that utilizes information from both structural and functional brain connectivity guided by spatial maps to estimate intrinsic connectivity networks (ICNs). Structural connectivity is estimated through whole-brain tractography on diffusion-weighted MRI (dMRI), while functional connectivity is derived from resting-state functional MRI (rs-fMRI). The proposed structural-functional connectivity and spatially constrained ICA (sfCICA) model estimates ICNs at the subject level using a multi-objective optimization framework. We evaluated our model using synthetic and real datasets (including dMRI and rs-fMRI from 149 schizophrenia patients and 162 controls). Multi-modal ICNs revealed enhanced functional coupling between ICNs with higher structural connectivity, improved modularity, and clearer network distinction, particularly in schizophrenia. Statistical analysis showed more significant group differences under the proposed model than under the unimodal model. In summary, the sfCICA model benefits from being jointly informed by structural and functional connectivity, suggesting that incorporating structural connectivity both aids learning and enhances connectivity estimates.
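Several of the abstracts above evaluate network organization via modularity. For concreteness, here is a minimal numpy sketch of Newman's modularity Q for a weighted, undirected connectivity matrix under a fixed community assignment; the function name and toy network are ours, not from any of the cited papers:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q for a weighted, undirected connectivity matrix A
    under a hard community assignment `labels`:
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)                          # weighted node degrees
    two_m = A.sum()                            # total edge weight, counted twice
    same = labels[:, None] == labels[None, :]  # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Toy network: two dense 3-node blocks joined by one weak bridge.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
A[2, 3] = A[3, 2] = 0.2
labels = np.array([0, 0, 0, 1, 1, 1])
q = modularity(A, labels)
```

A high Q under the block-respecting assignment (versus a value near zero for a shuffled assignment) is the sense in which a more modular functional connectivity matrix has better-separated networks.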