Title: A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis
Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information but is limited in practice by excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI using incomplete k-space data of several source modalities as inputs. The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality. Our proposed model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multimodal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained using multi-modal MRI data. Moreover, a bilevel optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach using extensive numerical experiments.
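The sketch below is a minimal illustration of what one phase of such an unrolled, learnable optimization scheme could look like. The layer sizes, the gradient-descent data-consistency step, and the concatenation-based synthesis module are assumptions made for illustration, not the authors' exact formulation.

```python
# Minimal sketch (assumptions, not the paper's exact model): one phase of an
# unrolled scheme that alternates a k-space data-consistency step for each
# source modality with a learned modality-specific regularization step, then
# synthesizes the target modality from the updated source images.
import torch
import torch.nn as nn

class Phase(nn.Module):
    def __init__(self, n_src=2, feat=32):
        super().__init__()
        # one small modality-specific feature extractor per source modality
        self.extractors = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(feat, 1, 3, padding=1))
            for _ in range(n_src)])
        # synthesis module: fuses all reconstructed sources into the target image
        self.synth = nn.Sequential(nn.Conv2d(n_src, feat, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(feat, 1, 3, padding=1))
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, xs, ys, masks):
        # xs: list of current source-image estimates, each (B, 1, H, W)
        # ys: list of undersampled k-space data; masks: sampling masks
        new_xs = []
        for x, y, m, reg in zip(xs, ys, masks, self.extractors):
            # gradient of the data-fidelity term ||M F x - y||^2
            grad = torch.fft.ifft2(m * (torch.fft.fft2(x) - y)).real
            new_xs.append(x - self.step * grad + reg(x))  # descent + learned prior
        target = self.synth(torch.cat(new_xs, dim=1))     # synthesized target modality
        return new_xs, target
```

An unrolled network of this kind would stack several such phases and train all parameters end to end; the bilevel training scheme mentioned in the abstract is omitted from this sketch.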
Award ID(s):
2152961
PAR ID:
10396372
Author(s) / Creator(s):
Date Published:
Journal Name:
25th International Conference on Medical Image Computing and Computer Assisted Intervention
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we propose a deep multimodal fusion network to fuse multiple modalities (face, iris, and fingerprint) for person identification. The proposed deep multimodal fusion algorithm consists of multiple streams of modality-specific Convolutional Neural Networks (CNNs), which are jointly optimized at multiple feature abstraction levels. Multiple features are extracted at several different convolutional layers from each modality-specific CNN for joint feature fusion, optimization, and classification. Features extracted at different convolutional layers of a modality-specific CNN represent the input at several different levels of abstraction. We demonstrate that efficient multimodal classification can be accomplished with a significant reduction in the number of network parameters by exploiting these multi-level abstract representations extracted from all the modality-specific CNNs. We demonstrate an increase in multimodal person identification performance by utilizing the proposed multi-level feature representations in our multimodal fusion, rather than using only the features from the last layer of each modality-specific CNN. We show that our deep multimodal CNNs with fusion at several different levels of feature abstraction significantly outperform unimodal representations. We also demonstrate that joint optimization of all the modality-specific CNNs outperforms score-level and decision-level fusion of independently optimized CNNs.
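As a hedged illustration of the multi-level fusion idea in the item above, the sketch below taps features from two depths of each modality-specific CNN and fuses them jointly for classification. The layer sizes, pooling choice, and modality count are illustrative assumptions, not the paper's architecture.

```python
# Sketch of multi-level multimodal fusion (illustrative sizes, not the paper's):
# features are tapped from two depths of each modality-specific CNN and fused
# jointly, rather than fusing only the final-layer features.
import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        f1 = self.block1(x)          # lower-level abstraction
        f2 = self.block2(f1)         # higher-level abstraction
        # pool each level to a vector so levels of different sizes can be fused
        return torch.cat([f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))], dim=1)

class MultimodalFusionNet(nn.Module):
    def __init__(self, n_modalities=3, n_classes=100):
        super().__init__()
        self.streams = nn.ModuleList(ModalityCNN() for _ in range(n_modalities))
        self.classifier = nn.Linear(n_modalities * (16 + 32), n_classes)

    def forward(self, inputs):          # inputs: list of (B, 3, H, W) tensors
        fused = torch.cat([s(x) for s, x in zip(self.streams, inputs)], dim=1)
        return self.classifier(fused)   # all streams are optimized jointly
```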
  2. ABSTRACT

    With the increasing availability of large‐scale multimodal neuroimaging datasets, it is necessary to develop data fusion methods that can extract cross‐modal features. A general framework, multidataset independent subspace analysis (MISA), has been developed to encompass multiple blind source separation approaches and identify linked cross‐modal sources in multiple datasets. In this work, we utilized the multimodal independent vector analysis (MMIVA) model in MISA to directly identify meaningful linked features across three neuroimaging modalities, namely structural magnetic resonance imaging (MRI), resting-state functional MRI, and diffusion MRI, in two large independent datasets, one comprising control subjects and the other including patients with schizophrenia. Results show several linked subject profiles (sources) that capture age‐associated decline, schizophrenia‐related biomarkers, sex effects, and cognitive performance. For sources associated with age, both shared and modality‐specific brain‐age deltas were evaluated for association with non‐imaging variables. In addition, each set of linked sources reveals a corresponding set of cross‐modal spatial patterns that can be studied jointly. We demonstrate that the MMIVA fusion model can identify linked sources across multiple modalities, and that at least one set of linked, age‐related sources replicates across two independent and separately analyzed datasets. The same set also presented age‐adjusted group differences, with schizophrenia patients showing lower multimodal source levels. Linked sets associated with sex and cognition are also reported for the UK Biobank dataset.

     
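The snippet below is a rough stand-in for the idea of linked cross-modal sources, not the MMIVA/MISA algorithm itself: it runs ICA separately on two simulated modalities and then links components whose subject loading profiles correlate across modalities. All data and dimensions are synthetic.

```python
# Rough stand-in for linked cross-modal sources (not the MMIVA algorithm):
# run ICA per modality on subject-by-feature matrices, then link components
# whose subject loading profiles correlate across modalities.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_sources = 200, 5
shared = rng.standard_normal((n_subjects, n_sources))   # linked subject profiles

# simulate two modalities driven by the same subject profiles plus noise
X_smri = shared @ rng.standard_normal((n_sources, 300)) + 0.1 * rng.standard_normal((n_subjects, 300))
X_fmri = shared @ rng.standard_normal((n_sources, 400)) + 0.1 * rng.standard_normal((n_subjects, 400))

loadings = []
for X in (X_smri, X_fmri):
    ica = FastICA(n_components=n_sources, random_state=0)
    loadings.append(ica.fit_transform(X))    # subject-by-component source estimates

# link components across modalities by absolute correlation of subject loadings
corr = np.abs(np.corrcoef(loadings[0].T, loadings[1].T)[:n_sources, n_sources:])
pairs = [(i, int(corr[i].argmax()), float(corr[i].max())) for i in range(n_sources)]
print(pairs)   # (sMRI component, best-matching fMRI component, |r|)
```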
  3. Abstract

    A growing number of neuroimaging studies motivate joint analysis of structural and functional brain connectivity. Brain connectivity of different modalities provides insight into brain functional organization by leveraging complementary information, especially for brain disorders such as schizophrenia. In this paper, we propose a multi-modal independent component analysis (ICA) model that utilizes information from both structural and functional brain connectivity guided by spatial maps to estimate intrinsic connectivity networks (ICNs). Structural connectivity is estimated through whole-brain tractography on diffusion-weighted MRI (dMRI), while functional connectivity is derived from resting-state functional MRI (rs-fMRI). The proposed structural-functional connectivity and spatially constrained ICA (sfCICA) model estimates ICNs at the subject level using a multi-objective optimization framework. We evaluated our model using synthetic and real datasets (including dMRI and rs-fMRI from 149 schizophrenia patients and 162 controls). Multi-modal ICNs revealed enhanced functional coupling between ICNs with higher structural connectivity, improved modularity, and clearer network distinction, particularly in schizophrenia. Statistical analysis of group differences showed more significant differences in the proposed model compared to the unimodal model. In summary, the sfCICA model benefited from being jointly informed by structural and functional connectivity. These findings suggest that jointly learning from both modalities is advantageous, with structural connectivity enhancing the connectivity estimates.

     
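As an illustrative and heavily simplified check of the reported coupling effect, rather than the sfCICA estimation itself, the snippet below compares functional network connectivity between ICN pairs with high versus low structural connectivity on toy data.

```python
# Illustrative check only (not the sfCICA estimation): given ICN time courses,
# compare functional network connectivity (FNC) between ICN pairs with high vs.
# low structural connectivity, mirroring the reported coupling effect.
import numpy as np

rng = np.random.default_rng(0)
n_icns, n_timepoints = 20, 150
tc = rng.standard_normal((n_timepoints, n_icns))          # toy ICN time courses
sc = rng.random((n_icns, n_icns)); sc = (sc + sc.T) / 2   # toy structural connectivity

fnc = np.corrcoef(tc.T)                    # functional coupling (ICN x ICN)
iu = np.triu_indices(n_icns, k=1)          # unique ICN pairs
high = sc[iu] > np.median(sc[iu])          # split pairs by structural strength

print("mean |FNC|, high-SC pairs:", np.abs(fnc[iu][high]).mean())
print("mean |FNC|, low-SC pairs :", np.abs(fnc[iu][~high]).mean())
```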
  4. The ability to estimate 3D human body pose and movement, also known as human pose estimation (HPE), enables many applications for home-based health monitoring, such as remote rehabilitation training. Several possible solutions have emerged using sensors ranging from RGB cameras and depth sensors to millimeter-wave (mmWave) radars and wearable inertial sensors. Despite previous efforts on datasets and benchmarks for HPE, few datasets exploit multiple modalities or focus on home-based health monitoring. To bridge this gap, we present mRI, a multi-modal 3D human pose estimation dataset with mmWave, RGB-D, and inertial sensors. Our dataset consists of over 160k synchronized frames from 20 subjects performing rehabilitation exercises and supports benchmarks for HPE and action detection. We perform extensive experiments using our dataset and delineate the strengths of each modality. We hope that the release of mRI can catalyze research in pose estimation, multi-modal learning, and action understanding and, more importantly, facilitate applications of home-based health monitoring.
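For context on the HPE benchmark side, here is a hedged sketch of the standard mean per-joint position error (MPJPE) metric. The array shapes and joint count are assumptions for illustration and do not reflect the mRI dataset's actual file format.

```python
# Hedged sketch of a standard HPE benchmark metric (MPJPE, in millimeters);
# shapes are illustrative assumptions, not the mRI dataset's file format.
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error over frames and joints.
    pred, gt: arrays of shape (n_frames, n_joints, 3) in millimeters."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# toy usage: compare a noisy estimate against ground truth
rng = np.random.default_rng(0)
gt = rng.uniform(-1000, 1000, size=(160, 17, 3))
pred = gt + rng.normal(scale=25.0, size=gt.shape)
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```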
  5. Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning. 
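A minimal sketch of the factorization idea in the item above, with assumed layer sizes and loss weights: encoders produce a shared multimodal discriminative factor and per-modality generative factors, a classifier uses the shared factor, and decoders reconstruct each modality, giving a joint generative-discriminative objective.

```python
# Minimal sketch (sizes and weights are assumptions, not the paper's model):
# a shared discriminative factor plus modality-specific generative factors,
# trained with a joint classification + reconstruction objective.
import torch
import torch.nn as nn

class FactorizedMultimodal(nn.Module):
    def __init__(self, dims=(300, 74), d_shared=32, d_private=16, n_classes=3):
        super().__init__()
        self.enc_shared = nn.ModuleList(nn.Linear(d, d_shared) for d in dims)
        self.enc_private = nn.ModuleList(nn.Linear(d, d_private) for d in dims)
        self.decoders = nn.ModuleList(nn.Linear(d_shared + d_private, d) for d in dims)
        self.classifier = nn.Linear(d_shared, n_classes)

    def forward(self, xs):
        # average per-modality estimates of the shared discriminative factor
        shared = torch.stack([e(x) for e, x in zip(self.enc_shared, xs)]).mean(0)
        privates = [e(x) for e, x in zip(self.enc_private, xs)]
        # reconstruct each modality from its private factor plus the shared one
        recons = [dec(torch.cat([shared, p], dim=-1)) for dec, p in zip(self.decoders, privates)]
        return self.classifier(shared), recons

model = FactorizedMultimodal()
xs = [torch.randn(8, 300), torch.randn(8, 74)]       # two toy modalities
logits, recons = model(xs)
labels = torch.randint(0, 3, (8,))
# joint generative-discriminative objective: classification + reconstruction
loss = nn.functional.cross_entropy(logits, labels) \
     + 0.1 * sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, xs))
loss.backward()
```

Because reconstruction depends only on the shared factor and each modality's own private factor, a missing modality at test time removes one reconstruction term without breaking the shared discriminative path, which is the robustness property the abstract highlights.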