

Title: MGRAPPA: Motion Corrected GRAPPA for MRI
We introduce an approximation and the resulting method, MGRAPPA, which enables high-speed MRI scans that are robust to subject motion by combining prospective motion correction with GRAPPA. In experiments on both simulated and in-vivo data, we observe high accuracy and robustness to subject movement as measured by L2 (Frobenius) norm error, including a 41% improvement in the in-vivo experiment.
Award ID(s): 1816608
NSF-PAR ID: 10392297
Author(s) / Creator(s):
Date Published:
Journal Name: ISMRM
Format(s): Medium: X
Sponsoring Org: National Science Foundation
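
The abstract above reports reconstruction error in the L2 (Frobenius) norm. As a point of reference, such an error is typically computed relative to a motion-free reference image; the following is a minimal sketch of that computation, where the array names, the relative normalization, and the improvement calculation are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def frobenius_error(recon, reference):
    """Relative L2 (Frobenius) norm error between a reconstructed image
    and a motion-free reference (2D, possibly complex, arrays)."""
    # np.linalg.norm on a 2D array defaults to the Frobenius norm
    return np.linalg.norm(recon - reference) / np.linalg.norm(reference)

# Hypothetical usage: percentage improvement of a motion-corrected
# reconstruction over an uncorrected one, against the same reference.
# err_uncorrected = frobenius_error(recon_uncorrected, reference)
# err_corrected   = frobenius_error(recon_mgrappa, reference)
# improvement_pct = 100 * (err_uncorrected - err_corrected) / err_uncorrected
```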
More Like this
  1. Stroke commonly affects the ability of the upper extremities (UEs) to move normally. In clinical settings, identifying and measuring movement abnormality is challenging due to the imprecision and impracticality of available assessments. These challenges interfere with therapeutic tracking, communication, and treatment. We thus sought to develop an approach that blends precision and pragmatism, combining high-dimensional motion capture with out-of-distribution (OOD) detection. We used an array of wearable inertial measurement units to capture upper body motion in healthy and chronic stroke subjects performing a semi-structured, unconstrained 3D tabletop task. After data were labeled by human coders, we trained two deep learning models exclusively on healthy subject data to classify elemental movements (functional primitives). We tested these healthy subject-trained models on previously unseen healthy and stroke motion data. We found that model confidence, indexed by prediction probabilities, was generally high for healthy test data but significantly dropped when encountering OOD stroke data. Prediction probabilities worsened with more severe motor impairment categories and were directly correlated with individual impairment scores. Data inputs from the paretic UE, rather than trunk, most strongly influenced model confidence. We demonstrate for the first time that using OOD detection with high-dimensional motion data can reveal clinically meaningful movement abnormality in subjects with chronic stroke. 
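    The out-of-distribution signal described in item 1 is built from classifier prediction probabilities. One common way to turn those probabilities into an OOD score is the maximum softmax probability of a model trained only on in-distribution (here, healthy) data; the sketch below assumes that approach, and the model, threshold, and array shapes are purely illustrative:

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def confidence_score(logits):
    """Per-window confidence: maximum softmax probability of the
    functional-primitive classifier. Low values suggest motion
    unlike the healthy training distribution."""
    return softmax(logits).max(axis=-1)

# Hypothetical usage: logits of shape (n_windows, n_primitive_classes)
# conf = confidence_score(model_logits)
# flagged = conf < 0.5   # illustrative threshold for out-of-distribution motion
```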
  2. Bernard, O.; Clarysse, P.; Duchateau, N.; Ohayon, J.; Viallon, M. (Eds.)
    Increased passive myocardial stiffness is implicated in the pathophysiology of many cardiac diseases, and its in vivo estimation can improve management of heart disease. MRI-driven computational constitutive modeling has been used extensively to evaluate passive myocardial stiffness. This approach requires subject-specific data that is best acquired with different MRI sequences: conventional cine (e.g. bSSFP), tagged MRI (or DENSE), and cardiac diffusion tensor imaging. However, due to the lack of comprehensive datasets and the challenge of incorporating multi-phase and single-phase disparate MRI data, no studies have combined in vivo cine bSSFP, tagged MRI, and cardiac diffusion tensor imaging to estimate passive myocardial stiffness. The objective of this work was to develop a personalized in silico left ventricular model to evaluate passive myocardial stiffness by integrating subject-specific geometric data derived from cine bSSFP, regional kinematics extracted from tagged MRI, and myocardial microstructure measured using in vivo cardiac diffusion tensor imaging. To demonstrate the feasibility of using a complete subject-specific imaging dataset for passive myocardial stiffness estimation, we calibrated a bulk stiffness parameter of a transversely isotropic exponential constitutive relation to match the local kinematic field extracted from tagged MRI. This work establishes a pipeline for developing subject-specific biomechanical ventricular models to probe passive myocardial mechanical behavior, using comprehensive cardiac imaging data from multiple in vivo MRI sequences. 
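    Item 2 calibrates a single bulk stiffness parameter so that the simulated ventricular deformation matches the kinematics extracted from tagged MRI. The sketch below shows what that kind of one-parameter calibration loop can look like, assuming a user-supplied forward model `simulate_displacements(stiffness)` and a measured displacement field `u_measured`; both are placeholders, and this is not the authors' pipeline:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def calibrate_bulk_stiffness(simulate_displacements, u_measured,
                             bounds=(0.1, 50.0)):
    """Fit a bulk stiffness parameter by minimizing the RMS mismatch
    between model-predicted and tagged-MRI displacement fields."""
    def mismatch(stiffness):
        u_model = simulate_displacements(stiffness)   # forward biomechanical model
        return np.sqrt(np.mean((u_model - u_measured) ** 2))

    result = minimize_scalar(mismatch, bounds=bounds, method="bounded")
    return result.x   # calibrated stiffness (units set by the forward model)
```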
  3. Simultaneous visualization of the teeth and periodontium is of significant clinical interest for image-based monitoring of periodontal health. We recently reported the application of a dual-modality photoacoustic-ultrasound (PA-US) imaging system for resolving periodontal anatomy and periodontal pocket depths in humans. This work utilized a linear array transducer attached to a stepper motor to generate 3D images via maximum intensity projection. This prior work also used a medical head immobilizer to reduce artifacts during volume rendering caused by motion from the subject (e.g., breathing, minor head movements). However, this solution does not completely eliminate motion artifacts, while also complicating the imaging procedure and causing patient discomfort. To address this issue, we report the implementation of an image registration technique to correctly align B-mode PA-US images and generate artifact-free 2D cross-sections. Application of the deshaking technique to PA phantoms revealed 80% similarity to the ground truth when shaking was intentionally applied during stepper motor scans. Images from handheld sweeps could also be deshaken using an LED PA-US scanner. In ex vivo porcine mandibles, pigmentation of the enamel was well estimated within 0.1 mm error. The pocket depth measured in a healthy human subject was also in good agreement with our prior study. This report demonstrates that a modality-independent registration technique can be applied to clinically relevant PA-US scans of the periodontium to reduce the operator's skill burden and subject discomfort, while showing potential for handheld clinical periodontal imaging.

     
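    The deshaking step in item 3 aligns B-mode frames before volume rendering. One modality-independent way to do this is rigid translation registration by phase correlation; the sketch below uses scikit-image, and the choice of registration method and parameters is an assumption rather than the report's exact implementation:

```python
import numpy as np
from scipy.ndimage import shift as apply_shift
from skimage.registration import phase_cross_correlation

def deshake(frames, reference_index=0):
    """Align a stack of 2D frames to a chosen reference frame by estimating
    and undoing the per-frame rigid translation (subpixel phase correlation)."""
    reference = frames[reference_index]
    aligned = []
    for frame in frames:
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        aligned.append(apply_shift(frame, offset))   # move the frame back onto the reference
    return np.stack(aligned)
```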
  4. Purpose

    We introduce and validate a scalable retrospective motion correction technique for brain imaging that incorporates a machine learning component into a model‐based motion minimization.

    Methods

    A convolutional neural network (CNN) trained to remove motion artifacts from 2D T2-weighted rapid acquisition with refocused echoes (RARE) images is introduced into a model-based data-consistency optimization to jointly search for 2D motion parameters and the uncorrupted image. Our separable motion model allows for efficient intrashot (line-by-line) motion correction of highly corrupted shots, unlike previous methods, which do not scale well with this refinement of the motion model. Final image generation incorporates the motion parameters within a model-based image reconstruction. The method is tested in simulations and in in vivo experiments with in-plane motion corruption.

    Results

    While the convolutional neural network alone provides some motion mitigation (at the expense of introduced blurring), allowing it to guide the iterative joint optimization both improves the search convergence and renders the joint optimization separable. This enables rapid mitigation within shots in addition to between shots. For 2D in-plane motion correction experiments, the result is a significant reduction of image-space root mean square error in simulations and a reduction of motion artifacts in the in vivo motion tests.

    Conclusion

    The separability and convergence improvements afforded by the combined convolutional neural network + model-based method show the potential for meaningful post-acquisition motion mitigation in clinical MRI.

     
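    Item 4 alternates a CNN artifact-removal step with a model-based search over motion parameters driven by data consistency. For in-plane translation the forward motion model is particularly simple: a translation appears as a linear phase ramp in k-space. The sketch below illustrates only that data-consistency piece, with the image estimate, acquired k-space lines, and search strategy all treated as illustrative inputs rather than the published algorithm:

```python
import numpy as np

def translate_in_kspace(kspace, dy, dx):
    """Apply an in-plane translation (in pixels) as a linear phase ramp in k-space."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    return kspace * np.exp(-2j * np.pi * (ky * dy + kx * dx))

def data_consistency_cost(image_estimate, acquired_lines, line_indices, dy, dx):
    """Squared error between the acquired k-space lines of one shot and the
    k-space of the current image estimate under a candidate translation."""
    k_estimate = translate_in_kspace(np.fft.fft2(image_estimate), dy, dx)
    return np.sum(np.abs(k_estimate[line_indices] - acquired_lines) ** 2)

# Hypothetical joint search: for each shot (or each line, for intrashot motion),
# evaluate data_consistency_cost over a small grid of (dy, dx), keep the minimizer,
# let the CNN re-estimate the image, and repeat until the cost stops improving.
```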
  5. Abstract

    There is a growing research interest in quantifying blood flow distribution for the entire cerebral circulation to sharpen diagnosis and improve treatment options for cerebrovascular disease of individual patients. We present a methodology to reconstruct subject-specific cerebral blood flow patterns in accordance with physiological and fluid mechanical principles and optimally informed by in vivo neuroimage data of cerebrovascular anatomy and arterial blood flow rates. We propose an inverse problem to infer blood flow distribution across the visible portion of the arterial network that best matches subject-specific anatomy and a given set of volumetric flow measurements. The optimization technique also mitigates the effect of uncertainties by reconciling incomplete flow data and by dissipating unavoidable acquisition errors associated with medical imaging data.

     
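    Item 5 poses flow reconstruction as an inverse problem: find branch flow rates that satisfy conservation of mass at every junction while staying as close as possible to the incomplete, noisy measurements. The sketch below shows one standard formulation of that idea, equality-constrained least squares solved through its KKT system; the incidence matrix, weighting, and regularization are placeholders and not the authors' formulation:

```python
import numpy as np

def reconstruct_flows(A, q_measured, measured_mask, ridge=1e-6):
    """Estimate flow rates Q on every branch of an arterial network.

    A             : junction-by-branch incidence matrix; mass conservation is A @ Q = 0
    q_measured    : measured flow rates (zeros where no measurement exists)
    measured_mask : boolean array, True for branches with a measurement

    Solves  min ||W (Q - q_measured)||^2 + ridge * ||Q||^2  subject to  A @ Q = 0
    by assembling and solving the KKT system.
    """
    n_junctions, n_branches = A.shape
    W = np.diag(measured_mask.astype(float))              # weight only the measured branches
    H = 2.0 * (W + ridge * np.eye(n_branches))            # small ridge keeps the system well posed
    kkt = np.block([[H, A.T],
                    [A, np.zeros((n_junctions, n_junctions))]])
    rhs = np.concatenate([2.0 * W @ q_measured, np.zeros(n_junctions)])
    solution, *_ = np.linalg.lstsq(kkt, rhs, rcond=None)  # robust to redundant junction equations
    return solution[:n_branches]
```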