

Title: Volumetric Motion Magnification: Subtle Motion Extraction from 4D Data
Award ID(s):
1762809
NSF-PAR ID:
10290040
Author(s) / Creator(s):
Date Published:
Journal Name:
Measurement
Volume:
176
Issue:
C
ISSN:
0263-2241
Page Range / eLocation ID:
109211
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Purpose

    We introduce and validate a scalable retrospective motion correction technique for brain imaging that incorporates a machine learning component into a model‐based motion minimization.

    Methods

    A convolutional neural network (CNN) trained to remove motion artifacts from 2D T2‐weighted rapid acquisition with refocused echoes (RARE) images is introduced into a model‐based data‐consistency optimization to jointly search for 2D motion parameters and the uncorrupted image. Our separable motion model allows for efficient intrashot (line‐by‐line) motion correction of highly corrupted shots, as opposed to previous methods, which do not scale well with this refinement of the motion model. Final image generation incorporates the motion parameters within a model‐based image reconstruction. The method is tested in simulations and in vivo motion experiments of in‐plane motion corruption.

    Results

    While the convolutional neural network alone provides some motion mitigation (at the expense of introduced blurring), allowing it to guide the iterative joint optimization both improves the search convergence and renders the joint optimization separable. This enables rapid mitigation within shots in addition to between shots. For 2D in‐plane motion correction experiments, the result is a significant reduction of image‐space root mean square error in simulations and of motion artifacts in the in vivo motion tests.

    Conclusion

    The separability and convergence improvements afforded by the combined convolutional neural network + model‐based method show the potential for meaningful postacquisition motion mitigation in clinical MRI.

     
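The separable in-plane motion model described above rests on a standard Fourier identity: an in-plane translation multiplies each k-space line by a linear phase, which is what lets translation be applied or removed line by line. A minimal numpy sketch of that identity (purely illustrative, not the authors' code):

```python
import numpy as np

def apply_inplane_shift(image, dy, dx):
    """Translate a 2D image by (dy, dx) pixels via a k-space phase ramp.

    Uses the Fourier shift theorem: a spatial shift corresponds to a
    linear phase in k-space, the property that makes per-line (intrashot)
    translation models separable.
    """
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles/pixel along y
    kx = np.fft.fftfreq(nx)[None, :]   # cycles/pixel along x
    kspace = np.fft.fft2(image)
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(kspace * phase).real

# For integer shifts the result matches a plain circular roll.
img = np.random.default_rng(0).random((8, 8))
shifted = apply_inplane_shift(img, 2, 3)
```

Fractional `dy`/`dx` also work, which is what a continuous motion-parameter search exploits.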
  2. In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g. motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and motion granularity (e.g. gross or fine) of the activities impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of using motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that these sensor modalities are complementary, and that robots may benefit from systems that utilize multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to effectively choose the sensor modalities most suitable for their applications.
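The gross/fine distinction above can be made concrete with a toy HAR pipeline: window the sensor stream, extract simple statistics, and classify. The signals, window length, features, and nearest-centroid rule below are all illustrative assumptions (the abstract does not name its three algorithms), not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def window_features(signal, win=50):
    """Cut a 1D sensor stream into windows; return (std, mean |diff|) per window."""
    n = len(signal) // win
    w = signal[: n * win].reshape(n, win)
    return np.column_stack([w.std(axis=1),
                            np.abs(np.diff(w, axis=1)).mean(axis=1)])

# Synthetic stand-ins: 'gross' motion = large slow oscillation,
# 'fine' motion = small fast oscillation (purely illustrative signals).
t = np.arange(2000)
gross = 5.0 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.05, t.size)
fine = 0.5 * np.sin(2 * np.pi * t / 10) + rng.normal(0, 0.05, t.size)

X = np.vstack([window_features(gross), window_features(fine)])
y = np.array([0] * 40 + [1] * 40)

# Nearest-centroid rule as a minimal stand-in for a HAR classifier.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == y).mean()
```

Here amplitude (window std) separates gross from fine motion; with real multimodal data, which feature is discriminative depends on the sensor, which is the complementarity the abstract reports.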
  3. Kinematic motion analysis is widely used in health-care, sports medicine, robotics, biomechanics, sports science, etc. Motion capture systems are essential for motion analysis. There are three types of motion capture systems: marker-based capture, vision-based capture, and volumetric capture. Marker-based motion capture systems can achieve fairly accurate results, but attaching markers to a body is inconvenient and time-consuming. Vision-based, marker-less motion capture systems are more desirable because of their non-intrusiveness and flexibility. Volumetric capture is a newer and more advanced marker-less motion capture system that can reconstruct realistic, full-body, animated 3D character models. However, volumetric capture has rarely been used for motion analysis because volumetric motion data presents new challenges. We propose a new method for conducting kinematic motion analysis using volumetric capture data. This method consists of a three-stage pipeline. First, the motion is captured by a volumetric capture system. Second, the volumetric capture data is processed using the Iterative Closest Point (ICP) algorithm to generate virtual markers that track the motion. Third, the motion tracking data is imported into the biomechanical analysis tool OpenSim for kinematic motion analysis. Our motion analysis method enables users to apply numerical motion analysis to the skeleton model in OpenSim while also studying the full-body, animated 3D model from different angles. It has the potential to provide more detailed and in-depth motion analysis for areas such as healthcare, sports science, and biomechanics.
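The ICP step in the pipeline above alternates two operations: match each point to its nearest neighbour in the target cloud, then solve for the best rigid transform (the Kabsch/SVD solution). A minimal point-to-point sketch in numpy, with a synthetic "marker" cloud standing in for the volumetric data (not the paper's implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~= src @ R.T + t (Kabsch via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch fits."""
    cur = src.copy()
    for _ in range(iters):
        dists = np.linalg.norm(cur[:, None] - dst[None, :], axis=2)  # brute force
        R, t = best_rigid_transform(cur, dst[dists.argmin(axis=1)])
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)

# Recover a small rotation + shift applied to a synthetic point cloud.
rng = np.random.default_rng(2)
pts = rng.random((40, 3)) - 0.5
ang = 0.1
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
R_est, t_est = icp(pts, pts @ R_true.T + t_true)
```

Plain ICP only converges for small initial misalignments, which is why per-frame tracking (warm-starting from the previous frame's pose) suits motion data well.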
  5. Construction tasks involve various activities composed of one or more body motions. It is essential to understand the dynamically changing behavior and state of construction workers to manage them effectively with regard to their safety and productivity. While several research efforts have shown promising results in activity recognition, further research is still necessary to identify the best locations of motion sensors on a worker's body by analyzing the recognition results, improving performance and reducing implementation cost. This study proposes a simulation-based evaluation of multiple motion sensors attached to workers performing typical construction tasks. A set of 17 inertial measurement unit (IMU) sensors is utilized to collect motion sensor data from the entire body. Multiple machine learning algorithms are utilized to classify the motions of the workers by simulating several scenarios with different combinations and features of the sensors. Through the simulations, each IMU sensor placed at a different location on the body is tested to evaluate its recognition accuracy for the worker's different activity types. Then, the effectiveness of sensor locations is measured with regard to activity recognition performance to determine the relative advantage of each location. Based on the results, the required number of sensors can be reduced while maintaining the recognition performance. The findings of this study can contribute to the practical implementation of activity recognition using simple motion sensors to enhance the safety and productivity of individual workers.
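The sensor-placement evaluation above amounts to scoring feature subsets: train a classifier on each sensor's features and compare accuracies. A toy version with synthetic data, where by construction only one of three hypothetical "sensor locations" carries the activity signal (all names and numbers here are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

def centroid_accuracy(X, y):
    """Nearest-centroid training accuracy, used as a crude per-sensor score."""
    cents = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(np.linalg.norm(X[:, None] - cents, axis=2), axis=1)
    return (pred == y).mean()

# Synthetic features: 3 hypothetical sensor locations x 2 features each;
# only 'sensor 0' separates the two activity classes in this toy setup.
n = 100
y = np.repeat([0, 1], n)
X = rng.normal(0.0, 1.0, (2 * n, 6))
X[y == 1, 0:2] += 3.0          # sensor 0's features carry the class signal

scores = {s: centroid_accuracy(X[:, 2 * s:2 * s + 2], y) for s in range(3)}
best = max(scores, key=scores.get)
```

Ranking sensors this way (and then dropping the low-scoring ones) is the same logic that lets the study reduce the sensor count while maintaining recognition performance; a real evaluation would use held-out data rather than training accuracy.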