Title: Approaches for Hybrid Coregistration of Marker-Based and Markerless Coordinates Describing Complex Body/Object Interactions
Full-body motion capture is essential for the study of body movement. Video-based, markerless mocap systems are, in some cases, replacing marker-based systems, but hybrid systems are less explored. We develop methods for coregistration between 2D video and 3D marker positions when precise spatial relationships are not known a priori. We illustrate these methods on three-ball cascade juggling, in which marker-based tracking of the balls was not possible and hand tracking was precluded by occlusion. Using recorded video and motion capture, we aimed to transform 2D ball coordinates into 3D body space and to recover details of hand motion. We propose four linear coregistration methods that differ in how they optimize ball-motion constraints during hold and flight phases, using an initial estimate of hand position based on arm and wrist markers. We found that minimizing the error between the ball and the hand estimate was globally suboptimal, distorting ball flight trajectories. The best-performing method used gravitational constraints to transform vertical coordinates and ball-hold constraints to transform lateral coordinates. This method enabled an accurate description of ball flight as well as a reconstruction of wrist movements. We discuss these findings in the broader context of video/motion-capture coregistration.
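The gravitational constraint behind the best-performing method lends itself to a compact illustration: during a flight phase the ball's vertical pixel trajectory is parabolic, so matching the fitted curvature to -g fixes the vertical pixel-to-meter scale. The sketch below is not the authors' implementation; the function name and the up-is-positive pixel convention are assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def vertical_scale_from_flight(t, y_px):
    """Estimate the meters-per-pixel scale of the vertical axis by fitting a
    parabola to one ball-flight segment and matching its curvature to -g.

    t    : frame times (s) within a single flight phase
    y_px : vertical ball position in pixels (up assumed positive)
    """
    a, _, _ = np.polyfit(t, y_px, 2)   # y_px ~ a*t^2 + b*t + c
    # In meters the quadratic coefficient must be -g/2, so scale*a = -g/2.
    return -G / (2.0 * a)
```

Lateral coordinates have no such physical anchor, which is consistent with the paper's choice to transform them with ball-hold constraints instead.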
Award ID(s):
2011716
PAR ID:
10439204
Author(s) / Creator(s):
Date Published:
Journal Name:
Sensors
Volume:
23
Issue:
14
ISSN:
1424-8220
Page Range / eLocation ID:
6542
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Kinematic motion analysis is widely used in healthcare, sports medicine, robotics, biomechanics, and sports science. Motion capture systems are essential for motion analysis. There are three types of motion capture systems: marker-based capture, vision-based capture, and volumetric capture. Marker-based motion capture systems can achieve fairly accurate results, but attaching markers to the body is inconvenient and time-consuming. Vision-based, marker-less motion capture systems are more desirable because of their non-intrusiveness and flexibility. Volumetric capture is a newer and more advanced marker-less motion capture approach that can reconstruct realistic, full-body, animated 3D character models. But volumetric capture has rarely been used for motion analysis because volumetric motion data presents new challenges. We propose a new method for conducting kinematic motion analysis using volumetric capture data. This method consists of a three-stage pipeline. First, the motion is captured by a volumetric capture system. Second, the volumetric capture data is processed using the Iterative Closest Point (ICP) algorithm to generate virtual markers that track the motion. Third, the motion tracking data is imported into the biomechanical analysis tool OpenSim for kinematic motion analysis. Our motion analysis method enables users to apply numerical motion analysis to the skeleton model in OpenSim while also studying the full-body, animated 3D model from different angles. It has the potential to provide more detailed and in-depth motion analysis for areas such as healthcare, sports science, and biomechanics.
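The second stage, generating virtual markers with ICP, can be sketched with a minimal point-to-point implementation: brute-force nearest-neighbour matching alternating with the closed-form (Kabsch) rigid fit. This is an illustrative toy, not the pipeline's actual ICP; the function names are invented, and a production system would use an accelerated correspondence search.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t with R @ a_i + t ~ b_i (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp(source, target, iters=20):
    """Align `source` points to `target` by iterating nearest-neighbour
    matching and the closed-form rigid fit above."""
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest neighbour: O(n*m), fine for marker-sized clouds
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

With a reasonable initial pose, each iteration's correspondences improve and the rigid fit converges; tracking virtual markers across frames amounts to repeating this alignment frame to frame.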
  2. Simulating realistic butterfly motion has long been a challenging problem in computer animation, due in large part to the difficulty of acquiring accurate flight motion of butterflies. In this paper we propose a practical yet effective, optical marker-based approach to capture and process the detailed motion of a flying butterfly. Specifically, we first capture the trajectories of the wings and thorax of a flying butterfly using optical marker-based motion tracking. After that, our method automatically fills the positions of missing markers by exploiting the continuity and relevance of neighboring frames, and improves the quality of the captured motion via noise filtering with optimized parameter settings. Through comparisons with existing motion processing methods, we demonstrate the effectiveness of our approach to obtain accurate flight motions of butterflies. Furthermore, we created and will release a first-of-its-kind butterfly motion capture dataset to the research community.
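The gap-filling and smoothing steps can be sketched as spline interpolation across dropped frames followed by zero-phase low-pass filtering. The cutoff and filter order below are placeholder values, not the optimized settings the paper refers to:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def fill_and_smooth(t, x, fs, cutoff_hz=8.0):
    """Fill NaN gaps in one marker coordinate by cubic-spline interpolation
    over the neighbouring frames, then low-pass filter the result.

    t  : frame times (s), uniformly sampled at rate fs
    x  : marker coordinate with NaNs at dropped frames
    """
    ok = ~np.isnan(x)
    filled = CubicSpline(t[ok], x[ok])(t)      # continuity across the gap
    b, a = butter(4, cutoff_hz / (fs / 2.0))   # 4th-order low-pass (placeholder)
    return filtfilt(b, a, filled)              # zero-phase smoothing
```

In practice each marker axis would be processed independently, and the cutoff tuned to the wingbeat frequency being preserved.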
  3. Skarnitzl, R. (Ed.)
    While motion capture is rapidly becoming the gold standard for research on the intricacies of co-speech gesture and its relationship to speech, traditional marker-based motion capture technology is not always feasible, meaning researchers must code video data manually. We compare two methods for coding co-speech gestures of the hands and arms in video data of spontaneous speech: manual coding and semi-automated coding using OpenPose, a markerless motion capture tool. We provide a comparison of the temporal alignment of gesture apexes based on video recordings of interviews with speakers of Medumba (Grassfields Bantu). Our results show a close correlation between the computationally calculated apexes and our hand-annotated apexes, suggesting that both methods are equally valid for coding video data. The use of markerless motion capture technology for gesture coding will enable more rapid coding of manual gestures, while still allowing …
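For the semi-automated condition, an apex can be operationalized as the frame where the tracked wrist keypoint's speed is minimal within a stroke. This is one common heuristic, not necessarily the criterion used in the paper; the function below is a hypothetical sketch over an OpenPose-style 2D track.

```python
import numpy as np

def apex_frame(xy, fs):
    """Index of a gesture apex within one stroke, operationalized here as the
    frame of minimum 2D keypoint speed (a common heuristic only).

    xy : (n_frames, 2) wrist keypoint track, e.g. from OpenPose
    fs : video frame rate (Hz)
    """
    v = np.gradient(xy, 1.0 / fs, axis=0)    # per-axis velocity
    speed = np.linalg.norm(v, axis=1)
    interior = slice(1, len(speed) - 1)      # ignore boundary frames
    return 1 + int(np.argmin(speed[interior]))
```

Comparing such computed indices against hand-annotated apex frames is the kind of temporal-alignment check the study reports.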
  4. Full-body tracking in virtual reality improves presence, allows interaction via body postures, and facilitates better social expression among users. However, full-body tracking systems today require a complex setup fixed to the environment (e.g., multiple lighthouses/cameras) and a laborious calibration process, which goes against the desire to make VR systems more portable and integrated. We present HybridTrak, which provides accurate, real-time full-body tracking by augmenting inside-out upper-body VR tracking systems with a single external off-the-shelf RGB web camera. HybridTrak uses a fully neural solution to lift users' 2D full-body poses from the webcam to 3D poses, leveraging the inside-out upper-body tracking data. We showed that HybridTrak is more accurate than RGB or depth-based tracking methods on the MPI-INF-3DHP dataset. We also tested HybridTrak in the popular VRChat app and showed that body postures presented by HybridTrak are more distinguishable and more natural than those from a solution using an RGBD camera.
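HybridTrak's 2D-to-3D lifting is fully neural, but the idea of anchoring a webcam-derived pose to tracked upper-body joints can be illustrated non-neurally: solve a similarity transform (Umeyama's method) on the joints both systems observe, then apply it to the whole skeleton. Everything below is an illustrative stand-in, not HybridTrak's algorithm.

```python
import numpy as np

def similarity_align(src, dst):
    """Scale s, rotation R, translation t minimizing sum ||s*R@a_i + t - b_i||^2
    (Umeyama's closed-form solution). Solve on shared joints, apply to all."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    n = len(src)
    H = D.T @ S / n                            # cross-covariance dst<-src
    U, sig, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    E = np.diag([1.0, 1.0, d])                 # reflection guard
    R = U @ E @ Vt
    s = (sig * np.diag(E)).sum() * n / (S ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Solving on the reliably tracked upper-body joints and applying the result to the full webcam-estimated pose is one plausible geometric reading of "leveraging the inside-out upper-body tracking data."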
  5. Over the last few years, camera-based estimation of vital signs, referred to as imaging photoplethysmography (iPPG), has garnered significant attention due to the relative simplicity, ease, unobtrusiveness, and flexibility offered by such measurements. It is expected that iPPG may be integrated into a host of emerging applications in areas as diverse as autonomous cars, neonatal monitoring, and telemedicine. In spite of this potential, the primary challenge of non-contact camera-based measurements is the relative motion between the camera and the subjects. Current techniques employ 2D feature tracking to reduce the effect of subject and camera motion, but they are limited to handling translational and in-plane motion. In this paper, we study, for the first time, the utility of 3D face tracking to allow iPPG to retain robust performance even in the presence of out-of-plane and large relative motions. We use an RGB-D camera to obtain 3D information from the subjects and use the spatial and depth information to fit a 3D face model and track the model over the video frames. This allows us to estimate correspondence over the entire video with pixel-level accuracy, even in the presence of out-of-plane or large motions. We then estimate iPPG from the warped video data, which ensures per-pixel correspondence over the entire window length used for estimation. Our experiments demonstrate improvement in robustness when head motion is large.
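Once per-pixel correspondence is established, the final iPPG step reduces to estimating a pulse rate from the registered skin intensities. A minimal, hypothetical sketch of that last step (spatially averaged green channel in, beats per minute out), omitting the 3D face tracking entirely:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ippg_heart_rate(green_means, fs):
    """Estimate pulse rate (beats/min) from a per-frame spatial average of the
    green channel over tracked skin pixels. A toy sketch, not the paper's
    full pipeline.

    green_means : 1D array, mean green intensity of the skin ROI per frame
    fs          : camera frame rate (Hz)
    """
    x = green_means - np.mean(green_means)
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)                       # keep roughly the 42-240 bpm band
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return 60.0 * freqs[np.argmax(spec)]        # dominant spectral peak
```

The benefit of the 3D tracking is upstream of this: without accurate registration, motion artifacts corrupt `green_means` and swamp the small pulsatile component.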