Title: Minimal Solvers for Mini-Loop Closures in 3D Multi-Scan Alignment
3D scan registration is a classical yet highly useful problem in the context of 3D sensors such as Kinect and Velodyne. While several methods exist, they are usually incremental: adjacent scans are registered first to obtain initial poses, followed by motion averaging and bundle-adjustment refinement. In this paper, we take a different approach and develop minimal solvers for jointly computing the initial poses of cameras in small loops such as 3-, 4-, and 5-cycles. Note that the classical registration of 2 scans can be done using a minimum of 3 point matches to compute the 6 degrees of freedom of the relative motion. To jointly compute the 3D registrations in n-cycles, on the other hand, we take 2 point matches between each of the first n−1 consecutive pairs (i.e., Scan 1 & Scan 2, . . . , and Scan n−1 & Scan n) and 1 or 2 point matches between Scan 1 and Scan n. Overall, we use 5, 7, and 10 point matches for 3-, 4-, and 5-cycles, and recover 12, 18, and 24 degrees of freedom in the transformation variables, respectively. Using simulations and real data, we show that 3D registration using mini n-cycles is computationally efficient and can provide alternative and better initial poses than standard pairwise methods.
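As a point of reference for the counts above, the following is a minimal sketch, not the paper's joint mini-loop solver: the classical pairwise case (a rigid transform from at least 3 point matches) implemented with the standard SVD-based Kabsch construction, followed by the match and unknown bookkeeping quoted for 3-, 4-, and 5-cycles. The function name and NumPy usage are illustrative assumptions; the match counts follow the abstract.

```python
# A minimal sketch, not the paper's joint mini-loop solver: the classical
# pairwise case (>= 3 point matches -> one rigid transform) via the standard
# SVD-based Kabsch construction, plus the match/unknown counts quoted in the
# abstract for 3-, 4-, and 5-cycles.
import numpy as np

def rigid_from_matches(P, Q):
    """Least-squares R, t with Q ~ R @ P + t from matched points.

    P, Q: (m, 3) arrays of corresponding points, m >= 3, not all collinear.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Match/unknown bookkeeping for mini-loops, using the abstract's numbers:
# 2 matches for each of the n-1 consecutive scan pairs, plus 1 or 2 matches
# closing the loop between Scan 1 and Scan n; the n-1 unknown scan poses
# (with Scan 1 fixed as reference) contribute 6 degrees of freedom each.
for n, closing_matches in [(3, 1), (4, 1), (5, 2)]:
    total_matches = 2 * (n - 1) + closing_matches
    unknowns = 6 * (n - 1)
    print(f"{n}-cycle: {total_matches} point matches, {unknowns} unknowns")
```

Run as-is, the loop reproduces the 5/12, 7/18, and 10/24 match-to-unknown pairs stated in the abstract.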
Award ID(s):
1764071
NSF-PAR ID:
10093739
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN:
2332-564X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Terrestrial lidar scans were captured using a BLK360 scanner (Leica Geosystems, Norcross, GA, USA), which has a range of 0.5–45 m and a measurement rate of up to 680,000 points per second at the high-resolution setting. A georeferenced 3D point cloud of the study site was generated from 12 scans spaced approximately 50 m apart in both horizontal directions. Scans were performed in orientations intended to maximize branch exposure to the scanner and during optimal weather conditions to minimize occlusion of features due to noise or movement generated by wind. Scan co-registration was done in Leica Geosystems' Cyclone Register 360 software using its Visual Simultaneous Localization and Mapping (Visual SLAM) algorithm and resulted in relatively low overall co-registration error, ranging from 0.005 to 0.009 m. From this study-site point cloud, manual straight-line measurements from the ground to the sensors were made using Leica's Cyclone Register 360 software.
  2. In recent years, LiDAR sensors have become pervasive in the solutions to localization tasks for autonomous systems. One key step in using LiDAR data for localization is the alignment of two LiDAR scans taken from different poses, a process called scan-matching or point cloud registration. Most existing algorithms for this problem are heuristic in nature and local, meaning they may not produce accurate results under poor initialization. Moreover, existing methods give no guarantee on the quality of their output, which can be detrimental for safety-critical tasks. In this paper, we analyze a simple algorithm for point cloud registration, termed PASTA. This algorithm is global and does not rely on point-to-point correspondences, which are typically absent in LiDAR data. Moreover, and to the best of our knowledge, we offer the first point cloud registration algorithm with provable error bounds. Finally, we illustrate the proposed algorithm and error bounds in simulation on a simple trajectory tracking task. 
  3. This publication documents 3D image stacks from HR-pQCT imaging of a femur diaphysis, as well as image stacks for two in-situ loaded fracture mechanics specimens observed with 3D X-ray microscopy. Imaging. For HR-pQCT: scans were acquired by Rachel Surowiec using an XtremeCT II scanner (SCANCO Medical AG, Bruttisellen, Switzerland) within the Musculoskeletal Function, Imaging and Tissue (MSK-FIT) Resource Core of the Indiana Center for Musculoskeletal Health's Clinical Research Center (Indiana University, Indianapolis, IN). Scans were performed at 60.7 um resolution, 68 kV, 1467 uA, 43 ms integration time, and 1-frame averaging. Raw scans are '.RSQ' files. The ISQ files were read into ImageJ using the Import-KHKs Scanco uCT ISQ file reader plug-in and exported as BMP image stacks; the image stacks are provided in two parts. Reconstructed images were rotated in dataviewer so that all bones are in the same orientation (proximal/distal/anterior/posterior for the femur). For the in-situ fracture mechanics experiments: 3D scans were acquired by Glynn Gallaway using a 3-point bending rig for single-edge notched bend specimens with a Deben CT5000N load cell (Deben, Bury St. Edmunds, UK) in a Zeiss XRADIA 510 Versa 3D X-ray microscope (Carl Zeiss AG, Baden-Württemberg, Germany) at Purdue University. The 3-point bending frame had a span of 20 mm with X-ray transparent, glassy carbon supports. To maintain hydration, the beam was wrapped in a plastic film slit at the notch. Displacements were applied at 0.1 mm/min. Load cell outputs were monitored and recorded, and displacements were held constant during image acquisitions. The first 3D image was obtained at the onset of non-linearity. Subsequently, the displacement was increased until a load increase of 10 N was observed, and another image was obtained. This sequence was repeated 6 times until peak load. 3D X-ray images were acquired with a resolution of 4.5 um, 5 s exposure time, 801 projections, 120 kV, 10 W, a 4x objective, and an LE2 filter. X-ray projections were processed through the XRADIA Scout-and-Scan Reconstructor. A recursive Gaussian smoothing filter (σ = 1 pixel) was applied to reduce image artifacts. Image stacks were exported as TIFF files and are provided individually for each load step and specimen. Two experiments are documented (beam 1 and beam 2). Materials: The diaphysis of a human (92-year-old, male) cadaveric femur was obtained through the Indiana University School of Medicine Anatomical Donation Program.
  4. Recovering rigid registration between successive camera poses lies at the heart of 3D reconstruction, SLAM and visual odometry. Registration relies on the ability to compute discriminative 2D features in successive camera images for determining feature correspondences, which is very challenging in feature-poor environments, i.e. low-texture and/or low-light environments. In this paper, we aim to address the challenge of recovering rigid registration between successive camera poses in feature-poor environments in a Visual Inertial Odometry (VIO) setting. In addition to inertial sensing, we instrument a small aerial robot with an RGBD camera and propose a framework that unifies the incorporation of 3D geometric entities: points, lines, and planes. The tracked 3D geometric entities provide constraints in an Extended Kalman Filtering framework. We show that by directly exploiting 3D geometric entities, we can achieve improved registration. We demonstrate our approach on different texture-poor environments, with some containing only flat texture-less surfaces providing essentially no 2D features for tracking. In addition, we evaluate how the addition of different 3D geometric entities contributes to improved pose estimation by comparing an estimated pose trajectory to a ground truth pose trajectory obtained from a motion capture system. We consider computationally efficient methods for detecting 3D points, lines and planes, since our goal is to implement our approach on small mobile robots, such as drones. 
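    (A generic, minimal sketch of a single-point EKF update in this spirit appears after this list.)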
  5. Precision modeling of the hand internal musculoskeletal anatomy has been largely limited to individual poses, and has not been connected into continuous volumetric motion of the hand anatomy actuating across the hand's entire range of motion. This is for a good reason, as hand anatomy and its motion are extremely complex and cannot be predicted merely from the anatomy in a single pose. We give a method to simulate the volumetric shape of hand's musculoskeletal organs to any pose in the hand's range of motion, producing external hand shapes and internal organ shapes that match ground truth optical scans and medical images (MRI) in multiple scanned poses. We achieve this by combining MRI images in multiple hand poses with FEM multibody nonlinear elastoplastic simulation. Our system models bones, muscles, tendons, joint ligaments and fat as separate volumetric organs that mechanically interact through contact and attachments, and whose shape matches medical images (MRI) in the MRI-scanned hand poses. The match to MRI is achieved by incorporating pose-space deformation and plastic strains into the simulation. We show how to do this in a non-intrusive manner that still retains all the simulation benefits, namely the ability to prescribe realistic material properties, generalize to arbitrary poses, preserve volume and obey contacts and attachments. We use our method to produce volumetric renders of the internal anatomy of the human hand in motion, and to compute and render highly realistic hand surface shapes. We evaluate our method by comparing it to optical scans, and demonstrate that we qualitatively and quantitatively substantially decrease the error compared to previous work. We test our method on five complex hand sequences, generated either using keyframe animation or performance animation using modern hand tracking techniques. 
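Relating to item 4 above (constraints from tracked 3D entities in an Extended Kalman Filtering framework), the snippet below is a generic, minimal sketch of a single EKF measurement update in which one observed 3D point with known world coordinates constrains a 6-DoF sensor pose. The roll-pitch-yaw state, the numerically differentiated Jacobian, and all names are illustrative assumptions rather than the authors' implementation; line and plane entities would add analogous measurement models.

```python
# Illustrative only: one EKF measurement update in which an observed 3D point
# (known position in the world frame, measured in the sensor frame) constrains
# a 6-DoF pose state [tx, ty, tz, roll, pitch, yaw]. Not the paper's filter.
import numpy as np

def rotation(rpy):
    """Rotation matrix from roll-pitch-yaw (illustrative parameterization)."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def h(state, p_world):
    """Measurement model: a known world point expressed in the sensor frame."""
    t, rpy = state[:3], state[3:]
    return rotation(rpy).T @ (p_world - t)

def ekf_point_update(state, P, z, p_world, R_meas, eps=1e-6):
    """Standard EKF update; the Jacobian of h is taken by finite differences."""
    H = np.zeros((3, 6))
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        H[:, i] = (h(state + d, p_world) - h(state - d, p_world)) / (2 * eps)
    innovation = z - h(state, p_world)       # measurement residual
    S = H @ P @ H.T + R_meas                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    state = state + K @ innovation
    P = (np.eye(6) - K @ H) @ P
    return state, P
```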