Terrestrial lidar scans were captured with a BLK360 scanner (Leica Geosystems, Norcross, GA, USA), which has a range of 0.5–45 m and a measurement rate of up to 680,000 points s⁻¹ at the high-resolution setting. A georeferenced 3-D point cloud of the study site was generated from 12 scans spaced approximately 50 m apart in both horizontal directions. Scans were oriented to maximize branch exposure to the scanner and were performed during favorable weather to minimize occlusion of features caused by noise or wind-induced movement. Scan co-registration was performed in Leica Geosystems' Cyclone Register 360 software using its Visual Simultaneous Localization and Mapping (Visual SLAM) algorithm and yielded a relatively low overall co-registration error of 0.005–0.009 m. From this study-site point cloud, manual straight-line measurements from the ground to the sensors were made in the same software.
LiDAR Point Cloud Registration with Formal Guarantees
In recent years, LiDAR sensors have become pervasive in solutions to localization tasks for autonomous systems. One key step in using LiDAR data for localization is the alignment of two LiDAR scans taken from different poses, a process called scan matching or point cloud registration. Most existing algorithms for this problem are heuristic and local, meaning they may not produce accurate results under poor initialization. Moreover, existing methods give no guarantee on the quality of their output, which can be detrimental for safety-critical tasks. In this paper, we analyze a simple algorithm for point cloud registration, termed PASTA. This algorithm is global and does not rely on point-to-point correspondences, which are typically absent in LiDAR data. Moreover, to the best of our knowledge, we offer the first point cloud registration algorithm with provable error bounds. Finally, we illustrate the proposed algorithm and error bounds in simulation on a simple trajectory-tracking task.
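The abstract does not spell out PASTA's internals, but a minimal sketch of one correspondence-free, global alignment strategy, matching the two clouds' centroids and principal axes computed from their second moments, illustrates the general idea. This is an illustrative simplification under stated assumptions, not the authors' implementation:

```python
import numpy as np

def align_by_moments(P, Q):
    """Estimate a rigid transform (R, t) mapping cloud P onto cloud Q
    without correspondences, by matching centroids and principal axes.
    Illustrative sketch; assumes distinct covariance eigenvalues."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    # Second moments (covariances) encode each cloud's shape.
    Cp = np.cov((P - mu_p).T)
    Cq = np.cov((Q - mu_q).T)
    # Eigenvectors give each cloud's principal axes (ascending order).
    _, Up = np.linalg.eigh(Cp)
    _, Uq = np.linalg.eigh(Cq)
    R = Uq @ Up.T
    if np.linalg.det(R) < 0:      # keep a proper rotation (det = +1)
        Uq[:, 0] *= -1
        R = Uq @ Up.T
    t = mu_q - R @ mu_p
    return R, t
```

Note the well-known sign ambiguity of eigenvectors: a practical method must disambiguate axis directions (e.g., using higher-order moments), which is omitted here for brevity.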
- PAR ID: 10411917
- Date Published:
- Journal Name: 2022 IEEE 61st Conference on Decision and Control (CDC)
- Page Range / eLocation ID: 3462 to 3467
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Ishikawa, H.; Liu, C.L.; Pajdla, T.; Shi, J. (Eds.) We propose a novel technique to register sparse 3D scans in the absence of texture. While existing methods such as KinectFusion or Iterative Closest Point (ICP) heavily rely on dense point clouds, this task is particularly challenging under sparse conditions without RGB data. Sparse, texture-less data does not come with a high-quality boundary signal, which prohibits the use of correspondences from corners, junctions, or boundary lines. Moreover, in the case of sparse data, it is incorrect to assume that the same point will be captured in two consecutive scans. We take a different approach and first re-parameterize the point cloud using a large number of line segments. In this re-parameterized data, there exist a large number of line-intersection (rather than correspondence) constraints that allow us to solve the registration task. We propose a two-step alternating projection algorithm, formulating the registration as the simultaneous satisfaction of intersection and rigidity constraints. The proposed approach outperforms other top-scoring algorithms on both Kinect and LiDAR datasets. On Kinect data, we can use 100X-downsampled sparse data and still outperform competing methods operating on full-resolution data.
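The two-step alternating projection idea can be shown in a toy setting: alternately project the iterate onto each constraint set until it (approximately) satisfies both. The plane and sphere below merely stand in for the paper's intersection and rigidity constraints; they are illustrative assumptions, not the paper's actual constraint sets:

```python
import numpy as np

def project_plane(x, n, d):
    # Orthogonal projection onto the plane {x : n·x = d}, with |n| = 1.
    return x - (n @ x - d) * n

def project_sphere(x, r):
    # Radial projection onto the sphere of radius r about the origin.
    return r * x / np.linalg.norm(x)

def alternating_projections(x0, n, d, r, iters=200):
    """Alternate between the two projections until the iterate lies
    (to numerical precision) in the intersection of both sets."""
    x = x0
    for _ in range(iters):
        x = project_sphere(project_plane(x, n, d), r)
    return x
```

For a plane cutting the sphere, the iteration contracts toward the intersection circle; the same alternating structure carries over when the sets are intersection and rigidity constraints.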
-
LiDAR-OSM-Based Vehicle Localization in GPS-Denied Environments by Using Constrained Particle Filter
Cross-modal vehicle localization is an important task for automated driving systems. This research proposes a novel approach based on LiDAR point clouds and OpenStreetMap (OSM) data via a constrained particle filter, which significantly improves vehicle localization accuracy. The OSM modality provides not only a platform to generate simulated point cloud images, but also geometrical constraints (e.g., roads) that improve the particle filter's final result. The proposed approach is deterministic, with no learning component or need for labelled data. Evaluated on the KITTI dataset, it achieves accurate vehicle pose tracking with a position error of less than 3 m when considering the mean error across all sequences. This method shows state-of-the-art accuracy compared with existing methods based on OSM or satellite maps.
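A constrained particle filter of this kind can be sketched compactly: propagate particles, weight them by a measurement likelihood, and apply the map constraint as a hard gate on the weights so that off-road hypotheses never survive resampling. The straight road corridor and all parameter values below are illustrative assumptions standing in for real OSM road geometry:

```python
import numpy as np

rng = np.random.default_rng(42)

def pf_step(particles, meas, half_width=3.0, meas_std=1.0):
    """One predict-weight-resample step of a particle filter whose
    weights are zeroed outside a hypothetical straight road corridor
    |y| <= half_width (a stand-in for OSM road constraints)."""
    # Predict: diffuse particles with motion noise.
    particles = particles + rng.normal(0.0, 0.5, particles.shape)
    # Weight: Gaussian measurement likelihood ...
    d = np.linalg.norm(particles - meas, axis=1)
    w = np.exp(-0.5 * (d / meas_std) ** 2)
    # ... with the map constraint applied as a hard gate.
    w[np.abs(particles[:, 1]) > half_width] = 0.0
    w /= w.sum()
    # Resample: every surviving particle respects the road constraint.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

The gate is what makes the filter "constrained": resampling can only draw from hypotheses consistent with the map, which tightens the posterior compared with an unconstrained filter.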
-
3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information about the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To the best of our knowledge, this is the first publication on a LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network and to make the network more robust by retraining on the falsifying examples. In addition, scene images can be captured simultaneously for sensor fusion tasks, with a method proposed for automatic registration between the point clouds and the captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses/blind spots of the neural network can be fixed.
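The abstract does not detail the point-cloud-to-image registration, but its core geometric step under a standard pinhole camera model, projecting LiDAR points into pixel coordinates given intrinsics K and extrinsics (R, t), can be sketched as follows (the function name and all parameter values are illustrative assumptions):

```python
import numpy as np

def project_to_image(points, K, R, t):
    """Project 3-D LiDAR points (N, 3) into pixel coordinates using a
    pinhole camera with intrinsics K and extrinsics (R, t)."""
    cam = points @ R.T + t          # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0        # keep only points ahead of the camera
    cam = cam[in_front]
    uv = cam @ K.T                  # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]     # perspective divide
    return uv, in_front
```

With projections like this, each labeled 3-D point can be associated with a pixel in the simultaneously captured scene image, which is the basis for any automatic cloud-to-image registration.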
-
The integration of structure from motion (SfM) and unmanned aerial vehicle (UAV) technologies has allowed the generation of very high-resolution three-dimensional (3D) point cloud data (down to millimeters) to detect and monitor surface changes. However, a bottleneck still exists in accurately and rapidly registering point clouds acquired at different times. Existing point cloud registration algorithms, such as Iterative Closest Point (ICP) and Fast Global Registration (FGR), were mainly developed for registering small, static point clouds, and do not perform well on large point clouds with potential changes over time. In particular, registering large data is computationally expensive, and the inclusion of changing objects reduces registration accuracy. In this paper, we develop an AI-based workflow to ensure high-quality registration of point clouds generated from UAV-collected photos. We first detect stable objects in the ortho-photo produced from the same set of UAV-collected photos and segment the point clouds of these objects. Registration is then performed only on the partial data containing these stable objects. Applying this workflow to UAV data collected from three erosion plots at the East Tennessee Research and Education Center shows that it outperforms existing algorithms in both computational speed and accuracy. This AI-based workflow significantly improves computational efficiency and avoids the impact of changing objects on the registration of large point cloud data.
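The key step, registering only the stable-object subsets of the two clouds, can be sketched with a basic ICP loop. The stable-object masks are assumed given (in the paper they come from an AI detector on the ortho-photo), and the brute-force nearest-neighbour search is an illustrative simplification for small clouds:

```python
import numpy as np

def kabsch(P, Q):
    # Least-squares rigid transform (R, t) mapping paired points P -> Q.
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mp).T @ (Q - mq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T              # proper rotation (det = +1)
    return R, mq - R @ mp

def register_stable(P, Q, stable_p, stable_q, iters=20):
    """Basic ICP restricted to the stable-object subsets of two clouds,
    mirroring the idea of registering only on unchanging geometry."""
    P, Q = P[stable_p], Q[stable_q]
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = P @ R.T + t
        # Brute-force nearest neighbours (fine for small sketch clouds;
        # a k-d tree would be used for data at real survey scale).
        d2 = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
        R, t = kabsch(P, Q[d2.argmin(axis=1)])
    return R, t
```

Restricting the loop to stable subsets both shrinks the nearest-neighbour problem and removes the biased correspondences that changing objects would otherwise contribute.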