Polylidar3D: Fast Polygon Extraction from 3D Data
Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces, such as polygons, desirable. We present Polylidar3D, a non-convex polygon extraction algorithm that takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed on real-world datasets, including aerial LiDAR point clouds for rooftop mapping, autonomous-driving LiDAR point clouds for road surface detection, and RGBD camera data for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
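To make the back-end stages concrete, below is a minimal NumPy sketch of the dominant-plane-normal idea: compute per-triangle normals of the mesh, then score candidate axes by how many normals align with them. This is an illustrative simplification (the paper's implementation accumulates normals on the sphere), and all function names here are ours, not the library's API.

```python
import numpy as np

def triangle_normals(vertices, triangles):
    """Unit normal of each mesh triangle (vertices: Nx3 floats, triangles: Mx3 indices)."""
    p0, p1, p2 = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def dominant_plane_normals(normals, candidate_axes, cos_thresh=0.95):
    """Rank candidate axes by how many triangle normals align with them.

    Illustrative stand-in for the dominant-plane-normal stage; the absolute
    dot product treats a plane and its flipped normal as one orientation.
    """
    alignment = np.abs(normals @ candidate_axes.T)   # (M, K) cosine similarities
    counts = (alignment > cos_thresh).sum(axis=0)    # supporting triangles per axis
    order = np.argsort(-counts)                      # most-supported axis first
    return candidate_axes[order], counts[order]
```

Triangles whose normals align with a dominant axis would then be grouped into connected planar segments, from which the polygon boundaries (outer hull plus interior holes) are extracted.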
- Award ID(s): 1738714
- Publication Date: 2020
- NSF-PAR ID: 10291988
- Journal Name: Sensors
- Volume: 20
- Issue: 17
- Page Range or eLocation-ID: 4819
- ISSN: 1424-8220
- Sponsoring Org: National Science Foundation
More Like this
-
This paper addresses the problem of learning to complete a scene's depth from sparse depth points and images of indoor scenes. Specifically, we study the case in which the sparse depth is computed from a visual-inertial simultaneous localization and mapping (VI-SLAM) system. The resulting point cloud is low-density, noisy, and nonuniform in its spatial distribution, as compared to the input from active depth sensors such as LiDAR or Kinect. Since VI-SLAM produces point clouds only over textured areas, we compensate for the missing depth of low-texture surfaces by leveraging their planar structures and their surface normals, an important intermediate representation. The pre-trained surface normal network, however, suffers from large performance degradation when the viewing direction (especially the roll angle) of the test image differs significantly from those seen in training. To address this limitation, we use the gravity estimate available from the VI-SLAM to warp the input image to the orientation prevailing in the training dataset. This results in a significant performance gain for the surface normal estimate, and thus the dense depth estimates. Finally, we show that our method outperforms other state-of-the-art approaches both on training (ScanNet [1] and …
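The gravity-based warping step admits a short sketch. Assuming OpenCV, a camera frame with x right and y down, and a gravity direction expressed in that frame (all assumptions of this illustration, not the authors' released code), the roll can be cancelled by rotating about the image center:

```python
import numpy as np
import cv2

def gravity_align(image, gravity_cam):
    """Rotate `image` so the projected gravity direction points straight down.

    gravity_cam: 3-vector, gravity in the camera frame (x right, y down,
    z forward is the convention assumed here).
    """
    gx, gy, _ = gravity_cam
    roll_deg = np.degrees(np.arctan2(gx, gy))  # in-plane angle vs. the +y axis
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), -roll_deg, 1.0)  # cancel the roll
    warped = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, roll_deg

# The surface normal network runs on `warped`; its predicted normals are then
# rotated back by `roll_deg` into the original camera orientation.
```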
-
We consider the problem of political redistricting: given the locations of people in a geographical area (e.g., a US state), the goal is to decompose the area into subareas, called districts, so that the populations of the districts are as close as possible and the districts are "compact" and "contiguous," to use the terms referred to in most US state constitutions and/or US Supreme Court rulings. We study a method that outputs a solution in which each district is the intersection of a convex polygon with the geographical area. The average number of sides per polygon is less than six. The polygons tend to be quite compact. Every two districts differ in population by at most one (so we call the solution balanced). In fact, the solution is a centroidal power diagram: each polygon has an associated center in ℝ³ such that (i) the projection of the center onto the plane z = 0 is the centroid of the locations of people assigned to the polygon, and (ii) for each person assigned to that polygon, the polygon's center is closest among all centers. The polygons are convex because they are the intersections of 3D Voronoi cells with the plane. The …
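The "closest center in ℝ³" condition is what makes each district a power-diagram cell, which a few lines of NumPy make explicit. This is a simplified illustration of the assignment rule only; the full method also iterates the centers to achieve balance and the centroidal property.

```python
import numpy as np

def power_assign(points, centers3d):
    """Assign each planar point to its nearest 3D center.

    Minimizing ||p - c_xy||^2 + c_z^2 is a power diagram with weight -c_z^2,
    so every district is a convex cell intersected with the state.
    """
    p = np.asarray(points, dtype=float)     # (N, 2) people locations
    c = np.asarray(centers3d, dtype=float)  # (K, 3) district centers
    d2 = ((p[:, None, :] - c[None, :, :2]) ** 2).sum(axis=-1) + c[None, :, 2] ** 2
    return d2.argmin(axis=1)                # district index for each person
```

Raising a center's z coordinate uniformly shrinks its district, so the z values play the role of the weights used to equalize populations, while moving (c_x, c_y) toward the assigned centroid enforces the centroidal condition, Lloyd-style.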
-
Emerging autonomous driving systems require reliable perception of 3D surroundings. Unfortunately, the current mainstream perception modalities, i.e., camera and Lidar, are vulnerable under challenging lighting and weather conditions. On the other hand, despite their all-weather operation, today's vehicle Radars are limited to location and speed detection. In this paper, we introduce MILLIPOINT, a practical system that advances Radar sensing capability to generate 3D point clouds. The key design principle of MILLIPOINT lies in enabling synthetic aperture radar (SAR) imaging on low-cost commodity vehicle Radars. To this end, MILLIPOINT models the relation between signal variations and Radar movement, and enables self-tracking of the Radar at wavelength-scale precision, thus realizing coherent spatial sampling. Furthermore, MILLIPOINT solves the unique problem of specular reflection by properly focusing on the targets with post-imaging processing. It also exploits the Radar's built-in antenna array to estimate the height of reflecting points and eventually generate 3D point clouds. We have implemented MILLIPOINT on a commodity vehicle Radar. Our evaluation results show that MILLIPOINT effectively combats motion errors and specular reflections, and can construct 3D point clouds with much higher density and resolution than existing vehicle Radar solutions.
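To see why wavelength-scale self-tracking is the crux, consider textbook back-projection SAR imaging, in which returns collected along the aperture are phase-compensated and summed coherently. This generic sketch is not MILLIPOINT's pipeline; all names are illustrative.

```python
import numpy as np

def sar_backprojection(samples, radar_positions, voxels, wavelength):
    """Coherently focus complex returns onto a grid of scene points.

    samples[k]: complex baseband return measured at radar_positions[k] (3-vector).
    voxels: (V, 3) array of candidate scatterer locations.
    """
    image = np.zeros(len(voxels), dtype=complex)
    for s, pos in zip(samples, radar_positions):
        r = np.linalg.norm(voxels - pos, axis=1)              # range to each voxel
        image += s * np.exp(1j * 4 * np.pi * r / wavelength)  # undo two-way phase
    return np.abs(image)

# With ~4 mm wavelengths (77 GHz automotive Radar), position errors of even a
# millimeter scramble the phase terms above, which is why precise self-tracking
# of the Radar's motion is the enabling ingredient.
```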
-
In this review paper, we first provide comprehensive tutorials on two classical methods of polygon-based computer-generated holography: the traditional method (also called the fast-Fourier-transform-based method) and the analytical method. Indeed, other modern polygon-based methods build on the ideas of these two methods. We then present selected methods with recent developments and progress and compare their computational reconstructions in terms of calculation speed and image quality, among other things. Finally, we discuss and propose a fast analytical method called the fast 3D affine transformation method, and based on this method, we present a numerical reconstruction of a computer-generated hologram (CGH) of a 3D surface consisting of 49,272 processed polygons of the face of a real person without the use of graphics processing units; to the best of our knowledge, this represents a state-of-the-art numerical result in polygon-based computer-generated holography. We also show optical reconstructions of such a CGH and of another CGH of the Stanford bunny of 59,996 polygons, with 31,724 processed polygons after back-face culling. We hope that this paper will bring out some of the essence of polygon-based computer-generated holography and provide some insights for future research.
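For context on the FFT-based (traditional) method, its core primitive is angular-spectrum propagation between parallel planes; each polygon's field is propagated this way after a frequency-space rotation that accounts for tilt, which the following sketch omits. Parameter names are ours.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a square sampled complex field by `distance` (same units as
    wavelength and pitch) using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                  # spatial frequencies (cycles/unit)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # drop evanescent components
    H = np.exp(1j * kz * distance)                   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Roughly speaking, the analytical methods instead evaluate each polygon's spectrum in closed form, trading per-polygon FFT resampling for algebra.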
-
Safety zones (SZs) are critical tools that can be used by wildland firefighters to avoid injury or fatality when engaging a fire. Effective SZs provide safe separation distance (SSD) from surrounding flames, ensuring that a fire's heat cannot cause burn injury to firefighters within the SZ. Evaluating SSD on the ground can be challenging, and underestimating SSD can be fatal. We introduce a new online tool for mapping SSD based on vegetation height, terrain slope, wind speed, and burning condition: the Safe Separation Distance Evaluator (SSDE). It allows users to draw a potential SZ polygon and estimate SSD and the extent to which that SZ polygon may be suitable, given the local landscape, weather, and fire conditions. We begin by describing the algorithm that underlies SSDE. Given the importance of vegetation height for assessing SSD, we then describe an analysis that compares LANDFIRE Existing Vegetation Height and a recent Global Ecosystem Dynamics Investigation (GEDI) and Landsat 8 Operational Land Imager (OLI) satellite image-driven forest height dataset to vegetation heights derived from airborne lidar data in three areas of the Western US. This analysis revealed that both LANDFIRE and GEDI/Landsat tended to underestimate vegetation heights, which translates into an underestimation of …
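As a rough sketch of the kind of calculation SSDE automates: published guidance scales SSD from vegetation height, with a multiplicative adjustment for slope and wind. The factor of 8 and the interface below are assumptions for illustration; the exact coefficients are defined by the tool.

```python
def safe_separation_distance(veg_height_m, slope_wind_factor=1.0):
    """Illustrative rule of thumb: SSD = 8 x vegetation height, scaled by a
    slope/wind adjustment factor (assumed form, not SSDE's exact model)."""
    return 8.0 * veg_height_m * slope_wind_factor

# e.g., a 20 m canopy with a combined slope/wind factor of 2 -> 320 m of
# required separation; an underestimated canopy height undersizes this directly.
print(safe_separation_distance(20.0, 2.0))
```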