Title: Super-Resolution 3D Laser Scanning Based on Interval Arithmetic
Most 3D laser scanners are based on 3D optical triangulation algorithms, where the location of each 3D point is estimated as the intersection of a camera ray and a plane of light projected by a laser line generator. Since a physical laser line generator projects a sheet of light of finite thickness, inaccurate measurement and errors result from assuming that the plane of light is infinitesimally thin. We propose a new mathematical formulation for 3D optical triangulation based on interval arithmetic, where 3D points are only determined within certain bounds along the camera rays, and multiple measurements are used to tighten these bounds. We propose the Line Segment Cloud as an alternative surface representation to visualize the measurement errors within the proposed framework. We introduce the Iterative Line Segment Tightening algorithm to convert line segment clouds to point clouds, as a preprocessing step prior to surface reconstruction. We describe how to construct a low cost laser line 3D scanner, where the camera is fixed with respect to the object and the laser line generator is mounted on a high resolution motion platform. We describe a GPU-based implementation where the large number of captured images are processed in real time. Finally, we present some experimental results.
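To make the interval-based triangulation concrete, the following Python sketch bounds a 3D point along a camera ray by intersecting the ray with the two planes that bound a laser sheet of finite thickness, and then intersects the intervals obtained from multiple laser positions to tighten the bound. This is a minimal illustration under an assumed plane parameterization, not the paper's implementation; all function names and numeric values are hypothetical.

```python
import numpy as np

def ray_slab_interval(origin, direction, n, d_near, d_far):
    """Depth interval [t_min, t_max] where the ray origin + t * direction lies
    between the two planes n.x = d_near and n.x = d_far that bound the thick
    laser sheet (illustrative parameterization)."""
    denom = float(np.dot(n, direction))
    if abs(denom) < 1e-9:                       # ray parallel to the sheet
        return None
    t1 = (d_near - np.dot(n, origin)) / denom
    t2 = (d_far - np.dot(n, origin)) / denom
    return (min(t1, t2), max(t1, t2))

def tighten(intervals):
    """Intersect depth intervals from multiple laser positions that all
    illuminated the same pixel; the result bounds the true surface point."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None       # empty => inconsistent data

# Example: one camera ray, two laser-sheet positions seen at the same pixel.
o = np.zeros(3)
r = np.array([0.1, 0.0, 1.0]); r /= np.linalg.norm(r)
i1 = ray_slab_interval(o, r, np.array([0.7, 0.0, 0.7]), 0.70, 0.72)
i2 = ray_slab_interval(o, r, np.array([0.6, 0.0, 0.8]), 0.78, 0.80)
print(tighten([i1, i2]))   # tightened bound along the ray
```

The tightened bound along each ray corresponds to a segment of the kind visualized by the paper's Line Segment Cloud representation; additional measurements shrink it further.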
Award ID(s): 1717355
PAR ID: 10174305
Author(s) / Creator(s):
Date Published:
Journal Name: IEEE Transactions on Instrumentation and Measurement
ISSN: 0018-9456
Page Range / eLocation ID: 1 to 1
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This paper presents a novel technique for achieving autofocusing in a three-dimensional (3D) profilometry system with dual projectors. The proposed system uses a camera fitted with an electronically focus-tunable lens (ETL) that allows the camera's focal plane to be changed dynamically so that the camera can focus on the object; the camera captures fringe patterns projected by each projector to establish corresponding points between the two projectors, and the two pre-calibrated projectors form the triangulation for 3D reconstruction. We pre-calibrate the relationship between depth and the ETL current used for each focal plane, perform a 3D shape measurement at an unknown focus level, and calculate the desired current value from the initial 3D result. We developed a prototype system that can automatically focus on an object positioned between 450 mm and 850 mm.
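A minimal sketch of the autofocusing step described above, assuming the depth-to-current relationship is modeled as a low-order polynomial (the abstract only states that it is pre-calibrated); all variable names and numbers are illustrative.

```python
import numpy as np

# Hypothetical calibration data: ETL drive currents (mA) and the depths (mm)
# at which the camera is in focus for each current.
cal_current_mA = np.array([ 40.0,  55.0,  70.0,  85.0, 100.0])
cal_depth_mm   = np.array([850.0, 750.0, 650.0, 550.0, 450.0])

# Fit current as a smooth function of depth (the quadratic model is an
# assumption; the abstract only says the relationship is pre-calibrated).
coeffs = np.polyfit(cal_depth_mm, cal_current_mA, deg=2)

def current_for_depth(depth_mm):
    """Return the ETL current that brings the focal plane to depth_mm."""
    return float(np.polyval(coeffs, depth_mm))

# After an initial (possibly defocused) 3D measurement, use its mean depth
# to set the ETL current, then re-measure in focus.
initial_mean_depth = 612.0          # mm, from the first reconstruction
print(current_for_depth(initial_mean_depth))
```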
  2. Robust and effective fruit detection and localization is essential for robotic harvesting systems. While extensive research efforts have been devoted to improving fruit detection, less emphasis has been placed on fruit localization, a crucial yet challenging task because existing sensors provide limited depth accuracy in natural orchard environments with variable lighting and foliage/branch occlusions. In this paper, we present the system design and calibration of the Active LAser-Camera Scanner (ALACS), a novel perception module for robust and high-precision fruit localization. The hardware of the ALACS mainly consists of a red line laser, an RGB camera, and a linear motion slide, which are integrated into an active scanning scheme that employs a dynamic-targeting laser-triangulation principle. A high-fidelity extrinsic model is developed to pair the laser illumination with the RGB camera, enabling precise depth computation when the target is captured by both sensors. A random sample consensus (RANSAC)-based robust calibration scheme is then designed to estimate the model parameters from collected data. Comprehensive evaluations are conducted to validate the system model and calibration scheme. The results show that the proposed calibration method can detect and remove data outliers to achieve robust parameter estimation, and that the calibrated ALACS system achieves high-precision localization with a maximum depth measurement error of less than 4 mm at distances ranging from 0.6 to 1.2 m.
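The laser-triangulation principle mentioned above can be illustrated with a small sketch: once the laser plane is known in the camera frame (the output of the extrinsic calibration), the depth at a pixel lit by the laser line follows from intersecting that pixel's back-projected ray with the plane. This is not the ALACS extrinsic model itself; the intrinsics and plane parameters below are placeholder values.

```python
import numpy as np

# Assumed calibration results (illustrative values, not from the paper):
K = np.array([[1400.0, 0.0, 960.0],       # camera intrinsics
              [0.0, 1400.0, 540.0],
              [0.0,    0.0,   1.0]])
plane_n = np.array([0.0, -0.259, 0.966])  # laser-plane normal in camera frame
plane_d = 0.85                            # plane offset: n . X = d (metres)

def depth_from_laser_pixel(u, v):
    """Intersect the back-projected ray of pixel (u, v) with the laser plane.
    Returns the 3D point in the camera frame (laser-triangulation principle)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, Z = 1
    t = plane_d / np.dot(plane_n, ray)               # scale so n.(t*ray) = d
    return t * ray                                   # [X, Y, Z], Z is depth

print(depth_from_laser_pixel(1010.0, 620.0))
```

In the actual system, moving the linear slide changes the laser plane, so the plane parameters would be supplied by the calibrated extrinsic model as a function of slide position.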
  3. Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces such as polygons desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets, including aerial LiDAR point clouds for rooftop mapping, autonomous-driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
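As a rough illustration of two of the back-end steps (dominant plane normal estimation followed by planar segment extraction), the sketch below computes triangle normals for a tiny mesh, picks a dominant normal, and collects the triangles that align with it; mesh smoothing and polygon extraction are omitted, and this is not the Polylidar3D implementation.

```python
import numpy as np

def triangle_normals(vertices, triangles):
    """Unit normal of each triangle in a mesh (vertices: Nx3, triangles: Mx3)."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def dominant_normal(normals):
    """Very rough stand-in for dominant-plane-normal estimation: pick the
    normal with the largest summed alignment to all other normals."""
    scores = np.abs(normals @ normals.T).sum(axis=1)
    return normals[np.argmax(scores)]

def planar_segment(normals, dominant, angle_tol_deg=8.0):
    """Indices of triangles whose normals align with the dominant normal."""
    cos_tol = np.cos(np.radians(angle_tol_deg))
    return np.where(np.abs(normals @ dominant) > cos_tol)[0]

# Tiny example mesh: two coplanar triangles plus one tilted triangle.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 1]], float)
T = np.array([[0, 1, 2], [1, 3, 2], [0, 1, 4]])
N = triangle_normals(V, T)
d = dominant_normal(N)
print(planar_segment(N, d))   # -> [0 1]: the flat-surface segment
```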
  4. We present the first on-sky segmented primary mirror closed-loop piston control using a Zernike wavefront sensor (ZWFS) installed on the Keck II telescope. Segment cophasing errors are a primary contributor to contrast limits on Keck, and correcting them will be necessary for the next generation of space missions and ground-based extremely large telescopes, which will all have segmented primary mirrors. The goal of the ZWFS installed on Keck is to monitor and correct primary mirror cophasing errors in parallel with science observations. The ZWFS is ideal for measuring phase discontinuities such as segment cophasing errors and is one of the most sensitive wavefront sensors, but it has limited dynamic range. The vector-ZWFS at Keck operates on the adaptive-optics-corrected wavefront and consists of a metasurface focal-plane mask that imposes two different phase shifts on the core of the point-spread function for two orthogonal light polarizations, producing two pupil images. This design extends the dynamic range compared with the scalar ZWFS. The primary mirror segment pistons were controlled in closed loop using the ZWFS, improving the Strehl ratio on the NIRC2 science camera by up to 10 percentage points. We analyze the performance of the closed-loop tests and their impact on NIRC2 science data, and discuss the ZWFS measurements.
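A heavily simplified sketch of the closed-loop idea: segment piston estimates from a wavefront sensor feed a leaky integrator that updates the piston commands. The linear sensor model, gains, and noise levels below are assumptions for illustration and do not reflect the Keck ZWFS pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 36
true_piston = rng.normal(0.0, 50.0, n_segments)    # nm, initial cophasing errors
command = np.zeros(n_segments)                     # piston offsets applied so far

gain, leak = 0.3, 0.99                             # assumed loop parameters
for step in range(50):
    residual = true_piston + command               # wavefront seen by the sensor
    measurement = residual + rng.normal(0.0, 5.0, n_segments)  # sensor noise (nm)
    measurement -= measurement.mean()              # global piston is unobservable
    command = leak * command - gain * measurement  # leaky integrator update

print(np.std(true_piston + command))               # residual piston RMS (nm)
```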
  5. Active sensing with adaptive depth sensors is a nascent field with potential in areas such as advanced driver-assistance systems (ADAS). Such sensors, however, require dynamically driving a laser or light source to a specific location to capture information; one such class of sensor is the Triangulation Light Curtain (LC). In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a Light Curtain's laser line to regions of uncertainty and acquire new measurements. These measurements are used to recursively reduce depth uncertainty and correct errors. We show real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a Light Curtain.
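A minimal sketch of the idea of driving the curtain from the current depth belief and recursively fusing the returned measurements: each image column keeps a Gaussian depth belief, the curtain is placed by sampling from that belief (so uncertain columns are probed more widely), and hits are fused by inverse-variance weighting. The per-column Gaussian model, the curtain-thickness gating, and all numbers are assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cols = 8
true_depth = rng.uniform(5.0, 30.0, n_cols)        # metres, unknown scene depth

# Per-column Gaussian depth belief initialized from a (noisy) RGB depth prior.
mean = true_depth + rng.normal(0.0, 2.0, n_cols)
var = np.full(n_cols, 4.0)                         # high initial uncertainty

def sense_curtain(placement):
    """Drive the curtain's laser line through `placement`; columns where the
    curtain intersects the surface return a low-noise depth, others NaN."""
    hit = np.abs(placement - true_depth) < 1.0     # curtain thickness (assumed)
    z = np.where(hit, true_depth + rng.normal(0.0, 0.05, n_cols), np.nan)
    return z, 0.01                                 # measurement variance

for _ in range(10):                                # recursive refinement
    # Probe where the belief is uncertain: sample a placement from the belief.
    placement = mean + rng.normal(0.0, np.sqrt(var))
    z, r = sense_curtain(placement)
    valid = ~np.isnan(z)
    k = var[valid] / (var[valid] + r)              # inverse-variance fusion gain
    mean[valid] += k * (z[valid] - mean[valid])
    var[valid] *= (1.0 - k)

print(np.mean(np.abs(mean - true_depth)))          # depth error after refinement
```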