In this article, a compressive sensing (CS) reconstruction algorithm is applied to data acquired from a nodding multi-beam Lidar system following a Lissajous-like trajectory. Multi-beam Lidar systems provide 3D depth information of the environment for applications in robotics, but the vertical resolution of these devices may be insufficient to identify objects, especially when an object is small and/or far from the robot. To overcome this issue, the Lidar can be nodded to obtain higher vertical resolution, with the side effect of increased scan time, especially when raster scan patterns are used. Such systems, especially when combined with nodding, also yield large volumes of data which may be difficult to store and manage on resource-constrained systems. Using Lissajous-like nodding trajectories allows a trade-off between scan time and horizontal and vertical resolution through the choice of scan parameters. These patterns also naturally sub-sample the imaged area, and the data can be further reduced by simply not collecting every data point along the trajectory. The final depth image must then be reconstructed from the sub-sampled data. In this article, a CS reconstruction algorithm is applied to data collected during a fast and therefore low-resolution Lissajous-like scan. Experiments and simulations show the feasibility of this method and compare its results to images produced by simple nearest-neighbor interpolation.
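As a concrete illustration of the scan pattern described above, the following sketch generates a Lissajous-like azimuth/elevation trajectory and sub-samples it along the path. All parameter values (frequencies, amplitudes, sample rate, and the sub-sampling factor) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative parameters only; the paper's actual scan parameters are not given here.
def lissajous_scan(f_h=3.0, f_v=4.0, amp_h=180.0, amp_v=15.0,
                   duration=1.0, rate=1000, keep_every=4):
    """Generate a Lissajous-like azimuth/elevation trajectory (degrees)
    and sub-sample it by keeping every `keep_every`-th point, mimicking
    the reduced data collection along the scan path."""
    t = np.arange(0.0, duration, 1.0 / rate)
    az = amp_h * np.sin(2.0 * np.pi * f_h * t)   # horizontal sweep
    el = amp_v * np.sin(2.0 * np.pi * f_v * t)   # nodding (vertical sweep)
    traj = np.stack([az, el], axis=1)
    return traj, traj[::keep_every]              # full path and sub-sampled points
```

Changing the frequency ratio `f_h : f_v` controls how densely the pattern covers the field of view, which is the scan-time/resolution trade-off the abstract refers to.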
Point Pattern Estimators for Multi-Beam Lidar Scans
In this work, point pattern estimators are used to analyze the distribution of measurements from a multi-beam Lidar on a pitching platform. Multi-beam Lidars have high resolution in the horizontal plane but poor vertical resolution. Placing the Lidar on a pitching base improves this resolution, but causes the distribution of measurements to be highly irregular. In this work, these measurement distributions are treated as point patterns, and three estimators are used to quantify how measurements are spaced, which has implications for robotic detection of objects using Lidar sensors. These estimators are used to demonstrate how a pitching trajectory for the platform can be chosen to improve multiple performance criteria, such as increasing the likelihood of detecting an object or adjusting how closely measurements are spaced.
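The paper's three estimators are not specified in this abstract; as a generic stand-in, one of the simplest point pattern statistics is the mean nearest-neighbor spacing, sketched below.

```python
import numpy as np

def nn_spacing(points):
    """Mean nearest-neighbor spacing of a 2-D point pattern: a simple
    measure of how closely measurements are packed. For comparison,
    a completely random pattern of density rho has expected spacing
    0.5 / sqrt(rho)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distances; mask the zero self-distances on the diagonal.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()
```

Applied to simulated measurement patterns from different pitching trajectories, a statistic like this lets the trajectory be tuned toward a desired measurement spacing.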
- Award ID(s):
- 1658696
- PAR ID:
- 10443530
- Date Published:
- Journal Name:
- 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)
- Page Range / eLocation ID:
- 335 to 340
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Object detection plays a pivotal role in autonomous driving by enabling vehicles to perceive and comprehend their environment, thereby making informed decisions for safe navigation. Camera data provides rich visual context and object recognition, while LiDAR data offers precise distance measurements and 3D mapping. Multi-modal object detection models are gaining prominence in incorporating these data types, which is essential for the comprehensive perception and situational awareness needed in autonomous vehicles. Although graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) are promising hardware options for this application, the complex knowledge required to efficiently adapt and optimize multi-modal detection models for FPGAs presents a significant barrier to their utilization on this versatile and efficient platform. In this work, we evaluate the performance of camera and LiDAR-based detection models on GPU and FPGA hardware, aiming to provide a specialized understanding for translating multi-modal detection models to suit the unique architecture of heterogeneous hardware platforms in autonomous driving systems. We focus on critical metrics from both system and model performance aspects. Based on our quantitative implications, we propose foundational insights and guidance for the design of camera and LiDAR-based multi-modal detection models on diverse hardware platforms.
Continuous advancements in LiDAR technology have enabled compelling wind turbulence measurements within the atmospheric boundary layer with range gates shorter than 20 m and sampling frequency of the order of 10 Hz. However, estimates of the radial velocity from the back-scattered laser beam are inevitably affected by an averaging process within each range gate, generally modeled as a convolution between the actual velocity projected along the LiDAR line-of-sight and a weighting function representing the energy distribution of the laser pulse along the range gate. As a result, the spectral energy of the turbulent velocity fluctuations is damped within the inertial sub-range, with a corresponding reduction of the velocity variance, thus preventing full advantage being taken of the achieved spatio-temporal resolution of the LiDAR technology. In this article, we propose to correct this turbulent energy damping in the LiDAR measurements by reversing the effect of a low-pass filter, which can be estimated directly from the LiDAR measurements. LiDAR data acquired from three different field campaigns are analyzed to describe the proposed technique, investigate the variability of the filter parameters and, for one dataset, assess the procedure for spectral LiDAR correction against sonic anemometer data. It is found that the order of the low-pass filter used for modeling the energy damping on the LiDAR velocity measurements has negligible effects on the correction of the second-order statistics of the wind velocity. In contrast, its cutoff frequency plays a significant role in the spectral correction encompassing the smoothing effects connected with the LiDAR gate length.
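The correction step described above amounts to dividing the measured velocity spectrum by the squared magnitude of the fitted low-pass filter. The sketch below assumes a Butterworth-type magnitude response; the paper's exact filter form, order, and cutoff are fit from data and are free parameters here.

```python
import numpy as np

def correct_spectrum(freqs, S_meas, f_cut, order=1):
    """Reverse the range-gate averaging on a measured velocity spectrum
    S_meas(f) by dividing out the squared magnitude of an assumed
    Butterworth-type low-pass filter with cutoff f_cut and given order.
    (Illustrative filter model; the paper estimates the filter from data.)"""
    f = np.asarray(freqs, dtype=float)
    H2 = 1.0 / (1.0 + (f / f_cut) ** (2 * order))  # |H(f)|^2
    return np.asarray(S_meas, dtype=float) / H2
```

Because the correction grows without bound at high frequencies, in practice it would only be applied up to frequencies where the measured spectrum is above the noise floor.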
High-resolution vehicle trajectory data can be used to generate a wide range of performance measures and facilitate many smart mobility applications for traffic operations and management. In this paper, a Longitudinal Scanline LiDAR-Camera model is explored for trajectory extraction at urban arterial intersections. The proposed model can efficiently detect vehicle trajectories under the complex, noisy conditions (e.g., hanging cables, lane markings, crossing traffic) typical of an arterial intersection environment. Traces within video footage are then converted into trajectories in world coordinates by matching a video image with a 3D LiDAR (Light Detection and Ranging) model through key infrastructure points. Using 3D LiDAR data significantly improves the camera calibration process for real-world trajectory extraction. The pan-tilt-zoom effects of the traffic camera are handled automatically by a proposed motion estimation algorithm. The results demonstrate the potential of integrating longitudinal-scanline-based vehicle trajectory detection and the 3D LiDAR point cloud to provide lane-by-lane high-resolution trajectory data. The resulting system has the potential to become a low-cost but reliable measure for future smart mobility systems.
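The image-to-world matching through key infrastructure points can be illustrated with a planar homography fitted by the direct linear transform (DLT). This is a generic sketch of that mapping, not the paper's calibration pipeline; the point pairs and road-plane assumption are hypothetical.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Direct linear transform: fit the 3x3 homography mapping image
    pixels (u, v) to road-plane coordinates (x, y) from >= 4 point
    pairs, e.g. key infrastructure points surveyed in a LiDAR cloud."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    """Map an image point through H into world coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With pan-tilt-zoom cameras, a mapping like this must be re-estimated whenever the camera moves, which is the role of the motion estimation algorithm mentioned above.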
This paper develops a cost-effective vehicle detection and tracking system based on fusion of a 2-D LIDAR and a monocular camera to protect electric micromobility devices, especially e-scooters, by predicting the real-time danger of a car-scooter collision. The cost and size disadvantages of 3-D LIDAR sensors make them an unsuitable choice for micromobility devices. Therefore, a 2-D RPLIDAR Mapper sensor is used. Although low-cost, this sensor comes with major shortcomings such as a narrow vertical field of view and a low density of data points. Due to these factors, the sensor does not have a robust output in outdoor applications, and the measurements keep jumping and sliding on the vehicle surface. To improve the performance of the LIDAR, a single monocular camera is fused with the LIDAR data not only to detect vehicles, but also to separately detect the front and side of a target vehicle and to find its corner. It is shown that this corner detection method is more accurate than strategies based only on the LIDAR data. The corner measurements are used in a high-gain observer to estimate the location, velocity, and orientation of the target vehicle. The developed system is implemented on a Ninebot e-scooter platform, and multiple experiments are performed to evaluate the performance of the algorithm.
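The high-gain observer idea can be sketched for a single axis: estimate position and velocity of the tracked corner from noisy position measurements, with gains scaled by 1/eps and 1/eps^2 so that a smaller eps gives faster convergence at the cost of more noise amplification. The gains and eps below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def high_gain_observer(meas, dt, eps=0.05):
    """Forward-Euler high-gain observer for a double-integrator model.
    `meas` are position measurements of the tracked corner; returns the
    [position, velocity] estimate after each measurement.
    (Gain structure and eps are illustrative, not from the paper.)"""
    xhat = np.zeros(2)                      # [position, velocity]
    est = []
    for y in meas:
        e = y - xhat[0]                     # output estimation error
        xhat = xhat + dt * np.array([xhat[1] + (2.0 / eps) * e,
                                     (1.0 / eps ** 2) * e])
        est.append(xhat.copy())
    return np.array(est)
```

On a target moving at constant velocity, an observer of this form converges so that the velocity state settles near the true speed, which is what the collision-danger prediction would consume.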