Teleoperation is increasingly used in the operation of delivery robots and is beginning to be utilized for certain autonomous vehicle intervention applications. This paper addresses the challenges in teleoperation of an autonomous vehicle caused by latencies in wireless communication between the remote vehicle and the teleoperator station. Camera images and Lidar data are typically delayed during wireless transmission but are critical for properly displaying the remote vehicle's real-time road environment to the teleoperator. Data collected from experiments in this project show that a 0.5 second delay in the real-time display makes it extremely difficult for the teleoperator to control the remote vehicle. This problem is addressed in the paper by using a predictive display (PD) system which provides intermediate updates of the remote vehicle's environment while waiting for actual camera images. The predictive display utilizes estimated positions of the ego vehicle and of other vehicles on the road, computed using model-based extended Kalman filters. A crucial result presented in the paper is that the vehicle motion models need to be inertial rather than relative, so tracking of other vehicles requires accurate localization of the ego vehicle itself. An experimental study using 5 human teleoperators is conducted to compare teleoperation performance with and without predictive display. A 0.5 second time delay in camera images makes it impossible to control the vehicle to stay in its lane on curved roads, but the use of the developed predictive display system enables safe remote vehicle control with performance almost as accurate as in the delay-free case.
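The prediction half of such a model-based filter can be sketched as follows. This is a minimal illustration assuming a planar constant-turn-rate, constant-speed motion model expressed in an inertial frame; the state layout and function name are illustrative, not taken from the paper:

```python
import numpy as np

def predict_pose(state, dt, n_steps=1):
    """Propagate an inertial vehicle state [x, y, heading, speed, yaw_rate]
    forward in time under a constant-turn-rate, constant-speed model.
    This is only the prediction step of an EKF-style filter; the full
    filter would also correct with sensor measurements."""
    x, y, psi, v, omega = state
    for _ in range(n_steps):
        x += v * np.cos(psi) * dt   # advance position in the inertial frame
        y += v * np.sin(psi) * dt
        psi += omega * dt           # advance heading by the yaw rate
    return np.array([x, y, psi, v, omega])
```

Chaining several such small steps over the communication delay is what lets a predictive display place vehicles ahead of the last received image.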
Predictive Display for Teleoperation Based on Vector Fields Using Lidar-Camera Fusion
Abstract: Teleoperation can enable human intervention to help handle instances of failure in autonomy, thus allowing for much safer deployment of autonomous vehicle technology. Successful teleoperation requires recreating the environment around the remote vehicle using camera data received over wireless communication channels. This paper develops a new predictive display system to tackle the significant time delays encountered in receiving camera data over wireless networks. First, a new high-gain observer is developed for estimating the position and orientation of the ego vehicle. The novel observer is shown to perform accurate state estimation using only GNSS and gyroscope sensor readings. A vector field method which fuses the delayed camera and Lidar data is then presented. This method uses sparse 3D points obtained from Lidar and transforms them using the state estimates from the high-gain observer to generate a sparse vector field for the camera image. Polynomial-based interpolation is then performed to obtain the vector field for the complete image, which is then remapped to synthesize images for accurate predictive display. The method is evaluated on real-world experimental data from the nuScenes and KITTI datasets. The performance of the high-gain observer is also evaluated and compared with that of the extended Kalman filter (EKF). The synthesized images produced by the vector-field-based predictive display are compared with ground-truth images using various image metrics and offer vastly improved performance compared to delayed images.
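The sparse-to-dense vector field construction described above can be sketched in a few lines of numpy. The pinhole intrinsics, the planar ego-pose camera model, and the single quadratic interpolant below are simplifying assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def project(points_xyz, pose, f=800.0, cx=320.0, cy=240.0):
    # Pinhole projection of world-frame Lidar points for a planar ego
    # pose (x, y, yaw); the intrinsics are placeholder values.
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    rel = (points_xyz[:, :2] - np.array([x, y])) @ np.array([[c, -s], [s, c]])
    forward = rel[:, 0]                       # depth along the camera axis
    u = cx - f * rel[:, 1] / forward          # lateral offset -> pixel column
    v = cy - f * points_xyz[:, 2] / forward   # height -> pixel row
    return np.stack([u, v], axis=1)

def dense_vector_field(points_xyz, pose_delayed, pose_predicted, shape=(480, 640)):
    # Sparse displacement of each projected Lidar point between the
    # delayed view and the predicted view.
    uv_old = project(points_xyz, pose_delayed)
    flow = project(points_xyz, pose_predicted) - uv_old
    # Fit a quadratic polynomial in (u, v) to each flow component and
    # evaluate it over the full pixel grid -- a simplified stand-in for
    # the paper's polynomial interpolation step.
    u, v = uv_old[:, 0], uv_old[:, 1]
    A = np.stack([np.ones_like(u), u, v, u * u, u * v, v * v], axis=1)
    coef, *_ = np.linalg.lstsq(A, flow, rcond=None)
    vv, uu = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    G = np.stack([np.ones_like(uu), uu, vv, uu * uu, uu * vv, vv * vv], axis=-1)
    return (G.reshape(-1, 6) @ coef).reshape(shape[0], shape[1], 2)
```

The resulting per-pixel field can then drive an image remap (e.g., `cv2.remap`) to synthesize the predicted frame from the delayed one.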
- Award ID(s): 2321531
- PAR ID: 10632378
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: International Journal of Computer Vision
- ISSN: 0920-5691
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper develops a cost-effective vehicle detection and tracking system based on fusion of a 2-D LIDAR and a monocular camera to protect electric micromobility devices, especially e-scooters, by predicting the real-time danger of a car-scooter collision. The cost and size disadvantages of 3-D LIDAR sensors make them an unsuitable choice for micromobility devices. Therefore, a 2-D RPLIDAR Mapper sensor is used. Although low-cost, this sensor comes with major shortcomings such as a narrow vertical field of view and a low density of data points. Due to these factors, the sensor does not produce robust output in outdoor applications, and the measurements keep jumping and sliding on the vehicle surface. To improve the performance of the LIDAR, a single monocular camera is fused with the LIDAR data not only to detect vehicles, but also to separately detect the front and side of a target vehicle and to find its corner. It is shown that this corner detection method is more accurate than strategies based only on the LIDAR data. The corner measurements are used in a high-gain observer to estimate the location, velocity, and orientation of the target vehicle. The developed system is implemented on a Ninebot e-scooter platform, and multiple experiments are performed to evaluate the performance of the algorithm.
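The high-gain observer structure mentioned in the abstract above can be illustrated in one dimension: position and velocity are estimated from noisy position measurements, with gains that scale as 1/ε and 1/ε². The specific gains, the Euler discretization, and the 1-D simplification are assumptions for illustration only:

```python
import numpy as np

def high_gain_observer(measurements, dt, eps=0.05, a1=2.0, a2=1.0):
    """Estimate position and velocity of a tracked point along one axis
    from its position measurements. The correction gains a1/eps and
    a2/eps**2 are the classic high-gain structure: a smaller eps gives
    faster convergence at the cost of amplifying measurement noise."""
    p_hat, v_hat = float(measurements[0]), 0.0
    estimates = []
    for y in measurements:
        e = y - p_hat                       # innovation
        p_hat += dt * (v_hat + (a1 / eps) * e)
        v_hat += dt * (a2 / eps ** 2) * e   # velocity corrected at gain 1/eps^2
        estimates.append((p_hat, v_hat))
    return np.array(estimates)
```

For a target moving at constant speed, both estimates converge within a few multiples of the time constant ε.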
- This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data. To deal with the challenges associated with autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase is performed by converting LiDAR streams to computationally tractable depth images; a deep neural network is then developed to identify pedestrian candidates in both RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing multi-modal sensor information using the Kalman filter. The tracking phase combines Kalman filter prediction with an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves significant performance improvement over a baseline method that relies solely on image-based pedestrian detection.
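The Kalman filter portion of such a detect-and-track loop can be sketched with a constant-velocity model over the pedestrian's image-plane position; detections and optical-flow displacements would both enter as position measurements. The class name, noise levels, and time step below are illustrative, not the paper's tuned values:

```python
import numpy as np

class PedestrianTrack:
    """Constant-velocity Kalman filter over an image-plane position.
    State: [u, v, du/dt, dv/dt]."""
    def __init__(self, xy, dt=0.1, q=1.0, r=2.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0            # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt     # position integrates velocity
        self.H = np.eye(2, 4)                # we only measure position
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # z: measured position from a detection or an optical-flow step
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

When a frame has no confident detection, `predict()` alone carries the track forward, which is what makes the combined prediction/optical-flow scheme robust to missed detections.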
- High-resolution vehicle trajectory data can be used to generate a wide range of performance measures and facilitate many smart mobility applications for traffic operations and management. In this paper, a longitudinal-scanline LiDAR-camera model is explored for trajectory extraction at urban arterial intersections. The proposed model can efficiently detect vehicle trajectories under the complex, noisy conditions (e.g., hanging cables, lane markings, crossing traffic) typical of an arterial intersection environment. Traces within video footage are then converted into trajectories in world coordinates by matching a video image with a 3D LiDAR (Light Detection and Ranging) model through key infrastructure points. Using 3D LiDAR data significantly improves the camera calibration process for real-world trajectory extraction. The pan-tilt-zoom effects of the traffic camera are handled automatically by a proposed motion estimation algorithm. The results demonstrate the potential of integrating longitudinal-scanline-based vehicle trajectory detection with the 3D LiDAR point cloud to provide lane-by-lane, high-resolution trajectory data. The resulting system has the potential to become a low-cost but reliable measure for future smart mobility systems.
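Matching a video image to world coordinates through key infrastructure points amounts to fitting a homography between the image plane and the road surface. A minimal direct-linear-transform (DLT) sketch, assuming the matched points lie on the ground plane (a simplification of the full LiDAR-based calibration), looks like this:

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    # Direct linear transform: solve for the 3x3 homography mapping image
    # pixels to ground-plane coordinates from >= 4 matched key points
    # (e.g., curb corners identified in both the video frame and the
    # 3-D LiDAR model). Each correspondence contributes two equations.
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the right null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.array(A, float))
    return Vt[-1].reshape(3, 3)

def to_world(H, uv):
    # Map one pixel into world coordinates (homogeneous normalization).
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]
```

With the mapping in hand, every trace detected on the longitudinal scanline can be converted into a world-frame trajectory point.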
- LiDAR-OSM-Based Vehicle Localization in GPS-Denied Environments by Using Constrained Particle Filter: Cross-modal vehicle localization is an important task for automated driving systems. This research proposes a novel approach based on LiDAR point clouds and OpenStreetMap (OSM) data via a constrained particle filter, which significantly improves vehicle localization accuracy. The OSM modality provides not only a platform to generate simulated point cloud images, but also geometrical constraints (e.g., roads) that improve the particle filter's final result. The proposed approach is deterministic, with no learning component or need for labelled data. Evaluated on the KITTI dataset, it achieves accurate vehicle pose tracking with a position error of less than 3 m when considering the mean error across all sequences. This method shows state-of-the-art accuracy compared with existing methods based on OSM or satellite maps.
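One step of such a road-constrained particle filter might look like the following sketch. The motion noise, the likelihood function, and the multinomial resampling are placeholder assumptions standing in for the paper's LiDAR-vs-map matching score and tuned design:

```python
import numpy as np

def constrained_pf_step(particles, weights, motion, on_road, likelihood, rng):
    """One step of a road-constrained particle filter over planar (x, y)
    positions. `on_road` is a stand-in for the OSM road-polygon lookup and
    `likelihood` for the LiDAR-vs-simulated-map matching score."""
    n = len(particles)
    # Propagate every particle with the odometry increment plus noise.
    particles = particles + motion + rng.normal(0.0, 0.5, particles.shape)
    # Reweight by the map-matching likelihood, then apply the hard OSM
    # geometric constraint: any particle off the road gets zero weight.
    w = weights * likelihood(particles)
    w = np.where(on_road(particles), w, 0.0)
    w = w / w.sum()
    # Multinomial resampling; off-road particles can never be selected.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx], np.full(n, 1.0 / n)
```

Because off-road particles are assigned zero weight before resampling, the surviving particle set always respects the road geometry, which is the mechanism behind the reported accuracy gain.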
