Abstract
Advances in deep learning have revolutionized cyber-physical applications, including the development of autonomous vehicles. However, real-world collisions involving autonomous control of vehicles have raised significant safety concerns regarding the use of deep neural networks (DNNs) in safety-critical tasks, particularly perception. The inherent unverifiability of DNNs poses a key challenge to ensuring their safe and reliable operation. In this work, we propose perception simplex, a fault-tolerant application architecture designed for obstacle detection and collision avoidance. We analyze an existing LiDAR-based classical obstacle detection algorithm to establish strict bounds on its capabilities and limitations. Such analysis and verification have not yet been possible for deep learning-based perception systems. By employing verifiable obstacle detection algorithms, perception simplex identifies obstacle existence detection faults in the output of unverifiable DNN-based object detectors. When faults with potential collision risks are detected, appropriate corrective actions are initiated. Through extensive analysis and software-in-the-loop simulations, we demonstrate that perception simplex provides deterministic fault tolerance against obstacle existence detection faults, establishing a robust safety guarantee.
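The cross-checking step at the core of such an architecture can be illustrated with a minimal sketch: the verified detector's output is treated as authoritative for obstacle existence, and any obstacle it reports inside the ego vehicle's stopping zone that the DNN detector missed triggers a corrective, fail-safe action. All names, geometry, and thresholds below are hypothetical; this is not the paper's implementation.

```python
# Minimal sketch of a simplex-style existence-fault check (hypothetical names and thresholds).
from dataclasses import dataclass
import math

@dataclass
class Obstacle:
    x: float  # meters, ego frame (forward)
    y: float  # meters, ego frame (left)

def inside_stopping_zone(obs: Obstacle, stop_dist: float, half_width: float) -> bool:
    """Obstacle lies within the rectangular region the vehicle needs in order to stop."""
    return 0.0 <= obs.x <= stop_dist and abs(obs.y) <= half_width

def matched(obs: Obstacle, detections, tol: float = 1.0) -> bool:
    """True if any DNN detection is within `tol` meters of the verified obstacle."""
    return any(math.hypot(obs.x - d.x, obs.y - d.y) <= tol for d in detections)

def existence_fault(verified_obstacles, dnn_detections, stop_dist=30.0, half_width=1.5) -> bool:
    """Fault with collision risk: a verified obstacle in the stopping zone missed by the DNN."""
    return any(
        inside_stopping_zone(o, stop_dist, half_width) and not matched(o, dnn_detections)
        for o in verified_obstacles
    )

def select_action(verified_obstacles, dnn_detections, nominal_action):
    """Fall back to a safe action when an existence fault is flagged."""
    if existence_fault(verified_obstacles, dnn_detections):
        return "EMERGENCY_BRAKE"   # corrective action
    return nominal_action          # otherwise trust the high-performance pipeline

# Example: the verified detector sees an obstacle at 12 m that the DNN missed.
verified = [Obstacle(x=12.0, y=0.2)]
print(select_action(verified, [], nominal_action="FOLLOW_PLAN"))  # EMERGENCY_BRAKE
```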
Verifiable Obstacle Detection
Perception of obstacles remains a critical safety concern for autonomous vehicles. Real-world incidents have shown that the autonomy faults leading to fatal collisions originate in obstacle existence detection. Open-source autonomous driving implementations show a perception pipeline built from complex, interdependent Deep Neural Networks. These networks are not fully verifiable, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR-based classical obstacle detection algorithm. We establish strict bounds on the capabilities of this obstacle detection algorithm. Given safety standards, such bounds allow for determining LiDAR sensor properties that would reliably satisfy the standards. Such analysis has so far been unattainable for neural-network-based perception systems. We provide a rigorous analysis of the obstacle detection system.
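As an illustration of the kind of bound such an analysis yields, the sketch below estimates how many LiDAR beams would strike an obstacle of a given size at a given range, based on the sensor's angular resolutions, and checks the count against a required minimum. The formula and all thresholds are illustrative assumptions, not the paper's actual derivation.

```python
import math

def beams_on_obstacle(width_m, height_m, range_m, h_res_deg, v_res_deg):
    """Approximate number of LiDAR returns expected from a box-shaped obstacle.

    Assumes the obstacle faces the sensor and ignores occlusion, beam divergence,
    and reflectivity; purely an illustrative geometric estimate.
    """
    h_extent_deg = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    v_extent_deg = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    return math.floor(h_extent_deg / h_res_deg) * math.floor(v_extent_deg / v_res_deg)

def sensor_satisfies_requirement(h_res_deg, v_res_deg, min_returns=5,
                                 obstacle=(0.3, 1.0), max_range_m=60.0):
    """Check that the sensor still yields `min_returns` on the smallest obstacle of
    interest (width, height in meters) at the maximum range at which a safety
    standard would require detection. All numbers here are hypothetical."""
    w, h = obstacle
    return beams_on_obstacle(w, h, max_range_m, h_res_deg, v_res_deg) >= min_returns

# Example: a 0.2° x 0.4° resolution sensor against a 0.3 m x 1.0 m obstacle at 60 m.
print(sensor_satisfies_requirement(h_res_deg=0.2, v_res_deg=0.4))
```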
- Award ID(s): 1815891
- PAR ID: 10394076
- Date Published:
- Journal Name: Verifiable Obstacle Detection
- Page Range / eLocation ID: 61 to 72
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Integrating multimodal data such as RGB and LiDAR from multiple views significantly increases computational and communication demands, which can be challenging for resource-constrained autonomous agents that must meet the time-critical deadlines of mission-critical applications. To address this challenge, we propose CoOpTex, a collaborative task execution framework designed for cooperative perception in distributed autonomous systems (DAS). CoOpTex's contribution is twofold: (a) CoOpTex fuses multiview RGB images to create a panoramic camera view for 2D object detection and utilizes 360° LiDAR for 3D object detection, improving accuracy with a lightweight Graph Neural Network (GNN) that integrates object coordinates from both perspectives; (b) to optimize task execution and meet deadlines, CoOpTex dynamically offloads computationally intensive image-stitching tasks to auxiliary devices when available and adjusts the capture rate for RGB frames based on device mobility and processing capabilities. We implement CoOpTex in real time on static and mobile heterogeneous autonomous agents, reducing deadline violations by 100% while improving frame rates for 2D detection by 2.2 times in stationary and 2 times in mobile conditions, demonstrating its effectiveness in enabling real-time cooperative perception.
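A hedged sketch of the kind of deadline-driven offloading decision described above: estimate local and remote completion times for the stitching task and offload only when the remote path is both feasible and faster. The latency model, function names, and mobility penalty are hypothetical, not CoOpTex's actual scheduler.

```python
# Illustrative deadline-aware offloading decision (hypothetical latency model).
def offload_stitching(local_ms, remote_compute_ms, uplink_ms, downlink_ms,
                      deadline_ms, auxiliary_available):
    """Return True if the stitching task should be sent to an auxiliary device."""
    if not auxiliary_available:
        return False
    remote_total = uplink_ms + remote_compute_ms + downlink_ms
    if remote_total > deadline_ms:        # offloading would miss the deadline
        return False
    return remote_total < local_ms        # offload only if it is actually faster

def adjust_frame_rate(base_fps, processing_ms_per_frame, moving):
    """Cap capture rate by what the device can process; back off further when mobile."""
    sustainable_fps = 1000.0 / processing_ms_per_frame
    fps = min(base_fps, sustainable_fps)
    return fps * 0.5 if moving else fps   # illustrative mobility penalty

# Example: 80 ms locally vs. 20 + 25 + 5 ms remotely, under a 60 ms deadline.
print(offload_stitching(80, 25, 20, 5, 60, auxiliary_available=True))            # True
print(adjust_frame_rate(base_fps=30, processing_ms_per_frame=50, moving=True))   # 10.0
```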
In recent years, LiDAR sensors have become pervasive in the solutions to localization tasks for autonomous systems. One key step in using LiDAR data for localization is the alignment of two LiDAR scans taken from different poses, a process called scan-matching or point cloud registration. Most existing algorithms for this problem are heuristic in nature and local, meaning they may not produce accurate results under poor initialization. Moreover, existing methods give no guarantee on the quality of their output, which can be detrimental for safety-critical tasks. In this paper, we analyze a simple algorithm for point cloud registration, termed PASTA. This algorithm is global and does not rely on point-to-point correspondences, which are typically absent in LiDAR data. Moreover, and to the best of our knowledge, we offer the first point cloud registration algorithm with provable error bounds. Finally, we illustrate the proposed algorithm and error bounds in simulation on a simple trajectory tracking task.
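To make the idea of correspondence-free registration concrete, here is a generic moment-based alignment sketch: translation from centroids and rotation from the principal axes of each cloud's covariance. This is a simplified illustration of the class of methods the paragraph describes, not PASTA's actual formulation, and it does not resolve the sign/ordering ambiguity of eigenvectors that a real implementation must handle.

```python
import numpy as np

def principal_axes(points):
    """Eigenvectors of the covariance, ordered by decreasing eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)   # ascending order
    return eigvecs[:, ::-1]            # columns: principal axes, largest first

def align_moments(source, target):
    """Correspondence-free rigid alignment of two (N, 3) point clouds.

    Rotation maps the source's principal axes onto the target's; translation
    aligns the centroids. Axis sign/order ambiguity is not handled here.
    """
    P_s, P_t = principal_axes(source), principal_axes(target)
    R = P_t @ P_s.T
    if np.linalg.det(R) < 0:                      # keep a proper rotation
        R = P_t @ np.diag([1, 1, -1]) @ P_s.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Example: apply a known 30° rotation and a translation, then estimate them back.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3)) * np.array([3.0, 2.0, 1.0])   # anisotropic cloud
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
tgt = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = align_moments(src, tgt)
print(np.round(R_est, 3), np.round(t_est, 3))
# Up to the axis-sign ambiguity noted above, R_est should match the 30° rotation.
```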
Martín-Sacristán, David; Garcia-Roger, David (Ed.) With the recent 5G communication technology deployment, Cellular Vehicle-to-Everything (C-V2X) significantly enhances road safety by enabling real-time exchange of critical traffic information among vehicles, pedestrians, infrastructure, and networks. However, further research is required to address real-time application latency and communication reliability challenges. This paper explores integrating cutting-edge C-V2X technology with environmental perception systems to enhance safety at intersections and crosswalks. We propose a multi-module architecture combining C-V2X with state-of-the-art perception technologies, GPS mapping methods, and a client-server module to develop a cooperative perception system for collision avoidance. The proposed system includes the following: (1) a hardware setup for C-V2X communication; (2) an advanced object detection module leveraging Deep Neural Networks (DNNs); (3) a client-server-based cooperative object detection framework to overcome the computational limitations of edge computing devices; and (4) a module for mapping GPS coordinates of detected objects, enabling accurate and actionable GPS data for collision avoidance, even for detected objects not equipped with C-V2X devices. The proposed system was evaluated through real-time experiments at the GMMRC testing track at Kettering University. Results demonstrate that the proposed system enhances safety by broadcasting critical obstacle information with an average latency of 9.24 milliseconds, allowing for rapid situational awareness. Furthermore, the proposed system accurately provides GPS coordinates for detected obstacles, which is essential for effective collision avoidance. The technology integration in the proposed system offers high data rates, low latency, and reliable communication, key features that make it highly suitable for C-V2X-based applications.
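The GPS-mapping step lends itself to a short worked example: given the ego vehicle's latitude/longitude and heading, an obstacle detected at a known forward/lateral offset can be mapped to absolute GPS coordinates with a local flat-earth (equirectangular) approximation. The function below is an illustrative sketch under that assumption, not the paper's implementation.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius; adequate for short offsets

def obstacle_to_gps(ego_lat_deg, ego_lon_deg, heading_deg, forward_m, right_m):
    """Map an obstacle offset (meters, ego frame) to latitude/longitude.

    heading_deg is the ego heading clockwise from true north. Uses a flat-earth
    approximation, valid for offsets of tens of meters.
    """
    h = math.radians(heading_deg)
    # Rotate the ego-frame offset into east/north components.
    east = forward_m * math.sin(h) + right_m * math.cos(h)
    north = forward_m * math.cos(h) - right_m * math.sin(h)
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(ego_lat_deg))))
    return ego_lat_deg + dlat, ego_lon_deg + dlon

# Example: obstacle 20 m ahead and 2 m to the right of a vehicle heading due east.
print(obstacle_to_gps(43.0, -83.7, heading_deg=90.0, forward_m=20.0, right_m=2.0))
```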
3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies, a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that it is not the quality of the data but its representation that accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert image-based depth maps to pseudo-LiDAR representations, essentially mimicking the LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing state of the art in image-based performance, raising the detection accuracy of objects within the 30 m range from the previous state of the art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo-image-based approaches.
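The depth-map-to-pseudo-LiDAR conversion is a standard pinhole back-projection; the sketch below shows the idea with assumed camera intrinsics (fx, fy, cx, cy). It follows the commonly published formulation rather than any specific codebase.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map (meters) into an (H*W, 3) point cloud.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Points are returned in the camera frame; a real pipeline would also rotate
    them into the LiDAR/ego frame and drop points above a height threshold.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only valid (positive-depth) pixels

# Example with a synthetic 4x4 depth map, a KITTI-like focal length, and a toy principal point.
depth = np.full((4, 4), 10.0)
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```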