Support for connected and autonomous vehicles (CAVs) is a major use case of 5G networks. Due to their large form factors, CAVs can be equipped with multiple radio antennas, cameras, LiDAR, and other sensors. In other words, they are "giant" mobile integrated communications and sensing devices. The data collected can not only facilitate edge-assisted autonomous driving, but also enable intelligent radio resource allocation by cellular networks. In this paper, we conduct an initial study to assess the feasibility of delivering multi-modal sensory data collected by vehicles over emerging commercial 5G networks. We carried out an "in-the-wild" drive test and data collection campaign between Minneapolis and Chicago using a vehicle equipped with a 360° camera, a LiDAR device, multiple smartphones, and a professional 5G network measurement tool. Using the collected multi-modal data, we conduct trace-driven experiments in a local streaming testbed to analyze the requirements and performance of streaming multi-modal sensor data over existing 4G/5G networks. We reveal several notable findings and point out future research directions.
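As a rough illustration of the streaming requirements examined here, the sketch below estimates the uplink bitrate such a multi-modal sensor stream might demand. The per-sensor rates (LiDAR point rate, bytes per point, encoded 360° video bitrate) are illustrative assumptions, not measurements reported in the paper.

```python
# Back-of-the-envelope uplink bitrate estimate for a CAV multi-modal sensor
# stream. All sensor rates below are assumed, illustrative values.

LIDAR_POINTS_PER_SEC = 600_000   # assumed: typical spinning LiDAR at 10-20 Hz
BYTES_PER_POINT      = 16        # assumed: x, y, z, intensity as four floats
CAMERA_360_MBPS      = 25.0      # assumed: encoded 4K 360-degree video

lidar_mbps = LIDAR_POINTS_PER_SEC * BYTES_PER_POINT * 8 / 1e6
total_mbps = lidar_mbps + CAMERA_360_MBPS

print(f"LiDAR (raw, uncompressed): {lidar_mbps:.1f} Mbps")
print(f"360-degree camera (encoded): {CAMERA_360_MBPS:.1f} Mbps")
print(f"Total uplink demand: {total_mbps:.1f} Mbps")
```

Even with these conservative assumptions the uplink demand approaches 100 Mbps before point-cloud compression, which is why sustained 5G uplink throughput matters for this use case.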
The Architectural Implications of Multi-modal Detection Models for Autonomous Driving Systems
Object detection plays a pivotal role in autonomous driving by enabling vehicles to perceive and comprehend their environment, thereby making informed decisions for safe navigation. Camera data provides rich visual context and object recognition, while LiDAR data offers precise distance measurements and 3D mapping. Multi-modal object detection models, which incorporate both data types, are gaining prominence because they provide the comprehensive perception and situational awareness needed in autonomous vehicles. Although graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) are promising hardware options for this application, the complex knowledge required to efficiently adapt and optimize multi-modal detection models for FPGAs presents a significant barrier to their utilization on this versatile and efficient platform. In this work, we evaluate the performance of camera- and LiDAR-based detection models on GPU and FPGA hardware, aiming to provide a specialized understanding for translating multi-modal detection models to the unique architectures of heterogeneous hardware platforms in autonomous driving systems. We focus on critical metrics from both system and model performance perspectives. Based on our quantitative results, we propose foundational insights and guidance for the design of camera- and LiDAR-based multi-modal detection models on diverse hardware platforms.
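To give a concrete sense of the system-level metrics discussed here, the sketch below times inference of a camera-branch detector on a GPU (falling back to CPU). The model choice (a stock torchvision Faster R-CNN with random weights) and input resolution are placeholders, not the models evaluated in this work; it is a minimal measurement harness under those assumptions, not the paper's benchmarking setup.

```python
# Minimal latency/throughput measurement for a camera-branch detector.
# Model and input size are placeholders (assumptions), not the paper's models.
import time
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
# weights=None gives an untrained network; requires torchvision >= 0.13
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval().to(device)
dummy = [torch.rand(3, 720, 1280, device=device)]   # one synthetic camera frame

with torch.no_grad():
    for _ in range(5):                               # warm-up iterations
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / runs:.1f} ms, "
      f"throughput: {runs / elapsed:.1f} frames/s")
```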
- Award ID(s):
- 2245729
- PAR ID:
- 10510976
- Publisher / Repository:
- 2024 IEEE International Conference on Mobility, Operations, Services and Technologies (MOST)
- Date Published:
- Subject(s) / Keyword(s):
- autonomous driving; multi-modality; heterogeneous hardware; object detection; GPU; FPGA
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Multi-sensor fusion has been widely used by autonomous vehicles (AVs) to integrate the perception results from different sensing modalities, including LiDAR, camera, and radar. Despite the rapid development of multi-sensor fusion systems in autonomous driving, their vulnerability to malicious attacks has not been well studied. Although some prior works have studied attacks against the perception systems of AVs, they consider only a single sensing modality or a camera-LiDAR fusion system and therefore cannot attack a sensor fusion system based on LiDAR, camera, and radar. To fill this research gap, in this paper we present the first study on the vulnerability of multi-sensor fusion systems that employ LiDAR, camera, and radar. Specifically, we propose a novel attack method that can simultaneously attack all three types of sensing modalities using a single type of adversarial object. The adversarial object can be easily fabricated at low cost, and the proposed attack can be easily performed with high stealthiness and flexibility in practice. Extensive experiments based on a real-world AV testbed show that the proposed attack can continuously hide a target vehicle from the perception system of a victim AV using only two small adversarial objects.
-
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data. To deal with the challenges associated with autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase is performed by converting LiDAR streams to computationally tractable depth images, and then a deep neural network is developed to identify pedestrian candidates in both RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing the multi-modal sensor information using a Kalman filter. The tracking phase combines Kalman filter prediction with an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves significant performance improvement over a baseline method that relies solely on image-based pedestrian detection. (A minimal Kalman-filter tracking step in this spirit is sketched after this list.)
-
The operational safety of Automated Driving System (ADS)-operated vehicles (AVs) is a rising concern as AVs are deployed both as test prototypes and in commercial service. The robustness of safety evaluation systems is essential in determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection and Ranging (LIDAR) system to an onboard vehicle-based LIDAR system, tested at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities are located in the infrastructure and onboard the test vehicles, and include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms are created for the LIDAR and cameras. In total, there are 26 different scenarios of the test vehicles navigating the testbed intersection; in this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking to estimate the velocity and position of the test vehicles, and these values are used to compute OSA metrics. The comparison of the two systems considers the localization and tracking errors in calculating the position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for velocity comparison and the drone tracking results used for OSA metric comparison. (A simple car-following surrogate-safety calculation is sketched after this list.)
-
With the rapid development of technology and the proliferation of uncrewed aerial systems (UAS), there is an immediate need for security solutions. Toward this end, we propose the use of a multi-robot system for autonomous and cooperative counter-UAS missions. In this paper, we present the design of the hardware and software components of different complementary robotic platforms: a mobile uncrewed ground vehicle (UGV) equipped with a LiDAR sensor, an uncrewed aerial vehicle (UAV) with a gimbal-mounted stereo camera for air-to-air inspections, and a UAV with a capture mechanism equipped with radars and a camera. Our proposed system features 1) scalability to larger areas due to the distributed approach and online processing, 2) long-term cooperative missions, and 3) complementary multi-modal perception for the detection of multirotor UAVs. In field experiments, we demonstrate the integration of all subsystems in accomplishing a counter-UAS task within an unstructured environment. The obtained results confirm the promising direction of using multi-robot and multi-modal systems for C-UAS.
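A minimal sketch of the constant-velocity Kalman-filter step that the pedestrian detection-and-tracking work above combines with optical flow. The state layout, time step, and noise covariances are illustrative assumptions, not values from that paper.

```python
# Constant-velocity Kalman filter over state [x, y, vx, vy]; the measurement
# is a fused (camera + LiDAR) pedestrian position. Noise values are assumed.
import numpy as np

dt = 0.1                                   # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is observed
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 0.25                       # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a fused position measurement z = [px, py]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)              # initial track
x, P = kf_step(x, P, np.array([2.0, 1.5])) # example fused detection
print("estimated position/velocity:", x)
```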
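For the car-following scenarios in the operational-safety comparison above, surrogate metrics such as time headway and time-to-collision can be computed directly from tracked positions and velocities. The sketch below uses these two common surrogates as stand-ins; the IAM's OSA metric set is broader than this, and the numbers are assumed examples.

```python
# Car-following surrogate safety metrics from tracked positions/velocities.
# Time headway and time-to-collision are common stand-ins, not the full OSA set.

def headway_and_ttc(lead_pos, follow_pos, lead_speed, follow_speed):
    """Positions in metres along the lane, speeds in m/s (follower behind leader)."""
    gap = lead_pos - follow_pos                       # inter-vehicle gap (m)
    headway = gap / follow_speed if follow_speed > 0 else float("inf")
    closing = follow_speed - lead_speed               # positive when closing in
    ttc = gap / closing if closing > 0 else float("inf")
    return headway, ttc

# Example with assumed values: follower 18 m behind, closing at 2 m/s.
h, ttc = headway_and_ttc(lead_pos=50.0, follow_pos=32.0,
                         lead_speed=12.0, follow_speed=14.0)
print(f"time headway: {h:.2f} s, time-to-collision: {ttc:.1f} s")
```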