Title: Establishing Trust in Vehicle-to-Vehicle Coordination: A Sensor Fusion Approach
Autonomous vehicles (AVs) use diverse sensors to understand their surroundings as they continually make safety-critical decisions. However, establishing trust with other AVs is a key prerequisite, because safety-critical decisions cannot be based on data shared by untrusted sources. Existing protocols require an infrastructure network connection and a third-party root of trust to establish a secure channel, neither of which is always available. In this paper, we propose a sensor-fusion approach to mobile trust establishment that combines GPS and visual data. The combined data forms evidence that one vehicle is near another, which is a strong indication that it is not a remote adversary and hence is trustworthy. Our preliminary experiments show that our sensor-fusion approach achieves above 80% successful pairing of two legitimate vehicles observing the same object within 5 meters of error. Based on these preliminary results, we anticipate that a refined approach can support fuzzy trust establishment, enabling better collaboration between nearby AVs.
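As a rough illustration of the pairing test described in the abstract, the sketch below checks whether two vehicles' fused GPS-plus-vision estimates of a commonly observed object agree within the reported 5-meter error bound. The local east-north frame, the function names, and the example numbers are assumptions for illustration, not the paper's actual implementation.

```python
import math

# Hypothetical pairing check: two vehicles each fuse GPS and visual data to
# estimate the position of an object they both observe, then compare notes.
# The local east-north frame and all names here are illustrative assumptions.

PAIRING_THRESHOLD_M = 5.0  # matches the ~5 m error bound reported in the abstract

def observed_object_position(vehicle_gps, object_offset):
    """Estimate an observed object's position in a local east-north frame (meters).

    vehicle_gps: (east, north) position of the observer from GPS, in meters.
    object_offset: (east, north) offset of the object relative to the observer,
                   as recovered from the camera (e.g., via depth estimation).
    """
    return (vehicle_gps[0] + object_offset[0], vehicle_gps[1] + object_offset[1])

def can_pair(obs_a, obs_b, threshold=PAIRING_THRESHOLD_M):
    """Accept the pairing if both estimates of the shared object agree."""
    distance = math.hypot(obs_a[0] - obs_b[0], obs_a[1] - obs_b[1])
    return distance <= threshold

# Example: vehicle A and vehicle B both observe the same parked car.
pos_a = observed_object_position((100.0, 200.0), (3.0, 10.0))
pos_b = observed_object_position((95.0, 210.0), (9.5, 1.0))
print(can_pair(pos_a, pos_b))  # True: the two estimates differ by ~1.8 m
```

A remote adversary cannot produce a consistent observation of the shared object, so its claimed position estimate would fail this distance check.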
Award ID(s):
2107020
PAR ID:
10338336
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 2nd Workshop on Data-Driven and Intelligent Cyber-Physical Systems for Smart Cities Workshop (DI-CPS)
Page Range / eLocation ID:
7 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that use both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open-source sensor fusion model architecture to adversarial attacks, and we challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that, despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks, including disappearance, universal patch, and spoofing attacks. After identifying the underlying reason, we explore some potential defenses and provide recommendations for improved sensor fusion models.
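The abstract does not detail how the image-based attacks are constructed. As a minimal illustration of this attack family, the sketch below implements the standard fast gradient sign method (FGSM) perturbation in PyTorch; the disappearance, universal patch, and spoofing attacks studied in the paper are more elaborate variants of the same gradient-based idea.

```python
import torch

# Illustrative only: a single-step FGSM perturbation, the simplest member of
# the image-based adversarial attack family. The paper's actual attacks
# (disappearance, universal patch, spoofing) are not specified here.

def fgsm_perturb(model, image, loss_fn, target, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (one gradient step)."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), target)
    loss.backward()
    # Step in the direction that increases the loss, clipped to valid pixels.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```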
  2. The operational safety of Automated Driving System (ADS)-Operated Vehicles (AVs) is a rising concern as AVs are deployed both as prototypes under test and in commercial service. Robust safety evaluation systems are essential for determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work of the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection And Ranging (LIDAR) system to an onboard, vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities, located in the infrastructure and onboard the test vehicles, include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms were created for the LIDAR sensors and cameras. In total, 26 different scenarios of the test vehicles navigating the testbed intersection were recorded; in this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard, vehicle-based sensor systems are used to perform object detection and multi-target tracking, which estimate the velocity and position of the test vehicles; these estimates are then used to compute the OSA metrics. The comparison of the two systems covers the localization and tracking errors in the estimated position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for the velocity comparison and the drone's tracking results serving as the reference for the OSA metrics comparison.
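For a concrete sense of how tracked position and velocity feed an OSA metric, the sketch below computes time headway, a commonly cited car-following safety metric. The exact metric set and formulas used in this work are not given in the abstract, so treat this as illustrative only.

```python
# Sketch of one common car-following metric computed from tracked position
# and velocity, as the abstract describes. Time headway is one of the
# frequently used operational safety metrics; the paper's exact metric
# definitions are not reproduced here, so this is an illustrative assumption.

def time_headway(lead_position_m, follow_position_m, follow_speed_mps):
    """Time (s) for the following vehicle to reach the lead vehicle's position."""
    gap = lead_position_m - follow_position_m  # along-road gap between vehicles
    if follow_speed_mps <= 0.0:
        return float("inf")  # a stopped follower never closes the gap
    return gap / follow_speed_mps

# Example with values a LIDAR tracker might output for two test vehicles:
print(time_headway(lead_position_m=42.0, follow_position_m=18.0,
                   follow_speed_mps=12.0))  # 2.0 s headway
```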
  3. As we add more autonomous and semi-autonomous vehicles (AVs) to our roads, their effects on passenger and pedestrian safety are becoming more important. Despite extensive testing before deployment, AV systems are not perfect at identifying hazards in the roadway. Although a particular AV’s sensors and software may not be 100% accurate at identifying hazards, there is an untapped pool of information held by other AVs in the vicinity that could be used to quickly and accurately identify roadway hazards before they present a safety threat. 
  4. AVs suffer reduced maneuverability and performance due to the degradation of sensor performance in fog. Such degradation can cause significant object detection errors in AVs' safety-critical conditions. For instance, YOLOv5 performs well under favorable weather but suffers from mis-detections and false positives due to atmospheric scattering caused by fog particles. Existing deep object detection techniques often exhibit high accuracy but are sluggish at detecting objects in fog; conversely, methods with fast detection speeds have been obtained with deep learning at the expense of accuracy. The lack of balance between detection speed and accuracy in fog persists. This paper presents an improved YOLOv5-based multi-sensor fusion network that combines radar object detection with camera image bounding boxes. We transform the radar detections by mapping them into two-dimensional image coordinates and project the resulting radar image onto the camera image. Using an attention mechanism, we emphasize and improve the important feature representations used for object detection while reducing the loss of high-level feature information. We trained and tested our multi-sensor fusion network on clear- and multi-fog-weather datasets obtained from the CARLA simulator. Our results show that the proposed method significantly enhances the detection of small and distant objects. Our small CR-YOLOnet model best strikes a balance between accuracy and speed, with an accuracy of 0.849 at 69 fps.
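The radar-to-camera mapping described above amounts to a standard pinhole projection after an extrinsic transform. The sketch below is a minimal version of that step; the intrinsic matrix and calibration values are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of projecting a radar detection into camera pixel
# coordinates with a pinhole model. The paper's actual calibration and
# fusion pipeline are not specified; the matrices below are assumptions.

K = np.array([[800.0,   0.0, 640.0],   # camera intrinsics: focal lengths
              [  0.0, 800.0, 360.0],   # and principal point (pixels)
              [  0.0,   0.0,   1.0]])

def radar_to_pixel(point_radar, R, t):
    """Map a 3D radar point (meters) to (u, v) pixel coordinates.

    R, t: rotation and translation taking radar coordinates into the
    camera frame, obtained from extrinsic calibration.
    """
    p_cam = R @ np.asarray(point_radar) + t
    u, v, w = K @ p_cam          # homogeneous image coordinates
    return u / w, v / w          # perspective division

# Example: a detection 20 m ahead, slightly left, radar aligned with camera.
print(radar_to_pixel([-1.5, 0.0, 20.0], np.eye(3), np.zeros(3)))  # (580.0, 360.0)
```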
  5. Vehicles are becoming more intelligent and automated. To achieve higher automation levels, vehicles are being equipped with more and more sensors. High-data-rate connectivity seems critical to allow vehicles and road infrastructure to exchange all these sensor data, enlarging their sensing range and enabling better safety-related decisions. Connectivity also enables other applications such as infotainment and high levels of traffic coordination. Current solutions for vehicular communications, however, do not support gigabit-per-second data rates. This presentation makes the case that millimeter wave (mmWave) communication is the only viable approach for high-bandwidth connected vehicles. The motivation and challenges associated with using mmWave for vehicle-to-vehicle and vehicle-to-infrastructure applications are highlighted. Examples from recent work are provided, including new theoretical results that enable mmWave communication in high-mobility scenarios and innovative architectural concepts such as position- and radar-aided communication.
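A back-of-the-envelope Shannon capacity calculation illustrates why the wide channels available at mmWave frequencies matter for gigabit-per-second connectivity. The bandwidths and SNR in the sketch below are illustrative assumptions, not figures from the presentation.

```python
import math

# Shannon capacity, C = B * log2(1 + SNR): channel bandwidth B bounds the
# achievable rate. The specific bandwidth and SNR values are illustrative.

def shannon_capacity_gbps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1.0 + snr_linear) / 1e9

# A 10 MHz sub-6 GHz channel vs. a 2 GHz mmWave channel, both at 10 dB SNR:
print(shannon_capacity_gbps(10e6, 10))  # ~0.035 Gbps
print(shannon_capacity_gbps(2e9, 10))   # ~6.9 Gbps
```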