

Title: Establishing Trust in Vehicle-to-Vehicle Coordination: A Sensor Fusion Approach
Autonomous vehicles (AVs) use diverse sensors to understand their surroundings as they continually make safety-critical decisions. However, establishing trust with other AVs is a key prerequisite, because safety-critical decisions cannot be based on data shared by untrusted sources. Existing protocols require an infrastructure network connection and a third-party root of trust to establish a secure channel, neither of which is always available. In this paper, we propose a sensor-fusion approach to mobile trust establishment that combines GPS and visual data. The combined data forms evidence that one vehicle is near another, which strongly indicates that it is not a remote adversary and can therefore be trusted. Our preliminary experiments show that our sensor-fusion approach achieves above 80% successful pairing of two legitimate vehicles observing the same object within 5 meters of error. Based on these preliminary results, we anticipate that a refined approach can support fuzzy trust establishment, enabling better collaboration between nearby AVs.
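As a rough illustration of the pairing check described above, the sketch below projects each vehicle's camera observation of a shared object onto its GPS fix and accepts the pairing when the two projected positions agree within the 5-meter error bound. The function names, sample coordinates, and the flat-earth projection are illustrative assumptions, not the paper's implementation.

```python
import math

def observed_object_position(gps_lat, gps_lon, range_m, bearing_deg):
    """Project a camera/range observation from a vehicle's GPS fix to an
    approximate object position (flat-earth approximation, short ranges)."""
    # Meters per degree (rough, mid-latitude); adequate for tens of meters.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(gps_lat))
    dx = range_m * math.sin(math.radians(bearing_deg))  # east offset
    dy = range_m * math.cos(math.radians(bearing_deg))  # north offset
    return gps_lat + dy / m_per_deg_lat, gps_lon + dx / m_per_deg_lon

def pairing_evidence(obs_a, obs_b, threshold_m=5.0):
    """Accept the pairing if both vehicles' projected object positions
    agree within the error threshold (5 m in the experiments above)."""
    lat_a, lon_a = observed_object_position(*obs_a)
    lat_b, lon_b = observed_object_position(*obs_b)
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat_a))
    d = math.hypot((lat_a - lat_b) * m_per_deg_lat,
                   (lon_a - lon_b) * m_per_deg_lon)
    return d <= threshold_m

# Two vehicles observing the same roadside object from different poses:
# (lat, lon, range to object in m, bearing to object in deg).
a = (33.4484, -112.0740, 12.0, 45.0)
b = (33.4485, -112.0739, 2.8, 198.0)
print(pairing_evidence(a, b))  # True: projected positions nearly coincide
```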
Award ID(s):
2107020
NSF-PAR ID:
10338336
Journal Name:
2022 2nd Workshop on Data-Driven and Intelligent Cyber-Physical Systems for Smart Cities (DI-CPS)
Page Range / eLocation ID:
7 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. The operational safety of Automated Driving System (ADS)-operated vehicles (AVs) is a rising concern as AVs are deployed both as test prototypes and in commercial service. Robust safety evaluation systems are essential for determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) on Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection and Ranging (LIDAR) system to an onboard vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities, located in the infrastructure and onboard the test vehicles, include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms were created for the LIDAR and cameras. In total, there are 26 different scenarios of the test vehicles navigating the testbed intersection; for this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking, estimating the velocity and position of the test vehicles; these estimates are then used to compute OSA metrics. The comparison of the two systems covers the localization and tracking errors in the estimated position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for the velocity comparison and the drone's tracking results serving as the reference for the OSA metrics comparison.
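As one concrete example of an OSA-style metric computed from tracked position and velocity estimates, the sketch below evaluates time-to-collision for a car-following pair. The function, the assumed vehicle length, and the sample values are illustrative, not the IAM toolchain.

```python
def time_to_collision(lead_pos_m, follow_pos_m, lead_vel_mps, follow_vel_mps,
                      lead_length_m=4.5):
    """Time-to-collision for a car-following pair, computed from tracked
    longitudinal positions and velocities (a standard OSA-style metric)."""
    gap = lead_pos_m - follow_pos_m - lead_length_m  # bumper-to-bumper gap
    closing = follow_vel_mps - lead_vel_mps          # > 0 means closing in
    if closing <= 0:
        return float("inf")  # not closing; no predicted collision
    return gap / closing

# Tracked estimates from LIDAR (infrastructure or onboard); differential
# GPS would serve as the velocity ground truth for comparison.
print(time_to_collision(lead_pos_m=50.0, follow_pos_m=30.0,
                        lead_vel_mps=10.0, follow_vel_mps=14.0))  # 3.875 s
```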

2. A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that use both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open-source sensor fusion model architecture against adversarial attacks, and we challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks, including disappearance, universal patch, and spoofing attacks. After identifying the underlying reason, we explore potential defenses and provide recommendations for improved sensor fusion models.
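For intuition, the sketch below shows the generic shape of a gradient-based, disappearance-style patch attack against the camera branch of a detector. The stand-in model, patch placement, and loss are placeholders, not the attacked sensor fusion architecture from the paper.

```python
import torch
import torch.nn as nn

# Stand-in detector: any differentiable model scoring object presence
# from the RGB input would play this role (here, an untrained toy CNN).
model = nn.Sequential(nn.Conv2d(3, 8, 5), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
model.eval()

image = torch.rand(1, 3, 64, 64)              # RGB input to the camera branch
patch = torch.zeros(1, 3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for _ in range(100):
    adv = image.clone()
    adv[:, :, 24:40, 24:40] = torch.clamp(patch, 0, 1)  # paste patch in-scene
    loss = model(adv).squeeze()  # minimize presence score -> "disappearance"
    opt.zero_grad()
    loss.backward()
    opt.step()
```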
3. Vehicles are becoming more intelligent and automated. To achieve higher automation levels, vehicles are being equipped with more and more sensors. High-data-rate connectivity seems critical to allow vehicles and road infrastructure to exchange all these sensor data, enlarging their sensing range and supporting better safety-related decisions. Connectivity also enables other applications such as infotainment and high levels of traffic coordination. Current solutions for vehicular communications, though, do not support gigabit-per-second data rates. This presentation makes the case that millimeter wave (mmWave) communication is the only viable approach for high-bandwidth connected vehicles. The motivation and challenges associated with using mmWave for vehicle-to-vehicle and vehicle-to-infrastructure applications are highlighted. Examples from recent work are provided, including new theoretical results that enable mmWave communication in high-mobility scenarios and innovative architectural concepts such as position- and radar-aided communication.
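As a toy illustration of position-aided communication, the sketch below picks a beam from a fixed codebook using only the shared positions of transmitter and receiver, avoiding an exhaustive beam sweep. The codebook size and positions are made-up values.

```python
import math

def select_beam(tx_pos, rx_pos, codebook_deg):
    """Pick the codebook beam whose pointing angle best matches the
    geometric bearing from transmitter to receiver (position-aided
    alignment instead of exhaustive beam sweeping)."""
    bearing = math.degrees(math.atan2(rx_pos[1] - tx_pos[1],
                                      rx_pos[0] - tx_pos[0]))
    # Minimize angular distance, wrapping around +/-180 degrees.
    return min(codebook_deg, key=lambda b: abs((b - bearing + 180) % 360 - 180))

# 16-beam uniform codebook; positions shared over a low-rate control link.
codebook = [i * 22.5 for i in range(16)]
print(select_beam(tx_pos=(0.0, 0.0), rx_pos=(30.0, 40.0),
                  codebook_deg=codebook))  # 45.0
```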
4. Learning the human-mobility interaction (HMI) in interactive scenes (e.g., how a vehicle turns at an intersection in response to traffic lights and other oncoming vehicles) can enhance the safety, efficiency, and resilience of smart mobility systems (e.g., autonomous vehicles) and many other ubiquitous computing applications. Toward ubiquitous and understandable HMI learning, this paper considers both spoken language (e.g., human textual annotations) and unspoken language (e.g., visual and sensor-based behavioral mobility information related to the HMI scenes) as information modalities from real-world HMI scenarios. We aim to extract the important but possibly implicit HMI concepts (as named entities) from the textual annotations (provided by human annotators) through a novel human-language and sensor-data co-learning design.

To this end, we propose CG-HMI, a novel Cross-modality Graph fusion approach for extracting important human-mobility interaction concepts from the co-learning of textual annotations with visual and behavioral sensor data. To fuse the unspoken and spoken languages, we design a unified representation called the human-mobility interaction graph (HMIG) for each modality related to the HMI scenes, i.e., textual annotations, visual video frames, and behavioral sensor time-series (e.g., from onboard or smartphone inertial measurement units). The nodes of the HMIG in these modalities correspond to the textual words (tokenized for ease of processing) related to HMI concepts, the detected traffic participant/environment categories, and the vehicle maneuver behavior types determined from the behavioral sensor time-series. To extract the inter- and intra-modality semantic correspondences and interactions in the HMIG, we design a novel graph interaction fusion approach with differentiable pooling-based graph attention. The resulting graph embeddings are then processed to identify and retrieve the HMI concepts within the annotations, benefiting downstream human-computer interaction and ubiquitous computing applications. We have developed and implemented CG-HMI in a system prototype and performed extensive studies on three real-world HMI datasets (two on car driving and one on e-scooter riding). We corroborate the excellent performance (on average 13.11% higher accuracy than the baselines in terms of precision, recall, and F1 measure) and effectiveness of CG-HMI in recognizing and extracting the important HMI concepts through cross-modality learning. Our CG-HMI studies also provide real-world implications (e.g., for road safety and driving behaviors) about the interactions between drivers and other traffic participants.
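The sketch below illustrates the general idea of cross-modality attention fusion over node embeddings from two modalities. It is a minimal stand-in with random inputs, not CG-HMI's graph interaction fusion with differentiable pooling.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Minimal cross-modality fusion: nodes of one modality attend over
    the nodes of another (e.g., text tokens over visual/behavior nodes)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, text_nodes, other_nodes):
        # text_nodes: (T, d); other_nodes: (N, d)
        attn = torch.softmax(self.q(text_nodes) @ self.k(other_nodes).T
                             / text_nodes.shape[-1] ** 0.5, dim=-1)
        return text_nodes + attn @ self.v(other_nodes)  # fused text embeddings

dim = 32
fuse = CrossModalityAttention(dim)
text = torch.randn(6, dim)      # tokenized annotation words
visual = torch.randn(10, dim)   # detected traffic-participant categories
behavior = torch.randn(4, dim)  # maneuver types from IMU time-series
fused = fuse(fuse(text, visual), behavior)
print(fused.shape)  # torch.Size([6, 32]) -> feed to a concept tagger
```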

5. The effectiveness of obstacle-avoidance response safety systems such as ADAS has demonstrated the necessity of optimally integrating and enhancing these systems in vehicles to increase the road safety of vehicle occupants and pedestrians. Vehicle-pedestrian clearance can be achieved with a model safety envelope based on distance sensors designed to keep a threshold between the ego vehicle and pedestrians or objects in the traffic environment. More accurate, reliable, and robust distance measurements are possible through multi-sensor fusion. This work presents the structure of a machine-learning-based sensor fusion algorithm that can accurately detect a vehicle safety envelope using an HC-SR04 ultrasonic sensor, an SF11/C microLiDAR sensor, and a 2D RPLiDAR A3M1 sensor. Sensors for the vehicle safety envelope and ADAS were calibrated for optimal performance and integration with versatile vehicle-sensor platforms. The resulting distance sensor fusion algorithm correctly senses obstacles from 0.05 m to 0.5 m with 94.33% average accuracy when trained as individual networks per distance, and with 96.95% average accuracy when trained as a single network across all distances. Results were measured from the precision and accuracy of the sensors' outputs at the time the safety response activated once a potential collision was detected. Based on these results, the platform has the potential to identify collision scenarios, warn the driver, and take corrective action at the coordinate where the risk has been identified.
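The paper's fusion algorithm is learned; as a point of comparison, the sketch below shows the classical inverse-variance baseline for fusing the three distance readings and checking the safety envelope. The per-sensor noise variances and threshold are assumed values.

```python
import numpy as np

def fuse_distances(ultrasonic_m, microlidar_m, lidar2d_m,
                   variances=(0.04, 0.01, 0.02)):
    """Inverse-variance weighted fusion of three distance readings
    (ultrasonic, microLiDAR, 2D LiDAR). The variances are assumed
    per-sensor noise levels, not calibrated values from the paper."""
    readings = np.array([ultrasonic_m, microlidar_m, lidar2d_m])
    weights = 1.0 / np.asarray(variances)
    return float(np.sum(weights * readings) / np.sum(weights))

def envelope_breached(fused_m, threshold_m=0.5):
    """Trigger the safety response when the fused distance falls
    inside the modeled safety envelope."""
    return fused_m < threshold_m

d = fuse_distances(0.31, 0.28, 0.30)
print(d, envelope_breached(d))  # fused distance and breach decision
```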
