Title: Multi‐vehicle multi‐sensor occupancy grid map fusion in vehicular networks
Abstract: Sensing is an essential part of autonomous driving and intelligent transportation systems. It enables a vehicle to better understand itself and its surrounding environment. Vehicular networks support information sharing among different vehicles and hence enable multi-vehicle multi-sensor cooperative sensing, which can greatly improve sensing performance. However, two issues must be addressed. First, multi-sensor data fusion needs to deal with heterogeneous data formats. Second, the cooperative sensing process needs to deal with low data quality and perception blind spots for some vehicles. To solve these problems, this paper adopts the occupancy grid map to facilitate the fusion of multi-vehicle and multi-sensor data. Dynamic target detection boxes and pixel information from the camera data are mapped onto the static environment of the LiDAR point cloud, a kernel density estimation of the spatial occupancy probability distribution is designed to characterize the fused data, and an occupancy grid map is generated at both the probability level and the spatial level. Real-world experiments show that the proposed fusion framework better accommodates the heterogeneous data of different sensors and expands the sensing range through collaboration among multiple vehicles in vehicular networks.
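The paper's kernel-density characterization is not spelled out in this abstract, but the probability-level fusion it describes is typically built on per-cell log-odds combination of the vehicles' occupancy grids. A minimal sketch (the grids, prior, and values below are illustrative, not from the paper):

```python
import numpy as np

def logit(p):
    """Log-odds of an occupancy probability."""
    return np.log(p / (1.0 - p))

def fuse_occupancy_grids(grids, prior=0.5):
    """Fuse per-vehicle occupancy-probability grids by summing log-odds.

    Each grid holds P(occupied) per cell; cells a vehicle cannot see
    should carry the prior so they contribute nothing to the sum.
    """
    l = sum(logit(np.asarray(g)) for g in grids) - (len(grids) - 1) * logit(prior)
    return 1.0 / (1.0 + np.exp(-l))

# Two vehicles observing the same 2x2 area from different viewpoints;
# 0.5 marks cells a vehicle did not observe (stays at the prior).
a = np.array([[0.9, 0.5], [0.2, 0.5]])
b = np.array([[0.8, 0.7], [0.5, 0.1]])
fused = fuse_occupancy_grids([a, b])
```

Cells seen by only one vehicle keep that vehicle's estimate, while agreeing observations reinforce each other, which is how cooperation fills one vehicle's blind spots.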
Award ID(s):
1932139
PAR ID:
10362078
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
DOI PREFIX: 10.1049
Date Published:
Journal Name:
IET Communications
Volume:
16
Issue:
1
ISSN:
1751-8628
Format(s):
Medium: X
Size(s):
p. 67-74
Sponsoring Org:
National Science Foundation
More Like this
  1. A major challenge in cooperative sensing is weighting the measurements taken from the various sources to obtain an accurate result. Ideally, the weights should be inversely proportional to the error in the sensing information. However, previous cooperative sensor fusion approaches for autonomous vehicles use a fixed error model, in which the covariance of a sensor and its recognizer pipeline is simply the mean of the measured covariance over all sensing scenarios. The approach proposed in this paper estimates error using key predictor terms that correlate highly with sensing and localization accuracy, yielding an accurate covariance estimate for each sensor observation. We adopt a tiered fusion model consisting of local and global sensor fusion steps. At the local fusion level, we add a covariance generation stage that uses the error model for each sensor and the measured distance to produce the expected covariance matrix for each observation. At the global fusion level, we add a stage that generates the localization covariance matrix from the key predictor term, velocity, and combines it with the covariance produced by local fusion for accurate cooperative sensing. To showcase our method, we built a set of 1/10-scale model autonomous vehicles with scale-accurate sensing capabilities and characterized their error against a motion capture system. Results show average and maximum RMSE improvements of 1.42x and 1.78x, respectively, when detecting vehicle positions in a four-vehicle cooperative fusion scenario with our error model versus a typical fixed error model.
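The inverse-error weighting described above has a standard closed form: combining Gaussian observations weighted by inverse covariance (the information-filter form). A sketch with hypothetical LiDAR and camera tracks; the covariances here stand in for the per-observation ones the paper's error model would generate:

```python
import numpy as np

def fuse_observations(means, covs):
    """Inverse-covariance-weighted fusion of Gaussian observations:
    Sigma = (sum_i Sigma_i^-1)^-1,  mu = Sigma @ sum_i Sigma_i^-1 mu_i.
    Observations with smaller covariance get proportionally more weight.
    """
    infos = [np.linalg.inv(C) for C in covs]
    fused_cov = np.linalg.inv(sum(infos))
    fused_mean = fused_cov @ sum(I @ m for I, m in zip(infos, means))
    return fused_mean, fused_cov

# A precise track and a noisier track of the same vehicle position:
m1, C1 = np.array([2.0, 1.0]), np.diag([0.1, 0.1])
m2, C2 = np.array([2.4, 1.2]), np.diag([0.4, 0.4])
mu, cov = fuse_observations([m1, m2], [C1, C2])
```

The fused estimate lands much closer to the low-covariance track, which is exactly why a fixed (scenario-averaged) covariance misweights observations whose actual error varies with distance or velocity.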
  2. A vehicular communication network allows vehicles on the road to be connected by wireless links, providing road safety in vehicular environments. However, vehicular communication networks are vulnerable to various types of attacks. Cryptographic techniques can prevent attacks such as message modification or vehicle impersonation, but they are not enough to protect against insider attacks, where the attacking vehicle has already been authenticated in the network. Vehicular network safety services rely on periodic broadcasts of basic safety messages (BSMs) that contain important information about each vehicle, such as position, speed, and received signal strength indicator (RSSI). Malicious vehicles can inject false position information into a BSM to commit a position falsification attack, one of the most dangerous insider attacks in vehicular networks; such false position information can lead to traffic jams or accidents. A misbehavior detection system (MDS) is an efficient way to detect such attacks and mitigate their impact. Existing MDSs require a large number of features, which increases the computational complexity of detecting these attacks. In this paper, we propose a novel grid-based misbehavior detection system that utilizes the position information from the BSMs. Our model is tested on a publicly available dataset using five supervised-learning classification algorithms. It performs multi-classification and is found to be superior to other existing methods that deal with position falsification attacks.
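The abstract does not give the system's exact features, but a grid-based plausibility check on BSM positions can be sketched as follows: map the claimed position to a grid cell and flag claims that fall outside the cells reachable from the previous claim at the reported speed. The cell size, BSM fields, and slack term below are assumptions for illustration:

```python
import math

CELL = 10.0  # assumed grid cell size in metres

def cell_of(x, y):
    """Map a position to its integer grid-cell coordinates."""
    return (int(x // CELL), int(y // CELL))

def plausible(prev_bsm, curr_bsm, dt):
    """Return True if curr_bsm's claimed position lies within the set of
    grid cells reachable from prev_bsm at the reported speed over dt.
    Each BSM here is a hypothetical (x, y, speed_m_per_s) tuple.
    """
    px, py, speed = prev_bsm
    cx, cy, _ = curr_bsm
    # Reachable Chebyshev cell distance, +1 cell of boundary slack.
    max_cells = math.ceil((speed * dt) / CELL) + 1
    pcx, pcy = cell_of(px, py)
    ccx, ccy = cell_of(cx, cy)
    return max(abs(ccx - pcx), abs(ccy - pcy)) <= max_cells
```

Working at cell granularity rather than raw coordinates is what keeps the feature set small and the per-message check cheap.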
  3. With vehicles becoming increasingly connected and potentially autonomous, they are being equipped with rich sensing and communication devices. Various vehicular services based on shared real-time sensor data from a fleet have been proposed to improve urban efficiency, e.g., HD live maps and traffic accident recovery. However, due to the high cost of data uploading (e.g., monthly fees for a cellular network), it is impractical to have all well-equipped vehicles upload real-time sensor data constantly. To better utilize these limited uploading resources and achieve optimal road-segment sensing coverage, we present RISC, a real-time sensing task scheduling framework for resource-constrained urban sensing that schedules the sensing tasks of sensor-equipped commercial vehicles based on the predictability of the vehicles' mobility patterns. In particular, we use commercial vehicles, including taxicabs, buses, and logistics trucks, as mobile sensors to sense urban phenomena, e.g., traffic, with equipped vehicular sensors such as dash cams, lidar, and automotive radar. We implement RISC in the Chinese city of Shenzhen with one month of real-world data from (i) a taxi fleet with 14 thousand vehicles; (ii) a bus fleet with 13 thousand vehicles; and (iii) a truck fleet with 4 thousand vehicles. Further, we design an application that tracks suspect vehicles (e.g., hit-and-run vehicles) to evaluate the performance of RISC on urban sensing, based on data from a regular (personal car) fleet with 11 thousand vehicles. The evaluation results show that, compared with state-of-the-art solutions, RISC improves sensing coverage (i.e., the number of road segments covered by sensing vehicles) by 10% on average.
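Scheduling uploads to maximize road-segment coverage under an upload budget is a max-coverage problem, for which a greedy pass is the standard baseline. This sketch is not RISC itself (which additionally exploits mobility prediction); the vehicle ids and segment sets are illustrative:

```python
def schedule_uploads(vehicle_segments, budget):
    """Greedy max-coverage: pick up to `budget` vehicles whose predicted
    routes (sets of road-segment ids) add the most not-yet-covered
    segments. Returns the chosen vehicles and the covered segment set.
    """
    covered, chosen = set(), []
    remaining = dict(vehicle_segments)
    for _ in range(budget):
        # Vehicle contributing the most new segments this round.
        best = max(remaining, key=lambda v: len(remaining[v] - covered),
                   default=None)
        if best is None or not (remaining[best] - covered):
            break  # budget left over but nothing new to cover
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

# Predicted road segments per vehicle (hypothetical ids):
routes = {'taxi_1': {1, 2, 3}, 'bus_7': {3, 4}, 'truck_2': {5}}
chosen, covered = schedule_uploads(routes, budget=2)
```

Greedy max-coverage carries a (1 - 1/e) approximation guarantee, which makes it a reasonable yardstick for coverage numbers like the 10% improvement reported above.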
  4. Multi-sensor fusion is widely used by autonomous vehicles (AVs) to integrate perception results from different sensing modalities, including LiDAR, camera, and radar. Despite the rapid development of multi-sensor fusion systems in autonomous driving, their vulnerability to malicious attacks has not been well studied. Although some prior works have studied attacks against the perception systems of AVs, they consider only a single sensing modality or a camera-LiDAR fusion system and thus cannot attack a sensor fusion system based on LiDAR, camera, and radar. To fill this research gap, this paper presents the first study of the vulnerability of multi-sensor fusion systems that employ LiDAR, camera, and radar. Specifically, we propose a novel attack method that can simultaneously attack all three sensing modalities using a single type of adversarial object. The adversarial object can be fabricated easily at low cost, and the proposed attack can be performed with high stealthiness and flexibility in practice. Extensive experiments on a real-world AV testbed show that the proposed attack can continuously hide a target vehicle from the perception system of a victim AV using only two small adversarial objects.
  5. We consider a scenario in which an autonomous vehicle equipped with a downward-facing camera operates in a 3D environment and is tasked with searching for an unknown number of stationary targets on the 2D floor of the environment. The key challenge is to minimize the search time while ensuring a high detection accuracy. We model the sensing field using a multi-fidelity Gaussian process that systematically describes the sensing information available at different altitudes from the floor. Based on the sensing model, we design a novel algorithm called Expedited Multi-Target Search (EMTS) that (i) addresses the coverage-accuracy trade-off: sampling at locations farther from the floor provides a wider field of view but less accurate measurements, (ii) computes an occupancy map of the floor within a prescribed accuracy and quickly eliminates unoccupied regions from the search space, and (iii) travels efficiently to collect the required samples for target detection. We rigorously analyze the algorithm and establish formal guarantees on the target detection accuracy and the detection time. We illustrate the algorithm using a simulated multi-target search scenario. 
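The coverage-accuracy trade-off above can be made concrete with a back-of-the-envelope model: the camera footprint grows with altitude while per-look noise grows too, so the total number of samples needed to map the floor to a target accuracy varies with altitude. All parameters below are hypothetical, not EMTS's actual multi-fidelity sensing model:

```python
import math

def samples_needed(area, h, sigma0, k, fov_tan, eps):
    """Samples to map `area` at altitude h, assuming a square footprint
    of side 2*h*fov_tan, per-look noise sigma(h) = sigma0 + k*h, and
    n repeated looks reducing standard error by sqrt(n), so each cell
    needs ceil((sigma(h)/eps)^2) looks to reach std-error eps.
    """
    footprint = (2.0 * h * fov_tan) ** 2
    cells = math.ceil(area / footprint)
    n_per_cell = math.ceil(((sigma0 + k * h) / eps) ** 2)
    return cells * n_per_cell

# Hypothetical numbers: 400 m^2 floor, tan(half-FOV) = 0.5,
# noise 0.05 * h metres, target std-error 0.2 m.
low = samples_needed(400.0, 2.0, 0.0, 0.05, 0.5, 0.2)    # many narrow, clean looks
high = samples_needed(400.0, 10.0, 0.0, 0.05, 0.5, 0.2)  # few wide, noisy looks
```

Under these made-up parameters the high-altitude sweep needs fewer total samples, illustrating why an algorithm like EMTS mixes altitudes instead of fixing one.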