Abstract: Sensing is an essential part of autonomous driving and intelligent transportation systems. It enables a vehicle to better understand itself and its surrounding environment. Vehicular networks support information sharing among vehicles and thus enable multi-vehicle, multi-sensor cooperative sensing, which can greatly improve sensing performance. However, two issues must be addressed. First, multi-sensor data fusion has to handle heterogeneous data formats. Second, the cooperative sensing process has to cope with low data quality and perception blind spots for some vehicles. To solve these problems, this paper adopts the occupancy grid map to facilitate the fusion of multi-vehicle, multi-sensor data. The dynamic target detection boxes and pixel information from the camera data are mapped into the static environment of the LiDAR point cloud, a kernel density estimation method is designed to characterize the spatial occupancy probability distribution of the fused data, and an occupancy grid map is generated at both the probability level and the spatial level. Real-world experiments show that the proposed fusion framework is more compatible with the data from different sensors and extends the sensing range by leveraging collaboration among multiple vehicles in vehicular networks.
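The fusion idea described in this abstract can be illustrated with a toy example. The sketch below is not the paper's implementation; under assumed grid resolution and kernel bandwidth values, it only shows how obstacle points observed by different sensors or vehicles might each contribute a Gaussian kernel to a shared grid (a kernel density estimate of occupancy), and how per-vehicle grids could then be combined with an independent-evidence rule.

```python
# Illustrative sketch only (assumed parameters, not the authors' code).
import numpy as np

GRID_SIZE = (200, 200)   # grid cells; with 0.5 m cells this covers a 100 m x 100 m area
RESOLUTION = 0.5         # metres per cell (assumed)
BANDWIDTH = 1.0          # KDE bandwidth in metres (assumed)

def kde_occupancy(points):
    """Build one vehicle's occupancy grid: each observed obstacle point adds a
    Gaussian kernel, so cell occupancy is a kernel density estimate squashed to [0, 1]."""
    gy, gx = np.meshgrid(np.arange(GRID_SIZE[0]), np.arange(GRID_SIZE[1]), indexing="ij")
    centres = np.stack([gy, gx], axis=-1) * RESOLUTION   # world coordinates of cell centres
    density = np.zeros(GRID_SIZE)
    for p in points:
        d2 = np.sum((centres - p) ** 2, axis=-1)
        density += np.exp(-d2 / (2 * BANDWIDTH ** 2))
    return 1.0 - np.exp(-density)

def fuse(grids):
    """Combine per-vehicle / per-sensor grids with a noisy-OR (independent evidence) rule."""
    free = np.ones(GRID_SIZE)
    for g in grids:
        free *= 1.0 - g
    return 1.0 - free

# Vehicle A: LiDAR returns; vehicle B: camera detections projected into the same frame.
lidar_points = np.array([[30.0, 40.0], [30.5, 40.5]])
camera_points = np.array([[60.0, 20.0]])
fused_map = fuse([kde_occupancy(lidar_points), kde_occupancy(camera_points)])
```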
Motion Characterization for Vehicular Visible Light Communications
The increasing use of light-emitting diodes (LEDs) and light receptors such as photodiodes and cameras in vehicles motivates the use of visible light communication (VLC) for inter-vehicular networking. However, the mobility of the vehicles presents a fundamental impediment to high throughput and link sustenance in vehicular VLC. While prior work has explored vehicular VLC system design, there is still no clear understanding of how much vehicles move in real-world vehicular VLC use-case scenarios. To address this knowledge gap, in this paper we present a mobility characterization study through extensive experiments in real-world driving scenarios. We characterize motion using a constantly illuminated transmitter on a lead vehicle and a multi-camera setup on a following vehicle. The observations from our experiments reveal key insights into the degree of relative motion of a vehicle along its spatial axes and into different vehicular motion behaviors. The motion characterization from this work lays a stepping stone toward addressing mobility in vehicular VLC.
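As a rough illustration of how such camera-based motion characterization might work (an assumed pipeline, not the authors' system), the sketch below tracks the centroid of a bright LED transmitter across grayscale frames and converts its pixel displacement into angular displacement under a pinhole camera model; the focal length and principal point values are placeholders.

```python
# Illustrative sketch only; camera parameters are placeholders.
import numpy as np

FOCAL_PX = 1400.0        # assumed focal length in pixels
CX, CY = 960.0, 540.0    # assumed principal point for a 1920x1080 sensor

def led_centroid(frame, threshold=240):
    """Centroid (u, v) of near-saturated pixels in a grayscale frame, or None if absent."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def angular_offset(uv):
    """Pixel location -> (horizontal, vertical) angles in radians under a pinhole model."""
    u, v = uv
    return np.arctan((u - CX) / FOCAL_PX), np.arctan((v - CY) / FOCAL_PX)

def motion_trace(frames):
    """Angular displacement of the transmitter in each frame relative to the first frame."""
    angles = [angular_offset(c) for c in map(led_centroid, frames) if c is not None]
    if not angles:
        return []
    ref = np.array(angles[0])
    return [tuple(np.array(a) - ref) for a in angles]
```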
- Award ID(s): 1755925
- PAR ID: 10097630
- Date Published:
- Journal Name: 11th International Conference on Communication Systems & Networks (COMSNETS)
- Page Range / eLocation ID: 759 to 764
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Vehicles are becoming more intelligent and automated. To achieve higher automation levels, vehicles are being equipped with more and more sensors. High data rate connectivity seems critical to allow vehicles and road infrastructure to exchange all these sensor data, enlarging their sensing range and enabling better safety-related decisions. Connectivity also enables other applications such as infotainment and high levels of traffic coordination. Current solutions for vehicular communications, though, do not support gigabit-per-second data rates. This presentation makes the case that millimeter wave (mmWave) communication is the only viable approach for high-bandwidth connected vehicles. The motivation and challenges associated with using mmWave for vehicle-to-vehicle and vehicle-to-infrastructure applications are highlighted. Examples from recent work are provided, including new theoretical results that enable mmWave communication in high-mobility scenarios and innovative architectural concepts such as position- and radar-aided communication.
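As an illustration of the position-aided communication concept mentioned in this abstract (a simplified sketch, not a result from the presentation), the snippet below picks a beam from an assumed 64-entry codebook using only the receiver's reported relative position, avoiding an exhaustive beam sweep.

```python
# Illustrative sketch only; the codebook and geometry are assumptions.
import numpy as np

NUM_BEAMS = 64
BEAM_ANGLES = np.linspace(-np.pi / 3, np.pi / 3, NUM_BEAMS)   # assumed 120-degree sector codebook

def position_aided_beam(tx_xy, rx_xy):
    """Pick the codebook beam whose steering angle is closest to the receiver's bearing."""
    bearing = np.arctan2(rx_xy[1] - tx_xy[1], rx_xy[0] - tx_xy[0])
    return int(np.argmin(np.abs(BEAM_ANGLES - bearing)))

# Roadside unit at the origin serving a vehicle whose position is reported as (40 m, 12 m):
beam_index = position_aided_beam(np.array([0.0, 0.0]), np.array([40.0, 12.0]))
```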
- With the trend of vehicles becoming increasingly connected and potentially autonomous, vehicles are being equipped with rich sensing and communication devices. Various vehicular services based on shared real-time sensor data from a fleet of vehicles have been proposed to improve urban efficiency, e.g., HD live maps and traffic accident recovery. However, due to the high cost of data uploading (e.g., monthly fees for a cellular network), it would be impractical to have all well-equipped vehicles upload real-time sensor data constantly. To better utilize these limited uploading resources and achieve optimal road segment sensing coverage, we present RISC, a real-time sensing task scheduling framework for resource-constrained urban sensing that schedules the sensing tasks of sensor-equipped commercial vehicles based on the predictability of their mobility patterns. In particular, we utilize commercial vehicles, including taxicabs, buses, and logistics trucks, as mobile sensors to sense urban phenomena, e.g., traffic, using equipped vehicular sensors such as dash cams, lidar, and automotive radar. We implement RISC in the Chinese city of Shenzhen with one month of real-world data from (i) a taxi fleet with 14 thousand vehicles, (ii) a bus fleet with 13 thousand vehicles, and (iii) a truck fleet with 4 thousand vehicles. Further, we design an application, i.e., tracking suspect vehicles (e.g., hit-and-run vehicles), to evaluate the performance of RISC on the urban sensing aspect based on data from a regular (personal car) fleet with 11 thousand vehicles. The evaluation results show that, compared to state-of-the-art solutions, RISC improves sensing coverage (i.e., the number of road segments covered by sensing vehicles) by 10% on average.
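The budgeted-coverage idea behind such scheduling can be sketched as a greedy set-cover heuristic. The snippet below is illustrative only and is not the RISC algorithm; it assumes a mobility model has already predicted which road segments each vehicle will traverse in the next interval.

```python
# Illustrative greedy coverage sketch; vehicle names and segment ids are hypothetical.
def schedule_uploads(predicted_segments, budget):
    """predicted_segments: {vehicle_id: set of road segment ids the vehicle is predicted to traverse}
    budget: number of vehicles allowed to upload in this interval.
    Returns the chosen vehicles and the road segments they jointly cover."""
    remaining = dict(predicted_segments)    # work on a copy so the caller's dict is untouched
    covered, chosen = set(), []
    for _ in range(budget):
        # Pick the vehicle that adds the most not-yet-covered segments.
        best = max(remaining, key=lambda v: len(remaining[v] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break                           # no vehicle adds new coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

# Three fleet vehicles competing for two upload slots:
plan, coverage = schedule_uploads({"taxi_7": {1, 2, 3}, "bus_2": {3, 4}, "truck_9": {5}}, budget=2)
```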
- The operational safety of Automated Driving System (ADS)-operated vehicles (AVs) is a rising concern as AV prototypes are tested and commercially deployed. The robustness of safety evaluation systems is essential in determining the operational safety of AVs as they interact with human-driven vehicles. Extending earlier work by the Institute of Automated Mobility (IAM) that explored Operational Safety Assessment (OSA) metrics and infrastructure-based safety monitoring systems, in this work we compare the performance of an infrastructure-based Light Detection and Ranging (LIDAR) system to an onboard vehicle-based LIDAR system in testing at the Maricopa County Department of Transportation SMARTDrive testbed in Anthem, Arizona. The sensor modalities are located in the infrastructure and onboard the test vehicles, and include LIDAR, cameras, a real-time differential GPS, and a drone with a camera. Bespoke localization and tracking algorithms are created for the LIDAR and cameras. In total, there are 26 different scenarios of the test vehicles navigating the testbed intersection; for this work, we consider only car-following scenarios. The LIDAR data collected from the infrastructure-based and onboard vehicle-based sensor systems are used to perform object detection and multi-target tracking to estimate the velocity and position of the test vehicles, and these values are used to compute OSA metrics. The comparison of the two systems considers the localization and tracking errors in calculating the position and velocity of the subject vehicle, with the real-time differential GPS data serving as ground truth for the velocity comparison and the tracking results from the drone serving as ground truth for the OSA metrics comparison.
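A minimal sketch of the kind of comparison described above (illustrative only, not the IAM pipeline): computing RMS position and speed errors of a LIDAR-derived track against differential-GPS ground truth, plus one simple car-following OSA-style metric (time headway) from the tracked states.

```python
# Illustrative sketch only; arrays are assumed to be time-aligned [T, 2] tracks.
import numpy as np

def tracking_errors(est_pos, est_vel, gt_pos, gt_vel):
    """RMS position error (m) and RMS speed error (m/s) between a LIDAR-derived track
    and differential-GPS ground truth."""
    pos_rmse = np.sqrt(np.mean(np.sum((est_pos - gt_pos) ** 2, axis=1)))
    speed_err = np.linalg.norm(est_vel, axis=1) - np.linalg.norm(gt_vel, axis=1)
    vel_rmse = np.sqrt(np.mean(speed_err ** 2))
    return pos_rmse, vel_rmse

def time_headway(lead_pos, follow_pos, follow_speed):
    """Gap between lead and following vehicle divided by follower speed, per time step;
    one common OSA-style car-following metric."""
    gap = np.linalg.norm(lead_pos - follow_pos, axis=1)
    return gap / np.maximum(follow_speed, 1e-3)   # avoid division by zero at standstill
```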
- Accurate and up-to-date digital road maps are the foundation of many mobile applications, such as navigation and autonomous driving. Manually created maps suffer from high creation and maintenance costs due to constant road network updates. Recently, the ubiquity of GPS devices in vehicular systems has led to an unprecedented amount of vehicle sensing data for map inference. Unfortunately, accurate map inference based on vehicle GPS is challenging for two reasons. First, it is difficult to infer complete road structures because of the sensing deviation, sparse coverage, and low GPS sampling rate of a fleet of vehicles with similar mobility patterns, e.g., taxis. Second, a road map requires various road properties such as road categories, which are difficult to infer from GPS locations alone. In this paper, we design a map inference system called coMap that considers multiple fleets of vehicles with Complementary Mobility Features. coMap has two key components: a graph-based map sketching component and a learning-based map painting component. We implement coMap with data from four type-aware vehicular sensing systems in one city, consisting of 18 thousand taxis, 10 thousand private vehicles, 6 thousand trucks, and 14 thousand buses. We conduct a comprehensive evaluation of coMap against two state-of-the-art baselines, with ground truth based on OpenStreetMap and a commercial map provider, i.e., Baidu Maps. The results show that (i) for map sketching, our work improves performance by 15.9%; (ii) for map painting, our work achieves 74.58% average accuracy on road category classification.
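The two-stage structure (sketching, then painting) can be illustrated with a toy example. The snippet below is not coMap; it merely groups raw GPS fixes into coarse grid cells and assigns each cell a rough road category using assumed speed thresholds and fleet mix.

```python
# Illustrative sketch only; cell size, thresholds, and rules are assumptions.
from collections import defaultdict

CELL = 0.0005   # grid cell size in degrees, roughly 50 m of latitude (assumed)

def sketch(gps_fixes):
    """Group raw GPS fixes into coarse grid cells ("sketching" candidate road cells).
    gps_fixes: iterable of (lat, lon, speed_mps, fleet) tuples."""
    cells = defaultdict(list)
    for lat, lon, speed, fleet in gps_fixes:
        cells[(round(lat / CELL), round(lon / CELL))].append((speed, fleet))
    return cells

def paint(cells):
    """Assign each sketched cell a coarse road category ("painting") from simple cues."""
    labels = {}
    for cell, obs in cells.items():
        avg_speed = sum(s for s, _ in obs) / len(obs)
        fleets = {f for _, f in obs}
        if avg_speed > 20:                 # ~72 km/h average: likely a highway
            labels[cell] = "highway"
        elif "bus" in fleets:              # bus traffic suggests an arterial road
            labels[cell] = "arterial"
        else:
            labels[cell] = "local"
    return labels

road_map = paint(sketch([(22.5431, 114.0579, 8.0, "taxi"), (22.5436, 114.0581, 25.0, "truck")]))
```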