This content will become publicly available on October 23, 2026

Title: Mixed Signals: A Diverse Point Cloud Dataset for Heterogeneous LiDAR V2X Collaboration
Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. However, existing V2X datasets are limited in scope, diversity, and quality. To address these gaps, we present Mixed Signals, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from three connected autonomous vehicles (CAVs) equipped with two different configurations of LiDAR sensors, plus a roadside unit with dual LiDARs. Our dataset provides point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training. We provide detailed statistical analysis on the quality of our dataset and extensively benchmark existing V2X methods on it. Mixed Signals is ready-to-use, with precise alignment and consistent annotations across time and viewpoints. We hope our work advances research in the emerging, impactful field of V2X perception. Dataset details at https://mixedsignalsdataset.cs.cornell.edu/.
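As a rough sketch of how a multi-agent V2X dataset of this shape might be consumed, the snippet below loads one frame's point clouds from every agent together with the frame's shared 3D boxes. The directory layout, file names, and annotation keys here are illustrative assumptions, not the published Mixed Signals format.

import json
from pathlib import Path

import numpy as np

DATA_ROOT = Path("mixed_signals")            # assumed directory layout
AGENTS = ["cav_0", "cav_1", "cav_2", "rsu"]  # three CAVs plus the roadside unit

def load_frame(frame_id: str):
    """Gather every agent's point cloud and the frame's 3D boxes."""
    clouds = {}
    for agent in AGENTS:
        # Assume one (N, 4) float32 array per agent: x, y, z, intensity.
        clouds[agent] = np.fromfile(
            DATA_ROOT / agent / f"{frame_id}.bin", dtype=np.float32
        ).reshape(-1, 4)
    # Assume world-frame boxes shared across viewpoints:
    # [x, y, z, l, w, h, yaw] plus one of the 10 class labels.
    boxes = json.loads((DATA_ROOT / "labels" / f"{frame_id}.json").read_text())
    return clouds, boxes

clouds, boxes = load_frame("000042")
for agent, points in clouds.items():
    print(agent, points.shape)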
Award ID(s):
2107077
PAR ID:
10639194
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Format(s):
Medium: X
Location:
International Conference on Computer Vision, Honolulu, Hawai'i, USA
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Emerging autonomous driving systems require reliable perception of 3D surroundings. Unfortunately, current mainstream perception modalities, i.e., camera and LiDAR, are vulnerable under challenging lighting and weather conditions. On the other hand, despite their all-weather operation, today's vehicle Radars are limited to location and speed detection. In this paper, we introduce MILLIPOINT, a practical system that advances the Radar sensing capability to generate 3D point clouds. The key design principle of MILLIPOINT lies in enabling synthetic aperture radar (SAR) imaging on low-cost commodity vehicle Radars. To this end, MILLIPOINT models the relation between signal variations and Radar movement, and enables self-tracking of the Radar at wavelength-scale precision, thus realizing coherent spatial sampling. Furthermore, MILLIPOINT solves the unique problem of specular reflection by properly focusing on the targets with post-imaging processing. It also exploits the Radar's built-in antenna array to estimate the height of reflecting points, eventually generating 3D point clouds. We have implemented MILLIPOINT on a commodity vehicle Radar. Our evaluation results show that MILLIPOINT effectively combats motion errors and specular reflections, and can construct 3D point clouds with much higher density and resolution compared with existing vehicle Radar solutions.
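    The coherent-spatial-sampling idea can be pictured with textbook SAR back-projection: complex returns collected at many known radar positions are phase-compensated so that they add constructively at true reflector locations and cancel elsewhere. The sketch below is a generic simplification (assuming narrowband complex samples and wavelength-accurate positions), not MILLIPOINT's actual pipeline.

    import numpy as np

    WAVELENGTH = 0.004  # roughly a 77 GHz automotive radar, in meters (assumed)

    def backproject(samples, radar_positions, pixel_grid):
        """samples: (M,) complex returns, one per radar position in (M, 3).
        pixel_grid: (P, 3) candidate scene points. Returns (P,) intensities."""
        image = np.zeros(len(pixel_grid), dtype=complex)
        for s, pos in zip(samples, radar_positions):
            # Two-way path length from this radar pose to every pixel.
            r = np.linalg.norm(pixel_grid - pos, axis=1)
            # Undo the round-trip propagation phase so returns add coherently
            # at the true reflector location and cancel elsewhere.
            image += s * np.exp(1j * 4.0 * np.pi * r / WAVELENGTH)
        return np.abs(image)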
  2. Collaborative perception enables autonomous driving vehicles to share sensing or perception data via broadcast-based vehicle-to-everything (V2X) communication technologies such as Cellular-V2X (C-V2X), with the aim of achieving accurate perception even when each individual vehicle's own perception results are inaccurate. Nevertheless, the V2X communication channel remains a significant bottleneck to the performance and usefulness of collaborative perception due to limited bandwidth and ad hoc communication scheduling. In this paper, we explore challenges and design choices for V2X-based collaborative perception, and propose an architecture that leverages the power of edge computing, such as roadside units, for centralized communication scheduling. Using NS-3 simulations, we show the performance gap between distributed and centralized C-V2X scheduling in terms of achievable throughput and communication efficiency, and explore scenarios where edge assistance is beneficial or even necessary for collaborative perception.
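    The scheduling gap can be sketched in toy form: a central scheduler at the edge hands out non-overlapping slots, while ad hoc distributed selection can collide. The slot model and scale below are illustrative assumptions, not the paper's NS-3 configuration.

    import random
    from itertools import cycle

    def central_schedule(vehicle_ids, n_slots):
        # Centralized: the edge node assigns each vehicle its own slot,
        # wrapping round-robin, so no two picks collide within a frame.
        slots = cycle(range(n_slots))
        return {vid: next(slots) for vid in vehicle_ids}

    def distributed_schedule(vehicle_ids, n_slots):
        # Ad hoc: each vehicle picks independently; collisions are possible.
        return {vid: random.randrange(n_slots) for vid in vehicle_ids}

    vehicles = [f"veh_{i}" for i in range(8)]
    central = central_schedule(vehicles, n_slots=10)
    adhoc = distributed_schedule(vehicles, n_slots=10)
    collisions = len(vehicles) - len(set(adhoc.values()))
    print(f"centralized collisions: 0, ad hoc collisions: {collisions}")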
  3. Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, typically collected at specific locations and under favorable weather conditions. Yet, to achieve the high safety requirement, these perceptual systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS to establish correspondence across routes. The dataset also includes road and object annotations using amodal masks to capture partial occlusions and 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
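    One thing repeated routes with high-precision GPS/INS make straightforward is cross-traversal correspondence: for each frame of one drive, look up the spatially nearest frame of another drive. The sketch below assumes per-frame planar positions have already been extracted; the dataset's actual format and tooling may differ.

    import numpy as np
    from scipy.spatial import cKDTree

    def match_traversals(poses_a, poses_b):
        """poses_*: (N, 2) easting/northing per frame from GPS/INS.
        Returns, for each frame of drive A, the index of the nearest
        frame of drive B and the distance between them in meters."""
        tree = cKDTree(poses_b)
        dists, idx = tree.query(poses_a)
        return idx, dists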
  4. Martín-Sacristán, David; Garcia-Roger, David (Eds.)
    With the recent deployment of 5G communication technology, Cellular Vehicle-to-Everything (C-V2X) significantly enhances road safety by enabling real-time exchange of critical traffic information among vehicles, pedestrians, infrastructure, and networks. However, further research is required to address the latency and communication-reliability challenges of real-time applications. This paper explores integrating cutting-edge C-V2X technology with environmental perception systems to enhance safety at intersections and crosswalks. We propose a multi-module architecture combining C-V2X with state-of-the-art perception technologies, GPS mapping methods, and a client–server module to develop a cooperative perception system for collision avoidance. The proposed system includes the following: (1) a hardware setup for C-V2X communication; (2) an advanced object detection module leveraging Deep Neural Networks (DNNs); (3) a client–server-based cooperative object detection framework to overcome the computational limitations of edge computing devices; and (4) a module for mapping the GPS coordinates of detected objects, enabling accurate and actionable GPS data for collision avoidance, even for detected objects not equipped with C-V2X devices. The proposed system was evaluated through real-time experiments at the GMMRC testing track at Kettering University. Results demonstrate that the system enhances safety by broadcasting critical obstacle information with an average latency of 9.24 milliseconds, allowing for rapid situational awareness. Furthermore, the system accurately provides GPS coordinates for detected obstacles, which is essential for effective collision avoidance. The integration of these technologies offers high data rates, low latency, and reliable communication, key features that make the system highly suitable for C-V2X-based applications.
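    The GPS-mapping step can be approximated as follows: given the sensing vehicle's own fix and heading plus an obstacle's range and bearing from the perception stack, compute an absolute position for broadcast. This flat-earth sketch is an assumption-laden stand-in, not the paper's exact method.

    import math

    EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

    def object_gps(ego_lat, ego_lon, ego_heading_deg, range_m, bearing_deg):
        """bearing_deg is measured clockwise from the vehicle's heading."""
        azimuth = math.radians(ego_heading_deg + bearing_deg)
        d_north = range_m * math.cos(azimuth)  # meters north of ego
        d_east = range_m * math.sin(azimuth)   # meters east of ego
        lat = ego_lat + math.degrees(d_north / EARTH_RADIUS_M)
        lon = ego_lon + math.degrees(
            d_east / (EARTH_RADIUS_M * math.cos(math.radians(ego_lat)))
        )
        return lat, lon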
  5. Cross-modal vehicle localization is an important task for automated driving systems. This research proposes a novel approach based on LiDAR point clouds and OpenStreetMap (OSM) via a constrained particle filter, which significantly improves vehicle localization accuracy. The OSM modality provides not only a platform for generating simulated point cloud images, but also geometric constraints (e.g., roads) that improve the particle filter's final result. The proposed approach is deterministic, with no learning component or need for labelled data. Evaluated on the KITTI dataset, it achieves accurate vehicle pose tracking with a mean position error of less than 3 m across all sequences. This method shows state-of-the-art accuracy compared with existing methods based on OSM or satellite maps.
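    The road constraint can be pictured as an extra weighting step in the particle filter update: particles far from any OSM road lose their weight, and particles near a centerline are preferred. The distance model and thresholds below are placeholder assumptions, not the paper's tuned formulation.

    import numpy as np

    def constrained_reweight(weights, dist_to_road, max_offroad_m=5.0, sigma_m=2.0):
        """weights: (N,) particle weights; dist_to_road: (N,) meters from
        each particle to the nearest OSM road centerline."""
        # Hard constraint: zero out particles that have left the road network.
        weights = np.where(dist_to_road > max_offroad_m, 0.0, weights)
        # Soft constraint: prefer particles close to a centerline.
        weights = weights * np.exp(-0.5 * (dist_to_road / sigma_m) ** 2)
        total = weights.sum()
        if total == 0.0:
            # Every particle violated the constraint; fall back to uniform.
            return np.full(weights.shape, 1.0 / len(weights))
        return weights / total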