<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Cyber Mobility Mirror: A Deep Learning-Based Real-World Object Perception Platform Using Roadside LiDAR</dc:title><dc:creator>Bai, Zhengwei; Nayak, Saswat P; Zhao, Xuanpeng; Wu, Guoyuan; Barth, Matthew J; Qi, Xuewei; Liu, Yongkang; Sisbot, Emrah Akin; Oguchi, Kentaro</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Object perception plays a fundamental role in
Cooperative Driving Automation (CDA) which is regarded as
a revolutionary promoter for next-generation transportation
systems. However, vehicle-based perception may suffer from
limited sensing range, occlusion, and low connectivity
penetration rates. In this paper, we propose Cyber
Mobility Mirror (CMM), a next-generation real-world object
perception system for 3D object detection, tracking, localization,
and reconstruction, to explore the potential of roadside sensors
for enabling CDA in the real world. The CMM system consists
of six main components: i) the data pre-processor to retrieve
and preprocess the raw data; ii) the roadside 3D object detector
to generate 3D detection results; iii) the multi-object tracker
to identify detected objects; iv) the global locator to generate
geo-localization information; v) the mobile-edge-cloud-based
communicator to transmit perception information to equipped
vehicles; and vi) the onboard advisor to reconstruct and
display the real-time traffic conditions. An automatic perception
evaluation approach is proposed to support the assessment of
data-driven models without human-labeling requirements, and
a CMM field-operational system is deployed at a real-world
intersection to assess the performance of the CMM. Results
from field tests demonstrate that our CMM prototype system
can achieve 96.99% precision and 83.62% recall for detection
and 73.55% ID-recall for tracking. High-fidelity real-time traffic
conditions (at the object level) can be geo-localized with a
root-mean-square error (RMSE) of 0.69 m and 0.33 m in the
lateral and longitudinal directions, respectively, and displayed on the
GUI of the equipped vehicle at a frequency of 3–4 Hz.</dc:description><dc:publisher>IEEE</dc:publisher><dc:date>2023-09-01</dc:date><dc:nsf_par_id>10511095</dc:nsf_par_id><dc:journal_name>IEEE Transactions on Intelligent Transportation Systems</dc:journal_name><dc:journal_volume>24</dc:journal_volume><dc:journal_issue>9</dc:journal_issue><dc:page_range_or_elocation>9476 to 9489</dc:page_range_or_elocation><dc:issn>1524-9050</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1109/TITS.2023.3268281</dc:doi><dcq:identifierAwardId>2152258</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>