Data-driven driving safety assessment is crucial for understanding traffic accidents caused by dangerous driving behaviors. Quantifying driving safety through well-defined metrics on real-world naturalistic driving data is likewise an important step toward the operational safety assessment of automated vehicles (AVs). However, the lack of flexible data acquisition methods and fine-grained datasets has hindered progress in this critical area. In response, we propose a novel dataset for driving safety metrics analysis specifically tailored to car-following situations. Leveraging state-of-the-art Artificial Intelligence (AI) technology, we employ drones to capture high-resolution video data at 12 traffic scenes in the Phoenix metropolitan area. We then develop advanced computer vision algorithms and semantically annotated maps to extract precise vehicle trajectories and leader-follower relations among vehicles. These components, together with a set of metrics defined in our prior work on Operational Safety Assessment (OSA) by the Institute of Automated Mobility (IAM), allow us to conduct a detailed analysis of driving safety. Our results reveal the distribution of these metrics under various real-world car-following scenarios and characterize the impact of different parameters and thresholds in the metrics. By enabling a data-driven approach to driving safety in car-following scenarios, our work can empower traffic operators and policymakers to make informed decisions and contribute to a safer, more efficient road transportation system.
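The abstract above does not list the individual OSA metrics, but two widely used car-following safety metrics are time headway and time-to-collision (TTC). As a minimal sketch, assuming the standard definitions of these metrics rather than the exact IAM formulations:

```python
def time_headway(gap_m, follower_speed_mps):
    """Time headway: bumper-to-bumper gap divided by the follower's speed."""
    if follower_speed_mps <= 0:
        return float("inf")
    return gap_m / follower_speed_mps

def time_to_collision(gap_m, follower_speed_mps, leader_speed_mps):
    """TTC: time until contact if both vehicles hold their current speeds.
    Defined only while the follower is closing the gap."""
    closing_speed = follower_speed_mps - leader_speed_mps
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed
```

Thresholding such metrics (e.g., flagging TTC below a few seconds) is one common way a trajectory dataset like this can be turned into a distribution of safety-critical events.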
Deep learning serves traffic safety analysis: A forward‐looking review
Abstract: This paper explores deep learning (DL) methods that are used, or have the potential to be used, for traffic video analysis, emphasising driving safety for both autonomous vehicles and human-operated vehicles. A typical processing pipeline is presented, which can be used to understand and interpret traffic videos by extracting operational safety metrics and providing general hints and guidelines to improve traffic safety. This processing framework comprises several steps: video enhancement, video stabilisation, semantic and incident segmentation, object detection and classification, trajectory extraction, speed estimation, event analysis, modelling, and anomaly detection. The main goal is to guide traffic analysts in developing their own custom-built processing frameworks by selecting the best choice for each step, and to offer new designs for the missing modules through a comparative analysis of the most successful conventional and DL-based algorithms proposed for each step. Existing open-source tools and public datasets that can help train DL models are also reviewed. To be more specific, exemplary traffic problems are reviewed and the required steps are outlined for each problem. Besides, connections to the closely related research areas of drivers' cognition evaluation, crowd-sourcing-based monitoring systems, edge computing in roadside infrastructure, and automated-driving-system-equipped vehicles are investigated, and the missing gaps are highlighted. Finally, commercial implementations of traffic monitoring systems, their future outlook, and the open problems and remaining challenges for widespread use of such systems are reviewed.
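Two adjacent steps in the pipeline above, trajectory extraction and speed estimation, are often linked by a ground-plane homography: tracked pixel positions are mapped to metric coordinates, and speed follows by finite differences. A minimal sketch, assuming a pre-calibrated 3x3 homography `H` (the calibration itself is outside this snippet):

```python
import numpy as np

def pixel_to_ground(H, pts_px):
    """Map Nx2 pixel points to ground-plane coordinates (metres) via
    a 3x3 homography H, using homogeneous coordinates."""
    pts = np.hstack([pts_px, np.ones((len(pts_px), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def speeds_mps(H, traj_px, fps):
    """Per-step speed (m/s) of one tracked object from its pixel trajectory."""
    ground = pixel_to_ground(H, np.asarray(traj_px, dtype=float))
    step_m = np.linalg.norm(np.diff(ground, axis=0), axis=1)  # metres per frame
    return step_m * fps
```

For example, with a homography that scales pixels by 0.1 m each and a 10 fps video, a track moving 10 pixels per frame corresponds to 10 m/s.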
- PAR ID: 10570688
- Publisher / Repository: DOI prefix 10.1049
- Journal Name: IET Intelligent Transport Systems
- Volume: 17
- Issue: 1
- ISSN: 1751-956X
- Pages: 22-71
- Sponsoring Org: National Science Foundation
More Like this
Cities, already densely populated areas whose land use spans housing, transportation, sanitation, utilities, and communication, continue to grow. New types of road users are emerging as further technological developments arrive. As city populations escalate and roads grow congested, government agencies such as the Department of Transportation (DOT), through the National Highway Traffic Safety Administration (NHTSA), are under pressure to modernize their management systems with efficient new technologies. The challenge is to anticipate never-before-seen problems in the effort to save lives and implement sustainable, cost-effective management systems. Complicating matters further, self-driving cars will in the near future be authorized in crowded major cities where roads are shared among pedestrians, cyclists, cars, and trucks; road sizes and traffic signaling will need to be adapted continually. Counting and classifying turning vehicles and pedestrians at an intersection is an exhausting task, and despite the use of traffic monitoring systems, heavy human involvement is still required for counting. Our approach to counting turning vehicles at intersections is less invasive and requires no road dig-up or costly installation. Live or recorded video from cameras already installed throughout a city can be used, as can any other camera, including cellphones. Our system is based on neural networks and deep learning for object detection, together with computer vision techniques and several supporting algorithms. It works on still images, recorded videos, and real-time live video, and detects, classifies, and tracks moving objects while computing their velocity and direction using a convolutional neural network. Built on series of algorithms modeled after the human brain, our system runs on NVIDIA GPUs using CUDA, OpenCV, and vector mathematics.
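Once objects are detected and tracked, counting turning movements typically reduces to testing whether each track crosses a virtual line drawn across a lane or approach. A minimal sketch of this counting step, assuming tracks are given as centroid sequences (the detector and tracker are outside this snippet):

```python
def side_of_line(p, a, b):
    """Signed side of point p relative to the directed line a->b
    (2D cross product; sign flips when the point crosses the line)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_crossings(tracks, a, b):
    """Count tracks whose centroid trajectory crosses the virtual line a->b.
    tracks: dict mapping track_id -> list of (x, y) centroids over time."""
    count = 0
    for centroids in tracks.values():
        sides = [side_of_line(p, a, b) for p in centroids]
        if any(s1 * s2 < 0 for s1, s2 in zip(sides, sides[1:])):
            count += 1
    return count
```

Placing one such line per turning movement (left, through, right) on each approach yields a full turning-movement count without any roadway installation.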
Sharing and joint processing of camera feeds and sensor measurements, known as Cooperative Perception (CP), has emerged as a technique for achieving higher perception quality. CP can enhance the safety of Autonomous Vehicles (AVs) when their individual visual perception is compromised by adverse weather (e.g., haze or fog), low illumination, winding roads, or crowded traffic. While previous CP methods have succeeded in elevating perception quality, they often assume perfect communication conditions and unlimited transmission resources for sharing camera feeds, which may not hold in real-world scenarios; nor do they attempt to select better helpers when multiple options are available. To address the limitations of prior methods, in this paper we propose a novel approach to realize optimized CP under constrained communications. At the core of our approach is recruiting the best helper from the available front vehicles to augment the visual range and enhance the Object Detection (OD) accuracy of the ego vehicle. In this two-step process, we first select the helper vehicles that contribute the most to CP based on their visual range and lowest motion blur. Next, we run a radio block optimization among the candidate vehicles to further improve communication efficiency. We focus on pedestrian detection as an exemplary scenario. To validate our approach, we used the CARLA simulator to create a dataset of annotated videos for driving scenarios in which pedestrian detection is challenging for an AV with compromised vision. Our results demonstrate the efficacy of the two-step optimization in improving the overall performance of cooperative perception in challenging scenarios, substantially improving driving safety under adverse conditions. Finally, we note that the networking assumptions are adopted from LTE Release 14 Mode 4 sidelink communication, commonly used for Vehicle-to-Vehicle (V2V) communication.
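The first step of the two-step process, selecting helpers by visual range and motion blur, can be sketched as a simple scoring rule. The linear score and its weights below are illustrative assumptions, not the paper's actual criterion, and the radio block optimization of the second step is omitted:

```python
def select_helper(candidates, w_range=1.0, w_blur=1.0):
    """Pick the front vehicle that maximizes visual range while penalizing
    motion blur. candidates: list of dicts with keys
    'id', 'visual_range_m', and 'motion_blur' (blur metric, lower is better)."""
    if not candidates:
        return None
    best = max(
        candidates,
        key=lambda c: w_range * c["visual_range_m"] - w_blur * c["motion_blur"],
    )
    return best["id"]
```

The chosen helper's camera feed would then be shared with the ego vehicle subject to whatever radio resources the second optimization step allocates.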
Autonomous vehicles are predicted to dominate the transportation industry in the foreseeable future. Safety is one of the major challenges to the early deployment of self-driving systems. To ensure safety, self-driving vehicles must sense and detect humans, other vehicles, and road infrastructure accurately, robustly, and in a timely manner. However, the existing sensing techniques used by self-driving vehicles may not be absolutely reliable. In this paper, we design REITS, a system to improve the reliability of RF-based sensing modules for autonomous vehicles. We conduct a theoretical analysis of possible failures of existing RF-based sensing systems. Based on this analysis, REITS adopts a multi-antenna design that enables constructive blind beamforming to return an enhanced radar signal in the incident direction. REITS can also let the existing radar system sense identification information by switching between a constructive beamforming state and a destructive beamforming state. Preliminary results show that REITS improves the detection distance of a self-driving car radar by a factor of 3.63.
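The constructive blind beamforming idea can be illustrated with a retrodirective (conjugate-phase) array: each element retransmits the phase conjugate of what it received, so the returned signal adds coherently back toward the incident direction without the array knowing that direction. This is a generic uniform-linear-array sketch, not the REITS design itself:

```python
import numpy as np

def steering(n, d_over_lambda, theta_rad):
    """Phase response of an n-element uniform linear array (element
    spacing d, wavelength lambda) for a plane wave from angle theta."""
    k = 2.0 * np.pi * d_over_lambda * np.arange(n)
    return np.exp(1j * k * np.sin(theta_rad))

def retrodirective_gain(n, d_over_lambda, theta_in, theta_out):
    """Array gain toward theta_out when each element retransmits the
    conjugate of its received phase from theta_in (blind: theta_in is
    never estimated, only its per-element phase is reused)."""
    weights = np.conj(steering(n, d_over_lambda, theta_in))
    return float(abs(weights @ steering(n, d_over_lambda, theta_out)))
```

The gain back toward the incident angle equals the element count `n` (fully coherent sum), while other directions sum incoherently, which is why the radar sees an enhanced return.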
Camera-based systems are increasingly used for collecting information at intersections and on arterials. Unlike loop detectors, which can generally only sense the presence and movement of vehicles, cameras can provide rich information about traffic behavior. Vision-based frameworks for multiple-object detection, object tracking, and near-miss detection have been developed to derive this information. However, much of that work addresses processing videos offline. In this article, we propose an integrated two-stream convolutional network architecture that performs real-time detection, tracking, and near-accident detection of road users in traffic video data. The two-stream model consists of a spatial stream network for object detection and a temporal stream network that leverages motion features for multiple-object tracking. We detect near-accidents by incorporating appearance features and motion features from these two networks. Further, we demonstrate that our approaches can be executed in real time, at a frame rate higher than the video frame rate, on a variety of videos collected from fisheye and overhead cameras.
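Downstream of detection and tracking, a common near-accident test combines the two streams' outputs: object positions (appearance stream) and velocities (motion stream) are extrapolated, and an event is flagged when two road users are predicted to pass dangerously close. The constant-velocity model and all thresholds below are illustrative assumptions, not the paper's trained criterion:

```python
def near_miss(p1, v1, p2, v2, horizon_s=2.0, dt=0.1, thresh_m=1.5):
    """Flag a potential near-accident: extrapolate two tracked objects
    (position p, velocity v, both 2D tuples in metres and m/s) under a
    constant-velocity model and test whether they come within thresh_m
    of each other inside the prediction horizon."""
    steps = int(horizon_s / dt) + 1
    for i in range(steps):
        t = i * dt
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if (dx * dx + dy * dy) ** 0.5 < thresh_m:
            return True
    return False
```

Running this check over every pair of active tracks per frame keeps the test cheap enough for the real-time setting the article targets.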
