Traffic congestion hits most big cities in the world, threatening long delays and serious reductions in air quality. City and local government officials continue to face challenges in optimizing crowd flow, synchronizing traffic, and mitigating threats or dangerous situations. One of the major challenges faced by city planners and traffic engineers is developing a robust traffic controller that eliminates congestion and imbalanced traffic flow at intersections. Ensuring that traffic moves smoothly and minimizing waiting time at intersections requires automated vehicle detection techniques that control traffic lights automatically, which remain challenging problems. In this paper, we propose an intelligent traffic pattern collection and analysis model, named TPCAM, that uses traffic cameras to help smooth vehicular movement at junctions and reduce traffic congestion. Our traffic detection and pattern analysis model detects and calculates the flux of vehicles and pedestrians at intersections in real time. Our system can capture all the traffic flows at an intersection with a single camera instead of multiple cameras, which reduces the infrastructure requirement and eases deployment. We propose a new deep learning model based on YOLOv2 and adapt it to traffic detection scenarios. To reduce network load and avoid deploying a network backbone at the intersections, we process the traffic video data at the network edge instead of transmitting it back to the cloud. To improve the processing frame rate at the edge, we further propose a deep object tracking algorithm that leverages adaptive multi-modal models and is robust to object occlusions and varying lighting conditions. By combining deep-learning-based detection and tracking, we achieve pseudo-30 FPS via adaptive key frame selection.
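The pseudo-30 FPS figure comes from running the heavy detector only on selected key frames and propagating boxes with a lighter tracker in between. Below is a minimal sketch of such an adaptive key-frame loop; `run_detector`, `run_tracker`, and the confidence threshold are illustrative placeholders rather than TPCAM's actual implementation.

```python
# Sketch of adaptive key-frame scheduling: run the expensive detector only on
# key frames and track boxes cheaply in between (assumed interfaces, not TPCAM's code).

def process_stream(frames, run_detector, run_tracker,
                   min_interval=1, max_interval=10, conf_thresh=0.6):
    """Yield per-frame boxes while adapting how often the detector runs."""
    interval = min_interval          # frames between detector runs
    since_detection = 0
    tracks = []

    for frame in frames:
        if since_detection >= interval or not tracks:
            tracks = run_detector(frame)                     # slow, accurate (YOLO-style)
            since_detection = 0
        else:
            tracks, confidence = run_tracker(frame, tracks)  # fast propagation
            since_detection += 1
            # Widen the key-frame gap while tracking is confident,
            # shrink it when confidence drops (occlusion, lighting change).
            if confidence >= conf_thresh:
                interval = min(interval + 1, max_interval)
            else:
                interval = min_interval
        yield tracks
```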
Machine-Learning-Based Real-Time Multi-Camera Vehicle Tracking and Travel-Time Estimation
Travel-time estimation of traffic flow is an important problem with critical implications for traffic congestion analysis. We developed techniques for using intersection videos to identify vehicle trajectories across multiple cameras and analyze corridor travel time. Our approach consists of (1) multi-object single-camera tracking, (2) vehicle re-identification among different cameras, (3) multi-object multi-camera tracking, and (4) travel-time estimation. We evaluated the proposed framework on real intersections in Florida with pan and fisheye cameras. The experimental results demonstrate the viability and effectiveness of our method.
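Step (4) reduces to aggregating, for each globally re-identified vehicle, the time gap between its appearances at the upstream and downstream intersections. A minimal sketch under an assumed track schema (not the paper's code):

```python
# Corridor travel-time estimation from re-identified tracks (assumed schema:
# each dict maps a global vehicle ID to a timestamp in seconds at that camera).
from statistics import median

def corridor_travel_times(exit_times_cam_a, entry_times_cam_b):
    """Return per-vehicle travel times and their median for the corridor A -> B."""
    per_vehicle = {}
    for vid, t_a in exit_times_cam_a.items():
        t_b = entry_times_cam_b.get(vid)
        if t_b is not None and t_b > t_a:
            per_vehicle[vid] = t_b - t_a
    summary = median(per_vehicle.values()) if per_vehicle else None
    return per_vehicle, summary

# Example: vehicle 7 was re-identified at both intersections 42 s apart.
times, med = corridor_travel_times({7: 100.0, 9: 130.0}, {7: 142.0})
print(times, med)   # {7: 42.0} 42.0
```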
- Award ID(s): 1922782
- NSF-PAR ID: 10332841
- Date Published:
- Journal Name: Journal of Imaging
- Volume: 8
- Issue: 4
- ISSN: 2313-433X
- Page Range / eLocation ID: 101
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Counting multi-vehicle motions via traffic cameras in urban areas is crucial for smart cities. Although several frameworks have been proposed for this task, no prior work focuses on highly common, dense, and size-variant vehicles such as motorcycles. In this paper, we propose a novel framework for vehicle motion counting with adaptive, label-independent tracking and counting modules that processes 12 frames per second. Our framework adapts hyperparameters for multi-vehicle tracking, works reliably in complex traffic conditions, and is largely invariant to camera perspective. We achieved competitive results in terms of root-mean-square error and runtime performance. (A movement-counting sketch follows this list.)
- Camera-based systems are increasingly used for collecting information on intersections and arterials. Unlike loop controllers, which can generally only be used to detect the presence and movement of vehicles, cameras can provide rich information about traffic behavior. Vision-based frameworks for multiple-object detection, object tracking, and near-miss detection have been developed to derive this information. However, much of this work currently addresses offline video processing. In this article, we propose an integrated two-stream convolutional network architecture that performs real-time detection, tracking, and near-accident detection of road users in traffic video data. The two-stream model consists of a spatial stream network for object detection and a temporal stream network that leverages motion features for multiple-object tracking. We detect near-accidents by incorporating appearance features and motion features from these two networks. Further, we demonstrate that our approaches can be executed in real time, at a frame rate higher than the video frame rate, on a variety of videos collected from fisheye and overhead cameras. (A two-stream fusion sketch follows this list.)
- Vehicle tracking, a core application of smart-city video analytics, is being deployed more widely than ever before thanks to the increasing number of traffic cameras and recent advances in computer vision and machine learning. Due to bandwidth constraints, latency, and privacy concerns, tracking tasks are preferably run on edge devices sitting close to the cameras. However, edge devices are provisioned with a fixed computing budget, which leaves them unable to adapt to the time-varying and imbalanced tracking workloads caused by traffic dynamics. To cope with this challenge, we propose WatchDog, a real-time vehicle tracking system that fully utilizes edge nodes across the road network. WatchDog leverages computer vision tasks with different resource-accuracy tradeoffs, and decomposes and schedules tracking tasks judiciously across edge devices based on the current workload to maximize the number of tasks while ensuring a provable response-time bound at each edge device. Extensive evaluations have been conducted using real-world, city-wide vehicle trajectory datasets, achieving exceptional tracking performance with a real-time guarantee. (A scheduling sketch follows this list.)
- Vehicle flow estimation has many potential smart-city and transportation applications. Many cities have existing camera networks that broadcast image feeds; however, the resolution and frame rate are too low for existing computer vision algorithms to estimate flow accurately. In this work, we present a computer vision and deep learning framework for vehicle tracking. We demonstrate a novel tracking pipeline that enables accurate flow estimates in a range of environments under low-resolution and low-frame-rate constraints. We demonstrate that our system is able to track vehicles in New York City's traffic camera video feeds at frame rates of 1 Hz or lower and produces higher traffic-flow accuracy than popular open-source tracking frameworks. (A low-frame-rate association sketch follows this list.)
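For the motion-counting entry above, the core bookkeeping step is assigning each completed track to a movement (entry zone to exit zone) and incrementing that movement's counter. A minimal sketch under assumed zone and track structures, not the cited framework's code:

```python
from collections import Counter

def count_movements(tracks, zone_of):
    """tracks: iterable of (first_point, last_point) per completed vehicle track.
    zone_of: maps an (x, y) point to an approach label such as "north", or None."""
    counts = Counter()
    for first_pt, last_pt in tracks:
        entry, exit_ = zone_of(first_pt), zone_of(last_pt)
        if entry and exit_ and entry != exit_:
            counts[(entry, exit_)] += 1   # e.g., ("south", "west") is one turning movement
    return counts

# Example with a toy zone function that splits the image down the middle.
zone_of = lambda p: "west" if p[0] < 320 else "east"
print(count_movements([((10, 200), (600, 220))], zone_of))  # Counter({('west', 'east'): 1})
```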
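The two-stream entry above fuses appearance (spatial stream) and motion (temporal stream) evidence to flag near-accidents. A minimal late-fusion sketch, with the weighting and threshold as illustrative assumptions rather than the paper's values:

```python
def near_accident_score(appearance_score, motion_score, w=0.5):
    """Late fusion of the two streams' scores for a pair of interacting road users."""
    return w * appearance_score + (1.0 - w) * motion_score

def flag_near_accidents(candidate_pairs, thresh=0.8):
    """candidate_pairs: iterable of (id_i, id_j, appearance_score, motion_score)."""
    return [(i, j) for i, j, a, m in candidate_pairs
            if near_accident_score(a, m) >= thresh]

# Example: one pair whose fused score crosses the threshold.
print(flag_near_accidents([(3, 5, 0.9, 0.85), (3, 8, 0.4, 0.2)]))  # [(3, 5)]
```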
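The WatchDog entry above schedules tracking tasks across edge devices using models with different resource-accuracy tradeoffs. A greedy, illustrative sketch of that idea (device budgets, variant costs, and the drop policy are assumptions, not WatchDog's actual algorithm):

```python
def schedule_tasks(tasks, devices, variants):
    """Assign each tracking task the most accurate variant that still fits on the
    least-loaded device; drop the task if no variant fits anywhere.
    devices: list of {"id": str, "budget": float}
    variants: list of (accuracy, cost), sorted from most to least accurate."""
    assignment = {}
    for task in tasks:
        placed = False
        for accuracy, cost in variants:
            device = max(devices, key=lambda d: d["budget"])   # least-loaded device
            if device["budget"] >= cost:
                device["budget"] -= cost
                assignment[task] = (device["id"], accuracy)
                placed = True
                break
        if not placed:
            assignment[task] = None   # no capacity left at any accuracy level
    return assignment

# Example: two cameras' tracking tasks over one edge node.
devices = [{"id": "edge-1", "budget": 1.0}]
variants = [(0.9, 0.7), (0.7, 0.3)]   # (accuracy, compute cost)
print(schedule_tasks(["cam-A", "cam-B"], devices, variants))
# {'cam-A': ('edge-1', 0.9), 'cam-B': ('edge-1', 0.7)}
```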
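For the last entry, the difficulty at 1 Hz is that vehicles move far between frames, so frame-to-frame association needs a gate that scales with the frame gap. A greedy nearest-neighbour sketch of that idea (the gate model and pixel speed are assumptions, not the paper's method):

```python
import math

def associate(prev_positions, curr_positions, dt, max_speed_px=120.0):
    """Greedily match previous track positions to current detections.
    The matching gate grows with the time gap dt, which is what keeps
    association workable at 1 Hz or lower frame rates."""
    gate = max_speed_px * dt
    matches, used = [], set()
    for i, p in enumerate(prev_positions):
        best, best_dist = None, gate
        for j, q in enumerate(curr_positions):
            d = math.dist(p, q)
            if j not in used and d < best_dist:
                best, best_dist = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

# Example: one second between frames, one vehicle moved ~100 px.
print(associate([(50, 50)], [(148, 60)], dt=1.0))  # [(0, 0)]
```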