Traffic intersections are prime locations for deploying infrastructure sensors and edge computing nodes to realize the vision of a smart city. It is expected that the needs of a smart city, with regard to camera-monitored vehicle and pedestrian traffic systems, can be met by state-of-the-art artificial-intelligence (AI) based object detectors and trackers. A critical component in designing an effective real-time object detection/tracking pipeline is understanding how object density (i.e., the number of objects in a scene), image resolution, and frame rate influence the performance metrics. This study explores accuracy and speed metrics with the goal of supporting pipelines that meet the precision and latency needs of a real-time environment. We examine the impact of varying image resolution, frame rate, and object density on object detection performance. Experiments on the COSMOS testbed dataset show that increasing the frame width from 416 pixels to 832 pixels, and cropping the images to a square resolution, increases the average precision for all object classes. Decreasing the frame rate from 15 fps to 5 fps preserves more than 90% of the highest F1 score achieved for all object classes. The results inform the choice of video preprocessing stages and modifications to established AI-based object detection/tracking methods, and suggest optimal hyper-parameter values.
Index Terms—Object Detection, Smart City, Video Resolution,
Deep Learning Models.
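To make the resolution and frame-rate sweep concrete, the sketch below shows one plausible way to preprocess a video stream before detection: frames are subsampled from a nominal 15 fps down to a target rate and center-cropped to a square of the studied width (416 or 832 pixels). The `run_detector` callable and the specific parameter values are illustrative assumptions, not the pipeline used in the paper.

```python
# Sketch: frame-rate subsampling and square-crop preprocessing, as studied above.
# `run_detector` is a hypothetical stand-in for any AI detector (e.g., a YOLO-family
# model); only the preprocessing logic is illustrated.
import cv2


def preprocess(frame, side=832):
    """Center-crop to a square and resize to side x side pixels."""
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    square = frame[y0:y0 + s, x0:x0 + s]
    return cv2.resize(square, (side, side))


def detect_at_reduced_rate(video_path, run_detector, source_fps=15, target_fps=5, side=832):
    """Run detection while keeping only every (source_fps / target_fps)-th frame."""
    stride = max(1, round(source_fps / target_fps))
    cap = cv2.VideoCapture(video_path)
    results, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            results.append(run_detector(preprocess(frame, side)))
        idx += 1
    cap.release()
    return results
```

Sweeping `side` over {416, 832} and `target_fps` over {5, 15} with a fixed detector reproduces the kind of accuracy/latency comparison the abstract describes.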
WatchDog: Real-time Vehicle Tracking on Geo-distributed Edge Nodes
Vehicle tracking, a core application in smart city video analytics, is being deployed more widely than ever before, thanks to the growing number of traffic cameras and recent advances in computer vision and machine learning. Due to bandwidth and latency constraints as well as privacy concerns, it is preferable to run tracking tasks on edge devices sitting close to the cameras. However, edge devices are provisioned with a fixed computing budget, which leaves them unable to adapt to the time-varying and imbalanced tracking workloads caused by traffic dynamics. To cope with this challenge, we propose WatchDog, a real-time vehicle tracking system that fully utilizes edge nodes across the road network. WatchDog leverages computer vision tasks with different resource-accuracy tradeoffs, and decomposes and schedules tracking tasks judiciously across edge devices based on the current workload, maximizing the number of tasks while ensuring a provable response-time bound at each edge device. Extensive evaluations have been conducted using real-world, city-wide vehicle trajectory datasets, achieving exceptional tracking performance with a real-time guarantee.
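As a rough illustration of the workload-aware scheduling idea described above, the sketch below greedily places tracking tasks on the edge node with the most remaining compute budget, choosing the most accurate model variant that still fits both the node's budget and a response-time bound. This is a simplified stand-in for WatchDog's scheduler; all class names, latencies, and accuracies are assumed for illustration.

```python
# Minimal sketch (not the authors' algorithm) of workload-aware placement of tracking
# tasks: each task may run with one of several model variants trading accuracy for
# compute, and tasks are assigned greedily under a per-node response-time bound.
from dataclasses import dataclass, field


@dataclass
class Variant:
    name: str
    latency_ms: float   # estimated per-frame processing time
    accuracy: float     # expected tracking accuracy (e.g., MOTA)


@dataclass
class EdgeNode:
    name: str
    budget_ms: float                      # remaining compute budget per frame interval
    assigned: list = field(default_factory=list)


def schedule(tasks, nodes, variants, deadline_ms):
    """Assign each task the most accurate variant that still fits some node's budget."""
    placed = []
    for task in tasks:
        for v in sorted(variants, key=lambda x: -x.accuracy):
            node = max(nodes, key=lambda n: n.budget_ms)   # least-loaded node first
            if v.latency_ms <= node.budget_ms and v.latency_ms <= deadline_ms:
                node.budget_ms -= v.latency_ms
                node.assigned.append((task, v.name))
                placed.append(task)
                break
    return placed


nodes = [EdgeNode("edge-A", budget_ms=60.0), EdgeNode("edge-B", budget_ms=40.0)]
variants = [Variant("full", 30.0, 0.80), Variant("lite", 10.0, 0.65)]
print(schedule(["cam-1", "cam-2", "cam-3", "cam-4"], nodes, variants, deadline_ms=33.0))
```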
- NSF-PAR ID: 10420199
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Internet of Things
- Volume: 4
- Issue: 1
- ISSN: 2691-1914
- Page Range / eLocation ID: 1 to 23
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Travel-time estimation of traffic flow is an important problem with critical implications for traffic congestion analysis. We developed techniques for using intersection videos to identify vehicle trajectories across multiple cameras and analyze corridor travel time. Our approach consists of (1) multi-object single-camera tracking, (2) vehicle re-identification among different cameras, (3) multi-object multi-camera tracking, and (4) travel-time estimation. We evaluated the proposed framework on real intersections in Florida with pan and fisheye cameras. The experimental results demonstrate the viability and effectiveness of our method.
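As a small illustration of stage (4), the sketch below computes corridor travel times from re-identified vehicles: once a vehicle has been matched across the upstream and downstream cameras, its travel time is simply the difference of its two timestamps. The data structures, camera names, and timestamps are assumptions, not the paper's implementation.

```python
# Illustrative sketch of travel-time estimation after cross-camera re-identification.
from statistics import mean


def corridor_travel_times(observations, upstream, downstream):
    """observations: {vehicle_id: {camera_id: timestamp_seconds}} for matched tracks."""
    times = []
    for vid, seen in observations.items():
        if upstream in seen and downstream in seen:
            dt = seen[downstream] - seen[upstream]
            if dt > 0:                       # discard temporally inconsistent matches
                times.append(dt)
    return times


obs = {
    "veh_17": {"cam_A": 100.0, "cam_B": 162.5},
    "veh_23": {"cam_A": 104.0, "cam_B": 171.0},
}
tts = corridor_travel_times(obs, "cam_A", "cam_B")
print(f"mean corridor travel time: {mean(tts):.1f} s")
```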
Traffic congestion hits most big cities in the world, threatening long delays and serious reductions in air quality. City and local government officials continue to face challenges in optimizing crowd flow, synchronizing traffic, and mitigating threats or dangerous situations. One of the major challenges faced by city planners and traffic engineers is developing a robust traffic controller that eliminates traffic congestion and imbalanced traffic flow at intersections. Ensuring that traffic moves smoothly and minimizing waiting time at intersections requires automated vehicle detection techniques for controlling traffic lights automatically, which remain challenging problems. In this paper, we propose an intelligent traffic pattern collection and analysis model, named TPCAM, based on traffic cameras to help smooth vehicular movement at junctions and reduce traffic congestion. Our traffic detection and pattern analysis model aims at detecting and calculating the traffic flux of vehicles and pedestrians at intersections in real time. Our system can utilize one camera to capture all the traffic flows at one intersection instead of multiple cameras, which reduces the infrastructure requirements and eases deployment. We propose a new deep learning model based on YOLOv2 and adapt the model to the traffic detection scenarios. To reduce the network burden and eliminate the deployment of a network backbone at the intersections, we propose to process the traffic video data at the network edge without transmitting the big data back to the cloud. To improve the processing frame rate at the edge, we further propose a deep object tracking algorithm leveraging adaptive multi-modal models that is robust to object occlusions and varying lighting conditions. Based on the deep learning based detection and tracking, we can achieve pseudo-30FPS via adaptive key frame selection.
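The detect-then-track scheduling mentioned above (expensive detection on key frames, lightweight tracking in between, adapting when tracking degrades) can be sketched roughly as follows; `detector` and `tracker` are hypothetical callables rather than the TPCAM models, and the interval-adaptation rule is an illustrative guess.

```python
# Hedged sketch of adaptive key-frame selection: the detector runs only on key frames,
# a cheap tracker propagates boxes in between, and the key-frame interval shrinks
# whenever tracking confidence drops.

def process_stream(frames, detector, tracker, base_interval=10, min_conf=0.5):
    boxes, interval, since_key = [], base_interval, 0
    outputs = []
    for frame in frames:
        if since_key == 0 or not boxes:
            boxes = detector(frame)               # expensive, key frames only
            interval = base_interval
        else:
            boxes, conf = tracker(frame, boxes)   # cheap propagation between key frames
            if conf < min_conf:
                interval = max(1, interval // 2)  # re-detect sooner when tracking degrades
        since_key = (since_key + 1) % interval
        outputs.append(boxes)
    return outputs
```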
The smart parking industry continues to evolve as an increasing number of cities struggle with traffic congestion and inadequate parking availability. For urban dwellers, few things are more irritating than anxiously searching for a parking space. Research results show that as much as 30% of traffic is caused by drivers driving around looking for parking spaces in congested city areas. There has been considerable activity among researchers to develop smart technologies that can help drivers find a parking spot with greater ease, not only reducing traffic congestion but also the subsequent air pollution. Many existing solutions deploy sensors in every parking spot to address the automatic parking spot detection problem. However, the device and deployment costs are very high, especially for large and old parking structures. A wide variety of other technological innovations are beginning to enable more adaptable systems, including license plate number detection, smart parking meters, and vision-based parking spot detection. In this paper, we propose to design a more adaptable and affordable smart parking system via distributed cameras, edge computing, data analytics, and advanced deep learning algorithms. Specifically, we deploy cameras with zoom lenses and motorized heads to capture license plate numbers by tracking vehicles as they enter or leave the parking lot; cameras with wide-angle fish-eye lenses monitor the large parking lot via our custom-designed deep neural network. We further optimize the algorithm and enable real-time deep learning inference on an edge device. Through the intelligent algorithm, we can significantly reduce the cost of existing systems while achieving a more adaptable solution. For example, our system can automatically detect when a car enters the parking space and the location of the parking spot, and precisely charge the parking fee associated with the license plate number.
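The billing step described at the end of the paragraph can be illustrated with a toy sketch: the entry-gate camera records a plate and timestamp, the exit-gate camera records the same plate later, and the fee follows from the elapsed time. The rate, event names, and data structures are assumptions, not the deployed system's logic.

```python
# Toy sketch of plate-based parking billing, assuming entry/exit events are emitted
# by the license-plate-recognition pipeline described above.
from datetime import datetime, timedelta

RATE_PER_HOUR = 2.50      # illustrative tariff
entries = {}              # plate -> entry time, filled by the entry-gate camera


def on_vehicle_enter(plate: str, when: datetime):
    entries[plate] = when


def on_vehicle_exit(plate: str, when: datetime) -> float:
    """Return the fee owed for this plate; 0.0 if the entry event was missed."""
    start = entries.pop(plate, None)
    if start is None:
        return 0.0
    hours = (when - start).total_seconds() / 3600
    return round(RATE_PER_HOUR * max(hours, 0.0), 2)


t0 = datetime(2023, 5, 1, 9, 0)
on_vehicle_enter("ABC-1234", t0)
print(on_vehicle_exit("ABC-1234", t0 + timedelta(hours=2, minutes=30)))  # 6.25
```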
The density and complexity of urban environments present significant challenges for autonomous vehicles. Moreover, ensuring pedestrians' safety and protecting personal privacy are crucial considerations in these environments. Smart city intersections and AI-powered traffic management systems will be essential for addressing these challenges. Therefore, our research focuses on creating an experimental framework for the design of applications that support the secure and efficient management of traffic intersections in urban areas. We integrated two cameras (street-level and bird's eye view), both viewing an intersection, and a programmable edge computing node, deployed within the COSMOS testbed in New York City, with a central management platform provided by Kentyou. We designed a pipeline to collect and analyze the video streams from both cameras and obtain real-time traffic/pedestrian-related information to support smart city applications. The information obtained from both cameras is merged, and the results are sent to a dedicated dashboard for real-time visualization and further assessment (e.g., accident prevention). The process does not require sending the raw videos, in order to avoid violating pedestrians' privacy. In this demo, we present the designed video analytic pipelines and their integration with the Kentyou central management platform. Index Terms—object detection and tracking, camera networks, smart intersection, real-time visualization.
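One way to picture the privacy-preserving reporting step (sending analysis results rather than raw video) is the sketch below, which serializes per-frame detection metadata into a JSON payload that could then be pushed to the dashboard. The payload schema and field names are assumptions for illustration, not the Kentyou API.

```python
# Minimal sketch of metadata-only reporting: aggregated counts and anonymous bounding
# boxes, never raw frames, are serialized for transmission to the dashboard.
import json
import time


def build_report(camera_id, detections):
    """detections: list of (label, confidence, bbox) tuples from one analyzed frame."""
    return {
        "camera": camera_id,
        "timestamp": time.time(),
        "counts": {
            label: sum(1 for l, _, _ in detections if l == label)
            for label, _, _ in detections
        },
        "objects": [
            {"label": l, "conf": round(c, 2), "bbox": list(b)} for l, c, b in detections
        ],
    }


dets = [("pedestrian", 0.91, (34, 60, 80, 190)), ("car", 0.88, (300, 120, 520, 260))]
payload = json.dumps(build_report("street-level", dets))
print(payload)   # in the deployed pipeline this JSON, not video, would be sent onward
```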