Title: A Comprehensive Analysis of Object Detectors in Adverse Weather Conditions
In this paper, we examine the robustness of computer vision object detection frameworks in real-world traffic scenarios, with particular emphasis on adverse weather conditions. Conventional evaluation methods often fail to capture the complexities of dynamic traffic environments, an increasingly vital consideration as autonomous vehicle technologies advance worldwide. We specifically evaluate these algorithms under adverse weather such as fog, rain, snow, and sun flare, which substantially degrade detection precision. In particular, we show that an object detection framework that excels in clear weather may struggle significantly in adverse conditions. Our study includes in-depth ablation studies on dual-modality architectures and explores a range of applications, including traffic monitoring, vehicle tracking, and object tracking. The ultimate goal is to improve the safety and efficiency of transportation systems, recognizing the pivotal role of robust computer vision in future autonomous and intelligent transportation technologies.
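As a minimal sketch (not from the cited paper) of how such a robustness study is often set up, the snippet below synthesizes the four adverse conditions named in the abstract so a detector trained on clear imagery can be re-scored on corrupted copies of a test set. The albumentations library and the transform parameters are assumptions, not the paper's method.

```python
# Hypothetical sketch: simulate adverse weather for detector stress-testing.
# The albumentations library and default transform parameters are assumptions.
import albumentations as A
import cv2

# One synthetic corruption per adverse condition studied in the paper.
WEATHER = {
    "fog": A.RandomFog(p=1.0),
    "rain": A.RandomRain(p=1.0),
    "snow": A.RandomSnow(p=1.0),
    "sun_flare": A.RandomSunFlare(p=1.0),
}

def corrupt(image_bgr, condition: str):
    """Apply one simulated weather effect to a BGR image."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    out = WEATHER[condition](image=rgb)["image"]
    return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)
```

Re-running the same mAP evaluation on each corrupted copy of the test set then yields a per-condition robustness profile for the detector.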
Award ID(s):
2025234
PAR ID:
10521809
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-6929-8
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Location:
Princeton, NJ, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Sharing and joint processing of camera feeds and sensor measurements, known as Cooperative Perception (CP), has emerged as a new technique to achieve higher perception quality. CP can enhance the safety of Autonomous Vehicles (AVs) when their individual visual perception is compromised by adverse weather (such as haze and fog), low illumination, winding roads, and crowded traffic. While previous CP methods have shown success in elevating perception quality, they often assume perfect communication conditions and unlimited transmission resources for sharing camera feeds, assumptions that may not hold in real-world scenarios. They also make no effort to select better helpers when multiple options are available. To address these limitations, we propose a novel approach that realizes optimized CP under constrained communications. At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range and enhance the Object Detection (OD) accuracy of the ego vehicle. In this two-step process, we first select the helper vehicles that contribute the most to CP based on their visual range and lowest motion blur. Next, we implement a radio block optimization among the candidate vehicles to further improve communication efficiency. We focus on pedestrian detection as an exemplary scenario. To validate our approach, we used the CARLA simulator to create a dataset of annotated videos for driving scenarios in which pedestrian detection is challenging for an AV with compromised vision. Our results demonstrate the efficacy of the two-step optimization in improving the overall performance of cooperative perception in challenging scenarios, substantially improving driving safety under adverse conditions. Finally, we note that the networking assumptions are adopted from LTE Release 14 Mode 4 sidelink communication, commonly used for Vehicle-to-Vehicle (V2V) communication.
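A minimal sketch (not the paper's implementation) of the first step, ranking candidate front vehicles by visual range and motion blur; the variance-of-Laplacian blur heuristic, the equal weighting, and the Helper structure are assumptions:

```python
# Hypothetical helper-selection sketch: prefer candidates with a larger
# visual range and sharper (less motion-blurred) frames.
from dataclasses import dataclass

import cv2
import numpy as np

@dataclass
class Helper:
    vehicle_id: str
    frame: np.ndarray      # latest camera frame shared by this front vehicle
    visual_range_m: float  # estimated unoccluded viewing range, in meters

def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian: low values indicate motion blur."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_helper(candidates, w_range=0.5, w_sharp=0.5) -> Helper:
    ranges = [c.visual_range_m for c in candidates]
    sharps = [sharpness(c.frame) for c in candidates]

    def score(i: int) -> float:
        r = ranges[i] / (max(ranges) or 1.0)  # normalize each criterion to [0, 1]
        s = sharps[i] / (max(sharps) or 1.0)
        return w_range * r + w_sharp * s

    return candidates[max(range(len(candidates)), key=score)]
```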
  2. Vehicle flow estimation has many potential applications in smart cities and transportation. Many cities have existing camera networks that broadcast image feeds; however, the resolution and frame rate are too low for existing computer vision algorithms to estimate flow accurately. In this work, we present a computer vision and deep learning framework for vehicle tracking. We demonstrate a novel tracking pipeline that enables accurate flow estimates in a range of environments under low-resolution and low-frame-rate constraints. We show that our system can track vehicles in New York City's traffic camera video feeds at a frame rate of 1 Hz or lower and produces more accurate traffic flow estimates than popular open-source tracking frameworks.
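As a rough, hypothetical sketch of flow counting at such low frame rates (not this paper's pipeline), the snippet below greedily associates detection centroids between consecutive 1 Hz frames and counts crossings of a virtual line; the matching radius, line position, and detection source are all assumptions:

```python
# Hypothetical low-frame-rate flow counter: greedy nearest-centroid
# association between consecutive frames, then virtual-line crossing counts.
import math

def match(prev, curr, max_dist=80.0):
    """Greedily pair previous and current centroids (x, y) within max_dist px."""
    pairs, used = [], set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

def count_crossings(prev, curr, line_y=300.0):
    """Count matched vehicles whose centroid crossed the virtual line."""
    return sum(
        1 for i, j in match(prev, curr)
        if (prev[i][1] - line_y) * (curr[j][1] - line_y) < 0
    )
```

At 1 Hz, vehicles move far between frames, which is why the matching radius must be generous and why naive trackers tuned for 30 fps video tend to fail.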
  3. Object perception plays a fundamental role in Cooperative Driving Automation (CDA), which is regarded as a revolutionary promoter of next-generation transportation systems. However, vehicle-based perception may suffer from limited sensing range, occlusion, and low penetration rates in connectivity. In this paper, we propose Cyber Mobility Mirror (CMM), a next-generation real-world object perception system for 3D object detection, tracking, localization, and reconstruction, to explore the potential of roadside sensors for enabling CDA in the real world. The CMM system consists of six main components: i) a data pre-processor to retrieve and preprocess the raw data; ii) a roadside 3D object detector to generate 3D detection results; iii) a multi-object tracker to identify detected objects; iv) a global locator to generate geo-localization information; v) a mobile-edge-cloud-based communicator to transmit perception information to equipped vehicles; and vi) an onboard advisor to reconstruct and display real-time traffic conditions. An automatic perception evaluation approach is proposed to support the assessment of data-driven models without human labeling, and a CMM field-operational system is deployed at a real-world intersection to assess its performance. Results from field tests demonstrate that our CMM prototype system achieves 96.99% precision and 83.62% recall for detection and 73.55% ID-recall for tracking. High-fidelity real-time traffic conditions (at the object level) can be geo-localized with root-mean-square errors (RMSE) of 0.69 m and 0.33 m in the lateral and longitudinal directions, respectively, and displayed on the GUI of the equipped vehicle at 3-4 Hz.
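A minimal sketch of the localization metric quoted above, splitting geo-localization error into lateral and longitudinal RMSE; the (lateral, longitudinal) pair layout of the inputs is an assumption:

```python
# Hypothetical evaluation helper: per-axis RMSE of object geo-localization.
import numpy as np

def localization_rmse(estimates, ground_truth):
    """estimates, ground_truth: (N, 2) arrays of (lateral_m, longitudinal_m)."""
    est = np.asarray(estimates, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    err = est - gt
    rmse_lat, rmse_lon = np.sqrt((err ** 2).mean(axis=0))
    return rmse_lat, rmse_lon
```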
  4. The paper discusses a machine learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing a GPS signal. The central idea is to automate the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine learning based object detection for long-range ship tracking, and classical computer vision for object detection and for estimating the aircraft's relative position and orientation during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor vertical take-off and landing (VTOL) capable unmanned aerial vehicle (UAV) equipped with an onboard camera and demonstrated on a moving deck that imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
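As a hypothetical illustration of the classical final-approach step (not the paper's implementation), the sketch below recovers the camera-relative position of a planar cue of known geometry from its detected image corners via a PnP solve; the cue dimensions, corner ordering, and camera intrinsics are assumptions:

```python
# Hypothetical relative-pose sketch: solve PnP against a bar-shaped cue
# of known size to get the aircraft's position relative to the cue.
import cv2
import numpy as np

# 3D corner coordinates of the cue in its own frame (meters); assumed geometry.
BAR_POINTS = np.array(
    [[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.05, 0.0], [-0.5, 0.05, 0.0]],
    dtype=np.float64,
)

def relative_position(image_corners, camera_matrix, dist_coeffs=None):
    """Return (x, y, z) of the cue in the camera frame, in meters.

    image_corners: 4 detected pixel corners in the same order as BAR_POINTS.
    camera_matrix: 3x3 intrinsics; dist_coeffs may be None if undistorted.
    """
    ok, rvec, tvec = cv2.solvePnP(
        BAR_POINTS,
        np.asarray(image_corners, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        raise RuntimeError("PnP failed; check corner correspondences")
    return tvec.ravel()
```

The controller would consume this relative position (and the orientation from rvec) as its feedback signal during the landing phase.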
  5. Various computer vision techniques have been proposed for water level detection. However, existing methods face challenges under adverse conditions, including snow, fog, rain, and nighttime. In this paper, we introduce a novel approach that analyzes images for water level detection by incorporating a deblurring process to increase image clarity. By employing the real-time object detector YOLOv5, we show that the proposed approach achieves significantly improved precision during both daytime and nighttime under challenging weather conditions.
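A rough sketch of such a deblur-then-detect pipeline; the paper's deblurring method is not specified here, so a simple unsharp mask stands in, and loading the public ultralytics/yolov5 hub model (network access required) is an assumption:

```python
# Hypothetical pipeline: sharpen the frame, then run YOLOv5 inference.
import cv2
import torch

def unsharp_mask(image_bgr, sigma=3.0, amount=1.5):
    """Sharpen by subtracting a Gaussian-blurred copy (unsharp masking)."""
    blurred = cv2.GaussianBlur(image_bgr, (0, 0), sigma)
    return cv2.addWeighted(image_bgr, 1 + amount, blurred, -amount, 0)

# Public pretrained YOLOv5-small model from the ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect(image_path):
    image = cv2.imread(image_path)
    sharpened = unsharp_mask(image)
    rgb = cv2.cvtColor(sharpened, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB
    results = model(rgb)
    return results.pandas().xyxy[0]  # one DataFrame of boxes/scores/classes
```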