

Title: Aerial images for pile fire detection using drones (UAVs)
Wildfires are among the deadliest and most destructive natural disasters in the world. They burn vast areas of forest and endanger the lives of humans and animals. Predicting fire behavior helps firefighters manage and schedule responses to future incidents and reduces the risks to firefighting crews. Recent advances in aerial imaging show that it can be beneficial in wildfire studies. Among the platforms used to collect aerial imagery, Unmanned Aerial Vehicles (UAVs), or drones, are well suited to gathering information about a fire. This study provides an aerial imagery dataset collected by drones during a prescribed pile fire in Northern Arizona, USA. The dataset consists of several repositories, including raw aerial videos recorded by the drones' cameras and raw heatmap footage recorded by an infrared thermal camera. To help researchers, two well-known tasks, fire classification and fire segmentation, are defined on the dataset. For fire classification with approaches such as Neural Networks (NNs), 39,375 frames are labeled ("Fire" vs. "Non-Fire") for the training phase, and another 8,617 labeled frames are provided as test data. For fire segmentation, 2,003 frames are selected and 2,003 corresponding masks with pixel-wise annotation are generated as ground-truth data.
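The two tasks defined on the dataset map onto standard supervised pipelines. Below is a minimal Python sketch under assumed conventions (the "Fire"/"Non-Fire" folder names, JPEG frames, and PNG masks are illustrative, not the dataset's documented layout): it enumerates labeled frames for binary fire classification and computes a pixel-wise IoU between a predicted segmentation mask and a ground-truth mask.

```python
# Minimal sketch: binary frame labels + pixel-wise IoU for segmentation.
# Folder names, file extensions, and mask encoding are assumptions for
# illustration only; consult the dataset README for the real layout.
from pathlib import Path

import numpy as np
from PIL import Image


def list_labeled_frames(root: str) -> list[tuple[Path, int]]:
    """Collect (frame_path, label) pairs; 1 = Fire, 0 = Non-Fire."""
    samples = []
    for label_name, label in (("Fire", 1), ("Non-Fire", 0)):
        for frame in sorted((Path(root) / label_name).glob("*.jpg")):
            samples.append((frame, label))
    return samples


def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-union of two binary (H, W) fire masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # neither mask marks any fire pixels
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)


if __name__ == "__main__":
    train = list_labeled_frames("Training")  # hypothetical folder name
    print(f"{len(train)} training frames found")
    gt = np.array(Image.open("Masks/frame_0001.png").convert("L")) > 127  # hypothetical mask path
    pred = gt.copy()  # stand-in for a model prediction
    print("IoU:", mask_iou(pred, gt))
```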
Award ID(s):
2204445 2039026
PAR ID:
10497556
Author(s) / Creator(s):
Publisher / Repository:
IEEE DataPort
Date Published:
Subject(s) / Keyword(s):
Artificial Intelligence, Image Processing, Computer Vision, Machine Learning, Geoscience and Remote Sensing, Remote Sensing, Environmental, Fire detection, Fire segmentation, Wildfire dataset, Unmanned Aerial Vehicles, UAVs, Drones
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Current forest monitoring technologies, including satellite remote sensing, manned/piloted aircraft, and observation towers, leave uncertainties about a wildfire's extent, behavior, and conditions in the fire's near environment, particularly during its early growth. Rapid mapping and real-time fire monitoring can inform in-time intervention or management solutions to maximize beneficial fire outcomes. Drone systems' unique features of 3D mobility, low flight altitude, and fast and easy deployment make them a valuable tool for early detection and assessment of wildland fires, especially in remote forests that are not easily accessible by ground vehicles. In addition, the lack of abundant, well-annotated aerial datasets, due in part to unmanned aerial vehicles' (UAVs') flight restrictions during prescribed burns and wildfires, has limited research advances in reliable data-driven fire detection and modeling techniques. While existing wildland fire datasets often include either color or thermal fire images, here we present (1) a multi-modal UAV-collected dataset of dual-feed side-by-side videos including both RGB and thermal images of a prescribed fire in an open canopy pine forest in Northern Arizona and (2) a deep learning-based methodology for detecting fire and smoke pixels with accuracy much higher than is achievable from the usual single-channel video feeds. The collected images are labeled as "fire" or "no-fire" frames by two human experts, who use the side-by-side RGB and thermal images to determine each label. To provide context to the main dataset's aerial imagery, the included supplementary dataset provides a georeferenced pre-burn point cloud, an RGB orthomosaic, weather information, a burn plan, and other burn information. By using and expanding on this guide dataset, researchers can develop new data-driven fire detection, fire segmentation, and fire modeling techniques.
  2. The dataset contains aerial photographs of Arctic sea ice obtained during the Healy-Oden Trans Arctic Expedition (HOTRAX), captured from a helicopter between 5 August and 30 September 2005. A total of 1,013 images were captured, but only 100 were labeled. The subset of 100 images was created exclusively for the purpose of segmenting sea ice, melt ponds, and open water. Original images, labels, and code for segmentation are included in the files above. Dataset: Ivan Sudakow, Vijayan Asari, Ruixu Liu, & Denis Demchev. (2022). Melt pond from aerial photographs of the Healy–Oden Trans Arctic Expedition (HOTRAX) (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6602409. Manuscript: I. Sudakow, V. K. Asari, R. Liu and D. Demchev, "MeltPondNet: A Swin Transformer U-Net for Detection of Melt Ponds on Arctic Sea Ice," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 8776-8784, 2022, doi: 10.1109/JSTARS.2022.3213192.
  3. Drone-based wildfire detection and modeling methods enable high-precision, real-time fire monitoring that is not provided by traditional remote fire monitoring systems such as satellite imaging. Precise, real-time information enables rapid, effective wildfire intervention and management strategies. Drone systems' ease of deployment, omnidirectional maneuverability, and robust sensing capabilities make them effective tools for early wildfire detection and evaluation, particularly in environments that are inconvenient for humans and/or terrestrial vehicles. Development of emerging drone-based fire monitoring systems has been inhibited by a lack of well-annotated, high-quality aerial wildfire datasets, largely as a result of UAV flight regulations for prescribed burns and wildfires. The included dataset provides a collection of side-by-side infrared and visible-spectrum video pairs taken by drones during an open canopy prescribed fire in Northern Arizona in 2021. The frames have been labeled by two independent classifiers using two binary classifications. The Fire label is applied when the classifiers visually observe indications of fire in either the RGB or IR image of a frame pair. The Smoke label is applied when the classifiers visually estimate that at least 50% of the RGB frame is filled with smoke. To provide additional context to the main dataset's aerial imagery, the provided supplementary dataset includes weather information, the prescribed burn plan, a geo-referenced RGB point cloud of the preburn area, an RGB orthomosaic of the preburn area, and links to further information.
  4. Recently, the use of drones for forest fire management has gained considerable attention from the research community due to advantages such as low operation and deployment cost, flexible mobility, and high-quality imaging. Drones also minimize human intervention, especially in hard-to-reach areas where the use of ground-based infrastructure is troublesome. Drones can provide virtual reality to firefighters by collecting on-demand high-resolution images with adjustable zoom, focus, and perspective to improve fire control and eliminate human hazards. In this paper, we propose a novel model for fire expansion as well as a distributed algorithm for drones to relocate themselves toward the frontline of an expanding fire field. The proposed algorithm comprises a light-weight image-processing method for fire edge detection that is highly desirable over computationally expensive deep learning methods for resource-constrained drones. The positioning algorithm includes motions tangential and normal to the fire frontline to follow the fire expansion while keeping minimum pairwise distances for collision avoidance and non-overlapping imaging (a toy geometric sketch of such a tangential/normal update appears after this list). We propose an action-reward mechanism to adjust the drones' speed and processing rate based on the fire expansion rate and the available onboard processing power. Simulation results are provided to support the efficacy of the proposed algorithm.
  5. Semantic segmentation methods are typically designed for RGB color images, which are interpolated from raw Bayer images. While RGB images provide abundant color information and are easily understood by humans, they also add extra storage and computational burden for neural networks. Raw Bayer images, on the other hand, preserve the primitive color information in a single channel, potentially increasing segmentation accuracy while significantly decreasing storage and computation time. In this paper, we propose RawSeg-Net to segment single-channel raw Bayer images directly. Unlike RGB images, whose pixels already incorporate neighboring context through ISP color interpolation, each pixel in a raw Bayer image contains no such context clues. Based on the properties of the Bayer pattern, RawSeg-Net applies dynamic attention over the Bayer images' spectral frequencies and spatial locations to mitigate classification confusion, and adopts a re-sampling strategy to capture both global and local contextual information.
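A common preprocessing step for single-channel raw Bayer inputs like those described in item 5 is to unpack the mosaic into a half-resolution four-channel (R, G, G, B) stack so that each channel has a spatially homogeneous color meaning. The sketch below assumes an RGGB pattern purely for illustration; it is not RawSeg-Net's actual input handling.

```python
# Unpack a single-channel RGGB Bayer mosaic into a 4-channel half-resolution
# stack (R, G1, G2, B). The RGGB layout is an assumption for illustration;
# RawSeg-Net's real preprocessing may differ.
import numpy as np


def unpack_rggb(bayer: np.ndarray) -> np.ndarray:
    """bayer: (H, W) raw mosaic with even H and W. Returns (4, H//2, W//2)."""
    if bayer.ndim != 2 or bayer.shape[0] % 2 or bayer.shape[1] % 2:
        raise ValueError("expected a single-channel image with even height and width")
    r = bayer[0::2, 0::2]   # red sites
    g1 = bayer[0::2, 1::2]  # green sites on red rows
    g2 = bayer[1::2, 0::2]  # green sites on blue rows
    b = bayer[1::2, 1::2]   # blue sites
    return np.stack([r, g1, g2, b], axis=0)


if __name__ == "__main__":
    raw = np.random.randint(0, 4096, size=(8, 12), dtype=np.uint16)  # toy 12-bit mosaic
    packed = unpack_rggb(raw)
    print(packed.shape)  # (4, 4, 6)
```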
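The tangential/normal decomposition mentioned in the drone-positioning abstract (item 4) can be illustrated with a toy update step. This is a generic sketch, not the cited paper's algorithm: the fire frontline is approximated by a polyline, each drone moves tangentially along it, and a normal correction holds a target standoff distance; all names, gains, and distances are invented for illustration.

```python
# Toy illustration of motion tangential and normal to a fire frontline.
# The polyline, gains, and standoff distance are invented for illustration;
# this is not the algorithm from the cited paper.
import numpy as np


def closest_segment_point(p: np.ndarray, front: np.ndarray):
    """Return the closest point on polyline `front` to p and the unit tangent
    of the segment containing that point."""
    best_d, best_q, best_t = np.inf, None, None
    for a, b in zip(front[:-1], front[1:]):
        ab = b - a
        s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + s * ab
        d = np.linalg.norm(p - q)
        if d < best_d:
            best_d, best_q, best_t = d, q, ab / np.linalg.norm(ab)
    return best_q, best_t


def step(p: np.ndarray, front: np.ndarray, standoff=30.0, v_t=2.0, k_n=0.5) -> np.ndarray:
    """One position update: tangential motion along the front plus a normal
    correction toward the desired standoff distance."""
    q, tangent = closest_segment_point(p, front)
    offset = p - q
    dist = np.linalg.norm(offset)
    normal = offset / dist if dist > 1e-9 else np.array([0.0, 1.0])
    return p + v_t * tangent + k_n * (standoff - dist) * normal


if __name__ == "__main__":
    frontline = np.array([[0.0, 0.0], [50.0, 10.0], [100.0, 5.0]])
    drone = np.array([20.0, 60.0])
    for _ in range(5):
        drone = step(drone, frontline)
    print("drone position after 5 steps:", drone)
```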