- PAR ID: 10373823
- Date Published:
- Journal Name: Remote Sensing
- Volume: 14
- Issue: 16
- ISSN: 2072-4292
- Page Range / eLocation ID: 3979
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The size and frequency of wildland fires in the western United States have dramatically increased in recent years. On high-fire-risk days, a small fire ignition can rapidly grow and become out of control. Early detection of fire ignitions from initial smoke can assist the response to such fires before they become difficult to manage. Past deep learning approaches for wildfire smoke detection have suffered from small or unreliable datasets that make it difficult to extrapolate performance to real-world scenarios. In this work, we present the Fire Ignition Library (FIgLib), a publicly available dataset of nearly 25,000 labeled wildfire smoke images as seen from fixed-view cameras deployed in Southern California. We also introduce SmokeyNet, a novel deep learning architecture using spatiotemporal information from camera imagery for real-time wildfire smoke detection. When trained on the FIgLib dataset, SmokeyNet outperforms comparable baselines and rivals human performance. We hope that the availability of the FIgLib dataset and the SmokeyNet architecture will inspire further research into deep learning methods for wildfire smoke detection, leading to automated notification systems that reduce the time to wildfire response.
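The abstract describes SmokeyNet only at a high level, so the following is a minimal sketch of the general pattern it refers to: a per-frame CNN encoder followed by a recurrent layer that aggregates a short sequence of camera frames into a single smoke/no-smoke prediction. The backbone choice, sequence length, and hyperparameters are placeholders, not the published SmokeyNet configuration.

```python
# Illustrative sketch only: a generic spatiotemporal smoke classifier
# (per-frame CNN features + LSTM over time). Hyperparameters are
# placeholders, not the published SmokeyNet configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class SpatioTemporalSmokeClassifier(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        backbone = resnet34(weights=None)          # per-frame CNN encoder
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep pooled features only
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)       # binary smoke / no-smoke logit

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarizes the sequence
        return self.head(h_n[-1])                  # (B, 1) logits

# Example: a short sequence of two 224x224 frames from a fixed camera
logits = SpatioTemporalSmokeClassifier()(torch.randn(4, 2, 3, 224, 224))
```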
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have, in turn, led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgent need to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. To that end, in this paper, we present our work on integrating multiple data sources into SmokeyNet, a deep learning model using spatiotemporal information to detect smoke from wildland fires. We present Multimodal SmokeyNet and SmokeyNet Ensemble for multimodal wildland fire smoke detection using satellite-based fire detections, weather sensor measurements, and optical camera images. An analysis is provided to compare these multimodal approaches to the baseline SmokeyNet in terms of accuracy metrics as well as time-to-detect, which is important for early wildfire detection. Our results show that incorporating weather data into SmokeyNet yields numerical improvements in both F1 score and time-to-detect over the single-data-source baseline. With a time-to-detect of only a few minutes, SmokeyNet can be used for automated early notification of wildfires, providing a useful tool in the fight against destructive wildfires.
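The abstract does not spell out how the weather measurements are combined with the image stream, so the sketch below shows one common option, late fusion: a small MLP embeds the sensor readings, and the result is concatenated with image-derived features before classification. The feature dimensions and the set of weather variables are assumptions for illustration, not the paper's configuration.

```python
# Illustrative sketch only: one common way to fuse camera-derived features
# with weather sensor readings (late fusion by concatenation). The actual
# Multimodal SmokeyNet fusion strategy may differ; dimensions are placeholders.
import torch
import torch.nn as nn

class LateFusionSmokeClassifier(nn.Module):
    def __init__(self, image_feat_dim=256, num_weather_vars=5, hidden_dim=64):
        super().__init__()
        # Small MLP that embeds raw weather measurements
        # (e.g., temperature, humidity, wind speed/direction, pressure).
        self.weather_mlp = nn.Sequential(
            nn.Linear(num_weather_vars, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Classifier over the concatenated image + weather representation.
        self.head = nn.Linear(image_feat_dim + hidden_dim, 1)

    def forward(self, image_feats, weather):
        fused = torch.cat([image_feats, self.weather_mlp(weather)], dim=-1)
        return self.head(fused)  # smoke / no-smoke logit

# image_feats would come from a spatiotemporal image backbone such as the sketch above
logits = LateFusionSmokeClassifier()(torch.randn(8, 256), torch.randn(8, 5))
```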
Deep learning (DL) algorithms have achieved significantly high performance in object detection tasks. At the same time, augmented reality (AR) techniques are transforming the ways that we work and connect with people. With the increasing popularity of online and hybrid learning, we propose a new framework for improving students' learning experiences with electrical engineering lab equipment by incorporating these technologies. The DL-powered automatic object detection component integrated into the AR application is designed to recognize equipment such as multimeters, oscilloscopes, wave generators, and power supplies. A deep neural network model, namely MobileNet-SSD v2, is implemented for equipment detection using TensorFlow's object detection API. When a piece of equipment is detected, the corresponding AR-based tutorial is displayed on the screen. The mean average precision (mAP) of the developed equipment detection model is 81.4%, while the average recall of the model is 85.3%. Furthermore, to demonstrate a practical application of the proposed framework, we develop a multimeter tutorial in which virtual models are superimposed on real multimeters. The tutorial also includes images and web links to help users learn more effectively. The Unity3D game engine is used as the primary development tool for this tutorial to integrate DL and AR frameworks and create immersive scenarios. The proposed framework can be a useful foundation for AR and machine-learning-based frameworks for industrial and educational training.
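For context, a typical inference loop for an SSD MobileNet V2 detector exported with the TensorFlow 2 Object Detection API looks roughly like the sketch below. The export path, test image, label map, and confidence threshold are hypothetical placeholders, not artifacts from the paper; only the output-dictionary keys follow the API's standard exported-model signature.

```python
# Illustrative sketch only: running inference with an SSD MobileNet V2 detector
# exported via the TensorFlow 2 Object Detection API. The model path, image, and
# label map below are placeholders, not artifacts from the paper.
import numpy as np
import tensorflow as tf
from PIL import Image

detect_fn = tf.saved_model.load("exported_model/saved_model")   # hypothetical export path

# Hypothetical label map covering the four lab instruments described above.
LABELS = {1: "multimeter", 2: "oscilloscope", 3: "wave generator", 4: "power supply"}

image = np.array(Image.open("bench_photo.jpg"))                 # placeholder test image
input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)
num = int(detections["num_detections"][0])
for box, score, cls in zip(detections["detection_boxes"][0][:num],
                           detections["detection_scores"][0][:num],
                           detections["detection_classes"][0][:num]):
    if score >= 0.5:  # confidence threshold; the AR app would overlay its tutorial here
        print(LABELS.get(int(cls), "unknown"), float(score), box.numpy())
```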
Recent studies have suggested that microbial aerosolization in wildfire smoke is an understudied source of microbes to the atmosphere. Wildfire smoke can travel thousands of kilometers from its source with the potential to facilitate the transport of microbes, including microbes that can have far‐reaching impacts on human or ecosystem health. However, the relevance of longer‐range detection of microbes in smoke plumes remains undetermined, as previous studies have mainly focused on analyses of bioaerosols collected adjacent to or directly above wildfires. Therefore, we investigated whether wildfire smoke estimated to originate >30 km from different wildfire sources would contain detectable levels of bacterial and fungal DNA at ground level, hypothesizing that smoke‐impacted air would harbor greater amounts and a distinct composition of microbes as compared to ambient air. We used cultivation‐independent approaches to analyze 150 filters collected over time from three sampling locations in the western United States, of which 34 filters were determined to capture wildfire smoke events. Contrary to our hypothesis, smoke‐impacted samples harbored lower amounts of microbial DNA. Likewise, there was a limited signal in the composition of the microbial assemblages detected in smoke‐affected samples as compared to ambient air, but we did find that changes in humidity were associated with temporal variation in the composition of the bacterial and fungal bioaerosols. With our study design, we were unable to detect a robust and distinct microbial signal in ground‐level smoke originating from distant wildfires.
Vision-language (VL) pre-training has recently received considerable attention. However, most existing end-to-end pre-training approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA), and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both these types of tasks. Instead of having dedicated transformer layers for fusion after the uni-modal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones to better capture multimodal interactions. In addition, unlike previous work that is either only pre-trained on image-text data or on fine-grained data with box-level annotations, we present a two-stage pre-training strategy that uses both these kinds of data efficiently: (i) coarse-grained pre-training based on image-text data, followed by (ii) fine-grained pre-training based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, ranging from VQA, image captioning, and retrieval, to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pre-training, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods trained with orders of magnitude more data. Code is released at https://github.com/microsoft/FIBER.