Title: Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision
Wildlife population monitoring over large geographic areas is increasingly feasible due to developments in aerial survey methods coupled with the use of computer vision models for identifying and classifying individual organisms. However, aerial surveys still occur infrequently, and there are often long delays between the acquisition of airborne imagery and its conversion into population monitoring data. Near real‐time monitoring is increasingly important for active management decisions and ecological forecasting. Accomplishing this over large scales requires a combination of airborne imagery, computer vision models to process imagery into information on individual organisms, and automated workflows to ensure that imagery is quickly processed into data following acquisition. Here we present our end‐to‐end workflow for conducting near real‐time monitoring of wading birds in the Everglades, Florida, USA. Imagery is acquired as frequently as weekly using uncrewed aircraft systems (i.e., drones), processed into orthomosaics (using Agisoft Metashape), converted into individual‐level species data using a RetinaNet‐50 object detector, post‐processed, archived, and presented on a web‐based visualization platform (using Shiny). The main components of the workflow are automated using Snakemake. The underlying computer vision model provides accurate object detection, species classification, and both total and species‐level counts for five of six target species (White Ibis, Great Egret, Great Blue Heron, Wood Stork, and Roseate Spoonbill). The model performed poorly for Snowy Egrets due to the small number of labels and the difficulty of distinguishing them from White Ibis (the most abundant species). By automating the post‐survey processing, data on the populations of these species are available in near real time (<1 week from the date of the survey), providing information at the time scales needed for ecological forecasting and active management.
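The survey-to-data flow described above can be sketched as a simple chain of stages. Everything below is an illustrative stand-in: the function names, return shapes, and species counts are hypothetical, and the real pipeline uses Agisoft Metashape for orthomosaics, a RetinaNet-50 detector for birds, and Snakemake for orchestration.

```python
# Illustrative sketch of the stage chain; all names and numbers here are
# hypothetical stand-ins, not the authors' code.

def build_orthomosaic(flight_images):
    # Stand-in for photogrammetric stitching (Agisoft Metashape in the paper).
    return {"tiles": len(flight_images)}

def detect_birds(orthomosaic):
    # Stand-in for the RetinaNet-50 detector; hypothetical species -> count output.
    return {"White Ibis": 120, "Great Egret": 14}

def postprocess(detections):
    # Aggregate per-species detections into survey-level totals.
    return {"total": sum(detections.values()), "by_species": detections}

def run_survey_pipeline(flight_images):
    # Chain the stages in order, as Snakemake does via rule dependencies.
    mosaic = build_orthomosaic(flight_images)
    detections = detect_birds(mosaic)
    return postprocess(detections)

report = run_survey_pipeline(["img_%03d.jpg" % i for i in range(50)])
print(report["total"])  # prints 134 with the toy counts above
```

In the actual workflow each stage would be a Snakemake rule whose inputs are the previous stage's output files, so only out-of-date steps rerun after each weekly survey.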
Award ID(s):
2326954
PAR ID:
10610495
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Remote Sensing in Ecology and Conservation
Volume:
11
Issue:
3
ISSN:
2056-3485
Format(s):
Medium: X Size: p. 255-265
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: The challenges of monitoring wildlife often limit the scales and intensity of the data that can be collected. New technologies—such as remote sensing using unoccupied aircraft systems (UASs)—can collect information more quickly, over larger areas, and more frequently than is feasible using ground‐based methods. While airborne imaging is increasingly used to produce data on the location and counts of individuals, its ability to produce individual‐based demographic information is less explored. Repeat airborne imagery to generate an imagery time series provides the potential to track individuals over time to collect information beyond one‐off counts, but doing so necessitates automated approaches to handle the resulting high‐frequency, large‐spatial‐scale imagery. We developed an automated time‐series remote sensing approach to identifying wading bird nests in the Everglades ecosystem of Florida, USA, to explore the feasibility and challenges of conducting time‐series‐based remote sensing on mobile animals at large spatial scales. We combine a computer vision model for detecting birds in weekly UAS imagery of colonies with biology‐informed algorithmic rules to generate an automated approach that identifies likely nests. Comparing the performance of these automated approaches to human review of the same imagery shows that our primary approach identifies nests with performance comparable to human review, and that a secondary approach designed to find quick‐fail nests resulted in high false‐positive rates. We also assessed the ability of both human review and our primary algorithm to find ground‐verified nests in UAS imagery and again found comparable performance, with the exception of nests that fail quickly. Our results showed that automating nest detection, a key first step toward estimating nest success, is possible in complex environments like the Everglades, and we discuss a number of challenges and possible uses for these types of approaches.
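A biology-informed rule of the kind described can be sketched in a few lines: treat a location as a likely nest if a bird is detected within a small radius of it in enough consecutive weekly surveys. The radius, week threshold, and rule itself are hypothetical illustrations, not the paper's actual algorithm.

```python
# Hypothetical persistence rule for flagging likely nests from weekly
# detection coordinates; thresholds are illustrative only.
from math import hypot

def likely_nests(weekly_detections, radius=1.0, min_weeks=3):
    # weekly_detections: one list of (x, y) detection points per weekly survey.
    nests = []
    for seed in weekly_detections[0]:  # candidate sites from the first survey
        streak = best = 0
        for week in weekly_detections:
            # A site "persists" this week if any detection falls within radius.
            if any(hypot(x - seed[0], y - seed[1]) <= radius for x, y in week):
                streak += 1
                best = max(best, streak)
            else:
                streak = 0  # gap in detections breaks the streak
        if best >= min_weeks:
            nests.append(seed)
    return nests

weeks = [
    [(0.0, 0.0), (5.0, 5.0)],   # week 1 detections
    [(0.2, 0.1), (9.0, 9.0)],   # week 2
    [(0.1, -0.2)],              # week 3
    [(5.0, 5.0)],               # week 4
]
print(likely_nests(weeks))  # [(0.0, 0.0)] — only the persistent site is flagged
```

A quick-fail variant would instead look for short streaks that end abruptly, which is where the secondary approach in the abstract ran into high false-positive rates.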
  2. Abstract: Measuring forest biodiversity using terrestrial surveys is expensive and can only capture common species abundance in large heterogeneous landscapes. In contrast, combining airborne imagery with computer vision can generate individual tree data at the scale of hundreds of thousands of trees. To train computer vision models, ground‐based species labels are combined with airborne reflectance data. Due to the difficulty of finding rare species in a large landscape, many classification models only include the most abundant species, leading to biased predictions at broad scales. For example, if only common species are used to train the model, this assumes that these samples are representative across the entire landscape. Extending classification models to include rare species requires targeted data collection and algorithmic improvements to overcome large data imbalances between dominant and rare taxa. We apply a targeted sampling workflow at the Ordway‐Swisher Biological Station within the US National Ecological Observatory Network (NEON), where traditional forestry plots had identified six canopy tree species with more than 10 individuals at the site. Combining iterative model development with rare species sampling, we extend the training dataset to include 14 species. Using a multi‐temporal hierarchical model, we demonstrate the ability to include species predicted at <1% frequency in the landscape without losing performance on the dominant species. The final model has over 75% accuracy for 14 species, with improved rare species classification compared to the 61% accuracy of a baseline deep learning model. After filtering out dead trees, we generate landscape species maps of individual crowns for over 670 000 individual trees. We find distinct patches of forest composed of rarer species at the full‐site scale, highlighting the importance of capturing species diversity in training data.
We estimate the relative abundance of 14 species within the landscape and provide three measures of uncertainty to generate a range of counts for each species. For example, we estimate that the dominant species, Pinus palustris, accounts for c. 28% of predicted stems, with models predicting a range of counts between 160 000 and 210 000 individuals. These maps provide the first estimates of canopy tree diversity within a NEON site to include rare species and provide a blueprint for capturing tree diversity using airborne computer vision at broad scales.
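The count-range idea can be illustrated with a minimal sketch that turns predicted per-species stem counts into relative abundance shares plus a count interval. The ±15% band below is an arbitrary placeholder for illustration, not one of the paper's three uncertainty measures, and the species counts are invented.

```python
# Toy relative-abundance summary with a symmetric count band.
# The band fraction and counts are hypothetical, not the paper's values.

def abundance_summary(counts, band=0.15):
    total = sum(counts.values())
    return {
        species: {
            "share": n / total,                 # fraction of predicted stems
            "low": round(n * (1 - band)),       # lower end of the count range
            "high": round(n * (1 + band)),      # upper end of the count range
        }
        for species, n in counts.items()
    }

summary = abundance_summary({"Pinus palustris": 185000, "Quercus laevis": 90000})
print(round(summary["Pinus palustris"]["share"], 3))
```

A real uncertainty range would come from the model itself (e.g. varying confidence thresholds or ensembling), which is why the abstract reports three separate measures rather than a single fixed band.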
  3. Abstract: Drones are increasingly popular for collecting behaviour data on group‐living animals, offering inexpensive and minimally disruptive observation methods. Imagery collected by drones can be rapidly analysed using computer vision techniques to extract information, including behaviour classification, habitat analysis and identification of individual animals. While computer vision techniques can rapidly analyse drone‐collected data, the success of these analyses often depends on careful mission planning that considers downstream computational requirements—a critical factor frequently overlooked in current studies. We present a comprehensive summary of research in the growing AI‐driven animal ecology (ADAE) field, which integrates data collection with automated computational analysis, focusing on aerial imagery for collective animal behaviour studies. We systematically analyse current methodologies, technical challenges and emerging solutions in this field, from drone mission planning to behavioural inference. We illustrate computer vision pipelines that infer behaviour from drone imagery and present the computer vision tasks used for each step. We map specific computational tasks to their ecological applications, providing a framework for future research design. Our analysis reveals that AI‐driven animal ecology studies of collective animal behaviour using drone imagery focus on detection and classification computer vision tasks. While convolutional neural networks (CNNs) remain dominant for detection and classification tasks, newer architectures such as transformer‐based models and specialised video analysis networks (e.g. X3D, I3D, SlowFast) designed for temporal pattern recognition are gaining traction for pose estimation and behaviour inference.
However, reported model accuracy varies widely by computer vision task, species, habitat and evaluation metric, complicating meaningful comparisons between studies. Based on current trends, we conclude that semi‐autonomous drone missions will be increasingly used to study collective animal behaviour. While manual drone operation remains prevalent, autonomous drone manoeuvres, powered by edge AI, can scale and standardise collective animal behaviour studies while reducing the risk of disturbance and improving data quality. We propose guidelines for AI‐driven animal ecology drone studies adaptable to various computer vision tasks, species and habitats. This approach aims to collect high‐quality behaviour data while minimising disruption to the ecosystem.
  4. Project control operations in construction are mostly executed via direct observation and manual monitoring of the progress and performance of construction tasks on the job site. Project engineers move physically within job-site areas to ensure activities are executed as planned. Such physical displacements are error-prone and ineffective in cost and time, particularly in larger construction zones. It is critical to explore new methods and technologies to effectively assist performance control operations by rapidly capturing data from materials and equipment on the job site. Motivated by the ubiquitous use of unmanned aerial vehicles (UAVs) in construction projects and the maturity of computer-vision-based machine-learning (ML) techniques, this research investigates the challenges of object detection—the process of predicting classes of objects (specified construction materials and equipment)—in real time. The study addresses the challenges of data collection and prediction for remote monitoring in project control activities. It uses these two proven and robust technologies by exploring factors that impact the use of UAV aerial images to design and implement object detectors, through an analytical conceptualization and a showcase demonstration. The approach sheds light on the application of deep-learning techniques to access and rapidly identify and classify resources in real time. It paves the way to shift from costly and time-consuming job-site walkthroughs, coupled with manual data processing and input, to more automated, streamlined operations. The research found that the critical factor in developing object detectors with acceptable levels of accuracy is collecting aerial images at adequate scales and high frequencies from different positions of the same construction areas.
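Detector accuracy in studies like this is typically scored by intersection-over-union (IoU) between predicted and ground-truth bounding boxes, with a match counted when IoU exceeds a threshold such as 0.5. A minimal sketch of the metric (not the paper's code, and the boxes below are invented):

```python
# IoU for axis-aligned boxes in (x1, y1, x2, y2) form.

def iou(a, b):
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175, roughly 0.143
```

Averaging match rates across classes and IoU thresholds gives the mAP-style scores usually reported for UAV object detectors.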
  5. A mini quadrotor can be used in many applications, such as indoor airborne surveillance, payload delivery, and warehouse monitoring. In these applications, vision-based autonomous navigation is one of the most interesting research topics because precise navigation can be implemented based on vision analysis. However, pixel-based vision analysis approaches require a high-powered computer, which is impractical to attach to a small indoor quadrotor. This paper proposes a method called Motion-vector-based Moving Objects Detection. This method detects and avoids obstacles using stereo motion vectors instead of individual pixels, thereby substantially reducing the data processing requirement. Although this method can also be used to avoid stationary obstacles by taking into account the ego-motion of the quadrotor, this paper primarily focuses on providing empirical verification of the real-time avoidance of moving objects.
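The core idea, working on block-level motion vectors rather than individual pixels and flagging blocks whose motion deviates from the dominant ego-motion, can be sketched as follows. The grid representation, outlier rule, and threshold are hypothetical illustrations, not the paper's exact method.

```python
# Toy moving-object flagging over block motion vectors.
# Blocks whose vector magnitude is far above the median (the ego-motion
# baseline) are treated as independently moving obstacles.
from math import hypot
from statistics import median

def moving_blocks(vectors, k=3.0):
    # vectors: one (dx, dy) motion vector per image block.
    mags = [hypot(dx, dy) for dx, dy in vectors]
    baseline = median(mags)  # robust estimate of ego-motion magnitude
    return [i for i, m in enumerate(mags) if m > k * max(baseline, 1e-6)]

# Three blocks drift with the quadrotor's own motion; one block moves fast.
flagged = moving_blocks([(1, 0), (1, 0), (1, 0), (8, 0)])
print(flagged)  # [3]
```

Because each frame reduces to a few hundred vectors instead of millions of pixels, this kind of test is cheap enough for the onboard computers the abstract has in mind; stereo disparity of the flagged blocks would then supply the depth needed for avoidance.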