Abstract Measuring forest biodiversity using terrestrial surveys is expensive and can only capture common species abundance in large heterogeneous landscapes. In contrast, combining airborne imagery with computer vision can generate individual tree data at the scale of hundreds of thousands of trees. To train computer vision models, ground-based species labels are combined with airborne reflectance data. Because rare species are difficult to find in a large landscape, many classification models include only the most abundant species, leading to biased predictions at broad scales: training a model only on common species implicitly assumes those samples are representative of the entire landscape. Extending classification models to include rare species requires targeted data collection and algorithmic improvements to overcome the large data imbalance between dominant and rare taxa. We apply a targeted sampling workflow at the Ordway-Swisher Biological Station within the US National Ecological Observatory Network (NEON), where traditional forestry plots had identified six canopy tree species with more than 10 individuals at the site. Combining iterative model development with rare species sampling, we extend the training dataset to include 14 species. Using a multi-temporal hierarchical model, we demonstrate the ability to include species predicted at <1% frequency in the landscape without losing performance on the dominant species. The final model has over 75% accuracy for 14 species, with improved rare species classification compared with the 61% accuracy of a baseline deep learning model. After filtering out dead trees, we generate landscape species maps of individual crowns for over 670 000 individual trees. We find distinct patches of forest composed of rarer species at the full-site scale, highlighting the importance of capturing species diversity in training data. We estimate the relative abundance of 14 species within the landscape and provide three measures of uncertainty to generate a range of counts for each species. For example, we estimate that the dominant species, Pinus palustris, accounts for c. 28% of predicted stems, with models predicting a range of counts between 160 000 and 210 000 individuals. These maps provide the first estimates of canopy tree diversity within a NEON site to include rare species and provide a blueprint for capturing tree diversity using airborne computer vision at broad scales.
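One common algorithmic response to the dominant-versus-rare imbalance described in the abstract is to reweight the training loss by inverse class frequency, so rare taxa contribute meaningfully to the gradient. The sketch below is a minimal PyTorch illustration of that idea, not the authors' multi-temporal hierarchical model; the species counts are invented for the example.

```python
# Minimal sketch: weight a cross-entropy loss inversely to class frequency
# so rare taxa are not swamped by dominant ones during training.
# The per-species sample counts here are illustrative, not the study's data.
import torch
import torch.nn as nn

# Hypothetical training-sample counts, from dominant to rare taxa
counts = torch.tensor([5200.0, 3100.0, 800.0, 240.0, 60.0, 25.0])

# Inverse-frequency weights, normalised so they average to 1
weights = counts.sum() / (len(counts) * counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# During training: logits from any classifier, shape (batch, n_species)
logits = torch.randn(8, len(counts))
labels = torch.randint(0, len(counts), (8,))
loss = criterion(logits, labels)
```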
Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision
Wildlife population monitoring over large geographic areas is increasingly feasible due to developments in aerial survey methods coupled with the use of computer vision models for identifying and classifying individual organisms. However, aerial surveys still occur infrequently, and there are often long delays between the acquisition of airborne imagery and its conversion into population monitoring data. Near real-time monitoring is increasingly important for active management decisions and ecological forecasting. Accomplishing this over large scales requires a combination of airborne imagery, computer vision models to process imagery into information on individual organisms, and automated workflows to ensure that imagery is quickly processed into data following acquisition. Here we present our end-to-end workflow for conducting near real-time monitoring of wading birds in the Everglades, Florida, USA. Imagery is acquired as frequently as weekly using uncrewed aircraft systems (aka drones), processed into orthomosaics (using Agisoft Metashape), converted into individual-level species data using a RetinaNet-50 object detector, post-processed, archived, and presented on a web-based visualization platform (using Shiny). The main components of the workflow are automated using Snakemake. The underlying computer vision model provides accurate object detection, species classification, and both total and species-level counts for five of six target species (White Ibis, Great Egret, Great Blue Heron, Wood Stork, and Roseate Spoonbill). The model performed poorly for Snowy Egrets due to the small number of labels and the difficulty of distinguishing them from White Ibis (the most abundant species). By automating the post-survey processing, data on the populations of these species are available in near real-time (<1 week from the date of the survey), providing information at the time scales needed for ecological forecasting and active management.
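For readers unfamiliar with Snakemake, the sketch below shows how survey-processing steps like those described can be chained so each new flight's imagery flows to published counts without manual intervention. The rule names, scripts, and paths are hypothetical placeholders, not the authors' actual Snakefile.

```python
# Snakefile sketch: orthomosaic -> detection -> post-processed counts.
# Rule names, helper scripts, and paths are hypothetical.

SURVEYS = ["2024-05-01", "2024-05-08"]  # example survey dates

rule all:
    input:
        expand("results/{survey}/counts.csv", survey=SURVEYS)

rule orthomosaic:
    input:  "raw/{survey}/images"
    output: "ortho/{survey}.tif"
    shell:  "python scripts/build_orthomosaic.py {input} {output}"

rule detect_birds:
    input:  "ortho/{survey}.tif"
    output: "detections/{survey}.csv"
    shell:  "python scripts/run_detector.py {input} {output}"

rule postprocess:
    input:  "detections/{survey}.csv"
    output: "results/{survey}/counts.csv"
    shell:  "python scripts/summarise_counts.py {input} {output}"
```

Because each rule declares its inputs and outputs, Snakemake reruns only the steps whose upstream files have changed, which is what makes weekly near real-time turnaround practical.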
- Award ID(s): 2326954
- PAR ID: 10610495
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Remote Sensing in Ecology and Conservation
- Volume: 11
- Issue: 3
- ISSN: 2056-3485
- Format(s): Medium: X
- Size(s): p. 255-265
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Drones are increasingly popular for collecting behaviour data of group-living animals, offering inexpensive and minimally disruptive observation methods. Imagery collected by drones can be rapidly analysed using computer vision techniques to extract information, including behaviour classification, habitat analysis and identification of individual animals. While computer vision techniques can rapidly analyse drone-collected data, the success of these analyses often depends on careful mission planning that considers downstream computational requirements—a critical factor frequently overlooked in current studies. We present a comprehensive summary of research in the growing AI-driven animal ecology (ADAE) field, which integrates data collection with automated computational analysis focused on aerial imagery for collective animal behaviour studies. We systematically analyse current methodologies, technical challenges and emerging solutions in this field, from drone mission planning to behavioural inference. We illustrate computer vision pipelines that infer behaviour from drone imagery and present the computer vision tasks used for each step. We map specific computational tasks to their ecological applications, providing a framework for future research design. Our analysis reveals that AI-driven animal ecology studies of collective animal behaviour using drone imagery focus on detection and classification computer vision tasks. While convolutional neural networks (CNNs) remain dominant for detection and classification tasks, newer architectures like transformer-based models and specialized video analysis networks (e.g. X3D, I3D, SlowFast) designed for temporal pattern recognition are gaining traction for pose estimation and behaviour inference. However, reported model accuracy varies widely by computer vision task, species, habitat and evaluation metric, complicating meaningful comparisons between studies. Based on current trends, we conclude that semi-autonomous drone missions will be increasingly used to study collective animal behaviour. While manual drone operation remains prevalent, autonomous drone manoeuvres, powered by edge AI, can scale and standardise collective animal behaviour studies while reducing the risk of disturbance and improving data quality. We propose guidelines for AI-driven animal ecology drone studies adaptable to various computer vision tasks, species and habitats. This approach aims to collect high-quality behaviour data while minimising disruption to the ecosystem.
-
Project control operations in construction are mostly executed via direct observations and the manual monitoring of progress and performance of construction tasks on the job site. Project engineers move physically within job-site areas to ensure activities are executed as planned. Such physical displacements are error-prone and ineffective in cost and time, particularly in larger construction zones. It is critical to explore new methods and technologies to effectively assist performance control operations by rapidly capturing data from materials and equipment on the job site. Motivated by the ubiquitous use of unmanned aerial vehicles (UAVs) in construction projects and the maturity of computer-vision-based machine-learning (ML) techniques, this research investigates the challenges of object detection—the process of predicting classes of objects (specified construction materials and equipment)—in real time. The study addresses the challenges of data collection and prediction for remote monitoring in project control activities. It uses these two proven and robust technologies by exploring factors that affect the use of UAV aerial images to design and implement object detectors, through an analytical conceptualization and a showcase demonstration. The approach sheds light on the application of deep-learning techniques to access and rapidly identify and classify resources in real time. It paves the way to shift from costly and time-consuming job-site walkthroughs, coupled with manual data processing and input, to more automated, streamlined operations. The research found that the critical factor in developing object detectors with acceptable levels of accuracy is collecting aerial images at adequate scales and high frequencies from different positions over the same construction areas.
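As a concrete illustration of the inference step such a monitoring system relies on, the sketch below runs a COCO-pretrained detector from torchvision over a single aerial frame. A real deployment would fine-tune on labelled construction materials and equipment; the file name and score threshold here are assumptions for the example.

```python
# Sketch: applying an off-the-shelf detector to a UAV frame. The
# COCO-pretrained model is a stand-in; a production system would be
# fine-tuned on construction materials and equipment classes.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("uav_frame.jpg")  # hypothetical aerial frame
with torch.no_grad():
    pred = model([preprocess(img)])[0]

# Keep confident detections only (illustrative 0.5 threshold)
keep = pred["scores"] > 0.5
for box, label in zip(pred["boxes"][keep], pred["labels"][keep]):
    print(weights.meta["categories"][label], box.tolist())
```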
-
A mini quadrotor can be used in many applications, such as indoor airborne surveillance, payload delivery, and warehouse monitoring. In these applications, vision-based autonomous navigation is one of the most interesting research topics because precise navigation can be implemented based on vision analysis. However, pixel-based vision analysis approaches require a high-powered computer, which is impractical to mount on a small indoor quadrotor. This paper proposes a method called Motion-vector-based Moving Objects Detection. This method detects and avoids obstacles using stereo motion vectors instead of individual pixels, thereby substantially reducing the data processing requirement. Although this method can also be used to avoid stationary obstacles by taking into account the ego-motion of the quadrotor, this paper primarily focuses on providing empirical verification of the real-time avoidance of moving objects.
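The paper's method works on stereo motion vectors; as an approximate monocular illustration of the same idea, the sketch below flags moving regions from dense optical flow with OpenCV instead of analysing individual pixel appearance. The thresholds and camera source are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the underlying idea: use dense motion vectors (optical flow)
# between consecutive frames to flag moving regions cheaply. Farneback
# flow stands in for the stereo motion vectors described in the paper.
import cv2

cap = cv2.VideoCapture(0)  # onboard camera (hypothetical source)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense motion vectors between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = mag > 2.0          # illustrative per-pixel motion threshold
    if moving.mean() > 0.01:    # enough moving pixels to react to
        print("possible moving obstacle")
    prev_gray = gray
```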
-
Abstract Surface rupture from the 2019 Ridgecrest earthquake sequence, initially associated with the Mw 6.4 foreshock, occurred on 4 July on a ∼17 km long, northeast–southwest-oriented, left-lateral zone of faulting. Following the Mw 7.1 mainshock on 5 July (local time), extensive northwest–southeast-oriented, right-lateral faulting was then also mapped along a ∼50 km long zone of faults, including subparallel splays in several areas. The largest slip was observed in the epicentral area and across the dry lakebed of China Lake to the southeast. Surface fault rupture mapping by a large team, reported elsewhere, was used to guide the airborne data acquisition reported here. Rapid rupture mapping allowed for accurate and efficient flight-line planning for the high-resolution light detection and ranging (lidar) and aerial photography. Flight-line planning trade-offs were considered to allocate the medium-resolution (25 pulses per square meter [ppsm]) and high-resolution (80 ppsm) lidar data collection polygons. The National Center for Airborne Laser Mapping acquired the airborne imagery with a Titan multispectral lidar system and a Digital Modular Aerial Camera (DiMAC), and the U.S. Geological Survey acquired Global Positioning System ground control data. This effort required extensive coordination with the Navy, as much of the airborne data acquisition occurred within the Navy's restricted airspace at the China Lake ranges.