Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- 
Abstract: Drones have become invaluable tools for studying animal behaviour in the wild, enabling researchers to collect aerial video data of group-living animals. However, manually piloting drones to track animal groups consistently is challenging due to complex factors such as terrain, vegetation, group spread and movement patterns. The variability in manual piloting can result in unusable data for downstream behavioural analysis, making it difficult to collect standardized datasets for studying collective animal behaviour. To address these challenges, we present WildWing, a complete open-source unmanned aerial system (UAS), spanning both hardware and software, for autonomously collecting behavioural video data of group-living animals. The system's main goal is to automate and standardize the collection of high-quality aerial footage suitable for computer vision-based behaviour analysis. We provide a novel navigation policy to autonomously track animal groups while maintaining optimal camera angles and distances for behavioural analysis, reducing the inconsistencies inherent in manual piloting. The complete WildWing system costs only $650 and incorporates drone hardware with custom software that integrates ecological knowledge into autonomous navigation decisions. The system produces 4K resolution video at 30 fps while automatically maintaining appropriate distances and angles for behaviour analysis. We validate the system through field deployments tracking groups of Grevy's zebras, giraffes and Przewalski's horses at The Wilds conservation centre, demonstrating its ability to collect usable behavioural data consistently. By automating the data collection process, WildWing helps ensure consistent, high-quality video data suitable for computer vision analysis of animal behaviour. This standardization is crucial for developing robust automated behaviour recognition systems to help researchers study and monitor wildlife populations at scale. The open-source nature of WildWing makes autonomous behavioural data collection more accessible to researchers, enabling wider application of drone-based behavioural monitoring in conservation and ecological research.
Free, publicly accessible full text available March 10, 2026.
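As a rough illustration of the kind of navigation policy described above, the sketch below keeps a detected group centred in frame and at an approximately constant standoff distance by issuing proportional velocity commands. It is a minimal toy, not WildWing's released policy: the detection interface, the gains and the use of bounding-box area as a distance proxy are all assumptions.

```python
# Hypothetical sketch of a group-following navigation policy in the spirit
# of WildWing; names and gains are illustrative, not the released code.
from dataclasses import dataclass

@dataclass
class Detection:
    cx: float   # bounding-box centre, normalized to [0, 1]
    cy: float
    area: float # normalized box area, used here as a crude distance proxy

def follow_group(detections, target_area=0.02, k_xy=1.0, k_z=5.0):
    """Return (vx, vy, vz) velocity commands that re-centre the animal
    group in frame and hold a roughly constant standoff distance."""
    if not detections:
        return 0.0, 0.0, 0.0  # hover when the group is lost
    n = len(detections)
    cx = sum(d.cx for d in detections) / n
    cy = sum(d.cy for d in detections) / n
    mean_area = sum(d.area for d in detections) / n
    vx = k_xy * (cx - 0.5)                # steer laterally toward the group
    vy = k_xy * (cy - 0.5)                # steer along-track to re-centre
    vz = k_z * (mean_area - target_area)  # climb if animals fill too much frame
    return vx, vy, vz

# Example: group slightly right of centre and too close -> move right, climb.
print(follow_group([Detection(0.6, 0.5, 0.03), Detection(0.62, 0.48, 0.025)]))
```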
- 
Abstract: Geometric morphometrics is used in the biological sciences to quantify morphological traits. However, the need for manual landmark placement, which is time-consuming, labor-intensive, and open to human error, hampers scalability. The selected landmarks embody a specific hypothesis regarding the critical geometry relevant to the biological question. Any adjustment to this hypothesis necessitates acquiring a new set of landmarks or revising them significantly, which can be impractical for large datasets. There is a pressing need for more efficient and flexible methods of landmark placement that can adapt to different hypotheses without requiring extensive human effort. This study investigates the precision and accuracy of landmarks derived from functional correspondences obtained through the functional map framework of geometry processing. We utilize a deep functional map network to learn shape descriptors, which enable us to achieve functional map-based and point-to-point correspondences between specimens in our dataset. Our methodology automates the landmarking process by interrogating these maps to identify corresponding landmarks, using manually placed landmarks from the entire dataset as a reference. We apply our method to a dataset of rodent mandibles and compare its performance to that of MALPACA, a standard tool for automatic landmark placement. Our model demonstrates a speed improvement over MALPACA while maintaining a competitive level of accuracy. Although MALPACA typically shows the lowest RMSE, our models perform comparably well, particularly with smaller training datasets, indicating strong generalizability. Visual assessments confirm the precision of our automated landmark placements, with deviations consistently falling within an acceptable range of the MALPACA estimates. Our results underscore the potential of unsupervised learning models in anatomical landmark placement, presenting a practical and efficient alternative to traditional methods. Our approach saves significant time and effort and provides the flexibility to adapt to different hypotheses about critical geometrical features without the need for manual re-acquisition of landmarks. This advancement can significantly enhance the scalability and applicability of geometric morphometrics, making it more feasible for large datasets and diverse biological studies.
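To make the landmark-transfer step concrete, here is a minimal sketch of how manually placed reference landmarks might be propagated through a precomputed point-to-point correspondence. The array layout and the averaging/fallback rules are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of landmark transfer via a point-to-point correspondence,
# assuming the correspondence has already been computed (e.g. from a
# functional map). Array names are illustrative.
import numpy as np

def transfer_landmarks(ref_verts, tgt_verts, p2p, ref_landmarks):
    """ref_verts: (N, 3) reference mesh vertices.
    tgt_verts: (M, 3) target mesh vertices.
    p2p: length-M integer array; p2p[j] is the reference vertex matched
         to target vertex j (a target -> reference map).
    ref_landmarks: (K, 3) manually placed landmarks on the reference.
    Returns (K, 3) estimated landmark positions on the target mesh."""
    # Snap each reference landmark to its nearest reference vertex.
    d = np.linalg.norm(ref_verts[None, :, :] - ref_landmarks[:, None, :], axis=2)
    ref_idx = d.argmin(axis=1)  # (K,)
    out = np.empty_like(ref_landmarks)
    for k, i in enumerate(ref_idx):
        # Average the target vertices that map onto the landmark vertex.
        hits = np.where(p2p == i)[0]
        if hits.size:
            out[k] = tgt_verts[hits].mean(axis=0)
        else:
            # No target vertex maps exactly onto it; fall back to the
            # target vertex whose matched reference vertex lies closest.
            dk = np.linalg.norm(ref_verts[p2p] - ref_landmarks[k], axis=1)
            out[k] = tgt_verts[dk.argmin()]
    return out
```

Averaging all target vertices that map onto the landmark vertex smooths noise in the point-to-point map; an alternative, closer to the functional map framework itself, would be to transfer a landmark indicator function through the map and take its argmax on the target.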
- 
Introduction: This team science case study explores one cross-disciplinary science institute's change process for redesigning a weekly research coordination meeting. The narrative arc follows the four stages of the adaptive process in complex adaptive systems: disequilibrium, amplification, emergence, and new order. Methods: This case study takes an interpretative, participatory approach, where the objective is to understand the phenomena within their social context and to deepen understanding of how the process unfolds over time. Multiple data sources were collected and analyzed. Results: A new adaptive order for the weekly research coordination meeting was established. The mechanism for the success of the change initiative was best explained by complexity leadership theory. Discussion: Implications for team science practice include generating momentum for change, re-examining power dynamics, defining critical teaming professional roles, building multiple pathways towards team capacity development, and holding adaptive spaces. Promising areas for further exploration are also presented.
- 
Abstract: Access to large image volumes through camera traps and crowdsourcing provides novel possibilities for animal monitoring and conservation. It calls for automatic methods of analysis, in particular when re-identifying individual animals from the images. Most existing re-identification methods rely on either hand-crafted local features or end-to-end learning of fur-pattern similarity. The former does not need labeled training data, while the latter, although very data-hungry, typically outperforms the former when enough training data is available. We propose a novel re-identification pipeline that combines the strengths of both approaches by utilizing modern learnable local features and feature aggregation. This creates representative pattern-feature embeddings that provide high re-identification accuracy while allowing us to apply the method to small datasets by using pre-trained feature descriptors. We report a comprehensive comparison of different modern local features and demonstrate the advantages of the proposed pipeline on two very different species.
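The sketch below illustrates the pipeline's two-stage structure under simplifying assumptions: local pattern descriptors (stubbed out here; in the paper they come from learnable local feature models) are pooled into one embedding per image, and individuals are ranked by cosine similarity. Mean pooling stands in for whatever aggregation is actually used.

```python
# Illustrative two-stage re-identification sketch: pre-trained local
# descriptors -> one aggregated embedding per image -> cosine matching.
import numpy as np

def aggregate(descriptors: np.ndarray) -> np.ndarray:
    """Pool (N, D) local pattern descriptors into a single L2-normalized
    D-dim embedding (mean pooling as a stand-in for richer aggregation
    schemes such as VLAD or GeM)."""
    emb = descriptors.mean(axis=0)
    return emb / (np.linalg.norm(emb) + 1e-12)

def reidentify(query_desc, gallery):
    """gallery: dict mapping individual ID -> (N_i, D) local descriptors.
    Returns IDs ranked by cosine similarity to the query image."""
    q = aggregate(query_desc)
    scores = {iid: float(aggregate(d) @ q) for iid, d in gallery.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example with random arrays standing in for real pattern features.
rng = np.random.default_rng(0)
gallery = {f"individual_{i}": rng.normal(size=(50, 128)) for i in range(3)}
print(reidentify(rng.normal(size=(40, 128)), gallery))
```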
- 
Abstract: Image-based machine learning tools are an ascendant 'big data' research avenue. Citizen science platforms, like iNaturalist, and museum-led initiatives provide researchers with an abundance of data and knowledge to extract, including metadata, species identifications, and phenomic data. Ecological and evolutionary biologists increasingly apply complex, multi-step processes to their data. These processes often include machine learning techniques, often built by others, that are difficult for other members of a collaboration to reuse. We present a conceptual workflow model for machine learning applications using image data to extract biological knowledge in the emerging field of imageomics. We derive an implementation of this conceptual workflow for a specific imageomics application that adheres to FAIR principles as a formal workflow definition, allowing fully automated and reproducible execution and consisting of reusable workflow components. We outline technologies and best practices for creating an automated, reusable and modular workflow, and we show how they promote the reuse of machine learning models and their adaptation for new research questions. This conceptual workflow can be adapted: it can be semi-automated, contain different components than those presented here, or have parallel components for comparative studies. We encourage researchers, both computer scientists and biologists, to build upon this conceptual workflow that combines machine learning tools on image data to answer novel scientific questions in their respective fields.
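As a toy illustration of the modularity argument, the sketch below chains small, swappable components into one reproducible pipeline. The component names and the shared-context design are invented for illustration and are not the paper's formal workflow definition.

```python
# Toy sketch of a modular imageomics-style workflow: small, reusable
# components chained in order, each consuming and enriching a shared
# context, so components can be swapped for new research questions.
from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]

def detect_animals(ctx: Dict) -> Dict:
    ctx["detections"] = [f"box_in_{img}" for img in ctx["images"]]
    return ctx

def classify_species(ctx: Dict) -> Dict:
    ctx["species"] = ["zebra" for _ in ctx["detections"]]  # stubbed model
    return ctx

def extract_traits(ctx: Dict) -> Dict:
    ctx["traits"] = [{"stripe_count": 26} for _ in ctx["detections"]]
    return ctx

def run_workflow(steps: List[Step], ctx: Dict) -> Dict:
    """Execute the steps in order over the shared context."""
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_workflow([detect_animals, classify_species, extract_traits],
                      {"images": ["img_001.jpg", "img_002.jpg"]})
print(result["species"], result["traits"])
```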
- 
Synopsis: Acquiring accurate 3D biological models efficiently and economically is important for morphological data collection and analysis in organismal biology. In recent years, structure-from-motion (SFM) photogrammetry has become increasingly popular in biological research due to its flexibility and relatively low cost. SFM photogrammetry registers 2D images to reconstruct camera positions as the basis for 3D modeling and texturing. However, most studies in organismal biology have relied on commercial software to reconstruct 3D models from photographs, which has impeded the adoption of this workflow in our field due to barriers such as cost. In addition, prior investigations in photogrammetry did not sufficiently assess the geometric accuracy of the reconstructed models. Consequently, this study has two goals. First, we present an affordable and highly flexible SFM photogrammetry pipeline based on the open-source package OpenDroneMap (ODM) and its user interface WebODM. Second, we assess the geometric accuracy of the photogrammetric models acquired from the ODM pipeline by comparing them to models acquired via microCT scanning, the de facto method for imaging skeletons. Our sample comprised 15 Aplodontia rufa (mountain beaver) skulls. Using models derived from microCT scans of the samples as references, our results showed that the geometry of the models derived from ODM was sufficiently accurate for gross metric and morphometric analysis: measurement errors were usually around or below 2%, and morphometric analysis captured consistent patterns of shape variation in both modalities. However, subtle but distinct differences between the photogrammetric and microCT-derived 3D models could affect landmark placement, which in turn affected the downstream shape analysis, especially when the variance within a sample was relatively small. At a minimum, we strongly advise not combining 3D models derived from these two modalities for geometric morphometric analysis. Our findings may be indicative of similar issues in other SFM photogrammetry tools, since the underlying pipelines are similar. We recommend that users run a pilot test of geometric accuracy before using photogrammetric models for morphometric analysis. For the research community, we provide detailed guidance on using our pipeline to build 3D models from photographs.
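One simple way to run the recommended pilot test of geometric accuracy is sketched below: compare pairwise inter-landmark distances on the photogrammetric model against those on the microCT model, treated as ground truth, and report percent errors. This is a generic check under stated assumptions, not the study's exact protocol.

```python
# Hedged sketch of a geometric-accuracy check: percent error of every
# inter-landmark distance, photogrammetry vs. microCT (ground truth).
import numpy as np

def percent_measurement_error(photo_lms: np.ndarray,
                              ct_lms: np.ndarray) -> np.ndarray:
    """photo_lms, ct_lms: (K, 3) homologous landmarks on the two models,
    already in the same physical units. Returns the percent error of
    every pairwise inter-landmark distance."""
    def pairwise(x):
        diff = x[:, None, :] - x[None, :, :]
        return np.linalg.norm(diff, axis=2)[np.triu_indices(len(x), k=1)]
    d_photo, d_ct = pairwise(photo_lms), pairwise(ct_lms)
    return 100.0 * np.abs(d_photo - d_ct) / d_ct

# Toy example: a noisy copy of the CT landmarks gives small percent errors.
rng = np.random.default_rng(1)
ct = rng.uniform(0, 50, size=(10, 3))               # landmark coords in mm
photo = ct + rng.normal(scale=0.2, size=ct.shape)   # simulated reconstruction
err = percent_measurement_error(photo, ct)
print(f"median error: {np.median(err):.2f}%")
```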
- 
Abstract: Inexpensive and accessible sensors are accelerating data acquisition in animal ecology. These technologies hold great potential for large-scale ecological understanding but are limited by current processing approaches, which inefficiently distill data into relevant information. We argue that animal ecologists can capitalize on the large datasets generated by modern sensors by combining machine learning approaches with domain knowledge. Incorporating machine learning into ecological workflows could improve inputs for ecological models and lead to integrated hybrid modeling tools. This approach will require close interdisciplinary collaboration to ensure the quality of novel approaches and to train a new generation of data scientists in ecology and conservation.
- 
Conference: 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), September 27–30, 2021, Champaign, IL, USA.
Metadata are key descriptors of research data, particularly for researchers seeking to apply machine learning (ML) to the vast collections of digitized specimens. Unfortunately, the available metadata are often sparse and, at times, erroneous. Additionally, it is prohibitively expensive to address these limitations through traditional, manual means. This paper reports on research that applies machine-driven approaches to analyzing digitized fish images and extracting various important features from them. The digitized fish specimens are being analyzed as part of the Biology Guided Neural Networks (BGNN) initiative, which is developing a novel class of artificial neural networks using phylogenies and anatomy ontologies. Automatically generated metadata are crucial for identifying the high-quality images needed for the neural network's predictive analytics. Methods that combine ML and image informatics techniques allow us to rapidly enrich the existing metadata associated with the 7,244 images from the Illinois Natural History Survey (INHS) used in our study. Results show we can accurately generate many key metadata properties relevant to the BGNN project, as well as general image-quality metrics (e.g., brightness and contrast). Results also show that we can accurately generate the bounding boxes and segmentation masks for fish that are needed for subsequent machine learning analyses. The automatic process outperforms humans in terms of time and accuracy, and provides a novel solution for leveraging digitized specimens in ML. This research demonstrates the ability of computational methods to enhance the digital library services associated with the tens of thousands of digitized specimens stored in open-access repositories worldwide.
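A hedged sketch of the flavour of metadata generation described here: simple brightness and contrast metrics plus a crude, threshold-based specimen bounding box. The real pipeline uses ML and image-informatics methods; the thresholding and the light-background assumption below are stand-ins.

```python
# Sketch of automatically generated image metadata: quality metrics plus
# a crude foreground bounding box. Thresholding is a stand-in for the
# paper's ML-based detection/segmentation.
import numpy as np
from PIL import Image

def image_metadata(img: Image.Image) -> dict:
    gray = np.asarray(img.convert("L"), dtype=np.float64)
    meta = {
        "brightness": float(gray.mean()),  # 0 (black) .. 255 (white)
        "contrast": float(gray.std()),     # RMS contrast
    }
    # Crude specimen bounding box: pixels darker than the background,
    # assuming a light scanner background as in many digitized specimens.
    fg = gray < gray.mean() - gray.std()
    ys, xs = np.nonzero(fg)
    if xs.size:
        meta["bbox"] = [int(xs.min()), int(ys.min()),
                        int(xs.max()), int(ys.max())]
    return meta

# Toy example: a dark "specimen" blob on a light synthetic background.
arr = np.full((200, 300), 220, dtype=np.uint8)
arr[80:120, 100:250] = 40
print(image_metadata(Image.fromarray(arr)))
```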
- 
Using unmanned aerial vehicles (UAVs) to track multiple individuals simultaneously in their natural environment is a powerful approach for better understanding the collective behavior of primates. Previous studies have demonstrated the feasibility of automating primate behavior classification from video data, but these studies were carried out in captivity or with ground-based cameras. However, to understand group behavior and the self-organization of a collective, the whole troop needs to be observed at a scale where behavior can be seen in relation to the natural environment in which ecological decisions are made. To tackle this challenge, this study presents a novel dataset for baboon detection, tracking, and behavior recognition from drone videos, in which troops are observed on the move in their natural environment as they travel to and from their sleeping sites. Videos were captured from drones at Mpala Research Centre, a research station in Laikipia County, central Kenya. The baboon detection dataset was created by manually annotating all baboons in the drone videos with bounding boxes. A tiling method was then applied to create a pyramid of images at various scales from the original 5.3K-resolution images, resulting in approximately 30K images used for baboon detection. The baboon tracking dataset is derived from the detection dataset, with bounding boxes consistently assigned the same ID throughout each video; this process yielded half an hour of dense tracking data. The baboon behavior recognition dataset was generated by converting tracks into mini-scenes, video subregions centered on each animal. These mini-scenes were annotated with 12 distinct behavior types and one additional category for occlusion, resulting in over 20 hours of data. Benchmark results show a mean average precision (mAP) of 92.62% for the YOLOv8-X detection model, a multiple-object tracking precision (MOTP) of 87.22% for the DeepSORT tracking algorithm, and a micro top-1 accuracy of 64.89% for the X3D behavior recognition model. Using deep learning to rapidly and accurately classify wildlife behavior from drone footage facilitates non-invasive collection of behavioral data, enabling the behavior of a whole group to be recorded systematically and accurately. The dataset can be accessed at https://baboonland.xyz.
Free, publicly accessible full text available June 16, 2026.
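Since mini-scenes are defined as video subregions centred on each tracked animal, a minimal version of that cropping step might look like the sketch below. The 256-pixel window and the zero-padding at frame borders are assumptions, not the dataset's documented parameters.

```python
# Illustrative sketch of the "mini-scene" idea: crop a fixed-size window
# centred on one tracked animal in every frame of its track.
import numpy as np

def mini_scene(frames, track, size=256):
    """frames: list of (H, W, 3) uint8 arrays; track: list of (cx, cy)
    pixel centres, one per frame. Returns a (T, size, size, 3) clip."""
    half = size // 2
    clip = np.zeros((len(frames), size, size, 3), dtype=np.uint8)
    for t, (frame, (cx, cy)) in enumerate(zip(frames, track)):
        h, w = frame.shape[:2]
        x0, y0 = int(cx) - half, int(cy) - half
        # Clamp the crop to the frame and paste into a zero-padded canvas
        # so animals near the border still yield fixed-size clips.
        fx0, fy0 = max(x0, 0), max(y0, 0)
        fx1, fy1 = min(x0 + size, w), min(y0 + size, h)
        clip[t, fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0] = frame[fy0:fy1, fx0:fx1]
    return clip

# Toy example: an 8-frame track moving through a synthetic 1080p video.
frames = [np.zeros((1080, 1920, 3), np.uint8) for _ in range(8)]
track = [(100 + 20 * t, 540) for t in range(8)]
print(mini_scene(frames, track).shape)  # (8, 256, 256, 3)
```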
- 
The availability of large datasets of organism images, combined with advances in artificial intelligence (AI), has significantly enhanced the study of organisms through images, unveiling biodiversity patterns and macro-evolutionary trends. However, existing machine learning (ML)-ready organism datasets have several limitations. First, these datasets often focus on species classification only, overlooking tasks involving the visual traits of organisms. Second, they lack detailed visual trait annotations, like pixel-level segmentation, that are crucial for in-depth biological studies. Third, these datasets predominantly feature organisms in their natural habitats, posing challenges for aquatic species like fish, where underwater images often suffer from poor visual clarity, obscuring critical biological traits. This gap hampers the study of aquatic biodiversity patterns, which is necessary for assessing climate-change impacts and for evolutionary research on aquatic species' morphology. To address this, we introduce the Fish-Visual Trait Analysis (Fish-Vista) dataset: a large, annotated collection of about 80K fish images spanning 3,000 different species that supports several challenging and biologically relevant tasks, including species classification, trait identification, and trait segmentation. These images have been curated through a sophisticated data-processing pipeline applied to a cumulative set of images obtained from various museum collections. Fish-Vista ensures that the visual traits in its images are clearly visible and provides fine-grained labels of the various visual traits present in each image. It also offers pixel-level annotations of 9 different traits for about 7,000 fish images, facilitating additional trait segmentation and localization tasks. The ultimate goal of Fish-Vista is to provide a clean, carefully curated, high-resolution dataset that can serve as a foundation for accelerating biological discoveries using advances in AI. Finally, we provide a comprehensive analysis of state-of-the-art deep learning techniques on Fish-Vista.
Free, publicly accessible full text available June 15, 2026.
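For the trait segmentation task, a standard way to score predictions is per-trait intersection-over-union; the sketch below computes it for integer label masks. The label encoding is an assumption, not Fish-Vista's exact format.

```python
# Per-trait IoU for pixel-level trait segmentation, under an assumed
# encoding where 0 is background and 1..n_traits are trait labels.
import numpy as np

def per_trait_iou(pred: np.ndarray, gt: np.ndarray, n_traits: int) -> dict:
    """pred, gt: (H, W) integer masks. Returns IoU per trait ID that
    appears in either mask."""
    ious = {}
    for t in range(1, n_traits + 1):
        p, g = pred == t, gt == t
        union = np.logical_or(p, g).sum()
        if union:
            ious[t] = float(np.logical_and(p, g).sum() / union)
    return ious

# Toy example with two trait classes on a 4x4 image.
gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 2, 2], [0, 0, 2, 2]])
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 2, 0], [0, 0, 2, 2]])
print(per_trait_iou(pred, gt, n_traits=2))  # {1: 0.75, 2: 0.75}
```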