Introduction: Effective monitoring of insect pests is vital for safeguarding agricultural yields and ensuring food security. Recent advances in computer vision and machine learning have opened significant possibilities for automated, persistent monitoring of insect pests through reliable detection and counting of insects in setups such as yellow sticky traps. However, this task is fraught with complexities, including laborious dataset annotation, recognizing small insect pests in low-resolution or distant images, and the intricate variations across insect life stages and species classes. Methods: To tackle these obstacles, this work investigates combining two solutions, Hierarchical Transfer Learning (HTL) and Slicing-Aided Hyper Inference (SAHI), along with applying a detection model. HTL introduces a multi-step knowledge-transfer paradigm, harnessing intermediary in-domain datasets to facilitate model adaptation. SAHI subdivides images into overlapping patches, conducting independent object detection on each patch before merging the outcomes into precise, comprehensive results. Results: The outcomes underscore the substantial improvement in detection achievable by integrating a diverse and expansive in-domain dataset within the HTL method, complemented by the use of SAHI. Discussion: We also present a hardware and software infrastructure for deploying such models in real-life applications. Our results can assist researchers and practitioners seeking solutions for insect-pest detection and quantification on yellow sticky traps.
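The slicing-aided inference idea (overlapping tiles, per-tile detection, merged results) can be sketched from scratch. This is an illustrative reimplementation, not the actual SAHI library; `detect_tile` is a placeholder for whatever detector a given pipeline uses, and the tile size, overlap, and IoU threshold are arbitrary defaults.

```python
from typing import Callable, List, Tuple

# (x1, y1, x2, y2, score) in pixel coordinates
Box = Tuple[float, float, float, float, float]

def slice_coords(w: int, h: int, tile: int, overlap: float) -> List[Tuple[int, int]]:
    """Top-left corners of overlapping tiles covering a w x h image."""
    step = max(1, int(tile * (1.0 - overlap)))
    xs = list(range(0, max(w - tile, 0) + 1, step))
    ys = list(range(0, max(h - tile, 0) + 1, step))
    # make sure the right/bottom edges are covered
    if xs[-1] + tile < w:
        xs.append(w - tile)
    if ys[-1] + tile < h:
        ys.append(h - tile)
    return [(x, y) for y in ys for x in xs]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes (scores ignored)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def sliced_detect(w: int, h: int,
                  detect_tile: Callable[[int, int, int], List[Box]],
                  tile: int = 512, overlap: float = 0.25,
                  iou_thr: float = 0.5) -> List[Box]:
    """Run a detector on each overlapping tile, shift boxes to image
    coordinates, then merge duplicates with greedy NMS."""
    boxes: List[Box] = []
    for x0, y0 in slice_coords(w, h, tile, overlap):
        for (x1, y1, x2, y2, s) in detect_tile(x0, y0, tile):
            boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s))
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept: List[Box] = []
    for b in boxes:
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
    return kept
```

The merge step matters: the same small insect often appears in several overlapping tiles, and without NMS it would be counted multiple times.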
From buzzes to bytes: A systematic review of automated bioacoustics models used to detect, classify and monitor insects
Insects play vital ecological roles; many provide essential ecosystem services while others are economically devastating pests and disease vectors. Concerns over insect population declines and expansion have generated a pressing need to effectively monitor insects across broad spatial and temporal scales. A promising approach is bioacoustics, which uses sound to study ecological communities. Despite recent increases in machine learning technologies, the status of emerging automated bioacoustics methods for monitoring insects is not well known, limiting potential applications. To address this gap, we systematically review the effectiveness of automated bioacoustics models over the past four decades, analysing 176 studies that met our inclusion criteria. We describe their strengths and limitations compared to traditional methods and propose productive avenues forward. We found automated bioacoustics models for 302 insect species distributed across nine Orders. Studies used intentional calls (e.g. grasshopper stridulation), by‐products of flight (e.g. bee wingbeats) and indirectly produced sounds (e.g. grain movement) for identification. Pests were the most common study focus, driven largely by weevils and borers moving in dried food and wood. All disease vector studies focused on mosquitoes. A quarter of the studies compared multiple insect families. Our review illustrates that machine learning, and deep learning in particular, are becoming the gold standard for bioacoustics automated modelling approaches. We identified models that could classify hundreds of insect species with over 90% accuracy. Bioacoustics models can be useful for reducing lethal sampling, monitoring phenological patterns within and across days and working in locations or conditions where traditional methods are less effective (e.g. shady, shrubby or remote areas). 
However, it is important to note that not all insect taxa emit easily detectable sounds, and that sound pollution may impede effective recordings in some environmental contexts. Synthesis and applications: Automated bioacoustics methods can be a useful tool for monitoring insects and addressing pressing ecological and societal questions. Successful applications include assessing insect biodiversity, distribution and behaviour, as well as evaluating the effectiveness of restoration and pest control efforts. We recommend collaborations among ecologists and machine learning experts to increase model use by researchers and practitioners.
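As a toy illustration of the frequency-based identification the review describes (e.g. bee or mosquito wingbeats), the sketch below classifies synthetic tones by dominant frequency with a nearest-centroid rule. The sample rate, frequencies, and labels are invented for the example; the reviewed studies use much richer features and, increasingly, deep networks on spectrograms.

```python
import numpy as np

SR = 8000       # sample rate in Hz (illustrative)
FRAME = 256     # analysis frame length in samples

def power_spectrum(signal: np.ndarray) -> np.ndarray:
    """Average power spectrum over short Hann-windowed frames."""
    n = len(signal) // FRAME * FRAME
    frames = signal[:n].reshape(-1, FRAME) * np.hanning(FRAME)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spec.mean(axis=0)

def dominant_freq(signal: np.ndarray) -> float:
    """Frequency (Hz) of the strongest spectral bin."""
    return float(np.argmax(power_spectrum(signal))) * SR / FRAME

def tone(freq: float, dur: float = 1.0) -> np.ndarray:
    """Synthetic pure tone standing in for a wingbeat recording."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def classify(signal: np.ndarray, centroids: dict) -> str:
    """Nearest-centroid rule on dominant frequency."""
    f = dominant_freq(signal)
    return min(centroids, key=lambda label: abs(centroids[label] - f))
```

For example, with hypothetical centroids `{"bee": 230.0, "mosquito": 600.0}`, a 600 Hz tone is assigned to "mosquito". A single dominant frequency is far too crude for the species-level accuracy the review reports, but it shows the acoustic-feature-to-label pipeline in its simplest form.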
- Award ID(s): 2010615
- PAR ID: 10532476
- Publisher / Repository: British Ecological Society
- Date Published:
- Journal Name: Journal of Applied Ecology
- Volume: 61
- Issue: 6
- ISSN: 0021-8901
- Page Range / eLocation ID: 1199 to 1211
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Insect pests significantly impact global agricultural productivity and crop quality. Effective integrated pest management strategies require the identification of insects, including beneficial and harmful insects. Automated identification of insects under real-world conditions presents several challenges, including the need to handle intraspecies dissimilarity and interspecies similarity, life-cycle stages, camouflage, diverse imaging conditions, and variability in insect orientation. An end-to-end approach for training deep-learning models, InsectNet, is proposed to address these challenges. Our approach has the following key features: (i) uses a large dataset of insect images collected through citizen science along with label-free self-supervised learning to train a global model, (ii) fine-tuning this global model using smaller, expert-verified regional datasets to create a local insect identification model, (iii) which provides high prediction accuracy even for species with small sample sizes, (iv) is designed to enhance model trustworthiness, and (v) democratizes access through streamlined machine learning operations. This global-to-local model strategy offers a more scalable and economically viable solution for implementing advanced insect identification systems across diverse agricultural ecosystems. We report accurate identification (>96% accuracy) of numerous agriculturally and ecologically relevant insect species, including pollinators, parasitoids, predators, and harmful insects. InsectNet provides fine-grained insect species identification, works effectively in challenging backgrounds, and avoids making predictions when uncertain, increasing its utility and trustworthiness. The model and associated workflows are available through a web-based portal accessible through a computer or mobile device. 
We envision InsectNet complementing existing approaches and becoming part of a growing suite of AI technologies for addressing agricultural challenges.
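The global-to-local strategy (freeze a broadly pretrained backbone, then fit a small head on an expert-verified regional dataset) can be illustrated in miniature. This numpy sketch substitutes a fixed random nonlinear projection for InsectNet's self-supervised backbone; it is not the authors' implementation, only the transfer-learning pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x: np.ndarray, w_backbone: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen, self-supervised backbone:
    a fixed nonlinear projection whose weights are never updated."""
    return np.tanh(x @ w_backbone)

def train_head(feats: np.ndarray, labels: np.ndarray, n_classes: int,
               lr: float = 0.5, epochs: int = 200) -> np.ndarray:
    """Fit only a softmax head on frozen features
    (the cheap 'local fine-tuning' step)."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)  # gradient step
    return W

def predict(x: np.ndarray, w_backbone: np.ndarray, W_head: np.ndarray) -> np.ndarray:
    return (pretrained_features(x, w_backbone) @ W_head).argmax(axis=1)
```

Because only the small head is trained, even a regional dataset with few samples per species can be fitted quickly, which is the economic argument the abstract makes for the global-to-local design.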
Understanding the behavioral and neural dynamics of social interactions is a goal of contemporary neuroscience. Many machine learning methods have emerged in recent years to make sense of complex video and neurophysiological data that result from these experiments. Less focus has been placed on understanding how animals process acoustic information, including social vocalizations. A critical step to bridge this gap is determining the senders and receivers of acoustic information in social interactions. While sound source localization (SSL) is a classic problem in signal processing, existing approaches are limited in their ability to localize animal-generated sounds in standard laboratory environments. Advances in deep learning methods for SSL are likely to help address these limitations; however, there are currently no publicly available models, datasets, or benchmarks to systematically evaluate SSL algorithms in the domain of bioacoustics. Here, we present the VCL Benchmark: the first large-scale dataset for benchmarking SSL algorithms in rodents. We acquired synchronized video and multi-channel audio recordings of 767,295 sounds with annotated ground truth sources across 9 conditions. The dataset provides benchmarks which evaluate SSL performance on real data, simulated acoustic data, and a mixture of real and simulated data. We intend for this benchmark to facilitate knowledge transfer between the neuroscience and acoustic machine learning communities, which have had limited overlap.
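For context, the classical signal-processing approach to SSL that deep models are benchmarked against can be sketched: GCC-PHAT time-difference-of-arrival estimation between two microphones. This is a textbook method, not part of the VCL Benchmark itself, and the sample rate and microphone spacing below are arbitrary.

```python
import numpy as np

SR = 48000              # sample rate in Hz (illustrative)
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def gcc_phat(sig: np.ndarray, ref: np.ndarray) -> int:
    """Delay of sig relative to ref, in samples, via the
    Generalized Cross-Correlation with Phase Transform."""
    n = len(sig) + len(ref)          # zero-pad to avoid circular wrap
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.abs(S) + 1e-12           # PHAT weighting: keep phase only
    cc = np.fft.irfft(S, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(cc)) - max_shift

def doa_angle(delay_samples: float, mic_dist: float) -> float:
    """Broadside direction-of-arrival angle (radians) for a
    far-field source, from the inter-microphone delay."""
    arg = delay_samples * SPEED_OF_SOUND / (SR * mic_dist)
    return float(np.arcsin(np.clip(arg, -1.0, 1.0)))
```

The abstract's point is that methods like this degrade badly with reverberation and broadband rodent vocalizations in real lab arenas, which is exactly what a benchmark with annotated ground truth lets one quantify.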
Diseases and insects, particularly those that are non-native and invasive, arguably pose the most destructive threat to North American forests. Currently, both exotic and native insects and diseases are producing extensive ecological damage and economic impacts. As part of an effort to identify United States tree species and forests most vulnerable to these epidemics, we compiled a list of the most serious insect and disease threats for 419 native tree species and assigned a severity rating for each of the 1378 combinations between mature tree hosts and 339 distinct insect and disease agents. We then joined this list with data from a spatially unbiased and nationally consistent forest inventory to assess the potential ecological impacts of insect and disease infestations. Specifically, potential host species mortality for each host/agent combination was used to weight species importance values on approximately 132,000 Forest Inventory and Analysis (FIA) plots across the conterminous 48 United States. When summed on each plot, these weighted importance values represent an estimate of the proportion of the plot’s existing importance value at risk of being lost. These plot estimates were then used to identify statistically significant geographic hotspots and coldspots of potential forest impacts associated with insects and diseases in total, and for different agent types. In general, the potential impacts of insects and diseases were greater in the West, where there are both fewer agents and less diverse forests. The impact of non-native invasive agents, however, was potentially greater in the East. Indeed, the impacts of current exotic pests could be greatly magnified across much of the Eastern United States if these agents are able to reach the entirety of their hosts’ ranges.
Both the list of agent/host severities and the spatially explicit results can inform species-level vulnerability assessments and broad-scale forest sustainability reporting efforts, and should provide valuable information for decision-makers who need to determine which tree species and locations to target for monitoring efforts and proactive management activities.
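The per-plot weighting step can be illustrated with toy numbers. The species, agents, and severity values below are hypothetical, and this sketch takes the most severe agent per host species, which may differ from the paper's exact aggregation across host/agent combinations.

```python
# Plot composition: species -> importance value (fractions summing to 1,
# e.g. derived from relative density and basal area). Hypothetical numbers.
plot_importance = {
    "Fraxinus americana": 0.30,
    "Quercus rubra": 0.45,
    "Acer rubrum": 0.25,
}

# Severity ratings: (host, agent) -> expected proportion of mature-host
# mortality. Hypothetical values for illustration only.
mortality = {
    ("Fraxinus americana", "emerald ash borer"): 0.99,
    ("Quercus rubra", "oak wilt"): 0.30,
}

def importance_at_risk(plot: dict, severities: dict) -> float:
    """Sum of species importance values, each weighted by the mortality
    of that host's most severe listed agent (0 if no agent is listed).
    The result estimates the proportion of the plot's importance at risk."""
    at_risk = 0.0
    for species, iv in plot.items():
        worst = max(
            (m for (host, _agent), m in severities.items() if host == species),
            default=0.0,
        )
        at_risk += iv * worst
    return at_risk
```

Here the plot would have 0.30 × 0.99 + 0.45 × 0.30 = 0.432 of its importance value at risk, with Acer rubrum contributing nothing because no agent is listed for it.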
Abstract A core goal of the National Ecological Observatory Network (NEON) is to measure changes in biodiversity across the 30‐yr horizon of the network. In contrast to NEON’s extensive use of automated instruments to collect environmental data, NEON’s biodiversity surveys are almost entirely conducted using traditional human‐centric field methods. We believe that the combination of instrumentation for remote data collection and machine learning models to process such data represents an important opportunity for NEON to expand the scope, scale, and usability of its biodiversity data collection while potentially reducing long‐term costs. In this manuscript, we first review the current status of instrument‐based biodiversity surveys within the NEON project and previous research at the intersection of biodiversity, instrumentation, and machine learning at NEON sites. We then survey methods that have been developed at other locations but could potentially be employed at NEON sites in the future. Finally, we expand on these ideas in five case studies that we believe suggest particularly fruitful future paths for automated biodiversity measurement at NEON sites: acoustic recorders for sound‐producing taxa, camera traps for medium and large mammals, hydroacoustic and remote imagery for aquatic diversity, expanded remote and ground‐based measurements for plant biodiversity, and laboratory‐based imaging for physical specimens and samples in the NEON biorepository. Through its data science‐literate staff and user community, NEON has a unique role to play in supporting the growth of such automated biodiversity survey methods, as well as demonstrating their ability to help answer key ecological questions that cannot be answered at the more limited spatiotemporal scales of human‐driven surveys.

