Abstract

Monitoring wildlife abundance across space and time is essential for studying population dynamics and informing effective management. Acoustic recording units are a promising technology for efficiently monitoring bird populations and communities. While current acoustic data models provide information on the presence/absence of individual species, new approaches are needed to monitor population abundance, ideally across large spatio-temporal regions.

We present an integrated modelling framework that combines high-quality but temporally sparse bird point count survey data with acoustic recordings. Our models account for imperfect detection in both data types and for false positive errors in the acoustic data. Using simulations, we compare the accuracy and precision of abundance estimates obtained from differing amounts of acoustic vocalizations identified by a clustering algorithm, point count data, and a subset of manually validated acoustic vocalizations. We also apply our modelling framework in a case study to estimate abundance of the Eastern Wood-Pewee (Contopus virens) in Vermont, USA.

The simulation study reveals that combining acoustic and point count data via an integrated model improves the accuracy and precision of abundance estimates compared with models informed by either data type alone. Improved estimates are obtained across a wide range of scenarios, with the largest gains occurring when detection probability for the point count data is low. Combining acoustic data with only a small number of point count surveys yields estimates of abundance without the need to validate any of the identified vocalizations from the acoustic data. Within our case study, the integrated models provided moderate support for a decline of the Eastern Wood-Pewee in this region.

Our integrated modelling approach combines dense acoustic data with a small number of point count surveys to deliver reliable estimates of species abundance without manual identification of acoustic vocalizations or a prohibitively expensive number of repeated point count surveys. It offers an efficient monitoring alternative for large spatio-temporal regions when point count data are difficult to obtain or when monitoring focuses on rare species with low detection probability.
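A minimal simulation can fix ideas about the two data types this framework combines: per-site latent abundance, a point count affected by imperfect detection, and an acoustic detection count inflated by false positives. All names and parameters below (`lam`, `p_det`, `call_rate`, `fp_rate`) are illustrative placeholders, not the paper's exact parameterization.

```python
import math
import random

def draw_poisson(rng, mean):
    """Draw one Poisson variate (Knuth's method; fine for small means)."""
    limit = math.exp(-mean)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate_site_data(n_sites, lam, p_det, call_rate, fp_rate, seed=1):
    """Simulate both data types for n_sites survey sites.

    lam       -- expected true abundance per site (Poisson mean)
    p_det     -- per-individual detection probability in a point count
    call_rate -- expected vocalizations per individual per recording
    fp_rate   -- expected false-positive detections per recording
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n_sites):
        n_true = draw_poisson(rng, lam)
        # Point count: each individual detected independently with prob p_det.
        y = sum(rng.random() < p_det for _ in range(n_true))
        # Acoustic count: true vocalizations plus false-positive detections.
        v = draw_poisson(rng, call_rate * n_true) + draw_poisson(rng, fp_rate)
        data.append((n_true, y, v))
    return data
```

In this toy generator the acoustic counts carry information about abundance even when point counts are sparse, which is the intuition behind the integrated model's precision gains.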
This content will become publicly available on August 1, 2026
Animal-Borne Adaptive Acoustic Monitoring
Animal-borne acoustic sensors provide valuable insights into wildlife behavior and environments but face significant power and storage constraints that limit deployment duration. We present a novel adaptive acoustic monitoring system designed for long-term, real-time observation of wildlife. Our approach combines low-power hardware, configurable firmware, and an unsupervised machine learning algorithm that intelligently filters acoustic data to prioritize novel or rare sounds while reducing redundant storage. The system employs a variational autoencoder to project audio features into a low-dimensional space, followed by adaptive clustering to identify events of interest. Simulation results demonstrate the system’s ability to normalize the collection of acoustic events across varying abundance levels, with rare events retained at rates of 80–85% while frequent sounds are reduced to 3–10% retention. Initial field deployments on caribou, African elephants, and bighorn sheep show promising application across diverse species and ecological contexts. Power consumption analysis indicates the need for additional optimization to achieve multi-month deployments. This technology enables the creation of novel wilderness datasets while addressing the limitations of traditional static acoustic monitoring approaches, offering new possibilities for wildlife research, ecosystem monitoring, and conservation efforts.
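The retention behaviour described above (rare events kept at high rates, frequent sounds heavily thinned) can be sketched with a simple frequency-aware filter. In the real system the cluster labels come from adaptive clustering of variational-autoencoder embeddings; here labels are supplied directly, and `target` is an illustrative knob, not a parameter from the paper.

```python
import random

class AdaptiveRetention:
    """Keep-or-drop filter that favours rare acoustic events.

    The retention probability for a cluster decays as target / count_seen,
    so frequent clusters are thinned while novel or rare clusters are
    retained at high rates.
    """

    def __init__(self, target=5, seed=0):
        self.target = target
        self.counts = {}           # events seen so far, per cluster
        self.rng = random.Random(seed)

    def offer(self, cluster_id):
        """Return True if this event should be stored."""
        seen = self.counts.get(cluster_id, 0)
        self.counts[cluster_id] = seen + 1
        # First `target` events of any cluster are always kept; afterwards
        # the keep probability shrinks as the cluster becomes common.
        keep_prob = min(1.0, self.target / (seen + 1))
        return self.rng.random() < keep_prob
```

Running this on a stream dominated by one common sound class reproduces the qualitative pattern reported above: near-complete retention of rare events and single-digit percentage retention of frequent ones.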
- Award ID(s): 2312391
- PAR ID: 10609167
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Journal of Sensor and Actuator Networks
- Volume: 14
- Issue: 4
- ISSN: 2224-2708
- Page Range / eLocation ID: 1-25
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Warning signals are well known in the visual system, but rare in other modalities. Some moths produce ultrasonic sounds to warn bats of noxious taste or to mimic unpalatable models. Here, we report results from a long-term study across the globe, assaying moth response to playback of bat echolocation. We tested 252 genera, spanning most families of large-bodied moths, and document anti-bat ultrasound production in 52 genera, with eight subfamily origins described. Based on acoustic analysis of ultrasonic emissions and palatability experiments with bats, it seems that acoustic warning and mimicry are the raison d'être for sound production in most moths. However, some moths use high-duty-cycle ultrasound capable of jamming bat sonar. In fact, we find preliminary evidence of independent origins of sonar jamming in at least six subfamilies. Palatability data indicate that jamming and warning are not mutually exclusive strategies. To explore the possible organization of anti-bat warning sounds into acoustic mimicry rings, we intensively studied a community of moths in Ecuador and, using machine-learning approaches, found five distinct acoustic clusters. While these data represent an early understanding of acoustic aposematism and mimicry across this megadiverse insect order, it is likely that ultrasonically signaling moths comprise one of the largest mimicry complexes on earth.
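The clustering step behind such analyses can be sketched with plain k-means over acoustic features. The two-dimensional synthetic features and stdlib implementation below are illustrative only; the study's actual machine-learning pipeline is not reproduced here.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(group):
    """Component-wise mean of a group of feature vectors."""
    n = len(group)
    return tuple(sum(p[d] for p in group) / n for d in range(len(group[0])))

def kmeans(points, k, iters=25, seed=0):
    """Plain k-means clustering; returns (labels, centers)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)       # initialize from the data
    labels = [0] * len(points)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for idx, p in enumerate(points):
            c = min(range(k), key=lambda j: dist2(p, centers[j]))
            labels[idx] = c
            groups[c].append(p)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = [centroid(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return labels, centers
```

In practice one would cluster features such as duty cycle and peak frequency extracted from the recorded emissions, then inspect whether cluster membership aligns with putative mimicry rings.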
Audio is valuable in many mobile, embedded, and cyber-physical systems. We propose AvA, an acoustic adaptive filtering architecture, configurable to a wide range of applications and systems. By incorporating AvA into their own systems, developers can select which sounds to enhance or filter out depending on their application needs. AvA accomplishes this by using a novel adaptive beamforming algorithm called content-informed adaptive beamforming (CIBF), that directly uses detectors and sound models that developers have created for their own applications to enhance or filter out sounds. CIBF uses a novel three-step approach to propagate gradients from a wide range of different model types and signal feature representations to learn filter coefficients. We apply AvA to four scenarios and demonstrate that AvA enhances their respective performances by up to 11.1%. We also integrate AvA into two different mobile/embedded platforms with widely different resource constraints and target sounds/noises to show the boosts in performance and robustness these applications can see using AvA.
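CIBF itself is not reproduced here, but the adaptive-filtering idea it builds on can be illustrated with a classic LMS filter, which adjusts FIR taps online to make the filtered input track a desired signal. Everything below (tap count, step size `mu`) is a generic textbook sketch, not AvA's algorithm.

```python
import math

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Adapt FIR taps w so the filtered input tracks the desired signal d.

    Returns the final taps and the per-sample filter output.
    """
    w = [0.0] * n_taps
    buf = [0.0] * n_taps          # most recent input samples, newest first
    out = []
    for xi, di in zip(x, d):
        buf = [xi] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # filter output
        err = di - y                                  # instantaneous error
        # Gradient step on the squared error with respect to each tap.
        w = [wi + mu * err * bi for wi, bi in zip(w, buf)]
        out.append(y)
    return w, out
```

CIBF replaces the explicit desired signal with gradients propagated from application-level detectors and sound models, but the coefficient-update structure is the same family of idea.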
Abstract

Infrasound (low frequency sound waves) can be used to monitor and characterize volcanic eruptions. However, infrasound sensors are usually placed on the ground, thus providing a limited sampling of the acoustic radiation pattern that can bias source size estimates. We present observations of explosive eruptions from a novel uncrewed aircraft system (UAS)-based infrasound sensor platform that was strategically hovered near the active vents of Stromboli volcano, Italy. We captured eruption infrasound from short-duration explosions and jetting events. While potential vertical directionality was inconclusive for the short-duration explosion, we find that jetting events exhibit vertical sound directionality that was observed with a UAS close to vertical. This directionality would not have been observed using only traditional deployments of ground-based infrasound sensors, but is consistent with jet noise theory. This proof-of-concept study provides unique information that can improve our ability to characterize and quantify the directionality of volcanic eruptions and their associated hazards.
Smartphones and mobile applications have become an integral part of our daily lives. This is reflected by the increase in mobile devices, applications, and revenue generated each year. However, this growth is being met with an increasing concern for user privacy, and there have been many incidents of privacy and data breaches related to smartphones and mobile applications in recent years. In this work, we focus on improving privacy for audio-based mobile systems. These applications will generally listen to all sounds in the environment and may record privacy-sensitive signals, such as speech, that may not be needed for the application. We present PAMS, a software development package for mobile applications. PAMS integrates a novel sound source filtering algorithm called Probabilistic Template Matching to generate a set of privacy-enhancing filters that remove extraneous sounds using learned statistical "templates" of these sounds. We demonstrate the effectiveness of PAMS by integrating it into a sleep monitoring system, with the intent to remove extraneous speech from breathing, snoring, and other sleep sounds that the system is monitoring. By comparing our PAMS enhanced sleep monitoring system with existing mobile systems, we show that PAMS can reduce speech intelligibility by up to 74.3% while maintaining similar performance in detecting sleeping sounds.
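The template idea can be sketched with a diagonal-Gaussian model per sound class: learn per-dimension mean and variance from feature vectors of the unwanted class (e.g. speech), then drop frames whose likelihood under that template is high. This is an illustrative stand-in, not the paper's Probabilistic Template Matching implementation; the feature vectors and threshold are hypothetical.

```python
import math

def gaussian_template(samples):
    """Learn a diagonal-Gaussian 'template' (per-dim mean and variance)
    from feature vectors of one sound class."""
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / len(samples) for d in range(dims)]
    vars_ = [max(1e-6, sum((s[d] - means[d]) ** 2 for s in samples) / len(samples))
             for d in range(dims)]
    return means, vars_

def log_likelihood(x, template):
    """Log-density of feature vector x under the diagonal Gaussian."""
    means, vars_ = template
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, vars_))

def filter_frames(frames, unwanted_template, threshold):
    """Keep only frames that do NOT match the unwanted-sound template."""
    return [f for f in frames if log_likelihood(f, unwanted_template) < threshold]
```

Frames resembling the learned speech template are suppressed while dissimilar frames (breathing, snoring, other target sounds) pass through unchanged.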
