Coral reefs are biodiverse marine ecosystems that are undergoing rapid changes, making monitoring vital as we seek to manage and mitigate stressors. Healthy reef soundscapes are rich with sounds, enabling passive acoustic recording and soundscape analyses to emerge as cost-effective, long-term methods for monitoring reef communities. Yet most biological reef sounds have not been identified or described, limiting the effectiveness of acoustic monitoring for diversity assessments. Machine learning offers a solution to scale such analyses but has yet to be successfully applied to characterize the diversity of reef fish sounds. Here we sought to characterize and categorize coral reef fish sounds using unsupervised machine learning methods. Pulsed fish and invertebrate sounds from 480 min of data sampled across 10 days over a 2-month period on a US Virgin Islands reef were manually identified and extracted, then grouped into acoustically similar clusters using unsupervised clustering based on acoustic features. The defining characteristics of these clusters were described and compared to determine the extent of acoustic diversity detected on these reefs. Approximately 55 distinct calls were identified, ranging in centroid frequency from 50 Hz to 1,300 Hz. Within this range, two main sub-bands containing multiple signal types were identified, from 100 Hz to 400 Hz and from 300 Hz to 700 Hz, with a variety of signals outside these two main bands. These methods may be used to seek out acoustic diversity across additional marine habitats. The signals described here, though taken from a limited dataset, speak to the diversity of sounds produced on coral reefs and suggest that there might be more acoustic niche differentiation within soniferous fish communities than has been previously recognized.
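The clustering workflow summarized above can be illustrated with a minimal sketch: extract a spectral-centroid feature from each pulsed call and group calls with a small 1-D k-means. The synthetic tone bursts, the single-feature representation, and the `kmeans_1d` helper are illustrative assumptions, not the study's actual feature set or algorithm.

```python
import numpy as np

def spectral_centroid(x, fs):
    """Magnitude-weighted mean frequency of a signal (Hz)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(np.sum(freqs * spec) / np.sum(spec))

def kmeans_1d(values, k, iters=50, seed=0):
    """Tiny 1-D k-means; a stand-in for the study's clustering step."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

fs = 4000                       # Hz, assumed sample rate for the sketch
t = np.arange(0, 0.1, 1.0 / fs)
# Synthetic tone bursts placed in the two sub-bands reported above
# (roughly 100-400 Hz and 300-700 Hz).
calls = [np.sin(2 * np.pi * f * t) for f in (150, 200, 250, 450, 500, 550)]
features = np.array([spectral_centroid(c, fs) for c in calls])
labels, centers = kmeans_1d(features, k=2)
```

On these synthetic bursts the recovered clusters separate the low-band and high-band tones by centroid frequency, mirroring how distinct call types would partition in feature space.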
This content will become publicly available on August 1, 2026
Animal-Borne Adaptive Acoustic Monitoring
Animal-borne acoustic sensors provide valuable insights into wildlife behavior and environments but face significant power and storage constraints that limit deployment duration. We present a novel adaptive acoustic monitoring system designed for long-term, real-time observation of wildlife. Our approach combines low-power hardware, configurable firmware, and an unsupervised machine learning algorithm that intelligently filters acoustic data to prioritize novel or rare sounds while reducing redundant storage. The system employs a variational autoencoder to project audio features into a low-dimensional space, followed by adaptive clustering to identify events of interest. Simulation results demonstrate the system’s ability to normalize the collection of acoustic events across varying abundance levels, with rare events retained at rates of 80–85% while frequent sounds are reduced to 3–10% retention. Initial field deployments on caribou, African elephants, and bighorn sheep show promising application across diverse species and ecological contexts. Power consumption analysis indicates the need for additional optimization to achieve multi-month deployments. This technology enables the creation of novel wilderness datasets while addressing the limitations of traditional static acoustic monitoring approaches, offering new possibilities for wildlife research, ecosystem monitoring, and conservation efforts.
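The abundance-normalizing behaviour described above (rare events retained at high rates, frequent sounds heavily thinned) can be sketched with a count-based retention rule. This is a simplified stand-in: a random 2-D point plays the role of the VAE embedding, the cluster centres are fixed rather than adaptively learned, and the `target` parameter is a hypothetical knob, not a documented setting of the system.

```python
import numpy as np

class AdaptiveRetention:
    """Count-based retention sketch: rare clusters keep most events,
    frequent clusters are heavily thinned."""

    def __init__(self, centers, target=5.0, seed=0):
        self.centers = np.asarray(centers, dtype=float)  # fixed stand-in for adaptive clustering
        self.counts = np.zeros(len(self.centers))
        self.target = target  # desired number of retained events per cluster
        self.rng = np.random.default_rng(seed)

    def process(self, z):
        # Assign the embedding to its nearest cluster centre.
        j = int(np.argmin(np.linalg.norm(self.centers - z, axis=1)))
        self.counts[j] += 1
        # Keep everything until `target` events have been seen, then thin
        # in inverse proportion to the cluster's running abundance.
        p_keep = min(1.0, self.target / self.counts[j])
        return bool(self.rng.random() < p_keep)

rng = np.random.default_rng(1)
ar = AdaptiveRetention(centers=[[0.0, 0.0], [5.0, 5.0]])
common = rng.normal([0.0, 0.0], 0.3, size=(1000, 2))  # abundant sound type
rare = rng.normal([5.0, 5.0], 0.3, size=(10, 2))      # rare sound type
kept_common = sum(ar.process(z) for z in common)
kept_rare = sum(ar.process(z) for z in rare)
```

Running this, the rare cluster retains most of its ten events while the abundant cluster is cut to a few percent, the qualitative pattern reported in the simulations.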
- Award ID(s): 2312391
- PAR ID: 10609167
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Journal of Sensor and Actuator Networks
- Volume: 14
- Issue: 4
- ISSN: 2224-2708
- Page Range / eLocation ID: 1-25
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Monitoring wildlife abundance across space and time is an essential task to study their population dynamics and inform effective management. Acoustic recording units are a promising technology for efficiently monitoring bird populations and communities. While current acoustic data models provide information on the presence/absence of individual species, new approaches are needed to monitor population abundance, ideally across large spatio-temporal regions. We present an integrated modelling framework that combines high-quality but temporally sparse bird point count survey data with acoustic recordings. Our models account for imperfect detection in both data types and false positive errors in the acoustic data. Using simulations, we compare the accuracy and precision of abundance estimates using differing amounts of acoustic vocalizations obtained from a clustering algorithm, point count data, and a subset of manually validated acoustic vocalizations. We also use our modelling framework in a case study to estimate abundance of the Eastern Wood-Pewee (Contopus virens) in Vermont, USA. The simulation study reveals that combining acoustic and point count data via an integrated model improves accuracy and precision of abundance estimates compared with models informed by either acoustic or point count data alone. Improved estimates are obtained across a wide range of scenarios, with the largest gains occurring when detection probability for the point count data is low. Combining acoustic data with only a small number of point count surveys yields estimates of abundance without the need for validating any of the identified vocalizations from the acoustic data. Within our case study, the integrated models provided moderate support for a decline of the Eastern Wood-Pewee in this region. Our integrated modelling approach combines dense acoustic data with few point count surveys to deliver reliable estimates of species abundance without the need for manual identification of acoustic vocalizations or a prohibitively expensive large number of repeated point count surveys. Our proposed approach offers an efficient monitoring alternative for large spatio-temporal regions when point count data are difficult to obtain or when monitoring is focused on rare species with low detection probability.
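As a toy illustration of why integrating the two data streams helps, the sketch below fits a single abundance parameter to sparse point counts and dense acoustic detections under a deliberately simplified Poisson model. The detection rate `p`, per-individual call rate `delta`, and false-positive rate `omega` are assumed known here; the paper's actual model estimates these jointly and treats imperfect detection far more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed-known rates (a simplification for the sketch).
lam_true, p, delta, omega = 40.0, 0.3, 0.5, 2.0

counts = rng.poisson(lam_true * p, size=5)              # 5 sparse point-count surveys
calls = rng.poisson(lam_true * delta + omega, size=60)  # 60 dense acoustic windows

def neg_log_lik(lam, use_acoustic=True):
    """Poisson negative log-likelihood for abundance lam (constants dropped)."""
    ll = np.sum(counts * np.log(lam * p) - lam * p)
    if use_acoustic:
        rate = lam * delta + omega  # acoustic rate includes false positives
        ll += np.sum(calls * np.log(rate) - rate)
    return -ll

grid = np.linspace(1.0, 100.0, 991)  # 0.1-wide grid search over abundance
mle_joint = grid[np.argmin([neg_log_lik(g) for g in grid])]
mle_counts_only = grid[np.argmin([neg_log_lik(g, use_acoustic=False) for g in grid])]
```

The joint estimate draws on sixty acoustic windows instead of five surveys, so it sits much closer to the true abundance on average, echoing the precision gains the simulation study reports.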
-
Warning signals are well known in the visual system, but rare in other modalities. Some moths produce ultrasonic sounds to warn bats of noxious taste or to mimic unpalatable models. Here, we report results from a long-term study across the globe, assaying moth response to playback of bat echolocation. We tested 252 genera, spanning most families of large-bodied moths, and document anti-bat ultrasound production in 52 genera, with eight subfamily origins described. Based on acoustic analysis of ultrasonic emissions and palatability experiments with bats, it seems that acoustic warning and mimicry are the raison d'être for sound production in most moths. However, some moths use high-duty-cycle ultrasound capable of jamming bat sonar. In fact, we find preliminary evidence of independent origins of sonar jamming in at least six subfamilies. Palatability data indicate that jamming and warning are not mutually exclusive strategies. To explore the possible organization of anti-bat warning sounds into acoustic mimicry rings, we intensively studied a community of moths in Ecuador and, using machine-learning approaches, found five distinct acoustic clusters. While these data represent an early understanding of acoustic aposematism and mimicry across this megadiverse insect order, it is likely that ultrasonically signaling moths comprise one of the largest mimicry complexes on earth.
-
Audio is valuable in many mobile, embedded, and cyber-physical systems. We propose AvA, an acoustic adaptive filtering architecture, configurable to a wide range of applications and systems. By incorporating AvA into their own systems, developers can select which sounds to enhance or filter out depending on their application needs. AvA accomplishes this by using a novel adaptive beamforming algorithm called content-informed adaptive beamforming (CIBF), which directly uses detectors and sound models that developers have created for their own applications to enhance or filter out sounds. CIBF uses a novel three-step approach to propagate gradients from a wide range of different model types and signal feature representations to learn filter coefficients. We apply AvA to four scenarios and demonstrate that AvA enhances their respective performances by up to 11.1%. We also integrate AvA into two different mobile/embedded platforms with widely different resource constraints and target sounds/noises to show the boosts in performance and robustness these applications can see using AvA.
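CIBF itself propagates gradients through application-specific detectors and sound models; as a much simpler illustration of the underlying idea of learning filter coefficients online, the sketch below runs a classic least-mean-squares (LMS) noise canceller. This is not the AvA algorithm, and all signal parameters are invented for the demo.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.005):
    """Least-mean-squares adaptive filter: learns weights w online so that
    the filtered reference x predicts the correlated part of d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        x_win = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y[n] = w @ x_win
        e = d[n] - y[n]                        # what the filter cannot explain
        w += 2.0 * mu * e * x_win              # stochastic-gradient update
    return y, w

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
target = np.sin(2 * np.pi * 5.0 * t)   # the sound the application wants
noise = rng.normal(0.0, 1.0, len(t))   # interference, also captured on a reference channel
d = target + 0.5 * noise               # observed mixture
y, w = lms_filter(noise, d)            # predict the noise component of d
enhanced = d - y                       # mixture minus predicted noise
```

After the weights converge, subtracting the filter's prediction removes most of the interference while leaving the target signal intact; CIBF generalizes this kind of coefficient learning to gradients supplied by arbitrary detectors.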
-
Abstract Infrasound (low-frequency sound waves) can be used to monitor and characterize volcanic eruptions. However, infrasound sensors are usually placed on the ground, thus providing a limited sampling of the acoustic radiation pattern that can bias source size estimates. We present observations of explosive eruptions from a novel uncrewed aircraft system (UAS)-based infrasound sensor platform that was strategically hovered near the active vents of Stromboli volcano, Italy. We captured eruption infrasound from short-duration explosions and jetting events. While potential vertical directionality was inconclusive for the short-duration explosion, we find that jetting events exhibit vertical sound directionality, observed with a UAS positioned close to the vertical. This directionality would not have been observed using only traditional deployments of ground-based infrasound sensors, but is consistent with jet noise theory. This proof-of-concept study provides unique information that can improve our ability to characterize and quantify the directionality of volcanic eruptions and their associated hazards.
