Abstract: Herein, we rebut the critique by Stammnitz et al. (2024), entitled “No evidence that a transmissible cancer has shifted from emergence to endemism”, of Patton et al. (2020). First and foremost, the authors do not conduct any phylogenetic or epidemiological analyses to rebut the inferences from the main results of the Patton et al. (2020) article, leaving the title of their rebuttal without evidence or merit. Additionally, Stammnitz et al. (2024) present a phylogenetic tree based on only 32 copy number variants, which are not typically used in phylogenetic analyses and which evolve in a completely different way from DNA sequences, to “rebut” our tree that was inferred from 436.1 kb of sequence data and nearly two orders of magnitude more parsimony-informative sites (2520 SNPs). As such, it is not surprising that their phylogeny did not have a branching pattern similar to ours, given that support for each branch of their tree was weak and the branches essentially formed a polytomy. That is, one could rotate their resulting tree in any direction and, by its nature, it would not match ours. While the authors are correct that we used suboptimal filtering of our raw whole-genome sequencing data, re-analyses of the data with 30X coverage, as suggested, resulted in a mutation rate similar to that reported in Stammnitz et al. (2024). Most importantly, when we re-analyzed our data, as well as Stammnitz et al.'s own data, the results of the Patton et al. (2020) article are supported by both datasets. That is, the effective transmission rate of DFTD has transitioned over time to approach one, suggesting endemism, and the spread of DFTD is rapid and omnidirectional despite the observed east-to-west wave of spread. Overall, Stammnitz et al. (2024) not only fail to provide evidence contradicting the findings of Patton et al. (2020), but rather help support those results with their own data.
Demo of Spatial Audification in OpenSpace: MMS Mission
This is an audio demo; listen with headphones. The audio begins around the 0:55 mark. In Collins et al. (2024), we demonstrated a spatial audification of data from NASA's Magnetospheric Multiscale (MMS) mission produced with open-source tools in Python. In that demo, however, the sound sources for each satellite are placed in a static and representative position. Here, we use OpenSpace to associate each audio stream with its respective spacecraft, so that the audification may be experienced with spatial fidelity on a flexible timescale. This proof-of-concept uses the Open Sound Control protocol to send positional data of the sound sources from OpenSpace to SuperCollider, a method also used in Elmquist et al. (2024).
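As a rough illustration of the OpenSpace-to-SuperCollider link described above, the sketch below forwards one spacecraft position over OSC using the python-osc package. The host, port, OSC address path, and coordinate values are placeholder assumptions, not the demo's actual configuration; in the demo itself, OpenSpace emits such messages as the visualization plays back.

```python
# Minimal sketch: send a (hypothetical) spacecraft position to SuperCollider via OSC.
# The address "/mms/1/pos" and the coordinates are assumptions for illustration only.
from pythonosc import udp_client

# SuperCollider's language (sclang) listens on UDP port 57120 by default.
client = udp_client.SimpleUDPClient("127.0.0.1", 57120)

# Hypothetical position for one MMS spacecraft (e.g., km in an Earth-centered frame).
x, y, z = 10200.0, -3400.0, 1500.0
client.send_message("/mms/1/pos", [x, y, z])  # a receiver in SuperCollider would map this to a sound source
```

On the receiving side, a SuperCollider OSC responder would presumably update the spatial position of the corresponding audio stream each time such a message arrives.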
- Award ID(s): 2218996
- PAR ID: 10562842
- Publisher / Repository: Zenodo
- Date Published:
- Format(s): Medium: X
- Right(s): Creative Commons Attribution 4.0 International
- Sponsoring Org: National Science Foundation
More Like this
-
Spatial sound reasoning is a fundamental human skill, enabling us to navigate and interpret our surroundings based on sound. In this paper we present BAT, which combines the spatial sound perception ability of a binaural acoustic scene analysis model with the natural language reasoning capabilities of a large language model (LLM) to replicate this innate ability. To address the lack of existing datasets of in-the-wild spatial sounds, we synthesized a binaural audio dataset using AudioSet and SoundSpaces 2.0. Next, we developed SpatialSoundQA, a spatial sound-based question-answering dataset, offering a range of QA tasks that train BAT in various aspects of spatial sound perception and reasoning. The acoustic front-end encoder of BAT is a novel spatial audio encoder named Spatial Audio Spectrogram Transformer, or Spatial-AST, which by itself achieves strong performance across sound event detection, spatial localization, and distance estimation. By integrating Spatial-AST with the LLaMA-2 7B model, BAT transcends standard Sound Event Localization and Detection (SELD) tasks, enabling the model to reason about the relationships between the sounds in its environment. Our experiments demonstrate BAT's superior performance on both spatial sound perception and reasoning, showcasing the immense potential of LLMs in navigating and interpreting complex spatial audio environments.
-
Smart IoT Speakers, while connected over a network, currently only produce sounds that come directly from the individual devices. We envision a future where smart speakers collaboratively produce a fabric of spatial audio, capable of perceptually placing sound in a range of locations in physical space. This could provide audio cues in homes, offices and public spaces that are flexibly linked to various positions. The perception of spatialized audio relies on binaural cues, especially the time difference and the level difference of incident sound at a user’s left and right ears. Traditional stereo speakers cannot create the spatialization perception for a user when playing binaural audio due to auditory crosstalk, as each ear hears a combination of both speaker outputs. We present Xblock, a novel time-domain pose-adaptive crosstalk cancellation technique that creates a spatial audio perception over a pair of speakers using knowledge of the user’s head pose and speaker positions. We build a prototype smart speaker IoT system empowered by Xblock, explore the effectiveness of Xblock through signal analysis, and discuss future perceptual user studies and future work.
-
We introduce multimodal neural acoustic fields for synthesizing spatial sound and enabling the creation of immersive auditory experiences from novel viewpoints and in completely unseen new environments, both virtual and real. Extending the concept of neural radiance fields to acoustics, we develop a neural network-based model that maps an environment's geometric and visual features to its audio characteristics. Specifically, we introduce a novel hybrid transformer-convolutional neural network to accomplish two core tasks: capturing the reverberation characteristics of a scene from audio-visual data, and generating spatial sound in an unseen new environment from signals recorded at sparse positions and orientations within the original scene. By learning to represent spatial acoustics in a given environment, our approach enables the creation of realistic immersive auditory experiences, thereby enhancing the sense of presence in augmented and virtual reality applications. We validate the proposed approach on both synthetic and real-world visual-acoustic data and demonstrate that our method produces nonlinear acoustic effects such as reverberations, and improves spatial audio quality compared to existing methods. Furthermore, we also conduct subjective user studies and demonstrate that the proposed framework significantly improves audio perception in immersive mixed reality applications.
-
Devices from smartphones to televisions are beginning to employ dual purpose displays, where the display serves as both a video screen and a loudspeaker. In this paper we demonstrate a method to generate localized sound-radiating regions on a flat-panel display. An array of force actuators affixed to the back of the panel is driven by appropriately filtered audio signals so the total response of the panel due to the actuator array approximates a target spatial acceleration profile. The response of the panel to each actuator individually is initially measured via a laser vibrometer, and the required actuator filters for each source position are determined by an optimization procedure that minimizes the mean squared error between the reconstructed and targeted acceleration profiles. Since the single-actuator panel responses are determined empirically, the method does not require analytical or numerical models of the system’s modal response, and thus is well-suited to panels having the complex boundary conditions typical of television screens, mobile devices, and tablets. The method is demonstrated on two panels with differing boundary conditions. When integrated with display technology, the localized audio source rendering method may transform traditional displays into multimodal audio-visual interfaces by colocating localized audio sources and objects in the video stream.
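The filter-design step described in this last item amounts to a least-squares fit of actuator drive signals to a target acceleration profile. The sketch below is a generic, single-frequency illustration of that idea using made-up data; it is not the authors' implementation, and all array names, shapes, and values are hypothetical.

```python
# Generic sketch: least-squares actuator weights for one frequency bin.
# H[m, k] = measured acceleration at panel point m due to actuator k (complex, e.g. from vibrometer data).
# target[m] = desired acceleration profile at the same points.
# This only illustrates "minimize mean squared error between reconstructed and targeted
# acceleration profiles"; the published method's details may differ.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_actuators = 64, 8
H = rng.standard_normal((n_points, n_actuators)) + 1j * rng.standard_normal((n_points, n_actuators))
target = np.zeros(n_points, dtype=complex)
target[10:14] = 1.0  # localized source region (placeholder)

# Least-squares solve for the actuator drive weights at this frequency.
weights, *_ = np.linalg.lstsq(H, target, rcond=None)
reconstructed = H @ weights
mse = np.mean(np.abs(reconstructed - target) ** 2)
print(f"residual MSE: {mse:.4f}")
```

Repeating such a solve across frequency bins would yield per-actuator filters; regularization and constraints would presumably matter in practice.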