The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception and to validate existing ones. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system's unique capabilities and present findings from an initial externally funded study assessing human aural spatial perception capacity. Compared with the existing literature on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that a physical implementation using high-density loudspeaker arrays enables significantly better spatial perception accuracy than egocentric and virtual binaural approaches. These preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow head and body movement is significantly greater than in egocentric scenarios where such movement is restricted. We therefore advise the use of immersive exocentric environments in the design of immersive auditory displays. Further, our data identify a significant gap between physical and virtual spatial aural perception accuracy, which suggests that virtual aural immersion needs further development before it can be considered a viable alternative.
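The abstract does not specify how the layer-based amplitude-panning algorithm works; as a minimal sketch of one plausible approach (not the SADIE implementation), gain can be crossfaded between the two loudspeaker layers that bracket a target elevation using an equal-power law. The function name, the elevation-keyed layers, and the crossfade law are all illustrative assumptions:

```python
import math

def layer_pan_gains(elevation, layer_elevations):
    """Distribute gain across the two loudspeaker layers bracketing
    the target elevation (angles in degrees), using an equal-power
    crossfade so total energy stays constant across layers."""
    layers = sorted(layer_elevations)
    if elevation <= layers[0]:
        return {layers[0]: 1.0}
    if elevation >= layers[-1]:
        return {layers[-1]: 1.0}
    # find the adjacent pair of layers that brackets the target
    for lo, hi in zip(layers, layers[1:]):
        if lo <= elevation <= hi:
            frac = (elevation - lo) / (hi - lo)
            # cos/sin pair: gains sum to unit energy for any frac
            return {lo: math.cos(frac * math.pi / 2),
                    hi: math.sin(frac * math.pi / 2)}

# halfway between the 0-degree and 45-degree layers: equal gains
gains = layer_pan_gains(22.5, [0, 45, 90])
```

Within each layer, the gain would then be panned pairwise across azimuth in the same fashion; that step is omitted here for brevity.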
Reimagining Human Capacity for Location-Aware Audio Pattern Recognition: A Case for Immersive Exocentric Sonification
The following paper presents a cross-disciplinary snapshot of 21st century research in sonification and leverages the review to identify a new immersive exocentric approach to studying human capacity to perceive spatial aural cues. The paper further defines immersive exocentric sonification, highlights its unique affordances, and presents an argument for its potential to fundamentally change the way we understand and study the human capacity for location-aware audio pattern recognition. Finally, the paper describes an example of an externally funded research project that aims to tackle this newfound research whitespace.
- Award ID(s):
- 1748667
- PAR ID:
- 10065370
- Date Published:
- Journal Name:
- International Conference on Auditory Display
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Sonification is a method of representing data and conveying information using sound. Just as a Geiger counter conveys radiation levels through clicks, sound can help humans understand complex data sets that cannot be visualized or are too complex for visual displays. Sonification research and teaching have been conducted predominantly at the higher-education level. However, as STEM-related programs and activities become increasingly important in secondary education, high school students can be exposed to university-level research through project-based learning (PBL) activities in the classroom. Using a physical snake robot prototype built and programmed with low-cost materials, high school students are introduced to the field of sonification and its applications to snake robots. This paper demonstrates the feasibility of using project-based learning to teach university-level research in secondary education. Through the sonification of snake robot movement, students learned advanced topics in robotics, with the goal of showing that university-level research is accessible and understandable through PBL. The paper begins by discussing the concept of human-robot interaction, introduces sonification, and gives a brief overview of project-based learning. It then presents a detailed discussion of how the snake robot prototype was constructed and programmed, an in-depth explanation of the sonification algorithm used, and how sonification was taught in a high school classroom using PBL, along with student feedback and suggestions for future work.
Despite the inherent need to enhance human-robot interaction (HRI) by non-visually communicating robotic movements and intentions, the application of sonification (the translation of data into audible information) within the field of robotics remains underexplored. This paper investigates the problem of designing sonification algorithms that translate the motion of teams of industrial mobile robots into non-speech sounds. Our proposed solution leverages the wave space sonification (WSS) framework and utilizes localized wave fields with specific orientations within the system configuration space. This WSS-based algorithm generates sounds from the motion data of mobile robots so that the resulting audio exhibits a chosen timbre when the robots pass near designated configurations or move along desired directions. To demonstrate its versatility, the WSS-based sonification algorithm is applied to a team of OMRON LD series autonomous mobile robots, sonifying their motion patterns with pure tonal sounds.
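The WSS framework itself is richer than an abstract can convey; as a rough illustration of the core idea only (sound that becomes audible near designated configurations), a Gaussian amplitude envelope over configuration space can stand in for a localized wave field. Every name, constant, and formula below is an illustrative assumption, not the paper's algorithm:

```python
import math

def field_amplitude(config, center, width):
    """Gaussian envelope over configuration space: strong near
    `center`, vanishing far away (a crude stand-in for a
    localized wave field)."""
    d2 = sum((q - c) ** 2 for q, c in zip(config, center))
    return math.exp(-d2 / (2.0 * width ** 2))

def sonify_configuration(config, center, width, base_freq=440.0):
    """Map one robot configuration to a (frequency_hz, amplitude)
    pair: the designated timbre is only heard near the target."""
    return base_freq, field_amplitude(config, center, width)

# a robot exactly at the designated configuration sounds at full level
freq, amp = sonify_configuration((1.0, 2.0), (1.0, 2.0), width=0.5)
```

Orientation-selective fields (sounding only when the robot moves along a desired direction) would additionally weight the envelope by the robot's velocity direction; that refinement is omitted here.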
Much work already exists on algorithm visualization (the graphical representation of an algorithm's behavior) and its benefits for student learning. Visualization, however, offers limited benefit for students with visual impairments. This paper explores algorithm sonification: the representation of an algorithm's behavior using sound. To simplify the creation of sonifications for modern algorithms, this paper presents a new Thread Safe Audio Library (TSAL). To illustrate how to create sonifications, the authors have added TSAL calls to four common sorting algorithm implementations, so that as the program accesses a value being sorted, it plays a tone whose pitch is scaled to that value's magnitude. In the resulting sonifications, one can hear, in real time, the behavioral differences among the sorting algorithms as they run, and directly experience how fast (or slowly) the algorithms sort the same sequence compared with one another. This paper presents experimental evidence that the sonifications improve students' long-term recall of the four sorting algorithms' relative speeds, and discusses other potential uses of sonification.
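The mapping described in the abstract (each element access triggers a tone whose pitch scales with the value's magnitude) can be sketched without the TSAL library itself by abstracting tone emission into a callback. The linear value-to-frequency mapping, the frequency range, and the choice of bubble sort are illustrative assumptions, not details from the paper:

```python
def value_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a value's magnitude into an audible range."""
    frac = (value - lo) / (hi - lo) if hi != lo else 0.0
    return f_min + frac * (f_max - f_min)

def sonified_bubble_sort(data, emit):
    """Bubble sort that calls `emit(freq_hz)` on every element
    access, standing in for a TSAL-style tone call."""
    a = list(data)
    lo, hi = min(a), max(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            emit(value_to_pitch(a[j], lo, hi))      # access a[j]
            emit(value_to_pitch(a[j + 1], lo, hi))  # access a[j+1]
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

tones = []
result = sonified_bubble_sort([3, 1, 2], tones.append)
```

Running the same driver over several sorting algorithms would make their differing access patterns, and hence their differing sound textures and durations, directly audible.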
Active open-vent volcanoes produce intense infrasound airwaves, and volcanoes with prominent craters can create strongly resonant signals that are inaudible to humans and often peak around 1 Hz. Volcano infrasound is used to model eruption dynamics and the structure of volcanic craters, and can serve as a component of volcano monitoring infrastructure. We have developed a portable, on-site, real-time sonification device that emits an audible sound in response to an infrasonic airwave. The device can be used near an active volcano both as a real-time educational aid and as an accessible tool for monitoring the state of volcanic activity. This paper presents the device with its hardware and software implementation, its parameter-mapping sonification algorithm, recommendations for its use in the field, and strategies for future improvements.
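The abstract does not detail the device's parameter-mapping algorithm; as a hedged sketch of one common approach, the amplitude envelope of the infrasound signal can drive the pitch of a synthesized audible tone. The pressure range, frequency range, and logarithmic mapping below are assumptions for illustration, not the device's actual calibration:

```python
import math

def envelope_to_pitch(rms_pa, p_min=0.01, p_max=10.0,
                      f_min=200.0, f_max=2000.0):
    """Map the RMS pressure of an infrasonic airwave (pascals)
    to an audible frequency on a logarithmic scale, so stronger
    airwaves are heard as higher tones."""
    rms = min(max(rms_pa, p_min), p_max)  # clamp to assumed range
    frac = math.log(rms / p_min) / math.log(p_max / p_min)
    return f_min * (f_max / f_min) ** frac
```

In a real-time loop, the device would compute `rms_pa` over a short sliding window of infrasound samples and retune its oscillator at each step, preserving the temporal pattern of the ~1 Hz airwaves while making them audible.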

