
Title: Auditory Display in Interactive Science Simulations: Description and Sonification Support Interaction and Enhance Opportunities for Learning
Science simulations are widely used in classrooms to support inquiry-based learning of complex science concepts. These tools typically rely on interactive visual displays to convey relationships. Auditory displays, including verbal description and sonification (non-speech audio), combined with alternative input capabilities, may provide an enhanced experience for learners, particularly learners with visual impairments. We conducted semi-structured interviews and usability testing with eight adult learners with visual impairments across two audio-enhanced simulations. We analyzed trends and edge cases in participants' interaction patterns, interpretations, and preferences. Findings include common interaction patterns across simulation use, increased efficiency with second use, and the complementary roles that description and sonification play in supporting learning opportunities. We discuss how these control and display layers work to encourage exploration and engagement with science simulations. We conclude with general and specific design takeaways to support the implementation of auditory displays for accessible simulations.
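To make the sonification side of such an auditory display concrete, the sketch below maps one continuously changing simulation value to the pitch of a tone using the Web Audio API. This is a minimal sketch under stated assumptions (a web-based simulation, an illustrative `value` quantity, and a 220-880 Hz pitch range); it is not the mapping used in the published simulations.

```typescript
// Minimal parameter-mapping sonification sketch (Web Audio API).
// The mapped quantity and pitch range are illustrative assumptions,
// not the published simulations' actual design.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.type = 'sine';
gain.gain.value = 0.1;                        // keep the tone quiet
osc.connect(gain).connect(ctx.destination);
osc.start();

// Map a normalized simulation quantity in [0, 1] (e.g., a spring's
// displacement) onto a 220-880 Hz pitch range, smoothing changes so
// continuous interaction produces a glide rather than clicks.
function sonify(value: number): void {
  const v = Math.min(1, Math.max(0, value));
  const freq = 220 + v * (880 - 220);
  osc.frequency.setTargetAtTime(freq, ctx.currentTime, 0.02);
}

// Example: call sonify() from the simulation's update loop or input handler.
sonify(0.5); // mid-range value -> 550 Hz
```

In an interactive simulation, sonify() would be driven by the same input events that trigger verbal description, so both display layers respond to a single user action.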
Authors:
Award ID(s):
1621363
Publication Date:
NSF-PAR ID:
10216337
Journal Name:
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Page Range or eLocation-ID:
1 - 12
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a multimodal physics simulation, including visual and auditory (description, sound effects, and sonification) modalities, to support the diverse needs of learners. We describe design challenges and solutions, and findings from final simulation evaluations with learners with and without visual impairments. We also share insights from conducting research with members of diverse learner groups (N = 52). This work presents approaches for designing and evaluating accessible interactive simulations for learners with diverse needs.
  2. Auditory description display is verbalized text typically used to describe live, recorded, or graphical displays to support access for people who are blind or visually impaired. Significant prior research has resulted in guidelines for auditory description in non-interactive or minimally interactive contexts. A lack of auditory description for complex interactive environments remains a tremendous barrier to access for people with visual impairments. In this work, we present a systematic design framework for designing auditory description within complex interactive environments. We illustrate how modular descriptions aligned with this framework can result in an interactive storytelling experience constructed through user interactions (a minimal illustrative sketch of this modular idea appears after this list). This framework has been used in a set of published and widely used interactive science simulations, and in its generalized form could be applied to a variety of contexts.
  3. The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception, and to validate existing relationships. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm (a generic sketch of layer-based panning appears after this list). Here we highlight the system's unique capabilities and provide findings from an initial externally funded study that focuses on the assessment of human aural spatial perception capacity. When compared to the existing body of literature focusing on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that the physical implementation using high-density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches. The preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow for head and body movement is significantly greater than in egocentric scenarios where head and body movement is restricted. Therefore, in the design of immersive auditory displays, the use of immersive exocentric environments is advised. Further, our data identify a significant gap between physical and virtual human spatial aural perception accuracy, which suggests that further development of virtual aural immersion may be necessary before such an approach may be seen as a viable alternative.
  4. Sonification of time series data in natural science has gained increasing attention as an observational and educational tool. Sound is a direct representation for oscillatory data, but for most phenomena, less direct representational methods are necessary. Coupled with animated visual representations of the same data, the visual and auditory systems can work together to identify complex patterns quickly. We developed a multivariate data sonification and visualization approach to explore and convey patterns in a complex dynamic system, Lone Star Geyser in Yellowstone National Park. This geyser has erupted regularly for at least 100 years, with remarkable consistency in the interval between eruptions (three hours) but with significant variations in smaller scale patterns between each eruptive cycle. From a scientific standpoint, the ability to hear structures evolving over time in multiparameter data permits the rapid identification of relationships that might otherwise be overlooked or require significant processing to find. The human auditory system is adept at physical interpretation of call-and-response or causality in polyphonic sounds. Methods developed here for oscillatory and nonstationary data have great potential as scientific observational and educational tools, for data-driven composition with scientific and artistic intent, and towards the development of machine learning tools for pattern identification in complex data.
  5. Role-plays of interpersonal interactions are essential to learning across professions, but effective simulations are difficult to create in typical learning management systems. To empower educators and researchers to advance simulation-based pedagogy, we have developed the Digital Clinical Simulation Suite (DCSS, pronounced "decks"), an open-source platform for rehearsing for improvisational interactions. Participants are immersed in vignettes of professional practice through video, images, and text, and they are called upon to make difficult decisions improvisationally through recorded audio and text. Tailored data displays support participant reflection, instructional facilitation, and educational research. DCSS is based on six design principles: 1) Community Adaptation, 2) Masked Technical Complexity, 3) Authenticity of Task, 4) Improvisational Voice, 5) Data Access through "5Rs", and 6) Extensible AI Coaching. These six principles mean that any educator should be able to create a scenario, that learners should be able to engage in authentic professional challenges using ordinary computing devices, and that learners and educators should have access to data for reflection, facilitation, and development of AI tools for real-time feedback. In this paper, we describe the architecture of DCSS and illustrate its use and efficacy in cases from online courses, colleges of education, and K-12 schools.
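To make the modular-description idea in item 2 concrete, here is a minimal sketch in which short, state-dependent clauses are composed into a full description on demand. The state shape and clause names are hypothetical (loosely inspired by a balloon-and-sweater electrostatics scenario) and are not the framework's actual API.

```typescript
// Hypothetical simulation state; field names are illustrative only.
interface SimState {
  balloonCharge: number;   // net electrons picked up by rubbing
  balloonPosition: 'near the sweater' | 'in the middle of the play area' | 'near the wall';
}

// Each modular clause describes one aspect of the current state.
const chargeClause = (s: SimState): string =>
  s.balloonCharge === 0 ? 'has no net charge'
  : s.balloonCharge < 10 ? 'has a small negative charge'
  : 'has a large negative charge';

const positionClause = (s: SimState): string => `is ${s.balloonPosition}`;

// Descriptions are assembled from the clauses at the moment they are needed,
// so each user interaction yields an up-to-date, context-relevant sentence
// rather than one static block of text.
function describe(s: SimState): string {
  return `Balloon ${positionClause(s)} and ${chargeClause(s)}.`;
}

// Example: after the user drags the balloon and rubs it on the sweater.
console.log(describe({ balloonCharge: 14, balloonPosition: 'near the sweater' }));
// -> "Balloon is near the sweater and has a large negative charge."
```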
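Item 3 mentions a layer-based amplitude-panning algorithm for a high-density loudspeaker array. The sketch below shows one generic way such a panner can distribute gain: a constant-power crossfade between the two loudspeaker layers nearest the virtual source's elevation. It is a simplified, assumed illustration (azimuth panning within each layer is omitted), not SADIE's published algorithm.

```typescript
// Compute per-layer gains for a virtual source at a given elevation, using a
// constant-power (sin/cos) crossfade between the two nearest loudspeaker
// layers. layerElevations must be sorted ascending, in degrees.
function layerGains(layerElevations: number[], sourceElevation: number): number[] {
  const gains = new Array(layerElevations.length).fill(0);
  const lowest = layerElevations[0];
  const highest = layerElevations[layerElevations.length - 1];

  // Clamp sources below or above the array to the nearest layer.
  if (sourceElevation <= lowest) { gains[0] = 1; return gains; }
  if (sourceElevation >= highest) { gains[gains.length - 1] = 1; return gains; }

  // Find the pair of layers bracketing the source elevation.
  let i = 0;
  while (layerElevations[i + 1] < sourceElevation) i++;

  // Constant-power crossfade: g_lower^2 + g_upper^2 = 1.
  const t = (sourceElevation - layerElevations[i]) /
            (layerElevations[i + 1] - layerElevations[i]);
  gains[i] = Math.cos((t * Math.PI) / 2);
  gains[i + 1] = Math.sin((t * Math.PI) / 2);
  return gains;
}

// Example: layers at 0, 30, 60, and 90 degrees; a source at 45 degrees splits
// power equally between the 30- and 60-degree layers.
console.log(layerGains([0, 30, 60, 90], 45)); // ~ [0, 0.707, 0.707, 0]
```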