Title: Auditory Display in Interactive Science Simulations: Description and Sonification Support Interaction and Enhance Opportunities for Learning
Science simulations are widely used in classrooms to support inquiry-based learning of complex science concepts. These tools typically rely on interactive visual displays to convey relationships. Auditory displays, including verbal description and sonification (non-speech audio), combined with alternative input capabilities, may provide an enhanced experience for learners, particularly learners with visual impairment. We completed semi-structured interviews and usability testing with eight adult learners with visual impairment for two audio-enhanced simulations. We analyzed trends and edge cases in participants' interaction patterns, interpretations, and preferences. Findings include common interaction patterns across simulation use, increased efficiency with second use, and the complementary role that description and sonification play in supporting learning opportunities. We discuss how these control and display layers work to encourage exploration and engagement with science simulations. We conclude with general and specific design takeaways to support the implementation of auditory displays for accessible simulations.
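As a rough, hypothetical illustration of how the two auditory display layers described above might be wired together in a browser-based simulation, the sketch below maps a normalized simulation value to oscillator pitch (sonification) and speaks a short description (verbal description) via the Web Audio and Web Speech APIs. The function names, frequency range, and wording are illustrative assumptions, not the published simulations' implementation.

```ts
// Illustrative sketch only: map a normalized simulation value (0..1) to pitch,
// and announce a verbal description of the same state.
const audioCtx = new AudioContext();

const osc = audioCtx.createOscillator();   // tone whose pitch tracks the value
const gain = audioCtx.createGain();
osc.connect(gain).connect(audioCtx.destination);
gain.gain.value = 0;                       // silent until sonification starts
osc.start();

// Map a value in [0, 1] to a frequency range (assumed range: 220-880 Hz).
function sonifyValue(value: number): void {
  const freq = 220 + value * (880 - 220);
  osc.frequency.setTargetAtTime(freq, audioCtx.currentTime, 0.02);
  gain.gain.setTargetAtTime(0.2, audioCtx.currentTime, 0.02);
}

// Complementary verbal description layer (hypothetical wording).
function describe(value: number): void {
  const utterance = new SpeechSynthesisUtterance(
    `Value is now ${Math.round(value * 100)} percent of maximum.`);
  speechSynthesis.speak(utterance);
}
```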
Award ID(s):
1621363
NSF-PAR ID:
10216337
Date Published:
Journal Name:
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 - 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a multimodal physics simulation, including visual and auditory (description, sound effects, and sonification) modalities to support the diverse needs of learners. We describe design challenges and solutions, and findings from final simulation evaluations with learners with and without visual impairments. We also share insights from completing research with members of diverse learner groups (N = 52). This work presents approaches for designing and evaluating accessible interactive simulations for learners with diverse needs. 
  2. Graphical representations are ubiquitous in the learning and teaching of science, technology, engineering, and mathematics (STEM). However, these materials are often not accessible to the over 547,000 students in the United States with blindness and significant visual impairment, creating barriers to pursuing STEM educational and career pathways. Furthermore, even when such materials are made available to visually impaired students, access is likely through literalized modes (e.g., braille, verbal description), which is problematic as these approaches (1) do not directly convey spatial information and (2) are different from the graphic-based materials used by students without visual impairment. The purpose of this study was to design and evaluate a universally accessible system for communicating graphical representations in STEM classes. By combining a multisensory vibro-audio interface and an app running on consumer mobile hardware, the system is meant to work equally well for all students, irrespective of their visual status. We report the design of the experimental system and the results of an experiment where we compared learning performance with the system to traditional (visual or tactile) diagrams for sighted participants (n = 20) and visually impaired participants (n = 9), respectively. While the experimental multimodal diagrammatic system (MDS) did result in significant learning gains for both groups of participants, the results also revealed no statistically significant differences in the capacity for learning from graphical information across both comparison groups. Likewise, there were no statistically significant differences in the capacity for learning from graphical information between the stimuli presented through the experimental system and the traditional (visual or tactile) diagram control conditions, across either participant group. These findings suggest that both groups were able to learn graphical information from the experimental system as well as traditional diagram presentation materials. This learning modality was supported without the need for conversion of the diagrams to make them accessible for participants who required tactile materials. The system also provided additional multisensory information for sighted participants to interpret and answer questions about the diagrams. Findings are interpreted in terms of new universal design principles for producing multisensory graphical representations that would be accessible to all learners.

     
  3.
    Auditory description display is verbalized text typically used to describe live, recorded, or graphical displays to support access for people who are blind or visually impaired. Significant prior research has resulted in guidelines for auditory description for non-interactive or minimally interactive contexts. A lack of auditory description for complex interactive environments remains a tremendous barrier to access for people with visual impairments. In this work, we present a systematic design framework for designing auditory description within complex interactive environments. We illustrate how modular descriptions aligned with this framework can result in an interactive storytelling experience constructed through user interactions. This framework has been used in a set of published and widely used interactive science simulations, and in its generalized form could be applied to a variety of contexts. 
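One way to read "modular descriptions constructed through user interactions" is as a set of small state-to-text functions that are assembled into a single utterance whenever the learner acts. The sketch below is a hedged, generic interpretation in TypeScript; the interface names and wording are invented for illustration and are not the framework's published API.

```ts
// Hedged sketch of modular, state-dependent description (not the published framework).
interface SimState { mass: number; isDragging: boolean; }

type DescriptionModule = (state: SimState) => string | null;

// Each module describes one aspect of the scene; null means "nothing to say".
const objectModule: DescriptionModule = s => `A ball with mass ${s.mass} kilograms.`;
const interactionModule: DescriptionModule = s =>
  s.isDragging ? 'You are dragging the ball.' : null;

// On each user interaction, assemble the relevant fragments into one utterance.
function describeScene(state: SimState, modules: DescriptionModule[]): string {
  return modules
    .map(m => m(state))
    .filter((text): text is string => text !== null)
    .join(' ');
}

const text = describeScene({ mass: 5, isDragging: true },
  [objectModule, interactionModule]);
// "A ball with mass 5 kilograms. You are dragging the ball."
```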
  4. The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception, and to validate existing relationships. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system's unique capabilities and provide findings from an initial externally funded study that focuses on the assessment of human aural spatial perception capacity. When compared to the existing body of literature focusing on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that the physical implementation using high density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches. The preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow for head and body movement is significantly greater than in egocentric scenarios where head and body movement is restricted. Therefore, in the design of immersive auditory displays, the use of immersive exocentric environments is advised. Further, our data identify a significant gap between physical and virtual human spatial aural perception accuracy, which suggests that further development of virtual aural immersion may be necessary before such an approach may be seen as a viable alternative.
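This abstract mentions a layer-based amplitude-panning algorithm over a high-density loudspeaker array. As a generic point of reference only (not SADIE's published algorithm), the sketch below shows constant-power pairwise panning across one ring ("layer") of equally spaced loudspeakers; the speaker count and spacing are assumptions.

```ts
// Simplified illustration of amplitude panning across one ring ("layer") of loudspeakers.
// Generic pairwise constant-power panning, not SADIE's published algorithm.

/** Gains for N equally spaced speakers in one layer, for a source at `azimuthDeg`. */
function layerGains(azimuthDeg: number, numSpeakers: number): number[] {
  const gains = new Array<number>(numSpeakers).fill(0);
  const spacing = 360 / numSpeakers;
  const pos = (((azimuthDeg % 360) + 360) % 360) / spacing; // position in speaker units
  const lower = Math.floor(pos) % numSpeakers;              // nearest speaker below
  const upper = (lower + 1) % numSpeakers;                  // nearest speaker above
  const frac = pos - Math.floor(pos);                       // 0..1 between the pair
  // Constant-power crossfade between the two adjacent speakers.
  gains[lower] = Math.cos(frac * Math.PI / 2);
  gains[upper] = Math.sin(frac * Math.PI / 2);
  return gains;
}

// Example: a source at 100 degrees azimuth in an 8-speaker ring.
console.log(layerGains(100, 8).map(g => g.toFixed(2)));
```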
  5.
    Sonification of time series data in natural science has gained increasing attention as an observational and educational tool. Sound is a direct representation for oscillatory data, but for most phenomena, less direct representational methods are necessary. Coupled with animated visual representations of the same data, the visual and auditory systems can work together to identify complex patterns quickly. We developed a multivariate data sonification and visualization approach to explore and convey patterns in a complex dynamic system, Lone Star Geyser in Yellowstone National Park. This geyser has erupted regularly for at least 100 years, with remarkable consistency in the interval between eruptions (three hours) but with significant variations in smaller scale patterns between each eruptive cycle. From a scientific standpoint, the ability to hear structures evolving over time in multiparameter data permits the rapid identification of relationships that might otherwise be overlooked or require significant processing to find. The human auditory system is adept at physical interpretation of call-and-response or causality in polyphonic sounds. Methods developed here for oscillatory and nonstationary data have great potential as scientific observational and educational tools, for data-driven composition with scientific and artistic intent, and towards the development of machine learning tools for pattern identification in complex data.
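A minimal parameter-mapping sketch of the kind of time-series sonification described above: a slow sensor series is compressed in time and mapped to oscillator pitch. The value-to-frequency mapping, range, and time compression are illustrative assumptions, not the study's method.

```ts
// Minimal parameter-mapping sonification sketch (assumed mapping, not the study's method).
function sonifySeries(samples: number[], durationSec: number): void {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.connect(gain).connect(ctx.destination);
  gain.gain.value = 0.15;

  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const dt = durationSec / samples.length;          // playback time per sample

  samples.forEach((v, i) => {
    const norm = max > min ? (v - min) / (max - min) : 0.5;
    const freq = 200 + norm * 600;                  // assumed 200-800 Hz pitch range
    osc.frequency.linearRampToValueAtTime(freq, ctx.currentTime + i * dt);
  });

  osc.start();
  osc.stop(ctx.currentTime + durationSec);
}

// e.g. compress a day of geyser temperature readings into 30 seconds of audio:
// sonifySeries(temperatureReadings, 30);
```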