Various technologies mediate synchronous audio-visual one-on-one communication (SAVOC) between Deaf and Hard-of-Hearing (DHH) and hearing colleagues, including automatic-captioning smartphone apps for in-person settings and text-chat features of videoconferencing software in remote settings. Speech and non-verbal behaviors of hearing speakers, e.g., speaking too quietly, can make SAVOC difficult for DHH users, but prior work had not examined technology-mediated contexts. In an in-person study (N=20) with an automatic-captioning smartphone app, variations in a hearing actor's enunciation and intonation dynamics affected DHH users' satisfaction. In a remote study (N=23) using a videoconferencing platform with text chat, variations in speech rate, voice intensity, enunciation, intonation dynamics, and eye contact led to similar differences in satisfaction. This work contributes empirical evidence that specific behaviors of hearing speakers affect the accessibility of technology-mediated SAVOC for DHH users, providing motivation for future work on detecting or encouraging useful communication behaviors among hearing individuals.
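The closing sentence above points toward future work on automatically detecting useful communication behaviors among hearing speakers. As a purely hypothetical sketch (not part of either study), the Python snippet below flags two of the behaviors varied in the experiments, low voice intensity and fast speech rate, from a mono audio buffer and word-level timestamps; the function names and thresholds are assumptions chosen for readability.

```python
# Hypothetical illustration only: flag "speaking too quietly" and
# "speaking too fast" from a mono audio buffer and word timestamps.
# Thresholds are placeholders, not values from the studies above.
import math

def rms_dbfs(samples):
    """Root-mean-square level of a mono float buffer, in dBFS."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def speech_rate_wpm(word_times):
    """Words per minute, from a list of (start_s, end_s) word timestamps."""
    if len(word_times) < 2:
        return 0.0
    duration_s = word_times[-1][1] - word_times[0][0]
    return 60.0 * len(word_times) / duration_s if duration_s > 0 else 0.0

def flag_behaviors(samples, word_times, quiet_dbfs=-35.0, fast_wpm=180.0):
    """Return feedback flags a future system might show a hearing speaker."""
    flags = []
    if rms_dbfs(samples) < quiet_dbfs:
        flags.append("speaking too quietly")
    if speech_rate_wpm(word_times) > fast_wpm:
        flags.append("speaking too fast")
    return flags
```

A real system would also need to handle enunciation, intonation dynamics, and eye contact, which are harder to estimate automatically than intensity and rate.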
Mixed Reality Speaker Identification as an Accessibility Tool for Deaf and Hard of Hearing Users
People who are Deaf or Hard of Hearing (DHH) benefit from text captioning to understand audio, yet captions alone are often insufficient for the complex environment of a panel presentation, with rapid and unpredictable turn-taking among multiple speakers. It is challenging and tiring for DHH individuals to view captioned panel presentations, leading to feelings of misunderstanding and exclusion. In this work, we investigate the potential of Mixed Reality (MR) head-mounted displays for providing captioning with visual cues to indicate which person on the panel is speaking. For consistency in our experimental study, we simulate a panel presentation in virtual reality (VR) with various types of MR visual cues; in a study with 18 DHH participants, visual cues made it easier to identify speakers.
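To make the idea of a speaker-indicating visual cue concrete, here is a minimal, hypothetical Python sketch (not the study's VR/MR implementation): it picks the currently active panelist from per-speaker voice-activity scores and computes where a simple highlight could be placed above that speaker's head. All function names, thresholds, and coordinates are illustrative assumptions.

```python
# Hypothetical illustration only: choose the active panelist from
# per-speaker voice-activity scores and place a cue above their head.
# Names, thresholds, and coordinates are assumptions, not from the study.

def active_speaker(activity_scores, threshold=0.5):
    """Index of the most active speaker, or None if everyone is silent."""
    if not activity_scores:
        return None
    best = max(range(len(activity_scores)), key=lambda i: activity_scores[i])
    return best if activity_scores[best] >= threshold else None

def cue_position(head_positions, speaker_index, rise=0.3):
    """World-space point slightly above the chosen speaker's head."""
    x, y, z = head_positions[speaker_index]
    return (x, y + rise, z)

# Example: three panelists, the middle one is speaking.
scores = [0.1, 0.9, 0.2]
heads = [(-1.0, 1.6, 2.0), (0.0, 1.6, 2.0), (1.0, 1.6, 2.0)]
idx = active_speaker(scores)
if idx is not None:
    print(cue_position(heads, idx))  # roughly (0.0, 1.9, 2.0)
```

In practice, a head-mounted display would re-render such a cue every frame and update it as turn-taking changes among the panelists.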
- Award ID(s): 1763219
- PAR ID: 10171224
- Date Published:
- Journal Name: VRST '19: 25th ACM Symposium on Virtual Reality Software and Technology
- Page Range / eLocation ID: 1 to 3
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Few VR applications and games implement captioning of speech and audio cues, which inhibits or prevents access to these applications for deaf or hard-of-hearing (DHH) users, new language learners, and other caption users. Additionally, few guidelines exist on how to implement live captioning on VR headsets and how it may differ from traditional television captioning. To help fill this gap in knowledge about user preferences for different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (head-locked, lag, and appear-locked) while watching live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participants' preferences were split, but most participants reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, there was an almost equal divide between the three types tested. IPQ results indicated each behavior had similar immersion ratings; however, participants found head-locked and lag captions more user-friendly than appear-locked captions. We suggest that participants may vary in caption preference depending on how they use captions, and that providing opportunities for caption customization is best. (A minimal sketch of these caption movement behaviors appears after this list.)
- Live TV news and interviews often include multiple individuals speaking, with rapid turn-taking, which makes it difficult for viewers who are Deaf and Hard of Hearing (DHH) to follow who is speaking when reading captions. Prior research has proposed several methods of indicating who is speaking. While recent studies have observed various preferences among DHH viewers for speaker-identification methods in videos with different numbers of speakers onscreen, no study has yet systematically explored whether there is a formal relationship between the number of people onscreen and DHH viewers' preferences for how to indicate the speaker in captions. We conducted an empirical study, followed by a semi-structured interview, with 17 DHH participants to record their preferences among various speaker-identifier types for videos that vary in the number of speakers onscreen. We observed an interaction effect between DHH viewers' preference for speaker identification and the number of speakers in a video. An analysis of open-ended feedback from participants revealed several factors that influenced their preferences. Our findings guide broadcasters and captioners in selecting speaker-identification methods for captioned videos. (A minimal sketch of two common speaker-identification methods appears after this list.)
- Caption text conveys salient auditory information to deaf or hard-of-hearing (DHH) viewers. However, the emotional information within the speech is not captured. We developed three emotive captioning schemas that map the output of audio-based emotion detection models to expressive caption text that can convey underlying emotions. The three schemas used typographic changes to the text, color changes, or both. Next, we designed a Unity framework to implement these schemas and used it to generate stimuli videos. In an experimental evaluation with 28 DHH viewers, we compared DHH viewers' ability to understand emotions and their subjective judgments across the three captioning schemas. We found no significant difference in participants' ability to understand the emotion based on the captions or in their subjective preference ratings. Open-ended feedback revealed factors contributing to individual differences in preferences among the participants and challenges with automatically generated emotive captions that motivate future work. (A minimal sketch of such an emotive captioning schema appears after this list.)
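The VR captioning study above compares three caption movement behaviors: head-locked, lag, and appear-locked. The following Python sketch, which ignores head rotation for brevity, is one hypothetical way those behaviors could be computed per frame; it is an illustration under stated assumptions, not the authors' implementation.

```python
# Hypothetical illustration of the three caption movement behaviors compared
# above (head-locked, lag, appear-locked). Head rotation is ignored for
# brevity; offsets and the smoothing factor are placeholder values.

def head_locked(head_pos, offset=(0.0, -0.3, 1.0)):
    """Caption stays fixed in the view: recomputed from the head every frame."""
    return tuple(h + o for h, o in zip(head_pos, offset))

def lag(caption_pos, head_pos, smoothing=0.1, offset=(0.0, -0.3, 1.0)):
    """Caption follows the head, easing toward the head-locked target each frame."""
    target = head_locked(head_pos, offset)
    return tuple(c + smoothing * (t - c) for c, t in zip(caption_pos, target))

def appear_locked(head_pos_when_caption_appeared, offset=(0.0, -0.3, 1.0)):
    """Caption stays where the view was when the caption first appeared."""
    return head_locked(head_pos_when_caption_appeared, offset)

# Example frame update for the lag behavior.
caption = (0.0, 1.3, 1.0)
head = (0.2, 1.6, 0.0)
caption = lag(caption, head)  # nudges the caption toward (0.2, 1.3, 1.0)
```

The lag behavior differs from head-locked only in easing toward the target instead of jumping to it each frame, which produces its delayed, smoothed feel.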
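For the study on speaker identification in TV captions, two widely used identification approaches are a name prefix and a per-speaker color. The snippet below is a hypothetical illustration of both; the markup format and color palette are assumptions, not taken from the paper.

```python
# Hypothetical illustration of two caption speaker-identification methods:
# a name prefix and a per-speaker color. The markup and palette are
# assumptions, not drawn from the study above.

SPEAKER_COLORS = ["#FFFFFF", "#FFD966", "#9FC5E8", "#B6D7A8"]

def identify_by_name(speaker_name, text):
    """Prepend the speaker's name to the caption line."""
    return f"{speaker_name.upper()}: {text}"

def identify_by_color(speaker_index, text):
    """Wrap the caption in a color keyed to the speaker."""
    color = SPEAKER_COLORS[speaker_index % len(SPEAKER_COLORS)]
    return f'<font color="{color}">{text}</font>'

print(identify_by_name("anchor", "Welcome back to the program."))
print(identify_by_color(1, "Thanks for having me."))
```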
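Finally, for the emotive captioning study, the sketch below shows one hypothetical way an emotion label from an audio-based classifier could be mapped to typographic changes, color changes, or both, mirroring the three schemas described; the label set, colors, and markup are illustrative assumptions and do not reproduce the study's Unity framework.

```python
# Hypothetical illustration of an emotive captioning schema: map an emotion
# label from an audio-based classifier to typographic changes, color changes,
# or both. The label set, colors, and markup are assumptions; the study's
# Unity framework is not reproduced here.

EMOTION_STYLES = {
    "happy":   {"color": "#F4B400", "bold": True},
    "sad":     {"color": "#4A86E8", "bold": False},
    "angry":   {"color": "#CC0000", "bold": True},
    "neutral": {"color": "#FFFFFF", "bold": False},
}

def emotive_caption(text, emotion, use_typography=True, use_color=True):
    """Render caption text with typographic and/or color cues for the emotion."""
    style = EMOTION_STYLES.get(emotion, EMOTION_STYLES["neutral"])
    if use_typography and style["bold"]:
        text = f"<b>{text}</b>"
    if use_color:
        text = f'<font color="{style["color"]}">{text}</font>'
    return text

print(emotive_caption("I can't believe we won!", "happy"))
```

The two boolean flags correspond to the typography-only and color-only schemas; enabling both reproduces the combined schema.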