Title: Mixed Reality Speaker Identification as an Accessibility Tool for Deaf and Hard of Hearing Users
People who are Deaf or Hard of Hearing (DHH) benefit from text captioning to understand audio, yet captions alone are often insufficient for the complex environment of a panel presentation, with rapid and unpredictable turn-taking among multiple speakers. It is challenging and tiring for DHH individuals to view captioned panel presentations, leading to feelings of misunderstanding and exclusion. In this work, we investigate the potential of Mixed Reality (MR) head-mounted displays for providing captioning with visual cues to indicate which person on the panel is speaking. For consistency in our experimental study, we simulate a panel presentation in virtual reality (VR) with various types of MR visual cues; in a study with 18 DHH participants, visual cues made it easier to identify speakers.
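The placement logic behind such a cue is straightforward, even though the paper evaluates cue designs rather than publishing an implementation. Below is a minimal Python sketch, assuming an upstream step (e.g., speaker diarization or per-panelist microphones) supplies the active speaker's name and that panelists' head positions are tracked; the Panelist class, indicator_position function, and 0.35 m vertical offset are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Panelist:
    name: str
    head_pos: tuple  # (x, y, z) head position in world coordinates, meters

def indicator_position(panelists, active_speaker, offset_y=0.35):
    """Return the world position for a visual cue (arrow, halo, caption
    anchor) floating just above whoever is currently speaking."""
    for p in panelists:
        if p.name == active_speaker:
            x, y, z = p.head_pos
            return (x, y + offset_y, z)
    return None  # nobody speaking, or the diarizer missed a turn

panel = [Panelist("Ana", (-1.0, 1.6, 3.0)),
         Panelist("Ben", (0.0, 1.6, 3.0)),
         Panelist("Caro", (1.0, 1.6, 3.0))]
print(indicator_position(panel, "Ben"))  # (0.0, 1.95, 3.0)
```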
Award ID(s):
1763219
PAR ID:
10171224
Author(s) / Creator(s):
Date Published:
Journal Name:
VRST '19: 25th ACM Symposium on Virtual Reality Software and Technology
Page Range / eLocation ID:
1 to 3
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper explores a multimodal approach for translating emotional cues present in speech, designed with Deaf and Hard-of-Hearing (DHH) individuals in mind. Prior work has focused on visual cues applied to captions, successfully conveying whether a speaker's words have a negative or positive tone (valence), but with mixed results regarding the intensity (arousal) of these emotions. We propose a novel method using haptic feedback to communicate a speaker's arousal levels through vibrations on a wrist-worn device. In a formative study with 16 DHH participants, we tested six haptic patterns and found that participants preferred single per-word vibrations at 75 Hz to encode arousal. In a follow-up study with 27 DHH participants, this pattern was paired with visual cues, and narrative engagement with audio-visual content was measured. Results indicate that combining haptics with visuals significantly increased engagement compared to a conventional captioning baseline and a visuals-only affective captioning style.
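To make the preferred haptic design concrete: one short vibration per word, with a 75 Hz carrier, where vibration strength encodes arousal. The Python sketch below assumes word onsets and arousal scores (0 to 1) arrive from an upstream speech-emotion pipeline; vibrate() is a hypothetical stand-in for the wrist device's driver call, and the 80 ms burst length is an assumption, not a value from the study.

```python
import time

CARRIER_HZ = 75  # carrier frequency participants preferred in the formative study

def vibrate(duration_s, amplitude):
    """Hypothetical driver call; replace with the wrist device's actual API."""
    print(f"buzz {CARRIER_HZ} Hz, amp={amplitude:.2f}, {duration_s * 1000:.0f} ms")

def play_affective_track(words):
    """Emit one vibration per word; amplitude encodes speaker arousal (0-1)."""
    clock = 0.0
    for onset_s, arousal in words:
        time.sleep(max(0.0, onset_s - clock))  # wait for the word's onset
        vibrate(0.08, arousal)
        clock = onset_s

# (onset seconds, arousal) pairs, e.g., from a speech-emotion recognizer
play_affective_track([(0.0, 0.2), (0.4, 0.8), (0.9, 0.5)])
```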
  2. Various technologies mediate synchronous audio-visual one-on-one communication (SAVOC) between Deaf and Hard-of-Hearing (DHH) and hearing colleagues, including automatic-captioning smartphone apps for in-person settings and text-chat features of videoconferencing software in remote settings. Speech and non-verbal behaviors of hearing speakers, e.g., speaking too quietly, can make SAVOC difficult for DHH users, but prior work had not examined technology-mediated contexts. In an in-person study (N=20) with an automatic-captioning smartphone app, variations in a hearing actor's enunciation and intonation dynamics affected DHH users' satisfaction. In a remote study (N=23) using a videoconferencing platform with text chat, variations in speech rate, voice intensity, enunciation, intonation dynamics, and eye contact similarly affected satisfaction. This work contributes empirical evidence that specific behaviors of hearing speakers affect the accessibility of technology-mediated SAVOC for DHH users, providing motivation for future work on detecting or encouraging useful communication behaviors among hearing individuals.
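The abstract leaves detection of these behaviors to future work, but two of the variables it measures (speech rate and voice intensity) have simple signal-level proxies. A sketch of both in Python, assuming word onset times come from the captioning pipeline and audio frames are float samples in [-1, 1]; both function names are hypothetical.

```python
import math

def words_per_minute(onsets):
    """Rough speech rate from per-word onset times (seconds)."""
    if len(onsets) < 2:
        return 0.0
    span = onsets[-1] - onsets[0]
    return 60.0 * (len(onsets) - 1) / span if span > 0 else 0.0

def rms_dbfs(samples):
    """Voice intensity of one audio frame, in dB relative to full scale."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0 else float("-inf")

print(words_per_minute([0.0, 0.5, 1.0, 1.5]))       # 120.0 wpm
print(round(rms_dbfs([0.1, -0.1, 0.1, -0.1]), 1))   # -20.0 dBFS
```

Thresholds on measures like these could trigger gentle feedback to a hearing speaker (e.g., a "please slow down" prompt), one of the directions the authors motivate.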
  3. Few VR applications and games implement captioning of speech and audio cues, which inhibits or prevents access for deaf or hard of hearing (DHH) users, new language learners, and other caption users. Additionally, few guidelines exist on how to implement live captioning on VR headsets or how it may differ from traditional television captioning. To help fill this gap in knowledge about user preferences for VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (head-locked, lag, and appear-locked) while watching live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participants' preferences were split, but most reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, the three types tested were almost equally divided. IPQ (Igroup Presence Questionnaire) results indicated each behavior had similar immersion ratings; however, participants found head-locked and lag captions more user-friendly than appear-locked captions. We suggest that caption preference may vary with how participants use captions, and that providing opportunities for caption customization is best.
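The three movement behaviors reduce to three per-frame update rules for the caption's placement relative to the viewer. A minimal Python sketch, simplified to yaw only (degrees); the smoothing constant and exponential easing for the lag behavior are assumptions, since the abstract does not specify the interpolation used.

```python
import math

def wrap_angle(delta):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (delta + 180.0) % 360.0 - 180.0

def update_caption_yaw(behavior, caption_yaw, head_yaw, dt, smoothing=4.0):
    """Per-frame caption yaw for the three movement behaviors."""
    if behavior == "head_locked":    # rigidly follows the head every frame
        return head_yaw
    if behavior == "lag":            # eases toward where the head points
        alpha = 1.0 - math.exp(-smoothing * dt)
        return caption_yaw + alpha * wrap_angle(head_yaw - caption_yaw)
    if behavior == "appear_locked":  # frozen where it first appeared
        return caption_yaw
    raise ValueError(f"unknown behavior: {behavior}")

# After a quick head turn to 90 degrees, a lag caption catches up gradually:
yaw = 0.0
for frame in range(5):
    yaw = update_caption_yaw("lag", yaw, 90.0, dt=1 / 60)
    print(round(yaw, 1))  # 5.8, 11.2, 16.3, 21.1, 25.5
```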
  4. In virtual environments, many social cues (e.g., gestures, eye contact, and proximity) are currently conveyed visually or auditorily. Indicating social cues in other modalities, such as haptic cues that complement visual or audio signals, will help increase VR's accessibility and take advantage of the platform's inherent flexibility. However, accessibility implementations in social VR are often siloed within single sensory modalities. To broaden the accessibility of social virtual reality beyond replacing one sensory modality with another, we identified a subset of social cues and built tools to enhance them, allowing users to switch between modalities and choose how these cues are represented. Because consumer VR relies primarily on visual and auditory stimuli, we started with social cues that were not accessible to blind and low vision (BLV) and d/Deaf and hard of hearing (DHH) people, and expanded how they could be represented to accommodate a range of needs. We describe how these tools were designed around the principle of social cue switching, and a standard distribution method to amplify reach.
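The social-cue-switching principle amounts to routing each cue event to whichever output modalities a user has enabled, rather than hard-coding one mapping. A small Python sketch of that dispatch pattern, with hypothetical cue types, renderer stubs, and preference structure; the actual tools described target consumer VR runtimes, not a print console.

```python
def render_cue(event, prefs, renderers):
    """Dispatch one social-cue event to every modality the user enabled,
    so the same cue can be seen, heard, or felt as preferred."""
    for modality in prefs.get(event["type"], []):
        handler = renderers.get((event["type"], modality))
        if handler:
            handler(event)

# Stub renderers; a real system would drive HMD overlays, audio, wearables.
renderers = {
    ("eye_contact", "haptic"): lambda e: print("wrist: double tap"),
    ("eye_contact", "visual"): lambda e: print(f"HUD: gaze icon from {e['from']}"),
    ("proximity", "audio"): lambda e: print(f"tone, closeness {e['closeness']:.1f}"),
}

prefs = {"eye_contact": ["haptic", "visual"], "proximity": ["audio"]}
render_cue({"type": "eye_contact", "from": "avatar_2"}, prefs, renderers)
render_cue({"type": "proximity", "closeness": 0.8}, prefs, renderers)
```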