

Search for: All records

Award ID contains: 2150429

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available April 1, 2025
  2. Live TV news and interviews often include multiple individuals speaking, with rapid turn-taking, which makes it difficult for viewers who are Deaf and Hard of Hearing (DHH) to follow who is speaking when reading captions. Prior research has proposed several methods of indicating who is speaking. While recent studies have observed various preferences among DHH viewers for speaker-identification methods in videos with different numbers of speakers onscreen, no study has yet systematically explored whether there is a formal relationship between the number of people onscreen and DHH viewers’ preferences for how to indicate the speaker in captions. We conducted an empirical study, followed by a semi-structured interview, with 17 DHH participants to record their preferences among various speaker-identifier types for videos that vary in the number of speakers onscreen. We observed an interaction effect between DHH viewers’ preference for speaker identification and the number of speakers in a video. An analysis of open-ended feedback from participants revealed several factors that influenced their preferences. Our findings guide broadcasters and captioners in selecting speaker-identification methods for captioned videos. (An illustrative sketch of common speaker-identification conventions appears after this list.)
  3. Few VR applications and games implement captioning of speech and audio cues, which inhibits or prevents access to these applications for deaf or hard of hearing (DHH) users, new language learners, and other caption users. Additionally, little to no guidance exists on how to implement live captioning on VR headsets or how it may differ from traditional television captioning. To help fill this gap in knowledge about user preferences for different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (head-locked, lag, and appear-locked) while they watched live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participants’ preferences were split, but most reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, the three types tested were divided almost equally. IPQ (Igroup Presence Questionnaire) results indicated that each behavior had similar immersion ratings; however, participants found head-locked and lag captions more user-friendly than appear-locked captions. We suggest that participants may vary in caption preference depending on how they use captions, and that providing opportunities for caption customization is best. (A minimal sketch of the three caption behaviors follows this list.)
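To make the notion of "speaker-identifier types" in item 2 concrete, the sketch below formats a caption cue using two conventions that are common in captioning practice: prefixing the speaker's name and assigning each speaker a consistent color. The type names, function names, and identifier styles shown are illustrative assumptions only; they are not the specific methods evaluated in the study.

```typescript
// Illustrative only: two common speaker-identification conventions for captions.
// The types and identifier styles here are assumptions, not the study's stimuli.
interface CaptionCue {
  speaker: string;   // e.g. "Anchor", "Guest"
  text: string;      // the caption text for this cue
}

// Convention 1: prefix the caption with the speaker's name.
function withNamePrefix(cue: CaptionCue): string {
  return `${cue.speaker.toUpperCase()}: ${cue.text}`;
}

// Convention 2: give each speaker a consistent color (here, a CSS color string).
const speakerColors = new Map<string, string>([
  ["Anchor", "#ffd700"],
  ["Guest", "#87ceeb"],
]);

function withSpeakerColor(cue: CaptionCue): { text: string; color: string } {
  return { text: cue.text, color: speakerColors.get(cue.speaker) ?? "#ffffff" };
}

// Example: render the same cue under each convention.
const cue: CaptionCue = { speaker: "Guest", text: "Thanks for having me." };
console.log(withNamePrefix(cue));          // "GUEST: Thanks for having me."
console.log(withSpeakerColor(cue).color);  // "#87ceeb"
```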
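The three caption movement behaviors compared in item 3 can be sketched as a per-frame position update. The snippet below is a minimal sketch under assumed details: the Vec3 and HeadPose types, the 2 m viewing distance, and the easing rate are illustrative values chosen here, not taken from the study's implementation.

```typescript
// Minimal sketch of the three caption movement behaviors (head-locked, lag,
// appear-locked). All constants and type names are assumptions for illustration.
type Vec3 = { x: number; y: number; z: number };

interface HeadPose {
  position: Vec3;
  forward: Vec3; // unit vector in the gaze direction
}

const CAPTION_DISTANCE = 2.0; // metres in front of the viewer (assumed value)
const LAG_RATE = 0.05;        // fraction of the gap closed per frame (assumed value)

// Point at the chosen distance directly in front of the viewer.
function targetInFrontOf(head: HeadPose): Vec3 {
  return {
    x: head.position.x + head.forward.x * CAPTION_DISTANCE,
    y: head.position.y + head.forward.y * CAPTION_DISTANCE,
    z: head.position.z + head.forward.z * CAPTION_DISTANCE,
  };
}

// Head-locked: the caption is pinned to the view and moves instantly with the head.
function updateHeadLocked(head: HeadPose): Vec3 {
  return targetInFrontOf(head);
}

// Lag: the caption eases toward the point in front of the viewer, trailing head motion.
function updateLag(current: Vec3, head: HeadPose): Vec3 {
  const target = targetInFrontOf(head);
  return {
    x: current.x + (target.x - current.x) * LAG_RATE,
    y: current.y + (target.y - current.y) * LAG_RATE,
    z: current.z + (target.z - current.z) * LAG_RATE,
  };
}

// Appear-locked: the caption is placed once, when it first appears, and stays fixed in world space.
function updateAppearLocked(placedAt: Vec3): Vec3 {
  return placedAt;
}
```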