Caption text conveys salient auditory information to deaf or hard-of-hearing (DHH) viewers; however, captions typically do not convey the emotional information carried in speech. We developed three emotive captioning schemas that map the output of audio-based emotion detection models to expressive caption text that can convey underlying emotions. The three schemas used typographic changes to the text, color changes, or both. Next, we designed a Unity framework to implement these schemas and used it to generate stimulus videos. In an experimental evaluation with 28 DHH viewers, we compared DHH viewers' ability to understand emotions and their subjective judgments across the three captioning schemas. We found no significant difference in participants' ability to understand the emotion based on the captions or in their subjective preference ratings. Open-ended feedback revealed factors contributing to individual differences in preferences among the participants, as well as challenges with automatically generated emotive captions that motivate future work.
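To make the schema idea concrete, here is a minimal sketch of how a mapping from a detected emotion to caption styling might be organized. The emotion labels, style values, and function names below are illustrative assumptions, not the authors' Unity implementation:

```python
# Hypothetical schema table: maps a detected emotion label to typographic
# and color styling for a caption line. Labels and hex colors are
# illustrative choices, not taken from the paper.
EMOTION_STYLES = {
    # emotion: (font_weight, font_style, color_hex)
    "anger":   ("bold",   "normal", "#D32F2F"),
    "joy":     ("bold",   "normal", "#F9A825"),
    "sadness": ("normal", "italic", "#1565C0"),
    "neutral": ("normal", "normal", "#FFFFFF"),
}

def style_caption(text, emotion, schema="both"):
    """Return a dict describing how to render a caption line.

    schema: "typography" (weight/style only), "color" (color only),
    or "both" -- mirroring the three schemas compared in the study.
    Unknown emotions fall back to neutral styling.
    """
    weight, slant, color = EMOTION_STYLES.get(emotion, EMOTION_STYLES["neutral"])
    style = {"text": text}
    if schema in ("typography", "both"):
        style.update(weight=weight, slant=slant)
    if schema in ("color", "both"):
        style["color"] = color
    return style
```

A renderer would then apply the returned attributes to the caption text, leaving the wording of the caption itself unchanged.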
Effect of caption width on the TV user experience by deaf and hard of hearing viewers
Deaf and hard of hearing (DHH) viewers watch captioned multimedia on devices with widely varying widths. We investigated the impact of caption width on viewers' preferences. Some previous research has shown that presenting one-word lines allows viewers to read much more quickly than traditional reading, while other work has found the optimal caption width to be six words per line. Our study showed that DHH viewers had no difference in preference between six-word and twelve-word lines. Furthermore, they significantly preferred six- and twelve-word lines over single-word lines because of the need to split attention between the captions and the video.
- PAR ID: 10253921
- Date Published:
- Journal Name: W4A '21: Proceedings of the 18th International Web for All Conference
- Page Range / eLocation ID: 1 to 5
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Live TV news and interviews often include multiple individuals speaking, with rapid turn-taking, which makes it difficult for viewers who are Deaf and Hard of Hearing (DHH) to follow who is speaking when reading captions. Prior research has proposed several methods of indicating who is speaking. While recent studies have observed various preferences among DHH viewers for speaker identification methods in videos with different numbers of speakers onscreen, no study has yet systematically explored whether there is a formal relationship between the number of people onscreen and DHH viewers' preferences for how to indicate the speaker in captions. We conducted an empirical study, followed by a semi-structured interview, with 17 DHH participants to record their preferences among various speaker-identifier types for videos that vary in the number of speakers onscreen. We observed an interaction effect between DHH viewers' preference for speaker identification and the number of speakers in a video. An analysis of open-ended feedback from participants revealed several factors that influenced their preferences. Our findings can guide broadcasters and captioners in selecting speaker-identification methods for captioned videos.
Deaf and hard of hearing (DHH) individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, these metrics are often not informed by the preferences of DHH users or by how meaningful the captions are. There is a need for caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted a correlation analysis between two types of word embeddings and human-annotated word-importance scores in an existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with the manually annotated importance scores than word2vec-based word embeddings did. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide a proof-of-concept utility by training word-importance models, achieving an F1-score of 0.57 on the 6-class word-importance classification task.
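As a rough illustration of this kind of correlation analysis, the sketch below correlates a scalar feature derived from toy word vectors (their L2 norm, a simple stand-in for a normalized-embedding feature) with annotated importance scores. The data is synthetic and the feature choice is an assumption for demonstration, not the paper's exact procedure:

```python
import numpy as np

# Synthetic stand-ins: per-word contextualized embeddings (e.g., from a
# BERT hidden layer) and human-annotated importance scores for those words.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(8, 768))   # 8 words x 768-dim vectors
importance = rng.uniform(size=8)         # annotated scores in [0, 1]

# Reduce each vector to one scalar; here, its L2 norm.
norms = np.linalg.norm(embeddings, axis=1)

def rank(x):
    # Ranks without tie handling (fine for continuous toy data).
    order = np.argsort(x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

# Spearman correlation = Pearson correlation of the ranks.
rho = np.corrcoef(rank(norms), rank(importance))[0, 1]
```

With real data, `embeddings` would come from a pretrained model and `importance` from the annotated corpus; the correlation coefficient then quantifies how well the embedding-derived feature tracks human judgments.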
Recent research has investigated automatic methods for identifying how important each word in a text is for the overall message, in the context of people who are Deaf and Hard of Hearing (DHH) viewing video with captions. We examine whether DHH users report benefits from visual highlighting of important words in video captions. In formative interview and prototype studies, users indicated a preference for underlining 5%-15% of the words in a caption text to indicate that they are important, and they expressed interest in such text markup in the context of educational lecture videos. In a subsequent user study, 30 DHH participants viewed lecture videos in two forms: with and without such visual markup. Users indicated that the videos with captions containing highlighted words were easier to read and follow, with lower perceived task-load ratings, compared to the videos without highlighting. This study motivates future research on caption highlighting in online educational videos, and it provides a foundation for how to evaluate the efficacy of such systems with users.
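Selecting the top 5%-15% of words for underlining can be sketched as follows; the function name is hypothetical, and the importance scores are assumed to come from some upstream word-importance model:

```python
def words_to_highlight(words, scores, fraction=0.10):
    """Mark roughly the top `fraction` of words by importance score.

    Illustrative sketch only: the study reports users preferring
    5%-15% of words underlined; producing the scores themselves
    (e.g., via a trained word-importance model) is out of scope here.
    Returns a list of (word, should_underline) pairs.
    """
    k = max(1, round(len(words) * fraction))
    ranked = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)
    chosen = set(ranked[:k])
    return [(w, i in chosen) for i, w in enumerate(words)]
```

A caption renderer would then underline exactly the words flagged `True`, leaving the rest of the caption text unstyled.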
This paper explores a multimodal approach for translating emotional cues present in speech, designed with Deaf and Hard-of-Hearing (DHH) individuals in mind. Prior work has focused on visual cues applied to captions, successfully conveying whether a speaker's words have a negative or positive tone (valence), but with mixed results regarding the intensity (arousal) of these emotions. We propose a novel method using haptic feedback to communicate a speaker's arousal levels through vibrations on a wrist-worn device. In a formative study with 16 DHH participants, we tested six haptic patterns and found that participants preferred single per-word vibrations at 75 Hz to encode arousal. In a follow-up study with 27 DHH participants, this pattern was paired with visual cues, and narrative engagement with audio-visual content was measured. Results indicate that combining haptics with visuals significantly increased engagement compared to a conventional captioning baseline and a visuals-only affective captioning style.
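One way such a per-word haptic encoding could be scheduled is sketched below: a single short vibration per word at a fixed 75 Hz (the pattern participants preferred), with amplitude scaled by arousal. The schedule format, field names, and amplitude mapping are assumptions for illustration, not the paper's implementation:

```python
def haptic_schedule(word_timings, arousal, freq_hz=75, max_amp=1.0):
    """Build a per-word vibration schedule for a wrist-worn device.

    word_timings: list of (start_s, end_s) pairs, one per spoken word.
    arousal: speaker arousal level, assumed normalized to [0, 1].
    Each word gets one short pulse at `freq_hz` whose amplitude is
    scaled by arousal; pulses are capped at 150 ms so rapid speech
    does not produce continuous vibration. All field names are
    hypothetical, not a real device API.
    """
    amp = max(0.0, min(1.0, arousal)) * max_amp
    return [
        {"start_s": start, "dur_s": min(0.15, end - start),
         "freq_hz": freq_hz, "amplitude": amp}
        for start, end in word_timings
    ]
```

The word timings themselves would come from forced alignment or the captioning pipeline, and the resulting schedule would be streamed to the wearable alongside the visual caption cues.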