Few VR applications and games implement captioning of speech and audio cues, which either inhibits or prevents access for deaf or hard of hearing (DHH) users, new language learners, and other caption users. Additionally, few guidelines exist on how to implement live captioning on VR headsets or how it may differ from traditional television captioning. To help fill this gap in knowledge about user preferences for different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (head-locked, lag, and appear-locked) while they watched live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participants’ preferences were split, but most reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, the three types tested were almost equally divided. IPQ results indicated that each behavior had similar immersion ratings; however, participants found head-locked and lag captions more user-friendly than appear-locked captions. We suggest that caption preference may vary with how participants use captions, and that providing opportunities for caption customization is best.
Effect of Image Captioning with Description on the Working Memory
Working memory plays an important role in human activities across academic, professional, and social settings. It is defined as the memory extensively involved in goal-directed behaviors, in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual’s working memory. A study was conducted with eight neutral images depicting situations relatable to daily life, such that each image could carry a positive or a negative description of the outcome of the situation shown. The study consisted of three rounds: the first and second rounds each involved two parts, and the third round consisted of one part. Each image was captioned a total of five times across the study. The findings highlighted that only 25% of participants were able to recall the captions they had written for an image after a span of 9–15 days; when comparing recall across rounds, 50% of participants were able to recall their caption from the previous round in the current round; and of the positive and negative descriptions associated with the images, 65% of participants recalled the positive description rather than the negative one.
- Award ID(s):
- 1828010
- PAR ID:
- 10344385
- Date Published:
- Journal Name:
- International Conference on Human-Computer Interaction (HCII) 2022. Lecture Notes in Computer Science
- Volume:
- 13096
- Page Range / eLocation ID:
- 107–120
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The prefrontal cortex is larger than would be predicted by body size or visual cortex volume in great apes compared with monkeys. Because the prefrontal cortex is critical for working memory, we hypothesized that recognition memory tests would engage working memory more robustly in orangutans than in rhesus monkeys. In contrast to working memory, the familiarity response that results from repetition of an image is less cognitively taxing and has been associated with nonfrontal brain regions. Across three experiments, we observed a striking species difference in the control of behavior by these two types of memory. First, we found that recognition memory performance in orangutans was controlled by working memory under conditions in which this memory system plays little role in rhesus monkeys. Second, we found that, unlike in monkeys, familiarity was not involved in recognition memory performance in orangutans, as shown by differences from monkeys across three measures: memory in orangutans was not improved by the use of novel images, was always impaired by a concurrent cognitive load, and did not support accurate identification of images seen minutes earlier. These results are surprising and puzzling, but they do support the view that prefrontal expansion in great apes favored working memory. At least in orangutans, increased dependence on working memory may come at a cost in terms of the availability of familiarity.
-
Background: Patients with uncomplicated cases of concussion are thought to fully recover within several months as symptoms resolve. However, at the group level, undergraduates reporting a history of concussion (mean: 4.14 years post-injury) show lasting deficits in visual working memory performance. To clarify what predicts long-term visual working memory outcomes given heterogeneous performance across group members, we investigated factors surrounding the injury, including gender, number of mild traumatic brain injuries (mTBIs), time since mTBI, loss of consciousness (LOC) (yes, no), and mTBI etiology (non-sport, team sport, high-impact sport, and individual sport). We also collected low-density resting-state electroencephalogram (EEG) data to test whether spectral power was correlated with performance. Aim: The purpose of this study was to identify predictors of poor visual working memory outcomes in current undergraduates with a history of concussion. Methods: Participants provided a brief history of their injury and symptoms. Participants also completed an experimental visual working memory task. Finally, low-density resting-state EEG data were collected. Results: The key observation was that LOC at the time of injury predicted superior visual working memory years later. In contrast, visual working memory performance was not predicted by the other factors, including etiology, high-impact sports, or EEG spectral power. Conclusions: Visual working memory deficits are apparent at the group level in current undergraduates with a history of concussion. LOC at the time of concussion predicts less impaired visual working memory performance, whereas no significant links were found for the other factors. One interpretation is that after LOC, patients are more likely to seek medical advice than without LOC. Relevance for patients: Concussion is a head injury associated with future cognitive changes in some people. Concussion should be taken seriously, and medical treatment should be sought whenever a head injury occurs.
-
Some people exhibit impressive memory for a wide array of semantic knowledge. What makes these trivia experts better able to learn and retain novel facts? We hypothesized that new semantic knowledge may be more strongly linked to its episodic context in trivia experts. We designed a novel online task in which 132 participants varying in trivia expertise encoded “exhibits” of naturalistic facts with related photos in one of two “museums.” Afterward, participants were tested on cued recall of facts and recognition of the associated photo and museum. Greater trivia expertise predicted higher cued recall for novel facts. Critically, trivia experts but not non-experts showed superior fact recall when they remembered both features (photo and museum) of the encoding context. These findings illustrate enhanced links between episodic memory and new semantic learning in trivia experts, and show the value of studying trivia experts as a special population that can shed light on the mechanisms of memory.