This content will become publicly available on April 25, 2026

Title: "We do use it, but not how hearing people think": How the Deaf and Hard of Hearing Community Uses Large Language Model Tools
Generative AI tools, particularly those utilizing large language models (LLMs), are increasingly used in everyday contexts. While these tools enhance productivity and accessibility, little is known about how Deaf and Hard of Hearing (DHH) individuals engage with them or the challenges they face when using them. This paper presents a mixed-method study exploring how the DHH community uses Text AI tools like ChatGPT to reduce communication barriers and enhance information access. We surveyed 80 DHH participants and interviewed 9 of them. Our findings reveal important benefits, such as eased communication and bridging Deaf and hearing cultures, alongside challenges like lack of American Sign Language (ASL) support and Deaf cultural understanding. We highlight unique usage patterns, propose inclusive design recommendations, and outline future research directions to improve Text AI accessibility for the DHH community.
Award ID(s):
2118824 2119589
PAR ID:
10635776
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Page Range / eLocation ID:
1 to 9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This study investigates innovative interaction designs for communication and collaborative learning between learners of mixed hearing and signing abilities, leveraging advancements in mixed reality technologies like Apple Vision Pro and generative AI for animated avatars. Adopting a participatory design approach, we engaged 15 d/Deaf and hard of hearing (DHH) students to brainstorm ideas for an AI avatar with interpreting ability (sign language to English and English to sign language) that would facilitate their face-to-face communication with hearing peers. Participants envisioned AI avatars addressing some issues with human interpreters, such as limited availability, and providing affordable alternatives to expensive personalized interpreting services. Our findings indicate a range of preferences for integrating the AI avatars with actual human figures of both DHH and hearing communication partners. Participants highlighted the importance of having control over customizing the AI avatar, such as its AI-generated signs, voices, facial expressions, and their synchronization for enhanced emotional display in communication. Based on our findings, we propose a suite of design recommendations that balance respecting sign language norms with adherence to hearing social norms. Our study offers insights into improving the authenticity of generative AI in scenarios involving specific and sometimes unfamiliar social norms.
  2. With the proliferation of voice-based conversational user interfaces (CUIs) come accessibility barriers for Deaf and Hard of Hearing (DHH) users. There has not been significant prior research on sign-language conversational interactions with technology. In this paper, we motivate research on this topic and identify open questions and challenges in this space, including DHH users' interests in this technology, the types of commands they may use, and the open design questions about how to structure the conversational interaction in this sign-language modality. We also describe our current research methods for addressing these questions, including how we engage with the DHH community.
  3. Previous research underscored the potential of danmaku, a text-based commenting feature on videos, in engaging hearing audiences. Yet, for many Deaf and hard-of-hearing (DHH) individuals, American Sign Language (ASL) takes precedence over English. To improve inclusivity, we introduce "Signmaku," a new commenting mechanism that uses ASL, serving as a sign language counterpart to danmaku. Through a need-finding study (N=12) and a within-subject experiment (N=20), we evaluated three design styles: real human faces, cartoon-like figures, and robotic representations. The results showed that cartoon-like signmaku not only entertained but also encouraged participants to create and share ASL comments, with fewer privacy concerns compared to the other designs. Conversely, the robotic representations faced challenges in accurately depicting hand movements and facial expressions, resulting in higher cognitive demands on users. Signmaku featuring real human faces elicited the lowest cognitive load and was the most comprehensible of the three types. Our findings offer novel design implications for leveraging generative AI to create signmaku comments, enriching co-learning experiences for DHH individuals.
  4. Various technologies mediate synchronous audio-visual one-on-one communication (SAVOC) between Deaf and Hard-of-Hearing (DHH) and hearing colleagues, including automatic-captioning smartphone apps for in-person settings and text-chat features of videoconferencing software in remote settings. Speech and non-verbal behaviors of hearing speakers, e.g., speaking too quietly, can make SAVOC difficult for DHH users, but prior work has not examined technology-mediated contexts. In an in-person study (N=20) with an automatic captioning smartphone app, variations in a hearing actor's enunciation and intonation dynamics affected DHH users' satisfaction. In a remote study (N=23) using a videoconferencing platform with text chat, variations in speech rate, voice intensity, enunciation, intonation dynamics, and eye contact similarly affected satisfaction. This work contributes empirical evidence that specific behaviors of hearing speakers affect the accessibility of technology-mediated SAVOC for DHH users, providing motivation for future work on detecting or encouraging useful communication behaviors among hearing individuals.
  5. Broadcasting emergency notifications during disasters is crucial, particularly in Monroe County, NY, which is home to one of the largest per capita Deaf and Hard of Hearing (DHH) populations in the United States. However, text alerts may not effectively reach DHH individuals who are in a state of reduced responsiveness, such as sleep, placing them at great risk. This paper presents a cloud-based platform designed to deliver emergency alerts with visual and haptic feedback. A prototype built on an off-the-shelf IoT device demonstrates how alerts can be received via vibration and light-based feedback. The platform aims to be accessible to the DHH community, providing its own means of maintaining haptic devices and receiving critical alerts in real time. This work contributes to the literature on IT solutions for bridging the communication gap between text-based alerts and intuitive visual/haptic communication, enhancing emergency response readiness for the DHH community, ultimately improving safety and potentially saving lives.
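    A minimal sketch of how such a visual/haptic alert-delivery flow might look on an off-the-shelf IoT device follows. The paper above does not publish its implementation, so the broker host, topic name, GPIO pins, and severity field here are illustrative assumptions only (Python, using the paho-mqtt 1.x client API and RPi.GPIO):

      # Illustrative sketch only: assumes the cloud platform exposes an MQTT
      # broker and the receiving device is a Raspberry Pi-class board driving
      # a vibration motor and LED. All names and pin numbers are hypothetical.
      import json
      import time

      import paho.mqtt.client as mqtt   # common IoT messaging client (1.x API)
      import RPi.GPIO as GPIO           # GPIO control on Raspberry Pi-class boards

      VIBRATION_PIN = 18                  # assumed pin driving a vibration motor
      LED_PIN = 23                        # assumed pin driving a high-visibility LED
      ALERT_TOPIC = "dhh/alerts"          # hypothetical alert topic
      BROKER_HOST = "alerts.example.org"  # hypothetical cloud broker address

      GPIO.setmode(GPIO.BCM)
      GPIO.setup(VIBRATION_PIN, GPIO.OUT)
      GPIO.setup(LED_PIN, GPIO.OUT)

      def pulse_alert(duration_s=0.5, repeats=6):
          # Pulse the motor and LED together so the alert is perceivable by
          # touch and sight rather than sound.
          for _ in range(repeats):
              GPIO.output(VIBRATION_PIN, GPIO.HIGH)
              GPIO.output(LED_PIN, GPIO.HIGH)
              time.sleep(duration_s)
              GPIO.output(VIBRATION_PIN, GPIO.LOW)
              GPIO.output(LED_PIN, GPIO.LOW)
              time.sleep(duration_s)

      def on_message(client, userdata, msg):
          # Treat each notification as a JSON payload; scaling the pattern by
          # a "severity" field is an assumption, not a detail from the paper.
          alert = json.loads(msg.payload)
          repeats = 10 if alert.get("severity") == "extreme" else 6
          pulse_alert(repeats=repeats)

      client = mqtt.Client()
      client.on_message = on_message
      client.connect(BROKER_HOST, 1883)
      client.subscribe(ALERT_TOPIC)
      client.loop_forever()   # keep listening for alerts around the clock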