Title: Nonverbal Communication through Expressive Objects
Augmentative and alternative communication (AAC) devices enable speech-based communication, but generating speech is not the only resource needed for a successful conversation. Being able to signal a wish to take a turn, by raising a hand or providing some other cue, is critical to securing a turn to speak. Experienced conversation partners know how to recognize the nonverbal communication an augmented communicator (AC) displays, but these same nonverbal gestures can be hard to interpret for people meeting an AC for the first time. Prior work has identified motion through robots and expressive objects as a modality that can support communication. In this work, we collaborate closely with an AAC user to understand how motion through a physical expressive object can support their communication. We present our process and the resulting lessons on the designed object and the co-design process.
Award ID(s): 1943072
PAR ID: 10503737
Publisher / Repository: ACM
Journal Name: Communications of the ACM
Volume: 67
Issue: 1
ISSN: 0001-0782
Page Range / eLocation ID: 123–131
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Augmentative and alternative communication (AAC) devices enable speech-based communication. However, AAC devices do not support nonverbal communication, which allows people to take turns, regulate conversation dynamics, and express intentions. Nonverbal communication requires motion, which is often challenging for AAC users to produce due to motor constraints. In this work, we explore how socially assistive robots, framed as ''sidekicks,'' might provide augmented communicators (ACs) with a nonverbal channel of communication to support their conversational goals. We developed and conducted an accessible co-design workshop that involved two ACs, their caregivers, and three motion experts. We identified goals for conversational support, co-designed prototypes depicting possible sidekick forms, and enacted different sidekick motions and behaviors to achieve speakers' goals. We contribute guidelines for designing sidekicks that support ACs according to three key parameters: attention, precision, and timing. We show how these parameters manifest in appearance and behavior and how they can guide future designs for augmented nonverbal communication. 
  2. Autistic children face significant challenges in vocal communication and social interaction, often leading to social isolation. There is evidence that augmentative and alternative communication (AAC) can mitigate these challenges by enabling them to communicate through non-vocal means such as speech-generating devices (SGDs). However, the adoption and use of SGDs are hindered by several factors, including the large amount of practice required to learn to use an SGD and the limited options for highly engaging social learning contexts. Our study introduces the novel approach of using SGDs as game controllers for digital, interactive games. With three design goals guiding our work, we conducted a Wizard-of-Oz formative case study with five participants aged 3-5 years who were learning to use their SGDs. We simulated a digital coloring game, integrating the speech-generated output of each participant's SGD to function as the game's controller. From this case study, we observed that all participants engaged with the game using their SGD for at least one turn, and two participants also engaged in emerging joint-attention responses with the game and the game's facilitator. This paper discusses these findings and contributes directions for future research, with suggestions for the design of future SGD-controlled games and for exploring social connection and collaboration between autistic children who use AAC and their caregivers, siblings, and peers.
  3. Augmentative and alternative communication (AAC) devices are used by many people around the world who experience difficulties in communicating verbally. One form of AAC device that is especially useful for minimally verbal autistic children in developing language and communication skills is the visual scene display (VSD). VSDs use images with interactive hotspots embedded in them to directly connect language to real-world contexts that are meaningful to the AAC user. While VSDs can effectively support emergent communicators (i.e., those who are beginning to learn how to use symbolic communication), their widespread adoption is limited by how difficult these devices are to configure. We developed a prototype that uses generative AI to automatically suggest initial hotspots on an image, helping non-experts efficiently create VSDs. We conducted a within-subjects user study to understand how effective our prototype is in supporting non-expert users, specifically pre-service speech-language pathologists (SLPs) (N=16) who are not familiar with VSDs as an AAC intervention. Pre-service SLPs are actively studying to become clinically certified SLPs and have domain-specific knowledge about language and communication skill development. We evaluated the effectiveness of our prototype based on creation time, quality, and user confidence. We also analyzed the relevance and developmental appropriateness of the automatically generated hotspots and how often users interacted with (e.g., edited or deleted) them. Our results were mixed: SLPs became more efficient and more confident, but there were also multiple negative impacts, including over-reliance and the homogenization of communication options. The implications of these findings reach beyond the domain of AAC, especially as generative AI becomes more prevalent across domains, including assistive technology. Future work is needed to further identify and address these risks associated with integrating generative AI into assistive technology.
  4. Abstract Millions of individuals who have limited or no functional speech use augmentative and alternative communication (AAC) technology to participate in daily life and exercise the human right to communication. While advances in AAC technology lag significantly behind those in other technology sectors, mainstream technology innovations such as artificial intelligence (AI) present potential for the future of AAC. However, a new future of AAC will only be as effective as it is responsive to the needs and dreams of the people who rely upon it every day. AAC innovation must reflect an iterative, collaborative process with AAC users. To do this, we worked collaboratively with AAC users to complete participatory qualitative research about AAC innovation through AI. We interviewed 13 AAC users regarding (1) their current AAC engagement; (2) the barriers they experience in using AAC; (3) their dreams regarding future AAC development; and (4) reflections on potential AAC innovations. To analyze these data, a rapid research evaluation and appraisal was used. Within this article, the themes that emerged during interviews and their implications for future AAC development will be discussed. Strengths, barriers, and considerations for participatory design will also be described. 
  5. Making good letter or word predictions can help accelerate communication for users of high-tech AAC devices. This is particularly important for real-time person-to-person conversations. We investigate whether performing speech recognition on the speaking side of a conversation can improve language-model-based predictions. We compare the accuracy of three plausible microphone deployment options and of two commercial speech recognition engines (Google and IBM Watson). We found that despite recognition word error rates of 7-16%, our ensemble of N-gram and recurrent neural network language models made predictions nearly as good as when it used the reference transcripts.
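The idea in item 5 can be illustrated with a toy sketch. This is not the paper's actual system, which ensembles N-gram and recurrent neural network language models over real speech recognition output; here, a tiny bigram model's next-word predictions are simply boosted by words recognized from the partner's side of the conversation. The corpus, function names, and boost scheme are all hypothetical illustrations.

```python
# Toy sketch (assumed, not from the paper): bias a bigram language
# model's next-word predictions using words heard from the partner.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each successor word follows it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, prev_word, partner_words, boost=2.0, k=3):
    """Rank next-word candidates; words the partner just spoke get a boost."""
    scores = {}
    for word, count in model[prev_word.lower()].items():
        scores[word] = count * (boost if word in partner_words else 1.0)
    ranked = sorted(scores.items(), key=lambda item: -item[1])
    return [word for word, _ in ranked[:k]]

# Hypothetical training data for the AAC user's language model.
corpus = [
    "i want to go to the park",
    "i want to eat lunch",
    "we could go to the store",
]
model = train_bigram(corpus)

# Suppose the partner's recognized speech was "should we go to the park".
partner = {"should", "we", "go", "to", "the", "park"}
print(predict(model, "the", partner))
```

The same mechanism generalizes: a stronger language model produces the base scores, and the partner-side transcript (even with recognition errors) shifts probability toward contextually likely words.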