Title: Speech Rate Adjustments in Conversations With an Amazon Alexa Socialbot
This paper investigates users’ speech rate adjustments during conversations with an Amazon Alexa socialbot in response to situational (in-lab vs. at-home) and communicative (ASR comprehension errors) factors. We conducted user interaction studies and measured speech rate at each turn in the conversation and in baseline productions (collected prior to the interaction). Overall, we find that users slow their speech rate when talking to the bot, relative to their pre-interaction productions, consistent with hyperarticulation. Speakers use an even slower speech rate in the in-lab setting (relative to at-home). We also see evidence for turn-level entrainment: the user follows the directionality of Alexa’s changes in rate in the immediately preceding turn. Yet we do not see differences in hyperarticulation or entrainment in response to ASR errors, or on the basis of user ratings of the interaction. Overall, this work has implications for human-computer interaction and for theories of linguistic adaptation and entrainment.
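The per-turn measures described above lend themselves to a simple computational illustration. The sketch below is not the paper's pipeline; the field names (speaker, n_syllables, duration_s) and the numbers are hypothetical. It computes speech rate as syllables per second for each turn and checks whether the user's change in rate follows the direction of Alexa's change in the immediately preceding turn.

# Illustrative sketch (not the paper's actual pipeline): per-turn speech rate
# and a simple directional check for turn-level entrainment.
# Each turn is a dict with hypothetical fields: speaker, n_syllables, duration_s.

def speech_rate(turn):
    """Speech rate in syllables per second for one turn."""
    return turn["n_syllables"] / turn["duration_s"]

def entrainment_directions(turns):
    """For each user turn, compare the direction of the user's rate change
    with the direction of Alexa's rate change in the immediately preceding
    Alexa turn. Returns a list of booleans (True = same direction)."""
    matches = []
    prev_user_rate = None
    prev_alexa_rate = None
    for prev, curr in zip(turns, turns[1:]):
        if prev["speaker"] == "alexa" and curr["speaker"] == "user":
            if prev_user_rate is not None and prev_alexa_rate is not None:
                alexa_delta = speech_rate(prev) - prev_alexa_rate
                user_delta = speech_rate(curr) - prev_user_rate
                matches.append((alexa_delta > 0) == (user_delta > 0))
            prev_alexa_rate = speech_rate(prev)
            prev_user_rate = speech_rate(curr)
    return matches

# Toy usage with made-up numbers: alternating Alexa/user turns.
turns = [
    {"speaker": "alexa", "n_syllables": 30, "duration_s": 6.0},
    {"speaker": "user",  "n_syllables": 20, "duration_s": 5.0},
    {"speaker": "alexa", "n_syllables": 36, "duration_s": 6.0},
    {"speaker": "user",  "n_syllables": 22, "duration_s": 5.0},
]
print(entrainment_directions(turns))  # [True]: both rates increased here

In practice the paper models these rates with baseline productions and situational factors; this snippet only shows the directional, turn-level comparison in its simplest form.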
Award ID(s): 1911855
NSF-PAR ID: 10275334
Author(s) / Creator(s):
Date Published:
Journal Name: Frontiers in Communication
Volume: 6
ISSN: 2297-900X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose: Delayed auditory feedback (DAF) interferes with speech output. DAF causes distorted and disfluent productions and errors in the serial order of produced sounds. Although DAF has been studied extensively, the specific patterns of elicited speech errors are somewhat obscured by relatively small speech samples, differences across studies, and uncontrolled variables. The goal of this study was to characterize the types of serial order errors that increase under DAF in a systematic syllable sequence production task, which used a closed set of sounds and controlled for speech rate. Method: Sixteen adult speakers repeatedly produced CVCVCV (C = consonant, V = vowel) sequences, paced to a “visual metronome,” while hearing self-generated feedback with delays of 0–250 ms. Listeners transcribed recordings, and speech errors were classified based on the literature surrounding naturally occurring slips of the tongue. A series of mixed-effects models were used to assess the effects of delay for different error types, for error arrival time, and for speaking rate. Results: DAF had a significant effect on the overall error rate for delays of 100 ms or greater. Statistical models revealed significant effects (relative to zero delay) for vowel and syllable repetitions, vowel exchanges, vowel omissions, onset disfluencies, and distortions. Serial order errors were especially dominated by vowel and syllable repetitions. Errors occurred earlier on average within a trial for longer feedback delays. Although longer delays caused slower speech, this effect was mediated by the run number (time in the experiment) and was small compared with the effects reported in previous studies. Conclusions: DAF drives a specific pattern of serial order errors. The dominant pattern of vowel and syllable repetition errors suggests possible mechanisms whereby DAF drives changes to the activity in speech planning representations, yielding errors. These mechanisms are outlined with reference to the GODIVA (Gradient Order Directions Into Velocities of Articulators) model of speech planning and production. Supplemental Material: https://doi.org/10.23641/asha.19601785
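The analysis described above fits mixed-effects models of error rates as a function of feedback delay. A minimal sketch of that style of model, using statsmodels with an entirely hypothetical data frame (the speaker, delay_ms, and error_count columns are assumptions, not the study's variables), might look like this:

# Illustrative sketch only: a mixed-effects model of error counts by feedback
# delay, with a random intercept per speaker, in the spirit of the analysis
# described above. Column names and values are hypothetical and toy-sized.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per speaker x feedback-delay condition.
df = pd.DataFrame({
    "speaker": ["s01"] * 3 + ["s02"] * 3 + ["s03"] * 3 + ["s04"] * 3,
    "delay_ms": [0, 100, 250] * 4,
    "error_count": [1, 3, 5, 0, 2, 6, 2, 4, 7, 1, 2, 4],
})

# Fixed effect of delay (treated as numeric here), random intercept per speaker.
model = smf.mixedlm("error_count ~ delay_ms", data=df, groups=df["speaker"])
result = model.fit()
print(result.summary())

The study itself fit separate models for different error types, error arrival time, and speaking rate; the snippet only shows the general formula-plus-random-intercept pattern.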
  2. Speech-driven querying is becoming popular in new device environments such as smartphones, tablets, and even conversational assistants. However, such querying is largely restricted to natural language. Typed SQL remains the gold standard for sophisticated structured querying, although it is painful in many environments, which restricts when and how users consume their data. In this work, we propose to bridge this gap by designing a speech-driven querying system and interface for structured data that we call SpeakQL. We support a practically useful subset of regular SQL and allow users to query in any domain with novel touch/speech-based human-in-the-loop correction mechanisms. Automatic speech recognition (ASR) introduces myriad forms of errors in transcriptions, presenting us with a technical challenge. We exploit our observations of SQL's properties, its grammar, and the queried database to build a modular architecture. We present the first dataset of spoken SQL queries and a generic approach to generate them for any arbitrary schema. Our experiments show that SpeakQL can automatically correct a large fraction of errors in ASR transcriptions. User studies show that SpeakQL can help users specify SQL queries significantly faster, with an average speedup of 2.7x and up to 6.7x compared to typing on a tablet device. SpeakQL also reduces the user effort in specifying queries by an average factor of 10x and up to 60x compared to raw typing.
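As a toy illustration of the kind of grammar- and schema-aware correction the abstract describes (not SpeakQL's actual architecture), the sketch below snaps misrecognized tokens in an ASR transcription of a SQL query to the closest SQL keyword or schema term by string similarity. The keyword list, schema, and example transcription are assumptions.

# Toy illustration (not SpeakQL's actual algorithm): correct ASR-garbled SQL
# tokens by matching them against SQL keywords and schema identifiers.
import difflib

SQL_KEYWORDS = {"select", "from", "where", "and", "or", "order", "by", "group"}

def correct_transcription(asr_text, schema_terms, cutoff=0.7):
    """Replace each token with its closest SQL keyword or schema term,
    if a candidate is similar enough; otherwise keep the token as-is."""
    vocabulary = SQL_KEYWORDS | {t.lower() for t in schema_terms}
    corrected = []
    for token in asr_text.lower().split():
        match = difflib.get_close_matches(token, vocabulary, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else token)
    return " ".join(corrected)

# Hypothetical schema and a made-up ASR transcription of
# "SELECT name FROM employees WHERE salary > 50000".
schema = ["employees", "name", "salary"]
asr_output = "select names form employes where salary > 50000"
print(correct_transcription(asr_output, schema))
# -> "select name from employees where salary > 50000"

SpeakQL's real correction exploits the SQL grammar and database content much more deeply, together with touch/speech correction by the user; this snippet only mimics the token-snapping intuition.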
  3. Purpose: The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and relate these actions to objective hearing outcomes. Method: Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior. Results: Comparisons between speech intelligibility testing and EMA responses showed that poorer-performing CI users spent more time at home and less time conversing with others than higher-performing CI users and their NH peers. Perception of listening difficulty was also very different for CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered “not difficult.” CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not. Conclusion: Overall, the data indicate systematic differences between how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life.
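The study above relates smartphone EMA responses to lab speech-perception scores. A minimal, hypothetical sketch of that kind of comparison (the variables and numbers are assumptions, not the study's data) could correlate, say, the fraction of EMA reports spent conversing with each participant's speech-intelligibility score:

# Hypothetical sketch: correlate an EMA-derived social-engagement measure with
# a lab speech-intelligibility score. All values here are made up.
from scipy.stats import pearsonr

# One value per participant: proportion of EMA surveys reporting conversation,
# and percent-correct on a lab speech perception test.
conversing_fraction = [0.15, 0.30, 0.45, 0.50, 0.65, 0.70]
intelligibility_pct = [42, 55, 60, 71, 78, 85]

r, p = pearsonr(conversing_fraction, intelligibility_pct)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")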
  4. Deaf and Hard-of-Hearing (DHH) users face accessibility challenges during in-person and remote meetings. While emerging use of applications incorporating automatic speech recognition (ASR) is promising, more user-interface and user-experience research is needed. While co-design methods could elucidate designs for such applications, COVID-19 has interrupted in-person research. This study describes a novel methodology for conducting online co-design workshops with 18 DHH and hearing participant pairs to investigate ASR-supported mobile and videoconferencing technologies along two design dimensions: correcting errors in ASR output and implementing notification systems for influencing speaker behaviors. Our methodological findings include an analysis of communication modalities and strategies participants used, use of an online collaborative whiteboarding tool, and how participants reconciled differences in ideas. Finally, we present guidelines for researchers interested in online DHH co-design methodologies, enabling greater geographic diversity among study participants even beyond the current pandemic.
  5. We evaluate several publicly available off-the-shelf (commercial and research) automatic speech recognition (ASR) systems on dialogue agent-directed English speech from speakers with General American vs. non-American accents. Our results show that the performance of the ASR systems for non-American accents is considerably worse than for General American accents. Depending on the recognizer, the absolute difference in performance between General American accents and all non-American accents combined can vary approximately from 2% to 12%, with relative differences varying approximately between 16% and 49%. This drop in performance becomes even larger when we consider specific categories of non-American accents, indicating a need for more diligent collection of, and training on, non-native English speaker data in order to narrow this performance gap. There are performance differences across ASR systems, and while the same general pattern holds, with more errors for non-American accents, there are some accents for which the best recognizer is different than in the overall case. We expect these results to be useful for dialogue system designers in developing more robust, inclusive dialogue systems, and for ASR providers in taking into account performance requirements for different accents.
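As a small illustration of the kind of comparison reported above (not the paper's evaluation code), the sketch below computes word error rate (WER) per accent group with a standard edit-distance WER and reports the absolute and relative gaps. The transcripts and recognizer outputs are made up.

# Illustrative sketch: per-group word error rate (WER) and the absolute and
# relative performance gap between accent groups. All data here is made up.

def wer(reference, hypothesis):
    """Word error rate via edit distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

def group_wer(pairs):
    """Average WER over (reference, hypothesis) pairs."""
    return sum(wer(r, h) for r, h in pairs) / len(pairs)

# Hypothetical recognizer output for two accent groups.
general_american = [("turn on the lights", "turn on the lights"),
                    ("play some jazz music", "play sum jazz music")]
non_american = [("turn on the lights", "turn of the light"),
                ("play some jazz music", "play sum chas music")]

wer_ga, wer_na = group_wer(general_american), group_wer(non_american)
print(f"absolute gap: {100 * (wer_na - wer_ga):.1f}% WER")
print(f"relative gap: {100 * (wer_na - wer_ga) / wer_ga:.1f}% vs. General American")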