Abstract Artificial intelligence in the workplace is becoming increasingly common. These tools are sometimes used to aid users in performing their tasks, for example, when an artificial intelligence tool assists a radiologist in searching for abnormalities in radiographic images. The use of artificial intelligence brings a wealth of benefits, such as increasing the efficiency and efficacy of performance. However, little research has been conducted to determine how the use of artificial intelligence assistants might affect the user's cognitive skills. In this theoretical perspective, we discuss how artificial intelligence assistants might accelerate skill decay among experts and hinder skill acquisition among learners. Further, we discuss how AI assistants might also prevent experts and learners from recognizing these deleterious effects. We then discuss the types of questions that use-inspired basic cognitive researchers, applied researchers, and computer science researchers should seek to answer. We conclude that multidisciplinary research from use-inspired basic cognitive research, domain-specific applied research, and technical research (e.g., human factors research, computer science research) is needed to (a) understand these potential consequences, (b) design artificial intelligence systems to mitigate these impacts, and (c) develop training and use protocols to prevent negative impacts on users' cognitive skills. Only by answering these questions from multidisciplinary perspectives can we harness the benefits of artificial intelligence in the workplace while preventing negative impacts on users' cognitive skills.
"All Rise for the AI Director": Eliciting Possible Futures of Voice Technology through Story Completion
How might the capabilities of voice assistants several decades in the future shape human society? To anticipate the space of possible futures for voice assistants, we asked 149 participants to each complete a story based on a brief story stem set in the year 2050 in one of five different contexts: the home, doctor's office, school, workplace, and public transit. Story completion as a method elicits participants' visions of possible futures, unconstrained by their understanding of current technological capabilities, but still reflective of current sociocultural values. Through a thematic analysis, we find these stories reveal the extremes of the capabilities and concerns of today's voice assistants, and of artificial intelligence more broadly, such as improving efficiency and offering instantaneous support, but also replacing human jobs, eroding human agency, and causing harm through malfunction. We conclude by discussing how these speculative visions might inform and inspire the design of voice assistants and other artificial intelligence.
- Award ID(s):
- 1734456
- PAR ID:
- 10276866
- Journal Name:
- DIS '20: Proceedings of the 2020 ACM Designing Interactive Systems Conference
- Page Range / eLocation ID:
- 2051 to 2064
- Sponsoring Org:
- National Science Foundation
More Like This
Intelligent voice assistants, and the third-party apps (aka "skills" or "actions") that power them, are increasing in popularity and beginning to experiment with the ability to continuously listen to users. This paper studies how privacy concerns related to such always-listening voice assistants might affect consumer behavior and whether certain privacy mitigations would render them more acceptable. To explore these questions with more realistic user choices, we built an interactive app store that allowed users to install apps for a hypothetical always-listening voice assistant. In a study with 214 participants, we asked users to browse the app store and install apps for different voice assistants that offered varying levels of privacy protections. We found that users were generally more willing to install continuously-listening apps when there were greater privacy protections, but this effect was not universally present. The majority did not review any permissions in detail, but still expressed a preference for stronger privacy protections. Our results suggest that privacy factors into user choice, but many people choose to skip this information.
People interacting with voice assistants are often frustrated by voice assistants' frequent errors and inability to respond to backchannel cues. We introduce an open-source video dataset of 21 participants' interactions with a voice assistant, and explore the possibility of using this dataset to enable automatic error recognition to inform self-repair. The dataset includes clipped and labeled videos of participants' faces during free-form interactions with the voice assistant from the smart speaker's perspective. To validate our dataset, we emulated a machine learning classifier by asking crowdsourced workers to recognize voice assistant errors from watching soundless video clips of participants' reactions. We found trends suggesting it is possible to determine the voice assistant's performance from a participant's facial reaction alone. This work posits elicited datasets of interactive responses as a key step towards improving error recognition for repair for voice assistants in a wide variety of applications.
More and more, humans are engaging with voice-activated artificially intelligent (voice-AI) systems that have names (e.g., Alexa), apparent genders, and even emotional expression; they are in many ways a growing 'social' presence. But to what extent do people display sociolinguistic attitudes, developed from human-human interaction, toward these disembodied text-to-speech (TTS) voices? And how might they vary based on the cognitive traits of the individual user? The current study addresses these questions, testing native English speakers' judgments for six traits (intelligent, likeable, attractive, professional, human-like, and age) for a naturally-produced female human voice and the US-English default Amazon Alexa voice. Following exposure to the voices, participants completed these ratings for each speaker, as well as the Autism Quotient (AQ) survey, to assess individual differences in cognitive processing style. Results show differences in individuals' ratings of the likeability and human-likeness of the human and AI talkers based on AQ score. Results suggest that humans transfer social assessment of human voices to voice-AI, but that the way they do so is mediated by their own cognitive characteristics.
Voice assistants embodied in smart speakers (e.g., Amazon Echo, Google Home) enable conversational interaction that does not necessarily rely on expertise with mobile or desktop computing. Hence, these voice assistants offer new opportunities to different populations, including individuals who are not interested in or able to use traditional computing devices such as computers and smartphones. To understand how older adults who use technology infrequently perceive and use these voice assistants, we conducted a three-week field deployment of the Amazon Echo Dot in the homes of seven older adults. Participants described increased confidence using digital technology and found the conversational voice interfaces easy to use. While some types of usage dropped over the three-week period (e.g., playing music), we observed consistent usage for finding online information. Given that much of this information was health-related, this finding emphasizes the need to revisit concerns about credibility of information with this new interaction medium. Although features to support memory (e.g., setting timers, reminders) were initially perceived as useful, the actual usage was unexpectedly low due to reliability concerns. We discuss how these findings apply to other user groups along with design implications and recommendations for future work on voice user interfaces.