Abstract
We define Artificial Intelligence-Mediated Communication (AI-MC) as interpersonal communication in which an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals. The recent advent of AI-MC raises new questions about how technology may shape human communication and requires re-evaluation – and potentially expansion – of many of Computer-Mediated Communication's (CMC) key theories, frameworks, and findings. A research agenda around AI-MC should consider the design of these technologies and the psychological, linguistic, relational, policy, and ethical implications of introducing AI into human–human communication. This article aims to articulate such an agenda.
AI as a moral crumple zone: The effects of AI-mediated communication on attribution and trust
AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) × 2 (standard vs. AI-mediated messaging app) between-subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to existing literature regarding perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
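As a rough illustration of how the trust measure in such a 2 × 2 between-subjects design might be analyzed, the sketch below fits a two-way ANOVA in Python. All data, effect sizes, and column names are hypothetical stand-ins, not the study's actual materials or modeling choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_cell = 30  # participants per condition (illustrative)

# Simulate one trust rating per participant across the four cells of the
# 2 (conversation outcome) x 2 (app type) design.
rows = []
for outcome in ("success", "failure"):
    for app in ("standard", "ai_mediated"):
        trust = rng.normal(loc=5.0 if app == "ai_mediated" else 4.5,
                           scale=1.0, size=n_per_cell).clip(1, 7)
        rows += [{"outcome": outcome, "app": app, "trust": t} for t in trust]

df = pd.DataFrame(rows)

# Two-way between-subjects ANOVA: main effects of conversation outcome and
# app type on perceived trust, plus their interaction.
model = smf.ols("trust ~ C(outcome) * C(app)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```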
- PAR ID: 10183360
- Date Published:
- Journal Name: Computers in Human Behavior
- Volume: 106
- ISSN: 0747-5632
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Proper calibration of human reliance on AI is fundamental to achieving complementary performance in AI-assisted human decision-making. Most previous work has focused on assessing user reliance, and more broadly trust, retrospectively, through user perceptions and task-based measures. In this work, we explore the relationship between eye gaze and reliance under varying task difficulties and AI performance levels in a spatial reasoning task. Our results show a strong positive correlation between percent gaze duration on the AI suggestion and user-AI task agreement, as well as user perceived reliance. Moreover, user agency is preserved particularly when the task is easy and when AI performance is low or inconsistent. Our results also reveal nuanced differences between reliance and trust. We discuss the potential of using eye gaze to gauge human reliance on AI in real-time, enabling adaptive AI assistance for optimal human-AI team performance.
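A minimal sketch of the reported gaze-reliance relationship, assuming one record per participant with a percent-gaze measure and an AI-agreement rate (all variable names and values are invented for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 40  # hypothetical number of participants

# Percent of trial time spent gazing at the AI suggestion, and the fraction
# of trials where the participant's final answer agreed with the AI.
gaze_pct = rng.uniform(0, 100, size=n)
agreement = (0.3 + 0.005 * gaze_pct + rng.normal(0, 0.08, size=n)).clip(0, 1)

# A strong positive correlation here would mirror the pattern the study reports.
r, p = pearsonr(gaze_pct, agreement)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```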
As AI-mediated communication (AI-MC) becomes more prevalent in everyday interactions, it becomes increasingly important to develop a rigorous understanding of its effects on interpersonal relationships and on society at large. Controlled experimental studies offer a key means of developing such an understanding, but various complexities make it difficult for experimental AI-MC research to simultaneously achieve the criteria of experimental realism, experimental control, and scalability. After outlining these methodological challenges, this paper offers the concept of methodological middle spaces as a means to address these challenges. This concept suggests that the key to simultaneously achieving all three of these criteria is to abandon the perfect attainment of any single criterion. This concept's utility is demonstrated via its use to guide the design of a platform for conducting text-based AI-MC experiments. Through a series of three example studies, the paper illustrates how the concept of methodological middle spaces can inform the design of specific experimental methods. Doing so enabled these studies to examine research questions that would have been either difficult or impossible to investigate using existing approaches. The paper concludes by describing how future research could similarly apply the concept of methodological middle spaces to expand methodological possibilities for AI-MC research in ways that enable contributions not currently possible.
Remote Patient Monitoring (RPM) devices transmit patients' medical indicators (e.g., blood pressure) from the patient's home testing equipment to their healthcare providers, in order to monitor chronic conditions such as hypertension. AI systems have the potential to enhance access to timely medical advice based on the data that RPM devices produce. In this paper, we report on three studies investigating how the severity of users' medical condition (normal vs. high blood pressure), security risk (low vs. modest vs. high risk), and medical advice source (human doctor vs. AI) influence user perceptions of advisor trustworthiness and willingness to disclose RPM-acquired information. We found that trust mediated the relationship between the advice source and users' willingness to disclose health information: users trust doctors more than AI and are more willing to disclose their RPM-acquired health information to a more trusted advice source. However, we unexpectedly discovered that conditional on trust, users disclose RPM-acquired information more readily to AI than to doctors. We observed that the advice source did not influence perceptions of security and privacy risks. We conclude by discussing how our findings can support the design of RPM applications.
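To make the mediation claim concrete, here is a simple Baron & Kenny-style sketch of trust mediating the advice source to disclosure relationship. The data, coefficients, and coding (0 = AI, 1 = doctor) are invented; the paper's actual analysis may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200

# Hypothetical data: doctors elicit more trust (positive a-path), trust drives
# disclosure (b-path), but the direct effect of a human source is negative,
# echoing the finding that, conditional on trust, users disclose more to AI.
source = rng.integers(0, 2, size=n)                    # 0 = AI, 1 = doctor
trust = 3.5 + 0.8 * source + rng.normal(0, 1, size=n)
disclose = 2.0 + 0.6 * trust - 0.3 * source + rng.normal(0, 1, size=n)
df = pd.DataFrame({"source": source, "trust": trust, "disclose": disclose})

path_a = smf.ols("trust ~ source", data=df).fit()             # source -> trust
path_b = smf.ols("disclose ~ trust + source", data=df).fit()  # controls source

print(f"a (source -> trust):       {path_a.params['source']:.2f}")
print(f"b (trust -> disclose):     {path_b.params['trust']:.2f}")
print(f"c' (direct source effect): {path_b.params['source']:.2f}")
```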
Speech alignment is a phenomenon in which talkers subconsciously adopt the speech and language patterns of their interlocutor. Nowadays, people of all ages speak with voice-activated, artificially intelligent (voice-AI) digital assistants through phones or smart speakers. This study examines the effects of participants' age (older adults, 53–81 years old vs. younger adults, 18–39 years old) and gender (female and male) on degree of speech alignment during shadowing of (female and male) human and voice-AI (Apple's Siri) productions. Degree of alignment was assessed holistically via a perceptual ratings AXB task by a separate group of listeners. Results reveal that older and younger adults display distinct patterns of alignment based on humanness and gender of the human model talkers: older adults displayed greater alignment toward the female human and device voices, while younger adults aligned to a greater extent toward the male human voice. Additionally, there were other gender-mediated differences observed, all of which interacted with model talker category (voice-AI vs. human) or shadower age category (OA vs. YA). Taken together, these results suggest a complex interplay of social dynamics in alignment, which can inform models of speech production both in human-human and human-device interaction.
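As a toy illustration of how holistic AXB ratings might be aggregated, the snippet below computes the proportion of "aligned" judgments per shadower age group and model talker category; every name and value is hypothetical:

```python
import pandas as pd

# Hypothetical AXB judgments: 1 means a listener judged the shadowed production
# as sounding more like the model talker than the pre-exposure baseline did.
judgments = pd.DataFrame({
    "shadower_age":   ["OA", "OA", "YA", "YA", "OA", "YA"],
    "model_type":     ["human_female", "voice_AI", "human_male",
                       "voice_AI", "human_male", "human_female"],
    "chose_shadowed": [1, 1, 0, 1, 1, 0],
})

# Mean judgment per cell; values above 0.5 indicate perceived convergence
# toward that category of model talker.
alignment = judgments.groupby(["shadower_age", "model_type"])["chose_shadowed"].mean()
print(alignment)
```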