

Search for: All records

Award ID contains: 1901151

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI’s negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (“smart replies”), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships. More specifically, it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits if used overtly.
    Free, publicly-accessible full text available December 1, 2024
  2. Augmented Reality (AR) glasses separate dyadic interactions on different sides of the lens, where the person wearing the glasses (primary user) sees an AR world overlaid on their partner (secondary actor). The secondary actor interacts with the primary user understanding they are seeing both physical and virtual worlds. We use grounded theory to study interaction tasks, participatory design sessions, and in-depth interviews of 10 participants and explore how AR real-time modifications affect them. We observe a power imbalance attributed to: (1) the lack of transparency of the primary user’s view, (2) violation of agency over self-presentation, and (3) discreet recording capabilities of AR glasses. This information asymmetry leads to a negotiation of behaviors to reach a silently understood equilibrium. This paper addresses underlying design issues that contribute to power imbalances in dyadic interactions and offers nuanced insights into the dynamics between primary users and secondary actors.
    Free, publicly-accessible full text available July 10, 2024
  3. AI language technologies increasingly assist and expand human communication. While AI-mediated communication reduces human effort, its societal consequences are poorly understood. In this study, we investigate whether using an AI writing assistant in personal self-presentation changes how people talk about themselves. In an online experiment, we asked participants (N=200) to introduce themselves to others. An AI language assistant supported their writing by suggesting sentence completions. The language model generating suggestions was fine-tuned to preferably suggest either interest, work, or hospitality topics. We evaluate how the topic preference of a language model affected users’ topic choice by analyzing the topics participants discussed in their self-presentations. Our results suggest that AI language technologies may change the topics their users talk about. We discuss the need for a careful debate and evaluation of the topic priors built into AI language technologies. 
  4. As AI-mediated communication (AI-MC) becomes more prevalent in everyday interactions, it becomes increasingly important to develop a rigorous understanding of its effects on interpersonal relationships and on society at large. Controlled experimental studies offer a key means of developing such an understanding, but various complexities make it difficult for experimental AI-MC research to simultaneously achieve the criteria of experimental realism, experimental control, and scalability. After outlining these methodological challenges, this paper offers the concept of methodological middle spaces as a means to address these challenges. This concept suggests that the key to simultaneously achieving all three of these criteria is to abandon the perfect attainment of any single criterion. This concept's utility is demonstrated via its use to guide the design of a platform for conducting text-based AI-MC experiments. Through a series of three example studies, the paper illustrates how the concept of methodological middle spaces can inform the design of specific experimental methods. Doing so enabled these studies to examine research questions that would have been either difficult or impossible to investigate using existing approaches. The paper concludes by describing how future research could similarly apply the concept of methodological middle spaces to expand methodological possibilities for AI-MC research in ways that enable contributions not currently possible. 
  5. Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as “more human than human.” We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition. 
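The computational analysis described in the abstract above identifies shallow cues (first-person pronouns, contractions, family topics) that readers intuitively but wrongly associate with human-written text. A minimal illustrative sketch of extracting that kind of surface feature — not the study's actual code; the function and feature names are assumptions:

```python
import re

# Illustrative only: a few surface cues the study found people rely on
# when judging whether a self-presentation is human- or AI-written.
FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our"}
FAMILY_TOPICS = {"family", "kids", "wife", "husband"}

def heuristic_features(text: str) -> dict:
    """Count surface cues readers intuitively (and unreliably) treat
    as markers of human-written language."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "first_person": sum(t in FIRST_PERSON for t in tokens),
        "contractions": sum("'" in t for t in tokens),
        "mentions_family": any(t in FAMILY_TOPICS for t in tokens),
    }

feats = heuristic_features("I'm a father of two; my family means everything to me.")
```

Because such features are easy to compute, an AI system can just as easily optimize for them — which is how the abstract's "more human than human" manipulation becomes possible.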
  8. AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects that it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) x 2 (standard vs. AI-mediated messaging app) between-subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to existing literature regarding perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
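The 2 x 2 between-subjects design described in the last abstract crosses conversation outcome with messaging-app type, yielding four cells to which each participant is randomly assigned. A brief sketch of enumerating those cells (the factor and field names here are illustrative assumptions, not taken from the study's materials):

```python
from itertools import product

# Two factors, two levels each, per the abstract's 2 x 2 design.
outcomes = ["successful", "unsuccessful"]   # conversation outcome
apps = ["standard", "AI-mediated"]          # messaging app type

# Each participant experiences exactly one of these four conditions.
conditions = [
    {"conversation": outcome, "messaging_app": app}
    for outcome, app in product(outcomes, apps)
]
```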