-
Machines powered by artificial intelligence increasingly permeate social networks and control resources within them. However, machine allocation behavior may offer little benefit to human welfare in such networks if it ignores the specific network mechanisms of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior:
reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bot’s network position. We show that reciprocal bots do little to change the unequal distribution of resources among people. Stingy bots, on the other hand, balance structural power and improve collective welfare in human groups when placed in a specific network position, although they bestow no wealth on people. Our findings highlight the need to incorporate the human tendencies toward reciprocity and relational interdependence when designing machine behavior in sharing networks. Whether conscientious machines work for human welfare depends on the network structure in which they interact.
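To make the two policies concrete, here is a minimal sketch, assuming a hypothetical round-based sharing game in which each player decides how much to pass back to each neighbor. The function names and game mechanics below are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch (not the authors' implementation) of the two bot allocation
# policies described in the abstract. Names and game rules are illustrative.

def reciprocal_bot_share(received_last_round: dict[str, float]) -> dict[str, float]:
    """Reciprocal policy: return to each neighbor whatever they shared with the bot."""
    return dict(received_last_round)

def stingy_bot_share(received_last_round: dict[str, float]) -> dict[str, float]:
    """Stingy policy: share nothing, regardless of what neighbors gave."""
    return {neighbor: 0.0 for neighbor in received_last_round}

if __name__ == "__main__":
    # Example round: two human neighbors each shared 3 units with the bot.
    incoming = {"human_A": 3.0, "human_B": 3.0}
    print(reciprocal_bot_share(incoming))  # {'human_A': 3.0, 'human_B': 3.0}
    print(stingy_bot_share(incoming))      # {'human_A': 0.0, 'human_B': 0.0}
```

As the abstract notes, the effect of either policy depends on where in the network the bot is placed, not on the sharing rule alone.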
-
Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI’s negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (“smart replies”), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships: it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected of using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits when it is used overtly.
-
Machines increasingly decide how resources or tasks are allocated among people, resulting in what we call Machine Allocation Behavior. People respond strongly to how other people or machines allocate resources. However, the implications for human relationships of algorithmic allocations of, for example, tasks among crowd workers, annual bonuses among employees, or a robot’s gaze among members of a group entering a store remain unclear. We leverage a novel research paradigm to study the impact of machine allocation behavior on fairness perceptions, interpersonal perceptions, and individual performance. In a 2 × 3 between-subjects design that manipulates how the allocation agent is presented (human vs. artificial intelligence [AI] system) and the allocation type (receiving less vs. equal vs. more resources), we find that group members who receive more resources perceive their counterpart as less dominant when the allocation originates from an AI as opposed to a human. Our findings have implications for our understanding of the impact of machine allocation behavior on interpersonal dynamics and for the way in which we understand human responses to this type of machine behavior.
-
Augmented Reality (AR) glasses separate dyadic interactions on different sides of the lens, where the person wearing the glasses (primary user) sees an AR world overlaid on their partner (secondary actor). The secondary actor interacts with the primary user understanding that they are seeing both physical and virtual worlds. We use grounded theory to study interaction tasks, participatory design sessions, and in-depth interviews of 10 participants, and we explore how AR real-time modifications affect them. We observe a power imbalance attributed to the (1) lack of transparency of the primary user’s view, (2) violation of agency over self-presentation, and (3) discreet recording capabilities of AR glasses. This information asymmetry leads to a negotiation of behaviors to reach a silently understood equilibrium. This paper addresses underlying design issues that contribute to power imbalances in dyadic interactions and offers nuanced insights into the dynamics between primary users and secondary actors.
-
As AI-mediated communication (AI-MC) becomes more prevalent in everyday interactions, it becomes increasingly important to develop a rigorous understanding of its effects on interpersonal relationships and on society at large. Controlled experimental studies offer a key means of developing such an understanding, but various complexities make it difficult for experimental AI-MC research to simultaneously achieve the criteria of experimental realism, experimental control, and scalability. After outlining these methodological challenges, this paper offers the concept of methodological middle spaces as a means to address these challenges. This concept suggests that the key to simultaneously achieving all three of these criteria is to abandon the perfect attainment of any single criterion. This concept's utility is demonstrated via its use to guide the design of a platform for conducting text-based AI-MC experiments. Through a series of three example studies, the paper illustrates how the concept of methodological middle spaces can inform the design of specific experimental methods. Doing so enabled these studies to examine research questions that would have been either difficult or impossible to investigate using existing approaches. The paper concludes by describing how future research could similarly apply the concept of methodological middle spaces to expand methodological possibilities for AI-MC research in ways that enable contributions not currently possible.
-
The increased use of algorithms to support decision making raises questions about whether people prefer algorithmic or human input when making decisions. Two streams of research on algorithm aversion and algorithm appreciation have yielded contradictory results. Our work attempts to reconcile these contradictory findings by focusing on the framings of humans and algorithms as a mechanism. In three decision making experiments, we created an algorithm appreciation result (Experiment 1) as well as an algorithm aversion result (Experiment 2) by manipulating only the descriptions of the human agent and the algorithmic agent, and we demonstrated how different choices of framing can lead to inconsistent outcomes across previous studies (Experiment 3). We also showed that these results were mediated by the agent's perceived competence, i.e., expert power. The results provide insights into the divergence of the algorithm aversion and algorithm appreciation literature. We hope to shift attention from these two contradicting phenomena to how we can better design the framing of algorithms. We also call the attention of the community to the theory of power sources, as it is a systemic framework that can open up new possibilities for designing algorithmic decision support systems.
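To illustrate what "mediated by the agent's perceived competence" means in practice, here is a minimal Baron-Kenny-style mediation sketch on simulated data. The variable names, effect sizes, and the use of statsmodels are assumptions for illustration only, not the authors' actual analysis.

```python
# Illustrative mediation sketch: framing -> perceived competence -> reliance on the agent.
# Simulated data; not the study's dataset or analysis pipeline.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
framing = rng.integers(0, 2, n)                    # 0 = "human" framing, 1 = "expert algorithm" framing
competence = 0.8 * framing + rng.normal(0, 1, n)   # mediator: perceived competence
reliance = 0.6 * competence + rng.normal(0, 1, n)  # outcome: weight given to the agent's advice

# Path c: total effect of framing on reliance.
total = sm.OLS(reliance, sm.add_constant(framing)).fit()
# Path a: framing -> competence.
a_path = sm.OLS(competence, sm.add_constant(framing)).fit()
# Path b (and c'): reliance regressed on framing and competence together.
b_path = sm.OLS(reliance, sm.add_constant(np.column_stack([framing, competence]))).fit()

print(total.params, a_path.params, b_path.params)
```

In this kind of analysis, a nonzero a-path and b-path together with a shrunken direct effect of framing would be consistent with perceived competence carrying the framing effect, which is the pattern the abstract describes.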