Abstract
Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare over networks when it ignores the specific network mechanism of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bot's network position. We show that reciprocal bots do little to change the unequal resource distribution among people. Stingy bots, on the other hand, balance structural power and improve collective welfare in human groups when placed in a specific network position, even though they bestow no wealth on people. Our findings highlight the need to incorporate the human nature of reciprocity and relational interdependence when designing machine behavior in sharing networks: whether conscientious machines serve human welfare depends on the network structure in which they interact.
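The two policies can be read as a pair of simple allocation rules. Below is a minimal sketch, assuming a round-based game in which each player chooses which network neighbors receive a unit of resource; the function names, unit amounts, and first-round seeding rule are hypothetical illustrations, not the experiment's actual implementation.

```python
# Minimal sketch of the two bot policies contrasted above. All names and
# the round structure are assumptions for illustration.

def reciprocal_bot(neighbors, gave_to_me_last_round):
    """Give one unit to each neighbor who gave to the bot last round
    (and to every neighbor in the first round, to seed exchange)."""
    if not gave_to_me_last_round:  # first round: no history yet
        return {n: 1 for n in neighbors}
    return {n: 1 for n in neighbors if gave_to_me_last_round.get(n, 0) > 0}

def stingy_bot(neighbors, gave_to_me_last_round):
    """Give nothing to anyone, regardless of neighbors' behavior."""
    return {}
```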
The social consequences of Machine Allocation Behavior: Fairness, interpersonal perceptions and performance
Machines increasingly decide how resources or tasks are allocated among people, resulting in what we call Machine Allocation Behavior. People respond strongly to how other people or machines allocate resources. However, the implications for human relationships of algorithmic allocations of, for example, tasks among crowd workers, annual bonuses among employees, or a robot's gaze among members of a group entering a store remain unclear. We leverage a novel research paradigm to study the impact of machine allocation behavior on fairness perceptions, interpersonal perceptions, and individual performance. In a 2 × 3 between-subject design that manipulates how the allocation agent is presented (human vs. artificial intelligence [AI] system) and the allocation type (receiving less vs. equal vs. more resources), we find that group members who receive more resources perceive their counterpart as less dominant when the allocation originates from an AI rather than a human. Our findings have implications for our understanding of the impact of machine allocation behavior on interpersonal dynamics and for how we understand human responses to this type of machine behavior.
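To make the design concrete, here is a minimal sketch of the condition structure; the variable names and the seeded-assignment scheme are illustrative assumptions, not the authors' materials.

```python
import random
from itertools import product

# The six between-subject cells of the 2 x 3 design described above.
AGENT_FRAMING = ["human", "AI"]         # how the allocation agent is presented
ALLOCATION = ["less", "equal", "more"]  # resources the participant receives

CELLS = list(product(AGENT_FRAMING, ALLOCATION))

def assign_condition(participant_id: int) -> tuple:
    """Randomly assign a participant to one of the six cells
    (seeded by participant ID for reproducibility)."""
    return random.Random(participant_id).choice(CELLS)
```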
- Award ID(s): 1942085
- PAR ID: 10493566
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Computers in Human Behavior
- Volume: 146
- Issue: C
- ISSN: 0747-5632
- Page Range / eLocation ID: 107628
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Improving our understanding of how humans perceive AI teammates is an important foundation for our general understanding of human-AI teams. Extending relevant work from cognitive science, we propose a framework based on item response theory for modeling these perceptions. We apply this framework to real-world experiments, in which each participant works alongside another person or an AI agent in a question-answering setting, repeatedly assessing their teammate's performance. Using this experimental data, we demonstrate the use of our framework for testing research questions about people's perceptions of both AI agents and other people. We contrast mental models of AI teammates with those of human teammates as we characterize the dimensionality of these mental models, their development over time, and the influence of the participants' own self-perception. Our results indicate that people expect AI agents' performance to be significantly better on average than the performance of other humans, with less variation across different types of problems. We conclude with a discussion of the implications of these findings for human-AI interaction. (A sketch of the item-response model referenced here appears after this list.)
- AI-mediated communication (AI-MC) represents a new paradigm where communication is augmented or generated by an intelligent system. As AI-MC becomes more prevalent, it is important to understand the effects it has on human interactions and interpersonal relationships. Previous work tells us that in human interactions with intelligent systems, misattribution is common and trust is developed and handled differently than in interactions between humans. This study uses a 2 (successful vs. unsuccessful conversation) × 2 (standard vs. AI-mediated messaging app) between-subjects design to explore whether AI mediation has any effects on attribution and trust. We show that the presence of AI-generated smart replies serves to increase perceived trust between human communicators and that, when things go awry, the AI seems to be perceived as a coercive agent, allowing it to function like a moral crumple zone and lessen the responsibility assigned to the other human communicator. These findings suggest that smart replies could be used to improve relationships and perceptions of conversational outcomes between interlocutors. Our findings also add to the existing literature on perceived agency in smart agents by illustrating that in this type of AI-MC, the AI is considered to have agency only when communication goes awry.
- To guide social interaction, people often rely on expectations about the traits of other people, based on markers of social group membership (i.e., stereotypes). Although the influence of stereotypes on social behavior is widespread, key questions remain about how traits inferred from social-group membership are instantiated in the brain and incorporated into neural computations that guide social behavior. Here, we show that the human lateral orbitofrontal cortex (OFC) represents the content of stereotypes about members of different social groups in the service of social decision-making. During functional MRI scanning, participants decided how to distribute resources across themselves and members of a variety of social groups in a modified Dictator Game. Behaviorally, we replicated our recent finding that inferences about others' traits, captured by a two-dimensional framework of stereotype content (warmth and competence), had dissociable effects on participants' monetary-allocation choices: recipients' warmth increased participants' aversion to advantageous inequity (i.e., earning more than recipients), and recipients' competence increased participants' aversion to disadvantageous inequity (i.e., earning less than recipients). Neurally, representational similarity analysis revealed that others' traits in the two-dimensional space were represented in the temporoparietal junction and superior temporal sulcus, two regions associated with mentalizing, and in the lateral OFC, known to represent inferred features of a decision context outside the social domain. Critically, only the latter predicted individual choices, suggesting that the effect of stereotypes on behavior is mediated by inference-based decision-making processes in the OFC. (A sketch of the corresponding inequity-aversion utility appears after this list.)
- Reis, H.; Itzchakov, G. (Eds.) Building intimate relationships is rewarding but entails risking rejection. Trait self-esteem—a person's overall self-evaluation—has important implications for how people behave in socially risky situations. Integrating established models of responsiveness and intimacy with theory and research on self-esteem, we present a model that highlights the ways in which self-esteem impacts intimacy-building. A review of relevant research reveals that compared to people with high self-esteem, people with low self-esteem exhibit interpersonal perceptions and behaviors that can hinder intimacy development—for example, disclosing less openly, and eliciting and perceiving less responsiveness from others. We identify important directions for future research and consider methods for encouraging intimacy-promoting processes among people with low self-esteem.
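The item-response-theory framework mentioned in the first related abstract can be grounded with a standard two-parameter logistic (2PL) response function. This is a minimal, textbook-form sketch, not necessarily the exact model the authors fit; the function and parameter names are illustrative.

```python
import math

def p_correct(theta: float, difficulty: float, discrimination: float = 1.0) -> float:
    """Two-parameter logistic (2PL) IRT model: probability that an agent
    with latent ability `theta` answers an item of this difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Illustration of the reported finding: a teammate perceived as higher-ability
# (here, the AI) is expected to succeed more often on the same item.
print(p_correct(theta=1.5, difficulty=0.0))  # ~0.82 (perceived AI teammate)
print(p_correct(theta=0.5, difficulty=0.0))  # ~0.62 (perceived human teammate)
```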
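The warmth and competence effects reported in the Dictator Game abstract map naturally onto a Fehr-Schmidt-style inequity-aversion utility, sketched below. This standard form is an assumption for illustration; the study's actual model specification may differ.

```python
def allocation_utility(self_payoff: float, other_payoff: float,
                       alpha: float, beta: float) -> float:
    """Fehr-Schmidt-style inequity-aversion utility for an allocator.
    alpha: aversion to disadvantageous inequity (earning less than the
           recipient) -- per the abstract, increased by perceived competence.
    beta:  aversion to advantageous inequity (earning more than the
           recipient) -- per the abstract, increased by perceived warmth."""
    disadvantageous = max(other_payoff - self_payoff, 0.0)
    advantageous = max(self_payoff - other_payoff, 0.0)
    return self_payoff - alpha * disadvantageous - beta * advantageous

# A warm recipient (high beta) makes keeping the larger share less attractive:
print(allocation_utility(self_payoff=8, other_payoff=2, alpha=0.5, beta=0.8))  # 3.2
print(allocation_utility(self_payoff=5, other_payoff=5, alpha=0.5, beta=0.8))  # 5.0
```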