Team member inclusion is vital in collaborative teams. In this work, we explore two strategies to increase the inclusion of human team members in a human-robot team: (1) giving a person in the group a specialized role (the 'robot liaison') and (2) having the robot verbally support human team members. In a human-subjects experiment (N = 26 teams, 78 participants), groups of three participants completed two rounds of a collaborative task. In round one, two participants (ingroup) completed a task with a robot in one room, while one participant (outgroup) completed the same task with a robot in a different room. In round two, all three participants and one robot completed a second task in the same room, with one participant designated as the robot liaison. During round two, the robot verbally supported each participant six times on average. Results show that participants in the robot liaison role reported lower perceived group inclusion than the other group members. Additionally, when outgroup members served as the robot liaison, the group was less likely to incorporate their ideas into its final decision. In response to the robot's supportive utterances, outgroup members, but not ingroup members, showed an increase in the proportion of time they spent talking to the group. Our results suggest that specialized roles may hinder human team member inclusion, whereas supportive robot utterances show promise in encouraging contributions from individuals who feel excluded.
The Influence of Robot Verbal Support on Human Team Members: Encouraging Outgroup Contributions and Suppressing Ingroup Supportive Behavior
As teams of people increasingly incorporate robot members, it is essential to consider how a robot's actions may influence the team's social dynamics and interactions. In this work, we investigated the effects of verbal support from a robot (e.g., “good idea Salim,” “yeah”) on human team members' interactions related to psychological safety and inclusion. We conducted a between-subjects experiment (N = 39 groups, 117 participants) where the robot team member either (A) gave verbal support or (B) did not give verbal support to the human team members of a human-robot team comprised of 2 human ingroup members, 1 human outgroup member, and 1 robot. We found that targeted support from the robot (e.g., “good idea George”) had a positive effect on outgroup members, who increased their verbal participation after receiving targeted support from the robot. When comparing groups that did and did not have verbal support from the robot, we found that outgroup members received fewer verbal backchannels from ingroup members if their group had robot verbal support. These results suggest that verbal support from a robot may have some direct benefits to outgroup members but may also reduce the obligation ingroup members feel to support the verbal contributions of outgroup members.
- Award ID(s): 1813651
- PAR ID: 10284320
- Date Published:
- Journal Name: Frontiers in Psychology
- Volume: 11
- ISSN: 1664-1078
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
- This paper presents preliminary research on whether children will accept a robot as part of their ingroup, and on how a robot's group membership affects trust, closeness, and social support. Trust is important in human-robot interactions because it affects whether people will follow robots' advice. In this study, we randomly assigned 11- and 12-year-old participants to be either on a team with the robot (ingroup) or opponents of the robot (outgroup) in an online game. Thus far, we have eight participants in the ingroup condition. Our preliminary results showed that children had a low level of trust, closeness, and social support with the robot. Participants had a much more negative response than we anticipated. We speculate that responses will be more positive in an in-person setting than in a remote one.
- Decades of research in social identity have shown that people instinctively hold positive attitudes towards ingroup members and negative attitudes towards outgroup members. However, it remains unclear how people respond to individuals explicitly identified with both one's ingroup and outgroup. We propose that when people are exposed to dual-identified individuals and groups (e.g., Muslim-Americans explicitly identifying with both their Muslim and American identities), intergroup attitudes will improve, driven more by the ingroup component (American), despite the presence of the outgroup component (Muslim). Moreover, we suggest exposure to dual-identification can also improve attitudes toward the broader outgroup (Muslims more generally), a phenomenon called the gateway-group effect. To test these hypotheses, we created a new measure of dual-identification and conducted three studies involving both Muslim-Americans and Mexican-Americans. Results confirmed that exposure to explicitly dual-identified groups improved attitudes towards the dual-identified group (e.g., Mexican-Americans) as well as toward the respective outgroup (e.g., Mexicans).
- Humans behave more prosocially toward ingroup (vs. outgroup) members. This preregistered research examined the influence of God concepts and memories of past behavior on prosociality toward outgroups. In Study 1 (n = 573), participants recalled their past kind or mean behavior (between-subjects) directed toward an outgroup. Subsequently, they completed a questionnaire assessing their views of God. Our dependent measure was the number of lottery entries given to another outgroup member. Participants who recalled their kind (vs. mean) behavior perceived God as more benevolent, which in turn predicted more generous allocation to the outgroup (vs. ingroup). Study 2 (n = 281) examined the causal relation by manipulating God concepts (benevolent vs. punitive). We found that not only recalling kind behaviors but also perceiving God as benevolent increased outgroup generosity. The current research extends work on morality, religion, and intergroup relations by showing that benevolent God concepts and memories of past kind behaviors jointly increase outgroup generosity.
- Two experiments examined the polarization of public support for COVID-19 policies due to people’s (lack of) trust in political leaders and nonpartisan experts. In diverse samples in the United States (Experiment 1; N = 1,802) and the United Kingdom (Experiment 2; N = 1,825), participants evaluated COVID-19 policies that were framed as proposed by ingroup political leaders, outgroup political leaders, nonpartisan experts, or, in the United States, a bipartisan group of political leaders. At the time of the study in April 2020, COVID-19 was an unfamiliar and shared threat, so there were theoretical reasons to expect that attitudes toward COVID-19 policy might not have been politically polarized. Yet our results demonstrated that even relatively early in the pandemic, people supported policies from ingroup political leaders more than the same policies from outgroup leaders, extending prior research on how people align their policy stances with political elites from their own parties. People also trusted experts and ingroup political leaders more than they did outgroup political leaders. Partly because of this polarized trust, policies from experts and bipartisan groups were more widely supported than policies from ingroup political leaders. These results illustrate the potentially detrimental role political leaders may play, and the potential for effective leadership by bipartisan groups and nonpartisan experts, in shaping public policy attitudes during crises.