

Title: The Impact of Governance Bots on Sense of Virtual Community: Development and Validation of the GOV-BOTs Scale

Bots are increasingly being used for governance-related purposes in online communities, yet no instrumentation exists for measuring how users assess their beneficial or detrimental impacts. In order to support future human-centered and community-based research, we developed a new scale called GOVernance Bots in Online communiTies (GOV-BOTs) across two rounds of surveys on Reddit (N=820). We applied rigorous psychometric criteria to demonstrate the validity of GOV-BOTs, which contains two subscales: bot governance (4 items) and bot tensions (3 items). Whereas humans have historically expected communities to be composed entirely of humans, the social participation of bots as non-human agents now raises fundamental questions about psychological, philosophical, and ethical implications. Addressing psychological impacts, our data show that perceptions of effective bot governance positively contribute to users' sense of virtual community (SOVC), whereas perceived bot tensions may only impact SOVC if users are more aware of bots. Finally, we show that users tend to experience the greatest SOVC across groups of subreddits, rather than individual subreddits, suggesting that future research should carefully re-consider uses and operationalizations of the term community.
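The abstract notes that the two GOV-BOTs subscales were validated against rigorous psychometric criteria. As a purely illustrative aside (not the authors' code, and with invented data), the internal consistency of a multi-item subscale such as the 4-item bot-governance subscale is often summarized with Cronbach's alpha:

```python
# Illustrative sketch only: Cronbach's alpha for a multi-item subscale.
# Item names and responses are invented, not taken from the GOV-BOTs study.
def cronbach_alpha(items):
    """items: one list of responses per scale item (same participant order)."""
    k = len(items)          # number of items in the subscale
    n = len(items[0])       # number of participants

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each participant's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy Likert (1-5) responses from 5 participants to a 4-item subscale.
responses = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
    [4, 4, 3, 4, 5],
]
print(round(cronbach_alpha(responses), 3))  # -> 0.907
```

Values near or above 0.7 are conventionally read as acceptable internal consistency, though full scale validation also involves factor-analytic evidence of the kind the abstract describes.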

 
Award ID(s):
1910225
NSF-PAR ID:
10469332
Publisher / Repository:
Association for Computing Machinery
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
6
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 30
Sponsoring Org:
National Science Foundation
More Like this
  1. Bots in online social networks can be used for good or bad, but their presence is unavoidable and will increase in the future. To investigate how the interaction networks of bots and humans evolve, we created six social bots on Twitter with AI language models and let them carry out standard user operations. Three different strategies were implemented for the bots: a trend-targeting strategy (TTS), a keywords-targeting strategy (KTS), and a user-targeting strategy (UTS). We examined interaction patterns such as user targeting, message spreading, relationship propagation, and engagement. We focused on the emergent local structures, or motifs, and found that the strategies of the social bots had a significant impact on them. Motifs resulting from interactions with bots following TTS or KTS are simple and show significant overlap, while interactions with UTS-governed bots lead to more complex motifs. These findings provide insights into human-bot interaction patterns in online social networks, and can be used to develop more effective bots for beneficial tasks and to combat malicious actors.
  2. Making online social communities ‘better’ is a challenging undertaking, as online communities are extraordinarily varied in their size, topical focus, and governance. As such, what is valued by one community may not be valued by another. However, community values are challenging to measure as they are rarely explicitly stated. In this work, we measure community values through the first large-scale survey of community values, including 2,769 reddit users in 2,151 unique subreddits. Through a combination of survey responses and a quantitative analysis of publicly available reddit data, we characterize how these values vary within and across communities. Amongst other findings, we show that community members disagree about how safe their communities are, that longstanding communities place 30.1% more importance on trustworthiness than newer communities, and that community moderators want their communities to be 56.7% less democratic than non-moderator community members. These findings have important implications, including suggesting that care must be taken to protect vulnerable community members, and that participatory governance strategies may be difficult to implement. Accurate and scalable modeling of community values enables research and governance which is tuned to each community's different values. To this end, we demonstrate that a small number of automatically quantifiable features capture a significant yet limited amount of the variation in values between communities, with a ROC AUC of 0.667 on a binary classification task. However, substantial variation remains, and modeling community values remains an important topic for future work. We make our models and data public to inform community design and governance.
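As an illustrative aside (not code or data from the study above), the ROC AUC reported for that binary classification task can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch, with invented labels and scores, computes it directly from that rank-based definition:

```python
# Illustrative sketch only: ROC AUC via the rank-sum (Mann-Whitney) formulation.
# Labels and scores below are invented toy data, not from the study above.
def roc_auc(labels, scores):
    """ROC AUC = P(score of a random positive > score of a random negative),
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher score = classifier predicts the community holds the value.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))  # -> 0.888...
```

On this reading, the study's AUC of 0.667 means the features rank a value-holding community above a non-value-holding one about two-thirds of the time, which is meaningfully better than the chance level of 0.5 but, as the authors note, leaves substantial variation unexplained.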
  3.
    Adopting new technology is challenging for the volunteer moderation teams of online communities, and these challenges are aggravated as communities grow in size. In a prior qualitative study, Kiene et al. found evidence that moderator teams adapted to such challenges by drawing on their experience with other technological platforms to guide the creation and adoption of innovative custom moderation "bots." In this study, we test three hypotheses on the social correlates of user-innovated bot usage drawn from that qualitative study. We find strong evidence of the proposed relationship between community size and the use of user-innovated bots. Although previous work suggests that smaller teams of moderators will be more likely to use these bots, and that users with experience moderating on the previous platform will be more likely to do so, we find little evidence in support of either proposition.
  4.

    Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare over networks when it ignores the specific network mechanisms of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally; and stingy bots, which share no resources at all. We also manipulate the bots' network position. We show that reciprocal bots produce little change in the unequal distribution of resources among people. On the other hand, stingy bots balance structural power and improve collective welfare in human groups when placed in a specific network position, even though they bestow no wealth on people. Our findings highlight the need to incorporate the human norms of reciprocity and relational interdependence when designing machine behavior in sharing networks. Conscientious machines do not always work for human welfare; their effect depends on the network structure in which they interact.

     
  5. Information manipulation is widespread in today's media environment. Online networks have disrupted the gatekeeping role of traditional media by allowing various actors to influence the public agenda; they have also allowed automated accounts (or bots) to blend with human activity in the flow of information. Here, we assess the impact that bots had on the dissemination of content during two contentious political events that evolved in real time on social media. We focus on events of heightened political tension because they are particularly susceptible to information campaigns designed to mislead or exacerbate conflict. We compare the visibility of bots with human accounts, verified accounts, and mainstream news outlets. Our analyses combine millions of posts from a popular microblogging platform with web-tracking data collected from two different countries and timeframes. We employ tools from network science, natural language processing, and machine learning to analyze the diffusion structure, the content of the messages diffused, and the actors behind those messages as the political events unfolded. We show that verified accounts are significantly more visible than unverified bots in the coverage of the events, but also that bots attract more attention than human accounts. Our findings highlight that social media and the web are very different news ecosystems in terms of prevalent news sources, and that both humans and bots contribute, through their activity, to generating discrepancies in news visibility.