Bots have become critical for managing online communities, especially to match the increasing technical sophistication of online harms. However, community leaders often adopt third-party bots, creating room for misalignment in their assumptions, expectations, and understandings of those bots (i.e., their technological frames). On platforms where sharing bots can be extremely valuable, it is unclear how community leaders can revise their frames about bots to adopt them more effectively. In this work, we conducted a qualitative interview study with 16 community leaders on Discord, examining how they adopt third-party bots. We found that participants addressed challenges stemming from uncertainties about a bot's security, reliability, and fit through emergent social ecosystems. Formal and informal opportunities to discuss bots with others across communities enabled participants to revise their technological frames over time, closing gaps in bot-specific skills and knowledge. This social process of learning shifted participants' perspectives on the labor of bot adoption, turning it into something satisfying and fun, and underscoring the value of collaborative and communal approaches to adopting bots. Finally, by shaping participants' mental models of the nature, value, and use of bots, social ecosystems also raise practical tensions in how they support user creativity and customization in third-party bot use. Together, the social nature of adopting third-party bots in our interviews offers insight into how the sharing of valuable user-facing tools across online communities can be better supported.
The Impact of Governance Bots on Sense of Virtual Community: Development and Validation of the GOV-BOTs Scale
            Bots are increasingly being used for governance-related purposes in online communities, yet no instrumentation exists for measuring how users assess their beneficial or detrimental impacts. In order to support future human-centered and community-based research, we developed a new scale called GOVernance Bots in Online communiTies (GOV-BOTs) across two rounds of surveys on Reddit (N=820). We applied rigorous psychometric criteria to demonstrate the validity of GOV-BOTs, which contains two subscales: bot governance (4 items) and bot tensions (3 items). Whereas humans have historically expected communities to be composed entirely of humans, the social participation of bots as non-human agents now raises fundamental questions about psychological, philosophical, and ethical implications. Addressing psychological impacts, our data show that perceptions of effective bot governance positively contribute to users' sense of virtual community (SOVC), whereas perceived bot tensions may only impact SOVC if users are more aware of bots. Finally, we show that users tend to experience the greatest SOVC across groups of subreddits, rather than individual subreddits, suggesting that future research should carefully re-consider uses and operationalizations of the term community. 
- Award ID(s): 1910225
- PAR ID: 10469332
- Publisher / Repository: Association for Computing Machinery
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 6
- Issue: CSCW2
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 30
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
            Bots in online social networks can be used for good or bad, but their presence is unavoidable and will increase in the future. To investigate how the interaction networks of bots and humans evolve, we created six social bots on Twitter with AI language models and let them carry out standard user operations. Three different strategies were implemented for the bots: a trend-targeting strategy (TTS), a keywords-targeting strategy (KTS), and a user-targeting strategy (UTS). We examined interaction patterns such as targeting users, spreading messages, propagating relationships, and engagement. We focused on the emergent local structures, or motifs, and found that the strategies of the social bots had a significant impact on them. Motifs resulting from interactions with bots following TTS or KTS are simple and show significant overlap, while interactions with UTS-governed bots lead to more complex motifs. These findings provide insights into human-bot interaction patterns in online social networks and can be used to develop more effective bots for beneficial tasks and to combat malicious actors.
- 
            Adopting new technology is challenging for the volunteer moderation teams of online communities, and these challenges are aggravated as communities grow in size. In a prior qualitative study, Kiene et al. found evidence that moderation teams adapted to such challenges by drawing on their experience with other technological platforms to guide the creation and adoption of innovative custom moderation "bots." In this study, we test three hypotheses on the social correlates of user-innovated bot usage drawn from that qualitative work. We find strong evidence of the proposed relationship between community size and the use of user-innovated bots. Although previous work suggests that smaller teams of moderators will be more likely to use these bots, and that users with experience moderating on the previous platform will be more likely to do so, we find little evidence in support of either proposition.
- 
            Bots are playing an increasingly important role in the creation of knowledge in Wikipedia. In many cases, editors and bots form tightly knit teams. Humans develop bots, argue for their approval, and maintain them, performing tasks such as monitoring activity, merging similar bots, splitting complex bots, and turning off malfunctioning bots. Yet this is not the entire picture. Bots are designed to perform certain functions and can acquire new functionality over time. They play particular roles in the editing process. Understanding these roles is an important step towards understanding the ecosystem, and designing better bots and interfaces between bots and humans. This is important for understanding Wikipedia along with other kinds of work in which autonomous machines affect tasks performed by humans. In this study, we use unsupervised learning to build a nine-category taxonomy of bots based on their functions in English Wikipedia. We then build a multi-class classifier to classify 1,601 bots based on labeled data. We discuss different bot activities, including their edit frequency, their working spaces, and their software evolution. We use a model to investigate how bots playing certain roles will have differential effects on human editors. In particular, we build on previous research on newcomers by studying the relationship between the roles bots play, the interactions they have with newcomers, and the ensuing survival rate of the newcomers.
- 
            Making online social communities ‘better’ is a challenging undertaking, as online communities are extraordinarily varied in their size, topical focus, and governance. As such, what is valued by one community may not be valued by another. However, community values are challenging to measure as they are rarely explicitly stated. In this work, we measure community values through the first large-scale survey of community values, including 2,769 reddit users in 2,151 unique subreddits. Through a combination of survey responses and a quantitative analysis of publicly available reddit data, we characterize how these values vary within and across communities. Amongst other findings, we show that community members disagree about how safe their communities are, that longstanding communities place 30.1% more importance on trustworthiness than newer communities, and that community moderators want their communities to be 56.7% less democratic than non-moderator community members. These findings have important implications, including suggesting that care must be taken to protect vulnerable community members, and that participatory governance strategies may be difficult to implement. Accurate and scalable modeling of community values enables research and governance which is tuned to each community's different values. To this end, we demonstrate that a small number of automatically quantifiable features capture a significant yet limited amount of the variation in values between communities, with a ROC AUC of 0.667 on a binary classification task. However, substantial variation remains, and modeling community values remains an important topic for future work. We make our models and data public to inform community design and governance.