Bots and humans can combine in multiple ways in the service of knowledge production. Designers make choices about the bots' purpose, technical architecture, and initiative; that is, they decide on functions, mechanisms, and interfaces. Together, these dimensions suggest a design space for systems of bots and humans. Such systems are evaluated along several criteria: productivity; their effects on human editors, especially newcomers; and sustainability, that is, how they persist in the face of change. These design and evaluation spaces are described as part of an analysis of Wiki-related bots: two bots and their effects are discussed in detail, and an agenda for further research is suggested.
The Roles Bots Play in Wikipedia
Bots are playing an increasingly important role in the creation of knowledge in Wikipedia. In many cases, editors and bots form tightly knit teams. Humans develop bots, argue for their approval, and maintain them, performing tasks such as monitoring activity, merging similar bots, splitting complex bots, and turning off malfunctioning bots. Yet this is not the entire picture. Bots are designed to perform certain functions and can acquire new functionality over time. They play particular roles in the editing process. Understanding these roles is an important step towards understanding the ecosystem and designing better bots and interfaces between bots and humans. This matters for understanding Wikipedia, along with other kinds of work in which autonomous machines affect tasks performed by humans. In this study, we use unsupervised learning to build a nine-category taxonomy of bots based on their functions in English Wikipedia. We then build a multi-class classifier to classify 1,601 bots based on labeled data. We discuss different bot activities, including their edit frequency, their working spaces, and their software evolution. We use a model to investigate how bots playing certain roles have differential effects on human editors. In particular, we build on previous research on newcomers by studying the relationship between the roles bots play, the interactions they have with newcomers, and the ensuing survival rate of the newcomers.
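The two-step pipeline the abstract describes (an unsupervised taxonomy followed by a multi-class classifier) can be sketched as below. This is a minimal illustration, not the paper's method: the feature set, the use of k-means, and the random-forest classifier are all assumptions for the sake of the example, and the data here is synthetic.

```python
# Hypothetical sketch: derive a nine-category bot taxonomy by clustering
# activity features, then fit a multi-class classifier on those labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in features per bot (e.g., edit frequency, namespace shares);
# the real feature set is not specified here.
X = rng.random((1601, 6))

# Unsupervised step: nine role categories from the feature space.
roles = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(X)

# Supervised step: a classifier that assigns roles to (new) bots.
X_train, X_test, y_train, y_test = train_test_split(X, roles, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
```

In practice the clustering output would be manually validated before being used as training labels, since cluster identity alone does not guarantee a meaningful role.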
- PAR ID: 10602183
- Publisher / Repository: Association for Computing Machinery (ACM)
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 3
- Issue: CSCW
- ISSN: 2573-0142
- Format(s): Medium: X
- Size(s): p. 1-20
- Sponsoring Org: National Science Foundation
More Like this
- Structured data peer production (SDPP) platforms like Wikidata play an important role in knowledge production. Compared to traditional peer-production platforms like Wikipedia, data in Wikidata is more structured and is intended to be used by machines rather than (directly) by people; end-user interactions with Wikidata often happen through intermediary "invisible machines." Given this distinction, we wanted to understand Wikidata contributors' motivations and how they are affected by the usage invisibility caused by these machine intermediaries. Through an inductive thematic analysis of 15 interviews, we find that: (i) Wikidata editors take on two archetypes: Architects, who define the ontological infrastructure of Wikidata, and Masons, who build the database through data entry and editing; (ii) the structured nature of Wikidata reveals novel editor motivations, such as an innate drive for organizational work; (iii) most Wikidata editors have little understanding of how their contributions are used, which may demotivate some. We synthesize these insights to help guide the future design of SDPP platforms in supporting the engagement of different types of editors.
- Software bots are used by Open Source Software (OSS) projects to streamline the code review process. Interfacing between developers and automated services, code review bots report continuous integration failures, code quality checks, and code coverage. However, the impact of such bots on maintenance tasks has been neglected. In this paper, we study how project maintainers experience code review bots. We surveyed 127 maintainers about their expectations and their perception of the changes incurred by code review bots. Our findings reveal that the most frequent expectations include enhancing the feedback bots provide to developers, reducing the maintenance burden on developers, and enforcing code coverage. While maintainers report that bots satisfied their expectations, they also perceived unexpected effects, such as communication noise and newcomer dropout. Based on these results, we provide a series of implications for bot developers, as well as insights for future research.
- Citizen science projects face a dilemma in relying on volunteer contributions to achieve their scientific goals: providing volunteers with explicit training might increase the quality of contributions, but at the cost of losing the work done by newcomers during the training period, which for many is the only work they will ever contribute to the project. Drawing on cognitive-science research on how humans learn to classify images, we designed an approach that uses machine learning to guide the presentation of tasks to newcomers, helping them learn the image-classification task more quickly while still contributing to the work of the project. A Bayesian model for tracking volunteer learning is presented.
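A Bayesian tracker of volunteer skill could take the form of a Beta-Bernoulli update, where each graded classification shifts a Beta belief over the volunteer's accuracy. The abstract confirms only that a Bayesian model is used; this particular conjugate form is an assumption for illustration.

```python
# Hypothetical sketch: Beta-Bernoulli tracking of a volunteer's accuracy.
def update(alpha, beta, correct):
    """Update a Beta(alpha, beta) belief after one graded classification."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

alpha, beta = 1.0, 1.0  # uniform prior over accuracy
for outcome in [True, True, False, True]:  # graded task outcomes
    alpha, beta = update(alpha, beta, outcome)

posterior_mean = alpha / (alpha + beta)  # estimated accuracy ≈ 0.667
```

The posterior mean could then drive task selection, e.g. routing easier images to volunteers whose estimated accuracy is still low.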
- Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare over networks when it ignores the specific network mechanism of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bots' network position. We show that reciprocal bots do little to change the unequal distribution of resources among people. On the other hand, stingy bots balance structural power and improve collective welfare in human groups when placed in a specific network position, even though they bestow no wealth on people. Our findings highlight the need to incorporate the human tendency toward reciprocity and relational interdependence when designing machine behavior in sharing networks. Conscientious machines do not always work for human welfare; it depends on the network structure in which they interact.
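The two allocation policies contrasted above can be stated in a few lines of code. This is a toy rendering, not the experiment's implementation; the dictionary-of-gifts interface is an assumption.

```python
# Hypothetical sketch of the two bot allocation policies: a reciprocal bot
# mirrors what each neighbor shared with it; a stingy bot shares nothing.
def reciprocal_bot(received):
    """Return to each neighbor exactly what that neighbor shared."""
    return dict(received)

def stingy_bot(received):
    """Share nothing with any neighbor, regardless of what was received."""
    return {neighbor: 0 for neighbor in received}

gifts = {"A": 3, "B": 0, "C": 2}  # resources each neighbor shared with the bot
back_reciprocal = reciprocal_bot(gifts)
back_stingy = stingy_bot(gifts)
```

The experimental finding is that the second, seemingly antisocial policy can improve collective welfare when the bot occupies a structurally powerful network position.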