Title: After Violation But Before Sanction: Understanding Volunteer Moderators' Profiling Processes Toward Violators in Live Streaming Communities
Content moderation is an essential part of online community health and governance. While much of the extant research centers on what happens to the content, moderation also involves the management of violators. This study focuses on how moderators (mods) decide on their actions after a violation takes place but before the sanction, by examining how they "profile" violators. Through observations and interviews with volunteer mods on Twitch, we found that mods engage in a complex process of collaborative evidence collection and profile violators into different categories to decide the type and extent of punishment. Mods consider violators' characteristics as well as their behavioral history and the violation context before taking moderation action. The main purpose of profiling was to avoid excessive punishment and to better integrate violators into the community. We discuss the contributions of profiling to moderation practice and suggest design mechanisms to facilitate mods' profiling processes.
Award ID(s):
1928627
NSF-PAR ID:
10383983
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
5
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 25
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Volunteer moderators (mods) play significant roles in developing moderation standards and dealing with harmful content in their micro-communities. However, little work explores how volunteer mods work as a team. In line with prior work on volunteer moderation, we interview 40 volunteer mods on Twitch, a leading live streaming platform. We identify how mods collaborate on tasks (off-stream coordination and preparation, in-stream real-time collaboration, and relationship building both off-stream and in-stream to reinforce collaboration) and how mods contribute to moderation standards (collaboratively working on the community rulebook and individually shaping community norms). We uncover how volunteer mods work as an effective team. We also discuss how the affordances of multi-modal communication and the informality of volunteer moderation contribute to task collaboration, standards development, and mods' roles and responsibilities.
  2. Content moderation is a crucial aspect of online platforms, and it requires human moderators (mods) to repeatedly review and remove harmful content. However, this moderation process can lead to cognitive overload and emotional labor for mods. As new platforms and designs emerge, such as live streaming spaces, new challenges arise from the real-time nature of the interactions. In this study, we examined the use of ignoring as a moderation strategy by interviewing 19 Twitch mods. Our findings indicated that ignoring involves complex cognitive processes and significant invisible labor in the decision-making process. Additionally, we found that ignoring is an essential component of real-time moderation. These preliminary findings suggest that ignoring has the potential to be a valuable moderation strategy in future interactive systems, which highlights the need to design better support for ignoring in interactive live-streaming systems.
  3. To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) in content moderation and follows a retributive justice logic to punish convicted YouTubers through demonetization, a penalty that limits or deprives them of advertisements (ads), reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing from the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with such challenges from algorithmic moderation by sharing and applying practical knowledge they learned about moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to necessary post-punishment support as a form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems. 
  4. Cultural evolution researchers still debate whether humans are unique among species in having social norms, i.e. moralized, group-specific, socially learned, shared understandings of the rules by which social life should be conducted, maintained via moral emotions that inspire impartial third parties to punish violators of these rules. I sought to establish what behaviors spark outrage in capuchins by recording the details of social context whenever a capuchin aggressed against or screamed at another monkey. Food theft, certain types of sexual interaction, and branch-breaking displays were situations that elicited outrage often enough to warrant documentation of which other monkeys witnessed these events, and how they responded. Three decades of long-term data on ten groups were used to measure degree of maternal kinship and relationship quality (using focal follow data and ad libitum data) between the bystander monkeys and the monkeys involved in the putative norm violation. This population fails to meet three of my operational criteria for social norms: (1) There is very little between-group variation in the patterning of social behaviors relevant to the putative social rules identified. (2) The rate at which third party bystanders aggress against putative norm violators is low (0.6-7.0%). (3) Using a logistic regression modeling framework, the most salient predictor of whether third party bystanders punish putative rule violators is the quality of bystanders’ relationships with those violators, suggesting that bystander behavior is driven more by grudge-holding against particular individuals with whom they have poor-quality relationships than by altruistic enforcement of a group-wide behavioral standard. 
  5. Many online communities rely on postpublication moderation, where contributors, even those perceived as risky, are allowed to publish material immediately and moderation takes place after the fact. An alternative arrangement involves moderating content before publication. A range of communities have argued against prepublication moderation by suggesting that it makes contributing less enjoyable for new members and that it distracts established community members with extra moderation work. We present an empirical analysis of the effects of a prepublication moderation system called FlaggedRevs that was deployed by several Wikipedia language editions. We used panel data from 17 large Wikipedia editions to test a series of hypotheses related to the system's effects on activity levels and contribution quality. We found that the system was very effective at keeping low-quality contributions from ever becoming visible. Although there is some evidence that the system discouraged participation among users without accounts, our analysis suggests that its effects on contribution volume and quality were moderate at most. Our findings imply that concerns regarding major negative effects of prepublication moderation systems on contribution quality and project productivity may be overstated.
