Title: "How advertiser-friendly is my video?": YouTuber's Socioeconomic Interactions with Algorithmic Content Moderation
To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) for content moderation and follows a retributive justice logic to punish rule-violating YouTubers through demonetization, a penalty that limits or deprives them of advertisements (ads), reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing on the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with these challenges by sharing and applying practical knowledge they had learned about moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to necessary post-punishment support as a form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems.
Award ID(s): 2006854
NSF-PAR ID: 10337609
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the ACM on Human-Computer Interaction
Volume: 5
Issue: CSCW2
ISSN: 2573-0142
Page Range / eLocation ID: 1 to 25
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. How social media platforms can conduct content moderation fairly is gaining attention from society at large. Researchers in HCI and CSCW have investigated whether certain factors affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking how users form perceptions of (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness: equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon the findings, we discuss how our participants' fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization.
  2. Most social media platforms implement content moderation to address interpersonal harms such as harassment. Content moderation relies on offender-centered, punitive approaches, e.g., bans and content removal. We consider an alternative justice framework, restorative justice, which aids victims in healing, supports offenders in repairing the harm, and engages community members in addressing the harm collectively. To assess the utility of restorative justice in addressing online harm, we interviewed 23 users from Overwatch gaming communities, including moderators, victims, and offenders; such communities are particularly susceptible to harm, with nearly three quarters of all online game players suffering from some form of online abuse. We study how the communities currently handle harm cases through the lens of restorative justice and examine their attitudes toward implementing restorative justice processes. Our analysis reveals that cultural, technical, and resource-related obstacles hinder implementation of restorative justice within the existing punitive framework despite online community needs and existing structures to support it. We discuss how current content moderation systems can embed restorative justice goals and practices and overcome these challenges. 
  3. Online volunteers are an uncompensated yet valuable labor force for many social platforms. For example, volunteer content moderators perform a vast amount of labor to maintain online communities. However, as social platforms like Reddit favor revenue generation and user engagement, moderators are under-supported to manage the expansion of online communities. To preserve these online communities, developers and researchers of social platforms must account for and support as much of this labor as possible. In this paper, we quantitatively characterize the publicly visible and invisible actions taken by moderators on Reddit, using a unique dataset of private moderator logs for 126 subreddits and over 900 moderators. Our analysis of this dataset reveals the heterogeneity of moderation work across both communities and moderators. Moreover, we find that analyzing only visible work – the dominant way that moderation work has been studied thus far – drastically underestimates the amount of human moderation labor on a subreddit. We discuss the implications of our results on content moderation research and social platforms. 
  4. Despite the growing prevalence of ML algorithms, NLP, algorithmically-driven content recommender systems, and other computational mechanisms on social media platforms, some core and mission-critical functions are nonetheless deeply reliant on the persistence of humans-in-the-loop to both validate computational models in use and to intervene when those models fail. Perhaps nowhere is this human interaction with/on behalf of computation more key than in social media content moderation, where human capacities for discretion, discernment, and the holding of complex mental models of decision trees and changing policy are called upon hundreds, if not thousands, of times per day. This paper presents findings related to a larger qualitative, interview-based study of an in-house content moderation team (Trust & Safety, or T&S) at a mid-size, erstwhile niche social platform we call FanBase. Findings indicate that while the FanBase T&S team is treated well in terms of support from managers, respect and support from the wider company, and mental health services provided (particularly in comparison to other social media companies), the work of content moderation remains an extremely taxing form of labor that is not adequately compensated or supported.
  5. Transparency matters a lot to people who experience moderation on online platforms; much CSCW research has viewed offering explanations as one of the primary solutions to enhance moderation transparency. However, relatively little attention has been paid to unpacking what transparency entails in moderation design, especially for content creators. We interviewed 28 YouTubers to understand their moderation experiences and analyze the dimensions of moderation transparency. We identified four primary dimensions: participants desired the moderation system to present moderation decisions saliently, explain the decisions profoundly, afford effective communication with users, and offer repair and learning opportunities. We discuss how these four dimensions are mutually constitutive and conditioned in the context of creator moderation, where the target of governance mechanisms extends beyond the content to creator careers. We then elaborate on how a dynamic transparency perspective could value content creators' digital labor, how transparency design could support creators' learning, as well as implications for transparency design on other creator platforms.