Title: "Give Everybody [..] a Little Bit More Equity": Content Creator Perspectives and Responses to the Algorithmic Demonetization of Content Associated with Disadvantaged Groups
Algorithmic systems help manage the governance of digital platforms featuring user-generated content, including how money is distributed to creators from the profits a platform earns from advertising on this content. However, creators producing content about disadvantaged populations have reported that these systems are biased: the systems associated their content with prohibited or unsafe content, leading to what creators believed were error-prone decisions to demonetize their videos. Motivated by these reports, we present the results of 20 interviews with YouTube creators and a content analysis of videos, tweets, and news coverage of demonetization cases to understand YouTubers' perceptions of demonetization affecting videos featuring disadvantaged or vulnerable populations, how creators responded to demonetization, and what kinds of tools and infrastructure support they desired. We found creators had concerns about YouTube's algorithmic system stereotyping content featuring vulnerable demographics in harmful ways, for example by labeling it "unsafe" for children or families; creators believed these demonetization errors led to a range of economic, social, and personal harms. To provide more context to these findings, we analyzed and report on the technique a few creators used to audit YouTube's algorithms to learn what could cause the demonetization of videos featuring LGBTQ people, culture, and/or social issues. In response to the varying beliefs about the causes and harms of demonetization errors, we found our interviewees wanted more reliable information and statistics about demonetization cases and errors, more control over their content and advertising, and better economic security.
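To make the audit technique mentioned above concrete, the sketch below shows the general shape of such a word-level probe: upload near-identical test videos whose titles differ only in a single probe word, then record which versions the platform marks as having limited ads. This is a minimal illustration under stated assumptions, not the creators' actual tooling; upload_test_video and get_monetization_status are hypothetical stand-ins (simulated here so the sketch runs) for steps creators performed manually in YouTube Studio, and the probe words are examples.

    # Minimal sketch of the word-level demonetization audit described above.
    # Both helper functions are hypothetical stand-ins, simulated so the
    # sketch runs; creators performed these steps manually in YouTube Studio.
    import csv
    import random

    TEST_WORDS = ["lesbian", "gay", "transgender", "happy", "friend"]  # example probes
    TITLE_TEMPLATE = "{word} vlog"  # metadata identical except for the probe word

    def upload_test_video(title: str) -> str:
        # Stand-in for the real upload step; returns a fake video ID.
        return f"vid_{abs(hash(title)) % 10**8}"

    def get_monetization_status(video_id: str) -> str:
        # Stand-in for reading the per-video monetization icon
        # ("full ads" vs. "limited or no ads"); randomized here.
        return random.choice(["full_ads", "limited_ads"])

    def run_audit(words, outfile="audit_results.csv"):
        # Probe each word once and log the resulting monetization status.
        with open(outfile, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["probe_word", "status"])
            for word in words:
                video_id = upload_test_video(TITLE_TEMPLATE.format(word=word))
                writer.writerow([word, get_monetization_status(video_id)])

    run_audit(TEST_WORDS)

Holding everything constant except one word is what lets such an audit attribute a limited-ads decision to that word; a real audit would need many repetitions per word to separate signal from the classifier's noise.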
Award ID(s):
2040942
PAR ID:
10391666
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
6
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 37
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) in content moderation and follows a retributive justice logic to punish convicted YouTubers through demonetization, a penalty that limits or deprives them of advertisements (ads), reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing from the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with such challenges from algorithmic moderation by sharing and applying practical knowledge they learned about moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to necessary post-punishment support as a form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems. 
  2. Content creators with marginalized identities are disproportionately affected by shadowbanning on social media platforms, which impacts their economic prospects online. Through a diary study and interviews with eight marginalized content creators who are women, pole dancers, plus size, and/or LGBTQIA+, this paper examines how content creators with marginalized identities experience shadowbanning. We highlight the labor and economic inequalities of shadowbanning, and the resulting invisible online labor that marginalized creators often must perform. We identify three types of invisible labor that marginalized content creators engage in to mitigate shadowbanning and sustain their online presence: mental and emotional labor, misdirected labor, and community labor. We conclude that even though marginalized content creators engaged in cross-platform collaborative labor and personal mental/emotional labor to mitigate the impacts of shadowbanning, it was insufficient to prevent uncertainty and economic precarity created by algorithmic opacity and ambiguity. 
  3. Social media users have long been aware of opaque content moderation systems and how they shape platform environments. On TikTok, creators increasingly use algospeak to circumvent what they see as unjust content restriction: they change or invent words to prevent TikTok's content moderation algorithm from banning their videos (e.g., "le$bean" for "lesbian"). We interviewed 19 TikTok creators about their motivations for and practices of using algospeak in relation to their experiences with TikTok's content moderation. Participants largely anticipated how TikTok's algorithm would read their videos, and used algospeak to evade unjustified content moderation while still ensuring that target audiences could find their videos. We identify non-contextuality, randomness, inaccuracy, and bias against marginalized communities as major issues regarding freedom of expression, equality of subjects, and support for communities of interest. Drawing on these findings, we argue for a need to improve contextually informed content moderation to valorize marginalized and tabooed audiovisual content on social media. (A rough sketch of the algospeak mechanics appears after this item.)
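As a rough illustration of the mechanics described in (3), and not of any participant's actual practice, algospeak amounts to a surface-level rewrite of flagged terms before posting. In the sketch below, "le$bean" is the paper's own example; the other entries are commonly reported substitutions included here as assumptions.

    # Minimal sketch of algospeak as surface-level word substitution.
    # "le$bean" comes from the paper's example; the other mappings are
    # commonly reported substitutions, included as assumptions.
    ALGOSPEAK = {
        "lesbian": "le$bean",
        "dead": "unalive",
        "sex": "seggs",
    }

    def to_algospeak(caption: str) -> str:
        # Rewrite each flagged term before posting so an automated
        # moderation filter keyed on exact words fails to match it.
        for word, coded in ALGOSPEAK.items():
            caption = caption.replace(word, coded)
        return caption

    print(to_algospeak("lesbian creators talk about sex education"))
    # -> "le$bean creators talk about seggs education"

The point of the rewrite is to stay legible to human audiences while defeating exact-match keyword filters; moderation models can of course learn the coded forms over time, which is consistent with the randomness and inaccuracy the paper reports.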
  4. YouTube is the world's most widely used video platform, with over 70% of content viewed through algorithmic recommendations. While prior audits have examined polarization in YouTube's long-form video recommendations, the platform's fast-growing Shorts feature remains understudied. In this paper, we present the first large-scale audit comparing political content exposure and engagement dynamics across short-form and long-form videos on YouTube. We design a matched audit based on the insight that many news media organizations publish both short and long versions of the same content, and collect 50,000 pairs of long-form and short-form video recommendations from both political and nonpolitical seed videos. We analyze recommendations along several dimensions: the frequency of political recommendations, the diversity of retrieved videos, the engagement those videos receive, and finally, the partisan alignment between recommended videos and seed videos. Our results highlight fundamental differences between the two algorithms, which we hope can inform future research analyzing the impact of YouTube recommendations. (A sketch of the matched comparison appears after this item.)
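As a rough illustration of the matched design in (4), the sketch below compares how often the Shorts and long-form recommenders surface political content for the same matched seed pairs. It assumes recommendation lists have already been collected and that each recommended video carries a political/nonpolitical label; all names, data structures, and numbers here are invented for the example, not taken from the paper's pipeline.

    # Sketch of the comparison step in a matched short-/long-form audit.
    # Assumes pre-collected recommendation lists and per-video political
    # labels; everything here is illustrative.
    from dataclasses import dataclass

    @dataclass
    class SeedPair:
        shorts_recs: list[str]    # video IDs recommended from the Shorts version
        longform_recs: list[str]  # video IDs recommended from the long-form version

    def political_rate(recs: list[str], is_political: dict[str, bool]) -> float:
        # Fraction of recommended videos labeled political.
        return sum(is_political.get(v, False) for v in recs) / max(len(recs), 1)

    def compare(pairs: list[SeedPair], is_political: dict[str, bool]) -> tuple[float, float]:
        # Average political-recommendation rate per condition over matched pairs.
        shorts = sum(political_rate(p.shorts_recs, is_political) for p in pairs) / len(pairs)
        longform = sum(political_rate(p.longform_recs, is_political) for p in pairs) / len(pairs)
        return shorts, longform

    labels = {"a": True, "b": False, "c": True}
    pairs = [SeedPair(shorts_recs=["a", "b"], longform_recs=["b", "c"])]
    print(compare(pairs, labels))  # -> (0.5, 0.5)

Matching each Shorts seed to a long-form seed with the same underlying content is what allows a difference in these rates to be attributed to the recommender rather than to the seed videos themselves.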
  5. How social media platforms could fairly conduct content moderation is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors could affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking or elaborating on the formation processes of users' perceived (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness, including equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon the findings, we discuss how our participants' fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization. 