Title: "We Care About the Internet; We Care About Everything" Understanding Social Media Content Moderators' Mental Models and Support Needs
Despite the growing prevalence of ML algorithms, NLP, algorithmically driven content recommender systems, and other computational mechanisms on social media platforms, some core and mission-critical functions remain deeply reliant on the persistence of humans-in-the-loop, both to validate the computational models in use and to intervene when those models fail. Perhaps nowhere is this human interaction with, and on behalf of, computation more key than in social media content moderation, where human capacities for discretion, discernment, and the holding of complex mental models of decision trees and changing policy are called upon hundreds, if not thousands, of times per day. This paper presents findings from a larger qualitative, interview-based study of an in-house content moderation team (Trust & Safety, or T&S) at a mid-size, erstwhile niche social platform we call FanBase. Findings indicate that while the FanBase T&S team is treated well in terms of support from managers, respect and support from the wider company, and mental health services provided (particularly in comparison to other social media companies), the work of content moderation remains an extremely taxing form of labor that is not adequately compensated or supported.
Award ID(s): 1928286
PAR ID: 10455915
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the 56th Hawaii International Conference on System Sciences
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Moderating content on social media can lead to severe psychological distress. However, little is known about the type, severity, and consequences of distress experienced by volunteer content moderators (VCMs), who do this work voluntarily. We present results from a survey that investigated why Facebook Group and subreddit VCMs quit, and whether reasons for quitting are correlated with psychological distress, demographics, and/or community characteristics. We found that VCMs are likely to experience psychological distress that stems from struggles with other moderators, moderation team leads’ harmful behaviors, and having too little available time, and these experiences of distress relate to their reasons for quitting. While substantial research has focused on making the task of detecting and assessing toxic content easier or less distressing for moderation workers, our study shows that social interventions for VCM workers, for example, to support them in navigating interpersonal conflict with other moderators, may be necessary. 
  2. Social media provide a fertile ground where conspiracy theories and radical ideas can flourish, reach broad audiences, and sometimes lead to hate or violence beyond the online world itself. QAnon represents a notable example of a political conspiracy that started out on social media but turned mainstream, in part due to public endorsement by influential political figures. Nowadays, QAnon conspiracies often appear in the news, are part of political rhetoric, and are espoused by significant swaths of people in the United States. It is therefore crucial to understand how such a conspiracy took root online, and what led so many social media users to adopt its ideas. In this work, we propose a framework that exploits both social interaction and content signals to uncover evidence of user radicalization or support for QAnon. Leveraging a large dataset of 240M tweets collected in the run-up to the 2020 US Presidential election, we define and validate a multivariate metric of radicalization. We use it to separate users into distinct, naturally emerging classes of behaviors associated with radicalization processes, from self-declared QAnon supporters to hyper-active conspiracy promoters. We also analyze the impact of Twitter's moderation policies on the interactions among different classes: we discover aspects of moderation that succeed, yielding a substantial reduction in the endorsement received by hyperactive QAnon accounts. But we also uncover where moderation fails, showing how QAnon content amplifiers are not deterred or affected by the Twitter intervention. Our findings refine our understanding of online radicalization processes, reveal effective and ineffective aspects of moderation, and highlight the need to further investigate the role social media play in the spread of conspiracies.
  3. Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts suspended, the processes governing content moderation are largely invisible, making assessing content moderation bias difficult. To study this bias, we conducted a digital ethnography of marginalized users on Reddit's /r/FTM subreddit and Twitch's "Just Chatting" and "Pools, Hot Tubs, and Beaches" categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, "unban appeal" streams, and "back from my ban" streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, at times harming those who must see offensive content and at other times allowing for increased platform accountability.
  4. Social media users have long been aware of opaque content moderation systems and how they shape platform environments. On TikTok, creators increasingly use algospeak to circumvent unjust content restriction; that is, they change or invent words to prevent TikTok's content moderation algorithm from banning their videos (e.g., "le$bean" for "lesbian"). We interviewed 19 TikTok creators about their motivations and practices of using algospeak in relation to their experience with TikTok's content moderation. Participants largely anticipated how TikTok's algorithm would read their videos, and used algospeak to evade unjustified content moderation while simultaneously ensuring target audiences can still find their videos. We identify non-contextuality, randomness, inaccuracy, and bias against marginalized communities as major issues regarding freedom of expression, equality of subjects, and support for communities of interest. Drawing on these findings about algospeak, we argue for a need to improve contextually informed content moderation to valorize marginalized and tabooed audiovisual content on social media.
  5. Social media users create folk theories to help explain how elements of social media operate. Marginalized social media users face disproportionate content moderation and removal on social media platforms. We conducted a qualitative interview study (n = 24) to understand how marginalized social media users may create folk theories in response to content moderation and their perceptions of platforms’ spirit, and how these theories may relate to their marginalized identities. We found that marginalized social media users develop folk theories informed by their perceptions of platforms’ spirit to explain instances where their content was moderated in ways that violate their perceptions of how content moderation should work in practice. These folk theories typically address content being removed despite not violating community guidelines, along with bias against marginalized users embedded in guidelines. We provide implications for platforms, such as using marginalized users’ folk theories as tools to identify elements of platform moderation systems that function incorrectly and disproportionately impact marginalized users. 