
Title: You Can (Not) Say What You Want: Using Algospeak to Contest and Evade Algorithmic Content Moderation on TikTok

Social media users have long been aware of opaque content moderation systems and how they shape platform environments. On TikTok, creators increasingly utilize algospeak to circumvent unjust content restriction, meaning they change or invent words to prevent TikTok’s content moderation algorithm from banning their videos (e.g., “le$bean” for “lesbian”). We interviewed 19 TikTok creators about their motivations for and practices of using algospeak in relation to their experiences with TikTok’s content moderation. Participants largely anticipated how TikTok’s algorithm would read their videos and used algospeak to evade unjustified content moderation while simultaneously ensuring that target audiences could still find their videos. We identify non-contextuality, randomness, inaccuracy, and bias against marginalized communities as major issues regarding freedom of expression, equality of subjects, and support for communities of interest. Based on our findings on algospeak, we argue for a need to improve contextually informed content moderation so as to valorize marginalized and tabooed audiovisual content on social media.

Award ID(s): 2150217
NSF-PAR ID: 10480449
Author(s) / Creator(s):
Publisher / Repository: Social Media + Society
Date Published:
Journal Name: Social Media + Society
Volume: 9
Issue: 3
ISSN: 2056-3051
Sponsoring Org: National Science Foundation
More Like this
  1. Algospeak refers to social media users intentionally altering or substituting words when creating or sharing online content, for example, using ‘le$bean’ for ‘lesbian’. This study discusses the characteristics of algospeak as a computer-mediated language phenomenon on TikTok with regard to users’ algorithmic literacy and their awareness of how the platform’s algorithms work. We then present results from an interview study with TikTok creators on their motivations to utilize algospeak. Our results indicate that algospeak is used to oppose TikTok’s algorithmic moderation system in order to prevent unjust content violations and shadowbanning when posting about benign yet seemingly unwanted subjects on TikTok. We also find that although algospeak helps creators avoid these consequences, it often impedes the creation of quality content. We provide an adapted definition of algospeak and new insights into user-platform interactions in the context of algorithmic systems and algorithm awareness.
  2. With TikTok emerging as one of the most popular social media platforms, there is significant potential for science communicators to capitalize on this success and to share their science with a broad, engaged audience. While videos of chemistry and physics experiments are prominent among educational science content on TikTok, videos related to the geosciences are comparatively lacking, as is an analysis of what types of geoscience videos perform well on TikTok. To increase the visibility of the geosciences and geophysics on TikTok and to determine best strategies for geoscience communication on the app, we created a TikTok account called “Terra Explore” (@TerraExplore). The Terra Explore account is a joint effort between science communication specialists at UNAVCO, IRIS (Incorporated Research Institutions for Seismology), and OpenTopography. We produced 48 educational geoscience videos over a 4-month period between October 2021 and February 2022. We evaluated the performance of each video based on its reach, engagement, and average view duration to determine the qualities of a successful video. Our video topics primarily focused on seismology, earthquakes, topography, lidar (light detection and ranging), and GPS (Global Positioning System), in alignment with our organizational missions. Over this time period, our videos garnered over 2 million total views, and our account gained over 12,000 followers. The videos that received the most views received nearly all (~97%) of their views from the For You page, TikTok’s algorithmic recommendation feed. We found that short videos (<30 s) had a high average view duration, but longer videos (>60 s) had the highest engagement rates. Lecture-style videos that were approximately 60 s in length had more success in both reach and engagement. Our videos that received the highest number of views featured content that was related to a recent newsworthy event (e.g., an earthquake) or that explained location-based geology of a recognizable area. Our results highlight the algorithm-driven nature of TikTok, which results in a low barrier to entry and success for new science communication creators.
  3. Social media users create folk theories to help explain how elements of social media operate. Marginalized social media users face disproportionate content moderation and removal on social media platforms. We conducted a qualitative interview study (n = 24) to understand how marginalized social media users may create folk theories in response to content moderation and their perceptions of platforms’ spirit, and how these theories may relate to their marginalized identities. We found that marginalized social media users develop folk theories informed by their perceptions of platforms’ spirit to explain instances where their content was moderated in ways that violate their perceptions of how content moderation should work in practice. These folk theories typically address content being removed despite not violating community guidelines, along with bias against marginalized users embedded in guidelines. We provide implications for platforms, such as using marginalized users’ folk theories as tools to identify elements of platform moderation systems that function incorrectly and disproportionately impact marginalized users. 
  4. Modern social media platforms such as Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse that content creators are subjected to. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious and complex, and even when semi-automated, they involve high-consequence human decisions that affect the safety and popular perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs for proactive moderation and present a mapping between these needs and visual analytics tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, for designing interventions for moderators’ well-being, and, ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.
  5. Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts suspended, the processes governing content moderation are largely invisible, making assessing content moderation bias difficult. To study this bias, we conducted a digital ethnography of marginalized users on Reddit’s /r/FTM subreddit and Twitch’s “Just Chatting” and “Pools, Hot Tubs, and Beaches” categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, “unban appeal” streams, and “back from my ban” streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, at times harming those who must see offensive content and at other times allowing for increased platform accountability.