Title: Punishment and Its Discontents: An Analysis of Permanent Ban in an Online Game Community
Platforms face the challenge of managing toxic behaviors such as flaming, hateful remarks, and harassment. To discipline their users, platforms usually adopt a punitive approach, issuing punishments that range from a warning message to content removal to permanent ban (PB). As the most severe punishment, PB deprives users of their privileges on the platform, such as account access and purchased content. Yet little is known about the experiential side of PB within the user community. In this study, we analyzed PB in League of Legends, one of the largest online games today. We argue that what PB does is not precisely to discipline players into well-behaved community members; rather, PB functions to produce the stereotype of "the most toxic player" in the community and is best seen as a platform rhetoric. We further discuss the need to contextualize toxicity through a restorative lens.
Award ID(s): 2006854
PAR ID: 10337606
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the ACM on Human-Computer Interaction
Volume: 5
Issue: CSCW2
ISSN: 2573-0142
Page Range / eLocation ID: 1 to 21
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Content moderation is a critical service performed by a variety of people on social media, protecting users from offensive or harmful content by reviewing and removing either the content or the perpetrator. These moderators fall into one of two categories: employees or volunteers. Prior research has suggested that there are differences in the effectiveness of these two types of moderators, with the more transparent user-based moderation being useful for educating users. However, direct comparisons between commercially-moderated and user-moderated platforms are rare, and apart from the difference in transparency, we still know little about what other disparities in user experience these two moderator types may create. To explore this, we conducted cross-platform surveys of over 900 users of commercially-moderated (Facebook, Instagram, Twitter, and YouTube) and user-moderated (Reddit and Twitch) social media platforms. Our results indicated that although user-moderated platforms did seem to be more transparent than commercially-moderated ones, this did not lead to user-moderated platforms being perceived as less toxic. In addition, commercially-moderated platform users want companies to take more responsibility for content moderation than they currently do, while user-moderated platform users want designated moderators and those who post on the site to take more responsibility. Across platforms, users seem to feel powerless and want to be taken care of when it comes to content moderation, as opposed to engaging themselves.
  2. As content moderation becomes a central aspect of all social media platforms and online communities, interest has grown in how to make moderation decisions contestable. On social media platforms where individual communities moderate their own activities, the responsibility to address user appeals falls on volunteers from within the community. While there is a growing body of work devoted to understanding and supporting volunteer moderators' workload, little is known about their practice of handling user appeals. Through a collaborative and iterative design process with Reddit moderators, we found that moderators spend considerable effort investigating user ban appeals and want to engage directly with users while retaining their agency over each decision. To fulfill these needs, we designed and built AppealMod, a system that induces friction in the appeals process by asking users to provide additional information before their appeals are reviewed by human moderators. In addition to giving moderators more information, we expected the friction in the appeal process to create a selection effect among users, with many insincere and toxic appeals being abandoned before receiving any attention from human moderators. To evaluate our system, we conducted a four-month randomized field experiment in a Reddit community of over 29 million users. As a result of the selection effect, moderators viewed only 30% of initial appeals and less than 10% of the toxically worded appeals, yet they granted roughly the same number of appeals as the control group. Overall, our system is effective at reducing moderator workload and minimizing moderator exposure to toxic content while honoring moderators' preference for direct engagement and agency in appeals.
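    A minimal Python sketch of the friction mechanism described above, offered only as an illustration: appeals are not routed to moderators until the appellant returns with the requested additional information. The class and function names here are assumptions for the sketch, not AppealMod's actual implementation.

```python
# Hypothetical sketch of a friction-based appeal gate in the spirit of
# AppealMod; names and structure are illustrative, not the paper's code.
from dataclasses import dataclass
from typing import Optional

FOLLOW_UP_PROMPT = (
    "Before a moderator reviews your appeal, please explain what happened "
    "and why the ban should be lifted."
)

@dataclass
class Appeal:
    user: str
    initial_message: str
    follow_up: Optional[str] = None  # filled in only if the user returns

class AppealQueue:
    """Appeals reach human moderators only after the friction step."""

    def __init__(self) -> None:
        self._pending: dict[str, Appeal] = {}
        self._for_review: list[Appeal] = []

    def submit(self, user: str, message: str) -> str:
        # Step 1: every new appeal receives the follow-up prompt instead
        # of immediately notifying a moderator.
        self._pending[user] = Appeal(user, message)
        return FOLLOW_UP_PROMPT

    def provide_details(self, user: str, details: str) -> bool:
        # Step 2: only users who return with the requested information are
        # queued for review; abandoned appeals never surface.
        appeal = self._pending.pop(user, None)
        if appeal is None:
            return False
        appeal.follow_up = details
        self._for_review.append(appeal)
        return True

    def moderator_inbox(self) -> list[Appeal]:
        # Moderators see only completed appeals, preserving their direct
        # engagement with each appellant.
        return list(self._for_review)
```

    The selection effect falls out of the two-step flow: appeals whose authors never complete the second step simply never enter the moderator inbox, so moderators' exposure to insincere or toxic appeals drops without any automated decision-making.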
  3. User reporting is an essential component of content moderation on many online platforms, in particular on end-to-end encrypted (E2EE) messaging platforms, where platform operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand users' mental models of how reporting works and their resultant privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both the promise of such data for better abuse mitigation and the privacy risk that platforms may exploit the data or fail to protect it. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus the community level: for instance, users trust platform moderators more not to abuse their power but think community moderators have more time to attend to reports. These considerations, along with the perceived effectiveness of reporting and the question of how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms.
  4. Interdisciplinary collaborations are essential for addressing complex global challenges, yet forming and sustaining such teams is often hindered by institutional barriers, differences in discipline-specific languages, and cultural divides. Existing tools and platforms frequently fail to foster the deep, ongoing engagement necessary for successful interdisciplinary work. This paper proposes a novel web-based platform designed to stimulate and support interdisciplinary collaborations by integrating social media elements, such as user-friendly communication tools, algorithms for identifying and connecting individuals with complementary and unique skills, and smart suggestions for potential collaborators (a toy sketch of such matching appears below). The platform would facilitate building and maintaining the engagement of target users and provide guardrails to engender community trust, while also aiming to tackle issues related to power dynamics, cultural differences, institutional structures, and varying levels of prestige or funding. By addressing these challenges, the proposed platform would enable and accelerate productive interdisciplinary research and collaborative ideation, ultimately stimulating more innovative and effective solutions to complex scientific and societal problems.
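    As a rough, invented illustration of the complementary-skill matching such a platform might employ: rank candidate collaborators so that pairs share enough vocabulary to communicate but differ enough to complement each other. The scoring rule, weights, and example data below are assumptions for this sketch, not details from the paper.

```python
# Invented illustration of complementary-skill matching; the scoring rule
# and weights are assumptions for this sketch, not the proposed platform's.

def complementarity_score(a: set[str], b: set[str]) -> float:
    """Favor partners who share some vocabulary but bring new skills."""
    if not a or not b:
        return 0.0
    overlap = len(a & b) / len(a | b)  # Jaccard: common ground for communication
    novelty = len(b - a) / len(b)      # fraction of b's skills that are new to a
    if overlap == 0:
        return 0.0  # no shared language at all makes collaboration hard to start
    return 0.3 * overlap + 0.7 * novelty

def suggest_collaborators(me: set[str], candidates: dict[str, set[str]], k: int = 3):
    """Return the top-k candidate names ranked by complementarity."""
    ranked = sorted(candidates,
                    key=lambda name: complementarity_score(me, candidates[name]),
                    reverse=True)
    return ranked[:k]

# Example: an ecologist seeking quantitative collaborators.
me = {"field ecology", "statistics"}
candidates = {
    "modeler": {"machine learning", "statistics"},
    "twin": {"field ecology", "statistics"},            # too similar: low novelty
    "engineer": {"sensor design", "embedded systems"},  # no common ground: scores 0
}
print(suggest_collaborators(me, candidates))  # ['modeler', 'twin', 'engineer']
```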
  5. Social media users create folk theories to help explain how elements of social media operate. Marginalized social media users face disproportionate content moderation and removal on social media platforms. We conducted a qualitative interview study (n = 24) to understand how marginalized social media users may create folk theories in response to content moderation and their perceptions of platforms’ spirit, and how these theories may relate to their marginalized identities. We found that marginalized social media users develop folk theories informed by their perceptions of platforms’ spirit to explain instances where their content was moderated in ways that violate their perceptions of how content moderation should work in practice. These folk theories typically address content being removed despite not violating community guidelines, along with bias against marginalized users embedded in guidelines. We provide implications for platforms, such as using marginalized users’ folk theories as tools to identify elements of platform moderation systems that function incorrectly and disproportionately impact marginalized users. 