Volunteer moderators play a crucial role in safeguarding online communities, actively combating hate, harassment, and inappropriate content while enforcing community standards. Prior studies have examined moderation tools and practices, moderation challenges, and the emotional labor and burnout of volunteer moderators. However, researchers have yet to examine how moderators support one another in combating hate and harassment within the communities they moderate through participation in meta-communities of moderators. To address this gap, we conducted a qualitative content analysis of 115 hate and harassment-related threads from r/ModSupport and r/modhelp, two major subreddits where moderators seek this type of mutual support. Our study reveals that moderators seek assistance on topics ranging from fighting attacks to understanding Reddit policies and rules to venting their frustration. Other moderators respond to these requests by validating their frustration and challenges, offering emotional support, and providing information and tangible resources to help with their situation. Based on these findings, we discuss the implications of our work for facilitating platform and peer support for online volunteer moderators on Reddit and similar platforms.
SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks, such as toxic content and surveillance, that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment are a pervasive and growing experience for online users, particularly for at-risk communities such as young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five potential research directions that ultimately empower individuals, communities, and platforms to do so.
- Award ID(s):
- 1916096
- PAR ID:
- 10295029
- Date Published:
- Journal Name:
- 42nd IEEE Symposium on Security and Privacy
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Online harassment and content moderation have been well documented in online communities. However, new contexts and systems bring new forms of harassment and require new moderation mechanisms. This study focuses on hate raids, a form of real-time group attack in live streaming communities. Through a qualitative analysis of hate raid discussions in the Twitch subreddit (r/Twitch), we found that (1) hate raids are human-bot coordinated group attacks that leverage the live streaming system to target marginalized streamers and other potential groups with(out) breaking the rules, (2) marginalized streamers suffer compound harms with insufficient support from the platform, and (3) moderation strategies are overwhelmingly technical, yet streamers still struggle to balance moderation and participation given their marginalization status and needs. We use affordances as a lens to explain how hate raids happen in live streaming systems and propose moderation-by-design as a lens for developing new features or systems to mitigate the potential abuse of such designs.
-
Social media platforms often rely on volunteer moderators to combat hate and harassment and create safe online environments. In the face of challenges combating hate and harassment, moderators engage in mutual support with one another. We conducted a qualitative content analysis of 115 hate and harassment-related threads from r/ModSupport and r/modhelp, two major subreddit forums for this type of mutual support. We analyze the challenges moderators face; complex tradeoffs related to privacy, utility, and harassment; and major challenges in the relationship between moderators and platform admins. We also present the first systematization of how platform features (including especially security, privacy, and safety features) are misused for online abuse, and drawing on this systematization, we articulate design themes for platforms that want to resist such misuse.
-
Harassment has long been considered a severe social issue and a culturally contextualized construct. More recently, understanding and mitigating emerging forms of harassment in social Virtual Reality (VR) has become a growing research area in HCI and CSCW. Grounded in the perspective of harassment in U.S. culture, in this paper we identify new characteristics of online harassment in social VR through 30 in-depth interviews. We especially attend to how people who are already marginalized in gaming and virtual world contexts (e.g., women, LGBTQ people, and ethnic minorities) experience such harassment. As social VR is still a novel technology, our proactive approach highlights embodied harassment as an emerging but understudied form of harassment in novel online social spaces. Our critical review of social VR users' experiences of harassment and our recommendations to mitigate such harassment also extend the current conceptualization of online harassment in CSCW. We therefore contribute to the active prevention of future harassment in nuanced online environments, platforms, and experiences.
-
Online harassment against women, particularly in gaming and virtual worlds contexts, remains a salient and pervasive issue and arguably reflects the offline systems of structural oppression that control women's bodies and rights in today's world. Harassment in social Virtual Reality (VR) is also a growing frontier of research in HCI and CSCW, particularly with respect to marginalized users such as women. Based on interviews with 31 women users of social VR, our findings present women's experiences of harassment risks in social VR compared with harassment targeting women in pre-existing, on-screen online gaming and virtual worlds, along with the strategies women employ to manage harassment in social VR with varying degrees of success. This study contributes to the growing body of literature on harassment in social VR by highlighting how women's marginalization online and offline shapes their perceptions of harassment and their strategies to mitigate it in this unique space. It also provides a critical reflection on women's mitigation strategies and proposes implications for rethinking social VR design to better prevent harassment against women and other marginalized communities in the future metaverse.