
Title: SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks—such as toxic content and surveillance—that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five potential research directions that ultimately empower individuals, communities, and platforms to do so.
Journal Name: 42nd IEEE Symposium on Security and Privacy
Sponsoring Org: National Science Foundation
More Like this
  1. While most moderation actions on major social platforms are performed by either the platforms themselves or volunteer moderators, it is rare for platforms to collaborate directly with moderators to address problems. This paper examines how the group-chatting platform Discord coordinated with experienced volunteer moderators to respond to hate and harassment toward LGBTQ+ communities during Pride Month, June 2021, in what came to be known as the "Pride Mod" initiative. Representatives from Discord and volunteer moderators collaboratively identified and communicated with targeted communities, and volunteers temporarily joined servers that requested support to supplement those servers' existing volunteer moderation teams. Though LGBTQ+ communities were subject to a wave of targeted hate during Pride Month, the communities that received the requested volunteer support reported having a better capacity to handle the issues that arose. This paper reports the results of interviews with 11 moderators who participated in the initiative as well as the Discord employee who coordinated it. We show how this initiative was made possible by the way Discord has cultivated trust and built formal connections with its most active volunteers, and discuss the ethical implications of formal collaborations between for-profit platforms and volunteer users.
  2. Harassment has long been considered a severe social issue and a culturally contextualized construct. More recently, understanding and mitigating emerging harassment in social Virtual Reality (VR) has become a growing research area in HCI and CSCW. Based on the perspective of harassment in U.S. culture, in this paper we identify new characteristics of online harassment in social VR using 30 in-depth interviews. We especially attend to how people who are already considered marginalized in gaming and virtual-world contexts (e.g., women, LGBTQ people, and ethnic minorities) experience such harassment. As social VR is still a novel technology, our proactive approach highlights embodied harassment as an emerging but understudied form of harassment in novel online social spaces. Our critical review of social VR users' experiences of harassment and our recommendations to mitigate such harassment also extend the current conceptualization of online harassment in CSCW. We therefore contribute to the active prevention of future harassment in nuanced online environments, platforms, and experiences.
  3. Mapping of spatial hotspots, i.e., regions with significantly higher rates of generating cases of certain events (e.g., disease or crime cases), is an important task in diverse societal domains, including public health, public safety, transportation, agriculture, environmental science, and so on. Clustering techniques required by these domains differ from traditional clustering methods due to the high economic and social costs of spurious results (e.g., false alarms of crime clusters). As a result, statistical rigor is needed explicitly to control the rate of spurious detections. To address this challenge, techniques for statistically-robust clustering (e.g., scan statistics) have been extensively studied by the data mining and statistics communities. In this survey, we present an up-to-date and detailed review of the models and algorithms developed by this field. We first present a general taxonomy for statistically-robust clustering, covering key steps of data and statistical modeling, region enumeration and maximization, and significance testing. We further discuss different paradigms and methods within each of the key steps. Finally, we highlight research gaps and potential future directions, which may serve as a stepping stone in generating new ideas and thoughts in this growing field and beyond.
  4. A forcing function is an intervention for constraining human behavior. However, the literature describing forcing functions provides little guidance for when and how to apply forcing functions or their associated trade-offs. In this paper, we address these shortcomings by introducing a novel taxonomy of forcing functions. This taxonomy extends the previous methods in four ways. First, it identifies two levels of forcing function solidity: hard forcing functions, which explicitly enforce constraints through the system, and soft forcing functions, which convey or communicate constraints. Second, each solidity level is decomposed into specific types. Third, the taxonomy hierarchically ranks forcing function solidities and types based on trade-offs of constraint and resilience. Fourth, for hard forcing functions, our taxonomy offers formal guidance for identifying the minimally constraining intervention that will prevent a specific error from occurring. We validated the ability of our method to identify effective error interventions by applying it to systems with known errors from the literature. We then compared the solutions offered by our method to known, effective interventions. We discuss our results and offer suggestions for further developments in future research.
  5. Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).