Most social media platforms implement content moderation to address interpersonal harms such as harassment. Content moderation typically relies on offender-centered, punitive approaches, e.g., bans and content removal. We consider an alternative justice framework, restorative justice, which aids victims in healing, supports offenders in repairing the harm, and engages community members in addressing the harm collectively. To assess the utility of restorative justice in addressing online harm, we interviewed 23 users from Overwatch gaming communities, including moderators, victims, and offenders; such communities are particularly susceptible to harm, with nearly three quarters of all online game players suffering some form of online abuse. We study how these communities currently handle harm cases through the lens of restorative justice and examine their attitudes toward implementing restorative justice processes. Our analysis reveals that cultural, technical, and resource-related obstacles hinder the implementation of restorative justice within the existing punitive framework, despite online communities' needs and existing structures that could support it. We discuss how current content moderation systems can embed restorative justice goals and practices and overcome these challenges.
Drawing from justice theories to support targets of online harassment
Most content moderation approaches in the United States rely on criminal justice models that sanction offenders via content removal or user bans. However, these models write the online harassment targets out of the justice-seeking process. Via an online survey with US participants (N = 573), this research draws from justice theories to investigate approaches for supporting targets of online harassment. We uncover preferences for banning offenders, removing content, and apologies, but aversion to mediation and adjusting targets' audiences. Preferences vary by identities (e.g., transgender participants on average find more exposure to be undesirable; American Indian or Alaska Native participants on average find payment to be unfair) and by social media behaviors (e.g., Instagram users report payment as just and fair). Our results suggest that a one-size-fits-all approach will fail some users while privileging others. We propose a broader theoretical and empirical landscape for supporting online harassment targets.
- Award ID(s): 1763297
- PAR ID: 10547159
- Publisher / Repository: SAGE Publications
- Journal Name: New Media & Society
- Volume: 23
- Issue: 5
- ISSN: 1461-4448
- Pages: 1278-1300
- Sponsoring Org: National Science Foundation
More Like this
- Social media platforms aspire to create online experiences where users can participate safely and equitably. However, women around the world experience widespread online harassment, including insults, stalking, aggression, threats, and non-consensual sharing of sexual photos. This article describes women's perceptions of harm associated with online harassment and preferred platform responses to that harm. We conducted a survey in 14 geographic regions around the world (N = 3,993), focusing on regions whose perspectives have been insufficiently elevated in social media governance decisions (e.g., Mongolia, Cameroon). Results show that, on average, women perceive greater harm associated with online harassment than men, especially for non-consensual image sharing. Women also prefer most platform responses more strongly than men do, especially removing content and banning users; however, women are less favorable toward payment as a response. Addressing global gender-based violence online requires understanding how women experience online harms and how they wish for those harms to be addressed. This is especially important given that the people who build and govern technology are not typically those most likely to experience online harms.
- Online harassment is pervasive. While substantial research has examined the nature of online harassment and how to moderate it, little work has explored how social media users evaluate the profiles of online harassers. This matters for helping people who are experiencing or observing harassment to quickly and efficiently evaluate the user doing the harassing. We conducted a lab experiment (N = 45) that eye-tracked participants while they viewed mock Facebook, Twitter, and Instagram profiles of users who engaged in online harassment. We evaluated which profile elements participants looked at and for how long, relative to a control group, as well as their qualitative attitudes about harasser profiles. Results showed that participants look at harassing users' post history more quickly than that of non-harassing users. They are also somewhat more likely to recall harassing profiles than non-harassing profiles. However, they do not spend more time on harassing profiles. Understanding what users pay attention to and recall may offer new design opportunities for supporting people who experience or observe harassment online.
- Online harassment refers to a wide range of harmful behaviors, including hate speech, insults, doxxing, and non-consensual image sharing. Social media platforms have developed complex processes to detect and manage content that may violate community guidelines; however, less work has examined the types of harms associated with online harassment or users' preferred remedies for that harassment. We conducted three online surveys with US adult Internet users measuring perceived harms and preferred remedies associated with online harassment. Study 1 found greater perceived harm associated with non-consensual photo sharing, doxxing, and reputational damage compared to other types of harassment. Study 2 found greater perceived harm with repeated harassment compared to one-time harassment, but no difference between individual and group harassment. Study 3 found variance in remedy preferences by harassment type; for example, banning users is rated highly in general, but lower for non-consensual photo sharing and doxxing than for harassing family and friends or damaging reputation. Our findings highlight that remedies should be responsive to harassment type and potential for harm. Remedy preferences are also not necessarily correlated with harassment severity; expanding the range of remedies may allow for more contextually appropriate and effective responses to harassment.
- An overarching goal of Artificial Intelligence (AI) is creating autonomous, social agents that help people. Two important challenges, though, are that different people prefer different assistance from agents and that preferences can change over time. Thus, helping behaviors should be tailored to how an individual feels during the interaction. We hypothesize that human nonverbal behavior can give clues about users' preferences for an agent's helping behaviors, augmenting an agent's ability to computationally predict such preferences with machine learning models. To investigate our hypothesis, we collected data from 194 participants via an online survey in which participants were recorded while playing a multiplayer game. We evaluated whether including nonverbal human signals, as well as additional context (e.g., game or personality information), improved prediction of user preferences between agent behaviors compared to explicitly provided survey responses. Our results suggest that nonverbal communication, a common type of human implicit feedback, can aid in understanding how people want computational agents to interact with them.
