The increasing harms caused by hate, harassment, and other forms of abuse online have motivated major platforms to explore hierarchical governance. The idea is to let communities designate members to take on moderation and leadership duties, while still allowing members to escalate issues to the platform. But these promising approaches have only been explored in plaintext settings where community content is visible to the platform. It is unclear how to realize hierarchical governance in the growing number of online communities that use end-to-end encrypted (E2EE) messaging for privacy. We propose the design of private, hierarchical governance systems. These should enable community governance comparable to plaintext settings, while maintaining cryptographic privacy of content and of governance actions not reported to the platform. We design the first such system, taking a layered approach that adds governance logic on top of an encrypted messaging protocol; we show how an extension to the Messaging Layer Security (MLS) protocol suffices for achieving a rich set of governance policies. Our approach allows developers to rapidly prototype new governance features, taking inspiration from a plaintext system called PolicyKit. We report on an initial prototype encrypted messaging system called MlsGov that supports content-based community and platform moderation, elections of community moderators, votes to remove abusive users, and more.
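To make the layered design described above concrete, here is a minimal sketch of how governance actions might ride inside ordinary encrypted application messages. It is purely illustrative: the names (ActionKind, GovernanceMessage, FakeMlsGroup, encrypt_and_send) are hypothetical stand-ins, not MlsGov's code or the MLS API, and the encryption step is stubbed out. The point is the layering itself: governance events are serialized and sent over the same E2EE channel as chat, so the platform learns about them only when a member reports.

```python
# Illustrative sketch only (hypothetical names, not MlsGov or the MLS API):
# governance actions are layered on top of an E2EE group protocol by tagging
# application messages, so moderation events travel encrypted alongside chat.
import json
from dataclasses import dataclass
from enum import Enum


class ActionKind(Enum):
    CHAT = "chat"                  # ordinary message content
    ELECT_MODERATOR = "elect_mod"  # community election of a moderator
    REMOVE_USER = "remove_user"    # vote to remove an abusive member
    REPORT = "report"              # member-initiated escalation to the platform


@dataclass
class GovernanceMessage:
    kind: ActionKind
    sender: str
    payload: dict

    def encode(self) -> bytes:
        """Serialize for transport inside an encrypted application message."""
        return json.dumps(
            {"kind": self.kind.value, "sender": self.sender, "payload": self.payload}
        ).encode()

    @staticmethod
    def decode(data: bytes) -> "GovernanceMessage":
        obj = json.loads(data)
        return GovernanceMessage(ActionKind(obj["kind"]), obj["sender"], obj["payload"])


class FakeMlsGroup:
    """Stand-in for an MLS group session; this interface is hypothetical."""

    def encrypt_and_send(self, plaintext: bytes) -> None:
        # A real system would encrypt under the group's MLS keys and hand the
        # ciphertext to the delivery service, which cannot read governance actions.
        print(f"delivering {len(plaintext)} bytes as an encrypted application message")


if __name__ == "__main__":
    group = FakeMlsGroup()
    vote = GovernanceMessage(ActionKind.REMOVE_USER, sender="alice",
                             payload={"target": "mallory", "vote": "yes"})
    group.encrypt_and_send(vote.encode())
    assert GovernanceMessage.decode(vote.encode()).kind is ActionKind.REMOVE_USER
```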
"Is Reporting Worth the Sacrifice of Revealing What I’ve Sent?": Privacy Considerations When Reporting on End-to-End Encrypted Platforms
User reporting is an essential component of content moderation on many online platforms, particularly on end-to-end encrypted (E2EE) messaging platforms where operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand their mental models of how reporting works and the privacy concerns and considerations that follow. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the risk that platforms may exploit or fail to protect them. We also find that users hold preconceptions about the respective capabilities and risks of platform-level versus community-level moderators: for instance, users trust platform moderators more not to abuse their power, but think community moderators have more time to attend to reports. These considerations, along with the perceived effectiveness of reporting and the question of how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for more privacy-preserving reporting systems on E2EE messaging platforms.
- Award ID(s): 2120497
- PAR ID: 10523195
- Publisher / Repository: USENIX Association
- Date Published:
- ISBN: 978-1-939133-36-6
- Format(s): Medium: X
- Location: Anaheim, CA
- Sponsoring Org: National Science Foundation
More Like this
- Social media platforms often rely on volunteer moderators to combat hate and harassment and create safe online environments. In the face of challenges combating hate and harassment, moderators engage in mutual support with one another. We conducted a qualitative content analysis of 115 hate- and harassment-related threads from r/ModSupport and r/modhelp, two major subreddit forums for this type of mutual support. We analyze the challenges moderators face; complex tradeoffs related to privacy, utility, and harassment; and major challenges in the relationship between moderators and platform admins. We also present the first systematization of how platform features (especially security, privacy, and safety features) are misused for online abuse, and drawing on this systematization we articulate design themes for platforms that want to resist such misuse.
- Due to challenges around low-quality comments and misinformation, many news outlets have opted to turn off commenting features on their websites. The New York Times (NYT), on the other hand, has continued to scale up its online discussion resources to reach large audiences. Through interviews with the NYT moderation team, we present examples of how moderators manage the first ~24 hours of online discussion after a story breaks, while balancing concerns about journalistic credibility. We discuss how managing comments at the NYT is not merely a matter of content regulation, but can involve reporting from the "community beat" to recognize emerging topics and synthesize the multiple perspectives in a discussion to promote community. We discuss how other news organizations, including those lacking moderation resources, might appropriate the strategies and decisions offered by the NYT. Future research should investigate strategies to share and update the information generated about topics in the news through the course of content moderation.
- This design project arose with the purpose of intervening in the current landscape of content moderation. Our team's primary focus is community moderators, specifically volunteer moderators for online community spaces. Community moderators play a key role in upholding the guidelines and culture of online community spaces, as well as managing and protecting community members against harmful content online. Yet community moderators notably lack the official resources and training that their commercial moderator counterparts have. To address this, we present ModeratorHub, a knowledge sharing platform that focuses on community moderation. In our current design stage, we focused on two features: (1) moderation case documentation and (2) moderation case sharing. These are our team's initial building blocks of a larger intervention aimed at supporting moderators and promoting social support and collaboration among end users of online community ecosystems.
- Mainstream platforms' content moderation systems typically employ generalized "one-size-fits-all" approaches, intended to serve both general and marginalized users. Thus, transgender people must often create their own technologies and moderation systems to meet their specific needs. In our interview study of transgender technology creators (n=115), we found that creators face issues of transphobic abuse and disproportionate content moderation. Trans tech creators address these issues by carefully moderating and vetting their userbases, centering trans contexts in content moderation systems, and employing collective governance and community models. Based on these findings, we argue that trans tech creators' approaches to moderation offer important insights into how to better design for trans users, and ultimately, marginalized users in the larger platform ecology. We introduce the concept of trans-centered moderation: content moderation that reviews and successfully vets transphobic users, appoints trans moderators to effectively moderate trans contexts, considers the limitations and constraints of technology for addressing social challenges, and employs collective governance and community models. Trans-centered moderation can help to improve platform design for trans users while reducing the harm faced by trans people and marginalized users more broadly.