Interdependent privacy (IDP) violations occur when users share personal information about others without permission, resulting in potential embarrassment, reputation loss, or harassment. Several strategies can be applied to protect IDP, but little is known about how social media users perceive IDP threats or how they prefer to respond to them. We used a mixed-method approach with a replication study to examine user beliefs about various government-, platform-, and user-level strategies for managing IDP violations. Participants reported that IDP represented a 'serious' online threat and identified themselves as primarily responsible for responding to violations. IDP strategies that felt more familiar and provided greater perceived control over violations (e.g., flagging, blocking, unfriending) were rated as more effective than platform- or government-driven interventions. Furthermore, we found users were more willing to share on social media if they perceived their interactions as protected. Findings are discussed in relation to control paradox theory.
What are Effective Strategies of Handling Harassment on Twitch?: Users' Perspectives
Harassment is an issue in online communities, and the live streaming platform Twitch is no exception. In this study, we surveyed 375 Twitch users in person at TwitchCon, asking them who should be responsible for deciding what should be allowed and what strategies they perceived as effective in handling harassment. We found that users thought streamers should be most responsible for enforcing rules, and that blocking bad actors, ignoring them, or trying to educate them were the most effective strategies.
- Award ID(s):
- 1841354
- PAR ID:
- 10178899
- Date Published:
- Journal Name:
- CSCW '19: Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing
- Page Range / eLocation ID:
- 166 to 170
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Rules and norms are critical to community governance. Live streaming communities like Twitch consist of thousands of micro-communities called channels. We conducted two studies to understand micro-community rules. Study one suggests that Twitch users perceive that both rule transparency and communication frequency matter to a channel's vibe and the frequency of harassment. Study two finds that the most popular channels have no channel or chat rules; among those that do have rules, rules encouraged by streamers are prominent. We explain why this may happen and how it contributes to community moderation and future research.
Social VR's focus on embodied and immersive experiences has led to intensified and more physicalized forms of harassment than other online contexts. Therefore, a growing body of HCI and CSCW work has explored multiple strategies and mechanisms to prevent and mitigate harassment risks in social VR. However, existing work has also highlighted a fundamental challenge in mitigating harassment in social VR: the apparent lack of consensus among social VR users on how to explicitly define harassment and what behaviors should be considered harassing in social VR. In this work, we aim to offer new knowledge on the uncertainty about how harassment is defined and perceived in social VR, particularly by learning from social VR users who have experienced both sides of harassment accusations. Based on interviews with 12 participants with diverse identities who have both been harassed by others and been accused of harassing others in social VR, we unpack how people justify and reflect on their behavior given their prior experiences of both being victims of harassment and being called a harasser. We thus offer unique insights into the complexity of harassment in social VR by highlighting cases of gray areas and critical ethical implications in such harassment accusations, which are understudied in the existing literature. We also propose two high-level design principles for new strategies and approaches to foster safe social VR spaces based on people's unique experiences of both sides of harassment accusations in social VR.
Online harassment refers to a wide range of harmful behaviors, including hate speech, insults, doxxing, and non-consensual image sharing. Social media platforms have developed complex processes to try to detect and manage content that may violate community guidelines; however, less work has examined the types of harms associated with online harassment or preferred remedies to that harassment. We conducted three online surveys with US adult Internet users measuring perceived harms and preferred remedies associated with online harassment. Study 1 found greater perceived harm associated with non-consensual photo sharing, doxxing, and reputational damage compared to other types of harassment. Study 2 found greater perceived harm with repeated harassment compared to one-time harassment, but no difference between individual and group harassment. Study 3 found variance in remedy preferences by harassment type; for example, banning users is rated highly in general, but is rated lower for non-consensual photo sharing and doxxing compared to harassing family and friends and damaging reputation. Our findings highlight that remedies should be responsive to harassment type and potential for harm. Remedies are also not necessarily correlated with harassment severity; expanding remedies may allow for more contextually appropriate and effective responses to harassment.
Online harassment and content moderation have been well documented in online communities. However, new contexts and systems always bring new forms of harassment and require new moderation mechanisms. This study focuses on hate raids, a form of real-time group attack in live streaming communities. Through a qualitative analysis of hate raid discussions in the Twitch subreddit (r/Twitch), we found that (1) hate raids, as human-bot coordinated group attacks, leverage the live stream system to attack marginalized streamers and other potential groups with(out) breaking the rules; (2) marginalized streamers suffer compound harms with insufficient support from the platform; and (3) moderation strategies are overwhelmingly technical, but streamers still struggle to balance moderation and participation given their marginalization status and needs. We use affordances as a lens to explain how hate raids happen in live streaming systems and propose moderation-by-design as a lens for developing new features or systems to mitigate the potential abuse of such designs.