With the growing ubiquity of the Internet and of media-based social media platforms, the risks associated with sharing media content on social media, and the need for safeguards against those risks, have become paramount. At the same time, risk is highly contextualized, especially when it comes to media content youth share privately on social media. In this work, we conducted qualitative content analyses of risky media content flagged by youth participants and by research assistants of similar ages to explore the contextual dimensions of youth online risks. These contextual risk dimensions were then used to inform semi- and self-supervised state-of-the-art vision transformers that automate the identification of risky images shared by youth. We found that vision transformers are capable of learning complex image features for use in automated risk detection and classification. The results of our study serve as a foundation for designing contextualized, youth-centered machine-learning methods for automated online risk detection.
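As a rough illustration of the kind of pipeline the abstract describes, the sketch below fine-tunes a pretrained vision transformer as a binary risky-image classifier. This is a minimal sketch under assumed conditions: the `images/{safe,risky}` folder layout, the two labels, and all hyperparameters are hypothetical stand-ins, not the paper's actual data, labels, or training setup.

```python
# Minimal sketch: fine-tuning a pretrained vision transformer (ViT) as a
# binary risky-image classifier. Dataset layout and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing expected by the pretrained ViT.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: images/{safe,risky}/*.jpg
dataset = ImageFolder("images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Load a ViT pretrained on ImageNet and replace its classification head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 2)  # two classes: safe vs. risky
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, for illustration only
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```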
Form-From: A Design Space of Social Media Systems
Social media systems are as varied as they are pervasive. They have been almost universally adopted for a broad range of purposes including work, entertainment, activism, and decision making. As a result, they have also diversified, with many distinct designs differing in content type, organization, delivery mechanism, access control, and many other dimensions. In this work, we aim to characterize and then distill a concise design space of social media systems that can help us understand similarities and differences, recognize potential consequences of design choice, and identify spaces for innovation. Our model, which we call Form-From, characterizes social media based on (1) the form of the content, either threaded or flat, and (2) from where or from whom one might receive content, ranging from spaces to networks to the commons. We derive Form-From inductively from a larger set of 62 dimensions organized into 10 categories. To demonstrate the utility of our model, we trace the history of social media systems as they traverse the Form-From space over time, and we identify common design patterns within cells of the model.
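As a reading aid, the two axes of the model can be made concrete as a small data type. The sketch below is our own illustrative encoding: the `Form` and `From_` class names and the example placements are assumptions, not the paper's artifact or its coding of specific platforms.

```python
# Illustrative encoding of the Form-From design space as a small data type.
# The dimension values come from the abstract; the class names are ours.
from dataclasses import dataclass
from enum import Enum

class Form(Enum):
    THREADED = "threaded"  # replies nest into conversation trees
    FLAT = "flat"          # content appears as an unthreaded stream

class From_(Enum):  # trailing underscore: "from" is a Python keyword
    SPACES = "spaces"      # content from bounded places (e.g., a forum)
    NETWORKS = "networks"  # content from whom you follow or friend
    COMMONS = "commons"    # content drawn from the platform at large

@dataclass(frozen=True)
class SocialMediaSystem:
    name: str
    form: Form
    source: From_

# Example placements (our own readings, not the paper's):
reddit = SocialMediaSystem("Reddit", Form.THREADED, From_.SPACES)
tiktok = SocialMediaSystem("TikTok", Form.FLAT, From_.COMMONS)
```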
- Award ID(s): 2236618
- PAR ID: 10525111
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 8
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 47
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this work, we analyze video data and interviews from a public deployment of two trash barrel robots in a large public space to better understand the sensemaking activities people perform when they encounter robots in public spaces. Based on an analysis of 274 human–robot interactions and interviews with N = 65 individuals or groups, we discovered that people were responding not only to the robots or their behavior, but also to the general idea of deploying robots as trashcans, and to the larger social implications of that idea. They wanted to understand details about the deployment because having that knowledge would change how they interacted with the robots. Based on our data and analysis, we provide design implications that may serve as topics for future human–robot design researchers exploring robots for public space deployment. Furthermore, our work offers a practical example of analyzing field data to make sense of robots in public spaces.
- Social media users have long been aware of opaque content moderation systems and how they shape platform environments. On TikTok, creators increasingly use algospeak to circumvent unjust content restriction; that is, they change or invent words to prevent TikTok's content moderation algorithm from banning their videos (e.g., "le$bean" for "lesbian"). We interviewed 19 TikTok creators about their motivations for and practices of using algospeak in relation to their experience with TikTok's content moderation. Participants largely anticipated how TikTok's algorithm would read their videos, and used algospeak to evade unjustified content moderation while ensuring that target audiences could still find their videos. We identify non-contextuality, randomness, inaccuracy, and bias against marginalized communities as major issues regarding freedom of expression, equality of subjects, and support for communities of interest. Drawing on these algospeak practices, we argue for the need for contextually informed content moderation that valorizes marginalized and tabooed audiovisual content on social media. (A toy sketch of this keyword-evasion dynamic appears after this list.)
- Social media platforms make trade-offs in their design and policy decisions to attract users and stand out from other platforms. These decisions are influenced by a number of considerations, e.g., what kinds of content moderation to deploy or what kinds of resources a platform has access to. Their choices play into broader political tensions: social media platforms are situated within a social context that frames their impact, and their designs can embody politics that enforce power structures and serve existing authorities. We turn to Pillowfort, a small social media platform, as a case study for examining these political tensions. Using discourse analysis, we examine public discussion posts between staff and users as they negotiate the site's development over a period of two years. Our findings illustrate the tensions in navigating the politics users bring with them from previous platforms, the difficulty of building a site's unique identity and encouraging commitment, and examples of how design decisions can both foster and break trust with users. Drawing from these findings, we discuss how political influences on design and policy decisions shape the success and failure of new social media platforms.
- Social media applications have benefited users in several ways, including ease of communication and quick access to information. However, they have also introduced several privacy and safety risks. These risks are particularly concerning in the context of interpersonal attacks, which are carried out by abusive friends, family members, intimate partners, co-workers, or even strangers. Evidence shows that interpersonal attackers regularly exploit social media platforms to harass and spy on their targets. To help protect targets from such attacks, social media platforms have introduced several privacy and safety features, but it is unclear how effective these are against interpersonal threats. In this work, we analyzed ten popular social media applications, identifying 100 unique privacy and safety features that provide controls across eight categories: discoverability, visibility, saving and sharing, interaction, self-censorship, content moderation, transparency, and reporting. We simulated 59 different attack actions by a persistent attacker, aimed at account discovery, information gathering, non-consensual sharing, and harassment, and found that many succeeded. Based on our findings, we propose improvements to mitigate these risks. (See the feature-audit sketch after this list.)
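Two of the items above describe mechanisms concrete enough to sketch in code. First, the non-contextuality of purely lexical moderation that TikTok creators exploit with algospeak: a naive keyword filter flags a word verbatim but misses its variant. This is a toy illustration only; the blocklist and the "le$bean" variant come from the abstract, and no real platform's moderation pipeline is shown.

```python
# Toy demonstration of non-contextual, lexical moderation: a naive keyword
# filter flags the plain word but misses its algospeak variant.
import re

BLOCKLIST = {"lesbian"}  # illustrative single-word blocklist

def naive_filter(caption: str) -> bool:
    """Return True if any blocklisted token appears verbatim."""
    tokens = re.findall(r"[a-z]+", caption.lower())
    return any(tok in BLOCKLIST for tok in tokens)

print(naive_filter("proud lesbian creator"))  # True: flagged
print(naive_filter("proud le$bean creator"))  # False: algospeak evades it
```

Second, the privacy and safety audit in the last item can be pictured as a small record type keyed by the eight control categories named in the abstract. The platform name, feature names, and outcomes below are invented placeholders for illustration, not the paper's data.

```python
# Sketch of the eight control categories as an enum, with a record type
# for auditing individual features. All concrete values are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class ControlCategory(Enum):
    DISCOVERABILITY = auto()
    VISIBILITY = auto()
    SAVING_AND_SHARING = auto()
    INTERACTION = auto()
    SELF_CENSORSHIP = auto()
    CONTENT_MODERATION = auto()
    TRANSPARENCY = auto()
    REPORTING = auto()

@dataclass
class SafetyFeature:
    platform: str
    name: str
    category: ControlCategory
    blocks_attack: bool  # did it stop the simulated attack action?

audit = [
    SafetyFeature("ExampleApp", "hide profile from search",
                  ControlCategory.DISCOVERABILITY, True),
    SafetyFeature("ExampleApp", "restrict direct messages",
                  ControlCategory.INTERACTION, False),
]
```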