As part of a Youth Advisory Board (YAB) of teens, a longitudinal and interactive program for engaging teens in adolescent online safety research, we used an Asynchronous Remote Community (ARC) method with seven teens to explore their social media usage and their perspectives on privacy on social media. Our teen participants' preferred social media platforms spanned a spectrum of privacy levels, and their preferences varied depending on user goals such as content viewing and socializing. They recognized the privacy risks they could encounter on social media and therefore actively used the privacy features afforded by platforms to stay safe while meeting their goals. In addition, our teen participants designed solutions that help users exercise more granular control over which information on their accounts is shared with which groups of users. Our findings highlight the need for researchers and social media developers to work with teens to provide teen-centric solutions for safer experiences on social media.
                    This content will become publicly available on April 1, 2026
                            
                            Can Social Media Privacy and Safety Features Protect Targets of Interpersonal Attacks? A Systematic Analysis
Social media applications have benefited users in several ways, including ease of communication and quick access to information. However, they have also introduced several privacy and safety risks. These risks are particularly concerning in the context of interpersonal attacks, which are carried out by abusive friends, family members, intimate partners, co-workers, or even strangers. Evidence shows that interpersonal attackers regularly exploit social media platforms to harass and spy on their targets. To help protect targets from such attacks, social media platforms have introduced several privacy and safety features, but it is unclear how effective these are against interpersonal threats. In this work, we analyzed ten popular social media applications, identifying 100 unique privacy and safety features that provide controls across eight categories: discoverability, visibility, saving and sharing, interaction, self-censorship, content moderation, transparency, and reporting. We simulated 59 distinct attack actions by a persistent attacker, aimed at account discovery, information gathering, non-consensual sharing, and harassment, and found that many succeeded. Based on our findings, we propose improvements to mitigate these risks.
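As an illustrative sketch of this style of analysis, the question of whether a platform's enabled features mitigate a given attack action can be modeled as set coverage. The eight category names below come from the abstract, but the specific attack actions and their mitigating-category mappings are hypothetical examples, not the paper's actual data.

```python
# Hypothetical model of the paper's feature-versus-attack analysis.
# Category names are from the abstract; the attack actions and their
# mitigating categories below are illustrative assumptions only.

CATEGORIES = [
    "discoverability", "visibility", "saving and sharing", "interaction",
    "self-censorship", "content moderation", "transparency", "reporting",
]

# Illustrative attack actions mapped to the feature categories that
# could, in principle, mitigate them.
ATTACKS = {
    "search for target by phone number": {"discoverability"},
    "screenshot a disappearing message": {"saving and sharing", "transparency"},
    "send repeated unwanted messages": {"interaction", "reporting"},
    "view target's follower list": {"visibility"},
}

def unmitigated(enabled_categories):
    """Return attack actions with no enabled mitigating category."""
    return [a for a, cats in ATTACKS.items() if not (cats & enabled_categories)]
```

With only discoverability and visibility controls enabled, the two attacks relying on saving/sharing, transparency, interaction, or reporting features remain unmitigated, mirroring the paper's finding that many simulated attacks succeed when relevant controls are missing.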
        
    
- Award ID(s): 2339679
- PAR ID: 10589576
- Publisher / Repository: Proceedings on Privacy Enhancing Technologies Symposium
- Date Published:
- Journal Name: Proceedings on Privacy Enhancing Technologies
- Volume: 2025
- Issue: 2
- ISSN: 2299-0984
- Page Range / eLocation ID: 326 to 343
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Voluntary sharing of personal information is at the heart of user engagement on social media and central to platforms' business models. From the users' perspective, so-called self-disclosure is closely connected with both privacy risks and social rewards. Prior work has studied contextual influences on self-disclosure, from platform affordances and interface design to user demographics and perceived social capital. Our work takes a mixed-methods approach to understand the contextual information which might be integrated in the development of privacy-enhancing technologies. Through an observational study of several Reddit communities, we explore the ways in which topic of discussion, group norms, peer effects, and audience size are correlated with personal information sharing. We then build and test a prototype privacy-enhancing tool that exposes these contextual factors. Our work culminates in a browser extension that automatically detects instances of self-disclosure in Reddit posts at the time of posting and provides additional context to users before they post, supporting enhanced privacy decision-making. We share this prototype with social media users, solicit their feedback, and outline a path forward for privacy-enhancing technologies in this space.
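A minimal sketch of the posting-time detection step this abstract describes might look like the following. The regular-expression heuristics and disclosure categories here are illustrative assumptions; the actual prototype presumably relies on a trained self-disclosure detector rather than hand-written patterns.

```python
import re

# Illustrative heuristic for flagging possible self-disclosure in a draft
# post before submission. Categories and patterns are hypothetical; a real
# tool would use a trained classifier, not keyword matching.
DISCLOSURE_PATTERNS = {
    "age": re.compile(r"\bI(?:'m| am)\s+\d{1,2}\b", re.IGNORECASE),
    "location": re.compile(r"\bI live in\s+[A-Z][a-z]+"),
    "health": re.compile(r"\bI was diagnosed with\b", re.IGNORECASE),
}

def flag_disclosures(draft):
    """Return the disclosure categories matched in the draft text."""
    return [kind for kind, pat in DISCLOSURE_PATTERNS.items() if pat.search(draft)]
```

In a browser-extension setting, the flagged categories would drive the additional context shown to the user, such as how common that disclosure type is in the target community, before the post goes live.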
- Future online safety technologies should consider the privacy needs of adolescents (ages 13-17) and support their ability to self-regulate their online behaviors and navigate online risks. To do this, adolescent online safety researchers and practitioners must shift toward solutions that are more teen-centric by designing privacy-preserving online safety solutions for teens. In this paper, we discuss privacy challenges we have encountered in conducting adolescent online safety research. We discuss teens' privacy concerns about sharing their private social media data with researchers and about potentially taking part in a user study where they share some of this information with their parents. Our research emphasizes the need for more privacy-preserving interventions for teens.
- With the growing ubiquity of the Internet and of media-based social media platforms, the risks associated with media content sharing on social media, and the need for safety measures against such risks, have grown paramount. At the same time, risk is highly contextualized, especially when it comes to media content youth share privately on social media. In this work, we conducted qualitative content analyses of risky media content flagged by youth participants and research assistants of similar ages to explore contextual dimensions of youth online risks. The contextual risk dimensions were then used to inform semi- and self-supervised state-of-the-art vision transformers to automate the process of identifying risky images shared by youth. We found that vision transformers are capable of learning complex image features for use in automated risk detection and classification. The results of our study serve as a foundation for designing contextualized and youth-centered machine-learning methods for automated online risk detection.
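The final classification step this abstract describes, turning per-dimension model scores into flagged risk dimensions, could be post-processed along these lines. The dimension names and threshold are hypothetical placeholders, not taken from the study, and the scores would come from a multi-label vision-transformer head.

```python
# Illustrative post-processing for a multi-label image risk classifier:
# mapping per-dimension scores (e.g., sigmoid outputs of a vision
# transformer head) to flagged contextual risk dimensions. The dimension
# names and the 0.5 threshold are hypothetical, not the paper's.
RISK_DIMENSIONS = ["violence", "nudity", "substance use", "self-harm"]

def flag_risks(scores, threshold=0.5):
    """Return risk dimensions whose score meets or exceeds the threshold."""
    return [d for d, s in zip(RISK_DIMENSIONS, scores) if s >= threshold]
```

Thresholding per dimension (rather than picking a single top class) reflects the multi-label nature of contextual risk, since one image can exhibit several risk dimensions at once.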
- Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse today. The opaque nature of the algorithms these platforms use to curate content raises societal questions. Prior studies have used black-box methods led by experts or collaborative audits driven by everyday users to show that these algorithms can lead to biased or discriminatory outcomes. However, existing auditing methods face fundamental limitations because they function independently of the platforms. Concerns about potential harmful outcomes have prompted legislative proposals in both the U.S. and the E.U. to mandate a new form of auditing in which vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals to provide such auditing, because auditing at scale risks disclosure of users' private data and platforms' proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. The first contribution of our work is to enumerate the challenges and limitations of existing auditing methods in implementing these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show that platform-supported auditing need not risk user privacy nor disclosure of platforms' business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant-factor increase (6.34× as an upper bound, and 4× for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight into how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
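The reported privacy overhead can be sanity-checked with a one-line calculation: the private audit needs only a constant factor more samples than the non-private one. The helper name and its default factor below are illustrative, using the two factors quoted in the abstract.

```python
# Back-of-envelope check of the privacy overhead reported above: ensuring
# privacy multiplies the audit's sample requirement by a small constant
# factor (6.34x upper bound, about 4x for typical parameters).
def private_sample_count(base_samples, factor=4.0):
    """Samples needed for a private audit, given the non-private count."""
    return round(base_samples * factor)
```

So an audit needing 1,000 samples without privacy would need roughly 4,000 in the typical case, and at most 6,340 under the stated upper bound, a modest cost for the privacy guarantee.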