Search for: All records

Award ID contains: 2329976


  1. On social media, teens must manage their interpersonal boundaries not only with other people, but also with the algorithms embedded in these platforms. In this context, we engaged seven teens in an Asynchronous Remote Community (ARC) as part of a multi-year Youth Advisory Board (YAB) to discuss how they navigate and cope with boundary challenges and to co-design for improved boundary management. Teens had preconceived notions of different platforms and navigated boundaries based on specific goals; yet, they struggled when platforms lacked the granular controls needed to meet their needs. Teens enjoyed the personalization afforded by algorithms, but they felt violated when algorithms pushed unwanted content. Teens designed features for enhanced control over their discoverability and for real-time risk detection to avoid boundary turbulence. We provide design guidelines for improved social media boundary management for youth and pinpoint educational opportunities to enhance teens' understanding and use of social media privacy settings and algorithms.
    Free, publicly-accessible full text available June 23, 2026
  2. Ensuring the online safety of youth has motivated research towards the development of machine learning (ML) methods capable of accurately detecting social media risks after-the-fact. However, for these detection models to be effective, they must proactively identify high-risk scenarios (e.g., sexual solicitations, cyberbullying) to mitigate harm. This "real-time" responsiveness is a recognized challenge within the risk detection literature. Therefore, this paper presents a novel two-level framework that first uses reinforcement learning to identify conversation stop points to prioritize messages for evaluation. Then, we optimize state-of-the-art deep learning models to accurately categorize risk priority (low, high). We applied this framework to a time-based simulation using a rich dataset of 23K private conversations with over 7 million messages donated by 194 youth (ages 13-21). We conducted an experiment comparing our new approach to a traditional conversation-level baseline. We found that the timeliness of conversations significantly improved from over 2 hours to approximately 16 minutes with only a slight reduction in accuracy (0.88 to 0.84). This study advances real-time detection approaches for social media data and provides a benchmark for future training of reinforcement learning models that prioritize the timeliness of classifying high-risk conversations.
    Free, publicly-accessible full text available June 7, 2026
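The two-level framework above can be sketched in simplified form. This is an illustrative stand-in only, not the paper's implementation: a rule-based time-gap detector replaces the learned reinforcement-learning stopping policy, and a keyword check replaces the optimized deep-learning classifier; all names, thresholds, and messages are hypothetical.

```python
# Hedged sketch of a two-level risk-prioritization pipeline.
# Level 1 finds conversation "stop points" (here, a simple time-gap rule
# standing in for the reinforcement-learning policy); Level 2 assigns a
# binary risk priority (here, keyword matching standing in for the
# deep-learning model). All data and vocabulary below are illustrative.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    timestamp: float  # seconds since the conversation started

def find_stop_points(messages, gap_seconds=300):
    """Return indices where the conversation pauses long enough to evaluate."""
    stops = []
    for i in range(1, len(messages)):
        if messages[i].timestamp - messages[i - 1].timestamp > gap_seconds:
            stops.append(i - 1)
    stops.append(len(messages) - 1)  # the conversation end is always a stop point
    return stops

RISK_TERMS = {"threat", "solicit"}  # illustrative placeholder vocabulary

def classify_priority(messages):
    """Binary risk priority for a message span: 'high' or 'low'."""
    text = " ".join(m.text.lower() for m in messages)
    return "high" if any(term in text for term in RISK_TERMS) else "low"

# Evaluate each span as soon as its stop point is reached, rather than
# waiting for the whole conversation to finish.
convo = [Message("hey", 0), Message("that sounds like a threat", 10),
         Message("ok", 900)]
for stop in find_stop_points(convo):
    print(stop, classify_priority(convo[: stop + 1]))
```

The design point mirrored here is that prioritizing evaluation at stop points lets high-risk spans be flagged mid-conversation instead of hours later.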
  3. The debate on whether social media has a net positive or negative effect on youth is ongoing. Therefore, we conducted a thematic analysis on 2,061 posts made by 1,038 adolescents aged 15-17 on an online peer-support platform to investigate the ways in which these teens discussed popular social media platforms in their posts and to identify differences in their experiences across platforms. Our findings revealed four main emergent themes for the ways in which social media was discussed: 1) Sharing negative experiences or outcomes of social media use (58%, n = 1,095), 2) Attempts to connect with others (45%, n = 922), 3) Highlighting the positive side of social media use (20%, n = 409), and 4) Seeking information (20%, n = 491). Overall, while sharing about negative experiences was more prominent, teens also discussed balanced perspectives of connection-seeking, positive experiences, and information support on social media that should not be discounted. Moreover, we found statistical significance for how these experiences differed across social media platforms. For instance, teens were most likely to seek romantic relationships on Snapchat and self-promote on YouTube. Meanwhile, Instagram was mentioned most frequently for body shaming, and Facebook was the most commonly discussed platform for privacy violations (mostly from parents). The key takeaway from our study is that the benefits and drawbacks of teens' social media usage can co-exist, and net effects (positive or negative) can vary for different teens across various contexts. As such, we advocate for mitigating the negative experiences and outcomes of social media use as voiced by teens, to improve, rather than limit or restrict, their overall social media experience. We do this by taking an affordance perspective that aims to promote the digital well-being and online safety of youth by design.
    Free, publicly-accessible full text available November 7, 2025
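The reported statistical significance of cross-platform differences suggests a test of independence on theme-by-platform counts. A minimal sketch of a Pearson chi-square statistic is below; the abstract does not state which test was used, and the contingency counts are fabricated for illustration, not the paper's data.

```python
# Hedged sketch: Pearson chi-square statistic for a contingency table of
# theme mentions by platform. Counts below are illustrative only.

def chi_square_statistic(table):
    """Chi-square statistic for a 2D contingency table
    (rows = platforms, columns = theme present / theme absent)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative counts of posts mentioning a theme on two platforms:
table = [[40, 60],   # e.g., platform A: theme present / absent
         [20, 80]]   # e.g., platform B
print(round(chi_square_statistic(table), 2))  # 9.52
```

A statistic this far above the 1-degree-of-freedom critical value (3.84 at p = 0.05) would indicate a significant platform difference for that theme.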
  4. Feasible and developmentally appropriate sociotechnical approaches for protecting youth from online risks have become a paramount concern among human-computer interaction research communities. Therefore, we conducted 38 interviews with entrepreneurs, IT professionals, clinicians, educators, and researchers who currently work in the space of youth online safety to understand the different sociotechnical approaches they proposed to keep youth safe online, while overcoming key challenges associated with these approaches. We identified three approaches taken among these stakeholders: 1) leveraging artificial intelligence (AI)/machine learning to detect risks, 2) building security/safety tools, and 3) developing new forms of parental control software. The trade-offs between privacy and protection, as well as other tensions among different stakeholders (e.g., tensions toward big-tech companies), arose as major challenges, followed by the subjective nature of risk, the lack of necessary but proprietary data, and the costs of developing these technical solutions. To overcome the challenges, solutions such as building centralized and multi-disciplinary collaborations, creating sustainable business plans, prioritizing human-centered approaches, and leveraging state-of-the-art AI were suggested. Our contribution to the literature is to provide evidence-based implications for the design of sociotechnical solutions to keep youth safe online.
  5. Online harassment negatively impacts mental health, with victims expressing heightened depression and anxiety, and even increased risk of suicide, especially among youth and young adults. Yet, research has mainly focused on building automated systems to detect harassment incidents based on publicly available social media trace data, overlooking the impact of these negative events on the victims, especially in private channels of communication. Looking to close this gap, we examine a large dataset of private message conversations from Instagram shared and annotated by youth aged 13-21. We apply trained classifiers from online mental health research to analyze the impact of online harassment on indicators pertinent to mental health expressions. Through a robust causal inference design involving a difference-in-differences analysis, we show that harassment results in greater expression of mental health concerns in victims up to 14 days following the incidents, while controlling for time, seasonality, and topic of conversation. Our study provides new benchmarks to quantify how victims perceive online harassment in the immediate aftermath of when it occurs. We make social justice-centered design recommendations to support harassment victims in private networked spaces. We caution that some of the paper's content could be triggering to readers.
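The difference-in-differences design mentioned above compares the change in an outcome for a treated group (harassment victims) against the change for a comparison group over the same period. A minimal sketch of the core estimator follows; the function name and the example rates are illustrative assumptions, not the study's actual model or data.

```python
# Hedged sketch of the difference-in-differences (DiD) core estimate:
# (treated post - treated pre) - (control post - control pre).
# All values below are fabricated for illustration only.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD effect: the treated group's change minus the control group's
    change, which nets out shared trends such as time and seasonality."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Illustrative rates of mental-health expressions before/after an incident:
effect = did_estimate(treat_pre=0.10, treat_post=0.25,
                      control_pre=0.12, control_post=0.14)
print(round(effect, 2))  # 0.13
```

In practice such a study would fit a regression with group, period, and interaction terms plus controls (as the abstract's mention of controlling for time, seasonality, and topic implies); the subtraction above is only the underlying intuition.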
  6. Recent increases in self-harm and suicide rates among youth have coincided with prevalent social media use, making these sensitive topics critically important to the HCI research community. We analyzed 1,224 direct message conversations (DMs) from 151 young Instagram users (ages 13-21), who engaged in private conversations using self-harm and suicide-related language. We found that youth discussed their personal experiences, including imminent thoughts of suicide and/or self-harm, as well as their past attempts and recovery. They gossiped about others, including complaining about triggering content and coercive threats of self-harm and suicide, but also tried to intervene when a friend was in danger. Most of the conversations involved suicide or self-harm language that did not indicate the intent to harm but instead used hyperbolic language or humor. Our results shed light on youth perceptions, norms, and experiences of self-harm and suicide to inform future efforts towards risk detection and prevention. Content Warning: This paper discusses the sensitive topics of self-harm and suicide. Reader discretion is advised.
  7. Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research towards advancements in machine learning (ML). While strides have been made regarding the computational facets of algorithms for "real-time" risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as "early" detection after-the-fact based on pre-defined chunks of data and evaluated based on standard performance metrics, such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight into feature selection, improving algorithms by accounting for human behavior, and incorporating human evaluations. This work serves as a critical call-to-action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.