Search results for all records where Creators/Authors contains: "Wisniewski, Pamela J"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available June 17, 2025
  2. Free, publicly-accessible full text available June 17, 2025
  3. Online harassment negatively impacts mental health, with victims expressing increased concerns such as depression and anxiety, and even an increased risk of suicide, especially among youth and young adults. Yet, research has mainly focused on building automated systems to detect harassment incidents from publicly available social media trace data, overlooking the impact of these negative events on the victims, especially in private channels of communication. To close this gap, we examine a large dataset of private message conversations from Instagram, shared and annotated by youth aged 13-21. We apply classifiers trained for online mental health research to analyze how online harassment affects indicators of mental health expression. Through a robust causal inference design involving a difference-in-differences analysis, we show that harassment results in greater expression of mental health concerns in victims for up to 14 days following an incident, while controlling for time, seasonality, and topic of conversation. Our study provides new benchmarks that quantify how victims perceive online harassment in its immediate aftermath. We make social justice-centered design recommendations to support harassment victims in private networked spaces. We caution that some of the paper's content could be triggering to readers.

     
    Free, publicly-accessible full text available May 31, 2025
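    The difference-in-differences design described in the abstract above can be illustrated with a minimal sketch: compare the change in a mental-health-expression score for conversations with a harassment incident (treated) against matched conversations without one (control), before versus after the incident. All scores and variable names below are toy values for illustration, not data or code from the study:

    ```python
    # Minimal difference-in-differences (DiD) sketch. The estimate is the
    # post-minus-pre change in the treated group, minus the same change in
    # the control group, which nets out shared trends over time.

    def did_estimate(treated_pre, treated_post, control_pre, control_post):
        """DiD = (mean treated_post - mean treated_pre)
                 - (mean control_post - mean control_pre)."""
        mean = lambda xs: sum(xs) / len(xs)
        return (mean(treated_post) - mean(treated_pre)) - (
            mean(control_post) - mean(control_pre)
        )

    # Toy per-conversation scores (e.g., classifier-predicted expression levels).
    treated_pre = [0.20, 0.25, 0.22]
    treated_post = [0.40, 0.45, 0.41]   # rises after the incident
    control_pre = [0.21, 0.19, 0.20]
    control_post = [0.23, 0.22, 0.24]   # roughly flat over the same window

    effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
    print(round(effect, 3))  # → 0.167
    ```

    The subtraction of the control group's change is what lets the design control for time-varying confounds (such as seasonality) that affect both groups alike.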
  4. Free, publicly-accessible full text available May 11, 2025
  5. Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research toward advancements in machine learning (ML). While strides have been made on the computational facets of algorithms for "real-time" risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as "early" detection after the fact, based on pre-defined chunks of data, and evaluated with standard performance metrics such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight into feature selection, improving algorithms by accounting for human behavior, and utilizing human evaluations. This work serves as a critical call to action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.
    Free, publicly-accessible full text available May 11, 2025
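    The chunk-based "early" detection setup the review above describes — a detector sees a user's data one pre-defined chunk at a time, may raise an alert at any chunk, and is scored on how soon the alert came — can be sketched as follows. The detector and penalty function here are illustrative assumptions, not a metric taken from the reviewed papers:

    ```python
    # Sketch of chunk-based "early" risk detection: the detector consumes
    # chunks in order and may alert at any point; timeliness is scored by
    # how many chunks were consumed before the alert fired.

    def detect_over_chunks(chunks, is_risky):
        """Return the 1-based index of the first chunk that triggers an
        alert, or None if the detector never fires."""
        for i, chunk in enumerate(chunks, start=1):
            if is_risky(chunk):
                return i
        return None

    def delay_penalty(alert_chunk, total_chunks):
        """0.0 for an alert on the first chunk, 1.0 for never alerting."""
        if alert_chunk is None:
            return 1.0
        return (alert_chunk - 1) / total_chunks

    # Toy stream: a risky phrase appears in the third chunk. A real system
    # would use a trained classifier, not a keyword match.
    chunks = ["hi everyone", "had a rough day", "I want to hurt myself", "..."]
    is_risky = lambda text: "hurt myself" in text

    alert_at = detect_over_chunks(chunks, is_risky)
    print(alert_at, delay_penalty(alert_at, len(chunks)))  # → 3 0.5
    ```

    This makes the review's point concrete: the detection is "real-time" only in the sense that it stops early within an already-collected sequence, which is why the authors call for evaluation that also considers human behavior during and after exposure.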
  6. Free, publicly-accessible full text available May 11, 2025
  7. Recent increases in self-harm and suicide rates among youth have coincided with prevalent social media use, making these sensitive topics critically important to the HCI research community. We analyzed 1,224 direct message conversations (DMs) from 151 young Instagram users (ages 13-21) who engaged in private conversations using self-harm and suicide-related language. We found that youth discussed their personal experiences, including imminent thoughts of suicide and/or self-harm, as well as their past attempts and recovery. They gossiped about others, including complaining about triggering content and coercive threats of self-harm and suicide, but also tried to intervene when a friend was in danger. Most of the conversations involved suicide or self-harm language that did not indicate intent to harm but instead used hyperbolic language or humor. Our results shed light on youth perceptions, norms, and experiences of self-harm and suicide to inform future efforts toward risk detection and prevention. Content Warning: This paper discusses the sensitive topics of self-harm and suicide. Reader discretion is advised.
    Free, publicly-accessible full text available May 11, 2025
  8. Free, publicly-accessible full text available May 2, 2025
  9. Cybergrooming is a growing threat to adolescent safety and mental health. One way to combat cybergrooming is to leverage predictive artificial intelligence (AI) to detect predatory behaviors on social media. However, these methods can encounter challenges such as false positives and negative implications such as privacy concerns. A complementary strategy involves using generative AI to empower adolescents by educating them about predatory behaviors. To this end, we envision developing state-of-the-art conversational agents that simulate conversations between adolescents and predators for educational purposes. Yet, one key challenge is the lack of a dataset on which to train such conversational agents. In this position paper, we present our motivation for empowering adolescents to cope with cybergrooming. We propose to develop large-scale, authentic datasets through an online survey targeting adolescents and parents. We discuss the initial background behind our motivation and the proposed design of the survey, such as situating participants in artificial cybergrooming scenarios and then collecting their authentic responses. We also present several open questions related to our proposed approach and hope to discuss them with the workshop attendees.
    Free, publicly-accessible full text available May 1, 2025
  10. Research involving sensitive data often leads to valuable human-centered insights. Yet, the effects of participating in and conducting research about sensitive data with youth are poorly understood. We conducted meta-level research to improve our understanding of these effects. We did the following: (i) asked youth (aged 13-21) to share their private Instagram Direct Messages (DMs) and flag their unsafe DMs; (ii) interviewed 30 participants about the experience of reflecting on this sensitive data; (iii) interviewed research assistants (RAs, n=12) about their experience analyzing youth's data. We found that reflecting on DMs brought discomfort for participants and RAs, although both benefited from increased awareness of online risks, their own behavior, and privacy and social media practices. Participants had high expectations for the safeguarding of their private data, and their concerns were mitigated by the potential to improve online safety. We provide implications for ethical research practices and for developing reflective practices among participants and RAs by applying trauma-informed principles to HCI research.

     
    Free, publicly-accessible full text available April 17, 2025