

Title: "Strike at the Root": Co-designing Real-Time Social Media Interventions for Adolescent Online Risk Prevention

Adolescent online safety researchers have emphasized the importance of moving beyond restrictive and privacy-invasive approaches to online safety toward resilience-based approaches that empower teens to deal with online risks independently. Unfortunately, many existing online safety interventions focus on parental mediation and are not contextualized to teens' personal experiences online; thus, they do not effectively cater to the unique needs of teens. To better understand how we might design online safety interventions that help teens deal with online risks, as well as when and how to intervene, we must include teens as partners in the design process and equip them with the skills needed to contribute equally to it. As such, we conducted User Experience (UX) bootcamps with 21 teens (ages 13-17) to first teach them important UX design skills using industry-standard tools, so that they could create storyboards of unsafe online interactions commonly experienced by teens and high-fidelity, interactive prototypes for dealing with these situations. Based on their storyboards, teens often encountered information breaches and sexual risks from strangers, as well as cyberbullying from acquaintances or friends. While teens often blocked or reported strangers, they struggled to respond to risks from friends or acquaintances, seeking advice from others on the best action to take. Importantly, teens did not find any of the existing ways of responding to these risks effective in keeping them safe. When asked to create their own design-based interventions, teens frequently envisioned nudges that occurred in real time. Interestingly, teens more often designed for risk prevention (rather than risk coping) by nudging the risk perpetrator (rather than the victim) to rethink their actions, blocking harmful actions before they occurred, or penalizing perpetrators for inappropriate behavior to deter it in the future. Teens also designed personalized sensitivity filters that would give them the ability to manage the content they see online. Some teens also designed personalized nudges, so that teens could receive intelligent, guided advice from the platform on how to handle online risks themselves, without intervention from their parents. Our findings highlight how teens want to address online risks at the root by putting the onus of risk prevention on those who perpetrate these risks, rather than on the victims. Our work is the first to leverage co-design with teens to develop novel online safety interventions, and it advocates for a paradigm shift from youth risk protection to promoting good digital citizenship.
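To make the perpetrator-facing, real-time nudge concept concrete, the following is a minimal illustrative sketch of how such an intervention might gate an outgoing message. The function names, risk scores, thresholds, and nudge wording are hypothetical assumptions for illustration only, not the designs produced by the teen co-designers.

```python
# Illustrative sketch only: a simplified, hypothetical real-time nudge check applied
# to a message *before* it is delivered, aimed at the sender (the potential perpetrator).
from dataclasses import dataclass


@dataclass
class NudgeDecision:
    action: str   # "allow", "warn_sender", or "block"
    message: str  # text shown to the sender, if any


def assess_outgoing_message(text: str, risk_score: float) -> NudgeDecision:
    """Decide whether to nudge or block the sender before delivery.

    `risk_score` is assumed to come from some upstream risk classifier (0.0-1.0);
    the thresholds below are arbitrary placeholders.
    """
    if risk_score >= 0.9:
        # Block clearly harmful content and tell the sender why (prevention at the root).
        return NudgeDecision("block", "This message appears harmful and was not sent.")
    if risk_score >= 0.6:
        # Real-time "rethink" nudge aimed at the would-be perpetrator.
        return NudgeDecision("warn_sender",
                             "Are you sure you want to send this? It may hurt the recipient.")
    return NudgeDecision("allow", "")


# Example: a borderline message triggers a rethink prompt rather than silent delivery.
print(assess_outgoing_message("you're so stupid", risk_score=0.7))
```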

 
Award ID(s):
2333207
NSF-PAR ID:
10486656
Publisher / Repository:
ACM
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW1
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 32
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There is growing concern regarding adolescent online risks posed by social media. Prior work calls for a paradigm shift from restrictive approaches towards strength-based solutions to online safety that provide autonomy and control to teens. To better understand how we might design online safety interventions that help teens deal with online risks, we must include teens as partners in the design and evaluation of online safety solutions. To address this gap, my first dissertation study focused on co-designing online safety features with teens, which showed that teens often design real-time interventions that resemble "nudges". Therefore, my dissertation focuses on evaluating the effectiveness of these nudge designs in an ecologically valid social media simulation. To do this, I will conduct three studies: 1) a User Experience Bootcamp with teens to teach them design skills for co-designing online safety features, 2) a focus group study to design an ecologically valid social media simulation, and 3) a between-subjects experiment within a social media simulation to evaluate the effect of nudges in educating teens and helping them make safer choices when exposed to risk. My goal for this research is to understand, design, develop, and evaluate online safety nudges that can help promote self-regulated, autonomous, and safer interactions for teens online.
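As a concrete illustration of the between-subjects setup described above, the following minimal Python sketch randomly assigns each participant to exactly one condition. The condition names, participant IDs, and helper function are hypothetical and not taken from the dissertation plan itself.

```python
# Minimal sketch, assuming a two-condition between-subjects design (nudge vs. control)
# within a simulated social media feed. Each participant sees only one condition.
import random


def assign_conditions(participant_ids, seed=42):
    """Randomly assign each participant to a single condition (between-subjects)."""
    rng = random.Random(seed)            # fixed seed so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("nudge" if i < half else "control") for i, pid in enumerate(ids)}


# Example with hypothetical participant IDs.
print(assign_conditions(["P01", "P02", "P03", "P04", "P05", "P06"]))
```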
  2. Online sexual risks pose a serious and frequent threat to adolescents' online safety. While significant work has been done within the HCI community to understand teens' sexual experiences through public posts, we extend this research by qualitatively analyzing 156 private Instagram conversations flagged by 58 adolescents to understand the characteristics of the sexual risks they faced with strangers, acquaintances, and friends. We found that youth are often victimized by strangers through sexual solicitation/harassment as well as sexual spamming via text and visual media, which they often ignored. In contrast, adolescents played mixed roles with acquaintances: they were often victims of sexual harassment, but sometimes engaged in sexting or responded by rejecting sexual requests from acquaintances. Lastly, adolescents were never the recipients of sexual risks from their friends, as they mostly participated mutually in sexting or sexual spamming. Based on these results, we provide our insights and recommendations for future researchers. Trigger Warning: This paper contains explicit language and anonymized private sexual messages. Reader discretion is advised.
  3. We conducted User Experience (UX) Bootcamps with teens (ages 13-17) to teach them important UX design skills and industry-standard tools for co-designing effective online safety interventions, or "nudges". In the process, we asked teens to storyboard their risky or uncomfortable experiences and to design high-fidelity prototypes of online safety interventions that would help mitigate these negative experiences.
  4. Introduction
Social media has created opportunities for children to gather social support online (Blackwell et al., 2016; Gonzales, 2017; Jackson, Bailey, & Foucault Welles, 2018; Khasawneh, Rogers, Bertrand, Madathil, & Gramopadhye, 2019; Ponathil, Agnisarman, Khasawneh, Narasimha, & Madathil, 2017). However, social media also has the potential to expose children and adolescents to undesirable behaviors. Research has shown that social media can be used to harass, discriminate (Fritz & Gonzales, 2018), dox (Wood, Rose, & Thompson, 2018), and socially disenfranchise children (Page, Wisniewski, Knijnenburg, & Namara, 2018). Other research proposes that social media use might be correlated with the significant increase in suicide rates and depressive symptoms among children and adolescents over the past ten years (Mitchell, Wells, Priebe, & Ybarra, 2014). Evidence-based research suggests that suicidal and unwanted behaviors can be promulgated through social contagion effects, which model, normalize, and reinforce self-harming behavior (Hilton, 2017). These harmful behaviors and social contagion effects may occur more frequently through repetitive exposure and modeling via social media, especially when such content goes "viral" (Hilton, 2017). One example of viral self-harming behavior that has generated significant media attention is the Blue Whale Challenge (BWC). The hearsay about this challenge is that individuals of all ages are persuaded to participate in self-harm and eventually kill themselves (Mukhra, Baryah, Krishan, & Kanchan, 2017). Research is needed specifically concerning the ethical concerns around BWC, the effects the game may have on teenagers, and potential governmental interventions. To address this gap in the literature, the current study uses qualitative and content analysis techniques to illustrate the risk of self-harm and suicide contagion through the portrayal of BWC on YouTube and Twitter. The purpose of this study is to analyze the portrayal of BWC on YouTube and Twitter in order to identify the themes presented in posts that share and discuss BWC. In addition, we explore to what extent YouTube videos comply with the safe and effective suicide messaging guidelines proposed by the Suicide Prevention Resource Center (SPRC).
Method
Two social media websites were used to gather the data: 60 videos and 1,112 comments from YouTube and 150 posts from Twitter. The common themes of the YouTube videos, the comments on those videos, and the Twitter posts were identified using grounded, thematic content analysis (Padgett, 2001). Three codebooks were built, one for each type of data. The data for each site were analyzed, and the common themes were identified. A deductive coding analysis was conducted on the YouTube videos based on the nine SPRC safe and effective messaging guidelines (Suicide Prevention Resource Center, 2006). The analysis explored the number of videos that violated these guidelines and which guidelines were violated most often. The inter-rater reliabilities between the coders ranged from 0.61 to 0.81 based on Cohen's kappa (see the sketch after this abstract). The coders then conducted consensus coding.
Results & Findings
Three common themes were identified across the YouTube videos, the video comments, and the Twitter posts. The first theme included posts where social media users were trying to raise awareness and warn parents about this dangerous phenomenon in order to reduce the risk of any potential participation in BWC.
This was the most common theme in the videos and posts. Additionally, the posts claimed that more than 100 people worldwide have played BWC and provided detailed descriptions of what each individual did while playing the game. These videos also described the tasks and the different names of the game. Only a few videos provided recommendations to teenagers who might be playing or thinking of playing the game, and even fewer mentioned that the statistics provided were not confirmed by reliable sources. The second theme included posts from people who either criticized the teenagers who participated in BWC or made fun of them, for one of two reasons: they agreed with the purported purpose of BWC of "cleaning the society of people with mental issues," or they misunderstood why teenagers participate in these kinds of challenges, for example assuming they do so mainly due to peer pressure or to "show off". The last theme we identified was that most of these users tended to speak in detail about someone who had already participated in BWC. These videos and posts provided information about the participants' demographics and interviews with their parents or acquaintances, who provided further details about the participant's personal life. The evaluation of the videos against the SPRC safe messaging guidelines showed that 37% of the YouTube videos met fewer than 3 of the 9 guidelines. Around 50% of them met only 4 to 6 of the guidelines, while the remaining 13% met 7 or more.
Discussion
This study is the first to systematically investigate the quality, portrayal, and reach of BWC on social media. Based on our findings from the emerging themes and the evaluation against the SPRC safe messaging guidelines, we suggest that these videos could contribute to the spread of these deadly challenges (or suicide in general, since the game might be a hoax) instead of raising awareness. This suggestion parallels similar studies on the portrayal of suicide in traditional media (Fekete & Macsai, 1990; Fekete & Schmidtke, 1995). Most posts on social media romanticized people who have died by following this challenge, and younger, vulnerable teens may see the victims as role models, leading them to end their lives in the same way (Fekete & Schmidtke, 1995). The videos presented statistics about the number of suicides believed to be related to this challenge in a way that made suicide seem common (Cialdini, 2003). In addition, the videos presented extensive personal information about the people who had died by suicide while playing BWC. These videos also provided detailed descriptions of the final task, including pictures of self-harm, material that may encourage vulnerable teens to consider ending their lives and provide them with methods for doing so (Fekete & Macsai, 1990). On the other hand, these videos failed both to emphasize prevention by highlighting effective treatments for mental health problems and to encourage teenagers with mental health problems to seek help or provide information on where to find it. YouTube and Twitter are capable of influencing a large number of teenagers (Khasawneh, Ponathil, Firat Ozkan, & Chalil Madathil, 2018; Pater & Mynatt, 2017). We suggest that it is urgent to monitor social media posts related to BWC and similar self-harm challenges (e.g., the Momo Challenge). Additionally, the SPRC should properly educate social media users, particularly those with more influence (e.g., celebrities), about elements that boost negative contagion effects. While some doubt the veracity of these challenges, posting about them in unsafe ways can contribute to contagion regardless of the challenges' true nature.
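As an illustration of the inter-rater reliability check mentioned above, the following minimal Python sketch computes Cohen's kappa for two coders' labels on a single guideline. The label arrays are invented placeholders; the study reports kappas of 0.61 to 0.81 across its codes.

```python
# Minimal sketch of an inter-rater reliability check between two coders, assuming
# binary labels for whether each video violates a given SPRC guideline.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # hypothetical labels from coder A
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # hypothetical labels from coder B

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # disagreements would then go to consensus coding
```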
  5. Sexual exploration is a natural part of adolescent development; yet, unmediated internet access has enabled teens to engage in a wider variety of potentially riskier sexual interactions than previous generations, ranging from normatively appropriate sexual interactions to sexually abusive situations. Teens have turned to online peer support platforms to disclose and seek support about these experiences. Therefore, we analyzed posts (N=45,955) made by adolescents (ages 13-17) on an online peer support platform to deeply examine their online sexual risk experiences. By applying a mixed-methods approach, we 1) accurately (average AUC = 0.90) identified posts that contained teen disclosures about online sexual risk experiences and classified the posts based on the level of consent (i.e., consensual, non-consensual, sexual abuse) and relationship type (i.e., stranger, dating/friend, family) between the teen and the person with whom they shared the sexual experience, 2) detected statistically significant differences in the proportions of posts based on these dimensions, and 3) further unpacked the nuance in how these online sexual risk experiences were typically characterized in the posts. Teens were significantly more likely to engage in consensual sexting with friends/dating partners; unwanted solicitations were more likely to come from strangers, and sexual abuse was more likely when a family member was involved. We contribute to the HCI and CSCW literature around youth online sexual risk experiences by moving beyond the false dichotomy of "safe" versus "risky". Our work provides a deeper understanding of technology-mediated adolescent sexual behaviors from the perspectives of sexual well-being, risk detection, and the prevention of online sexual violence toward youth.
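To illustrate the kind of pipeline that could identify risk-disclosure posts and report an AUC, here is a minimal, self-contained Python sketch using TF-IDF features and logistic regression. The toy posts, labels, features, and model are assumptions for illustration only and do not reflect the paper's actual data or classifiers.

```python
# Illustrative sketch: flag posts that disclose online sexual risk experiences and
# evaluate with AUC, on a tiny invented dataset (placeholder texts and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "a stranger online keeps asking me for photos",
    "someone i barely know keeps sending me explicit messages",
    "an older guy online asked me to meet up alone",
    "he threatened to share my pictures if i stopped replying",
    "my ex keeps pressuring me to send things i am not ok with",
    "a random account messaged me something really inappropriate",
    "had a great day at school with my friends",
    "looking for advice on studying for finals",
    "my favorite band just dropped a new album",
    "anyone else excited for the game this weekend",
    "i finally finished my art project",
    "what shows are you all watching lately",
]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = contains a risk disclosure

# Hold out a small stratified test set so both classes appear in the evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=4, stratify=labels, random_state=0
)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]     # probability of the "disclosure" class
print("AUC:", roc_auc_score(y_test, scores))  # the paper reports an average AUC of 0.90
```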