

Title: An Investigation of the Portrayal of Social Media Challenges on YouTube and Twitter
A social media phenomenon that has received limited research attention is the advent and propagation of viral online challenges. Several of these challenges entail self-harming behavior, which, combined with their viral nature, poses physical and psychological risks for both participants and viewers. The objective of this study is to identify the nature of what people post about social media challenges that vary in their level of risk. To do so, we conducted a qualitative analysis of three viral social media challenges, the Blue Whale, Tide Pod, and Ice Bucket challenges, based on 180 YouTube videos, 3,607 comments on those YouTube videos, and 450 Twitter posts. We identified common themes across the YouTube videos, comments, and Twitter posts: (1) promoting education and awareness, (2) criticizing the participants, (3) providing detailed information about the participants, (4) giving viewers a tutorial on how to participate, and (5) attempting to understand this seemingly senseless online behavior. We used social norm theory to discuss what leads people to post about the challenges and how posts intended to raise awareness about harmful challenges could potentially create a contagion effect by spreading knowledge about them, thereby increasing participation. Finally, we proposed design implications that could potentially minimize the risks and propagation of harmful social media challenges.
Award ID(s):
1832904
NSF-PAR ID:
10219334
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Social Computing
Volume:
4
Issue:
1
ISSN:
2469-7818
Page Range / eLocation ID:
1 to 23
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Introduction: Social media has created opportunities for children to gather social support online (Blackwell et al., 2016; Gonzales, 2017; Jackson, Bailey, & Foucault Welles, 2018; Khasawneh, Rogers, Bertrand, Madathil, & Gramopadhye, 2019; Ponathil, Agnisarman, Khasawneh, Narasimha, & Madathil, 2017). However, social media also has the potential to expose children and adolescents to undesirable behaviors. Research has shown that social media can be used to harass, discriminate (Fritz & Gonzales, 2018), dox (Wood, Rose, & Thompson, 2018), and socially disenfranchise children (Page, Wisniewski, Knijnenburg, & Namara, 2018). Other research proposes that social media use may be correlated with the significant increase in suicide rates and depressive symptoms among children and adolescents over the past ten years (Mitchell, Wells, Priebe, & Ybarra, 2014). Evidence-based research suggests that suicidal and unwanted behaviors can be promulgated through social contagion effects, which model, normalize, and reinforce self-harming behavior (Hilton, 2017). These harmful behaviors and social contagion effects may occur more frequently through repetitive exposure and modeling via social media, especially when such content goes "viral" (Hilton, 2017). One example of viral self-harming behavior that has generated significant media attention is the Blue Whale Challenge (BWC). The hearsay surrounding this challenge is that individuals of all ages are persuaded to participate in self-harm and eventually kill themselves (Mukhra, Baryah, Krishan, & Kanchan, 2017). Research is needed specifically concerning the ethical concerns around BWC, the effects the game may have on teenagers, and potential governmental interventions. To address this gap in the literature, the current study uses qualitative and content analysis research techniques to illustrate the risk of self-harm and suicide contagion through the portrayal of BWC in YouTube and Twitter posts. The purpose of this study is to analyze the portrayal of BWC on YouTube and Twitter in order to identify the themes presented in posts that share and discuss BWC. In addition, we explore to what extent YouTube videos comply with the safe and effective suicide messaging guidelines proposed by the Suicide Prevention Resource Center (SPRC).
Method: Two social media websites were used to gather the data: 60 videos and 1,112 comments from YouTube, and 150 posts from Twitter. The common themes of the YouTube videos, the comments on those videos, and the Twitter posts were identified using grounded, thematic content analysis (Padgett, 2001). Three codebooks were built, one for each type of data. The data for each site were analyzed, and the common themes were identified. A deductive coding analysis was conducted on the YouTube videos based on the nine SPRC safe and effective messaging guidelines (Suicide Prevention Resource Center, 2006). The analysis explored the number of videos that violated these guidelines and which guidelines were violated most often. Inter-rater reliability between the coders ranged from 0.61 to 0.81 based on Cohen's kappa (an illustrative sketch of this agreement computation follows this abstract). The coders then conducted consensus coding.
Results & Findings: Three common themes were identified across the posts from the three social media platforms included in this study. The first theme included posts in which social media users tried to raise awareness and warn parents about this dangerous phenomenon in order to reduce the risk of any potential participation in BWC. This was the most common theme in the videos and posts. Additionally, the posts claimed that more than 100 people have played BWC worldwide and provided detailed descriptions of what each individual did while playing the game. These videos also described the tasks and the different names of the game. Only a few videos provided recommendations to teenagers who might be playing or thinking of playing the game, and fewer still mentioned that the statistics provided were not confirmed by reliable sources. The second theme included posts in which people either criticized the teenagers who participated in BWC or made fun of them, for a couple of reasons: they agreed with the purported purpose of BWC of "cleaning the society of people with mental issues," or they misunderstood why teenagers participate in these kinds of challenges, for example assuming they do so mainly due to peer pressure or to "show off." The last theme we identified was that most of these users tended to speak in detail about someone who had already participated in BWC. These videos and posts provided information about the participants' demographics and included interviews with their parents or acquaintances, who provided further details about the participant's personal life. The evaluation of the videos against the SPRC safe messaging guidelines showed that 37% of the YouTube videos met fewer than 3 of the 9 guidelines, around 50% met only 4 to 6 of the guidelines, and the remaining 13% met 7 or more.
Discussion: This study is the first to systematically investigate the quality, portrayal, and reach of BWC on social media. Based on our findings from the emerging themes and the evaluation against the SPRC safe messaging guidelines, we suggest that these videos could contribute to the spread of these deadly challenges (or of suicide in general, since the game might be a hoax) instead of raising awareness. This suggestion parallels similar studies conducted on the portrayal of suicide in traditional media (Fekete & Macsai, 1990; Fekete & Schmidtke, 1995). Most posts on social media romanticized people who have died by following this challenge, and younger vulnerable teens may see the victims as role models, leading them to end their lives in the same way (Fekete & Schmidtke, 1995). The videos presented statistics about the number of suicides believed to be related to this challenge in a way that made suicide seem common (Cialdini, 2003). In addition, the videos presented extensive personal information about the people who died by suicide while playing BWC. They also provided detailed descriptions of the final task, including pictures of self-harm, material that may encourage vulnerable teens to consider ending their lives and provide them with methods for doing so (Fekete & Macsai, 1990). At the same time, these videos failed both to emphasize prevention by highlighting effective treatments for mental health problems and to encourage teenagers with mental health problems to seek help and provide information on where to find it. YouTube and Twitter are capable of influencing a large number of teenagers (Khasawneh, Ponathil, Firat Ozkan, & Chalil Madathil, 2018; Pater & Mynatt, 2017). We suggest that it is urgent to monitor social media posts related to BWC and similar self-harm challenges (e.g., the Momo Challenge). Additionally, the SPRC should properly educate social media users, particularly those with more influence (e.g., celebrities), on the elements that boost negative contagion effects. While the veracity of these challenges is doubted by some, posting about the challenges in unsafe ways can contribute to contagion regardless of the challenges' true nature.
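As a rough illustration of the inter-rater agreement statistic cited in the Method above, the sketch below computes Cohen's kappa for two hypothetical coders labeling the same set of items. The theme labels and coder assignments are invented for demonstration and are not the study's actual codebook or data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)

    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Expected (chance) agreement, given each coder's marginal label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )

    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten videos; values are illustrative only.
coder_a = ["awareness", "awareness", "criticism", "tutorial", "awareness",
           "criticism", "awareness", "tutorial", "criticism", "awareness"]
coder_b = ["awareness", "criticism", "criticism", "tutorial", "awareness",
           "criticism", "awareness", "awareness", "criticism", "awareness"]

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # ~0.66 for these toy labels
```

Values in the reported 0.61 to 0.81 range are conventionally read as substantial agreement, which is why the authors could proceed to consensus coding of the remaining disagreements.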
  2. Background: Research suggests that direct exposure to suicidal behavior and acts of self-harm through social media may increase suicidality through imitation and modeling, with adolescents representing a particularly vulnerable population. One example of viral self-harming behavior that could potentially be propagated through social media is the Blue Whale Challenge (BWC). Objective: We investigate how people portray BWC on social media and the potential harm this may pose to vulnerable populations. Methods: We first used a grounded approach to code 60 publicly posted YouTube videos, 1,112 comments on those videos, and 150 Twitter posts that explicitly referenced BWC. We then deductively coded the YouTube videos based on the Suicide Prevention Resource Center (SPRC) safe messaging guidelines. Results: Overall, 83.33% of the YouTube videos, 28.33% of the comments, and 68.67% of the Twitter posts tried to raise awareness and discourage participation in BWC. Yet about 37% of the videos violated six or more of the SPRC messaging guidelines. Conclusions: These posts might have the problematic effect of normalizing BWC through repeated exposure, modeling, and reinforcement of self-harming and suicidal behavior, especially among vulnerable adolescents. Greater efforts are needed to educate social media users and content generators on safe messaging guidelines and the factors that encourage versus discourage contagion effects.
  3. Background: The COVID-19 pandemic has caused several disruptions in personal and collective lives worldwide. The uncertainties surrounding the pandemic have also led to multifaceted mental health concerns, which can be exacerbated by precautionary measures such as social distancing and self-quarantining, as well as societal impacts such as economic downturn and job loss. Despite being described as a "mental health tsunami," the psychological effects of the COVID-19 crisis remain unexplored at scale. Consequently, public health stakeholders are currently limited in identifying ways to provide timely and tailored support during these circumstances. Objective: Our study aims to provide insights regarding people's psychosocial concerns during the COVID-19 pandemic by leveraging social media data. We aim to study the temporal and linguistic changes in symptomatic mental health and support expressions in the pandemic context. Methods: We obtained about 60 million Twitter streaming posts originating from the United States from March 24 to May 24, 2020, and compared these with about 40 million posts from a comparable period in 2019 to assess the effect of COVID-19 on people's social media self-disclosure. Using these data sets, we studied people's self-disclosure on social media in terms of symptomatic mental health concerns and expressions of support. We employed transfer learning classifiers that identified the social media language indicative of mental health outcomes (anxiety, depression, stress, and suicidal ideation) and support (emotional and informational support). We then examined the changes in psychosocial expressions over time and language, comparing the 2020 and 2019 data sets (a schematic sketch of this comparison follows this abstract). Results: We found that all of the examined psychosocial expressions significantly increased during the COVID-19 crisis: mental health symptomatic expressions increased by about 14%, and support expressions increased by about 5%, both thematically related to COVID-19. We also observed a steady decline and eventual plateauing in these expressions during the COVID-19 pandemic, which may have been due to habituation or to supportive policy measures enacted during this period. Our language analyses highlighted that people express concerns that are specific to and contextually related to the COVID-19 crisis. Conclusions: We studied the psychosocial effects of the COVID-19 crisis by using social media data from 2020, finding that people's mental health symptomatic and support expressions significantly increased during the COVID-19 period as compared to similar data from 2019. However, this effect gradually lessened over time, suggesting that people adapted to the circumstances and their "new normal." Our linguistic analyses revealed that people expressed mental health concerns regarding personal and professional challenges, health care and precautionary measures, and pandemic-related awareness. This study shows the potential of social media data to provide insights to mental health care stakeholders and policy makers in planning and implementing measures to mitigate mental health risks amid the health crisis.
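The study's classifiers and corpora are not reproduced here; the sketch below is only a schematic of the year-over-year comparison described in the Methods, assuming a generic binary classifier interface (the toy_anxiety_classifier function) and tiny invented post samples in place of the actual transfer learning models and the roughly 40M (2019) and 60M (2020) posts.

```python
from typing import Callable, Iterable

def expression_rate(posts: Iterable[str], classifier: Callable[[str], bool]) -> float:
    """Fraction of posts the classifier flags as containing a given expression."""
    posts = list(posts)
    return sum(classifier(p) for p in posts) / len(posts)

def percent_change(rate_2019: float, rate_2020: float) -> float:
    """Relative change in prevalence from the 2019 baseline to 2020."""
    return (rate_2020 - rate_2019) / rate_2019 * 100

# Illustrative stand-ins: a trivial keyword "classifier" and tiny post samples,
# invented for demonstration only.
def toy_anxiety_classifier(post: str) -> bool:
    return any(word in post.lower() for word in ("anxious", "worried", "panic"))

posts_2019 = ["great day at the park", "worried about my exam", "new recipe tonight"]
posts_2020 = ["so anxious about the news", "worried about my family", "baking bread again"]

r2019 = expression_rate(posts_2019, toy_anxiety_classifier)
r2020 = expression_rate(posts_2020, toy_anxiety_classifier)
print(f"2019 rate: {r2019:.2f}, 2020 rate: {r2020:.2f}, "
      f"change: {percent_change(r2019, r2020):+.1f}%")
```

The same rate-and-difference logic, applied per outcome (anxiety, depression, stress, suicidal ideation, support) and per time window, is what underlies figures such as the reported ~14% and ~5% increases.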
  4. Content moderation is a critical service performed by a variety of people on social media, protecting users from offensive or harmful content by reviewing and removing either the content or the perpetrator. These moderators fall into one of two categories: employees or volunteers. Prior research has suggested that there are differences in the effectiveness of these two types of moderators, with the more transparent user-based moderation being useful for educating users. However, direct comparisons between commercially moderated and user-moderated platforms are rare, and apart from the difference in transparency, we still know little about what other disparities in user experience these two moderator types may create. To explore this, we conducted cross-platform surveys of over 900 users of commercially moderated (Facebook, Instagram, Twitter, and YouTube) and user-moderated (Reddit and Twitch) social media platforms. Our results indicated that although user-moderated platforms did seem to be more transparent than commercially moderated ones, this did not lead to user-moderated platforms being perceived as less toxic. In addition, users of commercially moderated platforms want companies to take more responsibility for content moderation than they currently do, while users of user-moderated platforms want designated moderators and those who post on the site to take more responsibility. Across platforms, users seem to feel powerless and want to be taken care of when it comes to content moderation rather than engaging in it themselves.
  5. Online harassment is pervasive. While substantial research has examined the nature of online harassment and how to moderate it, little work has explored how social media users evaluate the profiles of online harassers. This is important for helping people who may be experiencing or observing harassment to quickly and efficiently evaluate the user doing the harassing. We conducted a lab experiment (N=45) that eye-tracked participants while they viewed mock Facebook, Twitter, and Instagram profiles of users who engaged in online harassment. We evaluated which profile elements participants looked at and for how long relative to a control group, as well as their qualitative attitudes toward harasser profiles. Results showed that participants looked at harassing users' post history more quickly than that of non-harassing users. They were also somewhat more likely to recall harassing profiles than non-harassing profiles. However, they did not spend more time on harassing profiles. Understanding what users pay attention to and recall may offer new design opportunities for supporting people who experience or observe harassment online.