
Title: Sexist Slurs: Reinforcing Feminine Stereotypes Online
Social media platforms are repeatedly accused of creating environments in which women are bullied and harassed. We argue that online aggression toward women aims to reinforce traditional feminine norms and stereotypes. In a mixed-methods study, we find that this type of aggression on Twitter is common and extensive and that it can spread far beyond the original target. We locate over 2.9 million tweets in one week that contain instances of gendered insults (e.g., “bitch,” “cunt,” “slut,” or “whore”)—averaging 419,000 sexist slurs per day. The vast majority of these tweets are negative in sentiment. We analyze the social networks of the conversations that ensue in several cases and demonstrate how the use of “replies,” “retweets,” and “likes” can further victimize a target. Additionally, we develop a sentiment classifier that we use in a regression analysis to compare the negativity of sexist messages. We find that words in a message that reinforce feminine stereotypes inflate the negative sentiment of tweets to a significant and sizeable degree. These terms include those insulting someone’s appearance (e.g., “ugly”), intellect (e.g., “stupid”), sexual experience (e.g., “promiscuous”), mental stability (e.g., “crazy”), and age (e.g., “old”). Messages enforcing beauty norms tend to be particularly negative. In sum, hostile, sexist tweets are strategic in nature. They aim to promote traditional cultural beliefs about femininity, such as beauty ideals, and they shame victims by accusing them of falling short of these standards.

Harassment on social media constitutes an everyday, routine occurrence, with researchers finding 9,764,583 messages referencing bullying on Twitter over the span of two years (Bellmore et al. 2015). In other words, Twitter users post over 13,000 bullying-related messages on a daily basis. Forms of online aggression also carry with them serious, negative consequences.
Research repeatedly documents that bullying victims suffer a host of deleterious outcomes, such as low self-esteem (Hinduja and Patchin 2010), emotional and psychological distress (Ybarra et al. 2006), and negative emotions (Faris and Felmlee 2014; Juvonen and Gross 2008). Compared to those who have not been attacked, victims also tend to report more incidents of suicidal ideation and attempted suicide (Hinduja and Patchin 2010). Several studies document that the targets of cyberbullying are disproportionately women (Backe et al. 2018; Felmlee and Faris 2016; Hinduja and Patchin 2010; Pew Research Center 2017), although there are exceptions depending on definitions and venues. Yet we know little about the content or pattern of cyber aggression directed toward women in online forums. The purpose of the present research, therefore, is to examine in detail the practice of aggressive messaging that targets women and femininity within the social media venue of Twitter. Using both qualitative and quantitative analyses, we investigate the role of gender norm regulation in these patterns of cyber aggression.
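The sentiment-scoring and stereotype-coding steps described in the abstract can be sketched in miniature. The word lists, categories, and scoring rule below are hypothetical stand-ins chosen for illustration; they are not the authors' actual lexicon, classifier, or regression model:

```python
# Minimal lexicon-based sentiment scorer and stereotype coder.
# All word lists here are tiny, invented examples.

NEGATIVE = {"ugly", "stupid", "crazy", "old", "hate", "worst"}
POSITIVE = {"love", "great", "queen", "best", "proud"}

# Stereotype categories named in the abstract (appearance, intellect,
# mental stability, age); the member terms are our own illustrations.
STEREOTYPE_TERMS = {
    "appearance": {"ugly", "fat"},
    "intellect": {"stupid", "dumb"},
    "mental_stability": {"crazy", "insane"},
    "age": {"old"},
}

def sentiment_score(text: str) -> int:
    """Crude polarity: positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def stereotype_flags(text: str) -> dict:
    """Binary indicator per stereotype category, of the kind that could
    serve as regressors when modeling a tweet's sentiment."""
    words = set(text.lower().split())
    return {cat: int(bool(words & terms)) for cat, terms in STEREOTYPE_TERMS.items()}
```

In the study's regression framing, indicators like those returned by `stereotype_flags` would predict a tweet's sentiment score, testing whether stereotype-reinforcing terms inflate negativity.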
Award ID(s):
1818497
NSF-PAR ID:
10147015
Journal Name:
Sex Roles
ISSN:
0360-0025
Sponsoring Org:
National Science Foundation
More Like this
  1. Online aggression represents a serious, and regularly occurring, social problem. In this piece the authors consider derogatory, harmful messages on the social media platform Twitter that target one of three groups of women: Asians, Blacks, and Latinx. The research focuses on messages that include one of the most common female slurs, “b!tch.” The findings of this chapter reveal that aggressive messages oriented toward women of color can be vicious and easily accessible (located in fewer than 30 seconds). Using an intersectional approach, the authors note the distinctive experiences of online harassment for women of color. The findings highlight the manner in which detrimental stereotypes are reinforced, including those of the “eroticized and obedient Asian woman,” the “angry Black woman,” and the “poor Latinx woman.” In some exceptions, women use the term “b!tch” in a positive and empowering manner, likely in an attempt to “reclaim” one of the words commonly used to attack women. Applying a social network perspective, the authors illustrate the tendency of typically hostile tweets to develop into interactive network conversations, where the original message spreads beyond the victim and, in the case of public individuals, quite widely. This research contributes to a deeper understanding of the processes that lead to online harassment, including the fortification of typical norms and social dominance. Finally, the authors find that messages that use the word “b!tch” to insult Asian, Black, and Latinx women are particularly damaging in that they reinforce traditional stereotypes of women and ethno-racial minorities, and these messages possess the ability to extend to wider audiences.
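The network spread described above can be illustrated with a toy reply/retweet cascade. The handles and edges below are invented for illustration and are not drawn from the authors' data:

```python
from collections import defaultdict, deque

# Toy cascade: each edge means the first account's message was
# retweeted or replied to by the second, carrying the original
# tweet to that account's audience. Handles are hypothetical.
edges = [
    ("attacker", "victim"),       # original hostile @-mention
    ("attacker", "follower_a"),   # retweet
    ("attacker", "follower_b"),   # retweet
    ("follower_a", "follower_c"), # second-hop retweet
]

def reach(origin: str, edge_list) -> set:
    """All accounts the origin's message can reach by following
    retweet/reply edges (breadth-first traversal)."""
    adj = defaultdict(list)
    for src, dst in edge_list:
        adj[src].append(dst)
    seen, queue = set(), deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

In this toy graph a single hostile message reaches the victim plus three further accounts through two hops of retweets, mirroring the spread pattern the chapter describes for public targets.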
  4. Introduction Social media has created opportunities for children to gather social support online (Blackwell et al., 2016; Gonzales, 2017; Jackson, Bailey, & Foucault Welles, 2018; Khasawneh, Rogers, Bertrand, Madathil, & Gramopadhye, 2019; Ponathil, Agnisarman, Khasawneh, Narasimha, & Madathil, 2017). However, social media also has the potential to expose children and adolescents to undesirable behaviors. Research shows that social media can be used to harass, discriminate (Fritz & Gonzales, 2018), dox (Wood, Rose, & Thompson, 2018), and socially disenfranchise children (Page, Wisniewski, Knijnenburg, & Namara, 2018). Other research proposes that social media use might be correlated with the significant increase in suicide rates and depressive symptoms among children and adolescents in the past ten years (Mitchell, Wells, Priebe, & Ybarra, 2014). Evidence-based research suggests that suicidal and unwanted behaviors can be promulgated through social contagion effects, which model, normalize, and reinforce self-harming behavior (Hilton, 2017). These harmful behaviors and social contagion effects may occur more frequently through repetitive exposure and modeling via social media, especially when such content goes “viral” (Hilton, 2017). One example of viral self-harming behavior that has generated significant media attention is the Blue Whale Challenge (BWC). The hearsay about this challenge is that individuals of all ages are persuaded to participate in self-harm and eventually kill themselves (Mukhra, Baryah, Krishan, & Kanchan, 2017). Research is needed specifically concerning BWC’s ethical concerns, the effects the game may have on teenagers, and potential governmental interventions. To address this gap in the literature, the current study uses qualitative and content analysis research techniques to illustrate the risk of self-harm and suicide contagion through the portrayal of BWC in YouTube videos and Twitter posts.
The purpose of this study is to analyze the portrayal of BWC on YouTube and Twitter in order to identify the themes presented in YouTube and Twitter posts that share and discuss BWC. In addition, we explore to what extent YouTube videos comply with the safe and effective suicide messaging guidelines proposed by the Suicide Prevention Resource Center (SPRC). Method Two social media websites were used to gather the data: 60 videos and 1,112 comments from YouTube and 150 posts from Twitter. The common themes of the YouTube videos, the comments on those videos, and the Twitter posts were identified using grounded, thematic content analysis (Padgett, 2001). Three codebooks were built, one for each type of data. The data for each site were analyzed, and the common themes were identified. A deductive coding analysis was conducted on the YouTube videos based on the nine SPRC safe and effective messaging guidelines (Suicide Prevention Resource Center, 2006). The analysis explored the number of videos that violated these guidelines and which guidelines were violated most often. The inter-rater reliabilities between the coders ranged from 0.61 to 0.81 based on Cohen’s kappa. The coders then conducted consensus coding. Results & Findings Three common themes were identified across all the posts from the two social media platforms included in this study. The first theme included posts in which social media users were trying to raise awareness and warn parents about this dangerous phenomenon in order to reduce the risk of any potential participation in BWC. This was the most common theme in the videos and posts. Additionally, the posts claimed that more than 100 people have played BWC worldwide and provided detailed descriptions of what each individual did while playing the game. These videos also described the tasks and the different names of the game.
Only a few videos provided recommendations to teenagers who might be playing or thinking of playing the game, and fewer still mentioned that the statistics provided were not confirmed by reliable sources. The second theme included posts from people who either criticized the teenagers who participated in BWC or made fun of them, for a couple of reasons: they agreed with the purported purpose of BWC of “cleaning the society of people with mental issues,” or they misunderstood why teenagers participate in these kinds of challenges, for example thinking they mainly participate due to peer pressure or to “show off.” The last theme we identified was that most of these users tended to speak in detail about someone who had already participated in BWC. These videos and posts provided information about participants’ demographics and included interviews with their parents or acquaintances, who provided further details about the participant’s personal life. The evaluation of the videos based on the SPRC safe messaging guidelines showed that 37% of the YouTube videos met fewer than 3 of the 9 safe messaging guidelines. Around 50% of them met only 4 to 6 of the guidelines, while the remaining 13% met 7 or more. Discussion This study is the first to systematically investigate the quality, portrayal, and reach of BWC on social media. Based on our findings from the emerging themes and the evaluation against the SPRC safe messaging guidelines, we suggest that these videos could contribute to the spread of these deadly challenges (or suicide in general, since the game might be a hoax) instead of raising awareness. Our suggestion parallels similar studies conducted on the portrayal of suicide in traditional media (Fekete & Macsai, 1990; Fekete & Schmidtke, 1995). Most posts on social media romanticized people who have died by following this challenge, and younger vulnerable teens may see the victims as role models, leading them to end their lives in the same way (Fekete & Schmidtke, 1995).
The videos presented statistics about the number of suicides believed to be related to this challenge in a way that made suicide seem common (Cialdini, 2003). In addition, the videos presented extensive personal information about the people who had died by suicide while playing the BWC. These videos also provided detailed descriptions of the final task, including pictures of self-harm, material that may encourage vulnerable teens to consider ending their lives and provide them with methods for doing so (Fekete & Macsai, 1990). On the other hand, these videos failed both to emphasize prevention by highlighting effective treatments for mental health problems and to encourage teenagers with mental health problems to seek help and provide information on where to find it. YouTube and Twitter are capable of influencing a large number of teenagers (Khasawneh, Ponathil, Firat Ozkan, & Chalil Madathil, 2018; Pater & Mynatt, 2017). We suggest that it is urgent to monitor social media posts related to BWC and similar self-harm challenges (e.g., the Momo Challenge). Additionally, the SPRC should properly educate social media users, particularly those with more influence (e.g., celebrities), on elements that boost negative contagion effects. While the veracity of these challenges is doubted by some, posting about the challenges in unsafe manners can contribute to contagion regardless of the challenges’ true nature.
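The inter-rater reliability statistic used in the coding step above, Cohen's kappa, can be computed directly from two coders' labels. The sketch below uses made-up labels for illustration, not the study's codebooks:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement),
    where chance agreement comes from the coders' label marginals."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - chance) / (1 - chance)
```

With five items and agreement on four of them, kappa lands well below the raw 0.80 agreement rate because some agreement is expected by chance; values in the 0.61 to 0.81 range reported above are conventionally read as substantial agreement.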
  5. The authors use the timing of a change in Twitter’s rules regarding abusive content to test the effectiveness of organizational policies aimed at stemming online harassment. Institutionalist theories of social control suggest that such interventions can be efficacious if they are perceived as legitimate, whereas theories of psychological reactance suggest that users may instead ratchet up aggressive behavior in response to the sanctioning authority. In a sample of 3.6 million tweets spanning one month before and one month after Twitter’s policy change, the authors find evidence of a modest positive shift in the average sentiment of tweets with slurs targeting women and/or African Americans. The authors further illustrate this trend by tracking the network spread of specific tweets and individual users. Retweeted messages are more negative than those not forwarded. These patterns suggest that organizational “anti-abuse” policies can play a role in stemming hateful speech on social media without inflaming further abuse.
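The before/after comparison in this design can be sketched as a simple difference in mean sentiment around the policy-change date. The dates, scores, and cutoff below are hypothetical illustrations, not the study's data or its full model:

```python
from datetime import date
from statistics import mean

# Toy records: (tweet date, sentiment score in [-1, 1]).
# All values are invented for illustration.
tweets = [
    (date(2017, 10, 20), -0.8),
    (date(2017, 10, 25), -0.6),
    (date(2017, 11, 5), -0.4),
    (date(2017, 11, 10), -0.3),
]

POLICY_CHANGE = date(2017, 11, 1)  # assumed cutoff for this sketch

def mean_shift(records, cutoff):
    """Difference in mean sentiment after vs. before the cutoff date;
    a positive value means tweets became less negative on average."""
    before = [s for d, s in records if d < cutoff]
    after = [s for d, s in records if d >= cutoff]
    return mean(after) - mean(before)
```

A positive shift, as in this toy sample, points in the direction the authors report: average sentiment of slur-containing tweets became modestly less negative after the rule change.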