Title: Sexist Slurs: Reinforcing Feminine Stereotypes Online
Social media platforms are repeatedly accused of creating environments in which women are bullied and harassed. We argue that online aggression toward women aims to reinforce traditional feminine norms and stereotypes. In a mixed-methods study, we find that this type of aggression on Twitter is common and extensive and that it can spread far beyond the original target. We locate over 2.9 million tweets in one week that contain instances of gendered insults (e.g., “bitch,” “cunt,” “slut,” or “whore”), averaging 419,000 sexist slurs per day. The vast majority of these tweets are negative in sentiment. We analyze the social networks of the conversations that ensue in several cases and demonstrate how the use of “replies,” “retweets,” and “likes” can further victimize a target. Additionally, we develop a sentiment classifier that we use in a regression analysis to compare the negativity of sexist messages. We find that words in a message that reinforce feminine stereotypes inflate the negative sentiment of tweets to a significant and sizeable degree. These terms include those insulting someone’s appearance (e.g., “ugly”), intellect (e.g., “stupid”), sexual experience (e.g., “promiscuous”), mental stability (e.g., “crazy”), and age (e.g., “old”). Messages enforcing beauty norms tend to be particularly negative. In sum, hostile, sexist tweets are strategic in nature: they aim to promote traditional cultural beliefs about femininity, such as beauty ideals, and they shame victims by accusing them of falling short of these standards.

Harassment on social media is an everyday, routine occurrence, with researchers finding 9,764,583 messages referencing bullying on Twitter over the span of two years (Bellmore et al. 2015), or more than 13,000 bullying-related messages per day. Online aggression also carries serious negative consequences. Research repeatedly documents that bullying victims suffer a host of deleterious outcomes, such as low self-esteem (Hinduja and Patchin 2010), emotional and psychological distress (Ybarra et al. 2006), and negative emotions (Faris and Felmlee 2014; Juvonen and Gross 2008). Compared to those who have not been attacked, victims also tend to report more suicidal ideation and attempted suicide (Hinduja and Patchin 2010).

Several studies document that the targets of cyberbullying are disproportionately women (Backe et al. 2018; Felmlee and Faris 2016; Hinduja and Patchin 2010; Pew Research Center 2017), although there are exceptions depending on definitions and venues. Yet we know little about the content or pattern of cyber aggression directed toward women in online forums. The purpose of the present research, therefore, is to examine in detail the practice of aggressive messaging that targets women and femininity on Twitter. Using both qualitative and quantitative analyses, we investigate the role of gender norm regulation in these patterns of cyber aggression.
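A minimal sketch of the kind of regression described above: score each tweet's negativity with a sentiment classifier, flag whether the message contains terms from each stereotype category, and regress negativity on those indicators. This is not the authors' code; the term lists, column names, and scoring are placeholder assumptions.

```python
import re
import pandas as pd
import statsmodels.api as sm

# Hypothetical term lists for the stereotype categories named in the abstract.
CATEGORIES = {
    "appearance": ["ugly", "fat", "hideous"],
    "intellect": ["stupid", "dumb", "idiot"],
    "sexual": ["promiscuous", "slutty"],
    "mental": ["crazy", "psycho"],
    "age": ["old", "hag"],
}

def category_indicators(text):
    """Return a 0/1 flag per category for whether any of its terms appears."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return {cat: int(any(term in tokens for term in terms))
            for cat, terms in CATEGORIES.items()}

def fit_negativity_model(df):
    """OLS of a negativity score on stereotype-category indicators.

    Expects a DataFrame with a 'text' column and a numeric 'negativity'
    column (e.g., the output of a sentiment classifier).
    """
    X = sm.add_constant(pd.DataFrame([category_indicators(t) for t in df["text"]]))
    return sm.OLS(df["negativity"], X).fit()

# Usage (placeholder data): fit_negativity_model(tweets_df).summary()
```

Each fitted coefficient can then be read as the estimated shift in negativity associated with messages containing that category of stereotype term, holding the other categories constant.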
Award ID(s):
1818497
PAR ID:
10147015
Author(s) / Creator(s):
Date Published:
Journal Name:
Sex Roles
ISSN:
0360-0025
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Online aggression represents a serious, regularly occurring social problem. In this piece the authors examine derogatory, harmful messages on the social media platform Twitter that target one of three groups of women: Asian, Black, and Latinx women. The research focuses on messages that include one of the most common slurs directed at women, “b!tch.” The findings of this chapter reveal that aggressive messages aimed at women of color can be vicious and easily accessible (located in fewer than 30 seconds). Using an intersectional approach, the authors note the distinctive experiences of online harassment for women of color. The findings highlight the manner in which detrimental stereotypes are reinforced, including those of the “eroticized and obedient Asian woman,” the “angry Black woman,” and the “poor Latinx woman.” In a few exceptional cases, women use the term “b!tch” in a positive, empowering manner, likely in an attempt to “reclaim” one of the words commonly used to attack women. Applying a social network perspective, the authors illustrate the tendency of hostile tweets to develop into interactive network conversations in which the original message spreads beyond the victim and, in the case of public figures, quite widely. This research contributes to a deeper understanding of the processes that lead to online harassment, including the reinforcement of traditional norms and social dominance. Finally, the authors find that messages using the word “b!tch” to insult Asian, Black, and Latinx women are particularly damaging: they reinforce traditional stereotypes of women and ethnoracial minorities, and they can spread to wider audiences.
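A hedged sketch of the social network framing above, assuming networkx and a simple edge list of reply/retweet interactions; the field names and functions are illustrative, not drawn from the study.

```python
import networkx as nx

def build_conversation_graph(interactions):
    """Build a directed graph from (author, responder, kind) tuples: an edge
    author -> responder means the responder retweeted or replied to the
    author's message, so edges follow the direction of message flow."""
    G = nx.DiGraph()
    for author, responder, kind in interactions:
        G.add_edge(author, responder, kind=kind)
    return G

def spread_beyond_target(G, original_poster):
    """Count users downstream of the original poster, i.e., how far the
    message travels past the initial exchange."""
    return len(nx.descendants(G, original_poster))
```

In this framing each retweet or reply adds an edge, so the size of the reachable set captures how widely a hostile message spreads beyond the person first targeted.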
  2. The authors use the timing of a change in Twitter’s rules regarding abusive content to test the effectiveness of organizational policies aimed at stemming online harassment. Institutionalist theories of social control suggest that such interventions can be efficacious if they are perceived as legitimate, whereas theories of psychological reactance suggest that users may instead ratchet up aggressive behavior in response to the sanctioning authority. In a sample of 3.6 million tweets spanning one month before and one month after Twitter’s policy change, the authors find evidence of a modest positive shift in the average sentiment of tweets with slurs targeting women and/or African Americans. The authors further illustrate this trend by tracking the network spread of specific tweets and individual users. Retweeted messages are more negative than those not forwarded. These patterns suggest that organizational “anti-abuse” policies can play a role in stemming hateful speech on social media without inflaming further abuse.
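A minimal sketch, not the study's code, of the before/after comparison described above: split tweets at the policy-change date and compare mean sentiment in the two windows. The cutoff date and column names are assumptions.

```python
import pandas as pd
from scipy import stats

POLICY_DATE = pd.Timestamp("2017-11-18")  # placeholder cutoff, not the study's exact date

def before_after_sentiment(df):
    """Expects a datetime column 'created_at' and a numeric 'sentiment' column."""
    before = df.loc[df["created_at"] < POLICY_DATE, "sentiment"]
    after = df.loc[df["created_at"] >= POLICY_DATE, "sentiment"]
    t, p = stats.ttest_ind(after, before, equal_var=False)  # Welch's t-test
    return {"mean_before": before.mean(), "mean_after": after.mean(), "t": t, "p": p}
```

A positive difference between the after and before means would correspond to the modest upward shift in average sentiment reported above.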

     
  3. Sentiment analysis on large-scale social media data is important for bridging the gap between social media content and real-world activities, including political election prediction and the monitoring and analysis of individual and public emotional states. Although textual sentiment analysis has been well studied on platforms such as Twitter and Instagram, the role of extensive emoji use in sentiment analysis remains underexplored. In this paper, we propose a novel scheme for Twitter sentiment analysis with extra attention on emojis. We first learn bi-sense emoji embeddings from positive and negative sentiment tweets separately, and then train a sentiment classifier by attending to these bi-sense emoji embeddings with an attention-based long short-term memory network (LSTM). Our experiments show that the bi-sense embedding is effective for extracting sentiment-aware embeddings of emojis and outperforms state-of-the-art models. We also visualize the attention weights to show that the bi-sense emoji embedding provides better guidance for the attention mechanism, yielding a more robust understanding of semantics and sentiment.
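A rough PyTorch sketch of the bi-sense idea as described: each emoji gets a positive-sense and a negative-sense embedding, and a sentence-level LSTM state attends over the two senses before classification. Layer sizes, names, and the one-emoji-per-tweet simplification are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class BiSenseEmojiLSTM(nn.Module):
    def __init__(self, vocab_size, n_emojis, embed_dim=128, hidden_dim=256, n_classes=2):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Two sense vectors per emoji: index 2*e is the positive sense, 2*e+1 the negative.
        self.emoji_embed = nn.Embedding(2 * n_emojis, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim + embed_dim, 1)
        self.classifier = nn.Linear(hidden_dim + embed_dim, n_classes)

    def forward(self, token_ids, emoji_ids):
        # token_ids: (batch, seq_len); emoji_ids: (batch,), one emoji per tweet for simplicity.
        words = self.word_embed(token_ids)                 # (B, T, E)
        _, (h_n, _) = self.lstm(words)
        h = h_n.squeeze(0)                                 # final hidden state, (B, H)

        # Stack the positive- and negative-sense embeddings of the tweet's emoji.
        senses = torch.stack(
            [self.emoji_embed(2 * emoji_ids), self.emoji_embed(2 * emoji_ids + 1)], dim=1
        )                                                  # (B, 2, E)
        # Attention scores of the sentence state over the two senses.
        scores = self.attn(torch.cat([h.unsqueeze(1).expand(-1, 2, -1), senses], dim=-1))
        weights = torch.softmax(scores, dim=1)             # (B, 2, 1)
        emoji_context = (weights * senses).sum(dim=1)      # (B, E)

        return self.classifier(torch.cat([h, emoji_context], dim=-1))

# Toy forward pass with random ids, purely illustrative.
model = BiSenseEmojiLSTM(vocab_size=5000, n_emojis=64)
logits = model(torch.randint(1, 5000, (4, 20)), torch.randint(0, 64, (4,)))
print(logits.shape)  # torch.Size([4, 2])
```

The attention weights over the two senses are what one would visualize to see whether the model leans on the positive or the negative reading of an emoji in a given tweet.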