This content will become publicly available on March 1, 2025

Title: Mental Health Stigma across Diverse Genders in Large Language Models
Mental health stigma manifests differently across genders: it is often more strongly associated with women and overlooked in men. Prior work in NLP has shown that gendered mental health stigmas are captured in large language models (LLMs). However, in the last year, LLMs have changed drastically: newer, generative models not only require different methods for measuring bias, but have also become widely popular in society, interacting with millions of users and raising the stakes of perpetuating gendered mental health stereotypes. In this paper, we examine gendered mental health stigma in GPT3.5-Turbo, the model that powers OpenAI’s popular ChatGPT. Building on prior work, we conduct both quantitative and qualitative analyses to measure GPT3.5-Turbo’s bias between binary genders, as well as to explore its behavior around non-binary genders, in conversations about mental health. We find that, though GPT3.5-Turbo refrains from explicitly assuming gender, it still exhibits implicit gender biases when asked to complete sentences about mental health, consistently preferring female names over male names. Additionally, though GPT3.5-Turbo shows awareness of the nuances of non-binary people’s experiences, it often over-fixates on non-binary gender identities in free-response prompts. Our preliminary results demonstrate that while modern generative LLMs contain safeguards against blatant gender biases and have become more inclusive of non-binary identities, they still implicitly encode gendered mental health stigma, and thus risk perpetuating harmful stereotypes in mental health contexts.
Award ID(s):
2142739
NSF-PAR ID:
10520213
Author(s) / Creator(s):
Publisher / Repository:
Machine Learning for Cognitive and Mental Health Workshop (ML4CMH), AAAI 2024
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mental health stigma prevents many individuals from receiving the appropriate care, and social psychology studies have shown that mental health tends to be overlooked in men. In this work, we investigate gendered mental health stigma in masked language models. In doing so, we operationalize mental health stigma by developing a framework grounded in psychology research: we use clinical psychology literature to curate prompts, then evaluate the models’ propensity to generate gendered words. We find that masked language models capture societal stigma about gender in mental health: models are consistently more likely to predict female subjects than male in sentences about having a mental health condition (32% vs. 19%), and this disparity is exacerbated for sentences that indicate treatment-seeking behavior. Furthermore, we find that different models capture dimensions of stigma differently for men and women, associating stereotypes like anger, blame, and pity more with women with mental health conditions than with men. In showing the complex nuances of models’ gendered mental health stigma, we demonstrate that context and overlapping dimensions of identity are important considerations when assessing computational models’ social biases. 
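The probing recipe described above — filling a masked subject slot in stigma-related prompt templates and comparing the probability mass the model assigns to female versus male words — can be sketched as follows. This is a minimal illustration, not the paper’s released code: the prompt templates and gendered word lists are small hypothetical samples, and the stubbed `fake_mask_fill` function stands in for a real masked language model.

```python
# Sketch of the masked-LM probing setup: for each prompt template with a
# masked subject slot, sum the model's probability over female vs. male
# word lists, then average those shares across prompts.

# Hypothetical samples; the paper curates its prompts from clinical
# psychology literature and uses fuller gendered word lists.
PROMPTS = [
    "[MASK] has been diagnosed with depression.",
    "[MASK] decided to see a therapist.",
]
FEMALE_WORDS = {"she", "woman", "mother", "girl"}
MALE_WORDS = {"he", "man", "father", "boy"}

def gendered_shares(mask_fill, prompts, female_words, male_words):
    """Average probability share of female vs. male fillers across prompts.

    `mask_fill(prompt)` must return a dict mapping candidate fill words
    to probabilities, as a fill-mask model head would.
    """
    f_total = m_total = 0.0
    for prompt in prompts:
        dist = mask_fill(prompt)
        f_total += sum(p for w, p in dist.items() if w.lower() in female_words)
        m_total += sum(p for w, p in dist.items() if w.lower() in male_words)
    n = len(prompts)
    return f_total / n, m_total / n

# Stub standing in for a real masked LM; returns a fixed distribution.
def fake_mask_fill(prompt):
    return {"she": 0.32, "he": 0.19, "someone": 0.30, "they": 0.19}

female_share, male_share = gendered_shares(
    fake_mask_fill, PROMPTS, FEMALE_WORDS, MALE_WORDS
)
print(f"female: {female_share:.2f}, male: {male_share:.2f}")
# -> female: 0.32, male: 0.19
```

With a real model, `mask_fill` could wrap a Hugging Face fill-mask pipeline, collecting each candidate token and its score; the comparison logic above stays the same.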
  2. We investigated how gender is represented in children’s books using a novel 200,000-word corpus comprising 247 popular, contemporary books for young children. Using human judgments and word co-occurrence data, we quantified gender biases of words in individual books and in the whole corpus. We find that children’s books contain many words that adults judge as gendered. Semantic analyses based on co-occurrence data yielded word clusters related to gender stereotypes (e.g., feminine: emotions; masculine: tools). Co-occurrence data also indicate that many books instantiate gender stereotypes identified in other research (e.g., girls are better at reading and boys at math). Finally, we used large-scale data to estimate the gender distribution of the audience for individual books, and find that children are more often exposed to gender stereotypes for their own gender. Together, the data suggest that children’s books may be an early source of gender associations and stereotypes.
  3. We contribute empirical and conceptual insights regarding the roles of digital labor platforms in online freelancing, focusing attention on social identities such as gender, race, ethnicity, and occupation. Findings highlight how digital labor platforms reinforce and exacerbate identity-based stereotypes, biases, and expectations in online freelance work. We focus on online freelancing as this form of working arrangement is becoming more prevalent. Online freelancing also relies on the market-making power of digital platforms to create an online labor market. Many see this as one likely future of work with less bias. Others worry that labor platforms’ market power allows them to embed known biases into new working arrangements: a platformization of inequality. Drawing on data from 108 online freelancers, we discuss six findings: 1) female freelance work is undervalued; 2) gendered occupational expectations; 3) gendered treatment; 4) shared expectations of differential values; 5) racial stereotypes and expectations; and 6) race and ethnicity as an asset. We discuss the role of design in the platformization and visibility of social identity dimensions, and the implications of the reinforced identity perceptions and marginalization in digital labor platforms.
  4. Abstract

    Informed by decades of literature, water interventions increasingly deploy “gender‐sensitive” or even “gender transformative” approaches that seek to redress the disproportionate harms women face from water insecurity. These efforts recognize the role of gendered social norms and unequal power relations but often focus narrowly on the differences and dynamics between cisgender (cis) men and women. This approach renders less visible the ways that living with water insecurity can differentially affect all individuals through the dynamics of gender, sexuality, and linked intersecting identities. Here, we first share a conceptual toolkit that explains gender as fluid, negotiated, and diverse beyond the cis‐binary. Using this as a starting point, we then review what is known and can be theorized from current literature, drawing on the limited observations available from water‐insecure communities to identify examples of contexts where gendered mechanisms (such as social norms) differentiate experiences of water insecurity, such as elevating risks of social stigma, physical harm, or psychological distress. We then apply this approach to consider expanded ways to include transgender, non‐binary, and gender and sexual diversity to deepen, nuance, and expand key thematics and approaches for water insecurity research. Reconceptualizing gender in these ways widens theoretical possibilities, changes how we collect data, and imagines new possibilities for effective and just water interventions.

    This article is categorized under:

    Human Water > Value of Water

    Engineering Water > Water, Health, and Sanitation

    Human Water > Water as Imagined and Represented

    Human Water > Methods

     