Search for: All records

Creators/Authors contains: "Reinecke, Katharina"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Mental health stigma prevents many individuals from receiving appropriate care, and social psychology studies have shown that mental health tends to be overlooked in men. In this work, we investigate gendered mental health stigma in masked language models. In doing so, we operationalize mental health stigma by developing a framework grounded in psychology research: we use clinical psychology literature to curate prompts, then evaluate the models’ propensity to generate gendered words. We find that masked language models capture societal stigma about gender in mental health: models are consistently more likely to predict female subjects than male subjects in sentences about having a mental health condition (32% vs. 19%), and this disparity is exacerbated for sentences that indicate treatment-seeking behavior. Furthermore, we find that different models capture dimensions of stigma differently for men and women, associating stereotypes like anger, blame, and pity more with women with mental health conditions than with men. In showing the complex nuances of models’ gendered mental health stigma, we demonstrate that context and overlapping dimensions of identity are important considerations when assessing computational models’ social biases.
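The evaluation the abstract describes can be sketched as aggregating a masked language model's probability mass over gendered subject words across a set of curated prompts. The sketch below is a minimal illustration, not the paper's actual framework: the prompt distributions are toy numbers standing in for real fill-mask outputs, and the gendered word lists are assumptions for the example.

```python
# Illustrative word lists (assumed for this sketch, not the paper's lexicon).
GENDERED = {
    "female": {"she", "her", "woman"},
    "male": {"he", "him", "man"},
}

def gender_disparity(predictions):
    """Average the probability mass a masked LM assigns to female vs. male
    subject words, over a list of per-prompt {token: probability} dicts
    (e.g. for prompts like '[MASK] has depression.')."""
    totals = {"female": 0.0, "male": 0.0}
    for dist in predictions:
        for token, p in dist.items():
            for gender, words in GENDERED.items():
                if token.lower() in words:
                    totals[gender] += p
    n = len(predictions)
    return {g: s / n for g, s in totals.items()}

# Toy distributions standing in for real masked-LM fill-in outputs:
preds = [
    {"she": 0.32, "he": 0.19, "someone": 0.10},
    {"she": 0.30, "he": 0.21, "they": 0.15},
]
print(gender_disparity(preds))  # female mass exceeds male mass
```

In practice the per-prompt distributions would come from a fill-mask model rather than hand-written dicts; the aggregation step is the same.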
  2. Lim, Jennifer NW (Ed.)
    In the ongoing COVID-19 pandemic, public health experts have produced guidelines to limit the spread of the coronavirus, but individuals do not always comply with experts’ recommendations. Here, we tested whether a specific psychological belief—identification with all humanity—predicts cooperation with public health guidelines as well as helpful behavior during the COVID-19 pandemic. We hypothesized that people’s endorsement of this belief—their relative perception of a connection and moral commitment to other humans—would predict their tendencies to adopt World Health Organization (WHO) guidelines and to help others. To assess this, we conducted a global online study (N = 2537 participants) of four WHO-recommended health behaviors and four pandemic-related moral dilemmas that we constructed to be relevant to helping others at a potential cost to oneself. We used generalized linear mixed models (GLMM) that included 10 predictor variables (demographic, contextual, and psychological) for each of five outcome measures (a WHO cooperative health behavior score, plus responses to each of our four moral, helping dilemmas). Identification with all humanity was the most consistent and consequential predictor of individuals’ cooperative health behavior and helpful responding. Analyses showed that identification with all humanity significantly predicted each of the five outcomes while controlling for the other variables (P range < 10⁻²² to < 0.009). The mean effect size of the identification with all humanity predictor on these outcomes was more than twice as large as the effect sizes of other predictors. Identification with all humanity is a psychological construct that, through targeted interventions, may help scientists and policymakers to better understand and promote cooperative health behavior and help-oriented concern for others during the current pandemic as well as in future humanitarian crises.
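The effect-size comparison the abstract reports (one predictor's mean effect more than twice as large as the others') can be sketched as a ranking over standardized coefficients. The numbers below are hypothetical placeholders, not the study's estimates, and the predictor names are assumptions for the example.

```python
# Hypothetical standardized coefficients for one outcome (illustrative
# numbers only; the real study fit GLMMs with 10 predictors per outcome).
betas = {
    "identification_with_all_humanity": 0.48,
    "age": 0.15,
    "gender": 0.08,
    "political_orientation": 0.12,
    "country_infection_rate": 0.10,
}

def dominant_predictor(betas):
    """Return the predictor with the largest absolute effect size and the
    ratio of that effect to the mean of the remaining effect sizes."""
    ranked = sorted(betas.items(), key=lambda kv: abs(kv[1]), reverse=True)
    (top_name, top_beta), rest = ranked[0], ranked[1:]
    mean_rest = sum(abs(b) for _, b in rest) / len(rest)
    return top_name, abs(top_beta) / mean_rest

name, ratio = dominant_predictor(betas)
print(name, round(ratio, 2))  # with these toy betas, the ratio exceeds 2
```

The study's actual claim is stronger: the dominance held across all five outcomes while controlling for the other nine predictors in the fitted GLMMs.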
  3. Though statistical analyses are centered on research questions and hypotheses, current statistical analysis tools are not. Users must first translate their hypotheses into specific statistical tests and then perform API calls with functions and parameters. To do so accurately requires that users have statistical expertise. To lower this barrier to valid, replicable statistical analysis, we introduce Tea, a high-level declarative language and runtime system. In Tea, users express their study design, any parametric assumptions, and their hypotheses. Tea compiles these high-level specifications into a constraint satisfaction problem that determines the set of valid statistical tests and then executes them to test the hypothesis. We evaluate Tea using a suite of statistical analyses drawn from popular tutorials. We show that Tea generally matches the choices of experts while automatically switching to non-parametric tests when parametric assumptions are not met. We simulate the effect of mistakes made by non-expert users and show that Tea automatically avoids both false negatives and false positives that could be produced by the application of incorrect statistical tests. 
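The core idea in Tea—compiling a declared study design and assumptions into a set of valid statistical tests—can be illustrated with a toy rule-based selector. This is a sketch of the technique, not Tea's actual API or constraint solver; the variable-type labels and test names are assumptions for the example.

```python
# Illustrative sketch of constraint-based test selection: given an outcome's
# measurement type, the number of groups compared, and the user's declared
# parametric assumptions, return the statistical tests that remain valid.
def select_tests(outcome_type, groups, assumptions):
    tests = []
    if outcome_type == "ratio" and groups == 2:
        if "normality" in assumptions and "equal_variance" in assumptions:
            tests.append("students_t")       # all parametric assumptions hold
        elif "normality" in assumptions:
            tests.append("welch_t")          # unequal variances allowed
        else:
            tests.append("mann_whitney_u")   # non-parametric fallback
    elif outcome_type == "ordinal" and groups == 2:
        tests.append("mann_whitney_u")       # ordinal data rules out t-tests
    return tests

print(select_tests("ratio", 2, {"normality"}))  # ['welch_t']
print(select_tests("ratio", 2, set()))          # ['mann_whitney_u']
```

Tea generalizes this pattern by encoding such rules as a constraint satisfaction problem over the full space of test preconditions, which is what lets it switch to non-parametric tests automatically when parametric assumptions fail.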