The global rise in mental disorders, particularly in workplaces, has necessitated innovative and scalable solutions for delivering therapy. Large Language Model (LLM)-based mental health chatbots have rapidly emerged as a promising tool for overcoming the time, cost, and accessibility constraints often associated with traditional mental health therapy. However, LLM-based mental health chatbots are still in their nascency, with significant opportunities to enhance their capabilities to operate within organizational contexts. To this end, this research examines the role and development of LLMs in mental health chatbots over the past half-decade. Through our review, we identified over 50 mental health-related chatbots, including 22 LLM-based models targeting general mental health, depression, anxiety, stress, and suicidal ideation. These chatbots are primarily used for emotional support and guidance but often lack capabilities specifically designed for workplace mental health, where such issues are increasingly prevalent. The review covers their development, applications, evaluation, ethical concerns, integration with traditional services, LLM-as-a-Service, and other business implications in organizational settings. We provide a research illustration of how LLM-based approaches could overcome the identified limitations and offer a system that could facilitate systematic evaluation of LLM-based mental health chatbots. We conclude with suggestions for future research tailored to workplace mental health needs.
Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support
LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely due to an absence of support in real life. Notably, while LLMs offer prompt support, they frequently fall short in grasping the nuances of LGBTQ+-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it is not a panacea. The deeper issue is entrenched in societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.
- Award ID(s): 2107391
- PAR ID: 10544100
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400703300
- Page Range / eLocation ID: 1 to 15
- Format(s): Medium: X
- Location: Honolulu HI USA
- Sponsoring Org: National Science Foundation
More Like this
-
Lesbian, gay, bisexual, transgender, and queer (LGBTQ) youth experience disproportionate mental health challenges due to minority stress. Little research, however, has considered how social support from intragenerational friends impacts the mental health of LGBTQ youth, particularly for LGBTQ youth of color. Based mainly on qualitative interviews from a longitudinal study with 83 LGBTQ youth from California and Texas, we develop the concept of intersectional social support—how multiply marginalized individuals subjectively interpret social support and how they view social support from similar multiply marginalized others. More specifically, the findings of this study capture how the intersecting identities of age, sexuality, gender, and race can shape the meanings and experiences of receiving familial support, emotional support, informational support, and instrumental support. This study is an important contribution to understanding how intersecting identities influence how people perceive social support practices and manage their mental health.
-
The research paper examines how engineering doctoral students describe their awareness and experiences with stress and mental health during their graduate studies. Despite the known bidirectional relationship between stress and mental health, there is limited research on how engineering doctoral students rationalize the disparity between the health consequences of chronic stress and the veneration of academic endurance in the face of these challenges. Given the dangers of chronic stress to physical and mental health, it is important to understand how students perceive the purpose and impact of stress and mental health within overlapping cultures of normalized stress. We conducted semi-structured interviews to understand participants' awareness, conceptualizations, and interpretations of stress and mental health. The research team analyzed interview transcripts using content analysis with inductive coding. Overall, we found that our participants recognized behavioral changes as an early sign of chronic stress while physical changes were a sign of sustained chronic stress; these cues signaled that participants needed additional support, including social support and campus mental health services. These findings support the need for greater mental health awareness and education within engineering doctoral programs to help students identify and manage chronic stress.
-
Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present a comprehensive evaluation of multiple LLMs on various mental health prediction tasks via online text data, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction fine-tuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize the important limitations before achieving deployability in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.
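The zero-shot versus few-shot distinction in the abstract above can be illustrated with a minimal prompt builder. This is a sketch only: the task framing, label set, and wording are illustrative assumptions, not the paper's actual prompts.

```python
def build_prompt(post: str, examples=None) -> str:
    """Build a zero-shot (no examples) or few-shot prompt for a
    hypothetical binary mental-health prediction task over online text."""
    instruction = (
        "Decide whether the author of the following post shows signs "
        "of depression. Answer 'yes' or 'no'.\n\n"
    )
    shots = ""
    if examples:  # few-shot: prepend labeled demonstrations before the query
        for text, label in examples:
            shots += f"Post: {text}\nAnswer: {label}\n\n"
    # The prompt ends at "Answer:" so the model completes with a label.
    return instruction + shots + f"Post: {post}\nAnswer:"
```

Instruction fine-tuning, by contrast, would train the model's weights on many such instruction-response pairs rather than supplying demonstrations at inference time.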
-
Social media continues to have an impact on the trajectory of humanity. However, its introduction has also weaponized keyboards, allowing the abusive language normally reserved for in-person bullying to jump onto the screen, i.e., cyberbullying. Cyberbullying poses a significant threat to adolescents globally, affecting the mental health and well-being of many. A group that is particularly at risk is the LGBTQ+ community, as researchers have uncovered a strong correlation between identifying as LGBTQ+ and suffering from greater online harassment. Therefore, it is critical to develop machine learning models that can accurately discern cyberbullying incidents as they happen to LGBTQ+ members. The aim of this study is to compare the efficacy of several transformer models in identifying cyberbullying targeting LGBTQ+ individuals. We seek to determine the relative merits and demerits of these existing methods in addressing complex and subtle kinds of cyberbullying by assessing their effectiveness with real social media data.
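A model comparison like the one described above reduces to scoring each classifier's predictions against the same gold labels. The sketch below is a minimal, dependency-free harness; the model names, label scheme, and data are illustrative assumptions, and a real study would score fine-tuned transformer checkpoints on annotated social media posts.

```python
def f1_score(gold, pred, positive="bully"):
    """F1 for the positive (cyberbullying) class."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def compare(model_preds, gold):
    """Rank models by F1 on the same gold labels, best first."""
    scores = {name: f1_score(gold, preds) for name, preds in model_preds.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

F1 is preferable to raw accuracy here because cyberbullying labels are typically imbalanced, so a classifier that never flags abuse can still score high on accuracy.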