LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely because such support was absent from their everyday lives. Notably, while LLMs offer immediate support, they frequently fail to grasp the nuances of LGBTQ+-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it is not a panacea: the deeper issue is entrenched societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.
This content will become publicly available on March 31, 2026
Improving Workplace Well-being in Modern Organizations: A Review of Large Language Model-based Mental Health Chatbots
The global rise in mental disorders, particularly in workplaces, has necessitated innovative and scalable solutions for delivering therapy. Large Language Model (LLM)-based mental health chatbots have rapidly emerged as a promising tool for overcoming the time, cost, and accessibility constraints often associated with traditional mental health therapy. However, LLM-based mental health chatbots are still nascent, with significant opportunities to enhance their capabilities to operate within organizational contexts. To this end, this research examines the role and development of LLMs in mental health chatbots over the past half-decade. Through our review, we identified over 50 mental health-related chatbots, including 22 LLM-based models targeting general mental health, depression, anxiety, stress, and suicidal ideation. These chatbots are primarily used for emotional support and guidance but often lack capabilities specifically designed for workplace mental health, where such issues are increasingly prevalent. The review covers their development, applications, evaluation, ethical concerns, integration with traditional services, LLM-as-a-Service, and various other business implications in organizational settings. We provide a research illustration of how LLM-based approaches could overcome the identified limitations, and we propose a system to facilitate systematic evaluation of LLM-based mental health chatbots. We conclude with suggestions for future research tailored to workplace mental health needs.
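The abstract describes the proposed evaluation system only at a high level. Purely as an illustrative sketch of what a systematic evaluation harness for a mental health chatbot could look like (every name, criterion, and scoring rule below is hypothetical, not drawn from the paper):

```python
# Hypothetical sketch of a rubric-based evaluation harness for a mental
# health chatbot. All names, criteria, and scoring rules are invented
# for illustration; they are not the system described in the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str        # e.g., "crisis_safety", "empathy", "workplace_fit"
    weight: float    # relative importance in the overall score
    score: Callable[[str, str], float]  # (prompt, response) -> 0..1

def crisis_safety(prompt: str, response: str) -> float:
    """Crude proxy: crisis-flavored prompts should trigger a referral to human help."""
    crisis = any(w in prompt.lower() for w in ("hopeless", "self-harm"))
    referred = "professional" in response.lower() or "hotline" in response.lower()
    return 1.0 if (not crisis or referred) else 0.0

def evaluate(chatbot: Callable[[str], str], prompts: list[str],
             criteria: list[Criterion]) -> float:
    """Weighted average of criterion scores across a fixed prompt set."""
    total_weight = sum(c.weight for c in criteria)
    per_prompt = []
    for p in prompts:
        r = chatbot(p)
        per_prompt.append(sum(c.weight * c.score(p, r) for c in criteria) / total_weight)
    return sum(per_prompt) / len(per_prompt)

if __name__ == "__main__":
    # Stub chatbot standing in for any real LLM API call.
    stub = lambda p: "I'm sorry you feel this way; a professional may be able to help."
    rubric = [Criterion("crisis_safety", 2.0, crisis_safety)]
    print(evaluate(stub, ["Work makes me feel hopeless."], rubric))
```

A real harness would swap the stub for actual model calls and add criteria scored by trained raters or validated classifiers rather than keyword checks.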
- PAR ID: 10611531
- Publisher / Repository: ACM Transactions on Management Information Systems
- Date Published:
- Journal Name: ACM Transactions on Management Information Systems
- Volume: 16
- Issue: 1
- ISSN: 2158-656X
- Page Range / eLocation ID: 1 to 26
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Social anxiety (SA) has become increasingly prevalent, and traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. As GenAI becomes more widely integrated, it is important to examine individuals' attitudes toward, and trust in, GenAI chatbots' support for SA. Through a mixed-methods approach involving surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to trust and embrace GenAI chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension, whereas those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as GenAI chatbots' ability to generate empathetic responses and their context-sensitive limitations, which were particularly important among individuals with SA. We also discuss the design and use of GenAI chatbots in fostering cognitive and emotional trust, along with practical considerations.
While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value-Sensitive Design, involving empirical, conceptual and technical investigations, to centre human values in the development and evaluation of LLM-based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research on ethical AI design, human values, human-AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well-being, human-chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value-sensitive inquiries of emergent AI technologies in educational settings.

Practitioner notes

What is already known about this topic:
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning but also raise ethical concerns such as transparency, trust and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.

What this paper adds:
- We apply VSD to design LLM-based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.

Implications for practice and/or policy:
- Identity development, well-being, human-AI relationships and environmental sustainability are key values for designing LLM-based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
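The paper describes its value-to-principle mapping at a high level. Purely as a hypothetical sketch of one way such a mapping could be operationalized (the guideline text and all names below are invented here, not taken from the paper, though the value names mirror the abstract):

```python
# Hypothetical sketch: encoding value-sensitive design principles as
# configuration that is compiled into an LLM chatbot's system prompt.
# The value names mirror the abstract; the guideline text is invented.
VALUE_PRINCIPLES = {
    "identity_development": "Invite students to connect ideas to their own experiences.",
    "well_being": "Avoid fatalistic framing of climate impacts; emphasize agency.",
    "human_chatbot_relationship": "Be transparent that you are an AI assistant.",
    "environmental_sustainability": "Ground claims in the local marine ecosystem context.",
}

def build_system_prompt(role: str, values: dict[str, str]) -> str:
    """Compile a role description and value guidelines into one system prompt."""
    guidelines = "\n".join(f"- {text}" for text in values.values())
    return f"You are {role}.\nFollow these design guidelines:\n{guidelines}"

print(build_system_prompt(
    "a marine scientist helping high school students refine causal models",
    VALUE_PRINCIPLES,
))
```

Keeping the values in explicit configuration, rather than buried in prompt text, also makes them auditable against the evaluation metrics the paper derives from stakeholder values.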
Cyber-Physical Systems (CPS) integrate computational elements with physical processes via sensors and actuators. While CPS are expected to exhibit human-level intelligence, traditional machine learning, trained on specific, isolated datasets, seems insufficient to meet that expectation. In recent years, Large Language Models (LLMs) such as GPT-4 have experienced explosive growth and shown significant improvements in reasoning and language comprehension, which has spurred LLM-enabled CPS. In this paper, we present a comprehensive review of studies on LLM-enabled CPS. First, we give an overview of LLM-enabled CPS and the roles LLMs play in CPS. Second, we categorize existing work by application domain and discuss key contributions. Third, we present commonly used metrics and benchmarks for evaluating LLM-enabled CPS. Finally, we discuss future research opportunities and the corresponding challenges of LLM-enabled CPS.
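The review does not prescribe a single architecture for LLM-enabled CPS; as a minimal, hypothetical sketch of one common pattern, an LLM in the planning stage of a sense-plan-act loop (`query_llm` below is a stub standing in for any real model API, and the sensor/actuator functions are simulated):

```python
# Hypothetical sketch: an LLM in the planning stage of a CPS
# sense-plan-act loop. `query_llm` is a stub standing in for a real
# model API; sensors and actuators are simulated for illustration.
import json

def query_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would call a model API here."""
    return json.dumps({"action": "reduce_speed", "target_value": 0.5})

def read_sensors() -> dict:
    return {"speed": 1.2, "obstacle_distance_m": 3.0}

def actuate(action: str, target_value: float) -> None:
    print(f"actuating: {action} -> {target_value}")

def control_step() -> None:
    state = read_sensors()
    prompt = (
        "Given the CPS state below, reply with JSON "
        '{"action": ..., "target_value": ...}.\n' + json.dumps(state)
    )
    plan = json.loads(query_llm(prompt))  # a real CPS must validate before acting
    actuate(plan["action"], plan["target_value"])

control_step()
```

In safety-critical deployments the LLM's plan would pass through a verified safety filter before actuation, which is one of the evaluation concerns such reviews typically raise.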
There is an expansive and growing body of literature that examines the mental health consequences of disasters and large-scale emergencies. There is a need, however, to incorporate mental health research more explicitly into disaster risk reduction practice. Training and education programs can serve as a bridge between academic mental health research and the work of disaster risk reduction practitioners. This article describes the development and evaluation of one such intervention, the CONVERGE Disaster Mental Health Training Module, which provides users from diverse academic and professional backgrounds with foundational knowledge of disaster mental health risk factors, mental health outcomes, and psychosocial well-being research. Moreover, the module helps bridge the gap between research and practice by describing methods used to study disaster mental health, showcasing examples of evidence-based programs and tools, and providing recommendations for future research. Since its initial release on 8 October 2019, 317 trainees from 12 countries have completed the Disaster Mental Health Training Module. All trainees completed pre- and post-training questionnaires about their disaster mental health knowledge, skills, and attitudes. Wilcoxon signed-rank tests demonstrated a significant increase in all three measures after completion of the training module. Students, emerging researchers or practitioners, and trainees with a high school/GED education level benefited most from the module, with Kruskal-Wallis results indicating significant differences in changes in knowledge and skills across the groups. This evaluation research highlights the effectiveness of the Disaster Mental Health Training Module in increasing trainees' knowledge, skills, and attitudes. The article concludes with a discussion of how this training can support workforce development and ultimately contribute to broader disaster risk reduction efforts.
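As a minimal sketch of the two statistical comparisons this abstract names (only the test choices mirror the abstract; all scores below are invented, and the real study used its own questionnaire data and group definitions):

```python
# Hypothetical sketch of the tests named in the abstract: a Wilcoxon
# signed-rank test on paired pre/post scores, and a Kruskal-Wallis test
# on score changes across trainee groups. All data here are invented.
from scipy.stats import wilcoxon, kruskal

pre  = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]   # self-rated knowledge before training
post = [4, 4, 3, 5, 4, 4, 4, 3, 3, 5]   # after training (same trainees, paired)

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")

# Knowledge gains (post - pre) for three hypothetical trainee groups.
students      = [2, 1, 2, 2, 1]
practitioners = [1, 1, 0, 1, 1]
researchers   = [0, 1, 1, 0, 1]

h, p = kruskal(students, practitioners, researchers)
print(f"Kruskal-Wallis: H={h:.3f}, p={p:.4f}")
```

Both are non-parametric tests, appropriate for the ordinal questionnaire scales such training evaluations typically collect.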