Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present a comprehensive evaluation of multiple LLMs on various mental health prediction tasks using online text data, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction fine-tuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize important limitations that must be addressed before deployment in real-world mental health settings, such as known racial and gender biases. We highlight the important ethical risks accompanying this line of research.
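The zero-shot and few-shot prompt designs mentioned in the abstract can be illustrated with a minimal sketch. The task wording, label set, and example posts below are invented for illustration and are not taken from the paper's datasets or actual prompt templates.

```python
# Hypothetical sketch of zero-shot vs. few-shot prompt construction for a
# binary mental-health prediction task. All strings here are illustrative
# assumptions, not the paper's own prompts.

def zero_shot_prompt(post: str) -> str:
    """Zero-shot: a task instruction plus the post, with no examples."""
    return (
        "Decide whether the author of the following post shows signs "
        "of depression. Answer 'yes' or 'no'.\n"
        f"Post: {post}\nAnswer:"
    )

def few_shot_prompt(post: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: the same instruction preceded by labeled demonstrations."""
    header = (
        "Decide whether the author of each post shows signs of "
        "depression. Answer 'yes' or 'no'.\n"
    )
    shots = "".join(f"Post: {p}\nAnswer: {label}\n" for p, label in examples)
    return header + shots + f"Post: {post}\nAnswer:"
```

Instruction fine-tuning, by contrast, would update the model weights on many such (prompt, label) pairs rather than placing the examples in the prompt at inference time.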
Leveraging Large Language Models and RNNs for Accurate Ontology-Based Text Annotation
- Award ID(s):
- 2522386
- PAR ID:
- 10635832
- Publisher / Repository:
- SCITEPRESS - Science and Technology Publications
- Date Published:
- ISBN:
- 978-989-758-731-3
- Page Range / eLocation ID:
- 489 to 494
- Format(s):
- Medium: X
- Location:
- Porto, Portugal
- Sponsoring Org:
- National Science Foundation
We present a novel methodology for crafting effective public messages by combining large language models (LLMs) and conjoint analysis. Our approach personalizes messages for diverse personas (context-specific archetypes representing distinct attitudes and behaviors) while reducing the costs and time associated with traditional surveys. We tested this method in public health contexts (e.g., COVID-19 mandates) and civic engagement initiatives (e.g., voting). A total of 153 distinct messages were generated, each composed of components with varying levels, and evaluated across five personas tailored to each context. Conjoint analysis identified the most effective message components for each persona, validated through a study with 2,040 human participants. This research highlights LLMs' potential to enhance public communication, providing a scalable, cost-effective alternative to surveys, and offers new directions for HCI, particularly for the design of adaptive, user-centered, persona-driven interfaces and systems.
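The conjoint-analysis step described above can be sketched as estimating a part-worth utility for each level of each message component from persona ratings, then selecting the highest-utility level per component. The component names, levels, and ratings below are invented for illustration and do not come from the study's data.

```python
# Hedged sketch of conjoint-style part-worth estimation: each message is a
# combination of component levels with a rating; a level's part-worth is its
# mean rating minus the grand mean. Data here is purely illustrative.
from collections import defaultdict
from statistics import mean

# Each entry: (component -> level mapping, persona's rating of the message).
messages = [
    ({"tone": "urgent", "framing": "gain"}, 4.1),
    ({"tone": "urgent", "framing": "loss"}, 3.2),
    ({"tone": "calm",   "framing": "gain"}, 4.6),
    ({"tone": "calm",   "framing": "loss"}, 3.9),
]

def part_worths(data):
    """Mean rating per (component, level), centered on the grand mean."""
    grand = mean(r for _, r in data)
    ratings = defaultdict(list)
    for attrs, rating in data:
        for comp, level in attrs.items():
            ratings[(comp, level)].append(rating)
    return {key: mean(vals) - grand for key, vals in ratings.items()}

def best_levels(data):
    """Pick the highest-utility level of each component."""
    best = {}
    for (comp, level), u in part_worths(data).items():
        if comp not in best or u > best[comp][1]:
            best[comp] = (level, u)
    return {comp: level for comp, (level, _) in best.items()}
```

A full conjoint analysis would typically fit a regression over dummy-coded levels and check interaction effects; the centered-means version above is the simplest additive approximation.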