For this year’s career issue, LCGC North America teamed up with the American Chemical Society Subdivision on Chromatography and Separations Chemistry to ask the analytical chemistry community what skills new employees in the field need to succeed. In this report, we analyze the survey results and explore how they can inform the future of analytical chemistry curriculum development.
How Do I Design a Chemical Reaction To Do Useful Work? Reinvigorating General Chemistry by Connecting Chemistry and Society
- Award ID(s): 1919282
- PAR ID: 10469530
- Publisher / Repository: ACS Publications
- Date Published:
- Journal Name: Journal of Chemical Education
- Volume: 97
- Issue: 4
- ISSN: 0021-9584
- Page Range / eLocation ID: 925 to 933
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in areas such as science, finance, and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate the capabilities of LLMs across a wide range of tasks in the chemistry domain. We identify three key chemistry-related capabilities to explore in LLMs, namely understanding, reasoning, and explaining, and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets, enabling a broad exploration of the capacities of LLMs in the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama, and Galactica) are evaluated on each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed the other models and that the LLMs exhibit different levels of competitiveness across the eight chemistry tasks. Beyond the key findings from the comprehensive benchmark analysis, our work provides insights into the limitations of current LLMs and the impact of in-context learning settings on their performance across chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench.
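The abstract above describes evaluating models in zero-shot and few-shot in-context learning settings. As a minimal sketch of what that setup typically involves (not the ChemLLMBench code itself), the snippet below assembles a zero-shot and a few-shot prompt for a chemistry question; `query_llm` is a hypothetical placeholder for whichever model client would actually be called.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction for a
# chemistry task. This is illustrative only and does not reproduce the
# ChemLLMBench implementation or its prompts.

def build_prompt(question: str, demonstrations=None) -> str:
    """Assemble an in-context learning prompt.

    With no demonstrations this is a zero-shot prompt; with a list of
    (input, answer) pairs it becomes a few-shot prompt.
    """
    header = "You are an expert chemist. Answer the question concisely.\n\n"
    shots = ""
    for example_q, example_a in (demonstrations or []):
        shots += f"Question: {example_q}\nAnswer: {example_a}\n\n"
    return header + shots + f"Question: {question}\nAnswer:"


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to GPT-4, Llama, etc."""
    raise NotImplementedError("Plug in the model client of your choice.")


if __name__ == "__main__":
    demos = [("What is the molecular formula of water?", "H2O")]
    question = "Name the compound with SMILES CCO."
    print(build_prompt(question))          # zero-shot prompt
    print(build_prompt(question, demos))   # few-shot prompt
```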