Title: Prompts Matter: Insights and Strategies for Prompt Engineering in Automated Software Traceability
Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts and offer our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods in ranking candidate links after prompt refinement. The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts to effectively harness LLMs for advancing automated traceability.
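The abstract does not reproduce the paper's prompts. As a rough, hypothetical illustration of prompt-based link prediction of the kind described above, the sketch below builds a classification-style prompt for a requirement/artifact pair and ranks candidates by the model's reported confidence; the prompt wording and the `query_llm` stub are assumptions, not the authors' prompts.

```python
# Hypothetical sketch of prompt-based traceability link prediction.
# The prompt wording and the query_llm stub are illustrative assumptions,
# not the prompts evaluated in the paper.

def build_link_prompt(requirement: str, artifact: str) -> str:
    """Assemble a prompt asking whether a candidate artifact traces to a requirement."""
    return (
        "You are assisting with software traceability.\n\n"
        f"Requirement:\n{requirement}\n\n"
        f"Candidate artifact:\n{artifact}\n\n"
        "Question: Is the artifact related to the requirement? "
        "Answer 'yes' or 'no' followed by a confidence between 0 and 1."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API; replace with a real client."""
    return "yes 0.85"  # canned response so the sketch runs end to end

def rank_candidates(requirement: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Rank candidate artifacts by the confidence the model reports for a link."""
    scored = []
    for artifact in candidates:
        reply = query_llm(build_link_prompt(requirement, artifact)).strip().lower()
        confidence = float(reply.split()[-1]) if reply.startswith("yes") else 0.0
        scored.append((artifact, confidence))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for artifact, confidence in rank_candidates(
        "The system shall encrypt all data at rest.",
        ["class AesFileStore: ...", "def render_login_page(): ..."],
    ):
        print(f"{confidence:.2f}  {artifact}")
```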
Award ID(s):
1909007 2122689
NSF-PAR ID:
10468156
Author(s) / Creator(s):
Editor(s):
IEEE Requirements Engineering Conference
Publisher / Repository:
IEEE
Date Published:
Volume:
2023
Page Range / eLocation ID:
455 to 464
Subject(s) / Keyword(s):
["automated software traceability","large language models","prompt engineering"]
Format(s):
Medium: X
Location:
Hannover, Germany
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Importance

    The study highlights the potential of large language models, specifically GPT-3.5 and GPT-4, in processing complex clinical data and extracting meaningful information with minimal training data. By developing and refining prompt-based strategies, we can significantly enhance the models’ performance, making them viable tools for clinical NER tasks and possibly reducing the reliance on extensive annotated datasets.

    Objectives

    This study quantifies the capabilities of GPT-3.5 and GPT-4 for clinical named entity recognition (NER) tasks and proposes task-specific prompts to improve their performance.

    Materials and Methods

    We evaluated these models on 2 clinical NER tasks: (1) to extract medical problems, treatments, and tests from clinical notes in the MTSamples corpus, following the 2010 i2b2 concept extraction shared task, and (2) to identify nervous system disorder-related adverse events from safety reports in the Vaccine Adverse Event Reporting System (VAERS). To improve the GPT models' performance, we developed a clinical task-specific prompt framework that includes (1) baseline prompts with task description and format specification, (2) annotation guideline-based prompts, (3) error analysis-based instructions, and (4) annotated samples for few-shot learning. We assessed each prompt's effectiveness and compared the models to BioClinicalBERT.
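    A minimal sketch of how the four prompt components above might be assembled is shown below; the guideline excerpt, error instructions, and few-shot example are illustrative assumptions, not the study's actual prompt text.

    ```python
    # Illustrative assembly of the four prompt components described above.
    # The wording of each component is an assumption, not the study's prompts.

    BASELINE = (
        "Extract medical problems, treatments, and tests from the clinical note below. "
        "Return one entity per line as: <entity text> | <type>."
    )

    GUIDELINES = (
        "Annotation guideline (excerpt): include signs and symptoms as problems; "
        "include medications and procedures as treatments; include lab and imaging studies as tests."
    )

    ERROR_INSTRUCTIONS = (
        "Common errors to avoid: do not drop negated findings; "
        "do not split a single entity mention into multiple entities."
    )

    FEW_SHOT = [
        ("Patient denies chest pain. Started on lisinopril.",
         "chest pain | problem\nlisinopril | treatment"),
    ]

    def build_ner_prompt(note: str, use_guidelines=True, use_errors=True, use_examples=True) -> str:
        """Compose the prompt from whichever components are enabled."""
        parts = [BASELINE]
        if use_guidelines:
            parts.append(GUIDELINES)
        if use_errors:
            parts.append(ERROR_INSTRUCTIONS)
        if use_examples:
            for text, labels in FEW_SHOT:
                parts.append(f"Example note:\n{text}\nEntities:\n{labels}")
        parts.append(f"Clinical note:\n{note}\nEntities:")
        return "\n\n".join(parts)

    print(build_ner_prompt("CT of the abdomen showed no acute findings."))
    ```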

    Results

    Using baseline prompts, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.634 and 0.804 for MTSamples and 0.301 and 0.593 for VAERS, respectively. Additional prompt components consistently improved model performance. When all 4 components were used, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.794 and 0.861 for MTSamples and 0.676 and 0.736 for VAERS, demonstrating the effectiveness of our prompt framework. Although these results trail BioClinicalBERT (F1 of 0.901 for MTSamples and 0.802 for VAERS), they are very promising given that only a few training samples are needed.
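    "Relaxed" F1 typically credits a prediction when its span overlaps a gold entity of the same type; the exact matching criterion used in the study may differ. A minimal sketch under that assumption:

    ```python
    # Minimal relaxed-match F1 sketch: a prediction counts as correct if its span
    # overlaps a gold span of the same type. This only illustrates the general
    # idea of "relaxed" matching; the study's exact criterion may differ.

    def overlaps(a, b):
        """True if two (start, end) character spans overlap."""
        return a[0] < b[1] and b[0] < a[1]

    def relaxed_f1(gold, predicted):
        """gold/predicted: lists of (start, end, entity_type) tuples."""
        tp = sum(
            any(overlaps((ps, pe), (gs, ge)) and pt == gt for gs, ge, gt in gold)
            for ps, pe, pt in predicted
        )
        precision = tp / len(predicted) if predicted else 0.0
        recall = sum(
            any(overlaps((gs, ge), (ps, pe)) and gt == pt for ps, pe, pt in predicted)
            for gs, ge, gt in gold
        ) / len(gold) if gold else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    gold = [(0, 10, "problem"), (20, 30, "test")]
    pred = [(2, 8, "problem"), (40, 45, "treatment")]
    print(round(relaxed_f1(gold, pred), 3))  # 0.5
    ```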

    Discussion

    The study’s findings suggest a promising direction in leveraging LLMs for clinical NER tasks. However, while the performance of GPT models improved with task-specific prompts, there is a need for further development and refinement. LLMs like GPT-4 show potential to approach the performance of state-of-the-art models like BioClinicalBERT, but they still require careful prompt engineering and an understanding of task-specific knowledge. The study also underscores the importance of evaluation schemas that accurately reflect the capabilities and performance of LLMs in clinical settings.

    Conclusion

    While direct application of GPT models to clinical NER tasks falls short of optimal performance, our task-specific prompt framework, incorporating medical knowledge and training samples, significantly enhances GPT models' feasibility for potential clinical applications.

     
  2. This research explores a novel human-in-the-loop approach that goes beyond traditional prompt engineering approaches to harness Large Language Models (LLMs) with chain-of-thought prompting for grading middle school students’ short answer formative assessments in science and generating useful feedback. While recent efforts have successfully applied LLMs and generative AI to automatically grade assignments in secondary classrooms, the focus has primarily been on providing scores for mathematical and programming problems with little work targeting the generation of actionable insight from the student responses. This paper addresses these limitations by exploring a human-in-the-loop approach to make the process more intuitive and more effective. By incorporating the expertise of educators, this approach seeks to bridge the gap between automated assessment and meaningful educational support in the context of science education for middle school students. We have conducted a preliminary user study, which suggests that (1) co-created models improve the performance of formative feedback generation, and (2) educator insight can be integrated at multiple steps in the process to inform what goes into the model and what comes out. Our findings suggest that in-context learning and human-in-the-loop approaches may provide a scalable approach to automated grading, where the performance of the automated LLM-based grader continually improves over time, while also providing actionable feedback that can support students’ open-ended science learning. 
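    As a hedged illustration of the kind of chain-of-thought grading prompt such a pipeline could use, the sketch below scores a short answer against an educator-provided rubric; the rubric, prompt wording, and `query_llm` stub are assumptions, not the system described above.

    ```python
    # Hypothetical sketch of chain-of-thought grading with an educator-provided rubric.
    # The rubric, prompt wording, and query_llm stub are illustrative assumptions.

    RUBRIC = {
        "identifies variables": 1,
        "describes the causal relationship": 1,
        "uses evidence from the experiment": 1,
    }

    def build_grading_prompt(question: str, answer: str) -> str:
        """Ask the model to reason step by step against the rubric before scoring."""
        criteria = "\n".join(f"- {name} ({points} pt)" for name, points in RUBRIC.items())
        return (
            f"Question: {question}\n"
            f"Student answer: {answer}\n\n"
            "Rubric:\n" + criteria + "\n\n"
            "Think step by step: for each criterion, quote the relevant part of the "
            "answer (or say it is missing), then give a total score and one sentence "
            "of actionable feedback for the student."
        )

    def query_llm(prompt: str) -> str:
        """Placeholder for an LLM call; an educator would review the output before release."""
        return "Score: 2/3. Feedback: explain how your data supports the claim."

    print(query_llm(build_grading_prompt(
        "Why did the plant kept in the dark grow less?",
        "Because plants need light to make food.",
    )))
    ```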
  3. Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a __ by profession”. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as “Obama worked as a __” may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
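    A toy sketch of the ensembling idea, combining answer distributions from several paraphrased prompts for one relation, appears below; the prompts and probabilities are made-up values, not LPAQA data, and a real setup would obtain them from a masked language model.

    ```python
    # Toy sketch of ensembling answers from several prompts for one relation.
    # The prompts and probabilities are invented for illustration only.

    from collections import defaultdict

    # Answer distributions a language model might return for the blank in each
    # paraphrased prompt of the "profession" relation.
    prompt_predictions = [
        {"politician": 0.55, "lawyer": 0.30, "teacher": 0.15},  # "X is a __ by profession"
        {"politician": 0.40, "lawyer": 0.45, "teacher": 0.15},  # "X worked as a __"
        {"politician": 0.60, "lawyer": 0.25, "teacher": 0.15},  # "X's profession is __"
    ]

    def ensemble(predictions, weights=None):
        """Weighted average of answer probabilities across prompts."""
        weights = weights or [1.0] * len(predictions)
        total = sum(weights)
        combined = defaultdict(float)
        for dist, w in zip(predictions, weights):
            for answer, prob in dist.items():
                combined[answer] += w * prob / total
        return max(combined, key=combined.get), dict(combined)

    best, scores = ensemble(prompt_predictions)
    print(best, {answer: round(p, 3) for answer, p in scores.items()})
    ```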
  4. When dealing with safety-critical systems, various regulations, standards, and guidelines stipulate stringent requirements for certification and traceability of artifacts, but typically lack details with regards to the corresponding software engineering process. Given the industrial practice of only using semi-formal notations for describing engineering processes, with the lack of proper tool mapping, engineers and developers need to invest a significant amount of time and effort to ensure that all steps mandated by quality assurance are followed. The sheer size and complexity of systems and regulations make manual, timely feedback from Quality Assurance (QA) engineers infeasible. In order to address these issues, in this paper, we propose a novel framework for tracking, and “passively” executing processes in the background, automatically checking QA constraints depending on process progress, and informing the developer of unfulfilled QA constraints. We evaluate our approach by applying it to three case studies: a safety-critical open-source community system, a safety-critical system in the air-traffic control domain, and a non-safety-critical, web-based system. Results from our analysis confirm that trace links are often corrected or completed after the work step has been considered finished, and the engineer has already moved on to another step. Thus, support for timely and automated constraint checking has significant potential to reduce rework as the engineer receives continuous feedback already during their work step.
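    A hypothetical sketch of the kind of passive QA-constraint checking this describes is given below; the step and constraint model are assumptions for illustration, not the framework proposed in the paper.

    ```python
    # Hypothetical sketch of passive QA-constraint checking on process progress.
    # The step/constraint model is an assumption for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class WorkStep:
        name: str
        finished: bool = False
        trace_links: list = field(default_factory=list)

    def check_constraints(step: WorkStep) -> list[str]:
        """Return human-readable warnings for unfulfilled QA constraints on a step."""
        warnings = []
        if step.finished and not step.trace_links:
            warnings.append(
                f"Step '{step.name}' was marked finished without any trace links; "
                "add links to the requirements it implements."
            )
        return warnings

    # Simulate an engineer finishing a step and the background checker reacting.
    step = WorkStep(name="Implement altitude-alert handler", finished=True)
    for warning in check_constraints(step):
        print("QA:", warning)
    step.trace_links.append("REQ-112")
    print("QA:", check_constraints(step) or "all constraints satisfied")
    ```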
  5. When dealing with safety–critical systems, various regulations, standards, and guidelines stipulate stringent requirements for certification and traceability of artifacts, but typically lack details with regards to the corresponding software engineering process. Given the industrial practice of only using semi-formal notations for describing engineering processes – with the lack of proper tool mapping – engineers and developers need to invest a significant amount of time and effort to ensure that all steps mandated by quality assurance are followed. The sheer size and complexity of systems and regulations make manual, timely feedback from Quality Assurance (QA) engineers infeasible. In order to address these issues, in this paper, we propose a novel framework for tracking, and “passively” executing processes in the background, automatically checking QA constraints depending on process progress, and informing the developer of unfulfilled QA constraints. We evaluate our approach by applying it to three case studies: a safety–critical open-source community system, a safety–critical system in the air-traffic control domain, and a non-safety–critical, web-based system. Results from our analysis confirm that trace links are often corrected or completed after the work step has been considered finished, and the engineer has already moved on to another step. Thus, support for timely and automated constraint checking has significant potential to reduce rework as the engineer receives continuous feedback already during their work step. 