Search for: All records

Creators/Authors contains: "Forsyth, Carol M"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. Paaßen, Benjamin; Demmans Epp, Carrie (Eds.)
    This paper explores the differences between two types of natural language conversations between a student and pedagogical agent(s). Both types of conversations were created for formative assessment purposes. The first type is conversation-based assessment created via knowledge engineering, which requires a large amount of human effort. The second type, which is less costly to produce, uses prompt engineering for LLMs based on Evidence-Centered Design to create these conversations and glean evidence about students' knowledge, skills, and abilities. The current work compares linguistic features of the artificial agent(s)' discourse moves in natural language conversations created by the two methodologies. Results indicate that the prompt engineering method produces more complex conversations, which may be more adaptive than those produced by the knowledge engineering approach. However, the affordances of prompt-engineered, LLM-generated conversation-based assessment may create more challenges for scoring than the original knowledge-engineered conversations. Limitations and implications are discussed. (A hedged sketch of such a feature comparison appears after this list.)
  2. New challenges in today's world have contributed to increased attention toward evaluating individuals' collaborative problem solving (CPS) skills. One difficulty with this work is identifying evidence of individuals' CPS capabilities, particularly when they interact in digital spaces. Human-driven approaches are often used but are limited in scale. Machine-driven approaches can save time and money, but their reliability relative to human approaches can be a challenge. In the current study, we compare CPS skill profiles derived from human and semi-automated annotation methods across two tasks. Results showed that the same clusters emerged for both tasks and annotation methods, and the two annotation methods agreed in assigning most students to the same profile. Additionally, validating the cluster results against external survey measures yielded similar results across annotation methods. (A hedged sketch of such a profile-agreement check appears after this list.)
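
The abstract in item 1 does not specify which linguistic features were compared, so the following is a minimal Python sketch under stated assumptions: it contrasts agent discourse moves from knowledge-engineered (KE) and prompt-engineered (PE) conversations on two illustrative surface features (mean turn length and type-token ratio). The example turns are toy placeholders, not data or methods from the paper.

    # Hypothetical feature comparison (not the paper's pipeline): contrast agent
    # discourse moves from knowledge-engineered (KE) and prompt-engineered (PE)
    # conversation-based assessments on two illustrative surface features.
    from statistics import mean

    def turn_features(turns):
        """Mean token count and type-token ratio over a list of agent turns."""
        token_lists = [t.lower().split() for t in turns]
        all_tokens = [tok for toks in token_lists for tok in toks]
        return {
            "mean_turn_length": mean(len(toks) for toks in token_lists),
            "type_token_ratio": len(set(all_tokens)) / len(all_tokens),
        }

    # Toy turns standing in for real transcripts from either authoring method.
    ke_turns = ["Can you tell me what you noticed?",
                "Good. What would you try next?"]
    pe_turns = ["That's an interesting observation. What evidence supports it, "
                "and how might you test it with the data you collected?"]

    print("KE:", turn_features(ke_turns))
    print("PE:", turn_features(pe_turns))

In a fuller comparison, such per-turn features would be aggregated over many conversations and tested for differences between the two authoring methods.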
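The abstract in item 2 does not name a specific clustering or agreement procedure; the sketch below assumes k-means clustering (scikit-learn) over synthetic per-student CPS indicator vectors and uses the adjusted Rand index to check whether the human and semi-automated annotations place most students in the same profile. All data and parameter choices here are illustrative assumptions.

    # Hypothetical profile-agreement check (methods are assumptions, not the
    # study's): cluster per-student CPS indicator vectors from a human and a
    # semi-automated annotation method, then measure label-invariant agreement.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    n_students, n_indicators = 60, 5

    # Synthetic data: human-coded indicator frequencies and a noisy
    # semi-automated version of the same students.
    human_feats = rng.random((n_students, n_indicators))
    auto_feats = human_feats + rng.normal(0.0, 0.05, human_feats.shape)

    human_profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(human_feats)
    auto_profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(auto_feats)

    # 1.0 means identical profile assignments up to relabeling of clusters.
    print("Adjusted Rand index:", adjusted_rand_score(human_profiles, auto_profiles))

A high index would indicate that the cheaper semi-automated annotations recover essentially the same student profiles as the human coding, which is the kind of agreement the study reports.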