

Search for: All records

Creators/Authors contains: "Biswas, Gautam"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This paper explores the use of large language models (LLMs) to score and explain short-answer assessments in K-12 science. While existing methods can score more structured math and computer science assessments, they often do not provide explanations for the scores. Our study focuses on employing GPT-4 for automated assessment in middle school Earth Science, combining few-shot and active learning with chain-of-thought reasoning. Using a human-in-the-loop approach, we successfully score and provide meaningful explanations for formative assessment responses. A systematic analysis of our method's strengths and limitations sheds light on the potential for human-in-the-loop techniques to enhance automated grading for open-ended science assessments. (An illustrative sketch of this kind of few-shot, chain-of-thought scoring prompt follows this entry.)

     
    Free, publicly-accessible full text available March 25, 2025
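    As a rough Python sketch of the kind of few-shot, chain-of-thought grading call this entry describes (the rubric text, few-shot example, prompt wording, and "gpt-4" model name are illustrative assumptions, not the study's actual materials):

    ```python
    # Illustrative sketch only: rubric, few-shot example, and prompt wording
    # are assumptions, not the study's actual materials.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    RUBRIC = "Score 0-2. 2: correct cause and mechanism; 1: cause only; 0: neither."

    FEW_SHOT = [
        {"role": "user",
         "content": "Question: Why do we see Moon phases?\n"
                    "Student answer: Because the Moon goes around Earth."},
        {"role": "assistant",
         "content": "Reasoning: Names the cause (the Moon's orbit) but not the "
                    "mechanism (the changing sunlit fraction we see).\nScore: 1"},
    ]

    def score(question: str, answer: str) -> str:
        """Ask the model to reason step by step, then assign a rubric score."""
        messages = [{
            "role": "system",
            "content": ("You grade middle school Earth Science answers.\n"
                        f"Rubric: {RUBRIC}\n"
                        "Explain your reasoning step by step, then give a score."),
        }] + FEW_SHOT + [{
            "role": "user",
            "content": f"Question: {question}\nStudent answer: {answer}",
        }]
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        return reply.choices[0].message.content

    print(score("Why do we see Moon phases?",
                "We see different sunlit parts as the Moon orbits Earth."))
    ```

    Asking the model to reason before scoring is what makes an explanation available alongside the numeric score, which is the gap the paper identifies in score-only graders.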
  2. Abstract

    The EngageAI Institute focuses on AI-driven, narrative-centered learning environments that create engaging story-based problem-solving experiences to support collaborative learning. The institute's research has three complementary strands. First, the institute creates narrative-centered learning environments that generate interactive story-based problem scenarios to elicit rich communication, encourage coordination, and spark collaborative creativity. Second, the institute creates virtual embodied conversational agent technologies with multiple modalities for communication (speech, facial expression, gesture, gaze, and posture) to support student learning. Embodied conversational agents are driven by advances in natural language understanding, natural language generation, and computer vision. Third, the institute creates an innovative multimodal learning analytics framework that analyzes parallel streams of multimodal data derived from students' conversations, gaze, facial expressions, gestures, and posture as they interact with each other, with teachers, and with embodied conversational agents. Woven throughout the institute's activities is a strong focus on ethics, with an emphasis on creating AI-augmented learning that is deeply informed by considerations of fairness, accountability, transparency, trust, and privacy. The institute emphasizes broad participation and diverse perspectives to ensure that advances in AI-augmented learning address inequities in STEM. It brings together a multistate network of universities, diverse K-12 school systems, science museums, and nonprofit partners. Key to all of these endeavors is an emphasis on diversity, equity, and inclusion.

     
    Free, publicly-accessible full text available March 1, 2025
  3. This research explores a novel human-in-the-loop approach that goes beyond traditional prompt engineering to harness Large Language Models (LLMs) with chain-of-thought prompting for grading middle school students' short-answer formative assessments in science and generating useful feedback. While recent efforts have successfully applied LLMs and generative AI to automatically grade assignments in secondary classrooms, the focus has primarily been on providing scores for mathematical and programming problems, with little work targeting the generation of actionable insight from the student responses. This paper addresses these limitations by exploring a human-in-the-loop approach to make the process more intuitive and more effective. By incorporating the expertise of educators, this approach seeks to bridge the gap between automated assessment and meaningful educational support in the context of science education for middle school students. We have conducted a preliminary user study, which suggests that (1) co-created models improve the performance of formative feedback generation, and (2) educator insight can be integrated at multiple steps in the process to inform what goes into the model and what comes out. Our findings suggest that in-context learning and human-in-the-loop approaches may offer a scalable path to automated grading, where the performance of the automated LLM-based grader continually improves over time, while also providing actionable feedback that can support students' open-ended science learning. (A schematic sketch of this educator-in-the-loop cycle follows this entry.)
    Free, publicly-accessible full text available October 9, 2024
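    A schematic of the co-creation loop this entry describes might look like the following; all class and field names here are hypothetical, invented for illustration. Educator-reviewed outputs are folded back into the in-context example pool, so later prompts grade with the benefit of earlier corrections:

    ```python
    # Hypothetical sketch: educator review feeds corrected examples back into
    # the few-shot pool used by later grading prompts. All names are invented.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class GradedExample:
        answer: str
        explanation: str
        score: int

    @dataclass
    class ExamplePool:
        examples: list[GradedExample] = field(default_factory=list)

        def review(self, proposed: GradedExample, educator_ok: bool,
                   corrected: GradedExample | None = None) -> None:
            # Accepted outputs and educator corrections alike become
            # in-context demonstrations for the next round of grading.
            keep = proposed if educator_ok else corrected
            if keep is not None:
                self.examples.append(keep)

        def as_prompt(self) -> str:
            return "\n\n".join(
                f"Answer: {e.answer}\nExplanation: {e.explanation}\nScore: {e.score}"
                for e in self.examples)

    pool = ExamplePool()
    pool.review(GradedExample("Rocks erode because of wind.",
                              "Names one agent of erosion but omits water and ice.",
                              1), educator_ok=True)
    print(pool.as_prompt())
    ```

    This is one simple way the grader could "continually improve over time" without retraining: the model stays fixed while its in-context demonstrations accumulate educator expertise.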
  4. Martin, Fred; Norouzi, Narges; Rosenthal, Stephanie (Eds.)
    This paper examines the use of LLMs to support the grading and explanation of short-answer formative assessments in K-12 science topics. While significant work has been done on programmatically scoring well-structured student assessments in math and computer science, many of these approaches produce a numerical score and stop short of providing teachers and students with explanations for the assigned scores. We investigate few-shot, in-context learning with chain-of-thought reasoning and active learning using GPT-4 for automated assessment of students' answers in a middle school Earth Science curriculum. Our findings from this human-in-the-loop approach demonstrate success in scoring formative assessment responses and in providing meaningful explanations for the assigned scores. We then perform a systematic analysis of the advantages and limitations of our approach. This research provides insight into how we can use human-in-the-loop methods for the continual improvement of automated grading for open-ended science assessments. (An illustrative sketch of the active-learning routing step follows this entry.)
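    The active-learning element mentioned above can be pictured as a confidence-based routing step. In this small sketch, the threshold, field layout, and sample data are invented for illustration; low-confidence gradings go to a human reviewer and the rest stay automated:

    ```python
    # Hypothetical sketch of the active-learning step: route the responses the
    # grader is least confident about to a human, keep the rest automated.
    def split_by_confidence(scored, threshold=0.7):
        """scored: iterable of (response, score, confidence) triples."""
        auto, needs_review = [], []
        for response, score, confidence in scored:
            bucket = auto if confidence >= threshold else needs_review
            bucket.append((response, score))
        return auto, needs_review

    auto, review = split_by_confidence(
        [("The Moon blocks the Sun.", 0, 0.92),
         ("Shadows from Earth change shape.", 1, 0.41)])
    print(f"{len(review)} response(s) queued for teacher review")
    ```

    Responses sent for review are exactly the ones whose human-corrected labels are most informative to add back as few-shot examples, which is what ties the active-learning and in-context-learning pieces together.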
  5. Grieff, S. (Ed.)
    Recently there has been increased development of curriculum and tools that integrate computing (C) into Science, Technology, Engineering, and Math (STEM) learning environments. These environments serve as a catalyst for authentic collaborative problem-solving (CPS) and help students synergistically learn STEM+C content. In this work, we analyzed students' collaborative problem-solving behaviors as they worked in pairs to construct computational models in kinematics. We leveraged social measures, such as equity and turn-taking, along with a domain-specific measure that quantifies the synergistic interleaving of science and computing concepts in the students' dialogue, to gain a deeper understanding of the relationship between students' collaborative behaviors and their ability to complete a STEM+C computational modeling task. Our results extend past findings identifying the importance of synergistic dialogue and suggest that while equitable discourse is important for overall task success, fluctuations in equity and turn-taking at the segment level may not have an impact on segment-level task performance. To better understand students' segment-level behaviors, we identified and characterized groups' planning, enacting, and reflection behaviors, along with the monitoring processes they employed to check their progress as they constructed their models. Leveraging Markov Chain (MC) analysis, we identified differences in how high- and low-performing groups transitioned between these phases of activity. We then compared the synergistic, turn-taking, and equity measures for these groups for each of the MC model states to gain a deeper understanding of how these collaboration behaviors relate to computational modeling performance. We believe that characterizing differences in collaborative problem-solving behaviors helps us better understand the difficulties students face as they work on their computational modeling tasks. (An illustrative transition-matrix sketch follows this entry.)
    Free, publicly-accessible full text available September 15, 2024
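    The Markov Chain analysis this entry refers to can be pictured as estimating transition probabilities between coded activity phases from a sequence of segment labels. In this sketch the phase names and the sample sequence are assumptions, not the study's actual coding scheme:

    ```python
    # Illustrative sketch: estimate a transition matrix over coded activity
    # phases. Phase labels and the sample sequence are invented assumptions.
    from collections import Counter, defaultdict

    PHASES = ["plan", "enact", "reflect", "monitor"]

    def transition_matrix(sequence):
        counts = defaultdict(Counter)
        for current, nxt in zip(sequence, sequence[1:]):
            counts[current][nxt] += 1
        matrix = {}
        for p in PHASES:
            total = sum(counts[p].values())
            matrix[p] = {q: counts[p][q] / total if total else 0.0
                         for q in PHASES}
        return matrix

    # A hypothetical coded segment sequence for one group:
    seq = ["plan", "enact", "monitor", "enact", "reflect", "plan", "enact"]
    for phase, row in transition_matrix(seq).items():
        print(phase, {q: round(v, 2) for q, v in row.items()})
    ```

    Comparing rows of these matrices between high- and low-performing groups is one way differences in phase-transition behavior, of the kind the abstract reports, would surface.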
  6. Abstract  
  7. Strategies are an important component of self-regulated learning frameworks. However, the characterization of strategies in these frameworks is often incomplete: (1) they lack an operational definition of strategies; (2) there is limited understanding of how students develop and apply strategies; and (3) there is a dearth of systematic and generalizable approaches to measure and evaluate strategies as students work in open-ended learning environments (OELEs). This paper develops systematic methods for detecting, interpreting, and analyzing students' use of strategies in OELEs, and demonstrates how students' strategies evolve across tasks. We apply this framework in the context of tasks that students perform as they learn science topics by building conceptual and computational models in an OELE. Data from a classroom study, in which sixth-grade students (N = 52) worked on science model-building activities in our Computational Thinking using Simulation and Modeling (CTSiM) environment, demonstrates how we interpret students' strategy use and how strategy use relates to learning performance. We also demonstrate how students' strategies evolve as they work on multiple model-building tasks. The results show the effectiveness of our strategy framework in analyzing students' behaviors and performance in CTSiM. (A hypothetical strategy-detection sketch follows this entry.)
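    As one hypothetical way to operationalize the strategy detection this entry describes, a strategy can be treated as an ordered pattern of actions recurring in a student's activity log. The action names and the pattern below are invented for illustration, not the CTSiM coding scheme:

    ```python
    # Hypothetical sketch: detect a "strategy" as an ordered (not necessarily
    # contiguous) pattern of actions in an activity log. Names are invented.
    def matches_in_order(actions, pattern):
        """True if `pattern` occurs as a subsequence of `actions`."""
        it = iter(actions)
        return all(step in it for step in pattern)

    # e.g. a "test-after-edit" strategy: edit the model, run a simulation,
    # then compare against the expert simulation.
    TEST_AFTER_EDIT = ["edit_model", "run_simulation", "compare_with_expert"]

    log = ["read_resource", "edit_model", "run_simulation",
           "compare_with_expert", "edit_model"]
    print(matches_in_order(log, TEST_AFTER_EDIT))  # True
    ```

    Counting how often such patterns appear, and how their frequency shifts from one model-building task to the next, is one simple way strategy evolution across tasks could be measured.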