Title: Assistant Dashboard Plus – Enhancing an Existing Instructor Dashboard with Difficulty Detection and GPT-based Code Clustering
As interest in programming as a major grows, instructors must accommodate more students in their programming courses. One particularly challenging aspect of this growth is providing quality assistance to students during in-class and out-of-class programming exercises. Prior work proposes using instructor dashboards to help instructors combat these challenges. Further, the introduction of ChatGPT represents an exciting avenue for assisting instructors with programming exercises, but such assistance needs a delivery method. We propose Assistant Dashboard Plus, a revision of an existing instructor dashboard that adds two new features: (a) identifying students in difficulty so that instructors can assist them effectively, and (b) providing instructors with pedagogically relevant groupings of students' exercise solutions with similar implementations so that instructors can give overlapping code-style feedback to students within the same group. For difficulty detection, it uses a state-of-the-art algorithm for which no visualization has previously been created. For code clustering, it uses GPT. We present a first-pass implementation of this dashboard.
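The abstract does not spell out how the GPT-based code clustering is prompted. A minimal Python sketch of what grouping similar student solutions with a chat model could look like follows; the client, model name, prompt wording, and JSON output format are assumptions for illustration, not details from the paper.

import json
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name, not the one used in the paper

def cluster_solutions(solutions: dict[str, str]) -> dict[str, list[str]]:
    """Ask the model to group student IDs whose code takes a similar approach."""
    listing = "\n\n".join(
        f"=== Student {sid} ===\n{code}" for sid, code in solutions.items()
    )
    prompt = (
        "Group the following student solutions by implementation approach. "
        "Return only JSON mapping a short group label to a list of student IDs.\n\n"
        + listing
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)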
Award ID(s):
1924059 1829752
PAR ID:
10526721
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400705090
Page Range / eLocation ID:
54 to 57
Subject(s) / Keyword(s):
Computer programming, Dashboards, Learning at scale, ChatGPT, GPT
Format(s):
Medium: X
Location:
Greenville SC USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Harms, Kyle; Cunha, Jácome; Oney, Steve; Kelleher, Caitlin (Ed.)
    Analytics about how students navigate online learning tools over the duration of an assignment are scarce. Knowledge about how students use online tools before a course's end could positively impact students' learning outcomes. We introduce PEDI (Piazza Explorer Dashboard for Intervention), a tool that analyzes forum activity on Piazza, a question-and-answer forum, and presents visualizations of it to instructors. We outline the design principles and data-informed recommendations used to design PEDI. Our prior research revealed two critical periods in students' forum engagement over the duration of an assignment: early engagement, in the first half of the assignment window, correlates positively with class average performance, whereas extremely high engagement toward the deadline predicts lower class average performance. PEDI uses these findings to detect and flag troubling engagement levels and informs instructors through clear visualizations to promote data-informed interventions. By providing these insights to instructors, PEDI may improve class performance and pave the way for a new generation of online tools.
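    A small sketch of the kind of engagement summary these findings suggest, assuming hypothetical post fields and an arbitrary threshold; the abstract does not give PEDI's actual heuristics.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    timestamp: datetime

def engagement_summary(posts: list[Post], start: datetime, deadline: datetime,
                       late_share_threshold: float = 0.6) -> dict:
    """Summarize when forum activity falls within an assignment window."""
    midpoint = start + (deadline - start) / 2
    total = len(posts)
    early = sum(1 for p in posts if p.timestamp <= midpoint)
    late_share = (total - early) / total if total else 0.0
    return {
        "early_share": 1 - late_share,
        "late_share": late_share,
        # Flag classes whose activity is crammed near the deadline.
        "deadline_cramming": late_share >= late_share_threshold,
    }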
  2. Peer assessment, as a form of collaborative learning, can engage students in active learning and improve their learning gains. However, current teaching platforms and programming environments provide little support for integrating peer assessment into in-class programming exercises. We identified challenges in conducting such exercises and adopting peer assessment through formative interviews with instructors of introductory programming courses. To address these challenges, we introduce PuzzleMe, a tool that helps Computer Science instructors conduct engaging in-class programming exercises. PuzzleMe leverages peer assessment to support a collaboration model in which students provide timely feedback on their peers' work. We propose two assessment techniques tailored to in-class programming exercises: live peer testing and live peer code review. Live peer testing can improve students' code robustness by allowing them to create and share lightweight tests with peers. Live peer code review can improve code understanding by intelligently grouping students to maximize meaningful code reviews. A two-week deployment study revealed that PuzzleMe encourages students to write useful test cases, identify code problems, correct misunderstandings, and learn a diverse set of problem-solving approaches from peers.
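    An illustrative sketch of what a shared "lightweight test" might look like; the data model and function names here are hypothetical, not PuzzleMe's implementation.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class LightweightTest:
    author: str      # peer who wrote and shared the test
    inputs: tuple    # positional arguments for the exercise function
    expected: Any    # expected return value

def run_peer_tests(solution: Callable, tests: list[LightweightTest]) -> list[str]:
    """Run shared peer tests against one student's solution and report failures."""
    failures = []
    for t in tests:
        try:
            actual = solution(*t.inputs)
        except Exception as exc:  # a crash counts as a failure too
            failures.append(f"{t.author}'s test raised {exc!r} on {t.inputs}")
            continue
        if actual != t.expected:
            failures.append(
                f"{t.author}'s test: {t.inputs} -> {actual!r}, expected {t.expected!r}"
            )
    return failures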
  3. Student-to-instructor interaction is crucial in online education. Through this interaction, instructors guide students through course content and create a learning environment that allows students to communicate with one another. Student-to-instructor interaction is one of the factors that determine student participation and satisfaction. Thus, this study uses a systematic review to discuss the strategies that instructors can use to increase student-to-instructor interaction in online courses. This study highlights four strategies: the instructor's participation in online courses, feedback from the online instructor, the instructor's availability and timely responses, and pre-class and during-class communication in online courses. Online instructors and instructional designers may use these strategies to improve interaction with students. These strategies may also assist instructional designers and instructors who develop and teach hybrid or blended courses. Keywords: student-to-instructor, online course, instructor participation, instructor feedback, instructor availability, and class communication
  4. Worked code examples are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide line-by-line explanations for the large number of examples typically used in a programming class. This paper explores the opportunity to facilitate the development of worked examples for Java programming through a human-AI collaborative authoring approach. The idea of collaborative authoring is to generate a starting version of the code explanations using an LLM and present it to the instructor to edit if necessary. The critical step toward implementing this idea is to ensure that the LLM can produce code explanations that look meaningful and acceptable to instructors and students. To achieve this goal, we performed an extensive prompt-engineering study and evaluated the explanations produced by the selected prompt in a user study with students and authors.
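    A rough sketch of the generate-then-edit flow described above; the prompt wording and model name are assumptions, not the prompt selected in the paper's study.

from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_line_explanations(java_code: str, model: str = "gpt-4o-mini") -> str:
    """Draft a line-by-line explanation for the instructor to review and edit."""
    prompt = (
        "Explain the following Java worked example line by line for a novice "
        "programmer. Number each explanation with the line it refers to.\n\n"
        + java_code
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # In the collaborative-authoring idea, this draft goes to the instructor
    # for editing rather than directly to students.
    return resp.choices[0].message.content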
  5. The success of GPT with coding tasks has made it important to consider the impact of GPT and similar models on the teaching of programming. Students' use of GPT to solve programming problems can hinder their learning. However, they might also gain significant benefits, such as quality feedback on programming style, explanations of how a given piece of code works, help with debugging code, and the ability to see valuable alternatives to their own code solutions. We propose a new design for interacting with GPT called Mediated GPT with the goals of (a) providing students with access to GPT while allowing instructors to programmatically modify responses to prevent hindrances to student learning and combat common concerns about GPT responses, (b) helping students generate and learn to create effective prompts to GPT, and (c) tracking how students use GPT to get help on programming exercises. We demonstrate a first-pass implementation of this design called NotebookGPT.
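    A minimal sketch of the mediation idea, with one hypothetical instructor policy that withholds fenced code blocks and a simple interaction log; none of this is NotebookGPT's actual implementation.

from typing import Callable

# A mediator takes (student_prompt, raw_response) and returns the text to show.
Mediator = Callable[[str, str], str]

def withhold_code_blocks(prompt: str, response: str) -> str:
    """Example instructor policy: replace fenced code blocks with a notice."""
    out, in_block = [], False
    for line in response.splitlines():
        if line.strip().startswith("```"):
            if not in_block:
                out.append("[full code withheld by instructor policy]")
            in_block = not in_block
            continue
        if not in_block:
            out.append(line)
    return "\n".join(out)

def mediated_reply(prompt: str, raw_response: str,
                   mediators: list[Mediator], log: list[dict]) -> str:
    """Apply instructor-defined mediators and record the interaction for tracking."""
    shown = raw_response
    for m in mediators:
        shown = m(prompt, shown)
    log.append({"prompt": prompt, "raw": raw_response, "shown": shown})
    return shown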