The recent public releases of AI tools such as ChatGPT have forced computer science educators to reconsider how they teach. These tools have demonstrated considerable ability to generate code and answer conceptual questions, making them highly useful for completing CS coursework. While overreliance on AI tools could hinder students' learning, we believe they have the potential to be a helpful resource for students and instructors alike. We propose a novel system for instructor-mediated GPT interaction on a class discussion board. By automatically generating draft responses to student forum posts, GPT can help Teaching Assistants (TAs) respond to student questions more promptly, giving students fast, high-quality feedback on their solutions without turning to ChatGPT directly. Additionally, because instructors remain in the loop, they can verify that the information students receive is accurate and can provide incremental hints that encourage students to engage critically with the material rather than simply copying an AI-generated snippet of code. We use Piazza, a popular educational forum where TAs help students via text exchanges, as the venue for GPT-assisted TA responses. Student questions are sent to GPT-4 alongside the assignment instructions and a customizable prompt, both of which are stored in editable, instructor-only Piazza posts. We demonstrate an initial implementation of this system and provide examples of student questions that highlight its benefits.
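The abstract outlines the drafting pipeline but not its implementation. The sketch below is a minimal illustration of how such instructor-mediated drafting could be wired together; the Piazza helpers (`fetch_unanswered_posts`, `get_instructor_post`, `post_private_draft`) are hypothetical placeholders rather than part of the described system or of any official Piazza API, and only the OpenAI chat-completion call reflects a real client library.

```python
# Minimal sketch of instructor-mediated GPT drafting for a class forum.
# The Piazza helper functions referenced below are hypothetical placeholders;
# the actual system's internals are not described in the abstract.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_ta_response(question_text: str,
                      assignment_instructions: str,
                      instructor_prompt: str) -> str:
    """Combine the instructor-only prompt, the assignment instructions, and the
    student's question, then ask GPT-4 for a draft reply for TA review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instructor_prompt},
            {"role": "user", "content": (
                "Assignment instructions:\n" + assignment_instructions +
                "\n\nStudent question:\n" + question_text +
                "\n\nWrite a draft reply with incremental hints; "
                "do not give full solution code."
            )},
        ],
    )
    return response.choices[0].message.content


# Hypothetical usage: a TA reviews and edits the draft before it is posted.
# for post in fetch_unanswered_posts(course_id):
#     draft = draft_ta_response(post.text,
#                               get_instructor_post("assignment_instructions"),
#                               get_instructor_post("gpt_prompt"))
#     post_private_draft(post.id, draft)  # visible to instructors/TAs only
```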
Traditional and AI Tools for Teaching Concurrency
Today, AI tools are generally considered education disruptors. In this paper, we put them in context with more traditional tools, showing how the two kinds of tools complement each other's pedagogical potential. We motivate a set of specific novel ways in which state-of-the-art tools, individually and together, can influence the teaching of concurrency. The pedagogical tasks we consider are illustrating concepts, creating motivating and debuggable assignments, assessing the runtime behavior and source code of solutions manually and automatically, generating model solutions for code and essay questions, discussing conceptual questions in class, and being aware of in-progress work. We use examples from past courses and training sessions in which we have been involved to illustrate the potential and actual influence of tools on these tasks. Some of the tools we consider are popular ones, such as interactive programming environments and chat tools, for which we show novel uses. Others, such as testing and visualization tools, are novel tools already in use; we discuss how they have been used. The final group consists of AI tools such as ChatGPT 3.5 and 4.0; we discuss their potential and how they can be integrated with traditional tools to realize it. We also show that version 4.0 has a better understanding of advanced concepts in synchronization and coordination than version 3.5, and both have a remarkable ability to understand concepts in concurrency, which can be expected to grow with advances in AI.
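As a concrete illustration of the kind of synchronization exercise the abstract alludes to (this example is not taken from the paper), a model solution a course tool might generate or assess could resemble the bounded-buffer monitor below, a standard coordination exercise whose runtime behavior can also be checked automatically.

```python
# Illustrative example only (not from the paper): a classic bounded-buffer
# exercise of the kind a model-solution generator or automatic assessor
# might target in a concurrency course.
import threading


class BoundedBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []
        self.cond = threading.Condition()

    def put(self, item):
        with self.cond:
            while len(self.items) == self.capacity:  # wait while full
                self.cond.wait()
            self.items.append(item)
            self.cond.notify_all()                   # wake waiting consumers

    def get(self):
        with self.cond:
            while not self.items:                    # wait while empty
                self.cond.wait()
            item = self.items.pop(0)
            self.cond.notify_all()                   # wake waiting producers
            return item
```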
- PAR ID: 10526697
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-8378-2
- Page Range / eLocation ID: 38 to 45
- Subject(s) / Keyword(s): concurrency, visualization, automatic assessment, student progress, clicker system, ChatGPT
- Format(s): Medium: X
- Location: Goa, India
- Sponsoring Org: National Science Foundation
More Like this
While AI programming tools hold the promise of increasing programmers' capabilities and productivity to a remarkable degree, they often exclude users from essential decision-making processes, causing many to effectively "turn off their brains" and over-rely on solutions provided by these systems. These behaviors can have severe consequences in critical domains, like software security. We propose Human-in-the-Loop Decoding, a novel interaction technique that allows users to observe and directly influence LLM decisions during code generation, in order to align the model's output with their personal requirements. We implement this technique in HILDE, a code completion assistant that highlights critical decisions made by the LLM and provides local alternatives for the user to explore. In a within-subjects study (N=18) on security-related tasks, we found that HILDE led participants to generate significantly fewer vulnerabilities and better align code generation with their goals compared to a traditional code completion assistant.
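The HILDE abstract describes the interaction technique but not its mechanics. The sketch below is a speculative illustration of the general human-in-the-loop decoding idea, not the authors' implementation: it flags low-confidence token decisions and lets the user pick among alternatives. The `next_token_distribution` callable and the margin threshold are assumptions standing in for a real language model's per-step output.

```python
# Speculative sketch of human-in-the-loop decoding (not HILDE's actual code):
# flag token-level decisions where the model is uncertain and let the user
# choose among the top alternatives before decoding continues.
from typing import Callable, List, Tuple

MARGIN_THRESHOLD = 0.15  # assumed heuristic: small gap between top-1 and top-2


def decode_with_user(
    next_token_distribution: Callable[[List[str]], List[Tuple[str, float]]],
    prompt_tokens: List[str],
    max_tokens: int = 50,
) -> List[str]:
    """next_token_distribution maps the current token sequence to a list of
    (token, probability) candidates sorted by probability; it stands in for a
    real LLM decoding step."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        candidates = next_token_distribution(tokens)
        if not candidates or candidates[0][0] == "<eos>":
            break
        best_prob = candidates[0][1]
        runner_up = candidates[1][1] if len(candidates) > 1 else 0.0
        if best_prob - runner_up < MARGIN_THRESHOLD:
            # Critical decision: surface the alternatives instead of committing.
            print("Uncertain step; alternatives:")
            for i, (tok, p) in enumerate(candidates[:5]):
                print(f"  [{i}] {tok!r}  p={p:.2f}")
            choice = int(input("Pick an alternative: "))
            tokens.append(candidates[choice][0])
        else:
            tokens.append(candidates[0][0])
    return tokens
```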
Programming efficient distributed, concurrent systems requires new abstractions that go beyond traditional sequential programming. But programmers already have trouble getting sequential code right, so simplicity is essential. The core problem is that low-latency, high-availability access to data requires replication of mutable state. Keeping replicas fully consistent is expensive, so the question is how to expose asynchronously replicated objects to programmers in a way that allows them to reason simply about their code. We propose an answer to this question in our ongoing work designing a new language, Gallifrey, which provides orthogonal replication through restrictions with merge strategies, contingencies for conflicts arising from concurrency, and branches, a novel concurrency control construct inspired by version control, to contain provisional behavior.
Artificial Intelligence (AI) enhanced systems are widely adopted in post-secondary education; however, tools and activities have only recently become accessible for teaching AI and machine learning (ML) concepts to K-12 students. Research on K-12 AI education has largely covered student attitudes toward AI careers, AI ethics, and student use of various existing AI agents such as voice assistants, and most of it has focused on high school and middle school. There is no consensus on which AI and ML concepts are grade-appropriate for elementary-aged students or how elementary students explore and make sense of AI and ML tools. AI is a rapidly evolving technology, and as future decision-makers, children will need to be AI literate [1]. In this paper, we present elementary students' sense-making of simple machine learning concepts. Through this project, we hope to generate a new model for introducing AI concepts into elementary school curricula and provide tangible, trainable representations of ML for students to explore in the physical world. In our first year, our focus has been on simpler machine learning algorithms. Our desire is to empower students not only to use AI tools but also to understand how they operate. We believe that appropriate activities can help late elementary-aged students develop foundational AI knowledge, namely (1) how a robot senses the world, and (2) how a robot represents data for making decisions. Educational robotics programs have been repeatedly shown to result in positive learning impacts and increased interest [2]. In this pilot study, we leveraged the LEGO® Education SPIKE™ Prime to introduce ML concepts to upper elementary students. Through pilot testing in three one-week summer programs, we iteratively developed a limited display interface for supervised learning using the nearest neighbor algorithm. We collected videos to perform a qualitative evaluation. Based on analysis of student behavior and of the process followed by students trained in robotics, we found that some students showed interest in exploring pre-trained ML models and training new models while building personally relevant robotic creations and developing solutions to engineering tasks. While students were interested in using the ML tools for complex tasks, they seemed to prefer block programming or manual motor controls where they felt those were practical.
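The pilot described above centers on supervised learning with the nearest neighbor algorithm. The snippet below is a generic 1-nearest-neighbor illustration of that concept, not the LEGO SPIKE Prime interface the authors built; the color-sensor example data is invented for illustration.

```python
# Generic 1-nearest-neighbor classifier, illustrating the supervised-learning
# concept the pilot teaches; this is not the authors' SPIKE Prime interface.
import math
from typing import List, Tuple


def nearest_neighbor(training: List[Tuple[List[float], str]],
                     sample: List[float]) -> str:
    """Return the label of the training example closest to the sample
    (Euclidean distance), i.e., 1-NN classification."""
    def distance(a: List[float], b: List[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_features, best_label = min(training,
                                    key=lambda ex: distance(ex[0], sample))
    return best_label


# Hypothetical usage: classify a color-sensor reading against labeled readings.
readings = [([255.0, 0.0, 0.0], "red"), ([0.0, 0.0, 255.0], "blue")]
print(nearest_neighbor(readings, [200.0, 30.0, 40.0]))  # -> "red"
```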