Title: Single Template vs. Multiple Templates: Examining the Effects of Problem Format on Performance
Classroom and lab-based research has shown the advantages of exposing students to a variety of problems with format differences between them, compared to giving students problem sets with a single problem format. In this paper, we investigate whether this approach can be effectively deployed in an intelligent tutoring system, which affords the opportunity to automatically generate and adapt problem content for practice and assessment purposes. We conducted a randomized controlled trial comparing students who practiced problems based on a single template to students who practiced problems based on multiple templates within the same intelligent tutoring system. No conclusive evidence was found for differences between the two conditions in students’ post-test performance or hint request behavior. However, students who saw multiple templates spent more time answering practice items than students who solved problems of a single structure, making the same degree of progress but taking longer to do so.
Award ID(s):
1931523
NSF-PAR ID:
10191146
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on the Learning Sciences
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Confrustion, a mix of confusion and frustration sometimes experienced while grappling with instructional materials, is not necessarily detrimental to learning. Prior research has shown that studying erroneous examples can increase students’ experiences of confrustion, while at the same time helping them learn and overcome their misconceptions. In the study reported in this paper, we examined students’ knowledge and misconceptions about decimal numbers before and after they interacted with an intelligent tutoring system presenting either erroneous examples targeting misconceptions (erroneous example condition) or practice problems targeting the same misconceptions (problem-solving condition). While students in both conditions significantly improved their performance from pretest to posttest, students in the problem-solving condition improved significantly more and experienced significantly less confrustion. When controlling for confrustion levels, there were no differences in performance. This study is interesting in that, unlike prior studies, the higher confrustion that resulted from studying erroneous examples was not associated with better learning outcomes; instead, it was associated with poorer learning. We propose several possible explanations for this different outcome and hypothesize that revisions to the explanation prompts to make them more expert-like may have also made them – and the erroneous examples that they targeted – less understandable and less effective. Whether prompted self-explanation options should be modeled after the shorter, less precise language students tend to use or the longer, more precise language of experts is an open question, and an important one both for understanding the mechanisms of self-explanation and for designing self-explanation options deployed in instructional materials. 
  3. In recent years, we have seen a continuous and rapid increase in job openings in Science, Technology, Engineering and Math (STEM)-related fields. Unfortunately, these positions are not met with an equal number of workers ready to fill them. Efforts are being made to find durable solutions for this phenomenon, and they start by encouraging young students to enroll in STEM college majors. However, enrolling in a STEM major requires specific skills in math and science that are learned in school. Fortunately, institutions are adopting educational software that collects data from students' usage. This data serves to conduct analyses that detect students' behaviors and predict their performance and their eventual college enrollment. As we outline in this paper, we used data collected from students' usage of an Intelligent Tutoring System to predict whether they would pursue a career in STEM-related fields. We conducted two types of analysis, a "problem-based approach" and a "skill-based approach". The problem-based approach focused on evaluating students' actions based on the problems they solved. Likewise, in the skill-based approach we evaluated their usage based on the skills they had practiced. Furthermore, we investigated whether comparing students' features with those of their peer schoolmates can improve the prediction models in both the skill-based and the problem-based approaches. The experimental results showed that the skill-based approach with school aggregation achieved the best results with regard to a combination of two metrics: the Area Under the Receiver Operating Characteristic Curve (AUC) and the Root Mean Squared Error (RMSE).
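The two metrics named above can be computed from scratch for a binary prediction task. A minimal sketch, not the paper's code; the labels and scores are made-up illustration data:

```python
# Minimal from-scratch versions of the two metrics named above.
# Variable names are illustrative, not taken from the paper.
from math import sqrt

def rmse(y_true, y_pred):
    """Root Mean Squared Error between binary labels and predicted probabilities."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def auc(y_true, y_pred):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a randomly chosen positive is scored above a randomly
    chosen negative, counting ties as one half."""
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    neg = [p for t, p in zip(y_true, y_pred) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 1, 0]          # 1 = pursued a STEM career (toy data)
scores = [0.9, 0.3, 0.6, 0.8, 0.4]  # model-predicted probabilities (toy data)
print(auc(labels, scores))   # 1.0: every positive outranks every negative
print(round(rmse(labels, scores), 3))
```

Reporting AUC and RMSE together is common for this kind of model: AUC measures ranking quality, while RMSE penalizes miscalibrated probabilities even when the ranking is perfect.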
  4.
    Knowledge Tracing (KT), which aims to model student knowledge level and predict student performance, is one of the most important applications of user modeling. Modern KT approaches model and maintain an up-to-date state of student knowledge over a set of course concepts according to students’ historical performance in attempting the problems. However, KT approaches were designed to model knowledge by observing relatively small problem-solving steps in Intelligent Tutoring Systems. While these approaches have been applied successfully to model student knowledge from solutions to simple problems, such as multiple-choice questions, they do not perform well at modeling complex problem solving. Most importantly, current models assume that all problem attempts are equally valuable in quantifying current student knowledge. For complex problems that involve many concepts at the same time, this assumption is deficient: it results in inaccurate knowledge states and unnecessary fluctuations in estimated student knowledge, especially when students guess the correct answer to a problem whose concepts they have not all mastered, or slip on a problem whose concepts they have already mastered. In this paper, we argue that not all attempts are equally important in discovering students’ knowledge state, and that some attempts can be summarized together to better represent student performance. We propose a novel student knowledge tracing approach, Granular RAnk based TEnsor factorization (GRATE), that dynamically selects student attempts that can be aggregated while predicting students’ performance on problems and discovering the concepts presented in them. Our experiments on three real-world datasets demonstrate the improved performance of GRATE, compared to state-of-the-art baselines, on the task of student performance prediction. Our further analysis shows that attempt aggregation eliminates unnecessary fluctuations from students’ discovered knowledge states and helps in discovering complex latent concepts in the problems.
  5.
    Within intelligent tutoring systems, considerable research has investigated hints, including how to generate data-driven hints, what hint content to present, and when to provide hints for optimal learning outcomes. However, less attention has been paid to how hints are presented. In this paper, we propose a new hint delivery mechanism called “Assertions” for providing unsolicited hints in a data-driven intelligent tutor. Assertions are partially-worked example steps designed to appear within a student workspace, and in the same format as student-derived steps, to show students a possible subgoal leading to the solution. We hypothesized that Assertions can help address the well-known hint avoidance problem. In systems that only provide hints upon request, hint avoidance results in students not receiving hints when they are needed. Our unsolicited Assertions do not seek to improve student help-seeking, but rather to ensure students receive the help they need. We contrast Assertions with Messages: text-based, unsolicited hints that appear after student inactivity. Our results show that Assertions significantly increase unsolicited hint usage compared to Messages. Further, they show a significant aptitude-treatment interaction between Assertions and prior proficiency, with Assertions leading students with low prior proficiency to generate shorter (more efficient) posttest solutions faster. We also present a clustering analysis that shows patterns of productive persistence among students with low prior knowledge when the tutor provides unsolicited help in the form of Assertions. Overall, this work provides encouraging evidence that hint presentation can significantly impact how students use hints, and that Assertions can be an effective way to address help avoidance.