Would providing choice lead to improved learning with a tutor? In an earlier controlled study, we gave introductory programming students the choice of skipping the line-by-line feedback provided after each incorrect answer in a tutor on if/if-else statements. Contrary to expectations, the study found that the choice to skip feedback did not lead to greater learning. We attempted to reproduce these results using two tutors, on if/if-else and switch statements, and with a larger subject pool. We found that although choice again did not lead to greater learning on the if/if-else tutor, it resulted in decreased learning on the switch tutor. We hypothesize that skipping feedback is indeed detrimental to learning, but that inter-relationships among the concepts covered by a tutor, and the transfer of learning facilitated by these relationships, can compensate for the negative effect of skipping line-by-line feedback. The contradictory results between the two studies also highlight the need for reproducibility studies in empirical research.
Does choosing the concept on which to solve each practice problem in an adaptive tutor affect learning?
We conducted a controlled study to investigate whether having students choose the concept on which to solve each practice problem in an adaptive tutor helped improve learning. We analyzed data from an adaptive tutor used by introductory programming students over three semesters. The tutor presented code-tracing problems, used a pretest-practice-post-test protocol, and presented a line-by-line explanation of the correct solution as feedback. We found that choice did not increase the amount or pace of learning. However, it resulted in greater improvement in score on learned concepts, with a medium effect size.
- Award ID(s):
- 1432190
- PAR ID:
- 10155756
- Date Published:
- Journal Name:
- Proceedings of AI-ED 2019
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We conducted a study to see whether using Bayesian Knowledge Tracing (BKT) models would save time and practice problems in programming tutors. We used legacy data collected by two programming tutors to compute BKT models for every concept covered by each tutor. The novelty of our model was that slip and guess parameters were computed for every problem presented by each tutor. Next, we used cross-validation to evaluate whether the resulting BKT model would have reduced the number of practice problems solved and the time spent by the students represented in the legacy data. We found that in 64.23% of the concepts, students would have saved time with the BKT model. The savings varied among concepts. Overall, students would have saved a mean of 1.28 minutes and 1.23 problems per concept. We also found that BKT models were more effective at saving time and problems on harder concepts.
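The per-problem slip and guess parameters described above slot directly into the standard BKT posterior update. A minimal sketch of that update and of counting problems to mastery; all parameter values, function names, and the 0.95 mastery threshold are illustrative assumptions, not values taken from the paper:

```python
# Sketch of Bayesian Knowledge Tracing with per-problem slip/guess
# parameters (the paper's variation), rather than per-concept constants.
# All numeric values here are illustrative, not from the study.

def bkt_update(p_know, correct, slip, guess, p_transit):
    """Return updated P(concept known) after one observed answer."""
    if correct:
        # P(known | correct): correct despite not knowing happens with prob. guess
        cond = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # P(known | incorrect): incorrect while knowing happens with prob. slip
        cond = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # Account for learning from the feedback on this problem.
    return cond + (1 - cond) * p_transit

def problems_to_mastery(responses, problem_params, p_init=0.1,
                        p_transit=0.3, threshold=0.95):
    """Count practice problems needed before P(known) crosses the mastery
    threshold; slip/guess vary per problem, as in the study above."""
    p = p_init
    for n, (correct, (slip, guess)) in enumerate(zip(responses, problem_params), 1):
        p = bkt_update(p, correct, slip, guess, p_transit)
        if p >= threshold:
            return n
    return len(responses)
```

Comparing `problems_to_mastery` against the number of problems each student actually solved in the legacy data is one way to estimate the per-concept savings the abstract reports.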
-
In deductive domains, three metacognitive knowledge types, in ascending order, are declarative, procedural, and conditional learning. This work leverages Deep Reinforcement Learning (DRL) to provide adaptive metacognitive interventions that bridge the gap between the three knowledge types and prepare students for future learning across Intelligent Tutoring Systems (ITSs). Students received interventions that taught how and when to use a backward-chaining (BC) strategy on a logic tutor that supports a default forward-chaining strategy. Six weeks later, we trained students on a probability tutor that only supports BC, without interventions. Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap between students and significantly improved their learning performance over that of their control peers. Furthermore, the DRL policy adapted to the metacognitive development on the logic tutor across declarative, procedural, and conditional students, making their strategic decisions more autonomous.
-
Olney, AM; Chounta, IA; Liu, Z; Santos, OC; Bittencourt, II (Ed.) This work investigates how tutoring discourse interacts with students’ proximal knowledge to explain and predict students’ learning outcomes. Our work is conducted in the context of high-dosage human tutoring where 9th-grade students (N = 1080) attended small group tutorials and individually practiced problems on an Intelligent Tutoring System (ITS). We analyzed whether tutors’ talk moves and students’ performance on the ITS predicted scores on math learning assessments. We trained Random Forest Classifiers (RFCs) to distinguish high and low assessment scores based on tutor talk moves, students’ ITS performance metrics, and their combination. A decision tree was extracted from each RFC to yield an interpretable model. We found AUCs of 0.63 for talk moves, 0.66 for ITS, and 0.77 for their combination, suggesting interactivity between the two feature sources. Specifically, the best decision tree emerged from combining the tutor talk moves that encouraged rigorous thinking and students’ ITS mastery. In essence, tutor talk that encouraged mathematical reasoning predicted achievement for students who demonstrated high mastery on the ITS, whereas tutors’ revoicing of students’ mathematical ideas and contributions was predictive for students with low ITS mastery. Implications for practice are discussed.
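The AUCs this abstract uses to compare feature sources can be computed directly from classifier scores via the pairwise (Mann–Whitney) formulation: the fraction of (positive, negative) example pairs the classifier ranks correctly, counting ties as half. A minimal stdlib sketch with made-up scores, not data from the study:

```python
# Pairwise (Mann-Whitney) computation of AUC, the metric used above to
# compare the talk-move, ITS, and combined models. Labels are 1 for a
# high assessment score, 0 for low; scores are classifier outputs.

def auc(labels, scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # tie: half credit
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values such as the reported 0.77 indicate the combined features carry real signal.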