
Search for: All records

Editors contains: "McNamara, D"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Roll, I.; McNamara, D. (Ed.)
    Simulations of human learning have shown potential for supporting ITS authoring and testing, among other use cases. To date, however, simulated learner technologies have often failed to achieve robust, perfect performance even after considerable training. In this work we identify an impediment to producing perfect asymptotic learning performance in simulated learners and introduce one significant improvement to the Apprentice Learner Framework to address it.
  2. Roll, I.; McNamara, D. (Ed.)
    It has been shown that answering questions contributes to effective student learning. However, generating questions is an expensive task that requires considerable effort. Although the Natural Language Processing literature reports research on automated question generation, these technologies do not necessarily generate questions that are useful for educational purposes. To fill this gap, we propose QUADL, a method for generating questions that are aligned with a given learning objective. The learning objective reflects the skill or concept that students need to learn. The QUADL method first identifies a key concept, if any, in a given sentence that has a strong connection with the given learning objective. It then converts the sentence into a question for which the predicted key concept is the answer. Results from a survey conducted on Amazon Mechanical Turk suggest that the QUADL method can be a step towards generating questions that effectively contribute to students' learning.
  3. Roll, I.; McNamara, D.; Sosnovsky, S.; Luckin, R.; Dimitrova, V. (Ed.)
    Knowledge tracing refers to a family of methods that estimate each student's knowledge component/skill mastery level from their past responses to questions. One key limitation of most existing knowledge tracing methods is that they can only estimate an overall knowledge level per knowledge component/skill, since they analyze only the (usually binary-valued) correctness of student responses. It is therefore hard to use them to diagnose specific student errors. In this paper, we extend existing knowledge tracing methods beyond correctness prediction to the task of predicting the exact option students select in multiple-choice questions. We quantitatively evaluate the performance of our option tracing methods on two large-scale student response datasets. We also qualitatively evaluate their ability to identify common student errors in the form of clusters of incorrect options across different questions that correspond to the same error.
  4. Roll, I.; McNamara, D.; Sosnovsky, S.; Luckin, R.; Dimitrova, V. (Ed.)
    Scaffolding and providing feedback on problem-solving activities during online learning has consistently been shown to improve performance in younger learners. However, less is known about the impacts of feedback strategies on adult learners. This paper investigates how two computer-based support strategies, hints and required scaffolding questions, contribute to performance and behavior in an edX MOOC with integrated assignments from ASSISTments, a web-based platform that implements diverse student supports. Results from a sample of 188 adult learners indicated that those given scaffolds benefited less from ASSISTments support and were more likely to request the correct answer from the system.
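The two-step QUADL pipeline summarized in item 2 (identify a key concept tied to the learning objective, then convert the sentence into a question with that concept as the answer) can be illustrated with a toy sketch. The keyword-overlap matching and the function names below are illustrative assumptions; the paper itself uses trained models for both steps.

```python
# Toy sketch of the QUADL idea: (1) find a concept in a sentence that is
# linked to a learning objective, (2) blank it out to form a question whose
# answer is that concept. Keyword matching here is a stand-in assumption;
# QUADL's actual concept identification is model-based.

def identify_key_concept(sentence, objective_terms):
    """Return the first word of the sentence that matches the objective, if any."""
    for word in sentence.rstrip(".").split():
        if word.lower() in objective_terms:
            return word
    return None

def generate_question(sentence, objective_terms):
    """Convert a sentence into a fill-in-the-blank item keyed to the objective."""
    concept = identify_key_concept(sentence, objective_terms)
    if concept is None:
        return None  # no strong connection to the learning objective
    stem = sentence.rstrip(".").replace(concept, "_____")
    return {"question": stem + "?", "answer": concept}

item = generate_question(
    "The mitochondrion produces ATP for the cell",
    {"atp", "respiration"},
)
print(item)
```

Sentences with no concept linked to the objective are skipped rather than forced into questions, mirroring the "if any" condition in the abstract.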
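The correctness-only tracing that item 3 extends can be illustrated with classic Bayesian Knowledge Tracing, which updates a per-skill mastery estimate from binary responses. This is a minimal sketch for orientation only; the parameter values are illustrative assumptions, and the paper's option tracing models go beyond this by predicting which incorrect option a student selects.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: the kind of
# correctness-only model that option tracing generalizes.
# Parameter values below are illustrative, not taken from the paper.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the mastery probability for one skill after one binary response."""
    if correct:
        evidence = p_mastery * (1 - p_slip)          # mastered and did not slip
        likelihood = evidence + (1 - p_mastery) * p_guess
    else:
        evidence = p_mastery * p_slip                # mastered but slipped
        likelihood = evidence + (1 - p_mastery) * (1 - p_guess)
    posterior = evidence / likelihood
    # Allow learning to occur between practice opportunities.
    return posterior + (1 - posterior) * p_learn

# Trace one student's mastery across a short response sequence (1 = correct).
p = 0.3
for response in [1, 0, 1, 1]:
    p = bkt_update(p, response)
print(round(p, 3))
```

Because the model sees only right/wrong, two students choosing different wrong options get identical updates; tracking the selected option, as the paper proposes, is what makes specific error diagnosis possible.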