
Search for: All records

Creators/Authors contains: "Brusilovsky, Peter"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Crossley, Scott; Popescu, Elvira (Eds.)
    We present a novel instructional resource, DeepCode, to support deep code comprehension and learning in introductory programming courses (CS1 and CS2). DeepCode is a set of instructional code examples, which we call a codeset, annotated by our team with comments (e.g., explaining the logical steps of the underlying problem being solved) and related instructional questions that can serve as hints to help learners think about and articulate explanations of the code. While DeepCode was designed primarily to serve our larger effort of developing an intelligent tutoring system (ITS) that fosters the monitoring, assessment, and development of code comprehension skills in students learning to program, the codeset can also be used for other purposes, such as assessment and problem solving, and in various other learning activities, such as studying worked-out code examples with explanations and code visualizations. We present the underlying principles, theories, and frameworks behind our design process and the annotation guidelines, and we summarize the resulting codeset of 98 annotated Java code examples, which comprises 7,157 lines of code (including comments), 260 logical steps, 260 logical step details, 408 statement-level comments, and 590 scaffolding questions.
    Free, publicly-accessible full text available June 29, 2023
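A codeset entry of the kind the abstract describes could be modeled as a small data structure pairing code lines with their annotations. This is a hypothetical sketch: the field names (`logical_steps`, `scaffolding_questions`, etc.) and the Java snippet are invented for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LogicalStep:
    summary: str       # high-level comment for one logical step
    detail: str = ""   # optional step detail

@dataclass
class CodeExample:
    title: str
    code_lines: list                                          # Java source, comments included
    logical_steps: list = field(default_factory=list)
    statement_comments: dict = field(default_factory=dict)    # line index -> comment
    scaffolding_questions: list = field(default_factory=list) # hint-style questions

example = CodeExample(
    title="Sum of array elements",
    code_lines=[
        "int sum = 0;",
        "for (int i = 0; i < a.length; i++) {",
        "    sum += a[i];",
        "}",
    ],
    logical_steps=[LogicalStep("Accumulate a running total over the array")],
    statement_comments={0: "Initialize the accumulator before the loop."},
    scaffolding_questions=["Why must sum be initialized to 0 before the loop?"],
)

print(len(example.code_lines))  # 4
```

Keeping annotations separate from the source lines, as above, lets the same example drive several activities the abstract mentions (worked examples, hints, assessment questions).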
  2. The paper focuses on a new type of interactive learning content for SQL programming: worked examples of SQL code. While worked examples are popular in learning programming, their application to learning SQL is limited. Using a novel tool for presenting interactive worked examples, Database Query Analyzer (DBQA), we performed a large-scale randomized controlled study assessing the value of worked examples as a new type of practice content in a database course. We report the results of the classroom study examining the usage and impact of DBQA. Among other aspects, we explored the effect of the textual step explanations provided by DBQA.
    Free, publicly-accessible full text available July 7, 2023
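A worked example of SQL code in the spirit described here can be thought of as an ordered list of runnable sub-queries, each paired with a textual step explanation. The sketch below is illustrative only: the table, data, and explanation texts are invented, and this is not DBQA's actual representation.

```python
import sqlite3

# Invented toy schema and data for the worked example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1,'ann',10.0),(2,'bob',25.0),(3,'ann',40.0);
""")

# Each step: (runnable sub-query, textual explanation of that step).
steps = [
    ("SELECT * FROM orders",
     "Start from the full orders table."),
    ("SELECT customer, total FROM orders WHERE total > 20",
     "Filter rows before grouping: only orders over 20 remain."),
    ("SELECT customer, SUM(total) FROM orders WHERE total > 20 GROUP BY customer",
     "Group the surviving rows by customer and aggregate."),
]

for sql, explanation in steps:
    rows = conn.execute(sql).fetchall()
    print(f"{explanation} -> {len(rows)} row(s)")
```

Executing each sub-query against the same data lets a learner see the intermediate result set shrink and regroup step by step, which is the core idea behind interactive worked examples of queries.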
  3. The paper focuses on a new type of interactive learning content for SQL programming: worked examples of SQL code. While worked examples are popular in learning programming, their application to learning SQL is limited. Using a novel tool for presenting interactive worked examples, Database Query Analyzer (DBQA), we performed a large-scale randomized controlled study assessing the value of worked examples as a new type of practice content in a database course.
    Free, publicly-accessible full text available July 1, 2023
  4. Educational data mining research has demonstrated that the large volume of learning data collected by modern e-learning systems can be used to recognize student behavior patterns and group students into cohorts with similar behavior. However, few attempts have been made to connect and compare behavioral patterns with known dimensions of individual differences. To what extent is learner behavior defined by known individual differences? Which of them is the better predictor of learner engagement and performance? Could we use behavior patterns to build a data-driven model of individual differences that is more useful for predicting critical outcomes of the learning process than traditional models? Our paper attempts to answer these questions using a large volume of learner data collected in an online practice system. We apply a sequential pattern mining approach to build individual models of learner practice behavior and reveal latent student subgroups that exhibit considerably different practice behavior. Using these models, we explored the connections between learner behavior and both the incoming and outgoing parameters of the learning process. Among the incoming parameters, we examined traditionally collected individual differences such as self-esteem, gender, and knowledge monitoring skills. We also attempted to bridge the gap between cluster-based behavior pattern models and traditional scale-based models of individual differences by quantifying learner behavior on a latent, data-driven scale. Our research shows that this data-driven model of individual differences performs significantly better than traditional models of individual differences in predicting important parameters of the learning process, such as performance and engagement.
    Free, publicly-accessible full text available February 15, 2023
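The core of the sequence-mining idea can be sketched very simply: represent each learner's logged actions as a sequence, count short subsequences (here, bigrams), and use those counts as a behavioral feature vector for clustering. The action names and log data below are invented; real sequential pattern mining would consider longer and gapped patterns.

```python
from collections import Counter

# Invented action logs: one action sequence per learner.
logs = {
    "s1": ["example", "example", "problem", "problem", "problem"],
    "s2": ["problem", "problem", "problem", "example", "problem"],
}

def bigram_features(seq):
    """Count adjacent action pairs -- a minimal sequential pattern."""
    return Counter(zip(seq, seq[1:]))

features = {s: bigram_features(seq) for s, seq in logs.items()}

# s1 studies examples before attempting problems; s2 dives into problems first.
print(features["s1"][("example", "problem")])  # 1
print(features["s2"][("problem", "problem")])  # 2
```

Vectors like these are what a clustering step would consume to reveal latent subgroups with different practice behavior.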
  5. In this paper, we describe the integration of a step-by-step interactive trace table into an existing practice system for introductory Java programming. These auto-generated trace problems provide help and scaffolding for students who have trouble solving traditional one-step code tracing problems, accommodating a wider variety of learners. Findings from classroom deployments suggest that the scaffolding provided by the trace table is a plausible form of help, most notably associated with increases in performance and persistence and with lower perceived task difficulty. Based on usage data, we propose future implications for an adaptive version of the interactive trace table based on learner modeling.
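A trace table of the kind described records, for each executed statement, the variable values after that step. The sketch below auto-generates such a table for a small invented program (summing an array); it is an illustration of the idea, not the system's actual generator.

```python
def trace_sum(a):
    """Build a trace table for summing an array: one row per executed statement."""
    rows = []
    s = 0
    rows.append(("int sum = 0;", {"sum": s}))
    for i in range(len(a)):
        s += a[i]
        rows.append((f"sum += a[{i}];", {"i": i, "sum": s}))
    return rows

table = trace_sum([3, 5, 2])
for stmt, state in table:
    print(f"{stmt:18} {state}")
# The final row shows sum == 10 after the last iteration.
```

Filling in (or checking) each row of such a table is exactly the multi-step scaffold that replaces a single "what is the final value?" tracing question.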
  6. Individual differences have been recognized as an important factor in the learning process. However, there are few successes in using known dimensions of individual differences to solve the important problem of predicting student performance and engagement in online learning. At the same time, learning analytics research has demonstrated that the large volume of learning data collected by modern e-learning systems can be used to recognize student behavior patterns and to connect these patterns with measures of student performance. Our paper attempts to bridge these two research directions. By applying a sequence mining approach to a large volume of learner data collected by an online learning system, we build models of student learning behavior. However, instead of following modern work on behavior mining (i.e., using this behavior directly for performance prediction tasks), we follow traditional work on modeling individual differences by quantifying this behavior on a latent, data-driven personality scale. Our research shows that this data-driven model of individual differences performs significantly better than several traditional models of individual differences in predicting important parameters of the learning process, such as success and engagement.
  7. With the increased popularity of electronic textbooks, there is a growing interest in developing a new generation of “intelligent textbooks,” which have the ability to guide readers according to their learning goals and current knowledge. Intelligent textbooks extend regular textbooks by integrating machine-manipulable knowledge, and the most popular type of integrated knowledge is a list of relevant concepts mentioned in the textbooks. With these concepts, multiple intelligent operations, such as content linking, content recommendation, or student modeling, can be performed. However, existing automatic keyphrase extraction methods, even supervised ones, cannot deliver sufficient accuracy to be practically useful in this task. Manual annotation by experts has been demonstrated to be a preferred approach for producing high-quality labeled data for training supervised models. However, most researchers in the education domain still consider the concept annotation process as an ad-hoc activity rather than a carefully executed task, which can result in low-quality annotated data. Using the annotation of concepts for the Introduction to Information Retrieval textbook as a case study, this paper presents a knowledge engineering method to obtain reliable concept annotations. As demonstrated by the data we collected, the inter-annotator agreement gradually increased along with our procedure, and the concept annotations we produced led to better results in document linking and student modeling tasks. The contributions of our work include a validated knowledge engineering procedure, a codebook for technical concept annotation, and a set of concept annotations for the target textbook, which could be used as a gold standard in further intelligent textbook research.
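The inter-annotator agreement the abstract tracks is commonly measured with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch for a binary "is this phrase a concept?" task follows; the two label sequences are invented, and the paper may well use a different agreement statistic.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance agreement
    return (po - pe) / (1 - pe)

# Invented binary labels: 1 = "candidate phrase is a concept".
ann1 = [1, 1, 0, 1, 0, 0, 1, 0]
ann2 = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.5
```

Re-computing kappa after each round of codebook refinement is one way to show, as the abstract reports, that agreement "gradually increased along with our procedure."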
  8. Knowledge Tracing (KT), which aims to model student knowledge level and predict student performance, is one of the most important applications of user modeling. Modern KT approaches model and maintain an up-to-date state of student knowledge over a set of course concepts according to students’ historical performance in attempting the problems. However, KT approaches were designed to model knowledge by observing relatively small problem-solving steps in Intelligent Tutoring Systems. While these approaches were applied successfully to model student knowledge by observing student solutions for simple problems, such as multiple-choice questions, they do not perform well in modeling complex problem solving. Most importantly, current models assume that all problem attempts are equally valuable in quantifying current student knowledge. However, for complex problems that involve many concepts at the same time, this assumption is deficient: it results in inaccurate knowledge states and unnecessary fluctuations in estimated student knowledge, especially when students guess the correct answer to a problem whose concepts they have not fully mastered, or slip in answering a problem whose concepts they have already mastered. In this paper, we argue that not all attempts are equally important in discovering students’ knowledge state, and that some attempts can be summarized together to better represent student performance. We propose a novel student knowledge tracing approach, Granular RAnk based TEnsor factorization (GRATE), that dynamically selects student attempts that can be aggregated while predicting students’ performance on problems and discovering the concepts presented in them. Our experiments on three real-world datasets demonstrate the improved performance of GRATE, compared to state-of-the-art baselines, on the task of student performance prediction. Our further analysis shows that attempt aggregation eliminates unnecessary fluctuations from students’ discovered knowledge states and helps in discovering complex latent concepts in the problems.
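GRATE itself is a tensor-factorization model, but the fluctuation problem it targets is easy to see in the classic Bayesian Knowledge Tracing update, where guess and slip probabilities let a single lucky guess inflate the estimated mastery and a single slip deflate it. The parameter values below are invented for illustration.

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One standard BKT step: Bayesian update on the observation, then learning."""
    if correct:
        cond = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        cond = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return cond + (1 - cond) * learn

p = 0.3
history = [p]
for outcome in [True, False, True]:  # a possible guess, a slip-like miss, a success
    p = bkt_update(p, outcome)
    history.append(round(p, 3))
print(history)  # mastery estimate swings up and down with single observations
```

Each isolated observation moves the estimate sharply in one direction, which is exactly the per-attempt noise that aggregating related attempts, as GRATE does, is meant to smooth out.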
  9. Over the last 10 years, learning analytics have provided educators with both dashboards and tools to understand student behaviors within specific technological environments. However, there is a lack of work to support educators in making data-informed design decisions when designing a blended course and planning appropriate learning activities. In this paper, we introduce knowledge-based design analytics that uncover facets of the learning activities that are being created. A knowledge-based visualization is integrated into edCrumble, a (blended) learning design authoring tool. This new approach is explored in the context of a higher education programming course, where instructors design labs and home practice sessions with online smart learning content on a weekly basis. We performed a within-subjects user study to compare the use of the design tool both with and without visualization. We studied the differences in terms of cognitive load, controllability, confidence and ease of choice, design outcomes, and user actions within the system to compare both conditions with the objective of evaluating the impact of using design analytics during the decision-making phase of course design. Our results indicate that the use of a knowledge-based visualization allows the teachers to reduce the cognitive load (especially in terms of mental demand) and that it facilitates the choice of the most appropriate activities without affecting the overall design time. In conclusion, the use of knowledge-based design analytics improves the overall learning design quality and helps teachers avoid committing design errors.
  10. This paper contributes to the research on explainable educational recommendations by investigating explainable recommendations in the context of a personalized practice system for introductory Java programming. We present the design of two types of explanations to justify the recommendation of the next learning activity to practice. The value of these explainable recommendations was assessed in a semester-long classroom study. The paper analyzes the observed impact of explainable recommendations on various aspects of student behavior and performance.