Title: Learning Association between Learning Objectives and Key Concepts to Generate Pedagogically Valuable Questions
It has been shown that answering questions contributes to effective student learning. However, generating questions manually is expensive and requires considerable effort. Although the Natural Language Processing literature reports research on the automation of question generation, these technologies do not necessarily generate questions that are useful for educational purposes. To fill this gap, we propose QUADL, a method for generating questions that are aligned with a given learning objective. The learning objective reflects the skill or concept that students need to learn. The QUADL method first identifies a key concept, if any, in a given sentence that has a strong connection with the given learning objective. It then converts the given sentence into a question for which the predicted key concept becomes the answer. The results from a survey using Amazon Mechanical Turk suggest that the QUADL method can be a step towards generating questions that effectively contribute to students' learning.
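To make the two-stage pipeline concrete, here is a minimal sketch with the paper's learned models replaced by toy heuristics; the overlap-based scorer and cloze-style rewrite below are illustrative stand-ins, not the authors' implementation:

```python
# Toy approximation of QUADL's two stages; the real method learns the
# association between learning objectives and key concepts.

def identify_key_concept(sentence, objective, candidates):
    """Stage 1: pick the candidate concept most strongly associated with
    the learning objective, or None if no candidate relates to it."""
    objective_tokens = set(objective.lower().split())
    def score(concept):
        return len(set(concept.lower().split()) & objective_tokens)
    best = max(candidates, key=score)
    return best if score(best) > 0 else None

def generate_question(sentence, concept):
    """Stage 2: convert the sentence into a question whose answer is the
    predicted key concept (here, a simple cloze-style rewrite)."""
    return sentence.replace(concept, "what", 1).rstrip(".") + "?"

sentence = "The mitochondrion produces most of the cell's ATP."
objective = "Describe how the mitochondrion supplies energy to the cell"
concept = identify_key_concept(sentence, objective,
                               ["mitochondrion", "ATP", "cell"])
if concept is not None:  # sentences with no relevant concept are skipped
    print(generate_question(sentence, concept))
    # -> The what produces most of the cell's ATP?
```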
Award ID(s):
2016966
NSF-PAR ID:
10253902
Author(s) / Creator(s):
Editor(s):
Roll, I.; McNamara, D.
Date Published:
Journal Name:
Proceedings of the International Conference on Artificial Intelligence in Education
Page Range / eLocation ID:
320-324
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In mechanics, the standard 3-credit, 45-hour course is sufficient to deliver standard lectures with prepared examples and questions. Moreover, it is not only feasible, but preferable, to employ any of a variety of active learning and teaching techniques. Nevertheless, even when active learning is strategically used, students and instructors alike experience pressure to accomplish their respective learning and teaching goals under the constraints of the academic calendar, raising questions as to whether the allocated time is sufficient to enable authentic learning. One way to assess learning progress is to examine the learning cycles through which students attempt, re-think, and re-attempt their work. This article provides data to benchmark the time required to learn key Statics concepts, based on results of instruction of approximately 50 students in a Statics class at a public research university during the Fall 2020 semester. Two parallel techniques are employed to foster and understand student learning cycles.
    • Through a Mastery-Based Learning model, 15 weekly pass/fail “Mastery Tests” are given. Students who do not pass may re-test with a different but similar test on the same topic each week until the semester’s conclusion. The tests are highly structured in that they are well posed and highly focused. For example, some tests focus only on drawing Free Body Diagrams, with no equations or calculations. Other tests focus on writing equilibrium equations from a given Free Body Diagram. Passing the first six tests is required to earn the grade of D; passing the next three earns a C; the next three, a B; and the final three, an A (a short sketch of this grade rule follows this summary). Evaluations include coding of student responses to infer student reasoning. Learning cycles occur as students repeat the same topics, and their progress is assessed by passing rates and by comparing evolving responses to the same test topics.
    • Concept Questions that elicit qualitative responses and written explanations are deployed at least weekly. The learning cycle here consists of students answering a question, seeing the overall class results (but without the correct answer), having a chance to explore the question with other students and the instructor, and finally an opportunity to re-answer the same question, perhaps a few minutes or up to a couple of days later. Sometimes that same question is given a third time to encourage further effort or progress.
    To date, results from both cycles appear to agree on one important conclusion: the rate of demonstrated learning is quite low. For example, each Mastery Test has a passing rate of 20%-30%, including for students with several repeats. With the Concept Questions, typically no more than half of the students who answered incorrectly change to the correct answer by the time of the final poll. The final article will provide quantitative and qualitative results from each type of cycle, including tracking coded responses on Mastery Tests, written responses on Concept Questions, and cross-comparisons thereof. Additional results will be presented from student surveys. Since the Mastery Tests and Concept Questions follow typical Statics topics, this work has potential to lead to a standardized set of benchmarks and standards for measuring student learning – and its rate – in Statics.
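The letter-grade rule described in the first bullet reduces to a small threshold table; here is a sketch (the outcome for fewer than six passed tests is an assumption, as the abstract does not state it):

```python
def mastery_grade(tests_passed):
    """Map the number of Mastery Tests passed to a letter grade, per the
    scheme above: six tests for a D, then three more for each step up."""
    for needed, grade in [(15, "A"), (12, "B"), (9, "C"), (6, "D")]:
        if tests_passed >= needed:
            return grade
    return "F"  # assumed outcome for fewer than six tests passed

assert [mastery_grade(n) for n in (6, 9, 12, 15)] == ["D", "C", "B", "A"]
```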
  2. This theory paper focuses on understanding how mastery learning has been implemented in undergraduate engineering courses, through a systematic review. Academic environments that promote learning, mastery, and continuous improvement rather than inherent ability can promote performance and persistence. Scholarship has argued that students can achieve mastery of course material when the time available to master concepts and the quality of instruction are made appropriate to each learner. Increasing the time available to demonstrate mastery involves a course structure that allows repeated attempts on learning assessments (i.e., homework, quizzes, projects, exams). Students are not penalized for failed attempts but are rewarded for achieving eventual mastery (a minimal sketch of this best-of-attempts idea follows this summary). The mastery learning approach recognizes that mastery is not always achieved on the first attempt, and that learning from mistakes and persisting is fundamental to how we learn. This singular concept has potentially the greatest impact on students’ mindset in terms of their belief that they can be successful in learning the course material. A significant amount of attention has been given to mastery learning courses in secondary education, where mastery learning has shown an exceptionally positive effect on student achievement. However, implementing mastery learning in an undergraduate course can be a cumbersome process, as it requires instructors to significantly restructure their assignments and exams, evaluation process, and grading practices. In light of these challenges, it is unclear to what extent mastery learning has been implemented in undergraduate engineering courses, or whether similar positive effects can be found. Therefore, we conducted a systematic review to elucidate, for the U.S., (1) how mastery learning has been implemented in undergraduate engineering courses from 1990 to the present and (2) the student outcomes that have been reported for these implementations. Using the systematic process outlined by Borrego et al. (2014), we surveyed seven databases and identified a total of 584 articles spanning engineering and non-engineering courses. We focused our review on studies centered on applying the mastery learning pedagogical method in undergraduate engineering courses. All peer-reviewed and practitioner articles and conference proceedings within our scope were included in the synthesis phase of the review; most articles were excluded based on our inclusion and exclusion criteria. Twelve studies focused on applying mastery learning to undergraduate engineering courses. The mastery learning method was mainly applied to midterm exams; a few studies used the method on homework assignments, and no study applied the method to the final exam. Students reported an increase in learning as a result of mastery learning. Several studies reported that students’ grades on a traditional final exam were not affected by mastery learning. Students’ self-reported evaluations of the course suggest that students prefer the mastery learning approach over traditional methods, although a clear consensus on the effect of the approach could not be reached because each article applied different survey instruments to capture students’ perspectives. Responses to open-ended questions had mixed results: two studies reported more positive student comments, while one study reported receiving more negative comments regarding the implementation of the mastery learning method.
In the full paper we more thoroughly describe the ways in which mastery learning was implemented along with clear examples of common and divergent student outcomes across the twelve studies. 
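As a minimal illustration (not drawn from any of the reviewed studies), the best-of-attempts grading idea described above can be sketched as:

```python
def mastery_score(attempt_scores):
    """Record the best score across repeated attempts at an assessment;
    earlier failed attempts carry no penalty."""
    return max(attempt_scores, default=0)

# A student who fails twice before demonstrating mastery is scored the
# same as one who mastered the material on the first attempt.
print(mastery_score([40, 65, 95]))  # -> 95
print(mastery_score([95]))          # -> 95
```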
  3. The landscapes of many elementary, middle, and high school math classrooms have undergone major transformations over the last half-century, moving from drill-and-skill work to more conceptual reasoning and hands-on manipulative work. However, if you look at a college-level calculus class, you are likely to find that the main difference is the professor now has a whiteboard marker in hand rather than a piece of chalk. Some student work may be done on the computer, but much of it consists of the same type of repetitive skill-building problems. This should seem strange given the advancements in technology that allow more freedom than ever to build connections between different representations of a concept. Several class activities have been developed using a combination of approaches, depending on the topic. Topics covered in the activities include Riemann Sums, Accumulation, Center of Mass, Volumes of Revolution (Discs, Washers, and Shells), and Volumes of Similar Cross-section. All activities use student note outlines that are completed either in a whole-group interactive-lecture approach or in a group-work inquiry-based approach. Some of the activities use interactive graphs designed on desmos.com, and others use physical models that have been designed in OpenSCAD and 3D-printed for students to use in class. Tactile objects were developed because they should provide an advantage to students by enabling them to physically interact with the concepts being taught, deepening their involvement with the material, and providing more stimuli for the brain to encode the learning experience. Web-based activities were developed because the topics involved require substantial changes in graphical representations (e.g., limits with Riemann Sums). Assessment techniques for each topic include online homework, exams, and online concept questions with an explanation response area. These concept questions are intended to measure students’ ability to use multiple representations to answer the question, and are generally not computational in nature. Students are also given surveys to rate the overall activities, as well as finer-grained survey questions to try to elicit student thoughts on certain aspects of the models, websites, and activity sheets. We will report on student responses to the activity surveys, looking for common themes in students’ thoughts toward specific attributes of the activities. We will also compare relevant exam question responses and online concept question results, including common themes present or absent in student reasoning.
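For a flavor of the graphical-limit behavior these activities target, here is a small Riemann-sum sketch (the actual activities use desmos.com graphs and 3D-printed models, not this code):

```python
def riemann_sum(f, a, b, n, rule="midpoint"):
    """Approximate the area under f on [a, b] with n rectangles using a
    left, midpoint, or right sampling rule."""
    dx = (b - a) / n
    offset = {"left": 0.0, "midpoint": 0.5, "right": 1.0}[rule]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x ** 2  # exact area on [0, 1] is 1/3
for n in (4, 16, 64):
    print(n, round(riemann_sum(f, 0, 1, n), 6))
# The sums approach 1/3 as n grows: the limiting process students
# explore across graphical, numerical, and physical representations.
```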
  4. This work-in-progress paper describes a collaborative effort between engineering education and machine learning researchers to automate the analysis of written responses to conceptually challenging questions in mechanics. These qualitative questions are often used in large STEM classes to support active learning pedagogies; they require minimal calculation and focus on the application of underlying physical phenomena to various situations. Active learning pedagogies using this type of question have been demonstrated to increase student achievement (Freeman et al., 2014; Hake, 1998) and engagement (Deslauriers et al., 2011) of all students (Haak et al., 2011). To emphasize reasoning and sense-making, we use the Concept Warehouse (Koretsky et al., 2014), an audience response system where students provide written justifications to concept questions. Written justifications better prepare students for discussions with peers and with the whole class, and can also improve students’ answer choices (Koretsky et al., 2016a, 2016b). In addition to their use as a tool to foster learning, written explanations can also provide valuable information to concurrently assess that learning (Koretsky and Magana, 2019). In practice, however, there has been limited deployment of written justifications with concept questions, in part because they provide a daunting amount of information for instructors to process and for researchers to analyze. In this study, we describe an initial evaluation of large pre-trained generative sequence-to-sequence language models (Raffel et al., 2019; Brown et al., 2020) to automate the laborious coding process of student written responses. Adapting machine learning algorithms in this context is challenging, since each question targets specific concepts that elicit their own unique reasoning processes. This exploratory project seeks to use responses collected through the Concept Warehouse to identify viable strategies for adapting machine learning to support instructors and researchers in identifying salient aspects of student thinking and understanding with these conceptually challenging questions.
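As an illustration of the kind of automation being evaluated, here is a hypothetical sketch that prompts a pre-trained sequence-to-sequence model (T5 via Hugging Face transformers) to assign a reasoning code to a written justification. The checkpoint, prompt format, and label set are assumptions for illustration, not the project's actual setup; a model fine-tuned on coded responses would be needed to produce sensible labels:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Illustrative checkpoint; in practice the model would be fine-tuned
# on responses already coded by researchers.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def code_response(question, justification):
    """Generate a short label (e.g., correct / partial / misconception)
    for a student's written justification."""
    prompt = ("Label the student's reasoning as correct, partial, or "
              f"misconception. Question: {question} "
              f"Student answer: {justification}")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(code_response(
    "Two blocks of different mass are dropped from rest; which lands first?",
    "The heavier one, because gravity pulls harder on it.",
))
```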
  5. We reflect on our ongoing journey in the educational Cybersecurity Assessment Tools (CATS) Project to create two concept inventories for cybersecurity. We identify key steps in this journey and important questions we faced. We explain the decisions we made and discuss the consequences of those decisions, highlighting what worked well and what might have gone better. The CATS Project is creating and validating two concept inventories—conceptual tests of understanding—that can be used to measure the effectiveness of various approaches to teaching and learning cybersecurity. The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity; the Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity. Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts, but the CCA assumes greater technical background. Key steps include defining project scope, identifying the core concepts, uncovering student misconceptions, creating scenarios, drafting question stems, developing distractor answer choices, generating educational materials, performing expert reviews, recruiting student subjects, organizing workshops, building community acceptance, forming a team and nurturing collaboration, adopting tools, and obtaining and using funding. Creating effective MCQs is difficult and time-consuming, and cybersecurity presents special challenges. Because cybersecurity issues are often subtle, where the adversarial model and details matter greatly, it is challenging to construct MCQs for which there is exactly one best but non-obvious answer. We hope that our experiences and lessons learned may help others create more effective concept inventories and assessments in STEM.
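For a sense of how such an inventory might be scored, here is an illustrative sketch that reports a per-concept subscore for a 25-item instrument; the concept names and the item-to-concept mapping are hypothetical, not the actual CCI/CCA ones:

```python
from collections import Counter

# Hypothetical mapping: 25 items cycling through five core concepts.
ITEM_CONCEPT = {i: f"concept_{(i % 5) + 1}" for i in range(1, 26)}

def score_inventory(responses, answer_key):
    """Return the total number correct and a per-concept breakdown.
    Both arguments map item number -> chosen/correct option letter."""
    correct = [i for i in answer_key if responses.get(i) == answer_key[i]]
    return len(correct), dict(Counter(ITEM_CONCEPT[i] for i in correct))

answer_key = {i: "A" for i in range(1, 26)}               # placeholder key
responses = {i: ("A" if i <= 18 else "B") for i in range(1, 26)}
total, subscores = score_inventory(responses, answer_key)
print(total, subscores)  # 18 correct, broken down across the five concepts
```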