Title: Towards an Adaptive Learning Module for Materials Science: Comparing Expert Predictions to Student Performance
The emphasis on conceptual learning and the development of adaptive instructional design are both emerging areas in science and engineering education. Instructors are writing their own conceptual questions to promote active learning during class and utilizing pools of these questions in assessments. For adaptive assessment strategies, these questions need to be rated based on difficulty level (DL). Historically, DL has been determined from the performance of a suitable number of students. The research study reported here investigates whether instructors can save time by predicting the DL of newly made conceptual questions without the need for student data. In this paper, we report on the development of one component in an adaptive learning module for materials science – specifically on the topic of crystallography. The summative assessment element consists of five DL scales and 15 conceptual questions. This adaptive assessment directs students based on their previous performance and the DL of the questions. Our five expert participants are faculty members who have taught the introductory Materials Science course multiple times. They provided predictions for how many students would answer each question correctly during a two-step process. First, predictions were made individually without an answer key. Second, experts had the opportunity to revise their predictions after being provided an answer key in a group discussion. We compared expert predictions with actual student performance using results from over 400 students spanning multiple courses and terms. We found no clear correlation between expert predictions of the DL and the measured DL from students. Some evidence shows that discussion during the second step brought expert predictions closer to student performance. We suggest that, in determining the DL for conceptual questions, relying on predictions by experts who have taught the course is not a valid approach.
The findings in this paper can be applied to assessments in in-person, hybrid, and online settings and to subject matter beyond materials science.
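As a concrete illustration of the mechanics the abstract describes, the sketch below shows how a measured DL can be computed from student responses, binned onto a five-level scale, and used to route a student to the next question. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the binning rule, and the step-up/step-down routing rule are all illustrative choices.

```python
# Hedged sketch of a DL-based adaptive assessment.
# Assumptions (not from the paper): DL is the fraction of students who
# answered incorrectly, binned onto a 1..5 scale; routing steps the DL
# up after a correct answer and down after an incorrect one.

def measured_difficulty(correct_flags):
    """DL as the fraction of students who answered incorrectly.

    correct_flags: list of 1 (correct) / 0 (incorrect) per student.
    """
    return 1 - sum(correct_flags) / len(correct_flags)

def to_dl_scale(difficulty, levels=5):
    """Bin a difficulty in [0, 1] onto the 1..levels DL scale."""
    return min(levels, int(difficulty * levels) + 1)

def next_dl(current_dl, answered_correctly, levels=5):
    """Choose the DL of the next question from the previous answer."""
    step = 1 if answered_correctly else -1
    return max(1, min(levels, current_dl + step))

# Example: 3 of 4 students answered correctly -> difficulty 0.25 -> DL 2.
dl = to_dl_scale(measured_difficulty([1, 1, 1, 0]))
harder = next_dl(dl, answered_correctly=True)   # move up one level
```

The paper's comparison of expert-predicted DL to measured DL would then amount to correlating the experts' per-question estimates with values like `measured_difficulty` computed over the 400+ student results.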
Award ID(s):
2135190
NSF-PAR ID:
10352599
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
ASEE Annual Conference proceedings
ISSN:
1524-4644
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We reflect on our ongoing journey in the educational Cybersecurity Assessment Tools (CATS) Project to create two concept inventories for cybersecurity. We identify key steps in this journey and important questions we faced. We explain the decisions we made and discuss the consequences of those decisions, highlighting what worked well and what might have gone better. The CATS Project is creating and validating two concept inventories—conceptual tests of understanding—that can be used to measure the effectiveness of various approaches to teaching and learning cybersecurity. The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity; the Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity. Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts, but the CCA assumes greater technical background. Key steps include defining project scope, identifying the core concepts, uncovering student misconceptions, creating scenarios, drafting question stems, developing distractor answer choices, generating educational materials, performing expert reviews, recruiting student subjects, organizing workshops, building community acceptance, forming a team and nurturing collaboration, adopting tools, and obtaining and using funding. Creating effective MCQs is difficult and time-consuming, and cybersecurity presents special challenges. Because cybersecurity issues are often subtle, where the adversarial model and details matter greatly, it is challenging to construct MCQs for which there is exactly one best but non-obvious answer. We hope that our experiences and lessons learned may help others create more effective concept inventories and assessments in STEM.
  2. This project aims to enhance students' learning in foundational engineering courses through oral exams based on research conducted at the University of California San Diego. The adaptive dialogic nature of oral exams gives instructors an opportunity to better understand students' thought processes, thus holding promise for improving both assessments of conceptual mastery and students' learning attitudes and strategies. However, the issues of oral exam reliability, validity, and scalability have not been fully addressed. As with any assessment format, careful design is needed to maximize the benefits of oral exams to student learning and minimize the potential concerns. Compared to traditional written exams, oral exams have a unique design space involving a large range of parameters, including the type of oral assessment questions, grading criteria, how oral exams are administered, how questions are communicated and presented to the students, how feedback is provided, and other logistical considerations such as the weight of the oral exam in the overall course grade and the frequency of oral assessment. To address scalability for high-enrollment classes, a key element of the project is the involvement of the entire instructional team (instructors and teaching assistants). The project will therefore create a new training program to prepare faculty and teaching assistants to administer oral exams, including considerations of issues such as bias and students with disabilities. The purpose of this study is to create a framework to integrate oral exams in core undergraduate engineering courses, complementing existing assessment strategies, by (1) creating a guideline to optimize the oral exam design parameters for the best student learning outcomes; and (2) creating a new training program to prepare faculty and teaching assistants to administer oral exams. The project will implement an iterative design strategy using an evidence-based approach to evaluation.
The effectiveness of the oral exams will be evaluated by tracking student improvements on conceptual questions across consecutive oral exams in a single course, as well as across other courses. Since its start in January 2021, the project is well underway. In this poster, we present a summary of the results from year 1: (1) exploration of the oral exam design parameters and their impact on students' engagement and perception of oral exams toward learning; (2) the effectiveness of the newly developed instructor and teaching assistant training programs; (3) the development of evaluation instruments to gauge the project's success; and (4) instructor and teaching assistant experiences and perceptions.
  3. Since the 2014 high-profile meta-analysis of undergraduate STEM courses, active learning has become a standard in higher education pedagogy. One way to provide active learning is through the flipped classroom. However, finding suitable pre-class learning activities to improve student preparation and the subsequent classroom environment, including student engagement, can present a challenge in the flipped modality. To address this challenge, adaptive learning lessons were developed for pre-class learning for a course in Numerical Methods. The lessons would then be used as part of a study to determine their cognitive and affective impacts. Before the study could begin, well-thought-out adaptive lessons had to be constructed. This paper discusses developing, refining, and revising the adaptive learning platform (ALP) lessons for pre-class learning in a Numerical Methods flipped course. In a prior pilot study at a large public southeastern university, the first author had developed ALP lessons for the pre-class learning for four (Nonlinear Equations, Matrix Algebra, Regression, Integration) of the eight topics covered in a Numerical Methods course. In the current follow-on study, the first author and two other instructors who teach Numerical Methods, one from a large southwestern urban university and another from an HBCU, collaborated on developing the adaptive lessons for the whole course. The work began in Fall 2020 by enumerating the various chapters and breaking each one into individual lessons. Each lesson would include five sections (introduction, learning objectives, video lectures, textbook content, assessment). The three instructors met semi-monthly to discuss the content that would form each lesson. The main discussion of the meetings centered on what a student would be expected to learn before coming to class, choosing appropriate content, agreeing on prerequisites, and choosing and making new assessment questions.
Lessons were then created by the first author and his student team using a commercially available platform called RealizeIT. The content was tested by learning assistants and instructors. It is important to note that most, if not all, of the content, such as videos and textbook material, was available from prior work. The new adaptive lessons and the revised existing ones were completed in December 2020. The adaptive lessons were tested for implementation in Spring 2021 at the first author's university and accounted for 15% of the students' grade calculation. Questions asked by students during office hours, on the LMS discussion board, and via email while doing the lessons were used to update content, clarify questions, and revise hints offered by the platform. For example, all videos in the ALP lessons were updated to HD quality based on student feedback. In addition, comments from the end-of-semester surveys conducted by an independent assessment analyst were collated to revise the adaptive lessons further. Examples include changing the textbook content format from an embedded PDF file to HTML to improve quality and meet web accessibility standards. The paper walks the reader through the content of a typical lesson. It also shows the type of data collected by the adaptive learning platform via three examples of student interactions with a single lesson.