Quantitative reasoning is an essential learning objective of physics instruction. The Physics Inventory for Quantitative Literacy (PIQL) is a published assessment tool, developed for calculus-based physics courses, that helps instructors evaluate whether their students learn to reason this way. However, the PIQL is not appropriate for the large population of physics students who are not enrolled in, or have not completed, calculus. To address this need, we have developed the General Equation-based Reasoning inventory of QuaNtity (GERQN). The GERQN is an adaptation of the PIQL and is appropriate for most physics students; the only requirement is that students have taken algebra, so that they are familiar with the use of variables, negative quantities, and linear functions. In this paper, we present the development and validation of the GERQN, along with a short discussion of how instructors can use the GERQN to help their students learn.
A New Paradigm for Research-Based Assessment Development
Research-based assessments have a productive and storied history in PER. While useful for conducting research on student learning, their utility is limited for instructors interested in improving their own courses. We have developed a new assessment design process that leverages three-dimensional learning, evidence-centered design, and self-regulated learning to deliver actionable feedback to instructors about supporting their students' learning. We are using this approach to design the Thermal and Statistical Physics Assessment (TaSPA), which also allows instructors to choose learning goals that align with their teaching. Perhaps more importantly, the system will be fully automated once complete, making the assessment scalable with minimal burden on instructors and researchers. This work represents an advancement in how we assess physics learning at large scale and in how the PER community can better support physics instructors and students.
- PAR ID: 10446617
- Date Published:
- Journal Name: Physics Education Research Conference Proceedings
- Page Range / eLocation ID: 279 to 284
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Pamucar, Dragan (Ed.)
Critical thinking is the process by which people make decisions about what to trust and what to do. Many undergraduate courses, such as those in biology and physics, include critical thinking as an important learning goal. Assessing critical thinking, however, is non-trivial, with mixed recommendations for how to assess critical thinking as part of instruction. Here we evaluate the efficacy of assessment questions that probe students' critical thinking skills in the context of biology and physics. We use two research-based standardized critical thinking instruments, the Biology Lab Inventory of Critical Thinking in Ecology (Eco-BLIC) and the Physics Lab Inventory of Critical Thinking (PLIC). These instruments present experimental scenarios and pose questions asking students to evaluate what to trust and what to do regarding the quality of experimental designs and data. Using more than 3000 student responses from over 20 institutions, we sought to understand which features of the assessment questions elicit student critical thinking. Specifically, we investigated (a) how students critically evaluate aspects of research studies in biology and physics when they evaluate one study at a time versus comparing and contrasting two, and (b) whether individual evaluation questions are needed to encourage students to engage in critical thinking when comparing and contrasting. We found that students are more critical when making comparisons between two studies than when evaluating each study individually. Also, compare-and-contrast questions are sufficient for eliciting critical thinking, with students providing similar answers regardless of whether the individual evaluation questions are included.
This research offers new insight into the types of assessment questions that elicit critical thinking at the introductory undergraduate level; specifically, we recommend that instructors incorporate more compare-and-contrast questions related to experimental design in their courses and assessments.
-
Students are often tasked with activities in which they must learn skills that are tangential to the learning outcomes of a course, such as learning new software. The issue is that instructors may not have the time or the expertise to help students with such tangential learning. In this paper, we explore how AI-generated feedback can provide assistance. Specifically, we study this technology in the context of a constructionist curriculum in which students learn about experimental research through the creation of a gamified experiment. The AI-generated feedback provides a formative assessment of the narrative design of student-designed gamified experiments, which is important for creating an engaging experience. We find that students engaged critically with the feedback, but that responses varied among students. We discuss the implications for AI-generated feedback systems for tangential learning.
-
This project aims to enhance students' learning in foundational engineering courses through oral exams, based on research conducted at the University of California San Diego. The adaptive, dialogic nature of oral exams gives instructors an opportunity to better understand students' thought processes, and thus holds promise for improving both assessments of conceptual mastery and students' learning attitudes and strategies. However, the issues of oral exam reliability, validity, and scalability have not been fully addressed. As with any assessment format, careful design is needed to maximize the benefits of oral exams to student learning and minimize the potential concerns. Compared to traditional written exams, oral exams have a unique design space involving a large range of parameters, including the type of oral assessment questions, grading criteria, how oral exams are administered, how questions are communicated and presented to students, how feedback is provided, and other logistical considerations such as the weight of the oral exam in the overall course grade and the frequency of oral assessment. To address scalability in high-enrollment classes, a key element of the project is the involvement of the entire instructional team (instructors and teaching assistants). The project will therefore create a new training program that prepares faculty and teaching assistants to administer oral exams, including consideration of issues such as bias and students with disabilities. The purpose of this study is to create a framework for integrating oral exams into core undergraduate engineering courses, complementing existing assessment strategies, by (1) creating a guideline for optimizing the oral exam design parameters for the best student learning outcomes; and (2) creating a new training program to prepare faculty and teaching assistants to administer oral exams. The project will implement an iterative design strategy using an evidence-based approach to evaluation.
The effectiveness of the oral exams will be evaluated by tracking student improvement on conceptual questions across consecutive oral exams in a single course, as well as across other courses. Since its start in January 2021, the project is well underway. In this poster, we present a summary of the results from year 1: (1) exploration of the oral exam design parameters and their impact on students' engagement and perception of oral exams for learning; (2) the effectiveness of the newly developed instructor and teaching assistant training programs; (3) the development of the evaluation instruments used to gauge the project's success; and (4) instructors' and teaching assistants' experiences and perceptions.
-
Physics instructors and education researchers use research-based assessments (RBAs) to evaluate students' preparation for physics courses. This preparation can cover a wide range of constructs, including mathematics and physics content. Using separate mathematics and physics RBAs consumes course time. We are developing a new RBA for introductory mechanics as an online test that uses both computerized adaptive testing and cognitive diagnostic models. This design allows the adaptive RBA to assess mathematics and physics content knowledge within a single assessment. In this article, we used an evidence-centered design framework to examine how well our models of the skills students develop in physics courses fit data from three mathematics RBAs. Our dataset came from the LASSO platform and includes 3,491 responses from the Calculus Concept Assessment, the Calculus Concept Inventory, and the Pre-calculus Concept Assessment. Our model included five skills: apply vectors, conceptual relationships, algebra, visualizations, and calculus. The "deterministic inputs, noisy 'and' gate" (DINA) analyses demonstrated a good fit for the five skills, and the classification accuracies for the skills were satisfactory. Including items from the three mathematics RBAs in the item bank for the adaptive RBA will provide a flexible assessment of these skills across mathematics and physics content areas that can adapt to instructors' needs.
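The DINA model named in this abstract has a simple item-response rule: a student answers an item correctly with probability 1 − slip if they have mastered every skill the item requires, and with the guessing probability otherwise. A minimal sketch of that rule is below; the Q-matrix row, the ordering of the five skills, and the guess/slip values are hypothetical illustrations, not parameters from the study.

```python
def dina_prob_correct(alpha, q_row, guess, slip):
    """DINA ("deterministic inputs, noisy 'and' gate") response rule.

    alpha  -- student skill-mastery vector (1 = mastered, 0 = not)
    q_row  -- Q-matrix row for the item (1 = skill required)
    guess  -- probability of answering correctly without the skills
    slip   -- probability of answering incorrectly despite the skills
    """
    # eta = 1 iff the student has mastered every skill the item requires
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1 - slip) if eta else guess


# Hypothetical skill order: [vectors, conceptual, algebra, visualizations, calculus]
q_row = [0, 0, 1, 0, 1]   # item requires algebra and calculus
alpha = [1, 0, 1, 0, 1]   # student has mastered algebra and calculus
print(dina_prob_correct(alpha, q_row, guess=0.2, slip=0.1))  # prints 0.9
```

Because the "and" gate is deterministic, mastering only some of an item's required skills gives the same correct-response probability as mastering none; only the guess and slip parameters introduce noise.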