- Award ID(s):
- 2047756
- PAR ID:
- 10401772
- Date Published:
- Journal Name:
- 27th ACM Conference on Innovation and Technology in Computer Science Education
- Page Range / eLocation ID:
- 221 to 227
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This is a board presentation at the 2023 ASEE Annual Conference describing the HSI Implementation and Evaluation Project: Commitment to Learning Instilled by a Mastery-Based Undergraduate Program (CLIMB-UP). CLIMB-UP is an NSF IUSE:HSI project centered on redesigning courses with high non-completion rates (C- or lower) that have implications for students’ graduation, transferability, and retention. Despite decades of effort to create active, inquiry-based learning practices in classrooms, our institution continues to see equity gaps and many required courses with non-completion rates exceeding 50%. Grading practices have been identified as one of the main culprits in the persistence of equity gaps. As a Hispanic-Serving Institution, we recognize and value the diversity of experience that our students bring to our campuses, and we are committed to building on their strengths by creating data-driven, equitable grading practices that give students space to take risks, bring alternative viewpoints to our classrooms, and be rewarded for doing so. We believe a Mastery-Based Grading (MBG) approach can address problems that a traditional grading approach has caused. The CLIMB-UP project is building the infrastructure to support and train STEM faculty (both tenure-line and adjunct) to redesign and teach an MBG course, and is conducting research on faculty experiences and on changes in student attitudes, mindsets, and outcomes.
-
This project aims to enhance students’ learning in foundational engineering courses through oral exams, based on research conducted at the University of California San Diego. The adaptive, dialogic nature of oral exams gives instructors an opportunity to better understand students’ thought processes, and thus holds promise for improving both assessments of conceptual mastery and students’ learning attitudes and strategies. However, the issues of oral exam reliability, validity, and scalability have not been fully addressed. As with any assessment format, careful design is needed to maximize the benefits of oral exams to student learning and minimize the potential concerns. Compared to traditional written exams, oral exams have a unique design space involving a large range of parameters, including the type of oral assessment questions, the grading criteria, how oral exams are administered, how questions are communicated and presented to students, how feedback is provided, and logistical aspects such as the weight of the oral exam in the overall course grade and the frequency of oral assessment. To address scalability in high-enrollment classes, a key element of the project is the involvement of the entire instructional team (instructors and teaching assistants). The project will therefore create a new training program to prepare faculty and teaching assistants to administer oral exams, including consideration of issues such as bias and students with disabilities. The purpose of this study is to create a framework for integrating oral exams in core undergraduate engineering courses, complementing existing assessment strategies by (1) creating a guideline for optimizing the oral exam design parameters for the best student learning outcomes; and (2) creating a new training program to prepare faculty and teaching assistants to administer oral exams. The project will implement an iterative design strategy using an evidence-based approach to evaluation.
The effectiveness of the oral exams will be evaluated by tracking student improvement on conceptual questions across consecutive oral exams in a single course, as well as across other courses. Since its start in January 2021, the project is well underway. In this poster, we present a summary of the results from year 1: (1) exploration of the oral exam design parameters and their impact on students’ engagement and perception of oral exams for learning; (2) the effectiveness of the newly developed instructor and teaching assistant training programs; (3) the development of evaluation instruments to gauge the project’s success; and (4) instructor and teaching assistant experiences and perceptions.
-
This study reports the development, validation, and implementation of a practical exam to assess science practices in an introductory physics laboratory. The exam asks students to design and conduct an investigation, perform data analysis, and write an argument. The exam was validated with advanced physics undergraduate students and with undergraduate students in introductory physics lecture courses. Face validity was established by administering the practical in 65 laboratory sections over the course of three semesters. We found that the greatest source of variability in this exam was instructor grading issues, and we discuss the implications of this result for our ongoing assessment efforts.
-
Specifications grading is a student-centered assessment method that enables flexibility and opportunities for revision. Here, we describe the first known full implementation of specifications grading in an upper-division chemical biology course. Because relevant knowledge in this discipline develops rapidly, the overarching goal of this class is to prepare students to interpret and communicate about current research. In the past, a conventional points-based assessment method made it challenging to ensure that satisfactory standards for student work were consistently met, particularly for comprehensive written assignments. Specifications grading was chosen because its core tenet requires students to demonstrate minimum learning objectives to achieve a passing grade and to complete additional content of increased cognitive complexity to achieve higher grades. This strict adherence to determining grades based on demonstrated skills is balanced by opportunities for revision and flexibility in assignment deadlines. These options are made manageable for the instructors through a token economy with a limited number of tokens that students can choose to use when needed. Over the duration of the course, a validated survey on self-efficacy showed slight positive trends, student comprehension and demonstrated skills qualitatively improved, and final grade distributions were not negatively affected. Instructors noticed that discussions with students focused more on course concepts and feedback than on grades, while overall grading time was reduced. Responses to university-administered student feedback surveys revealed some self-reported reduction in anxiety, as well as increased confidence in managing time and course material. Recommendations are provided on how to continue to improve the overall teaching and learning experience for both instructors and students.
-
Effective writing is important for communicating science ideas and for writing-to-learn in science. This paper investigates lab reports from a large-enrollment college physics course that integrates scientific reasoning and science writing. While analytic rubrics have been shown to define expectations more clearly for students and to improve the reliability of assessment, there has been little investigation of how well analytic rubrics serve students and instructors in large-enrollment science classes. Unsurprisingly, we found that grades administered by teaching assistants (TAs) do not correlate with reliable post-hoc assessments from trained raters. More important, we identified lost learning opportunities for students and misinformation for instructors about students’ progress. We believe our methodology for achieving post-hoc reliability is straightforward enough to be used in classrooms. A key element is the development of finer-grained grading rubrics that are aligned with the rubrics provided to students to define expectations, but which reduce the subjectivity of judgments and the grading time. We conclude that the use of dual rubrics, one to elicit independent reasoning from students and one to clarify grading criteria, could improve the reliability and accountability of lab report assessment, which could in turn elevate the role of lab reports in the instruction of scientific inquiry.