-
The purpose of this work was to test the inter-rater reliability (IRR) of a rubric used to grade technical reports in a senior-level chemical engineering laboratory course in which multiple instructors grade deliverables. The rubric consisted of fifteen constructs that gave students detailed guidance on instructor expectations with respect to report sections, formatting, and technical writing aspects such as audience, context, and purpose. Four student reports from previous years were scored using the rubric, and IRR was assessed for each construct using a two-way mixed, consistency, average-measures intraclass correlation (ICC). The instructors then met as a group to discuss their scoring and reasoning. Multiple revisions were made to the rubric based on instructor feedback and on constructs whose ICCs were rated as poor. When constructs rated fair or poor were combined, the ICCs improved. In addition, the overall score construct continued to be rated as excellent, indicating that while individual instructors may vary at the construct level, they evaluate the overall quality of a report consistently. A key lesson from this process was the value of instructor discussion of the reasoning behind their scores, and of an ‘instructor orientation’ involving discussion and practice with the rubric whenever there are multiple instructors or a change in instructors. The developed rubric has the potential for broad applicability to engineering laboratory courses with technical writing components and could be adapted to other technical writing genres.
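For illustration only, the sketch below computes the two-way mixed, consistency, average-measures ICC named in the abstract (ICC(3,k) in Shrout and Fleiss notation) from the two-way ANOVA mean squares. The report-by-rater score matrix is a made-up placeholder, not the study's data, and this is not the authors' code.

```python
# Minimal sketch: two-way mixed, consistency, average-measures ICC -- ICC(3,k).
# Assumes a complete targets-x-raters score matrix with no missing ratings.
import numpy as np

def icc_3k(scores: np.ndarray) -> float:
    """ICC(3,k): two-way mixed effects, consistency, average of k raters."""
    n, k = scores.shape                       # n targets (reports), k raters
    grand = scores.mean()
    row_means = scores.mean(axis=1)           # per-report means
    col_means = scores.mean(axis=0)           # per-rater means

    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-reports
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-raters
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Hypothetical example: 4 reports scored by 3 raters on one rubric construct.
scores = np.array([
    [4, 5, 4],
    [2, 3, 3],
    [5, 5, 4],
    [3, 3, 2],
], dtype=float)
print(f"ICC(3,k) = {icc_3k(scores):.3f}")
```

The same quantity is reported as ICC3k by common statistics packages (for example, pingouin's intraclass correlation routine), which can serve as a cross-check.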
-
Laboratory experimentation is a key component of the development of professional engineers. However, experiments conducted in chemical engineering laboratory classes are commonly more prescriptive than the problems faced by practicing engineers, who have the agency to make consequential decisions across the experiment and the communication of results. Thus, understanding how experiments in laboratory courses vary in the opportunities they offer students to make such decisions, and how students navigate higher-agency learning experiences, is important for preparing graduates ready to direct these practices. In this study, we sought to answer the following research question: What factors are measured by the Consequential Agency in Laboratory Experiments survey? To better understand student perceptions of their agency in relation to laboratory experiments, we developed an initial version of the Consequential Agency in Laboratory Experiments survey, following research-based survey development guidelines, and implemented it in six upper-division laboratory courses across two universities. We used exploratory factor analysis to investigate the validity of the survey data for measuring the relevant constructs of authenticity, agency in specific domains, responsibility, and opportunity to make decisions. We found strong support for items measuring agency as responsibility, authenticity, agency in the communication domain, agency in the experimental design domain, and opportunity to make decisions. These findings provide a foundation for developing a more precise survey capable of measuring agency across various laboratory experiment practices. Such a survey will enable future studies that investigate the impacts of increasing agency in just one domain versus several. In turn, this can help faculty develop higher-agency learning experiences that are more feasible to implement than authentic research experiences.
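As a companion to the abstract, the sketch below shows how an exploratory factor analysis of Likert-style survey responses might be run with the factor_analyzer package. The item names, five-factor target (matching the five supported constructs listed above), and randomly generated responses are hypothetical placeholders; this is not the authors' analysis.

```python
# Minimal sketch: exploratory factor analysis (EFA) of survey items,
# assuming the factor_analyzer package. Data below are placeholders.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n_students, n_items = 200, 12
# Placeholder 1-5 Likert responses; real data would come from the survey.
responses = pd.DataFrame(
    rng.integers(1, 6, size=(n_students, n_items)),
    columns=[f"item_{i + 1}" for i in range(n_items)],
)

# Oblique rotation, since agency-related factors are expected to correlate;
# five factors mirror the constructs described in the abstract.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin", method="minres")
fa.fit(responses)

loadings = pd.DataFrame(
    fa.loadings_,
    index=responses.columns,
    columns=[f"factor_{j + 1}" for j in range(5)],
)
print(loadings.round(2))
print("Proportion of variance per factor:", fa.get_factor_variance()[1].round(2))
```

In practice, items with low or cross-loading values would be flagged for revision, which is how an EFA like this informs the more precise survey the abstract anticipates.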