Title: Development of the Assessment of Student Knowledge of Green Chemistry Principles (ASK-GCP)
As implementation of green chemistry in university-level courses increases, it is vital that educators have a tool to rapidly measure student knowledge of green chemistry principles. We report the development of the Assessment of Student Knowledge of Green Chemistry Principles (ASK-GCP) and an evaluation of its sensitivity and effectiveness for measuring student knowledge of green chemistry. The 24-item true-false instrument was given to a total of 448 students to gather evidence of its reliability, validity, and sensitivity. The instrument proved sensitive in distinguishing known groups with different levels of green chemistry knowledge and instructional exposure, and it detected gains in green chemistry knowledge between pre- and post-instruction administrations. Psychometric analysis revealed that the range of item difficulty matches the ability range of the sample. These findings verify that the ASK-GCP is an efficient and accurate instrument for measuring student knowledge of green chemistry principles.
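The abstract's psychometric claims rest on standard classical test theory quantities: per-item difficulty (proportion correct) and an internal-consistency reliability coefficient such as KR-20 for dichotomous true-false items. The sketch below illustrates those computations in Python; it is not the authors' analysis code, and the response matrix is randomly generated purely to fix the shapes (448 students by 24 items).

```python
import numpy as np

# Hypothetical scored responses: rows = students, columns = items,
# 1 = correct, 0 = incorrect (448 students x 24 items, as in the study).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(448, 24))

# Classical item difficulty: proportion of students answering each item correctly.
item_difficulty = responses.mean(axis=0)

# Person "ability" proxy: each student's proportion-correct total score.
person_scores = responses.mean(axis=1)

# KR-20 reliability for dichotomous items (a special case of Cronbach's alpha).
k = responses.shape[1]
p, q = item_difficulty, 1 - item_difficulty
total_var = responses.sum(axis=1).var(ddof=1)
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)

print(f"item difficulty range: {item_difficulty.min():.2f} to {item_difficulty.max():.2f}")
print(f"person score range:    {person_scores.min():.2f} to {person_scores.max():.2f}")
print(f"KR-20 reliability:     {kr20:.2f}")
```

Comparing the spread of item difficulties against the spread of person scores is one quick check that item difficulty matches the sample's ability range; with the random placeholder data above, KR-20 will hover near zero, whereas a functioning instrument would show substantially higher internal consistency.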
Award ID(s):
1852045
PAR ID:
10527479
Author(s) / Creator(s):
Publisher / Repository:
Royal Society of Chemistry
Date Published:
Journal Name:
Chemistry Education Research and Practice
Volume:
23
Issue:
3
ISSN:
1109-4028
Page Range / eLocation ID:
531 to 544
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This research paper describes the development of an assessment instrument for use with middle school students that provides insight into students’ interpretive understanding by looking at early indicators of developing expertise in students’ responses to solution generation, reflection, and concept demonstration tasks. We begin by detailing a synthetic assessment model that served as the theoretical basis for assessing specific thinking skills. We then describe our process of developing test items by working with a Teacher Design Team (TDT) of instructors in our partner school system to set guidelines that would better orient the assessment in that context, working within the framework of standards and disciplinary core ideas enumerated in the Next Generation Science Standards (NGSS). We next describe our process of refining the assessment from 17 items across three separate item pools to a final total of three open-response items. We then provide evidence for the validity and reliability of the assessment instrument from the standards of (1) content, (2) meaningfulness, (3) generalizability, and (4) instructional sensitivity. As part of the discussion of generalizability and instructional sensitivity, we detail a study carried out in our partner school system in the fall of 2019. The instrument was administered to students in treatment (n = 201) and non-treatment (n = 246) groups; the former participated in a two-to-three-week, NGSS-aligned experimental instructional unit introducing the principles of engineering design through the Imaginative Education teaching approach, while the latter group was taught using the district’s existing engineering design curriculum. Statistical analysis of student responses showed that the interrater reliability of the scoring procedures was good to excellent, with intraclass correlation coefficients ranging from .72 to .95. To gauge the instructional sensitivity of the assessment instrument, a series of non-parametric comparative analyses (independent two-group Mann-Whitney tests) was carried out. These found statistically significant differences between treatment and non-treatment student responses for the outcomes of fluency and elaboration, but not reflection.
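Both analyses named in this abstract, intraclass correlation coefficients for interrater reliability and independent two-group Mann-Whitney tests for treatment comparisons, are available in standard Python statistics libraries. A minimal sketch of the Mann-Whitney comparison follows; the score arrays are invented stand-ins with the study's group sizes, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical rubric scores on one outcome (e.g., fluency) for the
# treatment (n = 201) and non-treatment (n = 246) groups.
treatment = rng.normal(3.2, 1.0, size=201)
non_treatment = rng.normal(2.8, 1.0, size=246)

# Independent two-group Mann-Whitney U test (non-parametric).
u_stat, p_value = mannwhitneyu(treatment, non_treatment, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```

The non-parametric test is a natural choice here because rubric scores are ordinal, so a t-test's normality assumption may not hold; the interrater ICCs could be computed analogously, for example with the pingouin package's intraclass_corr function.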
  2. Education researchers often compare performance across race and gender on research-based assessments of physics knowledge to investigate the impacts of racism and sexism on physics student learning. These investigations' claims rely on research-based assessments providing reliable, unbiased measures of student knowledge across social identity groups. We used classical test theory and differential item functioning (DIF) analysis to examine whether the items on the Force Concept Inventory (FCI) provided unbiased data across social identifiers for race, gender, and their intersections. The data were accessed through the Learning About STEM Student Outcomes platform and included posttest responses from 4,848 students in 152 calculus-based introductory physics courses at 16 institutions. The results indicated that the majority of items on the FCI (22) were biased toward a group. These results point to the need for instrument validation to account for item bias and for the identification or development of fair research-based assessments.
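Differential item functioning of the kind reported here is commonly screened with logistic regression: each item's correctness is modeled on the total score (a matching proxy for knowledge), group membership, and their interaction. A significant group coefficient after matching indicates uniform DIF; a significant interaction indicates non-uniform DIF. The sketch below is a generic illustration of that procedure with simulated data, not the authors' analysis, which may have used a different DIF method.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500

# Simulated data: total score on a 30-item test (matching variable),
# a binary group indicator, and one item's responses with built-in
# uniform DIF (the 0.5 * group term).
total = rng.integers(0, 31, size=n).astype(float)
group = rng.integers(0, 2, size=n).astype(float)
logit = -4.0 + 0.25 * total + 0.5 * group
item_correct = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic-regression DIF: test the group effect conditional on total score.
X = sm.add_constant(np.column_stack([total, group, total * group]))
fit = sm.Logit(item_correct, X).fit(disp=0)
print(fit.summary(xname=["const", "total", "group", "total_x_group"]))
```

Repeating this per item (with a multiple-comparison correction) flags the biased items; intersectional analyses replace the binary group indicator with indicators for intersecting race-gender groups.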
  3. Makerspaces have become a common structure within engineering education programs. The spaces are used in a wide range of configurations but are typically intended to facilitate student collaboration, communication, creativity, and critical thinking, giving students the opportunity to learn 21st-century skills and develop a deeper understanding of the processes of engineering. Makerspace structure, layout, and use have been fairly well researched, yet the impact of makerspaces on student learning is understudied, partly because of a lack of tools to measure student learning in these spaces. We developed a survey tool to assess undergraduate engineering students’ perceptions and learning in makerspaces, considering levels of students’ motivation, professional identity, engineering knowledge, and belongingness in the context of makerspaces. The survey consists of multiple positively phrased (supporting a condition) and some negatively phrased (refuting a condition) items correlated to each of our four constructs. Our final survey contained 60 selected-response items, including demographic questions. We vetted the instrument with an advisory panel for an additional level of validation and piloted the survey with undergraduate engineering students at two universities, collecting completed responses from 196 participants. Our reliability analysis and additional statistical calculations showed that the tool is statistically sound and effectively gathers the data the instrument was designed to measure.
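One detail in this abstract matters for anyone reusing the instrument: negatively phrased (reverse-keyed) items must be reverse-coded before computing scale scores or internal-consistency statistics such as Cronbach's alpha. A minimal sketch with an invented 5-point Likert response matrix (not the authors' data or item keys):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 5-point Likert responses: 196 respondents x 6 items
# belonging to one construct (e.g., belongingness).
items = rng.integers(1, 6, size=(196, 6)).astype(float)
reverse_keyed = [2, 5]   # assumed indices of negatively phrased items

# Reverse-code: on a 1-5 scale, x -> 6 - x.
items[:, reverse_keyed] = 6 - items[:, reverse_keyed]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Skipping the reverse-coding step deflates alpha and distorts construct scores, which is why mixed-keyed surveys like this one call out the item phrasing explicitly.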
  4. The dimensionality of the epistemic orientation survey (EOS) was examined across four occasions with item factor analysis (IFA). Because of an emphasis on the knowledge-generation aspect of epistemic orientation (EO), four factors were selected and built into a short form of the EOS (EOS-SF): knowledge generation, knowledge replication, epistemic nature of knowledge, and student ability. To track the stability of the factor structure of each EOS-SF factor, longitudinal measurement invariance models were fit. Partial measurement invariance was obtained for each of the four factors of the EOS-SF. This study provides an example of ongoing instrument development in the field of applied assessment research.
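The exploratory side of an item factor analysis like this can be prototyped with the factor_analyzer package; full longitudinal invariance testing additionally requires structural equation modeling software (e.g., lavaan in R or semopy in Python). The sketch below fits a four-factor model to simulated data whose structure merely mimics the EOS-SF's four factors, here with four items each; all shapes and loadings are invented.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(4)
n_respondents, n_factors, items_per_factor = 300, 4, 4
n_items = n_factors * items_per_factor

# Simulated responses: each block of 4 items loads on one latent factor.
latent = rng.normal(size=(n_respondents, n_factors))
loading_pattern = np.kron(np.eye(n_factors), np.ones((1, items_per_factor)))
observed = latent @ loading_pattern + rng.normal(0.0, 0.8, size=(n_respondents, n_items))

# Exploratory item factor analysis with an oblique rotation.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(observed)
print(np.round(fa.loadings_, 2))   # recovered loading matrix
```

Longitudinal invariance testing then fits the same measurement model at each occasion and constrains loadings (metric) and intercepts (scalar) step by step, comparing fit at each step; "partial invariance", as reported here, means some but not all of those constraints hold across occasions.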