Title: Comparing the Use of Two Different Approaches to Assess Teachers’ Knowledge of Models and Modeling in Science Teaching
Science teacher knowledge for effective teaching consists of multiple knowledge bases, one of which includes science content knowledge and pedagogical knowledge. With the inclusion of science and engineering practices in the US national science education standards, teachers' content knowledge extends beyond subject matter knowledge into how scientists use practices for scientific inquiry. This study compares two approaches to constructing and validating two versions of a survey intended to measure the construct of teachers' knowledge of models and modeling in science teaching. The first version, a 24-item Likert-scale survey containing content and pedagogical knowledge items, could not distinguish respondents at different knowledge levels, and validation through factor analysis indicated that the content and pedagogical knowledge items could not be separated. Findings from validating the first survey informed revisions to the second version, a 25-item multiple-choice instrument. The second survey used a competence model framework for models and modeling to specify its items, and results from exploratory factor analysis indicated that this approach to assessing the construct was more appropriate. Recommendations for assessing teachers' knowledge of science practices using competence models, and points to consider in survey design, including the choice between norm-referenced and criterion-referenced tests, are discussed.
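The exploratory factor analysis described above can be approximated with open-source tools. The sketch below is a minimal, hypothetical illustration in Python using the factor_analyzer package, not the authors' actual analysis: the response file name, the item columns, and the two-factor choice are placeholder assumptions.

```python
# Minimal EFA sketch (not the authors' code): assumes a CSV of item responses,
# one column per survey item, one row per respondent.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Check whether the data are suitable for factor analysis.
chi_sq, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett p = {p_value:.4f}, overall KMO = {kmo_overall:.2f}")

# Unrotated solution to inspect eigenvalues (how many factors to retain).
fa_unrotated = FactorAnalyzer(rotation=None)
fa_unrotated.fit(responses)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

# Rotated solution; the two-factor choice here is illustrative only
# (e.g., hypothesized content vs. pedagogical knowledge dimensions).
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(responses)
print(pd.DataFrame(fa.loadings_, index=responses.columns).round(2))
```

Cross-loading items or a loading pattern that fails to separate hypothesized dimensions, as reported for the first survey version, would show up directly in the rotated loading matrix printed at the end.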
Award ID(s): 2101590
NSF-PAR ID: 10441200
Author(s) / Creator(s):
Date Published:
Journal Name: Education Sciences
Volume: 13
Issue: 4
ISSN: 2227-7102
Page Range / eLocation ID: 405
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract
    Background

    The Science Teaching Efficacy Belief Instrument A (STEBI-A; Riggs & Enochs, 1990, Science Education, 74(6), 625-637) has been the dominant measure of in-service science teachers' self-efficacy and outcome expectancy for nearly 30 years. However, concerns have arisen about aspects of the STEBI-A, including its wording, validity, reliability, and dimensionality. In the present study, we revised the STEBI-A by addressing many of the concerns research has identified and developed a new instrument called the T-STEM Science Scale. The T-STEM Science Scale was reviewed by expert panels and piloted before being administered to 727 elementary and secondary science teachers. A combination of classical test theory (CTT) and item response theory (IRT) approaches was used to validate the instrument, including multidimensional Rasch analysis and confirmatory factor analysis.

    Results

    Based on the results, the negatively worded items were found to be problematic and were removed from the instrument. We also found that the three-dimensional model fit our data best, in line with our theoretical conceptualization. Based on the literature review and our analysis, the personal science teaching efficacy beliefs (PSTEB) construct remained intact, while the original outcome expectancy construct was renamed science teacher responsibility for learning outcomes beliefs (STRLOB) and divided into two dimensions: above- and below-average student interest or performance. The T-STEM Science Scale also showed satisfactory reliability.

    Conclusions

    Through the development and validation of the T-STEM Science Scale, we have addressed some critical concerns raised in prior research on the STEBI-A. Psychometrically, the refined wording, item removal, and separation into three constructs have resulted in better reliability values than the STEBI-A. While two distinct theoretical foundations are now used to explain the constructs of the new T-STEM instrument, prior literature and our empirical results point to the important interrelationship of these constructs. Retaining these constructs preserves a bridge, though an imperfect one, to the large body of legacy research using the STEBI-A.
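For readers who want to reproduce the kind of internal-consistency evidence reported for a scale like this, the sketch below computes Cronbach's alpha for a block of Likert items with plain NumPy. It is a generic illustration on invented data, not the T-STEM analysis itself.

```python
# Illustrative reliability check (not the authors' analysis): Cronbach's alpha
# for a set of Likert items, computed from a respondents-by-items array.
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores is a 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # per-item variance
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Invented example: 5 respondents rating 4 Likert items on a 1-5 scale.
demo = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")
```

In practice, alpha would be computed separately for each of the scale's dimensions rather than for the instrument as a whole.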

     
  2. In teaching mechanics, we use multiple representations of vectors to develop concepts and analysis techniques. These representations include pictorials, diagrams, symbols, numbers and narrative language. Through years of study as students, researchers, and teachers, we develop a fluency rooted in a deep conceptual understanding of what each representation communicates. Many novice learners, however, struggle to gain such understanding and rely on superficial mimicry of the problem-solving procedures we demonstrate in examples. The term representational competence refers to the ability to interpret, switch between, and use multiple representations of a concept as appropriate for learning, communication and analysis. In engineering statics, an understanding of what each vector representation communicates and how to use different representations in problem solving is important to the development of both conceptual and procedural knowledge. Science education literature identifies representational competence as a marker of true conceptual understanding. This paper presents development work for a new assessment instrument designed to measure representational competence with vectors in an engineering mechanics context. We developed the assessment over two successive terms in statics courses at a community college, a medium-sized regional university, and a large state university. We started with twelve multiple-choice questions that survey the vector representations commonly employed in statics. Each question requires the student to interpret and/or use two or more different representations of vectors and requires no calculation beyond single-digit integer arithmetic. Distractor answer choices include common student mistakes and misconceptions drawn from the literature and from our teaching experience. We piloted these twelve questions as a timed section of the first exam in fall 2018 statics courses at both Whatcom Community College (WCC) and Western Washington University. Analysis of students' unprompted use of vector representations on the open-ended problem-solving section of the same exam provides evidence of the assessment's validity as a measurement instrument for representational competence. We found a positive correlation between students' accurate and effective use of representations and their score on the multiple-choice test. We gathered additional validity evidence by reviewing student responses on an exam wrapper reflection. We used item difficulty and item discrimination scores (point-biserial correlation) to eliminate two questions and revised the remaining questions to improve clarity and discriminatory power. We administered the revised version in two contexts: (1) again as part of the first exam in the winter 2019 statics course at WCC, and (2) as an extra credit opportunity for statics students at Utah State University. This paper includes sample questions from the assessment to illustrate the approach. The full assessment is available to interested instructors and researchers through an online tool.
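The item difficulty and point-biserial discrimination statistics mentioned above are straightforward to compute from scored responses. The sketch below is a generic illustration with invented data, not the instrument's actual analysis; it assumes a 0/1 scored students-by-items matrix and uses the corrected (item-rest) form of the point-biserial correlation.

```python
# Generic item-analysis sketch (not the authors' code): item difficulty and
# corrected point-biserial discrimination from a 0/1 scored response matrix.
import numpy as np

def item_statistics(scored):
    """scored: 2-D array of 0/1 values, rows = students, columns = items."""
    scored = np.asarray(scored, dtype=float)
    n_items = scored.shape[1]
    difficulty = scored.mean(axis=0)                 # proportion correct per item
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest_score = scored.sum(axis=1) - scored[:, j]   # total excluding item j
        # Point-biserial = Pearson correlation between the 0/1 item and the rest score.
        discrimination[j] = np.corrcoef(scored[:, j], rest_score)[0, 1]
    return difficulty, discrimination

# Invented example: 6 students, 4 items. Low or negative discrimination
# flags items to revise or drop, as described in the study above.
demo = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
])
difficulty, discrimination = item_statistics(demo)
print("Difficulty:     ", difficulty.round(2))
print("Discrimination: ", discrimination.round(2))
```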
  3. Through school-university partnerships that situate learning within culturally relevant educational experiences, faculty, preservice teachers, and school-based educators are able to co-construct and share scientific knowledge. This knowledge consists of pedagogical content knowledge and funds of knowledge that include knowledge and skills developed in cultural contexts that have evolved historically. In early childhood education, culturally relevant Science, Technology, Engineering, Arts, and Mathematics (STEAM) learning experiences are particularly important for young children's cognitive and social-emotional development. This paper describes how intentional co-planning and collaboration to celebrate the US Read Across America Day provided over 100 preschool children in eight classrooms with access to STEAM lessons virtually led by university preservice teachers in partnership with educators in the school. These activities engaged children in exploring art, computer science, physical science, engineering, and mathematics within the context of a culturally relevant version of the fairy tale Goldilocks and the Three Bears. Lessons implemented as part of school-university partnerships support Black and Latinx children's development of a sense of belonging in STEAM. Further, these experiences enhance teacher candidates' abilities to engage in culturally responsive STEAM teaching while receiving ongoing guidance and education from university faculty and school-based educators. Teacher education programs within higher education institutions should embrace school-university partnerships as contexts for the development of shared scientific knowledge and discourse since the benefits are twofold. First, children and teachers gain access to, and engage with, innovative STEAM experiences. Second, preservice teachers learn culturally relevant research-based instructional strategies through university coursework situated in authentic learning experiences; thus, their learning as teacher candidates is enhanced through planning, implementation, evaluation, and critical reflection.
  4. Abstract

    There is strong agreement in science teacher education on the importance of teachers' content knowledge for teaching (CKT), which includes their subject matter knowledge and their pedagogical content knowledge. However, few instruments can be easily administered and scored on a large scale to assess and study elementary science teachers' CKT. Such measures would support strategic monitoring of large groups of science teachers' CKT and the investigation of comparative questions about science teachers' CKT longitudinally across the professional continuum or across teacher education or professional development sites. To address this gap, this study focused on designing an automatically scorable summative assessment that can be used to measure preservice elementary teachers' (PSETs') CKT in one high-leverage science content area: matter and its interactions. We conducted a field test of this CKT instrument with 822 PSETs from across the United States and used the response data to examine how this instrument functions as a potential tool for measuring PSETs' CKT in this science content area. Results suggest that the instrument is reliable and can be used on a large scale to support valid inferences about PSETs' CKT in this content area. In addition, the dimensionality analysis showed that all items measure a single construct of CKT about matter and its interactions, as participants did not show differential performance by content topic or by work-of-teaching-science instructional tool category. Implications for advancing the field's understanding of the nature of CKT and approaches to developing summative instruments to assess science teachers' CKT are discussed.
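To make concrete what "automatically scorable" can mean for a selected-response instrument like this, the sketch below keys raw multiple-choice selections against an answer key and returns item and total scores. The item IDs, options, and key are invented placeholders; this is not the actual CKT assessment or its scoring code.

```python
# Illustrative automated-scoring sketch (not the actual CKT instrument):
# convert raw multiple-choice selections into 0/1 item scores and a total.
from typing import Dict

def score_responses(answer_key: Dict[str, str],
                    responses: Dict[str, str]) -> Dict[str, object]:
    """Score one respondent's selections against the answer key."""
    item_scores = {
        item: int(responses.get(item) == correct)   # 1 if keyed option chosen, else 0
        for item, correct in answer_key.items()
    }
    return {"item_scores": item_scores, "total": sum(item_scores.values())}

# Hypothetical key and respondent (item IDs and options are made up).
key = {"Q1": "B", "Q2": "D", "Q3": "A"}
respondent = {"Q1": "B", "Q2": "C", "Q3": "A"}
print(score_responses(key, respondent))  # total of 2 out of 3
```

Because every item has a single keyed response, scoring like this can run unattended over hundreds of respondents, which is what makes large-scale administration of such an instrument practical.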

     