Title: Using the Partial Credit Model and Rasch Model to Examine the FOCIS Survey
Abstract This study examined the dimensionality and the effectiveness of the five-category Likert scale of the Framework for Observing and Categorizing Instructional Strategies (FOCIS), a survey developed by Tai et al. in 2012 that measures students' preferences for learning activities in science instruction. The data included 6,546 students in grades 3 through 12 from four school districts. The results show that the FOCIS survey measures students' preferences along seven dimensions; this study tests the effectiveness of the response categories only for the Competing dimension. Comparing the Partial Credit Model (PCM) with the Rasch model, collapsing the five categories into dichotomous items fit the data better: AIC and BIC decreased, and the infit and outfit statistics improved under the Rasch model.
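The model comparison in the abstract turns on information criteria. As a minimal sketch (the log-likelihoods and parameter counts below are hypothetical illustrations, not the study's actual values), AIC and BIC can be computed and compared as follows:

```python
import math

def aic(log_lik: float, k: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical values for illustration only: a PCM with four threshold
# parameters per item vs. a dichotomous Rasch model with one per item.
n = 6546                                         # respondents, as in the FOCIS data
pcm = (aic(-52000.0, k=40), bic(-52000.0, k=40, n=n))
rasch = (aic(-51900.0, k=10), bic(-51900.0, k=10, n=n))
# The model with the lower AIC/BIC is preferred.
```

A decrease in both criteria after collapsing the categories, as the abstract reports, favors the simpler dichotomous Rasch model.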
Award ID(s):
1811265
PAR ID:
10544483
Author(s) / Creator(s):
Publisher / Repository:
IAFOR Research Archive
Date Published:
Journal Name:
IAFOR International Conference on Education official conference proceedings
ISSN:
2189-1036
Page Range / eLocation ID:
631 to 644
Format(s):
Medium: X
Location:
https://papers.iafor.org/submission77292/
Sponsoring Org:
National Science Foundation
More Like this
  1. This research paper discusses the opportunities that a computer program can present in analyzing large amounts of qualitative data collected through a survey tool. When working with longitudinal qualitative data, researchers face many challenges. The coding scheme may evolve over time, requiring re-coding of early data. There may be long periods between rounds of data analysis. Typically, multiple researchers participate in the coding, which may introduce bias or inconsistencies. Ideally the same researchers would analyze all the data, but there is often turnover in the team, particularly when students assist with the coding. Computer programs can enable automated or semi-automated coding, helping to reduce errors and inconsistencies in the coded data. In this study, a modeling survey was developed to assess student awareness of model types and administered in four first-year engineering courses at three universities over the span of three years. The data collected from this survey consist of over 4,000 students' open-ended responses to three questions about types of models in science, technology, engineering, and mathematics (STEM) fields. A coding scheme was developed to identify and categorize model types in student responses. Over two years, two undergraduate researchers analyzed a total of 1,829 students' survey responses after ensuring intercoder reliability was greater than 80% for each model category. However, with much data remaining to be coded, the research team developed a MATLAB program to automatically implement the coding scheme and identify the types of models students discussed in their responses. MATLAB-coded results were compared to human-coded results (n = 1,829) to assess reliability; results matched between 81% and 99% for the different model categories.
Furthermore, the reliability of the MATLAB-coded results is within the range of the interrater reliability measured between the two undergraduate researchers (86–100% for the five model categories). With good reliability established, the program was used to code all 4,358 survey responses; results showing the number and types of models identified by students are presented in the paper.
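The reliability check described above compares program-assigned codes against human-assigned codes using percent agreement. A minimal sketch (hypothetical labels and category names, and Python standing in for the authors' MATLAB):

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of responses on which two coders assign the same label."""
    if len(codes_a) != len(codes_b):
        raise ValueError("code lists must be the same length")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical coding of six survey responses for one model category
human   = ["physical", "math", "none", "physical", "math", "none"]
program = ["physical", "math", "math", "physical", "math", "none"]
print(percent_agreement(human, program))  # 5 of 6 match, i.e. about 0.833
```

Agreement above a preset threshold (80% in the study) is what licenses handing the remaining uncoded responses to the program.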
  2. Abstract Teachers must know how to use language to support students in knowledge-generation environments aligned to the Next Generation Science Standards. To measure this knowledge, this study refines a survey on teachers' knowledge of language as an epistemic tool. Rasch modelling was used to examine 15 items' fit statistics and the functioning of a previously designed questionnaire's response categories. Cronbach's alpha reliability was also examined. Additionally, interviews were used to investigate teachers' interpretations of each item and identify ambiguous items. Based on the qualitative data, three ambiguous items were deleted, and three more items were deleted because of negative correlations and mismatched fit statistics. Finally, we present a revised language questionnaire with nine items, acceptable correlations, and good fit statistics, with utility for science education researchers and teacher educators. This research contributes a revised questionnaire to measure teachers' knowledge of language that could inform professional development efforts. It also describes instrument refinement processes that could be applied elsewhere.
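Cronbach's alpha, used above to examine reliability, is computable directly from a respondents-by-items score matrix. A minimal sketch on made-up Likert data (the responses below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items
x = np.array([[5, 4, 5],
              [2, 2, 3],
              [4, 4, 4],
              [1, 2, 2]])
alpha = cronbach_alpha(x)   # high, since the items move together
```

Values near 1 indicate that the items covary strongly, i.e. the scale is internally consistent.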
  3. Two project-based learning approaches were implemented in a 100-level information literacy class in the Mechanical Engineering program at a mid-Atlantic university. In the first approach, the treatment group, engineering students were partnered with education students to develop and deliver engineering lessons that guide elementary school students through the engineering design process. In the second approach, the comparison group, engineering students were partnered with engineering classmates to work on an engineering problem using the engineering design process. The two projects were designed to have similar durations and course point values. For both projects, teams were formed, and peer evaluations were completed, using the Comprehensive Assessment of Team Member Effectiveness (CATME) survey. This study examined how the two project-based learning approaches affected students' teamwork effectiveness. Data were collected from undergraduate engineering students assigned to the comparison and treatment conditions from Fall 2019 to Fall 2022, through the electronic CATME teammate evaluations and project reflections (treatment, n = 137; comparison, n = 112). CATME uses a series of questions assessed on a 5-point Likert scale. Quantitative analysis using analysis of variance (ANOVA) and analysis of covariance (ANCOVA) showed that engineering students in the treatment group expected more quality, were more satisfied, and showed more task commitment than engineering students working within their discipline. However, no statistically significant differences were observed for teamwork effectiveness categories such as contributing to the team's work, interacting with teammates, keeping the team on track, and having relevant knowledge, skills, and abilities.
This result suggests that engineering students who worked in interdisciplinary teams with an authentic audience (i.e., children) perceived higher quality in their projects and had higher levels of task commitment than their peers in the comparison group. A thematic analysis of the written reflections was conducted to further explain the results obtained for the three categories: expecting quality, satisfaction, and task commitment. The thematic analysis revealed that the treatment (interdisciplinary) groups exhibited considerably more positive reflections than their comparison peers in all three categories, supporting the quantitative results.
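The group comparison above is, at its core, a one-way ANOVA on Likert-scale ratings. A minimal sketch with invented ratings (SciPy standing in for whatever statistics software the authors used; the numbers are not the study's data):

```python
from scipy import stats

# Hypothetical CATME-style 5-point ratings for one category
treatment  = [5, 4, 5, 4, 5, 4]   # interdisciplinary teams
comparison = [3, 4, 3, 3, 4, 3]   # within-discipline teams

# One-way ANOVA: does mean rating differ between the two groups?
f_stat, p_value = stats.f_oneway(treatment, comparison)
# A small p-value (conventionally < 0.05) indicates a statistically
# significant difference in mean ratings between groups.
```

With two groups this is equivalent to an independent-samples t-test; ANCOVA additionally adjusts for a covariate before comparing group means.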
  4. Quantum mechanics is a subject rife with student conceptual difficulties. In order to study and devise better strategies for helping students overcome them, we need ways of assessing on a broad level how students are thinking. This is possible with the use of standardized, research-validated assessments like the Quantum Mechanics Concept Assessment (QMCA). These assessments are useful, but they lack rigorous population independence, and the question ordering cannot be rearranged without throwing into question the validity of the results. One way to overcome these two issues is to design the exam to be compatible with Rasch measurement theory which calibrates individual items and is capable of assessing item difficulty and person ability independently. In this paper, we present a Rasch analysis of the QMCA and discuss estimated item difficulties and person abilities, item and person fit to the Rasch model, and unidimensionality of the instrument. This work will lay the foundation for more robust and potentially generalizable assessments in the future. 
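In the Rasch model discussed above, the probability of a correct response depends only on the difference between person ability θ and item difficulty b, which is what allows items and persons to be calibrated independently. A minimal sketch of the response function:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch model item response function:
    P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(b - theta))

# When ability equals difficulty, the probability is exactly 0.5
p_matched = rasch_prob(0.0, 0.0)            # 0.5
# A more able person has a higher chance on the same item
p_high = rasch_prob(1.0, 0.0)
p_low  = rasch_prob(-1.0, 0.0)
```

Fit statistics such as infit and outfit, mentioned elsewhere on this page, compare observed responses against these model probabilities.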
  5. The Rasch model is widely used for item response analysis in applications ranging from recommender systems to psychology, education, and finance. While a number of estimators have been proposed for the Rasch model over the last decades, the associated analytical performance guarantees are mostly asymptotic. This paper provides a framework that relies on a novel linear minimum mean-squared error (L-MMSE) estimator which enables an exact, nonasymptotic, and closed-form analysis of the parameter estimation error under the Rasch model. The proposed framework provides guidelines on the number of items and responses required to attain low estimation errors in tests or surveys. We furthermore demonstrate its efficacy on a number of real-world collaborative filtering datasets, which reveals that the proposed L-MMSE estimator performs on par with state-of-the-art nonlinear estimators in terms of predictive performance. 
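The paper's Rasch-specific derivation is not reproduced here, but the textbook linear MMSE form it builds on is simple to state: the estimate is an affine function of the observation whose weight matrix comes from cross- and auto-covariances. A minimal sketch (generic L-MMSE, not the paper's estimator):

```python
import numpy as np

def lmmse(y, mu_x, mu_y, C_xy, C_yy):
    """Generic linear MMSE estimate:
    x_hat = mu_x + C_xy @ inv(C_yy) @ (y - mu_y)."""
    W = C_xy @ np.linalg.inv(C_yy)
    return mu_x + W @ (y - mu_y)

# Toy scalar example: observe y = x + n, with x and n zero-mean,
# unit-variance, and independent. Then C_xy = 1 and C_yy = 2, so the
# estimator shrinks the observation by a factor of 1/2.
y = np.array([2.0])
x_hat = lmmse(y,
              mu_x=np.array([0.0]), mu_y=np.array([0.0]),
              C_xy=np.array([[1.0]]), C_yy=np.array([[2.0]]))
# x_hat is [1.0]
```

Because the weight matrix is a closed-form expression in the covariances, the estimation error can be analyzed exactly and nonasymptotically, which is the property the abstract highlights.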