This study introduces the statistical theory of using the Standardized Root Mean Squared Residual (SRMR) to test close fit in ordinal factor analysis. We also compare the accuracy of confidence intervals (CIs) and tests of close fit based on the SRMR with those based on the Root Mean Squared Error of Approximation (RMSEA). We use unweighted least squares (ULS) estimation with a mean- and variance-corrected test statistic. The current (biased) implementation of the RMSEA never rejects the hypothesis that a model fits closely when data are binary, and it almost invariably rejects the model in large samples when data consist of five categories. The unbiased RMSEA produces better rejection rates, but it is accurate enough only when the number of variables is small (e.g., p = 10) and the degree of misfit is small. In contrast, across all simulated conditions, the tests of close fit based on the SRMR yield acceptable type I error rates. SRMR tests of close fit are also more powerful than those using the unbiased RMSEA.
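For reference, both indices have simple closed-form point estimates. The sketch below (Python with NumPy) implements only the textbook formulas, not the bias-corrected ordinal machinery studied in the paper; the variable names are illustrative.

```python
import numpy as np

def srmr(S, Sigma):
    """Standardized root mean squared residual between a sample
    correlation matrix S and the model-implied matrix Sigma.
    Textbook definition: root mean square of the p(p+1)/2 unique
    standardized residuals (diagonal residuals are zero when both
    inputs are correlation matrices)."""
    p = S.shape[0]
    rows, cols = np.tril_indices(p)   # unique elements, incl. diagonal
    resid = (S - Sigma)[rows, cols]
    return np.sqrt(np.mean(resid ** 2))

def rmsea(chi2, df, n):
    """Textbook point estimate of the RMSEA from a chi-square test
    statistic, its degrees of freedom, and the sample size n."""
    return np.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))
```

A test of close fit then compares the estimate (or its CI) against a conventional cutoff, often .05 in the case of the RMSEA.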
The Effect of Estimation Methods on SEM Fit Indices
We examined the effect of three estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population structural equation modeling (SEM) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). We considered different types and levels of misspecification in factor analysis models: misspecified dimensionality, omitted cross-loadings, and ignored residual correlations. Estimation methods had substantial impacts on the RMSEA and CFI, so different cutoff values need to be employed for different estimators. In contrast, the SRMR is robust to the method used to estimate the model parameters. When using the SRMR to evaluate model fit at the population level, the same criterion can therefore be applied regardless of the choice of estimation method.
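For context, the three estimators minimize different discrepancy functions between the sample matrix S (with s its vector of unique elements) and the model-implied matrix Σ(θ); in standard notation:

```latex
\begin{aligned}
F_{\mathrm{ML}}(\theta)   &= \ln\lvert \Sigma(\theta) \rvert - \ln\lvert S \rvert
                             + \operatorname{tr}\!\left[ S\,\Sigma(\theta)^{-1} \right] - p, \\
F_{\mathrm{ULS}}(\theta)  &= \tfrac{1}{2}\operatorname{tr}\!\left[ \bigl( S - \Sigma(\theta) \bigr)^{2} \right], \\
F_{\mathrm{DWLS}}(\theta) &= \bigl( s - \sigma(\theta) \bigr)^{\top} W_{D}^{-1}\,\bigl( s - \sigma(\theta) \bigr),
\end{aligned}
```

where σ(θ) stacks the unique elements of Σ(θ) and W_D is the diagonal of the estimated asymptotic covariance matrix of s. Because each function weights the residuals differently, the χ²-based indices (RMSEA, CFI) inherit the estimator's weighting, whereas the SRMR summarizes the standardized residuals directly.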
- Award ID(s):
- 1659936
- PAR ID:
- 10545810
- Publisher / Repository:
- SAGE Publications
- Date Published:
- Journal Name:
- Educational and Psychological Measurement
- Volume:
- 80
- Issue:
- 3
- ISSN:
- 0013-1644
- Format(s):
- Medium: X
- Size(s):
- p. 421-445
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract The transformative learning scale for the innovation mindset (TLSIM) is an instrument that assesses both process-related experiences and outcome-oriented shifts in students' self-awareness, open-mindedness, and innovation capabilities resulting from participation in innovation competitions and programs (ICPs), which are experiential learning opportunities. It was developed using transformative learning theory (TLT) and the Kern Entrepreneurial Engineering Network's (KEEN) 3Cs framework (Curiosity, Connections, and Creating Value). The study involved developing scale items, establishing content and face validity through expert reviews and student focus groups, and conducting psychometric analysis using confirmatory factor analysis (CFA) on data collected from 291 STEM students (70.2% from engineering) who participated in ICPs. The CFA results showed strong factor loadings across most constructs, with root mean square error of approximation (RMSEA) values within acceptable limits, confirming the robustness of the TLSIM for measuring both process-oriented (RMSEA = 0.047, CFI = 0.929) and outcome-oriented constructs (RMSEA = 0.052, CFI = 0.901) in the development of an innovation mindset. The analysis showed that the TLSIM is a reliable and valid instrument with strong psychometric properties for measuring key constructs related to the innovation mindset. The TLSIM can capture significant changes in students' beliefs, attitudes, and self-perceptions regarding innovation. Future research should refine the TLSIM across various disciplines.
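For readers checking the reported values against conventional cutoffs, the CFI is a simple function of the model and baseline chi-square statistics. A minimal sketch of the standard formula (Python; argument names are illustrative):

```python
def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative fit index: one minus the ratio of the model's
    estimated noncentrality (chi-square minus df, floored at zero)
    to the baseline model's."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_baseline - df_baseline, chi2_model - df_model, 0.0)
    return 1.0 if den == 0.0 else 1.0 - num / den
```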
-
Ghenaiet, Adel (Ed.) In the design of a large deployable mesh reflector, high surface accuracy is one of the ultimate goals, since it directly determines the overall performance of the reflector. Evaluation of surface accuracy is therefore needed in many cases in the design and analysis of large deployable mesh reflectors. Surface accuracy is usually specified as root-mean-square error, which measures the deviation of a mesh geometry from a desired working surface. In this paper, methods of root-mean-square error calculation for large deployable mesh reflectors are reviewed. The concept of reflector gain, which describes reflector performance, and its relationship with the root-mean-square error are presented. Approaches to predicting or estimating root-mean-square error in the preliminary design of a large deployable mesh reflector are shown. Three methods of root-mean-square error calculation for large deployable mesh reflectors, namely the nodal deviation root-mean-square error, the best-fit surface root-mean-square error, and the direct root-mean-square error, are presented. The concept of an effective region is introduced, and an adjusted calculation of root-mean-square error is suggested when this concept is involved. Finally, the reviewed methods of root-mean-square error calculation are applied to surface accuracy evaluation of a two-facet mesh geometry, a center-feed mesh reflector, and an offset-feed mesh reflector for demonstration and comparison.
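To make the simplest of the three reviewed methods concrete, a minimal sketch of a nodal-deviation calculation is given below (Python with NumPy). The paraboloid target surface, the focal length F, the use of purely axial deviations, and the Ruze form of the gain relationship are assumptions for this illustration, not necessarily the paper's exact formulations.

```python
import numpy as np

def nodal_deviation_rms(nodes, focal_length):
    """Nodal-deviation RMS error: root mean square of the axial
    deviations of the mesh nodes from a target paraboloid
    z = (x^2 + y^2) / (4 F).  `nodes` is an (n, 3) array of x, y, z."""
    x, y, z = nodes[:, 0], nodes[:, 1], nodes[:, 2]
    dz = z - (x ** 2 + y ** 2) / (4.0 * focal_length)
    return np.sqrt(np.mean(dz ** 2))

def ruze_gain_ratio(rms_error, wavelength):
    """Classical Ruze relation linking surface RMS error (delta) to the
    gain ratio: G / G0 = exp(-(4 * pi * delta / lambda)^2)."""
    return np.exp(-(4.0 * np.pi * rms_error / wavelength) ** 2)
```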
-
This paper introduces the pilot implementation of the Evidence Based Personas survey instrument for assessing non-cognitive attributes of relevance from undergraduate students at different stages of their engineering degree, for the purpose of informing proactive advising processes. The survey instrument was developed with two key objectives: first, to assess its potential for streamlining and shortening existing instruments, and second, to explore the possibility of consolidating items from different surveys that measure the same or closely related constructs. A proactive advising system is being developed that uses the Mediation Model of Research Experiences (MMRE) as a framework. Within this framework, participation in various educational activities is linked to increased Commitment to Engineering via three mediating parameters: Self-Efficacy, Teamwork/Leadership Self-Efficacy, and Engineering Identity. The existing, validated MMRE survey instrument was used as a starting point for development of the current instrument, with a goal of streamlining and shortening the number of questions. Ultimately, we envision augmenting the shortened instrument with items related to broader non-cognitive and affective constructs from the SUCCESS instrument. Noting that both the MMRE and SUCCESS instruments include measures of Self-Efficacy and Engineering Identity, selected questions from both were included and compared. Data were collected from 395 total respondents, and subsequent data analysis was based on 337 valid participants. Factor analysis techniques, both exploratory and confirmatory, were employed to uncover underlying or latent variables within the results, particularly in the area of Self-Efficacy, where the combined items of the SUCCESS instrument and the MMRE instrument were used. Cronbach's alpha analysis was employed to assess the internal consistency of the survey instrument. The Teamwork, Engineering Identity, and Commitment to Engineering constructs all produced a Cronbach's alpha value in excess of 0.80. The Self-Efficacy construct fell below the 0.80 threshold at 0.77, which is considered respectable but indicates some shortcomings relative to the other constructs. The results of the EFA four-factor pattern matrix show the SUCCESS instrument items breaking out into their own components while the MMRE items merge with some of the items from the Engineering Identity construct, suggesting a distinction in the underlying concepts these items may be measuring. This finding is further supported in the CFA through an assessment of the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA) of these constructs. The initial groupings of the four constructs produced a robust CFI value of 0.853, a robust TLI value of 0.838, and a robust RMSEA value of 0.075. Self-Efficacy is broken out into two sub-scales: one defined by the three items from the SUCCESS instrument and the other defined by the four remaining items from the MMRE instrument. Engineering Identity was also broken into two sub-scales. The resulting robust CFI and TLI values are 0.928 and 0.919, respectively, and the robust RMSEA is 0.053. The findings of the factor analyses indicate that a shortened form of the MMRE survey instrument will provide reliable measures of the underlying constructs. Additionally, the results suggest that self-efficacy as measured by items from the MMRE and from the SUCCESS instruments reflects two separate aspects of self-efficacy, and these items do not load well onto a single factor.
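The internal-consistency figures above are Cronbach's alpha values, which are straightforward to compute from an item-score matrix. A minimal sketch of the standard formula (Python with NumPy):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of
    the total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```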
-
This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure yields parameter estimates and model fit indices closer to those obtained with complete data. The performance of PD and MI is compared under a wide range of conditions, including number of response categories, sample size, percentage of missingness, and degree of model misfit. Results indicate that both PD and MI yield parameter estimates similar to those from analysis of complete data when the data are missing completely at random (MCAR). When the data are missing at random (MAR), PD parameter estimates are severely biased across the parameter combinations in the study. When the percentage of missingness is less than 50%, MI yields parameter estimates similar to results from complete data; however, the fit indices (i.e., χ2, RMSEA, and WRMR) suggest a worse fit than that observed with complete data. We recommend that applied researchers use MI when fitting ordinal factor models with missing data. We further recommend interpreting model fit based on the TLI and CFI incremental fit indices.
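To make the contrast between the two procedures concrete, the sketch below computes a correlation matrix under each approach (Python). Note the simplifications relative to the study: scikit-learn's IterativeImputer with posterior sampling stands in for a full MI procedure, items are treated as continuous rather than ordinal, and only point estimates are pooled.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pairwise_corr(df: pd.DataFrame) -> pd.DataFrame:
    """Pairwise deletion: each correlation uses only the cases that are
    complete for that particular pair of variables (pandas' default)."""
    return df.corr()

def mi_corr(df: pd.DataFrame, m: int = 20, seed: int = 0) -> pd.DataFrame:
    """Rough MI analogue: draw m completed data sets with posterior
    sampling and average the resulting correlation matrices
    (Rubin's rule for pooling point estimates)."""
    corrs = []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True,
                                   random_state=seed + i)
        completed = imputer.fit_transform(df)
        corrs.append(np.corrcoef(completed, rowvar=False))
    return pd.DataFrame(np.mean(corrs, axis=0),
                        index=df.columns, columns=df.columns)
```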
