We examined the effect of three estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population structural equation modeling (SEM) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). We considered different types and levels of misspecification in factor analysis models: misspecified dimensionality, omitted cross-loadings, and ignored residual correlations. The estimation method had a substantial impact on the RMSEA and the CFI, so different cutoff values need to be employed for different estimators. In contrast, the SRMR is robust to the method used to estimate the model parameters. When the SRMR is used to evaluate model fit at the population level, the same criterion can therefore be applied regardless of the choice of estimation method.
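For reference, a minimal sketch of the usual population-level definitions of these indices, stated in generic form: here F_0 denotes the population minimum of the estimator's discrepancy function, F_B the corresponding value for the baseline (independence) model, df the model degrees of freedom, p the number of observed variables, and sigma_ij and sigma-hat_ij the population and model-implied covariances. The exact discrepancy function, and hence F_0, depends on whether ML, ULS, or DWLS is used.

```latex
% Sketch of standard population-level definitions (notation as described above).
\[
\mathrm{RMSEA} = \sqrt{\frac{F_0}{df}}, \qquad
\mathrm{CFI} = 1 - \frac{F_0}{F_B},
\]
\[
\mathrm{SRMR} = \sqrt{\frac{2}{p(p+1)} \sum_{i \le j}
  \left( \frac{\sigma_{ij}}{\sqrt{\sigma_{ii}\sigma_{jj}}}
       - \frac{\hat{\sigma}_{ij}}{\sqrt{\hat{\sigma}_{ii}\hat{\sigma}_{jj}}} \right)^{2}}.
\]
```

Because F_0 is defined in the metric of the chosen discrepancy function, the RMSEA and CFI inherit that dependence on the estimator, whereas the SRMR is computed directly in the metric of standardized residuals.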
Assessing Fit in Ordinal Factor Analysis Models: SRMR vs. RMSEA
This study introduces the statistical theory of using the Standardized Root Mean Squared Residual (SRMR) to test close fit in ordinal factor analysis. We also compare the accuracy of confidence intervals (CIs) and tests of close fit based on the SRMR with those based on the Root Mean Squared Error of Approximation (RMSEA). We use Unweighted Least Squares (ULS) estimation with a mean- and variance-corrected test statistic. The current (biased) implementation of the RMSEA never rejects that a model fits closely when data are binary, and it almost invariably rejects the model in large samples if data consist of five categories. The unbiased RMSEA produces better rejection rates, but it is only accurate enough when the number of variables is small (e.g., p = 10) and the degree of misfit is small. In contrast, across all simulated conditions, the tests of close fit based on the SRMR yield acceptable Type I error rates. SRMR tests of close fit are also more powerful than those using the unbiased RMSEA.
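As an illustration of the quantity being tested, the sketch below computes a sample SRMR from a (polychoric) correlation matrix and a model-implied correlation matrix. It is a minimal sketch assuming the SRMR is defined as the root mean squared residual over the non-redundant elements of the correlation matrix; the matrices are hypothetical, and an actual test of close fit additionally requires the asymptotic mean and variance of this statistic.

```python
import numpy as np

def srmr(sample_corr: np.ndarray, implied_corr: np.ndarray) -> float:
    """Root mean of the squared residuals over the non-redundant
    (i <= j) elements of the correlation matrix."""
    p = sample_corr.shape[0]
    rows, cols = np.triu_indices(p)              # p(p+1)/2 non-redundant elements
    resid = sample_corr[rows, cols] - implied_corr[rows, cols]
    return float(np.sqrt(np.mean(resid ** 2)))

# Hypothetical 3-variable example: the model reproduces the observed
# (polychoric) correlations only approximately.
S = np.array([[1.00, 0.52, 0.41],
              [0.52, 1.00, 0.33],
              [0.41, 0.33, 1.00]])
Sigma_hat = np.array([[1.00, 0.47, 0.45],
                      [0.47, 1.00, 0.36],
                      [0.45, 0.36, 1.00]])
print(round(srmr(S, Sigma_hat), 4))
```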
- Award ID(s):
- 1659936
- PAR ID:
- 10099860
- Date Published:
- Journal Name:
- Structural equation modeling
- ISSN:
- 1070-5511
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
We examine the accuracy of p values obtained using the asymptotic mean and variance (MV) correction to the distribution of the sample standardized root mean squared residual (SRMR) proposed by Maydeu-Olivares to assess the exact fit of SEM models. In a simulation study, we found that under normality, the MV-corrected SRMR statistic provides reasonably accurate Type I errors even in small samples and for large models, clearly outperforming the current standard, that is, the likelihood ratio (LR) test. When data show excess kurtosis, MV-corrected SRMR p values are only accurate in small models (p = 10), or in medium-sized models (p = 30) if no skewness is present and sample sizes are at least 500. Overall, when data are not normal, the MV-corrected LR test seems to outperform the MV-corrected SRMR. We elaborate on these findings by showing that the asymptotic approximation to the mean of the SRMR sampling distribution is quite accurate, while the asymptotic approximation to the standard deviation is not.
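To make the logic of a mean-and-variance correction concrete, here is a minimal sketch that converts an observed sample SRMR into an approximate p value by referring it to a normal distribution with a given asymptotic mean and standard deviation. The function name, the normal reference distribution, and the numeric values are illustrative assumptions; the actual expressions for the asymptotic mean and variance are derived in the work cited above and are not reproduced here.

```python
from scipy.stats import norm

def mv_corrected_pvalue(srmr_obs: float, asy_mean: float, asy_sd: float) -> float:
    """Approximate p value for the null hypothesis of exact fit, obtained by
    standardizing the observed SRMR with its asymptotic mean and SD and
    referring the result to a standard normal distribution (illustrative)."""
    z = (srmr_obs - asy_mean) / asy_sd
    return float(norm.sf(z))          # one-sided: a large SRMR indicates poor fit

# Hypothetical values: observed SRMR of 0.031 with an asymptotic
# mean of 0.024 and SD of 0.005 under the null hypothesis.
print(round(mv_corrected_pvalue(0.031, 0.024, 0.005), 4))
```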
In this study, a data-driven machine learning model was developed to estimate the partitioning of per- and polyfluoroalkyl substances (PFAS) during aqueous adsorption on various adsorbent materials, with the goal of potentially replacing time-consuming and labor-intensive adsorption experiments. Various regression models were trained and tested using previously published data. A total of 290 and 170 data points for activated carbon and mineral adsorbents, respectively, were mined for training the models, and 10 data points were used to test the trained models. Statistical parameters such as Root Mean Square Error (RMSE), R-squared, Mean Absolute Error (MAE), and Mean Squared Error (MSE) were used to compare the regression models. Rational quadratic Gaussian process regression (GPR) (R-squared = 0.9966) and fine regression tree (R-squared = 0.9427) models had the highest estimation accuracy for carbon-based and mineral-based adsorbents, respectively. These models were then validated for prediction accuracy using 10 data points from previous studies as an outer test set. Rational quadratic GPR achieved 99% prediction accuracy for the carbon-based adsorbent, while the fine regression tree model achieved 94% prediction accuracy. Despite such high estimation accuracy, the data mining process revealed a data shortage and the need for more research on PFAS adsorption to build models that generalize to real-world conditions. This study, as one of the first, sheds light on determining key parameters in aquatic chemistry with data mining and machine learning approaches.
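As a sketch of the kind of workflow described, the example below fits a Gaussian process regressor with a rational quadratic kernel and a decision tree regressor using scikit-learn and compares them with RMSE, MAE, and R-squared. The synthetic data, feature set, and hyperparameters are placeholders invented for illustration, not the descriptors or data used in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Placeholder descriptors (e.g., chain length, pH, adsorbent dose) and a
# synthetic partitioning coefficient; real work would use mined literature data.
X = rng.uniform(size=(300, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

models = {
    "rational quadratic GPR": GaussianProcessRegressor(kernel=RationalQuadratic()),
    "regression tree": DecisionTreeRegressor(max_depth=6, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(name,
          "RMSE:", round(rmse, 4),
          "MAE:", round(mean_absolute_error(y_test, pred), 4),
          "R2:", round(r2_score(y_test, pred), 4))
```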
Ravikumar, Pradeep (Ed.) We consider the task of evaluating a policy for a Markov decision process (MDP). The standard unbiased technique for evaluating a policy is to deploy the policy and observe its performance. We show that data collected from deploying a different policy, commonly called the behavior policy, can be used to produce unbiased estimates with lower mean squared error than this standard technique. We derive an analytic expression for a minimal-variance behavior policy, that is, a behavior policy that minimizes the mean squared error of the resulting estimates. Because this expression depends on terms that are unknown in practice, we propose a novel policy evaluation sub-problem, behavior policy search: searching for a behavior policy that reduces mean squared error. We present two behavior policy search algorithms and empirically demonstrate their effectiveness in lowering the mean squared error of policy performance estimates.
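Unbiased off-policy estimates of this kind are typically built with importance sampling. Below is a minimal sketch of an ordinary importance-sampling estimator of a target policy's expected return from episodes generated by a behavior policy; the policies, the tiny single-step episodes, and the variable names are illustrative assumptions rather than the paper's algorithms. It only shows why the choice of behavior policy changes the variance, and hence the mean squared error, of an estimator that remains unbiased.

```python
import numpy as np

def is_estimate(episodes, pi_e, pi_b):
    """Ordinary importance-sampling estimate of the target policy's expected
    return. Each episode is a list of (state, action, reward) triples; pi_e
    and pi_b map (state, action) to the action probability under the target
    and behavior policies, respectively."""
    values = []
    for episode in episodes:
        weight, ret = 1.0, 0.0
        for state, action, reward in episode:
            weight *= pi_e(state, action) / pi_b(state, action)
            ret += reward
        values.append(weight * ret)
    return float(np.mean(values))

# Illustrative single-state example with two actions.
pi_e = lambda s, a: 0.8 if a == 0 else 0.2           # target policy
pi_b = lambda s, a: 0.5                               # uniform behavior policy
rng = np.random.default_rng(1)
episodes = [[(0, a, 1.0 if a == 0 else 0.0)] for a in rng.integers(0, 2, size=1000)]
print(is_estimate(episodes, pi_e, pi_b))              # approx 0.8, the true value
```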
Purpose: To improve the performance of neural networks for parameter estimation in quantitative MRI, in particular when the noise propagation varies throughout the space of biophysical parameters. Theory and Methods: A theoretically well-founded loss function is proposed that normalizes the squared error of each estimate with the respective Cramér–Rao bound (CRB), a theoretical lower bound for the variance of an unbiased estimator. This avoids a dominance of hard-to-estimate parameters and areas in parameter space, which are often of little interest. The normalization with the corresponding CRB balances the large errors of fundamentally more noisy estimates and the small errors of fundamentally less noisy estimates, allowing the network to better learn to estimate the latter. Further, the proposed loss function provides an absolute evaluation metric for performance: a network has an average loss of 1 if it is a maximally efficient unbiased estimator, which can be considered the ideal performance. The performance gain with the proposed loss function is demonstrated using the example of an eight-parameter magnetization transfer model fitted to phantom and in vivo data. Results: Networks trained with the proposed loss function perform close to optimal, that is, their loss converges to approximately 1, and their performance is superior to that of networks trained with the standard mean squared error (MSE). The proposed loss function reduces the bias of the estimates compared to the MSE loss and improves the match of the noise variance to the CRB. This performance gain translates to in vivo maps that align better with the literature. Conclusion: Normalizing the squared error with the CRB during the training of neural networks improves their performance in estimating biophysical parameters.
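A minimal sketch of such a CRB-normalized loss in PyTorch, assuming per-parameter CRB values are available for each training sample; the tensors and numeric values below are invented for illustration and are not the study's model or data. With this normalization, an average loss of 1 corresponds to an estimator whose error variance matches the bound.

```python
import torch

def crb_normalized_loss(estimates: torch.Tensor,
                        targets: torch.Tensor,
                        crb: torch.Tensor) -> torch.Tensor:
    """Mean over samples and parameters of the squared error divided by the
    corresponding Cramér-Rao bound (all tensors of shape [batch, n_params])."""
    return torch.mean((estimates - targets) ** 2 / crb)

# Illustrative values: two samples, three biophysical parameters.
targets = torch.tensor([[1.0, 0.2, 50.0], [0.8, 0.3, 40.0]])
crb = torch.tensor([[0.01, 0.0004, 4.0], [0.02, 0.0009, 9.0]])
estimates = targets + torch.sqrt(crb) * torch.randn_like(crb)   # noise at the bound
print(crb_normalized_loss(estimates, targets, crb))              # roughly 1 in expectation
```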