Title: A Testlet Diagnostic Classification Model with Attribute Hierarchies
In this article, a testlet hierarchical diagnostic classification model (TH-DCM) was introduced to take both attribute hierarchies and item bundles into account. The expectation-maximization algorithm with an analytic dimension reduction technique was used for parameter estimation. A simulation study was conducted to assess the parameter recovery of the proposed model under varied conditions, and to compare TH-DCM with testlet higher-order CDM (THO-DCM; Hansen, M. (2013). Hierarchical item response models for cognitive diagnosis (Unpublished doctoral dissertation). UCLA; Zhan, P., Li, X., Wang, W.-C., Bian, Y., & Wang, L. (2015). The multidimensional testlet-effect cognitive diagnostic models. Acta Psychologica Sinica, 47(5), 689. https://doi.org/10.3724/SP.J.1041.2015.00689 ). Results showed that (1) ignoring large testlet effects worsened parameter recovery, (2) DCMs assuming equal testlet effects within each testlet performed as well as the testlet model assuming unequal testlet effects under most conditions, (3) misspecifications in the joint attribute distribution had a differential impact on parameter recovery, and (4) THO-DCM seems to be a robust alternative to TH-DCM under some hierarchical structures. A set of real data was also analyzed for illustration.
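The dimension reduction exploited by the estimation algorithm rests on a simple fact: an attribute hierarchy rules out latent profiles that violate a prerequisite relation, shrinking the 2^K profile space. The sketch below is an illustration of that pruning only, not the paper's estimator; the function names and the dictionary encoding of prerequisites are assumptions for this example.

```python
from itertools import product

def admissible_profiles(K, prereq):
    """Enumerate binary attribute profiles consistent with a hierarchy.

    prereq maps an attribute index to the list of attributes that must
    be mastered before it (the prerequisite relation).
    """
    profiles = []
    for p in product([0, 1], repeat=K):
        # An attribute may be mastered only if all its prerequisites are.
        if all(p[a] == 0 or all(p[q] == 1 for q in prereq.get(a, []))
               for a in range(K)):
            profiles.append(p)
    return profiles

# Linear hierarchy 0 -> 1 -> 2 -> 3: each attribute requires the previous one.
linear = {1: [0], 2: [1], 3: [2]}
unstructured = admissible_profiles(4, {})      # all 2^4 = 16 profiles
hierarchical = admissible_profiles(4, linear)  # only 5 profiles remain
```

Under a linear hierarchy with K attributes, only K + 1 profiles survive, which is what makes marginalizing over the latent space tractable for larger K.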
Award ID(s):
2150601
PAR ID:
10402587
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Applied Psychological Measurement
Volume:
47
Issue:
3
ISSN:
0146-6216
Format(s):
Medium: X
Size(s):
p. 183-199
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Selected response items and constructed response (CR) items are often found in the same test. Conventional psychometric models for these two types of items typically focus on using the scores for correctness of the responses. Recent research suggests, however, that more information may be available from the CR items than scores for correctness alone. In this study, we describe an approach in which a statistical topic model along with a diagnostic classification model (DCM) was applied to a mixed-format formative test of English Language Arts. The DCM was used to estimate students' mastery status of reading skills. These mastery statuses were then included in a topic model as covariates to predict students' use of each of the latent topics in their written answers to a CR item. This approach enabled investigation of the effects of mastery status of reading skills on writing patterns. Results indicated that one of the skills, Integration of Knowledge and Ideas, helped detect and explain students' writing patterns with respect to students' use of individual topics.
  2.
    Results of a comprehensive simulation study are reported investigating the effects of sample size, test length, number of attributes, and base rate of mastery on item parameter recovery and classification accuracy of four DCMs (i.e., C-RUM, DINA, DINO, and LCDMREDUCED). Effects were evaluated using bias and RMSE computed between true (i.e., generating) and estimated parameters. Effects of simulated factors on attribute assignment were also evaluated using the percentage of classification accuracy. More precise estimates of item parameters were obtained with larger sample sizes and longer test lengths. Recovery of item parameters decreased as the number of attributes increased from three to five, but the base rate of mastery had a varying effect on item parameter recovery. Item parameter recovery and classification accuracy were higher for the DINA and DINO models.
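Bias and RMSE between generating and estimated parameters, the recovery criteria used in this study, can be computed as follows; the tiny parameter vectors are made-up illustration values, not results from the study.

```python
import numpy as np

def bias_rmse(true, est):
    """Bias (mean signed error) and RMSE between generating
    and estimated parameter vectors."""
    diff = np.asarray(est, dtype=float) - np.asarray(true, dtype=float)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Hypothetical guessing-parameter values for three items.
true_g = np.array([0.20, 0.10, 0.15])
est_g = np.array([0.25, 0.08, 0.17])
b, r = bias_rmse(true_g, est_g)
```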
  3. Attribute hierarchy, the underlying prerequisite relationship among attributes, plays an important role in applying cognitive diagnosis models (CDMs) for designing efficient cognitive diagnostic assessments. However, there are limited statistical tools to directly estimate attribute hierarchy from response data. In this study, we proposed a Bayesian formulation for attribute hierarchy within the CDM framework and developed an efficient Metropolis-within-Gibbs algorithm to estimate the underlying hierarchy along with the specified CDM parameters. Our proposed estimation method is flexible and can be adapted to a general class of CDMs. We demonstrated our proposed method via a simulation study, the results of which show that the proposed method can fully recover the underlying structure, or at least a subgraph of it, across various conditions under a specified CDM. The real data application indicates the potential of learning attribute structure from data using our algorithm and validating the existing attribute hierarchy specified by content experts.
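Metropolis within Gibbs updates one block of parameters at a time, each with its own Metropolis accept/reject step. The sketch below is a generic, minimal version of that scheme on a toy posterior (independent standard normals standing in for the CDM posterior); it is not the paper's sampler, and every name in it is an assumption for illustration.

```python
import math
import random

def metropolis_within_gibbs(log_post, x0, steps, scale=0.5, seed=1):
    """One random-walk Metropolis update per coordinate per sweep.

    log_post: log posterior density (up to a constant).
    x0: starting point; steps: number of full Gibbs sweeps.
    """
    rng = random.Random(seed)
    x = list(x0)
    samples = []
    for _ in range(steps):
        for i in range(len(x)):
            prop = x[:]
            prop[i] += rng.gauss(0.0, scale)  # propose a move in coordinate i
            # Accept with probability min(1, posterior ratio).
            if math.log(rng.random()) < log_post(prop) - log_post(x):
                x = prop
        samples.append(x[:])
    return samples

# Toy target: two independent standard normal coordinates.
log_post = lambda v: -0.5 * sum(t * t for t in v)
draws = metropolis_within_gibbs(log_post, [3.0, -3.0], steps=2000)
```

In the paper's setting, the blocks would be the hierarchy structure and the CDM item parameters rather than continuous coordinates, with proposals adapted to each block's support.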
  4. Recent studies show increasing interest in using process data (e.g., response time, response actions) to enhance measurement accuracy for respondents’ latent traits. Yet, few have explored the possibility of incorporating process information into cognitive diagnostic models (CDMs). This study proposes a novel CDM approach that utilizes a four-component joint modeling approach with response action sequences (i.e., similarity and efficiency), response time, and item responses. We employed the Markov Chain Monte Carlo method for parameter estimation and evaluated the performance of the proposed model using both an empirical study and two simulation studies. The results suggest that the process data can improve respondents’ classification accuracy under varied conditions and support the interpretation of the association between process and response data. 
  5. The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included sample size (11 sample sizes from 100 to 5,000), test length (10, 30, and 50), number of classes (2 and 3), degree of latent class separation (normal/no separation, small, medium, and large), and class sizes (equal vs. nonequal). Effects were assessed using root mean square error (RMSE) computed between true and estimated parameters, and the percentage of classification accuracy. The results of this simulation study showed that more precise estimates of item parameters were obtained with larger sample sizes and longer test lengths. Recovery of item parameters decreased as the number of classes increased and the sample size decreased. Classification accuracy for two-class solutions was also better than for three-class solutions. Results of both item parameter estimates and classification accuracy differed by model type. More complex models and models with larger class separations produced less accurate results. The mixture proportions also differentially affected RMSE and classification accuracy: groups of equal size produced more precise item parameter estimates, but the reverse was the case for classification accuracy. Results suggested that dichotomous mixture IRT models required more than 2,000 examinees to obtain stable results; even shorter tests required sample sizes of this magnitude for precise estimates. The required sample size increased with the number of latent classes, the degree of separation, and model complexity.
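The mixture IRT models compared here combine class-specific item response functions weighted by class proportions. As a minimal illustration (the parameter values are invented for this example, not taken from the study), the marginal probability of a correct response under a two-class Mix2PL-style model is:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mixture_p(theta, weights, items):
    """Marginal correct-response probability under a latent-class mixture:
    class g contributes weight pi_g times its class-specific 2PL curve."""
    return sum(w * p_2pl(theta, a, b) for w, (a, b) in zip(weights, items))

# Two hypothetical classes with proportions 0.6/0.4 and their own (a, b).
p = mixture_p(0.0, weights=[0.6, 0.4], items=[(1.2, -0.5), (0.8, 1.0)])
```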