I am thrilled to announce the release of measr 1.0.0. The goal of measr is to provide a user-friendly interface for estimating and evaluating diagnostic classification models (DCMs; also called cognitive diagnostic models [CDMs]). This is a major release marking the conclusion of the initial development work funded by the Institute of Education Sciences. Importantly, this does not mean that measr is going dormant!
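If you are new to the package, the core entry point is a single model-fitting function. Below is a minimal sketch of what estimation looks like; the argument names and the `ecpe_data`/`ecpe_qmatrix` example data reflect my reading of the measr documentation, so treat the details as assumptions and consult `?measr_dcm` for the authoritative interface.

```r
library(measr)

# Fit the saturated LCDM to the package's ECPE example data with full
# Bayesian estimation (measr compiles and runs the model in Stan).
model <- measr_dcm(
  data = ecpe_data,        # respondent-by-item scored responses
  qmatrix = ecpe_qmatrix,  # item-by-attribute Q-matrix
  resp_id = "resp_id",     # column identifying respondents
  type = "lcdm",           # log-linear cognitive diagnosis model
  method = "mcmc"          # full MCMC rather than optimization
)
```

The fitted object can then be passed to the package's evaluation and scoring functions, which is where the research summarized below comes in.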
Published in Educational Measurement: Issues and Practice
Authors: Jonathan Templin, Lesa Hoffman
Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models have been hindered by a lack of readily available software. In this article we demonstrate how diagnostic classification models may be estimated as confirmatory latent class models using Mplus, thus bridging the gap between the technical presentation of these models and their practical use for assessment in research and applied settings. Using a sample English test of three grammatical skills, we describe how diagnostic classification models can be phrased as latent class models within Mplus and how to obtain the syntax and output needed for estimation and interpretation of the model parameters. We have also written a freely available SAS program that can be used to automatically generate the Mplus syntax. We hope this work will ultimately result in greater access to diagnostic classification models throughout the testing community, from researchers to practitioners.
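For readers who have not seen this reformulation, the core identity is standard and worth stating explicitly (the notation below is mine, not the article's): with K binary attributes there are 2^K attribute profiles, and a DCM is a latent class model whose item parameters are constrained by the Q-matrix.

```latex
% Marginal likelihood of respondent i's responses under a latent class
% model with 2^K classes (one per attribute profile):
P(\mathbf{X}_i = \mathbf{x}_i)
  = \sum_{c=1}^{2^K} \nu_c \prod_{j=1}^{J}
    \pi_{jc}^{x_{ij}} \, (1 - \pi_{jc})^{1 - x_{ij}}

% The diagnostic structure constrains \pi_{jc} through the Q-matrix; e.g.,
% for an item j measuring only attribute k (LCDM parameterization):
\operatorname{logit}(\pi_{jc}) = \lambda_{j,0} + \lambda_{j,1} \alpha_{ck}
```

Here \nu_c is the proportion of examinees in profile c and \pi_{jc} is the probability that a member of profile c answers item j correctly; the Mplus approach amounts to imposing constraints of this kind on an otherwise unrestricted latent class model.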
Published in Educational and Psychological Measurement
Authors: Sandip Sinharay, Russell G. Almond
A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to the cognitive theory of a domain provides feedback to educators about the underlying theory. This article suggests a number of graphics and statistics for diagnosing problems with cognitive diagnostic models expressed as Bayesian networks. The suggested diagnostics allow the authors to identify the inadequacy of an earlier cognitive diagnostic model and to hypothesize an improved model that provides better fit to the data.
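The article's diagnostics are tailored to Bayesian networks, but the underlying posterior predictive logic is easy to demonstrate. The sketch below is generic, not the authors' procedure: it uses the spread of raw scores as a discrepancy measure, and the simulated `y` and `y_rep` are stand-ins for observed data and posterior predictive replications from a fitted model.

```r
set.seed(1)
N <- 200; J <- 15; S <- 500

# Stand-ins: in a real check, `y` is the observed response matrix and
# `y_rep` holds S replicated data sets drawn from the posterior
# predictive distribution of the fitted model.
y     <- matrix(rbinom(N * J, 1, 0.6), N, J)
y_rep <- array(rbinom(S * N * J, 1, 0.6), dim = c(S, N, J))

stat  <- function(resp) sd(rowSums(resp))  # test statistic: spread of raw scores
t_obs <- stat(y)
t_rep <- apply(y_rep, 1, stat)             # statistic for each replication

# Posterior predictive p-value; values near 0 or 1 flag misfit.
mean(t_rep >= t_obs)

# The graphical version: replicated statistics with the observed value overlaid.
hist(t_rep, main = "Posterior predictive check", xlab = "SD of raw scores")
abline(v = t_obs, lwd = 2)
```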
Published in Journal of Educational and Behavioral Statistics
Authors: Yanlou Liu, Wei Tian, Tao Xin
The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. The limited-information statistic M2 and the associated root mean square error of approximation (RMSEA2) from item factor analysis were extended to evaluate the fit of CDMs. The findings suggested that the M2 statistic has empirical Type I error rates close to the nominal level and good statistical power, and that it could be used as a general statistical tool. More importantly, we found a strong linear relationship between mean marginal misclassification rates and RMSEA2 when there was model–data misfit. The evidence demonstrated that .030 and .045 could be reasonable thresholds for excellent and good fit, respectively, under the saturated log-linear cognitive diagnosis model.
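RMSEA2 is obtained from M2 by the usual chi-square-to-RMSEA conversion. The helper below is my own sketch of that arithmetic, not code from the article, and some presentations divide by N − 1 rather than N.

```r
# RMSEA from the limited-information M2 statistic: the noncentrality
# (M2 - df) is rescaled by degrees of freedom and sample size, floored at 0.
rmsea2 <- function(m2, df, n) sqrt(max((m2 - df) / (df * n), 0))

rmsea2(m2 = 130, df = 100, n = 1000)  # hypothetical values -> ~0.017,
                                      # "excellent" by the .030 cutoff
```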
Diagnostic assessments measure the knowledge, skills, and understandings of students at a smaller and more actionable grain size than traditional scale-score assessments. Results of diagnostic assessments are reported as a mastery profile, indicating which knowledge, skills, and understandings the student has mastered and which may need more instruction. These mastery decisions are based on probabilities of mastery derived from diagnostic classification models (DCMs). This report outlines a Bayesian framework for the estimation and evaluation of DCMs. Findings illustrate the utility of the Bayesian framework for estimating and evaluating DCMs in applied settings. Specifically, the findings demonstrate how a variety of DCMs can be defined within the same conceptual framework. Additionally, using this framework, the evaluation of model fit is more straightforward and easier to interpret with intuitive graphics. Throughout, recommendations are made for specific implementation decisions for the estimation process and the assessment of model fit.
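This report is the conceptual backbone of measr, and the fit workflow it describes maps onto the package. The sketch below continues the estimation example from the top of this post; the function names and `measr_extract()` options reflect my reading of the package documentation, so verify them against the measr reference before relying on them.

```r
# Attach absolute fit information to the fitted model, then pull results.
model <- add_fit(model, method = c("m2", "ppmc"))

measr_extract(model, "m2")              # M2 statistic with RMSEA and SRMSR
measr_extract(model, "ppmc_raw_score")  # posterior predictive check of raw scores
```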