Abstract
This chapter introduces cognitive diagnosis models as part of an alternative psychometric framework for diagnostic modeling and assessment, and contrasts them with traditional item response models. The focus is on the family of cognitive diagnosis models that represent various formulations and extensions of the deterministic inputs, noisy "and" gate (DINA) model. The chapter covers the assumptions of these models, their appropriate uses, and related examples. Several issues pertaining to the DINA models in particular, and to diagnostic modeling in general, are also discussed.
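The DINA model referenced in the abstract has a standard, widely published item response function (see, e.g., Junker & Sijtsma, 2001): an examinee is expected to answer item j correctly only when they master every attribute the item requires, subject to a slip parameter s_j and a guessing parameter g_j. The sketch below, with purely illustrative parameter values not taken from the chapter, computes these response probabilities from an attribute-profile matrix and a Q-matrix.

```python
import numpy as np

# DINA item response function:
#   eta_ij = prod_k alpha_ik ** q_jk   (1 iff examinee i has all attributes item j requires)
#   P(X_ij = 1 | alpha_i) = g_j ** (1 - eta_ij) * (1 - s_j) ** eta_ij

def dina_prob(alpha, Q, slip, guess):
    """Probability of a correct response for each examinee-item pair.

    alpha : (N, K) 0/1 attribute profiles
    Q     : (J, K) 0/1 Q-matrix of item-by-attribute requirements
    slip  : (J,) slip parameters s_j
    guess : (J,) guessing parameters g_j
    """
    # eta[i, j] = 1 iff examinee i masters every attribute required by item j
    eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(float)
    return guess ** (1 - eta) * (1 - slip) ** eta

# Illustrative (hypothetical) values: two examinees, two attributes, two items.
alpha = np.array([[1, 1], [1, 0]])   # examinee 1 masters both attributes
Q = np.array([[1, 1], [1, 0]])       # item 1 requires both, item 2 requires only the first
slip = np.array([0.1, 0.2])
guess = np.array([0.2, 0.25])
p = dina_prob(alpha, Q, slip, guess)
# Examinee 1 masters everything, so p[0] = [1 - s_1, 1 - s_2] = [0.9, 0.8];
# examinee 2 lacks attribute 2, so on item 1 only guessing applies: p[1, 0] = 0.2.
```

The conjunctive ("and") character of the model is visible in `eta`: missing a single required attribute drops the success probability all the way to the guessing parameter, which is exactly the deterministic-input, noisy-gate assumption the chapter examines.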
Copyright information
© 2012 Springer Science+Business Media Dordrecht
Cite this chapter
de la Torre, J. (2012). Application of the DINA Model Framework to Enhance Assessment and Learning. In M. Mok (Ed.), Self-directed Learning Oriented Assessments in the Asia-Pacific (Education in the Asia-Pacific Region: Issues, Concerns and Prospects, Vol. 18). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-4507-0_5
Print ISBN: 978-94-007-4506-3
Online ISBN: 978-94-007-4507-0