Abstract
This chapter assesses the content and convergent validity of the opportunity-to-learn (OTL) measures in the PISA and TIMSS surveys. Assessments of content validity are based on the correspondence between the OTL measures and the frameworks that guided the development of the mathematics tests in both surveys. Conclusions with regard to convergent validity are based on the statistical association between OTL and mathematics achievement. The chapter points to striking differences in the way PISA and TIMSS measure OTL: the content validity of the TIMSS measure appears more credible, but the PISA measure shows a much stronger association with mathematics achievement.
© 2022 Springer Nature Switzerland AG
Cite this entry
Luyten, H., & Scheerens, J. (2022). Measures of opportunity to learn mathematics in PISA and TIMSS: Can we be sure that they measure what they are supposed to measure? In T. Nilsen, A. Stancel-Piątak, & J.-E. Gustafsson (Eds.), International handbook of comparative large-scale studies in education (Springer International Handbooks of Education). Springer, Cham. https://doi.org/10.1007/978-3-030-88178-8_12
Print ISBN: 978-3-030-88177-1
Online ISBN: 978-3-030-88178-8
eBook Packages: Education; Reference Module Humanities and Social Sciences; Reference Module Education