
The assessment of quantitative problem-solving skills with “none of the above”-items (NOTA items)

European Journal of Psychology of Education

Abstract

In this contribution, we concentrate on the features of a particular item format: items having "none of the above" as their last option (NOTA items). There is considerable dispute about the advisability of using NOTA items in testing. Some authors conclude that NOTA items should be avoided, some reach neutral conclusions, while others argue that NOTA items are optimal test items. In this article, we contribute evidence to this discussion by conducting a protocol analysis of examinees' written statements while they answered NOTA items. In our investigation, a test containing 30 multiple-choice items was administered to 169 university students. The results show that NOTA options appear to be more attractive than options with specified solutions in cases where a problem solver fails. A relationship is also found between the quality of (incorrect) problem solving and the choice of the NOTA option: the higher the quality of the incorrect problem-solving process, the more likely the student is to choose the NOTA option. Overall, our research supports the statement that the more confidence an examinee has in a worked solution that is inconsistent with any of the specified solutions, the more eager he or she seems to be to choose "none of the above".

Résumé

In this contribution, we focus on the characteristics of a particular item format: items that have "none of the above" (NOTA) as their last option. There is considerable discussion about the desirability of using NOTA items in examinations. Some authors conclude that NOTA items should be avoided, some are neutral, while others maintain that it is optimal to use NOTA items in testing. In this article, we contribute evidence to this discussion by conducting a protocol analysis of the written work of students answering NOTA items. In our study, 169 university students took a test containing 30 multiple-choice questions. The results show that NOTA options seem more attractive than options with specified solutions in cases where a student does not know the answer. In addition, a relationship was found between the quality of the (incorrect) solution to a problem and the choice of NOTA options: the higher the quality of the problem-solving process, the more the student tends to choose the NOTA option. In general, our research supports the thesis that the more confident a student is in a solution that is inconsistent with one of the specified solutions, the more he or she tends to choose "none of the above".
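The central finding, that the likelihood of choosing the NOTA option rises with the rated quality of an incorrect worked solution, can be illustrated with a small cross-tabulation. The sketch below is a minimal illustration, not the authors' actual analysis: the 0-3 quality ratings and the records data are hypothetical.

    # Minimal sketch (hypothetical data, not the authors' analysis):
    # tabulate how often examinees choose the NOTA option at each rated
    # quality level of an incorrect worked solution.

    from collections import Counter

    # Each record: (quality rating 0-3 of the incorrect written solution,
    #               whether the examinee chose the NOTA option)
    records = [
        (0, False), (0, False), (1, False), (1, True), (2, False),
        (2, True), (2, True), (3, True), (3, True), (3, True),
    ]

    totals = Counter()        # examinees per quality level
    nota_choices = Counter()  # NOTA choices per quality level
    for quality, chose_nota in records:
        totals[quality] += 1
        nota_choices[quality] += chose_nota  # True counts as 1

    for q in sorted(totals):
        share = nota_choices[q] / totals[q]
        print(f"quality {q}: {nota_choices[q]}/{totals[q]} chose NOTA ({share:.0%})")

A rising share of NOTA choices across the quality levels is the pattern the abstract describes; in the study itself, the pattern was established through protocol analysis of written solutions rather than a simple tabulation of this kind.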



Author information

Correspondence to Filip Dochy.


Cite this article

Dochy, F., Moerkerke, G., De Corte, E. et al. The assessment of quantitative problem-solving skills with “none of the above”-items (NOTA items). Eur J Psychol Educ 16, 163–177 (2001). https://doi.org/10.1007/BF03173023
