Abstract
Computer-based case simulations (CCS) require several development iterations, each involving a sizeable investment of time from physician volunteers and test development staff. Hence, a case that fails to demonstrate good measurement properties when administered to examinees at the pretest stage represents a costly loss. An earlier study investigated the relationship of case properties to case difficulty, in part to obtain information that would permit (a) better instructions to case developers and (b) identification of problematic cases earlier in the development cycle. Some of the best predictors, however, were scoring points typically obtained at the pretest stage, making them inappropriate for meeting these two goals. The objective of this study was to determine whether these scoring points could be predicted from case properties that would be available early in case development. Case description variables and scoring points were available for 28 cases, which were analyzed using regression procedures. Three models were identified that predicted the important scoring points fairly well. These models differed primarily in the area of medicine: internal medicine, obstetrics/gynecology, or no medical area specified. The results were consistent with those found for predicting difficulty and appeared promising for a better understanding of the nature of the simulations.
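The regression procedures described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the predictor names, the synthetic data, and the ordinary-least-squares setup are not the study's actual variables or models; only the sample size of 28 cases is taken from the abstract.

```python
import numpy as np

# Hypothetical illustration of predicting a case-level scoring-point
# count from case description variables via ordinary least squares.
# The three predictors and all data are synthetic assumptions.
rng = np.random.default_rng(0)

n_cases = 28  # sample size reported in the abstract
X = rng.normal(size=(n_cases, 3))          # e.g., counts of findings,
                                           # required orders, duration
true_beta = np.array([2.0, -1.0, 0.5])     # synthetic coefficients
y = 10 + X @ true_beta + rng.normal(scale=0.1, size=n_cases)

# Fit OLS: prepend an intercept column, then solve by least squares.
X_design = np.column_stack([np.ones(n_cases), X])
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# R-squared as a rough index of how well the model predicts.
y_hat = X_design @ beta_hat
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(float(r2), 3))
```

In a setting like the study's, separate fits of this kind per medical area (internal medicine, obstetrics/gynecology, no area specified) would yield the three area-specific models the abstract describes.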
References
Clauser, B.E. & Clyman, S.G. (1994). A contrasting groups approach to standard setting for performance assessments of clinical skills. Academic Medicine 69 (RIME Supplement): S42–S44.
Clauser, B.E., Subhiyah, R.G., Nungester, R.J., Ripkey, D.R., Clyman, S.G. & McKinley, D. (1995). Scoring a performance-based assessment by modeling the judgments of experts. Journal of Educational Measurement 32: 397–415.
Clauser, B.E., Clyman, S.G., Margolis, M.J. & Ross, L.P. (1996). Are fully-compensatory models appropriate for setting standards on performance assessments of clinical skills? Academic Medicine 71 (RIME Supplement): S90–S93.
Clauser, B.E., Margolis, M.J., Clyman, S.G. & Ross, L.P. (1997). Development of automated scoring algorithms for complex performance assessments: A comparison of two approaches. Journal of Educational Measurement 34: 141–161.
Clyman, S.G., Melnick, D.E. & Clauser, B.E. (1995). Computer-based case simulations. In E.L. Mancall & P.G. Bashook (eds.), Assessing Clinical Reasoning: The Oral Examination and Alternative Methods. Evanston, IL: American Board of Medical Specialties.
Melnick, D.E. (1990). Computer-based clinical simulation: The state of the art. Evaluation in the Health Professions 13: 104–129.
Scheuneman, J.D., Fan, Y.V. & Clyman, S.G. (1998). An investigation of the difficulty of computer-based case simulations. Medical Education 32: 150–158.
Cite this article
Scheuneman, J.D., Clyman, S.G. & van Fan, Y. An Investigation of the Properties of Computer-Based Case Simulations. Adv Health Sci Educ Theory Pract 5, 11–22 (2000). https://doi.org/10.1023/A:1009854511330