
The Methodology of Metaevaluation as Reflected in Metaevaluations by the Western Michigan University Evaluation Center


Abstract

Metaevaluation is the process of delineating, obtaining, and applying descriptive and judgmental information about the utility, feasibility, propriety, and accuracy of an evaluation in order to guide the evaluation and to publicly report its strengths and weaknesses. Formative metaevaluations are undertaken while an evaluation is being planned or is in progress; they help evaluators plan, conduct, improve, interpret, and report their studies. Summative metaevaluations are conducted after an evaluation is completed; they help audiences see the evaluation's strengths and weaknesses and judge its merit and worth against the standards of good evaluation practice. Metaevaluations serve the public and professional interest by assuring that evaluations provide sound conclusions and guidance.
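As a purely illustrative sketch of the judgmental step described above — rating an evaluation on individual standards and profiling it across the four attributes of utility, feasibility, propriety, and accuracy — the following Python fragment shows one way such ratings might be tallied. It is not drawn from the article: the StandardJudgment type, the 0–2 scoring scale, and the summarize helper are hypothetical conveniences, not the article's or the Joint Committee's instrument.

    from dataclasses import dataclass
    from statistics import mean

    # The Joint Committee standards group evaluation standards under four
    # attributes; a metaevaluation judges an evaluation against each of them.
    ATTRIBUTES = ("utility", "feasibility", "propriety", "accuracy")

    @dataclass
    class StandardJudgment:
        attribute: str   # one of ATTRIBUTES
        standard: str    # label for the individual standard (illustrative)
        score: int       # hypothetical scale: 0 = not met, 1 = partial, 2 = met

    def summarize(judgments: list[StandardJudgment]) -> dict[str, float]:
        """Average the judgments within each attribute to profile the evaluation."""
        profile = {}
        for attr in ATTRIBUTES:
            scores = [j.score for j in judgments if j.attribute == attr]
            profile[attr] = mean(scores) if scores else float("nan")
        return profile

    # Example: a partial metaevaluation covering three standards.
    judgments = [
        StandardJudgment("utility", "U1 Stakeholder Identification", 2),
        StandardJudgment("utility", "U3 Information Scope and Selection", 1),
        StandardJudgment("accuracy", "A5 Valid Information", 2),
    ]
    print(summarize(judgments))  # utility averages 1.5; unrated attributes are nan

Averaging is only one possible aggregation rule; a real metaevaluation report would typically present the standard-by-standard judgments alongside any summary profile.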




Cite this article

Stufflebeam, D.L. The Methodology of Metaevaluation as Reflected in Metaevaluations by the Western Michigan University Evaluation Center. Journal of Personnel Evaluation in Education 14, 95–125 (2000). https://doi.org/10.1023/A:1008198315521
