The evaluation of prevention and health promotion programs is one component of the broader field of evaluation research. Also referred to as social program evaluation, evaluation research applies the practices and principles of social research to assess the conceptualization, design, implementation, effectiveness, and efficiency of social interventions, and to use that information to inform social action (Rossi, Lipsey, & Freeman, 2004). Prevention program evaluation draws on knowledge and traditions from several disciplines and fields of study, including psychology, public health, sociology, education, social work, social policy, public administration, medicine, and implementation science.
Below we describe prevention program evaluation, with a focus on the USA. We begin with a brief history of evaluation research and then summarize the prevention context, including a history of the prevention field and a discussion of prevention science....
References
Adedokun, O. A., Childress, A. L., & Burgess, W. D. (2011). Testing conceptual frameworks of nonexperimental program evaluation designs using structural equation modeling. American Journal of Evaluation, 32, 480–493.
Affholter, D. P. (1994). Outcome monitoring. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 96–118). San Francisco: Jossey-Bass.
Aldenderfer, M. S., & Blashfield, R. K. (1984). Cluster analysis. Beverly Hills, CA: Sage.
Allen, H., Cordes, H., & Hart, J. (1999). Vitalizing communities: Building on assets and mobilizing for collective action. Lincoln, NE: University of Nebraska-Lincoln.
Allen, N. E., Javdani, S., Lehrner, A. L., & Walden, A. L. (2012). “Changing the text”: Modeling council capacity to produce institutionalized change. American Journal of Community Psychology, 49, 317–331.
American Evaluation Association (AEA). (2011). Public statement on cultural competence in evaluation. Fairhaven, MA: Author. Retrieved January 19, 2013, from http://www.eval.org/ccstatement.asp
Andrews, J. O., Tingen, M. S., Jarriel, S. C., Caleb, M., Simmons, A., Brunson, J., et al. (2012). Application of a CBPR framework to inform a multi-level tobacco cessation intervention in public housing neighborhoods. American Journal of Community Psychology, 50, 129–140.
Barrera, M., Castro, F. G., & Steiker, L. K. H. (2011). A critical analysis of approaches to the development of preventive interventions for subcultural groups. American Journal of Community Psychology, 48, 439–454.
Beamish, W., & Bryer, F. (1999). Programme quality in Australian early special education: An example of participatory action research. Child: Care, Health and Development, 25, 457–472.
Benson, K., & Hartz, A. J. (2000). A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine, 342, 1878–1886.
Biesta, G. (2010). Pragmatism and the philosophical foundations of mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social & behavioral research (2nd ed., pp. 95–117). Thousand Oaks, CA: Sage.
Biglan, A., Ary, D., & Wagenaar, A. C. (2000). The value of interrupted time-series experiments for community intervention research. Prevention Science, 1, 31–49.
Bloom, H. S., Bos, J. M., & Lee, S. (1999). Using cluster random assignment to measure program impacts: Statistical implications for the evaluation of education programs. Evaluation Review, 23(4), 445–469.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Boruch, R. F. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage.
Braden, J. P., & Bryant, T. J. (1990). Regression discontinuity designs: Applications for school psychologists. School Psychology Review, 19(2), 232–240.
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19(3), 325–337.
Bronfenbrenner, U. (1979). The ecology of human development. Cambridge, MA: Harvard University Press.
Brown, C. H., Ten Have, T. R., Jo, B., Dagne, G., Wyman, P. A., Muthen, B., et al. (2009). Adaptive designs for randomized trials in public health. Annual Review of Public Health, 30, 1–25.
Bruyere, S. (1993). Participatory action research: An overview and implications for family members of individuals with disabilities. Journal of Vocational Rehabilitation, 3(2), 62–68.
Butterfoss, F. D. (2007). Coalitions and partnerships in community health. San Francisco: Jossey-Bass.
Caliendo, M., & Kopeinig, S. (2005). Some practical guidance for the implementation of propensity score matching (IZA Discussion Paper Series, No. 1588, pp. 1–29). Bonn, Germany: Institute for the Study of Labor.
Campbell, D. T. (1969). Reforms as experiments. American Psychologist, 24, 409–429.
Campbell, D. T. (1996). Regression artifacts in time-series and longitudinal data. Evaluation and Program Planning, 19(4), 377–389.
Campbell, J. (1997). How consumers/survivors are evaluating the quality of psychiatric care. Evaluation Review, 21, 357–363.
Campbell, M. J., Donner, A., & Klar, N. (2007). Developments in cluster randomized trials and Statistics in Medicine. Statistics in Medicine, 26, 2–19.
Campbell, R., Gregory, K. A., Patterson, D., & Bybee, D. (2012). Integrating qualitative and quantitative approaches: An example of mixed methods research. In L. Jason & D. Glenwick (Eds.), Innovative methodological approaches to community-based research (pp. 51–68). Washington, DC: APA Books.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Skokie, IL: Rand McNally.
Catalano, R. F., Fagan, A. A., Gavin, L. E., Greenberg, M. T., Irwin, C. E., Ross, D. A., et al. (2012). Worldwide application of prevention science in adolescent health. Lancet, 379, 1653–1664.
Cellini, S. R., & Kee, J. E. (2010). Cost-effectiveness and cost-benefit analysis. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 493–530). San Francisco: Jossey-Bass.
Centers for Disease Control and Prevention (CDC). (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(RR-11), 1–58.
Centers for Disease Control and Prevention (CDC). (2011). A framework for program evaluation. Retrieved January 19, 2013, from http://www.cdc.gov/eval/framework/index.htm
Chambers, D. (2012). The Interactive Systems Framework for dissemination and implementation: Enhancing the opportunity for implementation science. American Journal of Community Psychology, 50, 282–284.
Checkoway, B., & Richards-Schuster, K. (2003). Youth participation in community evaluation research. American Journal of Evaluation, 24, 21–33.
Cohen, A. B. (2009). Many forms of culture. American Psychologist, 64, 194–204.
Collins, K. M. T. (2010). Advanced sampling designs in mixed research: Current practices and emerging trends in the social and behavioral sciences. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social & behavioral research (2nd ed., pp. 353–377). Thousand Oaks, CA: Sage.
Community Tool Box. (2013). KU Work Group on Community Health and Development. Lawrence, KS: University of Kansas. Retrieved January 19, 2013, from the Community Tool Box: http://ctb.ku.edu/en/default.aspx
Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342, 1887–1892.
Connell, C. M. (2012). Survival analysis in prevention and intervention programs. In L. A. Jason & D. S. Glenwick (Eds.), Methodological approaches to community-based research (pp. 147–164). Washington, DC: APA Books.
Cook, T. D. (1985). Postpositivist critical multiplism. In L. Shotland & M. M. Mark (Eds.), Social science and social policy (pp. 21–62). Beverly Hills, CA: Sage.
Cook, T. D. (2008). “Waiting for Life to Arrive”: A history of the regression-discontinuity design in Psychology, Statistics and Economics. Journal of Econometrics, 142, 636–654.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Skokie, IL: Rand McNally.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Schweder (Eds.), Meta theory in social science (pp. 83–107). Chicago: University of Chicago Press.
Crusto, C. A., Ross, E., Kaufman, J. S., & The Center for Women and Families of Eastern Fairfield County, Inc. (2006). A practitioner’s guide to evaluating domestic violence prevention and treatment programs. In T. P. Gullotta & R. Hampton (Eds.), The prevention and treatment of interpersonal violence within the African American community: Evidence-based approaches (pp. 165–203). New York: Springer.
Curran, P. J., Stice, E., & Chassin, L. (1997). The relation between adolescent alcohol use and peer alcohol use: A longitudinal random coefficients model. Journal of Consulting and Clinical Psychology, 65, 130–140.
D’Agostino, R. B. (1998). Tutorial in biostatistics: Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17, 2265–2281.
DeGarmo, D. S., & Forgatch, M. S. (2012). A confidant support and problem solving model of divorced fathers’ parenting. American Journal of Community Psychology, 49, 258–269.
Dewey, J. (1925/2003). Experience and nature. In J. A. Boydston & L. Hickman (Eds.), The collected works of John Dewey, 1882–1953 (Electronic ed.). Charlottesville, VA: InteLex.
Domitrovich, C. E., Bradshaw, C. P., Poduska, J. M., Hoagwood, K., Buckley, J. A., Olin, S., et al. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Based Mental Health Promotion, 1, 6–28.
Donner, A., & Klar, N. (2000). Design and analysis of cluster randomization in health research. London: Arnold.
DuBois, D., Doolittle, F., Yates, B. T., Silverthorn, N., & Tebes, J. K. (2006). Research methodology and youth mentoring. Journal of Community Psychology, 34, 657–676.
Dymnicki, A. B., & Henry, D. B. (2012). Clustering and its applications in community research. In L. A. Jason & D. S. Glenwick (Eds.), Methodological approaches to community-based research (pp. 71–88). Washington, DC: American Psychological Association.
Fetterman, D. M. (1994). Empowerment evaluation. Evaluation Practice, 15(1), 1–15.
Fishman, D. B. (1999). The case for pragmatic psychology. New York: NYU Press.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication #231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
Flaspohler, P., Lesesne, C. A., Puddy, R. W., & Smith, E. (2012). Advances in bridging research and practice: Introduction to the second special issue on the Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 50, 271–281.
Fraser, M. W., Guo, S., Ellis, A. R., Thompson, A. M., Wike, T., & Li, J. (2011). Outcome studies of social, behavioral, and educational interventions: Emerging issues and challenges. Research on Social Work Practice, 21, 619–635.
Frierson, H. T., Hood, S., Hughes, G. B., & Thomas, V. G. (2010). A guide to conducting culturally-responsive evaluations. In J. Frechtling (Ed.), The 2010 user-friendly handbook for project evaluation (pp. 75–96). Arlington, VA: National Science Foundation. Retrieved January 19, 2013, from www.westat.com/Westat/pdf/news/UFHB.pdf
Gaber, J. (2000). Meta-needs assessment. Evaluation and Program Planning, 23(1), 139–147.
Garber, J., Clarke, G. N., Weersing, V. R., Beardslee, W. R., Brent, D. A., Gladstone, T. R. G., et al. (2009). Prevention of depression in at-risk adolescents: A randomized controlled trial. Journal of the American Medical Association, 301, 2215–2224.
Gibbons, R. D., Hedeker, D., Elkin, I., Waternaux, C., Kraemer, H. C., Greenhouse, J. B., et al. (1993). Some conceptual and statistical issues in analysis of longitudinal psychiatric data. Archives of General Psychiatry, 50, 739–750.
Giere, R. N. (2006). Scientific perspectivism. Chicago: University of Chicago Press.
Giere, R. N. (2009). Scientific perspectivism: Behind the stage door. Studies in History and Philosophy of Science, 40, 221–223.
Gilliam, A., Davis, D., Barrington, T., Lacson, R., Uhl, G., & Pheonix, U. (2002). The value of engaging stakeholders in planning and implementing evaluations. AIDS Education and Prevention, 14(supplement A), 5–17.
Girden, E. R. (1992). ANOVA repeated measures. Thousand Oaks, CA: Sage.
Gordon, R. (1987). An operational classification of disease prevention. In J. Steinberg & M. Silverman (Eds.), Preventing mental disorders: A research perspective (DHHS Publication No. ADM 87-1492, pp. 20–26). Rockville, MD: Alcohol, Drug Abuse, and Mental Health Administration.
Greene, J. C. (2007). Mixed methods in social inquiry. San Francisco: Jossey-Bass.
Greene, J. C., & Caracelli, V. J. (1997). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Evaluation, 74, 1–97.
Greene, J. C., & Hall, J. N. (2010). Dialectics and pragmatism: Being of consequence. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social & behavioral research (2nd ed., pp. 119–143). Thousand Oaks, CA: Sage.
Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco: Jossey-Bass.
Gunji, Y.-P., & Kamiura, M. (2004). Observational heterarchy enhancing active coupling. Physica D, 198, 74–105.
Hargreaves, W. A., Shumway, M., Hu, T., & Cuffel, B. (1998). Cost-outcome methods for mental health. San Diego, CA: Academic Press.
Harris, M. J. (2010). Evaluating public and community health programs. San Francisco: Jossey-Bass.
Harrow, B. S., & Lasater, T. M. (1996). A strategy for accurate collection of incremental cost data for cost-effectiveness analyses in field trials. Evaluation Review, 20(3), 275–290.
Hausman, A. J. (2002). Implications of evidence-based practice for community health. American Journal of Community Psychology, 30, 453–467.
Hawe, P., Degeling, D., & Hall, J. (1990). Evaluating health promotion: A health worker’s guide. Sydney, Australia: MacLennan & Petty.
Hedeker, D., Gibbons, R. D., & Flay, B. R. (1994). Random-effects regression models for clustered data with an example from smoking prevention research. Journal of Consulting and Clinical Psychology, 62(4), 757–765.
Hedeker, D., McMahon, S. D., Jason, L. A., & Salina, D. (1994). Analysis of clustered data in community psychology: With an example from a worksite smoking cessation project. American Journal of Community Psychology, 22(5), 595–615.
Heinrich, C., Maffioli, A., & Vázquez, G. (2010). A primer for applying propensity-score matching: Impact-evaluation guidelines (Technical Notes No. IDB-TN-161). Washington, DC: Inter-American Development Bank, Office of Strategic Planning and Development Effectiveness.
Hendricks, M. (1994). Making a splash: Reporting evaluation results effectively. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 549–575). San Francisco: Jossey-Bass.
Hennessy, M., & Greenberg, J. (1999). Bringing it all together: Modeling intervention processes using structural equation modeling. American Journal of Evaluation, 20(3), 471–480.
Hernandez, M. (2000). Using logic models and program theory to build outcome accountability. Education and Treatment of Children, 23(1), 24–40.
Hess, B. (2000). Assessing program impact using latent growth modeling: A primer for the evaluator. Evaluation and Program Planning, 23(4), 419–428.
Hitchcock, J. H., Nastasi, B. K., Dai, D. Y., Newman, J., Jayasena, A., Bernstein-Moore, R., et al. (2005). Illustrating a mixed-method approach for validating culturally specific constructs. Journal of School Psychology, 43, 259–278.
Hoeppner, B. B., & Proeschold-Bell, R. J. (2012). Time-series analysis in community-oriented research. In L. A. Jason & D. S. Glenwick (Eds.), Methodological approaches to community-based research (pp. 125–145). Washington, DC: APA Books.
Hurley, S. (1990). A review of cost-effectiveness analyses. Medical Journal of Australia, 153(Suppl), S20–S23.
Imbens, G. W., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142, 615–635.
Institute of Medicine (IOM). (1997). Improving health in the community: A role for performance monitoring. Washington, DC: National Academy Press.
Israel, B. A., Schulz, A. J., Parker, E. A., & Becker, A. B. (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173–202.
Jacob, B. A., & Lefgren, L. (2004). Remedial education and student achievement: A regression-discontinuity analysis. The Review of Economics and Statistics, 86, 226–244.
Jacquez, F., Vaughn, L. M., & Wagner, E. (2013). Youth as partners, participants or passive recipients: A review of children and adolescents in Community-Based Participatory Research (CBPR). American Journal of Community Psychology, 51, 176–189.
Jason, L., & Glenwick, D. (Eds.). (2012). Innovative methodological approaches to community-based research. Washington, DC: APA Books.
Johnson, R. B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21, 93–110.
Kalinowski, P., & Fidler, F. (2010). Interpreting significance: The differences between statistical significance, effect size, and practical importance. Newborn and Infant Nursing Reviews, 10, 50–54.
Kaufman, J. S. (2006, November 1). Tell it to me straight: The benefits (and struggles) of a consumer driven assessment process. Paper presented at The American Evaluation Association Conference, Portland, OR.
Kaufman, J. S., Crusto, C. A., Quan, M., Ross, E., Friedman, S. R., O’Reilly, K., et al. (2006). Utilizing program evaluation as a strategy to promote community change: Evaluation of a comprehensive, community-based, family violence initiative. American Journal of Community Psychology, 38(3–4), 191–200.
Kazda, M. J., Beel, E. R., Villegas, D., Martinez, J. G., Patel, N., & Migala, W. (2009). Methodological complexities and the use of GIS in conducting a community needs assessment of a large U.S. municipality. Journal of Community Health, 34, 210–215.
Kellam, S. G., Koretz, D., & Moscicki, E. K. (1999). Core elements of developmental epidemiologically-based prevention research. American Journal of Community Psychology, 27, 463–482.
Kellow, J. T. (1998). Beyond statistical significance tests: The importance of using other estimates of treatment effects to interpret evaluation results. American Journal of Evaluation, 19, 123–134.
Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher’s handbook (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
King, J. A. (2002). Building the evaluation capacity of a school district. New Directions for Evaluation, 93, 63–80.
Kleinbaum, D., & Klein, M. (2012). Survival analysis: A self-learning text (3rd ed.). New York: Springer.
Kline, R. B. (1998). Principles and practice of structural equation modeling. New York: Guilford.
Koch, R., Cairns, J. M., & Brunk, M. (2000). How to involve staff in developing an outcomes-oriented organization. Education and Treatment of Children, 23(1), 41–47.
Koepke, D., & Flay, B. R. (1989). Levels of analysis. New Directions for Program Evaluation, 43, 75–87.
Kretzmann, J., & McKnight, J. (1996). Mobilizing community assets: Program for building communities from the inside out. Chicago: ACTA Publications.
Lanza, S. T., Flaherty, B. P., & Collins, L. M. (2003). Latent class and latent transition analysis. In J. A. Schinka & W. F. Velicer (Eds.), Handbook of psychology: Vol. 2, research methods in psychology (pp. 663–685). Hoboken, NJ: Wiley.
Layde, P. M., Christiansen, A. L., Peterson, D. J., Guse, C. E., Maurana, C. A., & Brandenburg, T. (2012). A model to translate evidence-based interventions into community practice. American Journal of Public Health, 102, 617–624.
Lei, P. W., & Wu, Q. (2007). Introduction to structural equation modeling: Issues and practical considerations. Educational Measurement: Issues and Practice, 26, 33–43.
Levine, M., & Perkins, D. V. (1987). Principles of community psychology. New York: Oxford University Press.
Lipsey, M., & Cordray, D. S. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345–375.
Lipsey, M. W., & Wilson, D. B. (1993). The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48, 1181–1209.
Lo Sasso, A. T., & Jason, L. A. (2012). Economic costs analysis for community-based interventions. In L. Jason & D. Glenwick (Eds.), Innovative methodological approaches to community-based research (pp. 221–238). Washington, DC: APA Books.
Long, B. B. (1989). The Mental Health Association and prevention. Prevention in Human Services, 6, 5–44.
Luellen, J. K., Shadish, W. R., & Clark, M. H. (2005). Propensity scores: An introduction and experimental test. Evaluation Review, 29, 530–558.
Mackay, K. (2002). The World Bank’s ECB experience. New Directions for Evaluation, 93, 81–100.
MacKinnon, D. P., & Lockwood, C. M. (2003). Advances in statistical methods for substance abuse prevention research. Prevention Science, 4, 155–171.
Madaus, G. F., & Stufflebeam, D. L. (2000). Program evaluation: A historical overview. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 3–18). Boston: Kluwer.
Madison, A. (2007). New directions for evaluation coverage of cultural issues and issues of significance to underrepresented groups. New Directions for Evaluation, 114, 107–114.
Marcantonio, R. J., & Cook, T. D. (1994). Convincing quasi-experiments: The interrupted time series and regression-discontinuity designs. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 133–154). San Francisco: Jossey-Bass.
Mark, M. M. (1986). Validity typologies and the logic and practice of quasi-experimentation. New Directions for Program Evaluation, 31, 47–66.
McGraw, S. A., Sellers, D. E., Stone, E. J., Edmundson, E. W., Johnson, C. C., Bachman, K. J., et al. (1996). Using process data to explain outcomes: An illustration from the Child and Adolescent Trial for Cardiovascular Health (CATCH). Evaluation Review, 20, 291–312.
McGuire, W. J. (1986). A perspectivist looks at contextualism and the future of behavioral science. In R. L. Rosnow & M. Georgoudi (Eds.), Contextualism and understanding in behavioral science (pp. 271–303). New York: Praeger.
Merriam, S. (1988). Case study research in education. San Francisco: Jossey-Bass.
Mersky, J. P., Topitzes, J. D., & Reynolds, A. J. (2011). Maltreatment prevention through early childhood intervention: A confirmatory evaluation of the Chicago Child-Parent Center preschool program. Children and Youth Services Review, 33, 1454–1463.
Mertens, D. M. (2011). Mixed methods as tools for social change. Journal of Mixed Methods Research, 5(3), 195–197.
Miles, J., Espiritu, R. C., Horen, N., Sebian, J., & Waetzig, E. (2010). A public health approach to children’s mental health: A conceptual framework. Washington, DC: Georgetown University Center for Child and Human Development, National Technical Assistance Center for Children’s Mental Health.
Millar, A., Simeone, R. S., & Carnevale, J. T. (2001). Logic models: A systems tool for performance management. Evaluation and Program Planning, 24, 73–81.
Milstein, B., Chapel, T. J., Wetterhall, S. F., & Cotton, D. A. (2002). Building capacity for program evaluation at the Centers for Disease Control and Prevention. New Directions for Evaluation, 93, 27–46.
Minkler, M. (2005). Community-based research partnerships: Challenges and opportunities. Journal of Urban Health: Bulletin of the New York Academy of Medicine, 82(2), ii2–ii12.
Morrissey, E., Wandersman, A., Seybolt, D., Nation, M., Crusto, C., & Davino, K. (1997). Toward a framework for bridging the gap between science and practice in prevention: A focus on evaluator and practitioner perspectives. Evaluation and Program Planning, 20(3), 367–377.
Mowbray, C., Bybee, D., Collins, M., & Levine, P. (1998). Optimizing evaluation quality and utility under resource constraints. Evaluation and Program Planning, 21, 59–71.
Mrazek, P. J., & Haggerty, R. J. (Eds.). (1994). Reducing risks for mental disorder: Frontiers for preventive intervention research. Washington, DC: Institute of Medicine, National Academy Press.
Muñoz, R. F., Mrazek, P. J., & Haggerty, R. J. (1996). Institute of Medicine report on prevention of mental disorders. American Psychologist, 51, 1116–1122.
Murray, D. M., & McKinlay, S. M. (1994). Design and analysis issues in community trials. Evaluation Review, 18(4), 493–514.
Myers, D. L. (2012). Accountability and evidence-based approaches: Theory and research for juvenile justice. Criminal Justice Studies, 1–16. Advance online publication. doi:10.1080/1478601X.2012.709853
National Institute of Mental Health (NIMH). (1996). A plan for prevention research at the National Institute of Mental Health: A report by the National Advisory Mental Health Council (NIH Publication No. 96-4093). Bethesda, MD: National Institutes of Health.
National Institute of Mental Health (NIMH). (1998). Priorities for prevention research at NIMH: A report by the National Advisory Mental Health Council (NIH Publication No. 98-2079). Bethesda, MD: National Institutes of Health.
Newcomer, K. E., & Conger, D. (2010). Using statistics in evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 454–492). San Francisco: Jossey-Bass.
O’Connell, M. E., Boat, T., & Warner, K. E. (Eds.). (2009). Preventing mental, behavioral, and emotional disorders among young people: Progress and possibilities. Washington, DC: National Academies Press.
O’Sullivan, R. G., & O’Sullivan, J. M. (1998). Evaluation voices: Promoting evaluation from within programs through collaboration. Evaluation and Program Planning, 21, 21–29.
Orlandi, M. A. (Ed.). (1992). Cultural competence for evaluators: A guide for alcohol and other drug abuse prevention practitioners working with ethnic/racial communities (DHHS Publication No. (ADM) 92-1884). Washington, DC: U.S. Department of Health and Human Services, Center for Substance Abuse Prevention.
Osgood, D. W., & Smith, G. L. (1995). Applying hierarchical linear modeling to extended longitudinal evaluation: The Boys Town follow-up study. Evaluation Review, 19(1), 3–38.
Patton, M. Q. (1978). Utilization-focused evaluation. Beverly Hills, CA: Sage.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Peck, L. R. (2005). Using cluster analysis in program evaluation. Evaluation Review, 29, 178–196.
Petrosino, A. (2000). Mediators and moderators in the evaluation of programs for children: Current practice and agenda for improvement. Evaluation Review, 24(1), 47–72.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.
Preskill, H. A., & Russ-Eft, D. (2005). Building evaluation capacity: 72 activities for teaching and training. Thousand Oaks, CA: Sage.
Preskill, H. A., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Price, R. H., & Smith, S. S. (1985). A guide to evaluating prevention programs in mental health (DHHS Publication No. ADM 85-1365). Washington, DC: US Government Printing Office.
Pullman, M. D. (2009). Participatory research in systems of care for children’s mental health. American Journal of Community Psychology, 44, 43–53.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Reichardt, C. S. (2011). Criticisms of and an alternative to the Shadish, Cook, and Campbell validity typology. In H. T. Chen, S. I. Donaldson, & M. M. Mark (Eds.), Advancing validity in outcome evaluation. New directions for evaluation (Vol. 130, pp. 43–53). San Francisco: Wiley/Jossey-Bass.
Reichardt, C. S., & Trochim, W. M. K. (1995). Reports of the death of regression-discontinuity analysis are greatly exaggerated. Evaluation Review, 19(1), 39–64.
Riecken, H. W., Boruch, R. F., Campbell, D. T., Caplan, N., Glennan, T. K., Pratt, J. W., et al. (1974). Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press.
Reiss, D., & Price, R. H. (1996). National research agenda for prevention research. The National Institute of Mental Health report. American Psychologist, 51, 1109–1115.
Reynolds, A. J., & Temple, J. A. (1995). Quasi-experimental estimates of the effects of a preschool intervention. Evaluation Review, 19(4), 347–373.
Rodwell, M. K. (1998). Social work constructivist research. New York: Garland Publishing.
Rogers, E. S., & Palmer-Erbs, V. (1994). Participatory action research: Implications for research and evaluation in psychiatric rehabilitation. Psychosocial Rehabilitation Journal, 18, 3–12.
Rosenbaum, P. R. (2002). Observational studies (2nd ed.). New York: Springer.
Rosenbaum, D. P., & Hanson, G. S. (1998). Assessing the effects of school-based drug education: A six-year multilevel analysis of Project D.A.R.E. Journal of Research in Crime and Delinquency, 35(4), 381–412.
Rosnow, R. L., & Georgoudi, M. (Eds.). (1986). Contextualism and understanding in behavioral science: Implications for research and theory. New York: Praeger.
Rossi, P. H., & Freeman, H. E. (1985). Evaluation: A systematic approach (3rd ed.). Newbury Park, CA: Sage.
Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
Scheirer, M. A. (1994). Designing and using process evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 40–68). San Francisco: Jossey-Bass.
Schensul, J. J., & LeCompte, M. D. (2013). Essential ethnographic methods: A mixed methods approach (2nd ed.). Lanham, MD: AltaMira Press.
Schmitt, N., Sacco, J. M., Ramey, S., & Chan, D. (1999). Parental employment, school climate, and children’s academic and social development. Journal of Applied Psychology, 84, 737–753.
Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects using experimental and observational designs. Washington, DC: American Educational Research Association.
Schochet, P. Z., & Burghardt, J. (2007). Using propensity scoring to estimate program-related subgroup impacts in experimental program evaluations. Evaluation Review, 31, 95–120.
Sechrest, L., & Figueredo, A. J. (1993). Program evaluation. Annual Review of Psychology, 44, 645–674.
Sechrest, L., & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative? Evaluation and Program Planning, 18(1), 77–87.
Segawa, E., Ngwe, J. E., Li, Y., Flay, B. R., & Aban Aya Coinvestigators. (2005). Evaluation of the effects of the Aban Aya Youth Project in reducing violence among African American adolescent males using latent class growth mixture modeling techniques. Evaluation Review, 29, 128–148.
SenGupta, S., Hopson, R., & Thompson-Robinson, M. (2004). Cultural competence in evaluation: An overview. New Directions for Evaluation, 102, 5–19.
Shadish, W. R. (1995). Philosophy of science and the quantitative-qualitative debates: Thirteen common errors. Evaluation and Program Planning, 18, 63–75.
Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103, 1334–1343.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.
Sharpe, P. A., Greaney, M. L., Lee, P., & Royce, S. (2000). Assets oriented community assessment. Public Health Reports, 115, 205–211.
Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press.
Snow, D. L., & Tebes, J. K. (1991). Experimental and quasiexperimental designs in prevention research. In C. G. Leukefeld & W. Bukoski (Eds.), Drug abuse prevention intervention research: Methodological issues (NIDA Research Monograph 107, pp. 140–158). Washington, DC: US Government Printing Office.
Stake, R. E. (1978). The case study method in social inquiry. Educational Researcher, 7, 5–8.
Stake, R. E. (1994). Case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 236–247). Thousand Oaks, CA: Sage.
Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of the ECB process: A conversation with the ECB literature. New Directions for Evaluation, 93, 7–26.
Suchman, E. (1967). Evaluative research. New York: Russell Sage.
Summerfelt, W. T. (2003). Program strength and fidelity in evaluation. Applied Developmental Science, 7, 55–61.
Tashakkori, A., & Teddlie, C. (Eds.). (2010). SAGE handbook of mixed methods in social & behavioral research (2nd ed.). Thousand Oaks, CA: Sage.
Taylor, S. J., & Bogdan, R. (1998). Introduction to qualitative research methods (3rd ed.). New York: Wiley.
Tebes, J. K. (2000). External validity and scientific psychology. American Psychologist, 55(12), 1508–1509.
Tebes, J. K. (2005). Community science, philosophy of science, and the practice of research. American Journal of Community Psychology, 35, 213–235.
Tebes, J. K. (2010). Community psychology, diversity, and the many forms of culture. American Psychologist, 65, 58–59.
Tebes, J. K. (2012). Philosophical foundations of mixed methods research: Implications for research practice. In L. Jason & D. Glenwick (Eds.), Innovative methodological approaches to community-based research (pp. 13–31). Washington, DC: APA Books.
Tebes, J. K., & Helminiak, T. H. (1999). Measuring costs and outcomes in mental health. Mental Health Services Research, 1(2), 119–121.
Tebes, J. K., Kaufman, J. S., & Chinman, M. J. (2002). Teaching about prevention to mental health professionals. In D. Glenwick & L. Jason (Eds.), Innovative approaches to the prevention of psychological problems (pp. 37–60). New York: Springer.
Tebes, J. K., Kaufman, J. S., & Connell, C. M. (2003). The evaluation of prevention and health promotion programs. In T. Gullotta & M. Bloom (Eds.), The encyclopedia of primary prevention and health promotion (pp. 46–63). New York: Kluwer/Academic.
Tebes, J. K., & Kraemer, D. T. (1991). Quantitative and qualitative knowing in mutual support research: Some lessons from the recent history of scientific psychology. American Journal of Community Psychology, 19, 739–756.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating qualitative and quantitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage.
Thigpen, S., Puddy, R. W., Singer, H. H., & Hall, D. M. (2012). Moving knowledge into action: Developing the rapid synthesis and translation process within the Interactive Systems Framework. American Journal of Community Psychology, 50, 285–294.
Thompson, B. (1993). The use of statistical significance tests in research: Bootstrap and other alternatives. Journal of Experimental Education, 61, 361–377.
Todd, N. E., Allen, N. E., & Javdani, S. (2012). Multi-level modeling: Method and application for community-based research. In L. Jason & D. Glenwick (Eds.), Innovative methodological approaches to community-based research (pp. 167–185). Washington, DC: APA Books.
Trickett, E. J. (1996). A future for community psychology: The contexts of diversity and the diversity of contexts. American Journal of Community Psychology, 24, 209–234.
Trickett, E. J., Beehler, S., Deutsch, C., Green, L. W., Hawe, P., McLeroy, K., et al. (2011). Advancing the science of community-level interventions. American Journal of Public Health, 101(8), 1410–1419.
Trochim, W. M. K. (1984). Research design for program evaluation: The regression discontinuity approach. Beverly Hills, CA: Sage.
U.S. Department of Health and Human Services (DHHS). (2003). Cooperative agreements for the comprehensive community mental health services for children and their families program, child mental health initiative part 1, programmatic guidance (RFA-No. SM-02-009). Washington, DC: Author.
U.S. General Accounting Office (GAO). (1991). Designing evaluations. Washington, DC: Author.
U.S. Government Accountability Office (GAO). (2012). Designing evaluations: 2012 revision. Washington, DC: Author.
Varda, D., Shoup, J. A., & Miller, A. (2012). A systematic review of collaboration and network research in the public affairs literature: Implications for public health practice and research. American Journal of Public Health, 102, 564–571.
W. K. Kellogg Foundation. (2000). Logic model development guide: Using logic models to bring together planning, evaluation and action. Battle Creek, MI: Author.
Wagner, A. K., Soumerai, S. B., Zhang, F., & Ross-Degnan, D. (2002). Segmented regression analysis of interrupted time series studies in medication use research. Journal of Clinical Pharmacy and Therapeutics, 27, 299–309.
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., et al. (2008). Bridging the gap between prevention research and practice: An interactive systems framework for building capacity to disseminate and implement innovations. American Journal of Community Psychology, 41(3–4), 171–181.
Wandersman, A., Flaspohler, P., Ace, A., Ford, L., Imm, P., Chinman, M., et al. (2002). PIE a la mode: Mainstreaming evaluation and accountability in each program in every county of a state-wide school readiness initiative. New Directions for Evaluation, 99, 33–49.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting to outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23(3), 389–395.
Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. B. (1966). Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally.
Weiss, C. H. (1972). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.
Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute.
Wholey, J. S. (2010). Exploratory evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 81–99). San Francisco: Jossey-Bass.
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.). (2010). Handbook of practical program evaluation (3rd ed.). San Francisco: Jossey-Bass.
Whyte, W. F. (1989). Advancing scientific knowledge through participatory action research. Sociological Forum, 4(3), 367–385.
Williams, K. J., Bray, P. G., Shapiro-Mendoza, C. K., Reisz, I., & Peranteau, J. (2009). Modeling the principles of community-based participatory research in a community health assessment conducted by a health foundation. Health Promotion Practice, 10, 67–75.
Wolf, F. (2010). Enlightened eclecticism or hazardous hotchpotch? Mixed methods and triangulation strategies in comparative public policy research. Journal of Mixed Methods Research, 4(4), 144–167.
Wolff, N., Helminiak, T. W., & Tebes, J. K. (1997). Getting the cost right in cost-effectiveness analyses. American Journal of Psychiatry, 154(6), 736–743.
Wong, V. C., Cook, T. D., Barnett, S., & Jung, K. (2008). An effectiveness-based evaluation of five state pre-kindergarten programs. Journal of Policy Analysis and Management, 27, 122–154.
Woodruff, S. I. (1997). Random-effects models for analyzing clustered data from a nutrition education intervention. Evaluation Review, 21(6), 688–697.
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316, 1036–1039.
Yanovitzky, I., Zanutto, E., & Hornik, R. (2005). Estimating causal effects of public health education campaigns, using propensity score methodology. Evaluation and Program Planning, 28, 209–220.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Joint Committee on Standards for Educational Evaluation (JCSEE). Thousand Oaks, CA: Sage.
Zeger, S. L., Irizarry, R., & Peng, R. D. (2006). On time series analysis of public health and biomedical data. Annual Review of Public Health, 27, 57–79.