Evaluation is an important way to sustain and grow prevention work. Unless you can evaluate and measure your program’s impact, its success remains unproven, and future funding may be harder to obtain. Evaluation can measure intended outcomes in ways that ultimately benefit both funder and grantee.
Evaluations can demonstrate to funders that:
• The work builds data and creates results
• The projects are high quality
• The grantee is invested in building a case for future social change
It is important to choose evaluation strategies that can accurately measure the program's intended outcomes. As you can see from the resources below, program evaluation is a science in and of itself.
For help planning or conducting a program evaluation, contact the PCADV Prevention Team at 800-932-4632.
Basic evaluation includes the following components:
• Needs Assessment – Determine what needs existed before the program began and describe how the program served those needs. Be sure to consider the needs in the context of the characteristics of the group/community served. (Ideally, the needs assessment is done before the start of the program and determines the appropriate strategies and approaches.)
• Process – Describe the strategy and why it was the best way to reach the target group(s).
• Outcome – Establish whether the process worked. Describe how the strategy worked or what factors presented challenges to it. Measure and/or describe the outcomes – what happened and how you can prove it. Determine if anything should be changed to improve future outcomes.
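In practice, the outcome step often starts with a simple pre/post comparison of participant survey scores. As a minimal sketch (not a PCADV tool), assuming paired pretest and posttest scores on the same scale, the average change per participant can be computed like this; all names and numbers here are hypothetical:

```python
from statistics import mean

def outcome_change(pre_scores, post_scores):
    """Mean pretest-to-posttest change across paired participant scores."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre and post scores must be paired by participant")
    # Positive values indicate improvement on the measured scale
    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return mean(changes)

# Hypothetical 5-point knowledge-survey scores for six participants
pre = [2, 3, 2, 4, 3, 2]
post = [4, 4, 3, 5, 4, 3]
print(f"Average change per participant: {outcome_change(pre, post):+.2f}")
```

A real evaluation would go further (statistical significance, comparison groups, retrospective pretest designs such as those discussed in the resources below), but even a simple paired comparison helps answer "what happened and how you can prove it."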
Academy for Educational Development, Center for Community-Based Health Strategies (2002). Process Evaluation as a Quality Assurance Tool.
Balzer, L. (n.d.). The Evaluation Portal.
Center for Theory of Change. Assorted Resources.
Centers for Disease Control and Prevention (2012). A Framework for Program Evaluation.
Cox, P.J., Keener, D., Woodard, T. & Wandersman, A. (2009). Evaluation for Improvement: A Seven-Step Empowerment Evaluation Approach for Violence Prevention Organizations.
Fisher, D., Lang, K.S. & Wheaton, J. (2010). Training Professionals in the Primary Prevention of Sexual and Intimate Partner Violence: A Planning Guide (Rev.).
Thompson, M.P., Basile, K.C., Hertz, M.F. & Sitterle, D. (2006). Measuring Intimate Partner Violence Victimization and Perpetration: A Compendium of Assessment Tools. National Center for Injury Prevention and Control.
Thornton, T.N., Craft, C.A., Dahlberg, L.L., Lynch, B.S. & Baer, K. (2002). Best Practices of Youth Violence Prevention: A Sourcebook for Community Action (Rev.).
Community Anti-Drug Coalitions of America (2009). Strategic Prevention Framework.
Davis, R., Fujie Parks, L. & Cohen, L. (2010). Sexual Violence and the Spectrum of Prevention: Towards a Community Solution. Enola, PA: National Sexual Violence Resource Center.
Fullwood, C. (2002). Preventing Family Violence: Lessons From the Community Engagement Initiative. Family Violence Prevention Fund (now Futures Without Violence).
Imm, P., Wandersman, A., Rosenbloom, D., Guckenburg, S. & Leis, R. (2007). Preventing Underage Drinking: Using Getting To Outcomes™ with the SAMHSA Strategic Prevention Framework to Achieve Results.
Lavoie, F., Vezina, L., Piche, C. & Boivin, M. (1995). Evaluation of a Prevention Program for Violence in Teen Dating Relationships. Journal of Interpersonal Violence. Retrieved May 15, 2013.
Point K Learning Center. Advocacy Evaluation Resources.
Reisman, J., Gienapp, A. & Stachowiak, S. (2007). A Guide for Measuring Advocacy and Policy.
SurveyGizmo – Online survey templates.
The California Endowment Evaluation Toolbox (2010). Storytelling Approaches to Program Evaluation: An Introduction.
University of Iowa Extension (2004). Focus Group Fundamentals: Methodology Brief.
Fetterman, D.M. & Wandersman, A. (2005). Empowerment Evaluation Principles in Practice. New York: Guilford Press.
Frechtling, J. (2007). Logic Modeling Methods in Program Evaluation. San Francisco: Jossey-Bass.
Gruner Gandhi, A. (2007). The Devil Is in the Details: Examining the Evidence for "Proven" School-Based Drug Abuse Prevention Programs. Evaluation Review, 31(1), 43-74.
Moore, D. & Tananis, C.A. (2009). Measuring Change in a Short-Term Educational Program Using a Retrospective Pretest Design. American Journal of Evaluation, 30(2), 189-202.
Patton, M.Q. (2008). Utilization-Focused Evaluation. Thousand Oaks: Sage.
Seigart, D. & Brisolara, S. (2002). Feminist Evaluation: Explorations and Experiences. New Directions for Evaluation, 96 (Winter). San Francisco: American Evaluation Association.
Weiss, C.H. (1998). Evaluation, 2nd Edition. Englewood Cliffs: Prentice Hall.
Wholey, J.S., Hatry, H.P. & Newcomer, K.E. (2004). Handbook of Practical Program Evaluation, 2nd Edition. San Francisco: Jossey-Bass.