Which Is More Important—the Means or the Ends? Process, Impact and Outcome Evaluations


    One of my childhood memories is of my fifth-grade English teacher posing this question as she analyzed a piece of classical literature: do the means justify the ends? She qualified her question with, “I know you are too young to understand this, but one day you will.” I wonder how many of us ask ourselves that question while evaluating programs. In a way, we are also asking, “Which is really more important to us: the means or the ends, that is, the process or the outcome?” Today we will review simple definitions of three types of evaluations: process evaluations, impact evaluations, and outcome evaluations. Introductory program evaluation courses often include this material. For more experienced evaluators, I encourage you to consider critically: if forced to choose just two of the following three options within a particular evaluation situation, which would you rank as more important, and why?

    Process Evaluations

    These evaluate the activities and methods a program uses to achieve its outcomes. These activities should be directly linked to the intermediary and ultimate outcomes your program targets. Examples of measures and evaluation questions include:

    • the number and demographics of participants served
    • the number of activities delivered, such as the number of prevention workshops conducted
    • questions of fidelity: Were activities really implemented as planned? How closely was the curriculum followed?
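    For readers who track such measures in a script or spreadsheet, here is a minimal sketch of tallying process measures from an activity log. The records, field names, and the simple fidelity calculation are entirely hypothetical and meant only to illustrate the idea.

```python
# Minimal sketch: tallying process measures from a hypothetical activity log.
# All records, field names, and the fidelity calculation are illustrative only.
from collections import Counter

workshops = [
    {"topic": "nutrition", "participants": 18, "steps_delivered": 9,  "steps_planned": 10},
    {"topic": "exercise",  "participants": 22, "steps_delivered": 10, "steps_planned": 10},
    {"topic": "nutrition", "participants": 15, "steps_delivered": 7,  "steps_planned": 10},
]

total_participants = sum(w["participants"] for w in workshops)
workshops_by_topic = Counter(w["topic"] for w in workshops)

# Simple fidelity measure: share of planned curriculum steps actually delivered.
fidelity = sum(w["steps_delivered"] for w in workshops) / sum(w["steps_planned"] for w in workshops)

print(f"Participants served: {total_participants}")
print(f"Workshops by topic: {dict(workshops_by_topic)}")
print(f"Curriculum fidelity: {fidelity:.0%}")
```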

    Impact Evaluations

    These measure intermediary outcomes, such as changes in knowledge, attitudes, and behaviors, that specifically link to the ultimate outcomes your program targets. To capture these changes, measure them before your intervention (pre-test or baseline data) and after it (post-test). For example, a heart disease prevention program may provide workshops targeting intermediary outcomes such as changes in knowledge, attitudes, and behaviors related to nutrition and exercise. We can view these intermediary outcomes as a “go-between” connecting the process with the ultimate outcomes. A quick note: theory-driven, research-based program activities and measures are much more likely to actually produce, and demonstrate, the outcomes a program is seeking.
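    As a minimal illustration of the pre/post comparison described above, the sketch below computes the change in knowledge scores for the same participants. The scores and variable names are hypothetical; in practice they would come from your own pre- and post-tests.

```python
# Minimal sketch: pre/post comparison of knowledge scores for the same participants.
# Scores are hypothetical; real values would come from your pre- and post-tests.
pre_scores  = [55, 60, 48, 72, 65]   # baseline (pre-test) knowledge scores
post_scores = [70, 68, 62, 80, 75]   # scores after the intervention (post-test)

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)

print(f"Individual changes: {changes}")
print(f"Average change in knowledge score: {mean_change:.1f} points")
```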

    Outcome Evaluations

    These evaluate changes in the ultimate outcomes your program is targeting. Again, remember to collect these data before and after your intervention. In our heart disease prevention program, we might measure changes in the number of coronary events, such as heart attacks. In general, this level of outcomes can be harder to measure, especially when stigma or shame is associated with the outcome you are measuring.

    Process Evaluation ←→ Impact Evaluation ←→ Outcome Evaluation

    Thoughts

    In program evaluation, the means and the ends are equally critical. Let us consider the importance of process evaluations, since it is so easy to overlook the means. The process largely determines the outcome. In a well-designed program, process measures link closely to intermediary outcomes, which in turn link closely to final outcomes. If the process evaluation reveals shortfalls, that is, if the program has not really been implemented as planned, the final outcomes may suffer. A good process evaluation also provides an adequate program description over the course of the evaluation, which is so important! A program description portrays what the program is really about. This is not easy to accomplish, but it is worth the effort. What the program is at its core will determine the outcomes it produces.

    Different programmatic contexts call for different evaluations. It is beyond the scope of this post to list every type of evaluation, but here are a couple of resources:

    http://www.cdc.gov/NCIPC/pub-res/dypw/03_stages.htm

    Program Evaluation, Third Edition: Forms and Approaches (2006) by John M. Owen.

    Question:

    Evaluators, if forced to choose just two of these three options, which would you rank as more important within your particular program context, and why?

    Announcement:

    Who: The Center for Urban and Regional Affairs (CURA) at the University of Minnesota is offering

    What: a two-day “Introduction to Program Evaluation” workshop by Stacey Stockdill, offered as part of its Spring Conference, “Evaluation in a Complex World: Changing Expectations, Changing Realities”

    When: Monday, March 26-Tuesday, March 27, 2012.

    Where: University of Minnesota – Saint Paul Campus, Falcon Heights, MN 55113

    Scholarships may be available for the Introduction to Program Evaluation workshop. Scholarship application deadline: February 24, 2012.

    For more information: http://www.cura.umn.edu/news/scholarships-available-two-day-introduction-program-evaluation-workshop

    Contact Person: William Craig

    ——————

    For more resources, see our Library topic Nonprofit Capacity Building.

    ______________________________________________________________________________________

    Priya Small has extensive experience in collaborative evaluation planning, instrument design, data collection, grant writing, and facilitation. Contact her at [email protected]. Visit her website at http://www.priyasmall.wordpress.com. See her profile at http://www.linkedin.com/in/priyasmall/