Program evaluations are “individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working” [1]. What was your reaction to this definition? Has the prospect of undertaking a “research study” ever deterred you from conducting a program evaluation? Good news: program evaluation is not the same as research, and it usually does not need to be as complicated.
In fact, evaluation is a process in which we all engage, often without realizing it, on a daily, informal basis. How do you choose a pair of boots? Without thinking about it, you might weigh criteria such as looks, how well the boots fit, how comfortable they are, and how suitable they are for a particular use (walking long distances, navigating icy driveways, etc.).
Evaluation and research draw on the same techniques, and both are equally systematic and rigorous (“exhaustive, thorough and accurate” [2]). Still, there are a few key differences between them:
Program Evaluation Focuses on a Program vs. a Population
Research aims to produce new knowledge within a field. Ideally, researchers design studies so that findings can be generalized to the whole population, every single individual within the group being studied. Evaluation, by contrast, focuses only on the particular program at hand. Evaluations may also face added resource and time constraints.
Program Evaluation Improves vs. Proves
Daniel L. Stufflebeam, Ph.D., a noted evaluator, captured it succinctly: “The purpose of the evaluation is to improve, not prove” [3]. In other words, research strives to establish that a particular factor caused a particular effect: for example, that smoking causes lung cancer. The bar for establishing causation is very high. The goal of evaluation, however, is to help improve a particular program. To do so, program evaluations get down-to-earth: they examine all the pieces required for successful program outcomes, including the practical inner workings of the program, such as its activities.
Program Evaluation Determines Value vs. Being Value-free
Another prominent evaluator, Michael J. Scriven, Ph.D., notes that evaluation assigns value to a program, while research seeks to be value-free [4]. Researchers collect data, present results, and then draw conclusions that expressly link to the empirical data. Evaluators add extra steps: they collect data, examine how the data line up with previously determined standards (also known as criteria or benchmarks), and determine the worth of the program. So while evaluators also draw conclusions that must faithfully reflect the empirical data, they take the extra steps of comparing the program data to performance benchmarks and judging the program's value. While this may seem to cast evaluators in the role of judge, we must remember that evaluations determine the value of programs so they can help improve them.
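For readers who think in concrete terms, the extra steps described above (collect data, compare it to previously determined benchmarks, judge the result) can be sketched as a tiny program. This is purely illustrative: the metric names, numbers, and the `judge_program` helper are all invented, not part of any evaluation standard.

```python
# Hypothetical sketch of the evaluator's extra steps: compare collected
# program data against previously determined benchmarks, then judge value.
# All metric names and figures below are invented for illustration.

def judge_program(data, benchmarks):
    """Return a per-metric verdict: did the program meet each benchmark?"""
    return {metric: data.get(metric, 0) >= target
            for metric, target in benchmarks.items()}

# Invented example: a program's collected data vs. its benchmarks.
data = {"attendance_rate": 0.82, "completion_rate": 0.60}
benchmarks = {"attendance_rate": 0.75, "completion_rate": 0.70}

verdict = judge_program(data, benchmarks)
print(verdict)  # attendance meets its benchmark; completion falls short
```

The point of the sketch is the comparison step itself: a researcher would stop at reporting the 0.82 and 0.60, while an evaluator goes on to weigh them against the benchmarks, and then uses the shortfall to recommend improvements.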
Program Evaluation Asks “Is it working?” vs. “Did it work?”
Tom Chapel, MA, MBA, Chief Evaluation Officer at the Centers for Disease Control and Prevention (CDC), differentiates between evaluation and research on the basis of timing:
Researchers must stand back and wait for the experiment to play out. To use the analogy of cultivating tomato plants, researchers ask, “How many tomatoes did we grow?” Evaluation, on the other hand, is a process unfolding in real time. In addition to counting the tomatoes, evaluators also ask about related areas: “How much watering and weeding is taking place?” “Are there nematodes on the plants?” If evaluators realize that activities are falling short, staff are free to adjust accordingly [5].
To summarize, evaluation: 1) focuses on programs vs. populations, 2) improves vs. proves, 3) determines value vs. staying value-free, and 4) happens in real time. In light of these four points, evaluations, when carried out properly, have great potential to be relevant and useful for program-related decision-making. How do you feel?
References:
1. U.S. Government Accountability Office. (2005). Performance Measurement and Evaluation. Retrieved January 8, 2012, from http://www.gao.gov/special.pubs/gg98026.pdf
2. Definition of “rigorous.” Retrieved January 8, 2012, from google.com
3. Stufflebeam, D.L. (2007). CIPP Evaluation Model Checklist. Retrieved January 8, 2012, from http://www.wmich.edu/evalctr/archive_checklists/cippchecklist_mar07.pdf
4. Coffman, J. (2003). Ask the Expert: Michael Scriven on the Differences Between Evaluation and Social Science Research. The Evaluation Exchange, 9(4). Retrieved January 8, 2012, from http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research
5. Chapel, T.J. (2011). American Evaluation Association Coffee Break Webinar: 5 Hints to Make Your Logic Models Worth the Time and Effort. Attended online on January 5, 2012.
——————
For more resources, see our Library topic Nonprofit Capacity Building.
____________________________________________________________________________________
Priya Small has extensive experience in collaborative evaluation planning, instrument design, data collection, grant writing, and facilitation. Contact her at [email protected]. Visit her website at http://www.priyasmall.wordpress.com. See her profile at http://www.linkedin.com/in/priyasmall/