This is part 2 of a previous post on the Centers for Disease Control and Prevention's (CDC) Framework model. The goal of these posts is not to give an exhaustive description of this model but to whet your appetite for further study, to refer you to other sources, and to share with you some related topics that have been percolating in my head.
In the last post, we covered steps 1-3 of the CDC's evaluation framework, depicted below:
A Framework for Evaluation.
Source: Centers for Disease Control and Prevention (CDC), Office of the Associate Director for Program (OADPG)
Step 4: Gather Credible Evidence
What is credible evidence? Let us back up and consider credibility from various perspectives: those of funders, agency staff, and program participants. Involving the most important stakeholder groups throughout the evaluation process and being open to learning from their experiences will increase the credibility of the evidence. Some program participants may not trust sources, such as government agencies and doctors, that are traditionally viewed as credible in professional circles.
Your evidence is only as good as the tools you use to collect it. Use high-quality tools, e.g., questionnaires, interview guides, etc. Pay attention to validity issues: do the questions really measure what you think they are measuring? At the very least, choose indicators based on a review of the literature. Indicators are the items being measured, such as knowledge levels or the number of low-birth-weight births, that shed light on the health or social condition that your program is trying to change.
Ask experts to review your evaluation tools and then pilot-test them among program participants. In some cases, it may be particularly important to use an evaluation tool that has been tested for reliability, i.e., does the questionnaire yield consistent results each time it is used? (A minimal sketch of one common reliability check follows the list below.) In these cases I recommend, if possible, using a tool that has been published in the peer-reviewed literature. University libraries often allow visitors to use their databases and access peer-reviewed journals online. The CDC also recommends:
- choosing indicators wisely,
- training staff in data collection,
- paying attention to data quality issues, and
- protecting the confidentiality of participants' information.
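For readers who work with their pilot-test data in a spreadsheet or statistics package, here is a minimal sketch of one common internal-consistency reliability check, Cronbach's alpha, written in Python with pandas. The item names and response values are hypothetical and purely for illustration; this is not the only way to assess reliability.

```python
# Minimal sketch: estimating internal-consistency reliability (Cronbach's alpha)
# for a set of questionnaire items. Assumes each row is one respondent and each
# column is one item scored on the same numeric scale. Data are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Return Cronbach's alpha for the item columns in `items`."""
    items = items.dropna()                          # use complete responses only
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot-test data: five respondents, three items on a 1-5 scale.
pilot = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 3],
})
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
# Higher values indicate more consistent items; many texts treat ~0.7 and above as acceptable.
```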
Step 5: Justify Conclusions
All conclusions need to be based on evidence. Take care also to base all your conclusions on sound statistical reasoning. For example, one common mistake is to conclude that there is a cause-and-effect relationship on the basis of correlational data. A statistical correlation only shows that two variables are associated with one another. Take, for example, the following piece of evidence: depression is correlated with lower levels of perceived social support. All we can conclude is that there is a correlation between depression and social support. Lower levels of perceived social support could have contributed to the depression, or the depression itself could have led to social withdrawal, which then resulted in lower levels of perceived social support. If you're interested in a light and amusing read to familiarize yourself with such principles, I second evaluator John Gargani's recommendation of Darrell Huff's classic book "How to Lie with Statistics."
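To make the point concrete, here is a minimal sketch in Python (using scipy) that computes a correlation between depression scores and perceived social support. The numbers are made up for illustration only; the key point is that the coefficient would come out the same no matter which variable influenced the other.

```python
# Minimal sketch: a correlation coefficient describes association, not causation.
# The scores below are hypothetical and purely for illustration.
from scipy.stats import pearsonr

depression_scores = [12, 18, 7, 22, 15, 9, 20, 11]    # hypothetical depression inventory scores
social_support    = [30, 22, 35, 18, 25, 33, 20, 28]  # hypothetical perceived-support scores

r, p_value = pearsonr(depression_scores, social_support)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A strong negative r tells us the two variables move together; it cannot tell us
# whether low support contributed to depression, depression reduced perceived
# support, or a third factor drove both.
```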
This is another step where it is important to continue engaging stakeholders. Encourage stakeholders to participate in the process of drawing conclusions from the evidence. This will increase their trust in the findings and the chances that they will actually use the evaluation.
Step 6: Ensure Use and Share Lessons Learned
So how do we ensure that evaluation findings are actually used? As in cooking, presentation is everything! People process visual information much more intuitively and naturally than verbal information. Consider, for example, how well very young children respond to colors and pictures. This principle translates into effectively communicating your findings to adult audiences as well. A hot topic in the field of evaluation is data visualization, or how to display information using sound design principles. While it is true that graphs can be confusing, effectively applying data visualization principles can produce graphs that are elegantly intuitive to a lay evaluation consumer. For further study, read Edward Tufte's classic book "The Visual Display of Quantitative Information."
A tool that helps visually depict a variety of graphs and charts in one place is a data dashboard: a display of multiple related graphs in the same location. Think of it as a shortcut to communicating information visually. A resource for further reference is Stephen Few's book "Information Dashboard Design: The Effective Visual Communication of Data."
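As a rough illustration of the idea, here is a minimal sketch of a one-page dashboard built with Python's matplotlib. The program measures and quarterly figures are hypothetical; the point is simply that several related charts can sit side by side in a single view.

```python
# Minimal sketch: a simple one-page "dashboard" that places several related charts
# in a single figure. All program data below are hypothetical.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
participants_enrolled = [40, 55, 62, 70]
sessions_completed_pct = [72, 78, 81, 85]
knowledge_scores = [3.1, 3.4, 3.8, 4.0]   # average post-test score on a 5-point scale

fig, axes = plt.subplots(1, 3, figsize=(12, 4))

axes[0].bar(quarters, participants_enrolled, color="steelblue")
axes[0].set_title("Participants enrolled")

axes[1].plot(quarters, sessions_completed_pct, marker="o", color="seagreen")
axes[1].set_title("Sessions completed (%)")
axes[1].set_ylim(0, 100)

axes[2].plot(quarters, knowledge_scores, marker="s", color="indianred")
axes[2].set_title("Avg. knowledge score (1-5)")
axes[2].set_ylim(1, 5)

fig.suptitle("Program dashboard (hypothetical data)")
fig.tight_layout()
plt.show()
```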
To increase the use of the evaluation findings, the CDC recommends:
- aligning evaluation design with how stakeholders plan to use the evaluation,
- translating findings into practical recommendations for the program, and
- using reporting strategies that are customized to stakeholders’ specific needs.
For DIY (Do It Yourself) evaluators, I highly recommend renowned evaluation theorist Michael Quinn Patton's book "Utilization-Focused Evaluation." You can listen to free recordings of two webinars by Michael Quinn Patton here. (You may need to download the software first.) "But I'm too busy managing my program to sit down and listen to webinars," you protest.
I understand all too well! Still, I listened to both webinars recently while doing housework and received very helpful guidance for a current project.
(To be continued)
——————
For more resources, see our Library topic Nonprofit Capacity Building.
____________________________________________________________________________________________________
Priya Small has extensive experience in collaborative evaluation planning, instrument design, data collection, grant writing, and facilitation. Contact her at [email protected]. Visit her website at http://www.priyasmall.wordpress.com. See her profile at http://www.linkedin.com/in/priyasmall/