This post is a response to "How might we put people at the center of evaluation?" Learn more about the conversation here.
On August 12th, I had the good fortune to participate in the Innovation in Evaluation roundtable. In the spirit of full disclosure, I should confess that I have been a professional evaluator for more than 20 years: I have taught courses and workshops, written books and articles, and consulted with a wide range of public, private, and government organizations on various evaluation topics. My point of view is that evaluation is about asking questions critical to decision-making processes, and that an evaluation's findings should contribute to individual, group, and organizational learning. (Examples of my writings can be found here.)
There is little doubt that a strong wind is blowing these days, and that it often takes the form of "evidence-based practices," "what works," and finding "proof points" that suggest causal relationships between philanthropic giving and social impact. Perhaps it is just human nature to want to bring order out of chaos, to make simple the complex, or to control that which is dynamic and ever changing. Yet the world in which programs, initiatives, and social change occur is not static, predictable, or manageable. Too often, I have seen evaluation approaches and designs couched in the language of "rigor" that ignore the human element: what it means to live through and into the social problems and solutions that are at the heart of philanthropic giving. While the results may produce statistically significant findings, they often do little to answer questions about how or why the program did or did not make a difference, or how it might better achieve its goals.
I believe that the topic of this week's blog, "How do we put people at the center of evaluation?" is fundamentally about what it means to design and implement evaluations in ways that honor the voices and lived experiences of those who are participants in or recipients of the services, programs, and policies the field supports and funds. While I do think there are times when randomized control trials (RCTs) or quantitative designs may be appropriate, I think we must be extremely careful not to (a) overpromise what these designs can deliver, and (b) ignore more qualitative ways of knowing. (For an excellent editorial on the need to use alternative evaluation approaches, see here.)
It is through the systematic collection and analysis of qualitative data (in the form of words and pictures) that the human spirit lives in all that we do. If we truly want to understand the ways in which our work adds value and meaning, and impacts those whom we hope to affect, then local context matters (IDEO's Jocelyn Wyatt's blog entry on this topic is a powerful example of what this looks like in practice). As such, RCTs are the antithesis of thinking locally. To illustrate the power of putting people at the center of evaluation, consider the following poem constructed by Cheryl MacNeil, an evaluator and faculty member of the Sage Colleges. It was composed from a series of focus group interviews with three different constituencies who were involved with a government-funded self-help program.
Poetic Representation of "Role Identity"