
More on Evaluation for Learning

This post is a response to "How Might We Celebrate Learning through Evaluation?" Learn more about the conversation here.

I appreciated Sally's post on the topic of active learning; the questions she poses are similar to those we challenge ourselves with at the Packard Foundation. The comments of Sonal Shah (I wasn't there at SoCap, but Beth Kanter was) also resonate with our own work and our approach to balancing the rigor of evaluation with a simultaneous commitment to continuous improvement.

At the Packard Foundation, our commitment to effectiveness plays out directly in our evaluation and learning culture and practices. Our approach to evaluation is guided by three main principles:
  1. Success depends on a willingness to solicit feedback and take corrective action when necessary
  2. Improvement should be continuous, and we should learn from our mistakes
  3. Evaluation should be conducted in partnership with those who are doing the work in order to maximize learning and minimize the burden on grantees




Through these principles we acknowledge that no effort can succeed without continuous feedback and good data. Evaluation has to be done in a spirit of improvement and collaboration, with staff and grantees allowed to make mistakes, go in a wrong direction, learn from previous work, and take corrective action. It also has to be done with the acknowledgment that it is the grantees who are doing the work, and that they have to be involved in the conversation. As a Foundation, we need to recognize that the burden of evaluation can be large, and we work to minimize that burden and maximize the value of evaluation for our grantees.

Over the past four years we have been shifting from evaluation for proof or accountability ("Did the program work?") to evaluation for program improvement ("What did we learn that can help us make the program better?"). The latter reflects an approach we refer to as "real-time" evaluation. For us, real-time means balancing monitoring and evaluation to effectively support learning and continuous improvement as our grant-making strategies are implemented. In practice, this extends further than evaluation, and represents our overall approach to an appropriate monitoring, evaluation, and learning system for each programmatic area. Real-time monitoring and evaluation are integrated to regularly facilitate opportunities for learning and to bring timely evaluation data, in accessible formats, to the table for reflection and use in decision making. Rather than focus just on evaluation, we have been encouraging a culture that "thinks evaluatively" throughout the grant-making lifecycle of planning, implementation, monitoring, assessment, and course-correction.

We do not have a one-size-fits-all approach to monitoring and evaluation. Rather, we ask our program staff to consider the following factors when they are formulating their monitoring, evaluation, and learning agendas: What questions do we seek to answer? Who is the audience for this information? What level of rigor do they require to be convinced? How complex are the strategies? What is the timeframe for needing information? Finally, what are the overall program resources being invested? In response to these questions, the evaluation approach selected may range from retrospective to real-time evaluation, or a combination of both, using qualitative and quantitative data and methods that range from loosely aligned to highly rigorous. We also encourage staff to consider their monitoring, evaluation, and learning needs at the beginning of a subprogram. We have found that doing so is more likely to lead to logic models, outcomes, indicators, and dashboards that are useful for informing decision making and program improvement, rather than requirements imposed on the subprogram with no connection to programmatic work.

Making the shift from evaluation for proof to evaluation for program improvement was greatly aided by practices already underway within the Foundation. Since 2004, our Preschool subprogram has been engaged with the Harvard Family Research Project (HFRP) in a real-time evaluation. HFRP's approach represented a new way of doing evaluation at the Foundation. The evaluation has in many ways been a strategic partner, serving as a mechanism for the timely flow of strategic information to facilitate the Preschool subprogram's development. From the start, its emphasis has been on continuous (or real-time) feedback and learning. Because the strategy relied on advocacy and policy change, for which there were practically no established evaluation methods, the evaluation also required methodological creativity. Traditional evaluation approaches, in which the evaluator develops an evaluation design and then reports back when the data are all collected and analyzed, or in which the evaluator assesses impact after the strategy has been implemented, would have been less useful here.

The importance of these practices may sound self-evident (of course everyone should do them), but they are much harder to actually carry out. Not everything has worked as planned in these evaluations, but both program staff and the evaluators have become skilled at adaptation. The evaluation field has room for growth if more evaluation is going to happen in real time, and evaluators themselves are learning how to be both rigorous and fast. I like what IDEO is doing in this space; I think it is just the kind of integration of evaluation rigor, rapid cycles, and continuous feedback that many of us are looking to build into our practice.

Much has been made of the distinction between evaluations designed for accountability (determining whether a program did what it said it would do) and evaluations designed for learning (supporting ongoing decision making and continuous improvement). In truth, evaluations are rarely one or the other. Typically they must be both (a program officer may be more interested in learning, while a board member may be more interested in accountability), and the evaluation must find a way to ensure that both users' needs are met.

Balance and feedback are key here. How can we get the highest-quality information we need, when we need it, to make the best program strategy decisions, without placing an undue reporting burden on those we rely upon for data? And while we may have these good intentions at the Packard Foundation, they will not mean much if they don't translate into real value, that is, impact. Our focus remains on the key questions: What are the goals or outcomes we want to achieve? Are we meeting them? It is the real-time aspect that allows us to better know whether we are making progress toward those outcomes, what is contributing to our progress (or lack of it), and therefore how to make better strategy decisions along the way.

Guest blogger Gale Berkowitz directs evaluation at the David and Lucile Packard Foundation.
