Timing is Everything in Evaluation

This post is a response to "How Might We Measure in the Appropriate Timeframe?" Read more of the conversation here.

Many of the blog entries to date have focused on evaluation, ranging from longitudinal research-based studies to systems theory to the importance of designing the display of data for the greatest impact. While many of my colleagues and new friends have written important and interesting posts, I am going to distill my short entry down to three essential tenets:
  • Timing is everything.
  • The time to start measuring is now.
  • We all need deadlines.
Many of us found the love of our life, our first home, our perfect pet, or our dream job based on timing: we were ready and open to the change. Evaluation works the same way. An organizational leader, and the culture of the organization he or she leads, must be ready to make the strategic changes necessary to incorporate evaluation into practice. If you are reading this and are in a position to champion organizational change, read on. If not, read on and then forward the post to the appropriate person or people in your organization.

What kind of evaluation should you implement once your organization is ready? Traditional evaluation tends to fulfill a compliance order, retrofitting post-program data into a report, and results in little if any organizational learning. I propose that anyone reading this post plan to start implementing ongoing performance measurement now, as in today. The performance measurement I suggest is a process tied to internal improvement. It starts with a discussion to develop a shared vision for success, followed by identifying how that success translates into measurable outcomes and key indicators. Although longitudinal research-based studies of community impact can be useful and serve a purpose, I am outlining a plan for an individual organization to get started. This starts with defining what mission success looks like in a tangible, accessible way, and then developing milestones for data collection, reporting, and management. The goal is program improvement based on the learning now available through the data.

When should you implement the process? In short, the indicators, or quantifiable data, need to be collected often enough to allow for course corrections based on that data. I do not propose that data collection and reporting be so frequent that an organization is in a constant state of flux, trying to make management decisions based on data that is too dynamic to make sense of. But annual data collection is not typically sufficient. The sweet spot should be driven by internal learning opportunities: quarterly board meetings, a twice-annual management retreat, or, dare I write, specific impact gatherings.

I agree with Tim Brown when he writes, "In innovation we have learned that rapid feedback cycles are important when it comes to successful experimentation." I would expand the idea, as it is clear to me that although we learn something in exit surveys, for example, we gain real meaning after the participant, client, or customer leaves our services, returns to their daily routines, and attempts to implement their newly acquired knowledge.

Rapid feedback loops are important but need to be balanced by the settle-in response. After the "wow, that was interesting!" reaction, we need to learn about the actual results of our work. The problem, of course, is that attrition sets in as soon as participants leave the classroom, so it is incredibly difficult to achieve a solid response rate after any amount of time has elapsed.

In working with a large membership organization a couple of years ago to develop an outcome dashboard tied to their strategic plan, I learned that they weren't actively managing by it. While they found the tool useful, their targets fell apart during the economic downturn. Additionally, many of the metrics remained static, so the dashboard wasn't as useful as it was in the initial couple of years. Clearly, it is time for them to reflect and consider how to overcome data fatigue, to ensure that the tool is not thrown away when it simply needs updating.
Times change, and program offerings and organizations grow and shrink. Innovation is always at play, and there is no question that a performance measurement process and its tools need to be updated periodically.

So, you may be thinking: great, I get it; this is important, and I'm willing to start now or soon. But where do I begin to identify the key outcomes to measure? How do I incorporate the metrics into my process so that information, stories, and data are always at my fingertips? What can I realistically ask from a resource-strapped organization at a time of staff layoffs and closing programs? Is now really the time to get started?

I welcome your comments and invite you to visit the outcome/indicators project, a joint and ongoing effort of The Center for What Works and The Urban Institute, for further ideas and free resources, including an outcomes portal.

Guest blogger Debra Natenshon is the CEO of The Center for What Works, a Chicago-based nonprofit organization dedicated to performance measurement and benchmarking for the social sector.
