These short-term metrics are going to kill this great idea...
We need to give it more time...
We'll just have to wait and see...
It's impossible to know if this is working...
We've likely all heard these sentiments before (and probably thought them ourselves).
One of the most vexing challenges we face is how to simultaneously balance short-term and long-term results. Some things need to happen today; they just can't and shouldn't wait. Other equally or more important projects take longer to develop, prove themselves, or have a noticeable impact. This is especially true when we're attempting to change a complex, interrelated system.
As a result, the insistence of today often trumps the importance of tomorrow. Standalone, measurable chunks become the focus rather than the impact on people or effectiveness of the system. Performance metrics designed to monitor ongoing performance often aren't the most appropriate way to assess something new. But unless we pose an alternative, it's our own fault.
When you're developing or launching something new, you have an opportunity to take the reins and decide how you want to seek and track results. Viewed positively, metrics are no more than a discussion starter. With some foresight, we can frame the discussion we'd actually like to have. And, back to the question at hand, we can propose when to pose the question and how to proceed once we have results.
My hunch is that many questions can be answered sooner and with a fair degree of confidence through rapid prototyping, talking more directly with users, and the design of smart experiments. Imagine if you showed up to your next meeting arguing for tracking more concrete results sooner.
In an attempt to get us going, I'd like to propose some guiding questions to help us collect our own time-sensitive results:
- How urgent is our question? How important? Do we have time for patience?
- What's our time horizon for learning? How quickly could we have an answer (a day, a week, a month, never)?
- How will we know it's working (learning, change, adoption, something else)?
- What might distract us?
- How will we capture the results?
- Can we design feedback into the experience?
- How aligned are the results and timing of this experiment with the goals and timing of our organization?
- Could we learn faster or more cheaply? If so, how?
- What happens next? What's our best guess as to what we should do when we "know"?
So here's my ask to you: go back to your desk right now and give it a shot (see my attempt below), then post your experience in the comments. Based on your experience, what other questions would you add? What doesn't work for you? For your initiatives, what's the right question? When should you ask it?
Looking at this specific blog post as an example, here are answers to the above questions:
- How important? How urgent? Patience?: high / low / sure
- Our time horizon for learning: two days
- How we'll know it's working: links, comments, people let us know they tried it
- Potential distractions: whiners, flamers, whether "critics" like it
- How we'll capture: blog analytics, Google searches, stars for the post, retweets
- Designed in feedback: GOODmarks, comments, could and should add share/tweet/digg action links
- Alignment: I think the post fits the goals, but we could be working faster
- Learning faster: I could share it with the guy in 11A next to me (when he wakes up)
- What's next?: evolve the question list, publish it more broadly
Ryan Jacoby leads IDEO's business design discipline and writes about innovation strategy on his blog do_mati.
Illustration by Will Etling