A couple of weeks ago I received this question about innovation metrics via email:
“What are the one or two innovation metrics that you need to keep a corporate innovation program alive? Most corporations are made up of execution experts, quarter-to-quarter financial analytics, and incremental thinkers. So this question is not about what SHOULD we measure. What MUST we measure to pacify the organization while the long-term innovation ecosystem matures.”
– Ken
The answer is of course: It depends.
Innovation metrics are the inputs and outputs that foster innovative behavior at a company. Some are strategic, others operational. Most center on addressing new markets or new opportunities in the company's industry.
So what’s a good way to determine how innovative a company actually is? What are the methods for measuring innovation?
Innovation Metrics before Launch
Whether it’s a small experiment or a big program, it’s always a good idea to agree on the innovation metrics for success before getting started. If we don’t agree on the metrics before launching our project, the metric always defaults to ROI.
ROI (return on investment) as a metric is just the natural state of things in a big organization. But from an innovation perspective, it can get us in trouble. Not because we can’t deliver ROI eventually, but because ROI is a lagging indicator that takes a while to show up. It’s like waiting to read your own obituary to know if people like you.
If we haven’t agreed on a different metric and ROI is the default, we’re in an awkward position. We have to convince everyone that whatever metrics we decide to report are somehow better than just reporting our non-existent ROI. That’s a losing situation, because anyone feeling a little territorial about innovation can turn around and just ask “What’s the ROI?” to sink us.
And they’d be right to do so, because we did not articulate our metric for success ahead of time.
Agree Before You Need to Disagree
Agreeing beforehand on metrics such as insight velocity, cost per project, and innovation portfolio gap analysis gives us a clear, shared yardstick. Then we can rightly point out that judging the program on ROI would be unfair, and steer the conversation toward how to measure innovation culture instead.
If I asked my friend to pick out a funny movie and he chose Pixels, I only have myself to blame. I knew he was an Adam Sandler fan, and I could have easily foreseen (and prevented) this unfortunate turn of events.
I should have specified in advance that my metric for “funny” excludes anything with Adam Sandler. (Sorry, Adam.) So instead of enjoying Ali Wong in Always Be My Maybe, I had to get into a fistfight for the remote control.
Objectives before Metrics
Similarly, we have to make the objectives clear before setting metrics. Most innovation programs serve some sort of tactical purpose:
- Increase company-wide innovation capabilities
- Engage in defensive self-disruption (before startups do it for us)
- Increase idea throughput
- Reduce costs or time to market
Knowing our objective can help us set our innovation KPIs.
If the CEO has a specific ask, then our job is simple. It could be as straightforward as, “We don’t have any good ideas, go start an innovation program!”
In that case, our job is to increase the number of good ideas. That’s an easy metric to start with: What percentage of our people are contributing ideas?
We can run an ideation workshop to increase that number or use one of the many ideation platforms to engage a distributed employee base.
If the CEO says, “Wall Street expects us to innovate, go start an innovation program,” then our job is to increase the stock price. That’s a hard one to directly impact, but at least we know what we’re aiming for. Now we can make sure to focus some of our “innovation” effort on PR for the Wall Street Journal.
If the CEO just says, “Go be innovative,” then we’re pretty screwed.
Don’t believe me? Go ask 20 senior executives in your company to define innovation. If you get more than five that are exactly the same, tweet me and I’ll send you an awesome clapping-hands emoji.
“Go Innovate” is not an innovation strategy, and it’s not an objective. Push back and figure out exactly what they mean by that.
Vanity Metrics Are Your Friend
So here’s the truth: Don’t be afraid to use vanity metrics.
If the goal is to increase the number of ideas in your company, then report the vanity metric.
Do not report, “We increased unique idea contribution from our 123,456 employees by 0.4%.” No one wants to hear that.
Just say, “We generated 746 new ideas!”
Of course we should be tracking things such as:
- Number of unique views on our submission form
- Open rate on our announcement email
- Conversion rate to funding
We should know all of those more actionable metrics thoroughly and be prepared to answer for them. But no one is going to ask.
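Still, if someone does ask, the math is simple. Here is a minimal sketch of that funnel, assuming the idea platform can export raw counts; every number and field name below is invented for illustration.

```python
# Minimal sketch of the funnel behind the headline vanity number.
# Every count and field name here is invented for illustration.

funnel = {
    "announcement_emails_sent": 123_456,
    "announcement_emails_opened": 37_000,
    "submission_form_unique_views": 9_800,
    "ideas_submitted": 746,
    "ideas_funded": 20,
}

def rate(numerator, denominator):
    """Return a percentage, guarding against a zero denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

open_rate = rate(funnel["announcement_emails_opened"], funnel["announcement_emails_sent"])
view_to_idea = rate(funnel["ideas_submitted"], funnel["submission_form_unique_views"])
idea_to_funding = rate(funnel["ideas_funded"], funnel["ideas_submitted"])

print(f"Email open rate:       {open_rate:.1f}%")        # ~30.0%
print(f"View-to-idea rate:     {view_to_idea:.1f}%")     # ~7.6%
print(f"Conversion to funding: {idea_to_funding:.1f}%")  # ~2.7%
```

The point of tracking the full funnel is that each of these rates can be improved deliberately, which the headline number alone can never tell us.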
Watch Out for Year Two
The problem with vanity metrics is that they bite back.
If that’s all we measure, we won’t be able to act. They make us look good, but they don’t tell us what needs to be done. And next year the vanity metrics might not look so great.
This is a typical problem with idea-focused programs. The first year, metrics look promising. Employee engagement is up!
The second year, everyone learns the real story. Of those 746 ideas, only 20 were chosen to go into our accelerator program, and ten of those were killed soon after. Ultimately two ideas were chosen as “winners,” but neither was accepted into the main business unit, so they just ended up sitting in limbo.
Since no ideas from the first year were seen through to fruition, only 98 ideas are submitted in the second year. The vanity metrics bite back.
Leading Indicators
So what are the real metrics to choose from? Ideally, we’d like to choose metrics that are leading indicators of the change we want to make.
Most general innovation programs have ROI as their end goal, which is a lagging indicator. By the time the data comes in, it’s too late to do anything about it.
Typically, innovation programs have a very limited number of projects and can’t guarantee an ROI. With 10-20 projects in an accelerator, the odds are stacked against generating a billion-dollar business. And when the ROI finally does come in, it’s generally after the project has been handed back to the core business. So the ROI accrues to the business unit, and the innovation department still looks, on paper, like a cost center.
At best, the innovation program can get things started. We need to look for things to measure that we are confident will ultimately contribute to ROI. So there are some assumptions we can make:
- Teams with more insights about the market have a better chance of success.
- To get more insights about the market, teams need to run experiments and research.
- Therefore, the more research and experiments, the better the odds of success.
That means we can measure the number of experiments teams run or the number of insights they generate. This would (we hope) be a leading indicator of success.
Actionable Metrics
We can convert this number to something more actionable fairly easily. We just measure whether the teams are running experiments (experiment velocity) or generating insights (insight velocity) each week.
These metrics can be expressed as simple questions:
- Did the team run at least one experiment or research method this week? Yes or no.
- Did the experiment or research generate any insights this week? Yes or no.
This week-over-week data can now be aggregated into a four-week rolling metric. That gives us an early warning sign if the team is getting stuck.
We can also aggregate the data from teams in various cohorts and see that teams from North America generate insights 56% of the time. Meanwhile, teams from India generate insights 90% of the time. That’s a difference worth investigating.
It doesn’t necessarily mean that teams from North America are inherently inferior. But it could indicate that there is something slowing teams down, perhaps regulations, budget restrictions, or the quality of their coaching.
Regardless of the reason, something is going on, and that makes insight velocity a good dashboard metric and quite actionable for a coaching program.
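As a rough sketch (not from any particular program), here is how those two weekly yes/no answers might be rolled up into a four-week velocity and a per-cohort comparison. The teams, cohorts, and answers below are invented; real data would come from whatever check-in tool the coaching program already uses.

```python
# Sketch of an insight-velocity dashboard built from weekly yes/no check-ins.
# Teams, cohorts, and answers are invented for illustration.
from collections import defaultdict

# One record per team per week: (team, cohort, week_number, generated_insight)
checkins = [
    ("Team A", "North America", 1, True),  ("Team A", "North America", 2, False),
    ("Team A", "North America", 3, False), ("Team A", "North America", 4, True),
    ("Team B", "India",         1, True),  ("Team B", "India",         2, True),
    ("Team B", "India",         3, True),  ("Team B", "India",         4, True),
]

def rolling_insight_velocity(records, team, window=4):
    """Share of the team's last `window` weekly check-ins that produced an insight."""
    weeks = sorted((r for r in records if r[0] == team), key=lambda r: r[2])[-window:]
    return sum(r[3] for r in weeks) / len(weeks) if weeks else 0.0

def cohort_insight_rate(records):
    """Fraction of all weekly check-ins per cohort that produced at least one insight."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _team, cohort, _week, generated in records:
        totals[cohort] += 1
        hits[cohort] += generated
    return {cohort: hits[cohort] / totals[cohort] for cohort in totals}

print(rolling_insight_velocity(checkins, "Team A"))  # 0.5 -- an early warning sign
print(cohort_insight_rate(checkins))                 # {'North America': 0.5, 'India': 1.0}
```

A team whose rolling number drifts toward zero is the early warning sign mentioned above, and a persistent cohort-level gap is the cue to go investigate coaching, budgets, or regulatory friction.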
Failure Is Success
Metrics like insight velocity help us by separating the success of each idea (ROI) from the success of the team, because a team can be successful by showing that a project will be unsuccessful.
Showing that an innovation project will be unsuccessful generates an immediate ROI simply by shutting it down. A typical corporate project takes at least a year to go from idea to market and costs at least $1.2 million in resources. If we can invalidate the idea by spending $200k on a few months of quick experiments, then we’ve saved $1 million. We’ve saved the money by not spending it on an idea we’ve proven isn’t going anywhere.
That’s actually ROI. We’re returning $1 million to the business that would have otherwise been thrown into the fire.
Saving $1 million by shutting down a bad project isn’t as sexy as earning $1 million in new revenue, but it’s just as valid. CFOs recognize the value of money saved just like we recognize the value of maximizing all those 20%-off Bed Bath & Beyond coupons.
Innovation folks might use a similar metric, such as cost per insight or cost per iteration. Our cost per insight before the program was $1.2 million, because we spent the entire project budget before learning anything about the market. Now it’s only $200k. That’s a metric that sounds great to everyone.
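For what it’s worth, the arithmetic behind that claim is trivial to put in front of a CFO. The dollar figures below are this post’s illustrative numbers, and the insight counts are an assumption (one decisive go/no-go insight per project).

```python
# Back-of-the-envelope cost-per-insight comparison.
# Dollar figures are the post's illustrative numbers; insight counts are assumed.

def cost_per_insight(total_spend: float, insight_count: int) -> float:
    """Total money spent divided by the number of market insights it produced."""
    return total_spend / insight_count

# Before: the entire $1.2M project budget was spent before we learned anything useful,
# so the one insight ("this idea doesn't work") cost the whole budget.
before = cost_per_insight(1_200_000, 1)   # $1,200,000 per insight

# After: $200k of quick experiments delivered the same verdict in a few months.
after = cost_per_insight(200_000, 1)      # $200,000 per insight

savings = 1_200_000 - 200_000             # $1,000,000 returned to the business
print(f"${before:,.0f} -> ${after:,.0f} per insight, saving ${savings:,.0f}")
```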
The CEO Metric
CEOs don’t like either of those metrics quite as much as ROI. However, there’s a related metric that they do often favor: Time to market.
Everyone likes being faster. If we can tell the CEO that we’ll get market information in two months rather than a year, that’s a sure winner.
The danger here is that we’re not actually getting to market faster. We’re generating insights faster to help us make better decisions.
If we determine that the idea is worth continuing, then it still might take a year (or longer) to get to full-scale launch. So it’s important to shift the conversation from time to market to time to insight. (Innovation folks might call this “average iteration length” or another term.)
Ecosystem Metrics
There are plenty of other metrics to measure: cost per iteration, the percentage of complete investment decisions, the salary deviation of failed entrepreneurs, and more. These are ecosystem metrics that gauge the overall innovation capabilities of a company or a country.
These ecosystem metrics (even if measured qualitatively) are critical to our ability to generate returns in the long run. However, most C-level folks don’t really care about the ecosystem or fancy ways of measuring it. Fixing the ecosystem has only indirect benefits in the long run. It won’t help the stock price and it won’t help hit quarterly goals.
The C-suite is our customer, and we have to focus on what the customer actually wants. I’m not saying we just build what the customer wants — the customer might be wrong and want a “faster horse.” But we have to meet the customer where they are.
(Note: If the customer asks for a faster horse, do not build a faster horse. Ask, “Why do you want a faster horse? What would you use it for?” If the customer wants a faster horse to move cargo across town, a car might be a great invention. When the customer wants a faster horse to win a horse race, a car is a terrible invention.)
If we’re not sure what metrics our customers will accept, we should do the obvious and go talk to our customers. We may need to sell them a metric that they didn’t know about, but can get behind once they understand the value proposition. (And of course we’d better make sure we’re explaining the value proposition well!)
So let’s go talk to the customer and reach an agreement on what metrics to measure before we get started.
Lessons Learned
Well, that was a long answer to what seemed like a simple question.
- Agree on metrics before getting started.
- Agree on objectives before setting the metrics.
- Separate innovation capability metrics from project metrics.
- Be careful — but not afraid — of vanity metrics.
Can’t think of a metric? Try some common ones:
- Experiment velocity
- Insight velocity
- Time to insight
- Cost per insight
Got a question of your own? Send it in here and we’ll write a post.