Most innovation metrics are vanity metrics. Total ideas submitted sounds impressive until leadership asks how many of them actually got implemented. Participation rate sounds great until someone points out you invited 2,000 people and got 40 submissions. Number of campaigns run is not a measure of anything except activity.
This guide is about measuring what actually matters, being honest about what you do not know yet, and building a reporting habit that shows real progress without padding the numbers.
The Metrics That Actually Matter
Implementation rate
This is the one. How many of the ideas submitted were actually acted on? Not explored, not discussed, not added to a backlog. Acted on. Defined as: a concrete next step was assigned, a budget was approved, a test was run, or a change was made.
A healthy implementation rate for a well-run idea program is somewhere between 5 and 15 percent. If yours is higher, either your challenge was very specific and well-scoped, or you are defining implementation loosely. If it is lower, the ideas are not landing or the process is breaking down somewhere between evaluation and action.
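If your idea records live in a spreadsheet export or a simple list, the calculation itself is one ratio. Here is a minimal sketch in Python, assuming a hypothetical acted_on flag that only gets set when one of the concrete steps above actually happened:

```python
# Minimal sketch of an implementation-rate check. Assumes each idea record
# carries a hypothetical "acted_on" flag set only when a concrete step happened
# (owner assigned, budget approved, test run, or change made).

def implementation_rate(ideas: list[dict]) -> float:
    """Share of submitted ideas that were genuinely acted on."""
    if not ideas:
        return 0.0
    acted_on = sum(1 for idea in ideas if idea.get("acted_on"))
    return acted_on / len(ideas)

campaign_ideas = [
    {"title": "Automate invoice checks", "acted_on": True},
    {"title": "New breakroom layout", "acted_on": False},
    {"title": "Supplier scorecard", "acted_on": False},
]
rate = implementation_rate(campaign_ideas)
print(f"Implementation rate: {rate:.0%}")  # 33% on this tiny sample; 5-15% is typical at scale
```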
Time from submission to decision
How long does it take for a submitter to find out what happened to their idea? Not how long until it gets implemented. How long until they get a response. This is the metric that predicts future participation better than almost any other. If people wait three months to hear anything, participation in your next campaign will be lower. Track it by campaign and try to bring it down over time.
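One low-effort way to track it: log the date each idea came in and the date the submitter heard the outcome, then report the median wait per campaign. A rough sketch, with hypothetical field names and dates:

```python
# Sketch of a time-to-decision summary. "decision_sent" is when the submitter
# heard back, not when the idea shipped. Field names and dates are made up.
from datetime import date
from statistics import median

ideas = [
    {"submitted": date(2024, 3, 1), "decision_sent": date(2024, 3, 18)},
    {"submitted": date(2024, 3, 4), "decision_sent": date(2024, 4, 2)},
    {"submitted": date(2024, 3, 6), "decision_sent": None},  # still waiting
]

waits = [(i["decision_sent"] - i["submitted"]).days
         for i in ideas if i["decision_sent"]]
print(f"Median days to decision: {median(waits)}")
print(f"Ideas still without a response: {sum(1 for i in ideas if not i['decision_sent'])}")
```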
Participation rate
The share of people invited who actually submitted at least one idea. But be careful here. A participation rate of 30 percent on a targeted 50-person campaign and a participation rate of 8 percent on an open 500-person campaign can both be healthy. Compare participation rates across campaigns with similar scope, not across campaigns with wildly different formats.
Repeat participation
What percentage of people who submitted in one campaign submitted again in the next? This is the trust metric. It tells you whether people felt the process was worth their time. If repeat participation is low, the feedback loop is probably broken. See the guide on feedback that builds trust.
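The calculation only needs the two submitter lists: who submitted last time, and who submitted this time. A small sketch (the submitter IDs are made up):

```python
# Sketch of repeat participation: the share of one campaign's submitters
# who came back for the next one. Submitter IDs here are hypothetical.

def repeat_participation(previous: set[str], current: set[str]) -> float:
    """Fraction of previous-campaign submitters who submitted again."""
    if not previous:
        return 0.0
    return len(previous & current) / len(previous)

spring_submitters = {"ana", "bram", "chen", "dee", "eli"}
autumn_submitters = {"ana", "chen", "fay", "gus"}
rate = repeat_participation(spring_submitters, autumn_submitters)
print(f"Repeat participation: {rate:.0%}")  # 40%
```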
Value generated (rough estimate, when possible)
This one is hard and often approximate, but worth attempting. For implemented ideas with clear outcomes (cost savings, time saved, defect reduction), document the before and after. You do not need a formal ROI analysis. A rough estimate is better than nothing and much better than only reporting activity metrics.
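For a time-savings idea, the before-and-after arithmetic can be as simple as the sketch below. Every number in it is an assumption you would replace with your own:

```python
# Back-of-the-envelope value estimate for an implemented idea, using
# hypothetical before/after numbers. The point is a documented rough
# figure, not a formal ROI model.
hours_per_week_before = 6.0   # time the old process took, assumed
hours_per_week_after = 2.0    # time after the change, assumed
loaded_hourly_cost = 55.0     # fully loaded cost per hour, assumed
working_weeks_per_year = 48   # assumed

hours_saved_per_year = (hours_per_week_before - hours_per_week_after) * working_weeks_per_year
rough_annual_value = hours_saved_per_year * loaded_hourly_cost
print(f"~{hours_saved_per_year:.0f} hours/year, roughly ${rough_annual_value:,.0f}")
```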
The Metrics That Feel Good But Are Not Useful
Total ideas submitted, without implementation rate. This rewards campaigns that generate noise rather than signal.
Number of campaigns run. Frequency of campaigns is a means, not an end. Running six campaigns that produce nothing is worse than running two that each result in a concrete improvement.
Employee satisfaction scores from a survey you ran right after a campaign closed. People feel good immediately after participating. That feeling fades if nothing happens. The meaningful satisfaction score is the one you take six weeks after the outcome communication, not right after the campaign closes.
What to Track, and When
During each campaign: submission count, participation rate, and qualitative notes on the themes emerging.
At campaign close: time from close to outcome communication, number of ideas advanced, number declined, number parked.
30 days post-campaign: implementation progress on advanced ideas, any early indicators of value.
90 days post-campaign: implementation rate (by now you should know which ideas were genuinely acted on and which stalled), early value estimates where possible, and the repeat participation rate for the next campaign, if it has already run.
Annually: trend lines across all campaigns, total value generated (rough), participation trend over time, and an honest assessment of where the program is not working.
How to Be Honest About What You Do Not Know
Two situations where innovation managers get into trouble with metrics:
First: claiming impact you cannot substantiate. If an idea was implemented but you do not know what it saved, do not invent a number. Say: this was implemented and is in use. We will track impact over the next quarter. That is honest and it sets up a credible future update.
Second: burying the implementation rate in a pile of activity metrics. If you ran four campaigns, generated 300 ideas, and implemented two of them, say that. Then explain what you are changing to improve the conversion rate. Leadership respects honest assessment and a clear improvement plan far more than they respect impressive-sounding numbers that do not add up under scrutiny.