Most innovation metrics are vanity metrics. "Total ideas submitted" sounds impressive until leadership asks how many actually got implemented. "Participation rate" sounds great until someone points out you invited 2,000 people and got 40 submissions. "Number of campaigns run" measures nothing except activity.
This guide shows you how to measure what actually matters, be honest about what you don't know yet, and build a reporting routine that shows real progress without cooking the numbers.
The metrics that actually count
Implementation rate
This is the most important metric. How many of the submitted ideas actually got implemented? Not discussed, not put on a list. Implemented. Defined as: a concrete next step was assigned, a budget was approved, a test was run, or a change was made.
A healthy implementation rate for a well-run idea program sits between 5 and 15 percent. If yours is higher, either your challenge was very specific and tightly scoped, or you're defining "implementation" generously. If it's lower, either the ideas don't land or the process breaks somewhere between evaluation and rollout.
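As a sketch, the calculation is nothing more than implemented ideas divided by total submissions. The field names below are illustrative, not from any particular tool:

```python
# Minimal sketch: implementation rate = implemented ideas / total submissions.
# "implemented" follows the definition above: next step assigned, budget
# approved, test run, or change made.
ideas = [
    {"id": 1, "status": "implemented"},
    {"id": 2, "status": "parked"},
    {"id": 3, "status": "rejected"},
    {"id": 4, "status": "in_evaluation"},
]
implemented = sum(1 for idea in ideas if idea["status"] == "implemented")
print(f"Implementation rate: {implemented / len(ideas):.0%}")  # 25% here; 5-15% is healthy at scale
```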
Time from submission to decision
How long does it take before someone who submitted an idea finds out what happened to it? Not until implementation. Until feedback. This metric predicts future participation better than almost any other. If people wait three months for feedback, participation in your next campaign will drop. Measure per campaign and try to stay under three weeks.
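A minimal way to track this, assuming you log a submission date and a first-feedback date per idea (the records below are placeholder data):

```python
from datetime import date
from statistics import median

# Hypothetical records: when an idea was submitted and when the submitter
# first heard a decision. Feedback, not implementation.
ideas = [
    {"submitted": date(2024, 3, 1), "feedback": date(2024, 3, 12)},
    {"submitted": date(2024, 3, 3), "feedback": date(2024, 4, 2)},
]
days = [(i["feedback"] - i["submitted"]).days for i in ideas]
print(f"Median days to feedback: {median(days)}")  # target: under 21 days
```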
Participation rate
Submissions divided by invitations. But here's the catch: 30 percent participation on a targeted 50-person campaign and 8 percent on an open 500-person campaign can both be healthy. Compare participation rates across campaigns with similar scope, not across completely different formats.
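To make that comparison concrete, here are the two campaigns from the example computed the same way (numbers are illustrative):

```python
# Two campaigns with very different scope can both be healthy.
campaigns = {
    "targeted (50 invited)": {"invited": 50, "submissions": 15},
    "open (500 invited)":    {"invited": 500, "submissions": 40},
}
for name, c in campaigns.items():
    print(f"{name}: {c['submissions'] / c['invited']:.0%}")
# targeted (50 invited): 30%
# open (500 invited): 8%
```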
Repeat participation
What share of people who submitted to one campaign also submitted to the next? This is the trust metric. It shows whether people felt the process was worth their time. If repeat participation is low, the feedback loop is probably broken. Aim for 25 to 35 percent.
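If you track submitters by name or ID, this is a set intersection. A sketch with placeholder names:

```python
# Repeat participation: share of one campaign's submitters who also
# submitted to the next. Names are placeholder data.
campaign_1 = {"ana", "ben", "chris", "dana", "eli", "fay"}
campaign_2 = {"ana", "chris", "gus", "hana"}
repeat = len(campaign_1 & campaign_2) / len(campaign_1)
print(f"Repeat participation: {repeat:.0%}")  # 33% here; aim for 25-35%
```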
Value generated (rough estimate, if possible)
This is tough and often approximate, but worth trying. For implemented ideas with clear outcomes (cost savings, time saved, errors reduced), document before and after. You don't need a formal ROI analysis. A rough estimate beats nothing and beats activity metrics by a mile.
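A worked example of what "rough estimate" can mean, with entirely made-up numbers: time saved per week, annualized, priced at an assumed loaded labor cost:

```python
# Rough value estimate for one implemented idea: before/after delta,
# annualized. All figures are invented for illustration.
hours_per_week_before = 6.0
hours_per_week_after = 2.5
hourly_cost = 55  # assumed loaded labor cost
annual_saving = (hours_per_week_before - hours_per_week_after) * 52 * hourly_cost
print(f"Rough annual saving: ~{annual_saving:,.0f}")  # ~10,010
```

Imprecise, yes. But it turns "we saved some time" into a number leadership can weigh against the program's cost.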
Metrics that feel good but don't tell you anything
Total ideas submitted without implementation rate. This rewards campaigns that create noise instead of signal.
Number of campaigns run. Campaign frequency is a tactic, not a goal. Six campaigns with no results are worse than two campaigns that each deliver a real improvement.
Employee satisfaction scores from a survey right after the campaign ends. People feel good immediately after participating. That feeling fades if nothing happens. The meaningful satisfaction score is the one taken six weeks after you announce results, not right after the campaign closes.
What to measure, and when
During each campaign: submission count, participation rate, and qualitative notes about emerging themes.
At campaign end: time from end to result announcement, number of ideas moving forward, number rejected, number parked.
30 days after campaign: implementation progress on forward-moving ideas, first value signals.
90 days after campaign: implementation rate (you should now know which ideas really got implemented versus which stalled), rough value estimates where possible, repeat participation if the next campaign has run.
Annually: trend lines across all campaigns, total value generated (rough), participation trends over time, and an honest assessment of where the program isn't working.
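One way to keep this cadence honest is a single per-campaign record with a field for each checkpoint. A sketch, with assumed field names:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CampaignSnapshot:
    # One record per campaign; fields mirror the checkpoints above.
    # All names are illustrative, not from any specific tool.
    campaign: str
    ended: date
    submissions: int = 0                    # during campaign
    participation_rate: float = 0.0
    days_to_announcement: Optional[int] = None  # campaign end
    ideas_forward: int = 0
    ideas_rejected: int = 0
    ideas_parked: int = 0
    implemented_90d: int = 0                # 90-day checkpoint
    value_estimate: Optional[float] = None  # rough, only where measurable
```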
Be honest about what you don't know
Two situations where innovation managers get into trouble with metrics:
First: claiming impact you can't prove. If an idea got implemented but you don't know what it saved, don't make up a number. Say: "This idea was implemented and is live. We'll capture the impact next quarter." That's honest and sets up a credible future update.
Second: burying the implementation rate in a pile of activity metrics. If you ran four campaigns, generated 300 ideas, and implemented two of them, say that. Then explain what you're changing to move the needle on conversion. A diagnostic framework helps with this. Leaders respect honest assessments and clear improvement plans far more than impressive-sounding numbers that don't hold up to scrutiny.
From metrics to improvements
Measuring is only half. The other half: act on the data. If your turnaround time is too long, change your evaluation process. If your participation rate is dropping, check your feedback loop. If your implementation rate is low, either the ideas are too vague or your rollout process is blocked.
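Those three rules of thumb can be turned into a simple health check. The thresholds below are this guide's rough targets, not industry standards:

```python
# Sketch: map the metrics above to the fixes above. Thresholds are the
# rules of thumb from this guide, not benchmarks.
def diagnose(median_days_to_feedback, participation_trend, implementation_rate):
    issues = []
    if median_days_to_feedback > 21:
        issues.append("Feedback too slow: streamline the evaluation process.")
    if participation_trend < 0:
        issues.append("Participation dropping: check the feedback loop.")
    if implementation_rate < 0.05:
        issues.append("Low conversion: sharpen challenge scope or unblock rollout.")
    return issues or ["No red flags against this guide's rules of thumb."]

print(diagnose(median_days_to_feedback=35,
               participation_trend=-0.02,
               implementation_rate=0.03))
```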
A good measurement process doesn't create metrics for reports. It creates feedback loops that make the program better.
Frequently asked questions
How often should we measure? At minimum after each campaign. Larger programs with continuous idea intake can take monthly snapshots. Trend reports should happen at least quarterly.
Should we share metrics with employees? Yes, but honestly. "Our implementation rate is 7%, our target is 10%, here's what we're changing" builds far more trust than a selective report about submitted ideas.
What's a good benchmark for our metrics? Industry benchmarks are hard to compare because program structure varies so much. Better: your own historical trends. Improvement over time means your changes are working.