The Idea Scoring Scorecard: 3 Models for Different Situations

Most teams that run idea campaigns never agree on scoring criteria before they start reviewing. Each reviewer applies their own gut feeling instead. The team ends up with inconsistent results, disagreements that feel personal, and evaluation sessions that run way over time.

The three scoring models in this guide solve that. Each one is designed for a different situation. Pick the one that fits your context, align on it as a team before you start, and use it consistently across all submissions in the same evaluation cycle.

When to Use Each Model

Model A is for teams that need to move quickly and want a simple, defensible way to separate stronger ideas from weaker ones. Good for a first evaluation pass or when your review team is pressed for time.

Model B is for situations where the stakes are higher, multiple stakeholders need to weigh in, and the criteria for a good idea need to be explicitly agreed on and documented. Good for strategic innovation programs or when leadership wants to see how decisions were made.

Model C is for teams who think visually, prefer discussion over spreadsheets, and want to use the evaluation session to also build shared understanding of where each idea sits relative to the others.

Model A: The 3-Question Shortcut

Best for: fast evaluation passes, small review teams, time-pressured situations.

For each idea, score three dimensions on a scale of 1 to 5:

Impact potential (1 to 5)
1 = minimal, affects very few people or processes in a small way
5 = significant, could meaningfully improve outcomes for a large group or a core process

Feasibility (1 to 5)
1 = extremely difficult, requires major resources, approvals, or infrastructure changes
5 = very doable, could be tested quickly with available resources

Strategic fit (1 to 5)
1 = not connected to current priorities
5 = directly aligned with a stated organizational or team goal

Calculate the average of the three scores. Ideas scoring 3.5 or above move to the next stage. Ideas below 2.5 are declined. Ideas between 2.5 and 3.5 go to the Interesting pile for a second look.
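If you are tallying scores in a script or spreadsheet export rather than by hand, the triage step looks something like this minimal Python sketch. The field names and example ideas are illustrative, not part of the model, and routing a score of exactly 2.5 to the Interesting pile is one reasonable reading of the thresholds.

```python
# Minimal sketch of the Model A triage step.
# Field names and example ideas are illustrative only.

def triage(idea: dict) -> str:
    """Average the three 1-5 scores and map the result to a triage bucket."""
    average = (idea["impact"] + idea["feasibility"] + idea["fit"]) / 3
    if average >= 3.5:
        return "advance"        # moves to the next stage
    if average < 2.5:
        return "decline"
    return "interesting"        # 2.5 to 3.5: park for a second look

ideas = [
    {"title": "Self-serve onboarding checklist", "impact": 4, "feasibility": 5, "fit": 3},
    {"title": "Rebuild the billing engine", "impact": 5, "feasibility": 1, "fit": 2},
]

for idea in ideas:
    print(idea["title"], "->", triage(idea))
```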

This model is fast and consistent. It does not account for nuance, which is a feature when you are doing initial evaluation and a limitation when you are making final decisions. Use it to sort, not to decide.

Model B: The Weighted Criteria Matrix

Best for: strategic innovation programs, higher-stakes evaluation, situations where leadership wants to see documented decision-making.

Step 1: Before you look at a single idea, your review team agrees on 4 to 6 evaluation criteria. These should reflect what actually matters for this specific campaign, not generic innovation criteria. Examples: cost reduction potential, implementation speed, cross-departmental applicability, risk level (score inverted, higher score for lower risk), customer impact, alignment with annual priorities.

Step 2: Your team assigns a weight to each criterion, expressed as percentages that together add up to 100. This is the step most people skip, and it is the most important one. Deciding that implementation speed is worth 30% of the total score and cost reduction is worth 20% forces your team to be explicit about what actually drives decisions. That conversation is more valuable than the scoring itself.

Step 3: Each reviewer scores every idea on a 1 to 5 scale for each criterion. Average the reviewers' scores for each criterion, then multiply each average by the criterion weight and sum the results for a weighted total out of 5.

Step 4: Rank ideas by weighted total. Ideas in the top third move to the next stage. The rest get the standard triage treatment.
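Here is a sketch of Steps 3 and 4 combined, assuming reviewer scores have already been averaged per criterion and weights are written as decimals that sum to 1.0 (i.e. 100%). The criteria, weights, and scores are made up for illustration, not a recommended set.

```python
# Sketch of Steps 3 and 4: weighted totals, then a top-third cutoff.
# Criteria, weights, and scores below are illustrative only.

WEIGHTS = {                        # decimals that sum to 1.0 (i.e. 100%)
    "implementation_speed": 0.30,
    "cost_reduction": 0.20,
    "customer_impact": 0.30,
    "risk_inverted": 0.20,         # higher score = lower risk
}

def weighted_total(scores: dict) -> float:
    """Multiply each 1-5 score by its criterion weight and sum: a total out of 5."""
    return sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())

ideas = {
    "Idea A": {"implementation_speed": 4, "cost_reduction": 3, "customer_impact": 5, "risk_inverted": 4},
    "Idea B": {"implementation_speed": 2, "cost_reduction": 5, "customer_impact": 3, "risk_inverted": 3},
    "Idea C": {"implementation_speed": 3, "cost_reduction": 2, "customer_impact": 2, "risk_inverted": 5},
}

ranked = sorted(ideas, key=lambda name: weighted_total(ideas[name]), reverse=True)
cutoff = max(1, len(ranked) // 3)  # top third moves to the next stage
for name in ranked:
    print(f"{name}: {weighted_total(ideas[name]):.2f}")
print("Advancing:", ranked[:cutoff])
```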

This model takes longer to set up but produces more defensible, consistent results. It is particularly useful when you need to explain your decisions to people who were not in the room.

Model C: The Effort vs. Impact Grid

Best for: teams who prefer visual thinking, want a discussion-based evaluation session, or need to quickly communicate prioritization decisions to a wider audience.

Draw a simple 2x2 grid. The horizontal axis runs from Low Effort on the left to High Effort on the right. The vertical axis runs from Low Impact at the bottom to High Impact at the top. Place each idea as a dot somewhere on the grid based on the team's collective assessment.

What each quadrant actually means, and what to do with ideas that land there:

High Impact, Low Effort (top left): Do these first. These are your quick wins. They have disproportionate value relative to what they cost to implement. Most programs should be able to act on at least one of these within 30 days of a campaign closing.

High Impact, High Effort (top right): Plan these carefully. These are your strategic investments. They are worth pursuing but require proper resourcing, a business case, and a realistic timeline. Do not let them stall in the pipeline just because they are complex. Assign an owner and a next step.

Low Impact, Low Effort (bottom left): Do these opportunistically. These will not move the needle much, but they are easy. If someone is motivated to implement one of these, let them. Small wins build momentum. Just do not prioritize them over the high-impact ideas.

Low Impact, High Effort (bottom right): Decline these with honesty. These cost more than they are worth. Be direct with submitters: "The idea addresses a real issue, but the investment required does not match the return we expect." That is a legitimate reason to decline, and submitters will respect it.
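The grid itself is best built live on a whiteboard, but if you want to record the outcome afterwards, a small helper can map each idea's agreed placement to its quadrant and the action above. This is a sketch only: the 0-to-10 placement scale and the midpoint of 5 are assumptions for illustration, not part of the model.

```python
# Sketch: map an idea's agreed effort/impact placement to a quadrant and action.
# The 0-10 placement scale and the midpoint of 5 are assumptions for illustration.

def quadrant(effort: float, impact: float, midpoint: float = 5.0) -> str:
    high_impact = impact >= midpoint
    high_effort = effort >= midpoint
    if high_impact and not high_effort:
        return "Quick win: do first"
    if high_impact and high_effort:
        return "Strategic investment: plan, resource, assign an owner"
    if not high_impact and not high_effort:
        return "Opportunistic: fine if someone is motivated to take it"
    return "Decline with honesty: the cost outweighs the expected return"

# Hypothetical placements agreed by the team.
print(quadrant(effort=2, impact=8))   # quick win
print(quadrant(effort=8, impact=8))   # strategic investment
print(quadrant(effort=9, impact=2))   # decline with honesty
```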

One important note: the grid is a starting point for conversation, not a final verdict. Two people placing an idea in different quadrants is useful data. Discuss why. The disagreement often reveals assumptions that need to be made explicit before any decision is made.

A Note on Consistency

Whichever model you use, apply it consistently to every idea in the same evaluation cycle. Switching models mid-review, or applying stricter criteria to ideas from certain departments, undermines the credibility of the whole process. If your criteria change, acknowledge it and start the evaluation again.