Every product team struggles with the same question: what should we build next? RICE and ICE scoring are two popular frameworks that bring structure to this decision, but they work differently and suit different contexts. Understanding when to use each prevents analysis paralysis and builds confidence in your prioritization choices.
What is ICE Scoring?
ICE is a lightweight prioritization framework created by Sean Ellis. It scores opportunities based on three factors:
I - Impact: How much will this move the needle?
C - Confidence: How sure are we about the impact?
E - Ease: How simple is this to implement?
Formula: ICE Score = Impact × Confidence × Ease
Each factor is typically scored 1-10, producing scores from 1 to 1,000.
Example:
| Feature | Impact | Confidence | Ease | ICE Score |
|---|---|---|---|---|
| Automated onboarding emails | 8 | 9 | 7 | 504 |
| AI-powered recommendations | 9 | 4 | 2 | 72 |
| Export to PDF | 3 | 10 | 9 | 270 |
The automated onboarding emails score highest despite AI recommendations having higher potential impact, because confidence and ease balance the equation.
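The calculation is easy to automate. Here is a minimal sketch in Python that reproduces the table above; the `ice_score` helper and its 1-10 range check are illustrative, not part of any standard library:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE score: each factor on a 1-10 scale, so the product ranges 1-1000."""
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("ICE factors must be between 1 and 10")
    return impact * confidence * ease

# Scores from the example table above
features = {
    "Automated onboarding emails": (8, 9, 7),
    "AI-powered recommendations": (9, 4, 2),
    "Export to PDF": (3, 10, 9),
}

# Rank features by ICE score, highest first
ranked = sorted(features.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{name}: {ice_score(*factors)}")
```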
What is RICE Scoring?
RICE is a more rigorous framework developed by Intercom. It adds "Reach" as a fourth factor:
R - Reach: How many users will this impact in a given time period?
I - Impact: How much will this impact each user?
C - Confidence: How confident are we in these estimates?
E - Effort: How much work will this require?
Formula: RICE Score = (Reach × Impact × Confidence) / Effort
Key differences from ICE:
- Reach is quantitative (number of users, not 1-10 scale)
- Effort replaces Ease (an inverse relationship: higher effort lowers the score)
- Effort is measured in person-months, not on a 1-10 scale
- Impact uses a specific scale: 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal
Example:
| Feature | Reach (per quarter) | Impact | Confidence | Effort (person-months) | RICE Score |
|---|---|---|---|---|---|
| Automated onboarding emails | 500 | 2 | 90% | 1 | 900 |
| AI-powered recommendations | 2000 | 3 | 40% | 6 | 400 |
| Export to PDF | 100 | 0.5 | 100% | 0.5 | 100 |
Notice how different numbers change the ranking. RICE rewards high reach but punishes high effort more severely.
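The RICE formula translates just as directly into code. This sketch (the `rice_score` helper is hypothetical) reproduces the table above, with confidence expressed as a fraction:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score: (reach per period x impact x confidence 0-1) / person-months."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return reach * impact * confidence / effort

# Figures from the example table above
print(rice_score(500, 2, 0.90, 1))      # automated onboarding emails
print(rice_score(2000, 3, 0.40, 6))     # AI-powered recommendations
print(rice_score(100, 0.5, 1.00, 0.5))  # export to PDF
```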
ICE vs RICE: When to Use Each
Use ICE When:
Early-stage products or new features
You don't have enough data to estimate reach accurately. ICE's simpler scoring fits exploratory phases.
Fast-moving teams
Scoring takes 5 minutes per item. Good for weekly prioritization meetings.
Smaller teams
When your whole team is fewer than 10 people, RICE's estimation overhead isn't worth it. ICE provides enough structure without bureaucracy.
High uncertainty
When you're testing assumptions more than executing known roadmaps, ICE's confidence factor captures that uncertainty well.
Discovery-heavy work
Continuous discovery habits generate many opportunities rapidly. ICE helps triage quickly.
Use RICE When:
Mature products with analytics
You can estimate reach based on usage data. "500 users per quarter will use this" is knowable.
Larger teams or organizations
RICE creates consistency across teams and enables cross-functional comparison.
Resource-constrained environments
Explicitly tracking effort in person-months helps capacity planning.
Stakeholder communication
RICE's quantitative approach is easier to defend to executives and stakeholders.
Roadmap planning
For quarterly or annual planning, RICE's thoroughness pays off.
How to Implement ICE Scoring
Step 1: Define Your Scales
Impact (1-10):
- 10 = Game-changing for business metrics
- 7-9 = Significant measurable impact
- 4-6 = Moderate improvement
- 1-3 = Small, incremental gains
Confidence (1-10):
- 10 = Validated with strong evidence (assumption testing, prototype tests, data)
- 7-9 = Good evidence, some uncertainty
- 4-6 = Hypothesis with limited validation
- 1-3 = Pure speculation
Ease (1-10):
- 10 = Hours of work, minimal complexity
- 7-9 = Days to a week
- 4-6 = Weeks of work
- 1-3 = Months, high technical complexity
Calibrate these with your team. "What's a 10?" should have the same meaning to everyone.
Step 2: Score as a Team
Don't let the PM score alone. Bring:
- PM - Estimates impact and confidence
- Engineering - Estimates ease
- Design - Provides UX complexity perspective
- Data/Analytics - Validates impact assumptions
Use silent voting first (everyone writes their scores), then discuss outliers. Averaging after discussion produces better scores than debate-first approaches.
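The silent-vote-then-average step can be sketched as follows. This is a hypothetical helper, and the outlier rule (flag any item where votes span more than 3 points) is an assumption you should calibrate with your team:

```python
from statistics import mean

def aggregate_votes(votes: list[int], spread_threshold: int = 3) -> tuple[float, bool]:
    """Average silent-vote scores; flag the item for discussion if votes diverge widely."""
    needs_discussion = max(votes) - min(votes) > spread_threshold
    return round(mean(votes), 1), needs_discussion

# Four team members score Impact for one feature; the 3 is an outlier worth discussing
score, discuss = aggregate_votes([8, 7, 9, 3])
```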
Step 3: Set a Threshold
Don't build everything with a positive score. Set a minimum:
- "We only build items scoring 300+"
- "Top 5 scores make the next sprint"
- "Anything below 200 goes to backlog review in 3 months"
This prevents churn and creates focus.
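A threshold rule like the first example above is a one-line filter in practice. A sketch, using the ICE scores from the earlier table and an assumed 300-point cutoff:

```python
# ICE scores from the earlier example table
opportunities = {
    "Automated onboarding emails": 504,
    "Export to PDF": 270,
    "AI-powered recommendations": 72,
}

THRESHOLD = 300  # "we only build items scoring 300+"

build_now = [name for name, score in opportunities.items() if score >= THRESHOLD]
revisit_later = [name for name, score in opportunities.items() if score < THRESHOLD]
```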
Step 4: Revisit Scores Regularly
Scores change as you learn:
- New data increases confidence
- Technical discoveries affect ease
- Market changes alter impact
Review your opportunity list monthly, update scores, re-prioritize.
How to Implement RICE Scoring
Step 1: Define Reach Quantitatively
Reach = Number of users/customers affected per time period (usually per quarter)
Examples:
- "1,500 users will interact with this feature per quarter"
- "30 new signups per month = 90 per quarter"
- "100% of Enterprise customers = 200 customers per quarter"
Use analytics when possible. Estimate conservatively when you can't measure precisely.
For new products:
Use target customer counts: "We expect 500 users in Q1, so reach = 500"
Step 2: Use the Standard Impact Scale
Don't invent your own scale. Use Intercom's proven scale:
- 3.0 = Massive impact (fundamental improvement, likely to WOW customers)
- 2.0 = High impact (significant improvement, customers will clearly notice)
- 1.0 = Medium impact (noticeable improvement, customers will appreciate)
- 0.5 = Low impact (small improvement, nice to have)
- 0.25 = Minimal impact (tiny improvement, most won't notice)
This scale forces hard choices and prevents everything being scored "high impact."
Step 3: Express Confidence as a Percentage
- 100% = Validated with strong evidence
- 80% = Good data, minor unknowns
- 50% = Reasonable hypothesis, needs validation
- 20% = Low confidence, mostly guessing
Confidence below 50% is a signal to do more discovery work before committing.
Step 4: Estimate Effort in Person-Months
Total team time, not calendar time.
- 2 engineers for 2 weeks = 1 person-month
- 1 designer for 1 week, 1 engineer for 3 weeks = 1 person-month
- Include PM, design, QA, and eng time
Be realistic. Include:
- Design and spec work
- Implementation
- Testing and QA
- Documentation
- Deployment and monitoring
Teams consistently underestimate. Add a 1.3x buffer for unknowns.
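The conversion from total team time to buffered person-months can be sketched like this, assuming roughly 4 working weeks per month (the helper and constants are illustrative):

```python
WEEKS_PER_MONTH = 4        # rough working-weeks-per-month assumption
UNCERTAINTY_BUFFER = 1.3   # padding for unknowns, per the guideline above

def effort_person_months(person_weeks: float) -> float:
    """Convert total person-weeks of work into buffered person-months."""
    return round(person_weeks / WEEKS_PER_MONTH * UNCERTAINTY_BUFFER, 2)

# 2 engineers for 2 weeks = 4 person-weeks = 1 person-month, buffered to 1.3
print(effort_person_months(2 * 2))
```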
Step 5: Calculate and Compare
RICE Score = (Reach × Impact × Confidence%) / Effort
Example:
- Reach: 1,200 users/quarter
- Impact: 2 (high)
- Confidence: 80%
- Effort: 3 person-months
RICE = (1,200 × 2 × 0.80) / 3 = 640
Rank all opportunities by score, then make strategic choices about where to draw the line.
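The worked example maps directly to code, with confidence written as a fraction:

```python
reach, impact, confidence, effort = 1200, 2, 0.80, 3
rice = reach * impact * confidence / effort
print(rice)  # 640.0
```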
Refining Both Frameworks
Add Context Factors
Sometimes the highest-scoring item isn't the right choice. Consider:
Strategic alignment:
Does this support company OKRs or strategic initiatives?
Dependencies:
Does this unblock other high-value work?
Technical debt:
Will this increase or decrease future flexibility?
Learning value:
Will this test critical assumptions or open new opportunities?
Customer commitment:
Have you promised this to key customers?
Use these as tie-breakers, not overrides. If context always overrides scoring, you don't actually have a framework.
Adjust for Diminishing Returns
Both ICE and RICE assume linear scoring, but diminishing returns are real:
- The 5th onboarding improvement has less impact than the 1st
- The 10th export format adds little value
- Continued investment in one area often yields less than diversifying
Periodically ask: "Are we over-investing in this opportunity area?"
Account for Sequencing
Some features must come before others:
- User permissions before collaboration features
- Data import before advanced analytics
- Basic workflow before automation
Build dependency maps alongside scoring. High-scoring items that require low-scoring prerequisites need phasing.
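A simple dependency map alongside scores can be checked with a topological pass. This sketch uses Python's standard `graphlib`; the feature names, scores, and prerequisite edges are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical RICE scores
scores = {"permissions": 180, "collaboration": 520, "import": 240, "analytics": 610}

# Each key depends on the features in its set (prerequisites)
deps = {"collaboration": {"permissions"}, "analytics": {"import"}}

# A valid build order: every prerequisite appears before its dependent,
# which surfaces high-scoring items that need low-scoring work phased in first
order = list(TopologicalSorter(deps).static_order())
plan = [(feature, scores[feature]) for feature in order]
```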
Common Scoring Pitfalls
HiPPO override
Scoring is worthless if the "Highest Paid Person's Opinion" always wins. Establish that evidence-based scores guide decisions, even when leadership disagrees.
Optimism bias
Teams consistently overestimate impact, confidence, and ease (or underestimate effort). Calibrate against past projects: "We thought X would be high impact. Was it?"
Analysis paralysis
Don't spend 3 hours debating whether something is a 7 or an 8. Quick, directionally correct scores beat perfect scores delivered too late.
Ignoring confidence
Low confidence should trigger discovery work, not blind building. If confidence is below 50%, validate before committing significant effort.
Gaming the system
If people inflate scores to get their pet projects prioritized, the framework breaks. Combat this with:
- Transparent scoring (everyone sees the numbers)
- Post-launch reviews (did impact match predictions?)
- Rotating who estimates different factors
Combining with Other Frameworks
ICE and RICE work well alongside:
- Opportunity Solution Trees - Score opportunities within each branch
- Weighted Scoring Models - Use ICE/RICE as factors alongside strategic fit, risk, etc.
- Impact-Effort Matrix - Plot ICE/RICE scores to visualize quick wins vs. big bets
- Customer Value Scoring - Use customer segments to weight reach differently
No single framework answers every question. Combine approaches for comprehensive data-driven prioritization.
When to Stop Using ICE or RICE
Scoring frameworks aren't always appropriate:
Don't use when:
- You have 3 opportunities and all are clearly valuable
- It's an emergency bug or security issue
- The decision is obvious and scoring is just theater
- You're in early exploration (use discovery sprints instead)
Frameworks serve decision-making. When they don't help, skip them.
Prioritize features based on customer impact, not gut feel. Pelin.ai automatically analyzes customer feedback patterns from Intercom, Zendesk, Slack, and sales calls, helping you score opportunities based on real customer pain. Request a free trial and bring data to your prioritization decisions.
