How to prioritize features using weighted and unweighted scorecards
The prioritization process demands a lot from product managers and decision-makers. It’s not just about delivering the most valuable features to the user; it’s also about bringing visibility to the work that will go into building any given initiative.
Good prioritization is about surfacing those constraints, and establishing a clear, cross-departmental understanding of the vision (as well as communicating why that’s the vision, and why certain initiatives will never be built because they don’t align with those priorities).
There are many types of prioritization frameworks that PMs can use, and they all fit different types of products and stages of the development lifecycle (read our guide to the most popular methods here).
If you want to keep the process lean and easy to standardize, then weighted and unweighted scoring prioritization might be the best framework for your team. It all starts with choosing the criteria you’ll evaluate each feature by. These criteria usually fall under one of two dimensions: “How important is it?” values vs. “How difficult is it to build?” values.
Different teams label these criteria differently, but generally speaking, this is how they can be categorized when it’s time to choose X vs. Y:
Once you pick the values you’ll estimate (value vs. effort for this example), the scorecard can look like this in an idea management tool:
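If you’d rather see the mechanics spelled out, here’s a minimal sketch in Python. The feature names and 1-5 scores are hypothetical, and ranking by value-to-effort ratio is just one reasonable convention:

```python
# A minimal sketch of an unweighted value vs. effort scorecard
# (hypothetical features, scored on a 1-5 scale).
features = {
    "CSV export": {"value": 4, "effort": 2},
    "SSO login":  {"value": 5, "effort": 4},
    "Dark mode":  {"value": 2, "effort": 3},
}

# One simple way to rank: highest value-to-effort ratio first.
ranked = sorted(features.items(), key=lambda kv: kv[1]["value"] / kv[1]["effort"], reverse=True)
for name, scores in ranked:
    print(f"{name}: value={scores['value']}, effort={scores['effort']}")
```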
Unweighted scorecards: The most common way to prioritize features
Prioritization is a game of constraints, and the winner is never the company that gets to build the most features. Rather, a company that does prioritization well is one where the most impactful features get built within a limited set of resources.
Unweighted prioritization scorecards are a great way to easily compare highly valuable features (desirability) vs. how realistic it is to build them (feasibility). A simple scorecard is the perfect lean method for aligning everyone on what internal, strategic criteria make a good feature. It’s also a great exercise in setting realistic expectations for what can be built with the resources the company has.
For this guide, we’ll explain the three most popular prioritization scorecards. Each is usually a combination of:
- Value: Value is defined as any benefit obtained from building the feature/initiative. It can refer to business value, like revenue, or value to the customer.
Versus one of three values:
- Cost: All the costs associated with building a feature: development, operational, implementation, and maintenance costs, whether expressed as money, time to build, or technical effort.
- Complexity: How technically complex a feature will be to build. This can mean technical/development complexity, implementation complexity, testing/UX complexity, etc.
- Risk: This prioritization criterion is mostly used by new products and startups that are in the process of finding their footing in the market.
Value vs. Cost (or complexity)
We defined value as any benefit obtained by the customers and the business after a feature is built. Cost, on the other hand, refers to anything that makes a feature harder or more expensive to build.
Cost, in this case, isn’t just a monetary value (person-hours, number of team members required full time, etc.). Costs can also show up as development and implementation effort, operational costs, technical complexity, and risk factors.
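Because cost usually aggregates several components, one way to model it is to sum the sub-costs before comparing them against value. Here’s a minimal sketch; the features, component names, and 1-5 scores are all hypothetical:

```python
# A minimal sketch of value vs. cost, where cost aggregates several
# components (hypothetical features; every score is on a 1-5 scale).
features = {
    "API webhooks": {"value": 5, "dev_effort": 3, "ops_cost": 1, "implementation": 2},
    "White-label":  {"value": 3, "dev_effort": 4, "ops_cost": 3, "implementation": 4},
}

for name, s in features.items():
    # Sum the cost components into a single comparable number.
    total_cost = s["dev_effort"] + s["ops_cost"] + s["implementation"]
    print(f'{name}: value={s["value"]}, total cost={total_cost}')
```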
Value vs. Risk
Development risks are any potential unforeseen problems that might arise during a build. Risk is one of the hardest values to estimate and calculate because, by definition, you can’t fully predict what might go wrong.
Accounting for risk during prioritization results in one of two scenarios: the risks never materialize, and the development team doesn’t have to put any crisis-control plans in motion; or the risks do materialize, and the team has a plan of action for weathering them and coming out the other side.
Here are the types of risks a product team might want to account for during the prioritization process (as well as the questions they can ask to assess each potential risk):
Delay risks
- What types of constraints might affect our predicted time to deliver this initiative or feature?
- Have we based our schedule estimations on as much data as possible? Or has it been mostly optimistic guesswork that doesn’t account for team capacity?
- Have we accounted for every task that can affect how long it will take to build this feature?
Cost risks
- How might we go over budget?
- Could the development scope and requirements change over time as new research/testing findings emerge?
- Are we prepared for any unforeseen costs and have we allotted a budget for them?
Technical risks
- Does the team have all the tools, knowledge and inter-departmental support needed to build this initiative?
- What are some functional reasons we might not be able to build, deliver or implement a potential feature?
- Have we accounted for all inter-departmental dependencies for completing the deliverables? Or was the decision-making done in silos?
Sign up for a free trial and give one of our product roadmap templates a whirl.
RICE
Known as Intercom’s internal scoring system for prioritizing ideas, RICE allows product teams to work on the initiatives that are most likely to impact their goals.
This scoring system measures each feature or initiative against four factors: reach, impact, confidence and effort (hence the acronym RICE). Here’s a breakdown of what each factor stands for and how it’s typically quantified:
- Reach: How many people the feature will affect within a given time period (e.g., customers per quarter).
- Impact: How much the feature will affect each person, scored on a simple scale (e.g., 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal).
- Confidence: How confident you are in your reach, impact and effort estimates, expressed as a percentage.
- Effort: How much time the feature will take from product, design and engineering, usually estimated in person-months.
Then, those individual numbers get turned into one overall score using a simple formula: RICE score = (Reach × Impact × Confidence) / Effort. This formula gives product teams a standardized number that can be applied across any type of initiative that needs to be added to the roadmap.
After running each feature through this calculation, you’ll get a final RICE score that you can use to rank the order in which you’ll build the features. Here’s an example:
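In lieu of a full worked table, here’s a minimal sketch of the calculation in Python. The features, reach numbers, and estimates are all hypothetical:

```python
# A minimal sketch of the RICE calculation (hypothetical features and estimates).
# Reach: customers affected per quarter; Impact: 0.25-3 scale;
# Confidence: a percentage expressed as a fraction; Effort: person-months.
features = [
    {"name": "In-app onboarding", "reach": 800, "impact": 2,   "confidence": 0.8, "effort": 3},
    {"name": "Slack integration", "reach": 300, "impact": 1,   "confidence": 1.0, "effort": 2},
    {"name": "Usage dashboard",   "reach": 500, "impact": 0.5, "confidence": 0.5, "effort": 4},
]

# RICE score = (Reach x Impact x Confidence) / Effort
for f in features:
    f["rice"] = (f["reach"] * f["impact"] * f["confidence"]) / f["effort"]

# Rank features from highest to lowest RICE score.
for f in sorted(features, key=lambda f: f["rice"], reverse=True):
    print(f'{f["name"]}: {f["rice"]:.0f}')
```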
Weighted scorecard: Add relative importance to the calculations
This prioritization method, also known as a weighted scorecard, involves the same steps as the previous methods, with one added calculation: stakeholders must choose the relative importance of each criterion.
A weighted scorecard uses a second scoring dimension: the relative value of each criterion the features are being rated on. This relative value is a “standardized” weight for each prioritization criterion, with the weights usually adding up to 100% or 10. The added weight dimension is useful for taking into consideration the importance of each prioritization criterion in relation to the others. It’s a good method for creating transparency around how important each prioritization factor is to all stakeholders before the features are scored and ranked.
Here’s a visualization of the weighted prioritization criteria. For this example, the criteria are customer value, impact on business goals, implementation costs and development risk. Each has been assigned a weight, with the weights adding up to a total of 100.
By asking stakeholders to assign a weight to each category before the features are scored, product managers are saying: “You have to decide which of these factors matters the most in terms of moving the needle towards development, and which factors shouldn’t have the same weight in the decision-making process.”
The idea is that each scoring category (value, cost, impact, risk) has a different level of importance. This level of importance is then quantified as a “weight”.
For the previous scorecard, this is how you’d get the final priority score:
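In short, the final priority score is the weighted sum of each criterion’s score. Here’s a minimal sketch with hypothetical weights and 1-5 scores; note the assumption that cost and risk are scored inversely (a high score means low cost or low risk), so a higher total is always better:

```python
# A minimal sketch of a weighted scorecard (hypothetical weights and 1-5 scores).
# Weights reflect each criterion's relative importance and add up to 100.
# "Cost" and "risk" are scored inversely: a high score means low cost/risk.
weights = {"customer_value": 40, "business_goals": 30, "implementation_cost": 20, "development_risk": 10}

features = {
    "Audit log":  {"customer_value": 4, "business_goals": 3, "implementation_cost": 2, "development_risk": 4},
    "Mobile app": {"customer_value": 5, "business_goals": 4, "implementation_cost": 1, "development_risk": 2},
}

def priority_score(scores: dict) -> float:
    # Multiply each criterion's score by its weight and sum the results.
    return sum(weights[criterion] * score for criterion, score in scores.items())

for name, scores in features.items():
    print(f"{name}: {priority_score(scores)}")
```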
How to create a prioritization scorecard
For both weighted and unweighted prioritization methods, the logistical steps are essentially the same. As with all prioritization meetings, the main hurdle is ensuring that every stakeholder understands what criteria the features will be measured against, and how the strategy and vision should inform those prioritization decisions.
Product managers can’t arbitrarily prescribe the weights and scores for each feature; the scores need to take into account the priorities of anyone affected by the direction of the product. This includes engineering/development, design, and, to a lesser extent, CS and Sales.
1. Select the features to be scored
The way product teams arrive at a list of features to be prioritized varies from organization to organization. It all starts with a solid idea management process that consolidates user research, feedback and observable data into one platform or tool.
Potential ideas should live in one place instead of across multiple spreadsheets, inboxes and Google docs. An idea management platform is important for creating an environment of accountability, transparency and effortless buy-in and alignment among all stakeholders and team members.
After centralizing potential feature ideas in one place, product teams and relevant decision-making stakeholders can see A) that their ideas are being taken into account and B) how those ideas fare against the established priorities for the product.
Once you’ve put together a list of potential features to be built for the next quarter or year, it’s time to agree on what the measuring criteria should be for each one.
2. Pick the criteria to be scored
The number of teams and stakeholders involved in picking the values that matter varies from one product to another. However, it’s definitely a group activity and it shouldn’t happen in a vacuum. If anything, picking the criteria (and the weight of each if you’re using a weighted scorecard) is one of the most important discussions PMs can have for clarifying expectations and scope limits, and for surfacing cognitive biases.
Depending on your product, the criteria can follow one of the unweighted scoring methods described earlier (value vs. cost/complexity, value vs. risk), RICE, or a weighted scorecard.
If your team decides to use a weighted scorecard, there’s an extra step at this stage in the process: agreeing on the relative weight of each scoring criterion (out of 10 or out of 100%). This way, teams can establish relative value in a consistent, standardized way. For example, implementation costs might carry more weight than customer value when it’s time to decide what to build, and assigning a relative weight to each criterion lets teams account for that.
3. Meet with stakeholders to align on the chosen criteria
After you’ve picked the criteria to be scored (and, if you’re using the weighted method, the weight of each criterion), it’s time to loop in anyone whose work might depend on what gets built.
These prioritization meetings are not just about representing the interests of the different departments that make up the product (since their interests are ultimately tied to the overall product vision). They’re also about accounting for unseen needs that might affect what gets built—the needs that don’t fall under customer satisfaction or user experience.
These needs can be things like operational costs, development risk and complexity. A PM might be aware that these constraints exist, but he or she should rely on the relevant department leaders to make their case for why those should be accounted for.
4. Assign scores to each feature
It’s time to score each feature according to the criteria established and agreed upon earlier.
When it’s time to assign a score, it’s important to use a consistent rating scale rather than arbitrary numbers and percentages. A score out of 3, 4, or 5 is enough to see how a feature rates in terms of value, effort or whatever criteria your team is using to prioritize and rank features.
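One way to keep scores honest is to collect a rating from each stakeholder and average them, so no single voice dominates. Here’s a minimal sketch; the stakeholders, criteria, and 1-5 ratings are hypothetical:

```python
# A minimal sketch of collecting and averaging stakeholder scores
# (hypothetical names and ratings, on a 1-5 scale).
from statistics import mean

# Each stakeholder rates the same feature against the agreed criteria.
ratings = {
    "engineering": {"value": 3, "effort": 4},
    "design":      {"value": 4, "effort": 3},
    "sales":       {"value": 5, "effort": 2},
}

criteria = ["value", "effort"]
averaged = {c: mean(r[c] for r in ratings.values()) for c in criteria}
print(averaged)  # {'value': 4, 'effort': 3}
```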
If you're an agile product team, check out our guide to creating an agile roadmap.
Prioritization matrix: Visualize scorecards on a chart
If scorecards aren’t your thing and you prefer seeing information in graph form, you can use a 2x2 prioritization matrix to visualize the order in which initiatives should (ideally) be worked on. Prioritization matrices work well with any of the x vs. y scorecards defined earlier in this post (value vs. cost/complexity, value vs. risk).
A 2x2 prioritization matrix is simply a visual representation of the order each initiative should be worked on. Along the vertical axis, you have value (or benefit, or impact) and along the horizontal axis, you have effort (or cost, or risk, or complexity). Then, each feature gets plotted on the chart.
Here’s what each quadrant can tell you (and the order in which they should be worked on), with a small sketch of the logic after the list:
- High value, low effort (“Quick wins”): These are the no-brainers, the low-hanging fruit. These are the ideas that don’t require tons of development effort, time, or money. They’re low-risk, cheap solutions that are relatively easy on the technical side.
- Low value, low effort (“Maybe later”): These are the “maybes”, the “will get to it later when the scope opens up a bit more” ideas. They’re not essential to the success of the product, and their impact would be modest, but their low cost makes them worth revisiting later.
- High value, high effort (“Big new features”): These are the ideas that need a strategic approach. Your team will definitely work on implementing these ideas at some point in the future, they just need to be planned out more carefully.
- Low value, high effort (“Time sinks”): These are the ideas you can afford to pass up. They’re not worth doing at the time of the assessment, and they’re not likely to become a priority for a while.
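Here’s the quadrant logic as a minimal sketch. The features and 1-5 scores are hypothetical, and the midpoint threshold of 3 is an assumption; pick whatever cutoff fits your scale:

```python
# A minimal sketch of a 2x2 prioritization matrix (hypothetical
# features and 1-5 scores; the midpoint threshold of 3 is an assumption).
THRESHOLD = 3

def quadrant(value: int, effort: int) -> str:
    # High/low is decided by comparing each score to the midpoint.
    if value >= THRESHOLD:
        return "Quick wins" if effort < THRESHOLD else "Big new features"
    return "Maybe later" if effort < THRESHOLD else "Time sinks"

features = {
    "CSV export":  (4, 2),
    "Mobile app":  (5, 5),
    "New icons":   (2, 1),
    "Legacy port": (1, 4),
}

for name, (value, effort) in features.items():
    print(f"{name}: {quadrant(value, effort)}")
```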