A lot of discussions about product management focus on prioritizing the next features to add to the product. However, focusing only on what to add next can lead us to overlook opportunities in the features we already have. Adding more and more features won’t necessarily lead to a better product if we don’t assess the existing product. It is worth answering, from time to time, the following question: “Should we continue to invest in this feature, or kill it and move on?” Here is how to assess that:
1. Be careful with biases
User engagement alone is not enough to evaluate an existing feature, because the metric carries multiple biases. The first bias is that user engagement does not mean user satisfaction: if our users have no alternative, they will use the feature anyway. Knowing that users are using a feature is not enough to determine whether they are satisfied. The second bias is that we are focusing on the users’ standpoint only; there is no measure of the feature’s impact on the product and our business. Even if users are happy with the feature, if it does not drive revenue for our business, our strategy is probably not robust enough.
2. Use quantitative data for the impact on the business
We definitely want to measure the impact of a feature on our business. Since this is a complex topic, it can be hard to determine how much revenue each feature drives. We also want metrics we can measure for every feature, so that we can compare them. Here are two metrics that, taken together, give a good overview of the impact of a feature on our business:
- Usage – % of intended users who used the feature, at least one time.
- Revenue Influence – the revenue ($) driven by the users that are actively using the feature.
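The two metrics above can be sketched in a few lines of code. This is a minimal sketch assuming a hypothetical data model: a set of intended users, a set of users who used the feature, and a per-user revenue mapping; the function names are illustrative, not from the article.

```python
def usage_rate(intended_users, users_who_used_feature):
    """% of intended users who used the feature at least once."""
    return 100 * len(users_who_used_feature & intended_users) / len(intended_users)

def revenue_influence(active_feature_users, revenue_by_user):
    """Revenue ($) driven by the users actively using the feature."""
    return sum(revenue_by_user.get(user, 0) for user in active_feature_users)

# Hypothetical example data.
intended = {"ana", "bob", "carol", "dan"}
used = {"ana", "carol"}
revenue = {"ana": 120.0, "bob": 80.0, "carol": 40.0}

print(usage_rate(intended, used))        # 50.0 (2 of 4 intended users)
print(revenue_influence(used, revenue))  # 160.0 (ana + carol)
```

Because both metrics are computed the same way for every feature, they can be compared across the whole product.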
3. Use qualitative data for the impact on the users
We just saw how to measure the impact of a feature on our business, which is great. But we obviously need to measure the impact of the feature on our users as well. As we saw previously, usage comes with a strong bias. The best way to measure user satisfaction is to actually ask our users. There are two metrics we could use to measure the impact on the users:
- Perceived Value – On a scale from 1 to 5, how valuable is this feature?
- Perceived Effort – On a scale from 1 to 5, how easy was it to use this feature?
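Scoring these surveys is straightforward: average the 1-to-5 responses per feature. A minimal sketch, with hypothetical response data; note that the effort question asks how *easy* the feature was, so a low average signals high perceived effort.

```python
from statistics import mean

# Hypothetical 1-5 survey responses for one feature.
value_responses = [5, 4, 4, 3, 5]   # "How valuable is this feature?"
effort_responses = [2, 2, 3, 1, 2]  # "How easy was it to use?" (low = hard)

perceived_value = mean(value_responses)
perceived_effort = mean(effort_responses)

print(perceived_value)   # 4.2
print(perceived_effort)  # 2.0
```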
4. Read the results
The decision tree will depend on our context and our overall strategy. However, there are some conclusions we can draw from these metrics.
Usage
Low usage – We might have a problem in the release process, or our go-to-market is not good enough. High usage – We don’t have any issue here. 🙂
Revenue Influence
This is an indication of the risk we are taking with this feature: a clear number showing the revenue linked to it.
Perceived Value
Low value – We did not build the right feature, or we are not addressing the right challenge. High value – We built the right feature. 🙂
Perceived Effort
Low effort – We are probably good here. High effort – Our user experience is not good enough.
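The readings above can be sketched as a small diagnostic function. The thresholds (50% for usage, 3 on the 1-5 survey scales) are illustrative assumptions, not prescriptions from the article, and the effort score follows the survey above (it measures ease, so a low score means high effort).

```python
def diagnose(usage_pct, revenue_influence, perceived_value, ease_score):
    """Map the four metrics to the readings: usage_pct in 0-100,
    revenue_influence in $, survey scores on a 1-5 scale."""
    notes = []
    if usage_pct < 50:
        notes.append("Low usage: check the release process and go-to-market.")
    if perceived_value < 3:
        notes.append("Low value: we may not be addressing the right challenge.")
    if ease_score < 3:  # low ease = high perceived effort
        notes.append("High effort: the user experience is not good enough.")
    notes.append(f"Revenue at stake with this feature: ${revenue_influence:,.0f}.")
    return notes

for note in diagnose(80, 12000, 4.5, 2.0):
    print(note)
```

With high usage and high value but a low ease score, the function flags only the user-experience problem, alongside the revenue at stake.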
5. Conclude
Once again, this will depend on our strategy and the risk we are willing to take. However, here is a simple example of a decision we might make based on the four metrics. Let’s consider a feature with the following metrics:
- High Usage
- Low Revenue Influence
- High Perceived Value
- High Perceived Effort