Best practices for collecting customer feedback using surveys
So, you’ve gotten to know your users and defined the segments they fall into. Whether those segments are based on subscription model, MRR/ARR, use case, industry, or company size, it’s time to reach out and start measuring how happy users actually are with your product.
After collecting satisfaction scores, you can dig deeper into the why behind their answers using customer interviews (which we’ll cover in a later guide).
Enter customer feedback surveys.
In product organizations with a solid strategy for continuously integrating customer feedback throughout the product development process, PMs use the feedback from those defined segments to decide what to build next.
Having a strategy for tackling customer feedback eliminates several headaches for product managers: the analysis paralysis of trying to listen to every piece of feedback that comes in, the seemingly endless task of organizing that feedback into categories, and then running all those ideas through multiple prioritization frameworks.
More importantly, when the product team knows which users they should be paying attention to, it allows them to reach out with active, structured feedback methods like customer feedback surveys and interviews (whether it’s via email, chat or phone).
These structured methods for collecting actionable customer feedback give product managers a multi-dimensional view of the problems their users are facing—and eventually surface the reason a user reached out to suggest a feature in the first place.
In this guide, we’ll go over the best practices for designing and collecting one of the most actionable types of structured customer feedback: customer feedback surveys.
Customer feedback surveys: Why are they important?
Customer feedback surveys aren’t for everyone. When you try to quantify complex questions that need a qualitative dimension in order to be actionable, it can be a recipe for disaster—especially in the context of user research.
User research involves diving deep into problems and gaining the deepest possible understanding of the journey your users go through when they use your product. Without an answer to the question “What part of our product is causing you frustration or stopping you from achieving your goals?” there isn’t much action a PM can take on customer feedback.
A customer feedback strategy also involves having a good plan for tackling different pieces of feedback and the different types of information they contain.
On their own, customer feedback surveys can only quantify sentiment and brand value. But they’re an important step on the road toward discovering whether or not you’re solving the right problems for the right people.
First, let’s quickly define the three main types of customer satisfaction surveys that you can use: the Net Promoter Score (NPS), the Customer Satisfaction Score (CSAT) and the Customer Effort Score (CES).
What’s the difference between NPS and CES?
Your NPS score looks at the big picture: the long-term status of your users’ relationship with your product. It’s a survey that measures how likely customers are to recommend your product, not so much how and where they’re experiencing frustrations with it.
If NPS scores offer a macro level view of customer satisfaction with your product, then CES scores give you a micro level view. By prompting users to answer this survey soon after they complete specific actions, you can get a look into specific parts of the product or the customer experience that need improvement.
In reality, CES, NPS and CSAT scores are three important CX metrics that, when combined and deployed strategically, give product teams a good compass for identifying what problems are worth exploring.
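To make the three metrics concrete, here is a minimal sketch of how each score is typically computed from raw responses. The scales are assumptions based on common practice (NPS on 0–10, CSAT on 1–5, CES on 1–7); adjust them to match whatever your survey tool actually collects.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a -100 to +100 scale. Assumes a 0-10 response scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(scores):
    """CSAT: share of 'satisfied' responses (4 or 5 on an assumed
    1-5 scale), expressed as a percentage."""
    return round(100 * sum(1 for s in scores if s >= 4) / len(scores))

def ces(scores):
    """CES: average ease rating on an assumed 1-7 scale,
    where 1 = very difficult and 7 = very easy."""
    return round(sum(scores) / len(scores), 1)

# Example with made-up responses:
print(nps([10, 9, 8, 7, 6, 10, 3]))   # 3 promoters, 2 detractors out of 7
print(csat([5, 4, 3, 5]))             # 3 of 4 satisfied
print(ces([7, 6, 5]))                 # average ease
```

Note that NPS deliberately ignores passives (scores of 7–8), which is why two products with identical averages can have very different NPS results.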
NPS best practices
NPS surveys are simple and short, and their results are easy to work with when it’s time to determine which problems deserve attention. They can predict potential churn so that teams can devise a plan to halt it, and they show customers that you’re taking their problems seriously.
1. Don’t send it too often—or too rarely
If you send your surveys too early in the customer journey, your customers might not have had a chance to experience your product fully. And if you send them too late, your customers might forget how they felt while they were using your product. It’s about finding a cadence that makes sense for your product specifically.
Measure the way your users move from one stage in the customer journey to the next, and schedule your NPS surveys based on that. The important thing is that you measure user sentiment across multiple points in time. Sentiment towards a product tends to evolve as the customer’s knowledge of the product deepens, and they can more accurately answer whether or not the product is solving the problems they care about.
Measuring how specific customer segments feel at each of those points shows you how their sentiment evolves over time.
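The measurement loop described above can be sketched as grouping responses by segment and period, then computing NPS per group. Everything here is illustrative: the segment names, periods, and scores are invented, and the standard NPS formula (% promoters minus % detractors) is assumed.

```python
from collections import defaultdict

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses: (segment, period, 0-10 score)
responses = [
    ("enterprise", "2024-Q1", 9),
    ("enterprise", "2024-Q1", 6),
    ("enterprise", "2024-Q2", 10),
    ("smb", "2024-Q1", 8),
    ("smb", "2024-Q2", 4),
]

# Group scores by (segment, period), then score each group.
by_group = defaultdict(list)
for segment, period, score in responses:
    by_group[(segment, period)].append(score)

trend = {group: nps(scores) for group, scores in sorted(by_group.items())}
print(trend)
```

Plotting `trend` per segment makes it easy to spot a segment whose sentiment is sliding quarter over quarter, even when the overall score looks stable.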
2. Follow up with customers who respond (whenever possible)
So you collected a good number of responses that cover the entire spectrum (from the promoters to the detractors). Now what?
A promoter will be more likely to engage in a post-NPS survey where you ask in-depth questions like:
- What are your goals with our product?
- What aspects of the product helped you achieve your goals?
- What features do you consider to be useful? Which ones aren’t useful?
When you follow up someone’s response with an interest in their problems and needs, you’re creating an opportunity to build deeper, meaningful relationships with your users.
And by following up with detractors to get their feedback, you’re unearthing potential growth by asking them:
- What could we do differently to change your initial response to the survey?
- Where did you meet the biggest challenges using our product?
- Are you considering our competitors, and if so, which ones and why?
3. Share the results with the rest of the organization
When you make NPS scores available to every team that has a hand in how the user experiences your product, it creates a deeper level of empathy, one that everyone can use to think about how to improve the customer experience in an actionable way.
When you share your NPS scores with the rest of the company, it’s important to communicate how you plan to make NPS scores actionable. For example, sales and CS can use NPS results to determine churn risks by following up with customers who might be struggling simply because they don’t understand your product, and who just need a nudge in the right direction.
CES best practices
Customer Effort Scores are pretty straightforward. They help you determine how difficult it was for a user to complete a task, and how much of a headache it was to reach out to support for help.
By measuring and taking action on those frustration points in a comprehensive way, companies can manage the product’s reputation online more efficiently.
1. Wording is everything
One of the biggest challenges with CES surveys is that the word “effort” can mean different things to different users. It’s not a standardized term, which means it leaves a lot of room for individual interpretation. Some users might think that a small delay during the onboarding process was inconvenient, while others might not think twice about it.
That’s why, when you send a CES survey, ask how easy it was to use the product, rather than how much effort it took.
Then, follow up with two (optional) questions:
- What made using our product easy or difficult?
- What can we do to make it easier?
2. Always combine the results with other data
CES surveys aren’t standalone indicators of customer sentiment. Not only does a product need multiple types of surveys to get a comprehensive sense of customer satisfaction, it also needs to rely on other data points.
Ground your hypotheses in observed behavioral data, as well as in the other customer success metrics and goals your team is tracking. CES, and other surveys like it, are complementary: they add another dimension to problems that are already grounded in more quantitative data.
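As a minimal sketch of combining CES with another data point, the snippet below pairs per-user CES responses with support ticket counts from the same period to flag users where reported difficulty and ticket volume coincide. The user IDs, scores, and thresholds are all invented for illustration.

```python
# Assumed 1-7 ease scale: 1 = very difficult, 7 = very easy.
ces_scores = {"u1": 2, "u2": 6, "u3": 3}

# Hypothetical support tickets filed by each user in the same period.
tickets = {"u1": 5, "u2": 0, "u3": 4}

# Flag users who both report difficulty (CES <= 3) and file
# several tickets (>= 3) — a stronger signal than either alone.
flagged = [u for u in ces_scores
           if ces_scores[u] <= 3 and tickets.get(u, 0) >= 3]
print(flagged)
```

A user who scores low on CES but never contacts support may just be venting; one who does both is a concrete candidate for a follow-up interview.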
Looking ahead: Digging deeper into the why
Customer feedback surveys are pulse checks that act as signals for a PM’s attention. They flag a problem along the customer journey that’s causing dissatisfaction and frustration—using a simple question and an easy-to-classify numerical score.
Let’s say you developed a way to include NPS, CSAT and CES scores in your research and planning strategy. You have different customer segments expressing unhappiness regarding a specific part of your product, or their journey. Now you want to determine why this is happening. In order to do that, you have to talk to your customers. Like really talk to them, using an almost scientific approach to the way you formulate and act on your hypotheses for why they answered the way they did.
That’s where interviews come in: the strategic tool for uncovering those shortcomings and solidifying them into actionable problems you can solve with tangible product updates and features.
With interviews, you’re looking to dive into what the user is trying to achieve and where your product fails to fulfill those needs. You hit the jackpot when you find underserved problems that align with the vision of your product and are feasible given your resources. Interviews are beyond the scope of this guide, so we’ll get into them in the next chapter.
Ready to start planning and building customer-driven product roadmaps? Sign up today and try our ready-to-use templates.