Just because a customer clicked a smiley face in your post-service feedback survey does not mean you gave them high-quality service.

Customer satisfaction and customer service quality are not necessarily linked, and that’s a problem because plenty of customer service teams rely solely on CSAT and NPS surveys to judge their performance.

In this post, we're going to explore the critical difference between customer satisfaction and customer service quality. Then, we’ll show you, step by step, how to build an effective customer service quality assurance system.

What is customer service quality assurance?

Customer service quality assurance is a structured process for evaluating responses to support inquiries. It uses a rubric to ensure evaluations are both consistent and based on the standards you've set for your team, with the goal of identifying areas where performance can be improved.

However, a quality assurance process isn't about policing responses with a red pen. It’s more like tending a garden. You’re giving things the care and attention they need to grow: offering feedback, identifying trends, and helping everyone align on what “good” looks like.

Done well, customer service QA supports your team’s confidence and your customers’ trust. When both of those are strong, everything else tends to run a little smoother.

Why is having a quality assurance process important?

Without a formal QA process, it’s easy for standards to slip. This can happen for several reasons. You're scaling quickly and don't have as much time to train new team members. You have team members who've been around for a while and are feeling a bit burned out. Or your customers are asking more complex questions that your team hasn't been trained to answer.

A good QA system gives you an ongoing process for reviewing conversations, sharing feedback, and spotting patterns. More than anything, it's about supporting your support team. It’s a way to coach people, celebrate what they’re doing well, and gently guide them toward better outcomes.

The benefits of having a customer service QA process

Implementing a system for measuring the quality of your team's responses to customers has several benefits:

  • It creates a consistently great customer experience. Regularly coaching your team on what a good support response looks like ensures that customers will get an on-brand, helpful, and accurate reply regardless of who's responding.

  • It leads to more constructive feedback. The use of a rubric helps your team members understand exactly what you're looking for and makes it easier for reviewers to provide actionable tips on what could be improved.

  • It helps you spot trends before they snowball. Imagine that customers are running into an issue that's new to your team. Someone answers the first inquiry about it, but they answer it wrong, and others start using that reply as the basis for their own responses. QA would help you identify the issue before it becomes a saved reply.

  • It boosts team confidence. Not knowing what you're supposed to be doing at work and constantly wondering if you're doing a good job is miserable. When agents get regular feedback, they not only send better replies but also feel happier with their work.

  • It makes it easier to scale your team. Regular QA reviews help everyone on your team clearly understand what your expectations are, which helps them provide better guidance to new team members that they're mentoring.

How to create a customer service quality assurance process in 6 steps

If you're sold on the importance and benefits of QA and are ready to get started, follow the six steps below to set up a customer service quality assurance system for your team.

1. Define customer service quality for your company

How can you know whether your customer support department is consistently delivering high-quality service? You need to measure quality directly, which means first understanding what “quality” service means for your company.

It is your customer base who will ultimately decide whether or not you are delivering great service, but that leaves us with a conundrum: What if those customers don’t agree with each other’s assessments of your service?

What one person considers spectacular service might be merely acceptable to another, based on their unique expectations and past experiences.

Your team needs a way to consistently measure customer service quality, a measure they can use before the service is delivered instead of afterward. Start by pulling together data from the following sources:

  • Your company and team values.

  • Your customer service vision or philosophy, if you have one.

  • Existing CSAT and NPS comments that focus on positive or negative customer service interactions.

  • Reviews of your product or service that mention customer service.

  • Examples of excellent customer service your team has delivered in the past, as well as instances of service failure.

As you collect data, you will likely identify some common themes — the things that matter to your company and your customers and their relative priorities. Do your customers value detailed, one-to-one service? Does your customer feedback mention the speed of replies more often than anything else?

Use those themes to shape your answer to the basic quality question: What should a great customer service answer look like? Write down everything you can think of, have your team contribute suggestions, and refer to examples of your best customer service work.

That list will form the basis of your customer service quality scorecard or rubric.

2. Create a customer service quality rubric

A rubric is a list of criteria you can measure a customer service answer against. With a clear, well-written rubric, two people should be able to review the same customer service interaction and come up with similar scores.

As a general guide, a customer service quality rubric might include these areas:

  • Voice, tone, and brand: Does the answer feel like it comes from your company (while allowing for individual personalities)?

  • Knowledge and accuracy: Was the correct answer given, and were all of the customer’s questions addressed?

  • Empathy and helpfulness: Were the customer’s feelings acknowledged and needs anticipated?

  • Writing style: Were spelling and grammar correct, was the answer clear, and was the layout helpful?

  • Procedures and best practices: Were the correct tags and categories added, and were links to knowledge base articles included?

Having four to five main criteria is probably enough, though each one may include multiple elements. Keeping it relatively light will make the rubric much more likely to be used.

To save you some time, we’ve put together some of the most common elements of a quality rubric in this spreadsheet, which also includes a scorecard.

Share your completed rubric with your team, and try applying it to some existing conversations together. You will quickly identify any missing or confusing elements and areas that may need to be reconsidered.

You’ll know your rubric is working when you can reliably get similar scores on a conversation reviewed by different people.
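If your team tracks review scores in a spreadsheet or a small script, a rubric can be modeled as a simple scorecard. Here’s a minimal sketch in Python: the criteria names mirror the list above, but the 1–5 scale, the equal weighting, and the 0.5-point agreement tolerance are all illustrative assumptions — tune them to your own rubric.

```python
# Illustrative rubric criteria (from the example areas above) on an assumed 1-5 scale.
CRITERIA = [
    "voice_tone_brand",
    "knowledge_accuracy",
    "empathy_helpfulness",
    "writing_style",
    "procedures_best_practices",
]

def score_conversation(ratings: dict) -> float:
    """Average the 1-5 ratings across all rubric criteria (equal weights assumed)."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

def reviewers_agree(score_a: float, score_b: float, tolerance: float = 0.5) -> bool:
    """Rough consistency check: two reviews of the same conversation
    should land within `tolerance` points of each other."""
    return abs(score_a - score_b) <= tolerance
```

A quick calibration exercise: have two people score the same conversation, then run their totals through `reviewers_agree`. Repeated disagreement usually points to a confusing criterion rather than a bad reviewer.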

3. Select a quality assurance review process

Quality assurance can take many forms at differing levels of complexity. For example, when the wider Help Scout team takes part in whole company support “power hours,” we use a Slack channel to share draft answers with our Customers team. They will review for accuracy and tone and offer suggestions for improvement.

The right choice for you will depend on your team size, conversation volume, and resources. Here are four common options to consider. You may use more than one or shift between them over time. We present them here in no particular order (though self-review would be our lowest-priority choice).

Leader reviews

Either team leaders review their direct reports’ work or a manager reviews work for the whole department.

Pros:

  • With fewer people reviewing, it is easier to create consistent review styles and feedback.

  • It’s helpful for leaders to compare work created across their teams to identify issues and trends.

Cons:

  • It's time-consuming for leaders to review a reasonable number of conversations.

  • Feedback and insights from only one source limit the speed and amount of improvement possible.

Quality assurance specialist reviews

Common in larger companies, a permanent QA role (or team) can focus full time on monitoring and addressing quality.

Pros:

  • Specialists can get very good at reviewing and giving feedback.

  • It allows for a higher percentage of interactions to be reviewed.

Cons:

  • Specialists require a larger financial investment.

  • You aren’t developing QA skills in the individuals on your support teams.

Peer-to-peer reviews

Each support person reviews the work of other support people on the team, scoring them against the rubric. Typically, each person would review a small number of conversations each week.

Pros:

  • People learn directly from their peers by seeing different approaches and new information.

  • It promotes an open and collaborative culture.

  • You can review a lot of conversations when everyone is sharing the work.

Cons:

  • Some people may be harsh or inconsistent reviewers, requiring extra training.

  • It can be tricky to get people to do the reviews when their queue is full of customers waiting for help.

Self-reviews

Individuals select a handful of their own customer interactions and measure them against the agreed-upon standard to identify areas that can be improved. This should generally be your last resort review option.

Pros:

  • It allows for individual growth and self-improvement.

  • It’s simple to implement and much better than no reviews at all.

Cons:

  • People are less likely to identify their own problem areas because they already know what they intended to say.

4. Pick which conversations you'll review

Whichever model you use, you cannot realistically review every customer interaction. So which conversations should you review, and how should you find them? Here are some suggestions — use what works for you!

  • Random sampling: Take whichever conversations pop out from your QA tool, or blindfold yourself and poke your cursor at a screen full of conversations. You’ll get started — and that’s the main thing — but you may have to sift through some uninteresting conversations first.

  • New team members' conversations: When onboarding a new support agent, reviewing their work is critical both to protect the customer and to help the newcomer learn your tone, style, approach, and tools.

  • Complaints and wins: Work through conversations that resulted in complaints or praise, as they may be more likely to involve learning opportunities.

  • High-impact topics: Use tags or workflows to find conversations on particularly important areas of your product or service where customer service quality might make the biggest impact — e.g., during trials, pricing conversations, or with VIP customers.

  • Highly complex conversations: Focus on detailed conversations or those involving multiple people where new scenarios and surprises lurk to be explored by the team.

The specific process of finding and opening those conversations for review will of course depend on the system you are using to perform those reviews.
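If you can export conversations from your help desk, the selection strategies above are easy to script. The sketch below is a hypothetical example — the record fields (`id`, `tags`, `csat`) and the rating labels are invented for illustration, not taken from any particular tool’s API.

```python
import random

# Hypothetical conversation records, as you might export them from a help desk.
conversations = [
    {"id": 1, "tags": ["pricing"], "csat": "great"},
    {"id": 2, "tags": ["bug-report"], "csat": "not good"},
    {"id": 3, "tags": ["trial"], "csat": None},
    {"id": 4, "tags": ["how-to"], "csat": "great"},
]

def random_sample(convos, n, seed=None):
    """Random sampling: an unbiased way to get started."""
    rng = random.Random(seed)
    return rng.sample(convos, min(n, len(convos)))

def by_tags(convos, wanted):
    """High-impact topics: keep conversations tagged with areas like trials or pricing."""
    return [c for c in convos if set(c["tags"]) & set(wanted)]

def complaints_and_wins(convos):
    """Conversations that drew strongly positive or negative ratings."""
    return [c for c in convos if c["csat"] in ("great", "not good")]
```

Mixing strategies works well in practice, e.g. a random sample for baseline coverage plus a tag filter for your highest-impact topics.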

5. Select a quality assurance tool

Your quality assurance tool does not need to be complicated. A simple spreadsheet scorecard will work fine in many cases and is an enormous improvement over not reviewing interactions at all.

If a spreadsheet is no longer working for you (perhaps because of higher volumes, larger teams, or a need for better reporting), there are plenty of support QA tools on the market. Here are some of the key considerations when selecting the right QA tool for your team:

  • Does this tool support the style of reviews you want to do (e.g., can it arrange peer-to-peer reviews)?

  • Will it integrate with your help desk, and, if so, how good is that integration?

  • Is the pricing acceptable at the volume of reviews you would like to do?

  • Will its reporting options help you answer the questions you have about your team’s performance?

  • Does it perform well, and is the user experience smooth? (A clunky review experience is less likely to be used regularly.)

  • Will it help you identify the types of conversations you are interested in reviewing?

  • How good is the customer service experience at the tool's company?

6. Roll out your new quality assurance process

In addition to a clear, agreed-upon rubric, launching a successful QA process requires the right environment for the team to work in and training on how to review effectively.

  • Build trust and psychological safety within the team. If people don’t feel safe raising problems or disagreeing, it will be difficult to identify and improve on any quality issues.

  • Share your rubric and discuss quality as a team. As part of developing your rubric, you should be holding discussions with the team, listening to their perspectives, and coming to understand together what quality service looks like. That may also involve higher-level metrics like average response times, CSAT, and NPS.

  • Train reviewers on giving good feedback. Feedback should be specific and include suggestions for improvement when needed. Share examples of good feedback and unhelpful feedback.

  • Begin your review process. Try running the processes, keeping an eye out for any confusion, disengagement, or training issues.

  • Share feedback and take action. Use the review data to identify people or situations where quality could be improved, and share that feedback with the relevant people.

Your quality assurance process will need to be modified over time as your team structure, conversation volume, and underlying work change. The process should always be in service to the goal of delivering higher-quality help, so do not hesitate to modify it when you identify issues.

Want to learn more? Check out this webinar about measuring support quality beyond CSAT and NPS, featuring Beth Trame of Google Hire, Shervin Talieh from PartnerHero, and Mathew Patterson of Help Scout.

Set (and raise) your own bar for customer service quality

Customers come and go, markets change, products launch, and staff members are promoted. Through it all, you need a way to know whether your quality is improving or declining.

If you rely on your customers telling you when you haven’t done a good job, you will always be reacting to problems that have already happened.

Stop worrying about that one person who somehow always clicks the “sad” face even though they leave a positive comment. Instead, by setting your own quality standard and then building tools and systems to measure against it, you can chart a course of continual, proactive improvement for your customers.