How to Launch and Execute a Customer Service Quality Assurance Program
Introduction: Merely adequate customer service is no longer enough
Today, companies need to consistently provide stellar customer support in order to retain existing customers and gain new ones. In fact, the cost of poor support is high: U.S. companies lose more than $62 billion annually due to bad service.
That’s why it’s surprising that many companies – even large ones – do not have a Quality Assurance process in place. In a study of more than 300 organizations, HDI found that 57% of them record their support calls. Of those, 20% of the companies don’t actually do anything with the recordings.
But Quality Assurance is critical, not just for increasing customer satisfaction, but also for driving company profitability and increasing the ROI of the customer support team.
In this guide, we’ll give you step-by-step instructions to launch and manage an effective QA program.
First, we’ll align on a definition of QA. We’ll talk about why it’s important and how you can use it to drive profitability. Finally, we’ll provide specific steps you can take to launch a QA program for your customer support team.
In addition, this guide contains a number of specific resources designed to make it easy for you to build your QA program. These include:
- Example staffing levels required to implement QA
- Specific QA templates that you can use for evaluating your own customer support team’s tickets
- Reviews of the best QA software so you can decide which tool, if any, to bring to your team
Are you ready to take your company’s customer support to the next level? Do you want to provide an exceptional experience for every customer your team interacts with? Do you want to demonstrate how customer support can drive profitability for the company as a whole? Read on.
What is QA?
First, let’s make sure we’re aligned on the definition of Quality Assurance.
Some companies use Quality Assurance to describe all of their efforts to measure their team’s performance. These include:
- Efficiency and productivity metrics like First Response Time and Full Resolution Time
- Customer satisfaction scores
- Qualitative measures of agent performance
For the purposes of this guide, we’re focused only on the third piece of this puzzle. The first two are absolutely critical as well, and we’ve covered them in other pieces of content. To learn more about efficiency and productivity, please see our complete guide to customer service Key Performance Indicators. You can learn more about how to drive great customer satisfaction here.
But in this guide, we use Quality Assurance to refer to all the aspects of performance that can’t be quantified by your helpdesk software. Your helpdesk dashboard can’t tell you whether your agents are using the right brand voice, evaluate their grammar, check whether they’re following the correct internal processes, or assess how well they addressed the customer’s problem. That’s what a QA program is for.
We’ll go into more detail, but here’s how it typically works (a simple sketch of this workflow follows the list):
- You establish a QA scorecard, typically with a top score of 100, that covers the quality metrics that are most important to your company
- You identify how many tickets per agent you want to evaluate – anywhere from 5 tickets per month to 15 tickets per day
- Someone – a team lead, team member, or QA analyst – randomly selects tickets and scores them. You may want multiple analysts to score each ticket, to ensure accuracy.
- Results are used to coach individual agents and identify company-wide opportunities for improvement.
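To make that workflow concrete, here’s a minimal sketch of how the sampling and scoring might look if you modeled it in a few lines of code. The scorecard categories, point values, agent names, and ticket IDs are purely hypothetical placeholders, not a recommended setup:

```python
import random
from statistics import mean

# Hypothetical scorecard: categories and their maximum points, summing to 100.
SCORECARD = {"resolution": 40, "tone": 30, "process": 30}

# Hypothetical ticket IDs per agent; in practice these come from your helpdesk.
tickets_by_agent = {
    "angela": [f"ticket-{i}" for i in range(1, 201)],
    "billy": [f"ticket-{i}" for i in range(201, 401)],
}

def sample_tickets(tickets_by_agent, per_agent=5):
    """Randomly pick the tickets each agent will have audited this period."""
    return {agent: random.sample(pool, per_agent) for agent, pool in tickets_by_agent.items()}

def score_ticket(points_awarded):
    """Sum the points a reviewer awarded, capped at each category's maximum."""
    return sum(min(points_awarded.get(cat, 0), cap) for cat, cap in SCORECARD.items())

# Two reviewers score the same ticket; averaging their scores improves accuracy.
reviews = [
    {"resolution": 40, "tone": 25, "process": 30},  # reviewer 1
    {"resolution": 35, "tone": 30, "process": 30},  # reviewer 2
]

print(sample_tickets(tickets_by_agent))        # which tickets to audit this period
print(mean(score_ticket(r) for r in reviews))  # final score for one ticket: 95
```

The same logic lives just as happily in a spreadsheet or a dedicated QA tool; the point is simply that you need a fixed scorecard, a random sample per agent, and ideally more than one reviewer per ticket.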
Why should your team do QA?
QA increases support quality … but what does that mean? In short, you’ll have happier customers, a more efficient customer service operation, and a more profitable business.
Here are all the reasons your company should implement a QA program.
Improve Quality
QA helps you go far beyond CSAT to improve the quality of your customer support. Your customers don’t know whether the answer your team gave them is technically correct; they just know how it made them feel. So if an agent gives a technically unsound answer that makes the customer feel good, they’ll still get good marks on a metric like customer satisfaction (CSAT). QA cuts through the fluff.
In addition, keep in mind that only 1 in 26 customers who’ve had a bad experience bothers to complain. The rest will just stop using your product without saying anything. QA helps you catch the other 25 poor experiences you might never have had the chance to review. It also helps you see the positive experiences – and provide positive feedback to the agents who are performing well.
QA also provides you with data to drive in-depth ongoing trainings for your agents. Without any kind of review, how do those agents know what they are doing well or could be doing better? Further: how do they know where they stand, or how far they have to go to be successful?
It can also inform training for new agents. Don’t just show them hypothetical situations—give them actual problem tickets and see how they handle them. Situational examples will always be more impactful than theoretical ones.
Improve security
Every company faces the risk of a bad actor – an employee who intentionally commits fraud or steals sensitive data. While you obviously hope this won’t happen, you need to follow the maxim “Trust but verify.” QA can help enforce this. If agents know that a certain percentage of their tickets will be randomly reviewed, you’ll reduce the risk that they’ll commit fraud.
Increase profitability
There are a number of ways that a Quality Assurance program can increase company profitability and help customer support teams demonstrate their ROI to senior leadership.
Increase customer retention. First, increasing support quality has a direct impact on profitability. According to Bain, a 5% increase in customer retention can produce a 25% increase in profit.
Improve efficiency. Second, QA increases profitability by improving the efficiency of your customer service team. Are your agents tagging tickets correctly? This is critical for driving efficiency on your customer support team. It enables you to analyze your data, understand what issues drive significant ticket volume, and identify potential fixes. But this can easily be neglected by agents; QA can help ensure they all understand the importance of tagging correctly.
QA can also help you identify opportunities to add new macros. And it can help you identify areas where your agents don’t understand key processes or policies, enabling you to develop more effective training.
Add incentive pay. Third, QA enables you to develop effective incentive pay programs for your agents. One of our clients increased efficiency by 60%, which was possible because we developed an incentive pay program that had QA as a critical element.
Incentive pay programs can be tricky. If you incentivize employees based on efficiency alone, they will naturally sacrifice response quality. If you incentivize efficiency and CSAT, you risk creating a lot of Agent Billys – team members who’ll do anything for the customer, regardless of whether it’s actually best for the company.
If you include QA in your incentive pay system, you cover all your bases. Agents won’t be able to maximize their incentive pay unless they are truly performing well on all the areas that matter.
Avoid over-discounting. Fourth, QA ensures agents are following company processes related to refunds, discounts, and more. Let’s say two agents – Angela and Billy – work for an e-commerce company. Agent Angela follows company processes and denies refunds to customers who try to return products after 60 days. As a result, she gets some bad CSAT scores – but she’s actually doing a good job. Her QA evaluations will reveal that she is delivering accurate responses that follow company policies.
Now let’s say Agent Billy disobeys company processes. He gives customers discounts left and right, hoping to boost his CSAT score. And he’s successful. He has the highest CSAT score in the company. His QA scores will show that he actually has room for improvement.
Empower your employees
Companies with high employee engagement outperform those with low employee engagement by 202%. When you give your team the opportunity to take constructive and positive performance feedback into their own hands, it gives them ownership of the team’s success as a whole. They know their opportunities firsthand because they see them right in front of them every day.
That means when they get into a sticky situation with a frustrated customer or a complicated ticket, they know exactly what the parameters for success and failure are. They’re empowered to make their own informed, supported choices based on your established QA rubric.
Align your team
Has anyone ever criticized a ticket that you thought was perfectly good? When you design your QA rubric, it forces your whole company to align on a definition of quality. How important is brand voice? Is it less important or more important than correct grammar? Without aligning on questions like these, everyone in your company could be evaluating your team’s work based on different standards.
Now that you know how important and impactful QA could be for your team, let’s talk about what type would be best for your company.
Types of QA
Just as there are different methods of providing support, and different types of businesses, there are also many different types of QA. And, like the story of Goldilocks, it’s likely that only one or two of them will fit your team just right. Let’s dig deeper into the different methods of QA so you can discover which one will be best for your team.
Manager Review
Most commonly, the team lead(s) or manager(s) takes point on providing feedback. The thinking here is that the manager or team lead should already be a solid arbiter of the right way to do support. Furthermore, this gives the team lead insights into opportunities for improvement that could be implemented across the team.
There are two challenges with this approach. The first is capacity: your team lead may already be stressed and overworked. QA takes a long time, and one person can easily spend much of their day providing reviews if the team and ticket volume are large enough. As your team grows, it may be worth having someone whose role is specifically dedicated to running your QA process. Otherwise, QA will start to get put on the back burner and may stop providing such useful insights.
You can also mitigate this risk by evaluating only a small number of tickets. Even five tickets a month is better than none.
The second challenge is bias. If team leads are responsible for evaluating the team members they supervise, they may naturally have a tendency to give higher scores. After all, the team members’ performance affects their own. You can mitigate this challenge by having team leads review tickets for agents who are actually on another lead’s team, or by having multiple leads score each ticket.
Or you can try a different approach.
QA Analyst Review (recommended)
Our preferred option is to create a separate QA team, even if it’s just a team of one (or even one person dedicated to QA part-time). Over time, depending on how much QA you want to do, this could become a dedicated team in its own right. This type of QA structure is especially common in larger companies that can afford a separate team.
At Peak Support, we have a centralized QA team. Our Senior Team Lead in charge of QA, Roland Allan Papa, oversees almost all of our QA agents, even those who are dedicated to specific client teams. Roland’s team is evaluated not by how high the scores are, but on how closely they calibrate with our clients’ scores. As a result, the scores are unbiased and truly reflect the criteria that matter most to our clients.
The number of QA agents can vary. One of our clients has 6 QA agents for a 65-person team. Customer service agents on that client’s team handle hundreds of cases per week, so we aim for 15 audits per agent per day to ensure consistency. At smaller clients (<20 agents), QA might be a part-time role for a tenured agent.
Whether the QA team is staffed with multiple full-time agents or just one part-timer, there are multiple benefits to this structure. First, it keeps QA off the team lead’s plate. Second, as the people running the QA process become more specialized, the quality of their feedback tends to increase, making a bigger impact on the work being reviewed. Google, for example, has organized its engineering QA in this way.
However, if your company is on the smaller side or in a hyper-growth stage, it could be difficult to make the case for a whole separate team or dedicated team member. With a growing backlog of things to do, your company may be reluctant to devote a person to a highly specialized task. In that case, having a manager or team lead do the work is your best option.
Peer Review
If you don’t have a manager or team leader with the time to dedicate to QA, or the financial resources to hire a separate person or team, you could still launch a QA program by using peer-review.
Peer review works really well at companies with an extremely open culture. Because agents are receiving feedback from their peers, everyone needs to be comfortable both giving and receiving constructive criticism – and that’s no small feat!
An added benefit of doing peer review is that it makes everyone feel like they are aligned and working towards a common goal. With more hands working on QA, the work goes faster, and individuals commit less time to getting it done.
Something to remember, though, is that each reviewer brings their own perspective to the tickets they review. It’s important to pay attention to each reviewer’s average score: some people give consistently high scores and others consistently low ones. Anonymizing the names of the people being reviewed can help with some of this bias, but it’s still something to watch for and coach on. Teams can also go through calibration exercises to make sure they’re all on the same page about what “quality” means.
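If you want a quick way to spot reviewers who score consistently high or low, one simple check is to compare each reviewer’s average score against the team-wide average. The sketch below uses made-up reviewer names and scores purely for illustration:

```python
from statistics import mean

# Hypothetical peer-review scores out of 100, grouped by the person who did the review.
scores_by_reviewer = {
    "dana": [92, 95, 90, 97],
    "lee": [70, 75, 68, 72],
    "sam": [85, 83, 88, 86],
}

team_average = mean(score for scores in scores_by_reviewer.values() for score in scores)
for reviewer, scores in scores_by_reviewer.items():
    gap = mean(scores) - team_average
    print(f"{reviewer}: average {mean(scores):.1f} ({gap:+.1f} vs. team average)")
```

Large, persistent gaps are a good trigger for a calibration session rather than a reason to throw out anyone’s reviews.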
Another issue with peer review is that, because it’s nobody’s primary role, it may fall by the wayside during busy periods. It can also become the task people rush through so they can get back to other projects.
Self-review
Self-review is the least ideal option when it comes to QA. It can work on smaller teams where agents feel real responsibility for the quality of their work, but the fact of the matter is that people tend to review their own work favorably. It’s also difficult to evaluate an email objectively when you’re the one who wrote it.
That being said, any kind of review is better than no review, so if self-review is the only QA that will work for you, it’s better to implement it than to have nothing at all. It still gives agents an opportunity to look at your QA rubric, remind themselves of your company’s quality criteria, and honestly evaluate their own work.
How to do QA
Now that you know everything there is to know about QA, let’s get down to details about how to implement QA at your company.
Figure out the problem
What issue are you trying to solve with QA? Do you have low CSAT? Are your churn numbers high without any clear explanation? QA can help you uncover endemic issues in your support process that you might otherwise miss, but it’s important to go in with some idea of what you’re trying to improve.
That will help you develop your rubric, and it will also help you develop your staffing plan. If you’re trying to improve quality and CSAT, and you think there are significant opportunities for improvement on your team, you’ll need to audit more tickets and invest more time in coaching.
You may, however, just want to review a random sample of tickets to make sure no agents are disclosing sensitive personal information or committing fraud. These types of reviews are much quicker and require no coaching.
Define how much time you will spend on QA
How many tickets would you like to review every day, week, or month? And how much time would you like to spend on each review?
Auditing 5 tickets per agent per week is typical. However, the volume can be much lower or higher than that. We recommend a minimum of 5 tickets per agent per month. That’s very low, but it still gives you a chance to review your agents’ performance and provide coaching. On the high end, for one client, we audit 15 tickets per agent per day.
Audits can also vary in intensity. You can spend three minutes on each audit and provide minimal coaching beyond sharing the results with your team members. More commonly, you’d spend about five minutes reviewing each ticket and five minutes providing feedback, for a total of 10 minutes per audit.
But this can vary widely. If you’re using a tool like MaestroQA to automatically pull tickets to audit, you’ll save a lot of time. Some companies, however, have specific criteria for the tickets they want to audit, and the auditor may have to spend 10 minutes just looking for an auditable ticket. For example, you might want to make sure you audit tickets that don’t use any macros, and you might need to find 2-3 of these that the agent completed over the course of a week.
Furthermore, the QA analyst might have to access different tools to confirm that the information provided in the ticket was correct. A complex audit could take as long as 30 minutes.
The best way to understand the amount of time it will take is to try it out and see. But here’s an example of the staffing level required to audit 5 tickets per team member per week, assuming each audit requires 5 minutes of auditing and 5 minutes of coaching.
In addition to the time spent auditing, we’ve assumed each QA analyst spends 10% of their total time on other tasks, not including direct audits. This could include team meetings or calibrations. Overall, you can see that based on this model, about 2.5% of the total FTEs on your team will be devoted to QA.
Staffing requirement, assuming 5 tickets audited per team member per week
Number of team members | Audits per week | Hours per week on audits | Total hours per week | # of QA FTEs* |
---|---|---|---|---|
5 | 25 | 4.2 | 4.6 | .12 |
10 | 50 | 8.4 | 9.2 | .25 |
20 | 100 | 17 | 18.4 | .5 |
50 | 250 | 42 | 46 | 1.2 |
100 | 500 | 83 | 92 | 2.4 |
200 | 1000 | 167 | 186 | 5 |
400 | 2000 | 333 | 370 | 10 |
*Assumes 40 working hours available per week
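If you’d like to plug in your own numbers, the table above follows a simple formula: audits per week × minutes per audit, grossed up for the 10% of time spent on other tasks, and divided by a 40-hour week to get FTEs. Here’s a small sketch of that arithmetic; the default values are just the assumptions stated above, so adjust them to match your own team:

```python
def qa_staffing(team_size, audits_per_agent_per_week=5, minutes_per_audit=10,
                other_tasks_share=0.10, hours_per_week=40):
    """Estimate weekly QA workload and FTEs for a given team size."""
    audits = team_size * audits_per_agent_per_week
    audit_hours = audits * minutes_per_audit / 60
    total_hours = audit_hours / (1 - other_tasks_share)  # gross up for meetings, calibrations, etc.
    ftes = total_hours / hours_per_week
    return audits, round(audit_hours, 1), round(total_hours, 1), round(ftes, 2)

for team_size in (5, 10, 20, 50, 100, 200, 400):
    print(team_size, qa_staffing(team_size))
# e.g. a 20-person team: 100 audits, ~16.7 audit hours, ~18.5 total hours, ~0.46 FTE per week
```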
Identify the person or people responsible for QA
When you know your overall goal and the time required, you can think about staffing. Do your team leads have the bandwidth to do this? Or is this an opportunity to promote someone else on the team?
In addition, who has the best perspective to provide reviews? Does it need to be a manager? Or could a team member provide an adequate review?
Furthermore, will QA take time away from other meaningful and important projects? QA can be incredibly time-consuming. When you are determining who will be responsible for it on your team, first ensure that they have the resources they need to be successful. If they can’t dedicate a good deal of their attention to it, they may not be the best person for the job.
Develop your rubric
Next, develop the rubric that your analysts will use to evaluate tickets. Usually, these rubrics cover three categories:
- Resolution: Did the agent fully comprehend the question? Was the answer accurate and complete?
- Tone: Was the tone appropriate? Did the agent display empathy? Was the answer personalized? Was it aligned with your company values and brand?
- Processes: Were proper processes followed? This could include processes related to discounts or returns, or issues like how the ticket was tagged and whether it was marked as resolved at the appropriate time.
Each question will come with a number of points, usually adding up to 100. Sometimes you can award partial points; in other cases, the answer will be yes or no. If the person didn’t do it, they get a 0 for that question. You can also choose questions that will result in an automatic fail of the entire audit.
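As a purely hypothetical illustration of how point values, partial credit, and automatic fails fit together (the questions and weights below are placeholders, not a recommended rubric), you can think of a rubric as a simple list of weighted questions:

```python
# Hypothetical rubric: each question has a maximum point value, and some questions
# fail the entire audit automatically if the agent scores zero on them.
RUBRIC = [
    {"question": "Was the answer accurate and complete?", "max_points": 40, "auto_fail": True},
    {"question": "Was the tone empathetic and on-brand?", "max_points": 30, "auto_fail": False},
    {"question": "Were tagging and resolution processes followed?", "max_points": 30, "auto_fail": False},
]

def grade(points_awarded):
    """points_awarded maps question text to points; partial credit is allowed up to max_points."""
    total = 0
    for item in RUBRIC:
        points = min(points_awarded.get(item["question"], 0), item["max_points"])
        if item["auto_fail"] and points == 0:
            return 0  # missing an auto-fail question fails the whole audit
        total += points
    return total

print(grade({
    "Was the answer accurate and complete?": 40,
    "Was the tone empathetic and on-brand?": 20,
    "Were tagging and resolution processes followed?": 30,
}))  # 90 out of 100
```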
Here are two example rubrics for phone and email tickets.
Email
Question | Additional Comments | Score |
---|---|---|
Email Handling | ||
Did the representative open the email with a positive greeting? | ||
Did the representative educate the customer on products or features? | ||
Did the representative set expectations around ongoing communication with the customer? | ||
Did the representative thank the customer and invite them to reach back out at the end of the email? | ||
Resolution | ||
Did the rep provide accurate information? | ||
Did the rep correctly answer the customer’s inquiry? | ||
Tone & Style | ||
Was the rep friendly and cordial? | ||
Did the rep use the correct punctuation, spelling, and grammar? | ||
Did the rep show empathy and a desire to help? | ||
Process | ||
Did the rep use the appropriate tools to solve the problem? | ||
Did the rep provide useful and relevant documentation? | ||
Did the rep log accurate and useful notes for escalation or future support context? |
You can find Excel versions of these two rubrics here.
Both of these QA rubrics are fairly extensive. If you are planning on reviewing a large number of conversations in a day, the rubric should be shorter and more honed to just a few specific questions, such as:
- Was the representative empathetic and helpful?
- Did they provide a correct solution to the customer?
- Did they exhibit appropriate product and process knowledge?
Choose your tools
There are a variety of software tools that integrate with your ticketing system, pull tickets automatically, and let you score them without leaving your helpdesk software. However, these typically charge a monthly fee per agent, which can add up.
We’ve built an Excel tool that provides more in-depth analysis functionality than most software products. You can view a video demo of the tool here. You could build your own tool, and we do offer ours for sale. However, if you want something that integrates with Zendesk or other helpdesk software, there are a few options you can explore.
We’ve put together a full review of the QA tools available on the market for you here, but here’s a brief summary of what we’ve found:
- Klaus: easy to set up, great UX, good for small and large teams.
- MaestroQA: a bit more fully featured, with full omnichannel integration.
- Scorebuddy: has a specific focus on call centers, and may not be the best fit for other support functions.
- Playvox: an excellent resource for both companies and BPOs offering omnichannel QA.
- Miuros: a great tool for QA; it also has an AI tool and provides deep data on any customer support KPI you could dream of.
Document your processes
Even if you expect to be the only person doing QA for your company, it’s still helpful to document your process in an internal documentation center. Not only does this give your team clarity on what to expect, it also makes it easier to hand off QA responsibilities or grow your QA team in the future. Documentation is one of the best ways to scale and improve processes as your team (and its needs) grow.
Always be iterating
One of the best things that you can do for any process is to evaluate its efficacy. Continue to examine your QA process and assess whether it’s doing what you want it to do. Here are a few things to check up on regularly:
- Is it taking as much time as you thought it would? Or more or less?
- Are you accomplishing the goals that you set out to achieve?
- Are there any changes that have taken place in your company that you should address in your QA process?
Use these answers to inform how you move forward with your QA process. Never let something stay the same just because it’s easier to do so.
Conclusion
QA is one of the best ways to measure your customer experience, but it’s best to use it along with other measurements to get a holistic view of how your customers are feeling. Instead of diving right into QA, take your time to assess what you’re trying to accomplish and all of the pieces that you need to get there. Building carefully and considerately will get your team farther than rushing in because you feel like you need to do something.