Let's face it: manual QA can be a real drag. It's tedious, time-consuming, and labor-intensive, and that's before we even get started on the potential for human error.
Here at MiaRec, we see firsthand the enormous time savings and service quality improvements our Auto QA solution can achieve for larger contact centers. We also know, however, that for smaller contact centers with fewer than 25 agents, investing in a full-blown Auto QA solution might be out of reach.
But what if there was a middle ground? What if there was a way to harness the power of AI to streamline your QA process and save up to 75% of your evaluation time? In this article, we will explore how you can create a custom GPT to do just that. It's like having a super-efficient assistant who can handle the bulk of your call scoring, freeing you up to focus on the bigger picture.
Before we dive into the "how-to," let's briefly take a moment to understand why automating your agent evaluation processes with Artificial Intelligence (AI) makes sense:
Now that we've established the "why," let's get down to the "how." Here's a step-by-step guide to implementing GPT for semi-automated QA:
Decide which scorecard you wish to use for your custom GPT. We recommend using only one scorecard per GPT: if you work with multiple scorecards, start with one and create separate GPTs for the others.
Once you've chosen a scorecard, review it. Is it concise and well-structured? Are there any ambiguities? Ask yourself: if I were entirely new to this and had no prior training, could I evaluate agents with this scorecard? Remember, GPT will be using it as its rule book for evaluating calls.
For a detailed how-to guide, including examples and more tips, check out our guide "Translate Your Manual Evaluation Questions to Auto Scorecards."
Now, it is time to create your custom GPT. But don't worry; you don't need to be a coding whiz for this part. To build the GPT, open ChatGPT (GPT-4 or higher), click on your profile, and create a GPT. For detailed instructions, check out last week's article on using GPTs for agent scheduling or consult OpenAI's instructions. Either way, it is quick and easy.
Screenshot: Shows where you can click your profile icon and "My GPTs". These are the first steps of creating your own GPT.
Once you are in the configuration section of your GPT, give it a name, enter a description, and copy and paste the following prompt:
"You are a helpful assistant designed to evaluate agent performance in a contact center. Based on the call transcript, answer the questions with 'Yes' or 'No,' accompanied by an explanation (evidence). If the call transcript does not contain the answer to the question, answer with 'N/A.'
Respond in the following format for each question: the question, your answer ('Yes,' 'No,' or 'N/A'), the evidence from the transcript that supports your answer, and the points awarded.
Add up all the points awarded, divide by the number of total points possible, and provide a final score in the form of percentages at the end (e.g., 80%)."
A prompt is simply an instruction you give an AI to help it understand what you are looking for. I always like to think of this prompt as a magic formula that will turn your GPT into a QA scoring machine.
Alright, now it is time to copy and paste your scorecard questions underneath the prompt. If your questions carry point values, include them so GPT can calculate the final score correctly (for example, "Did the agent verify the caller's identity? (10 points)"). Once you have done that, you can save your GPT and you are off to the races.
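If you are curious about the arithmetic the prompt asks for, here is a minimal sketch in Python, purely for illustration. The questions and point values are made up, and the treatment of "N/A" answers is an assumption (the prompt above does not spell it out); your GPT performs this calculation itself from the scorecard you pasted.

```python
# Illustrative only: the scoring arithmetic the prompt asks GPT to perform.
# The questions and point values below are hypothetical examples.

# (question, answer, points possible, points awarded)
results = [
    ("Did the agent greet the caller and state their name?", "Yes", 10, 10),
    ("Did the agent verify the caller's identity?",           "Yes", 10, 10),
    ("Did the agent offer a clear next step?",                 "No",  10, 0),
    ("Did the agent follow the escalation procedure?",         "N/A", 10, 0),
]

# Assumption: "N/A" questions are excluded from the points possible. The prompt
# does not define this, so state your preferred rule explicitly if it matters.
scored = [(possible, awarded) for _, answer, possible, awarded in results if answer != "N/A"]

points_possible = sum(possible for possible, _ in scored)
points_awarded = sum(awarded for _, awarded in scored)
final_score = points_awarded / points_possible * 100

print(f"Final score: {final_score:.0f}%")  # Final score: 67%
```

If every question is a simple pass/fail check worth one point, the math is even simpler: the number of "Yes" answers divided by the number of questions scored.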
Now, it is time for your first test run. Copy the transcript of a call you want to evaluate, paste it into your GPT interface, hit submit, and watch the magic unfold.
Within seconds, you should have a response from ChatGPT that includes scores per question, a total score, and an explanation for why the score was given.
Video: Shows the process of creating the Semi-Auto QA GPT and testing it on a transcript.
I always view GPT as an eager intern who wants to help and to please. That eagerness is useful, but ChatGPT (like any AI) can get overzealous sometimes and hallucinate. Hallucinations happen because the model predicts what it thinks you want to hear rather than what the transcript actually says.
While it is often accurate, it does require careful testing and tweaking to refine its answers, as well as regular sanity checks to make sure the information is correct. Take a look at the GPT's response and compare it to your own manual scoring. If you notice any discrepancies, tweak the prompt or your scorecard criteria.
Before you embark on your semi-automated QA journey, there are a few prerequisites you will need to have in place:
As you can see from the instructions above, you can create a custom GPT to semi-automate your agent evaluation process in probably less time than it takes you to score one call. Why not try it out and see how AI can help you free up hours every week that you could spend coaching and training your team?
Once you have outgrown this semi-automated solution and want to score 100% of your relevant calls automatically, you will be well prepared to adopt an Auto QA solution, equipped with fine-tuned scorecards and hands-on AI experience.