Blog

What sits behind your contact centre performance metrics?

Performance is measured in metrics, but driven by underlying competence, something leading organisations are now starting to measure more directly.

Most contact centres track the same set of metrics. We all know them. AHT, ASA… the usual efficiency measures. Useful, of course, but they don’t tell you whether the customer actually got what they needed. Then you’ve got the outcome metrics. FCR, CSAT, NPS, CES. Closer to the truth, but still only part of the picture. Quality and compliance sit alongside that. QA scores, adherence, error rates. Especially important if you’re operating in a regulated environment where mistakes carry real risk. And then there’s the layer people don’t always spend enough time on, which is demand: repeat contact, failure demand, complaints and root cause.

There’s no shortage of data, but here’s where it gets uncomfortable: these metrics don’t naturally line up. You can bring AHT down and quietly drive up repeat contact; you can consistently meet your service level target and still leave customers frustrated; and you can score well in QA and still miss what actually mattered in the conversation – increasing demand and cost without it being immediately visible.

Where things are starting to shift

There’s an important reset happening in successful operations: less focus on isolated metrics, and more on what those metrics actually lead to. In part, this reflects a shift in demand – customers are increasingly looking to speak to an agent about more complex or sensitive issues, where the outcome matters more than speed alone.

So you start to see:

  • More weight on FCR, repeat contact and complaints.
  • More interest in how competent people actually are in role.
  • More effort to connect the dots (QA → FCR → complaints → cost).

Because speed on its own isn’t performance, and neither is hitting a target if the outcome doesn’t hold.

What’s actually driving the numbers

This is often where the focus drops away. Metrics give you a view of performance, but not much insight into what’s actually driving it. If you look underneath most contact centre metrics, the same few things show up again and again.

Competence in role

This is usually where the difference shows up. It’s not just about training completion or access to knowledge, but how capable someone is in role day to day. Can someone recognise what’s needed in the moment? Can they deal with complexity and ambiguity, rather than simply following a prescribed process? Do they communicate with confidence, or does it sound like they are searching for the answer? Put two agents in the same environment with the same tools and you’ll still get very different outcomes.

That gap is rarely about effort. It’s about competence. And it shows up everywhere: FCR, QA scores, complaints, customer perception.

Management and reinforcement

Performance doesn’t stay still. It drifts. Usually slowly enough that no one notices at first.

  • QA scores soften.
  • AHT creeps up.
  • Inconsistency between people gets wider.

Then suddenly it’s a “performance issue”. The reality is that most organisations don’t reinforce performance particularly often. Monthly QA sampling isn’t enough to hold a standard in place, and a lot of managers are reporting on performance rather than actually shaping it – often because they don’t have the time or visibility to do so.

Knowledge and how it’s used

Most contact centres don’t have a knowledge access problem anymore. They have a knowledge use problem. Yes, agents can find the answer. But do they understand it? Would they make the same decision without the prompt? Real-time tools can help in the moment. But they can also mask underlying gaps. The right answer is given, but not always for the right reasons. This is often easy to overlook.

Most improvement efforts still start at the metric

“Reduce AHT”, “Improve service level”, “Lift QA scores” – it all makes sense on paper, but in practice you’re adjusting the symptom. The real levers sit underneath:

  • How competent people are in role.
  • How consistently performance is reinforced.
  • Whether knowledge is actually understood, not just followed.

A better question to ask

Instead of asking “How do we improve this metric?”, it’s probably more useful to ask “What’s producing this result?” Where is the variability coming from? Why are two similar interactions ending differently? Metrics will point you in the right direction, but they won’t fix anything on their own.

Further resources

Improving performance isn’t about managing the metric. It’s about addressing the conditions that produce it. That’s where Clever Nelly fits, giving organisations a clear, continuous view of competence in role, reinforcing performance over time, and ensuring knowledge is understood and applied in practice.

Discover how leading organisations are using Clever Nelly to build and measure competence – and the agent confidence that comes with it – to sustain performance over time, improving FCR, reducing complaints, and strengthening consistency across the frontline.

Access interactive e-book here

Improve your business with Elephants Don’t Forget today.