It doesn’t take much digging into the national IAPT 2020–21 data to discover big differences in service provision between areas. In the last blog we highlighted (against a national recovery rate of 51%) the performance of the Brighton and Hove (34%) and Stoke on Trent (64%) NHS Clinical Commissioning Groups (CCGs).
In this blog we dig a little deeper and reveal other wide differences in service performance. In each case we benchmark against the national IAPT data and ask: “what’s going on, and what’s to be done?”
Table of contents
- The numbers and what they represent
- Of those referred, how many enter treatment?
- Of those that enter treatment, how many end?
- What proportions that ended treatment started above ‘case level’, i.e. could recover?
- Of those that ended, having started above case level, what proportions reached recovery?
- How do we do ‘better’?
The numbers and what they represent
Let’s start with the raw numbers at key stages of clients’ journeys through services, and the proportion of the total number of ended referrals they represent. Ended referrals include those never seen by the service, those seen but not treated, those that entered treatment but had only one session, and those that ended having had two or more sessions of treatment.
[Table: referrals ended in 2020–21 and, of those, numbers entering treatment, ending treatment, ending at case level, and recovering, for England, Brighton and Hove, and Stoke on Trent, with proportions of all ended referrals]
The table above shows the referrals that ended in 2020–21 and, of those, the numbers that entered treatment, ended treatment, ended at case level, and recovered. Next to the numbers are the proportions they represent of the total number of referrals that ended for any reason. (Note that the ‘ended at case level’ figure for England is estimated.)
From this we can see that clients that recovered, as a proportion of all ended referrals, were 21% (England), 11% (Brighton and Hove) and 20% (Stoke on Trent). Depending on where you start the count, you get different rates: the 51%, 34% and 64% mentioned above are clients recovered as a proportion of clients that ended treatment at case level.
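To make the denominator point concrete, here’s a minimal Python sketch of the funnel using the England proportions quoted in this post. The starting cohort of 100,000 ended referrals is purely illustrative, not the actual national count:

```python
# Illustrative referral funnel built from the England proportions quoted
# in this post; the 100,000 starting cohort is hypothetical.
ended_referrals   = 100_000
entered_treatment = ended_referrals   * 0.71  # 71% of ended referrals entered treatment
ended_treatment   = entered_treatment * 0.62  # 62% of those ended treatment
ended_at_case     = ended_treatment   * 0.94  # 94% of those started above case level
recovered         = ended_at_case     * 0.51  # 51% of those reached recovery

# Same recovered clients, two very different headline rates:
print(f"Of those ending treatment at case level: {recovered / ended_at_case:.0%}")    # 51%
print(f"Of all ended referrals:                  {recovered / ended_referrals:.0%}")  # 21%
```

The numerator is the same group of recovered clients in both cases; only the denominator changes.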
Let’s look at some different stages of the client’s journey and see what differences are revealed.
Of those referred, how many enter treatment?
As noted above, not all referrals enter treatment. Looking at those that do, it’s clear that the proportions of these clients from the Brighton and Hove and Stoke on Trent CCGs differ markedly (see below). Against the England benchmark of 71%, Brighton’s rate, at 55%, is little more than half of referrals, while Stoke’s, at 89%, approaches nine in ten. What might explain this difference?
[Chart: proportion of ended referrals that entered treatment — England 71%, Brighton and Hove 55%, Stoke on Trent 89%]
There are many potential explanations: the appropriateness of the referral for therapy, the availability of alternative services to which clients may be referred, and how discriminating services are in identifying which clients they can really help. A service that accepts every client for therapy is likely to see a higher level of dropout than one that carefully assesses clients’ suitability for the range of services offered.
It’s a mistake to jump to conclusions on the basis of data alone. Service data can only be properly understood in context. What is most important is that we are asking these ‘why’ questions. The value of benchmark data is that they invite us to try and understand what explains differences that we observe. From there we may be able to identify ways in which services can be delivered differently that better meet the needs of our clients.
Of those that enter treatment, how many end?
All those coded as having ended treatment have attended two or more appointments. This includes both planned endings and those that finished prematurely.
I proposed above that services that accept every client for therapy will inevitably suffer higher levels of dropout. Bearing in mind the higher proportion of clients entering therapy in Stoke, now look at the chart below. You’ll see that the proportion of clients recorded as ending therapy in Stoke (of those entering treatment) is the lowest of the three at 40%.
Only four in ten Stoke clients that start therapy end it, in contrast with England overall (62%) and Brighton (58%). Is it a coincidence that the area with the highest proportion of clients entering treatment also has the lowest proportion ending it? I don’t want to jump to conclusions, but at first glance that looks like client attrition, pure and simple.
[Chart: proportion of those entering treatment that ended treatment — England 62%, Brighton and Hove 58%, Stoke on Trent 40%]
I’ll make the point again about the value of benchmark data. Without comparative data, we have no way of knowing what’s normal, nor what’s possible. With such data, we can begin to ask the ‘why’ question.
What proportions that ended treatment started above ‘case level’, i.e. could recover?
In IAPT, clients achieve recovery if they start above the clinical cut-off on one or both of the primary measures (GAD-7 and PHQ-9) and finish below the cut-off on both. In practice, this might require only a one-point movement on one of those measures.
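As a minimal sketch of that rule, assuming the caseness cut-offs published in IAPT guidance (10 on the PHQ-9, 8 on the GAD-7; verify against the current manual):

```python
# Caseness cut-offs assumed from published IAPT guidance; verify before use.
PHQ9_CUTOFF = 10  # PHQ-9 scores of 10+ count as a depression 'case'
GAD7_CUTOFF = 8   # GAD-7 scores of 8+ count as an anxiety 'case'

def at_case_level(phq9: int, gad7: int) -> bool:
    """A client is at case level if at or above the cut-off on either measure."""
    return phq9 >= PHQ9_CUTOFF or gad7 >= GAD7_CUTOFF

def recovered(start_phq9, start_gad7, end_phq9, end_gad7) -> bool:
    """Recovery: at case level at the start, below both cut-offs at the end."""
    return at_case_level(start_phq9, start_gad7) and not at_case_level(end_phq9, end_gad7)

# The one-point movement mentioned above: PHQ-9 drops from 10 to 9 while
# GAD-7 stays below its cut-off, and the client counts as recovered.
print(recovered(start_phq9=10, start_gad7=7, end_phq9=9, end_gad7=7))  # True
```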
[Chart: proportion of those ending treatment that started above case level — England 94%, Brighton and Hove 97%, Stoke on Trent 90%]
Against the average of 94% for England, the proportions of clients that ended treatment above case level were 97% and 90% for Brighton and Stoke respectively. Put another way, as a cohort, clients in the Brighton and Hove NHS CCG area stood a significantly greater chance of reaching recovery than did those of Stoke on Trent NHS CCG. But was this greater chance realised in clients’ outcomes?
Of those that ended, having started above case level, what proportions reached recovery?
This feels like the real kicker to me. Despite 97% of Brighton and Hove’s clients having the potential to show recovery, only 34% did. By contrast, while ‘only’ 90% of Stoke’s clients could reach recovery, 64% achieved that milestone. How do we make sense of this difference?
[Chart: recovery rate among those ending treatment above case level — England 51%, Brighton and Hove 34%, Stoke on Trent 64%]
Bear in mind these are clients that had two or more sessions of treatment, not clients that were assessed and referred elsewhere. I have four possible scenarios in mind.
- Brighton’s therapy ‘offer’ is insufficiently broad to maintain client engagement. In other words, clients are bailing out because the treatment doesn’t ‘fit’. Here I’d be looking to services such as Stoke to test this hypothesis a little further.
- Clients are getting an insufficient ‘dose’ of therapy. Stoke’s average number of sessions per recovered client is higher than Brighton’s (8.9 sessions against 7.9). It’s a small difference, but it might be significant.
- Stoke is delivering more effective therapy. For whatever reason, clients are remaining engaged in the process for long enough to show a greater level of benefit.
- No-one’s paying sufficient attention to the feedback from sessional measures to realise that clients aren’t improving as expected, or that they are being discharged too early to show it. This is so simple to fix: use the measures as part of assessing the right ‘dose’ for each client (a sketch of this appears below).
Maybe one of these scenarios is accurate, or maybe it’s a combination of them. Or maybe it’s something else entirely. Maybe you have some alternative ideas, in which case please leave a comment below.
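On that last scenario, here’s a minimal illustration of what routinely checking sessional measures could look like. The helper function is hypothetical, and the six-point threshold is the commonly cited reliable-change figure for the PHQ-9, assumed here rather than taken from this post:

```python
# Hypothetical check: flag clients whose session-by-session PHQ-9 scores
# show no reliable improvement, so the 'dose' question gets asked before
# discharge. A drop of 6+ points is the commonly cited reliable-change
# figure for the PHQ-9; verify against current guidance before relying on it.
PHQ9_RELIABLE_CHANGE = 6

def needs_review(phq9_by_session: list[int]) -> bool:
    """True if the latest score shows no reliable improvement on the first."""
    if len(phq9_by_session) < 2:
        return False  # too early to judge
    return phq9_by_session[0] - phq9_by_session[-1] < PHQ9_RELIABLE_CHANGE

print(needs_review([16, 15, 14, 14]))  # True  -> improving, but not reliably
print(needs_review([16, 12, 9, 8]))    # False -> reliable improvement (8 points)
```

Nothing about this is sophisticated; the point is simply that data services already collect can answer the ‘discharged too early’ question, if someone looks.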
How do we do ‘better’?
Who is responsible for service quality and effecting change? Anyone that has a role in the commissioning, management, supervision and delivery of therapy services would be my answer. It’s long been my perception, however, that so much of therapy, both at an individual and a service level, takes place in a vacuum. We often don’t know what ‘good’ looks like, or what is possible.
I’ve been looking at these data at the level of the Clinical Commissioning Group. I chose CCG simply because I was curious about how performance in my local area compared with the national data. It didn’t compare favourably. Then I got curious about what good might look like, and followed my nose to Stoke on Trent.
How do we change things for the better? It all starts with being curious, and asking the right questions. I’m hoping that my local CCG might undertake a similar journey to the one that I’ve been on.
Comments

A few years ago we were instructed to change from recording a first contact as ‘assessment’ to recording it as ‘assessment/treatment’. We would also frequently make a follow-up telephone call to people who were seen once, so that the two-session treatment criterion box could be ticked. There were a few other “tricks” to improve KPIs.
Is there consistency between services re data recording?
Thanks Michael, and an interesting question over consistency. I decided to have a refresher on the IAPT manual and found this:
“Services should develop written criteria for deciding whether an initial session can be coded by their staff as ‘assessment’ or as ‘assessment and treatment’. Generally, sessions that exclusively focus on assessment or very brief sessions that simply identify that IAPT is not appropriate for an individual should be coded as ‘assessment’. However, if any of a range of interventions recognised are a significant focus of the session, it would be appropriate to use the ‘assessment and treatment’ code.”
So I suspect the short answer is probably ‘no’, there is not complete consistency, which makes comparison even more difficult. This is a downside to top-down targets: there will be a temptation to interpret guidelines to show services in the best light. That’s being generous.
Cheers! Barry