Even when practitioners are already using sessional outcome measures, it is possible to deliver effective therapy more efficiently, and at reduced cost. What’s more, both therapists and clients like the process. As this rather simple and elegant study [i] showed, the answer is simply to… pay attention.
Same service, same therapists, same clients, same measures, same frequency of measures, same outcome feedback (OF) platform, same progress tracking charts.
You’d think it would be a challenge for any service, without changing any of the factors above, to significantly improve the efficiency with which it delivers outcomes, and significantly reduce its cost. You’d think it might take years to achieve such results.
For one service, all it needed was one day. How? Here’s the potted version.
Take a stepped care psychological therapy service (adjacent for details) already using sessional measures routinely, supported by computerised outcome feedback software which tracks weekly severity scores on a chart.
Provide a 6-hour training for self-selected therapists, focusing on the OF evidence base, theory, and technology. To the tracking chart, add an expected treatment response (ETR) curve, and a signal alert where clients are falling significantly outside of the progress that might be expected.
Prime therapists to review progress against the ETR curve, to discuss progress charts with clients at every session, and discuss ‘not on track’ (NOT) cases in supervision.
Measure outcomes for the periods six months before and six months after the training. Compare the difference.
That’s more or less it.
Setting: IAPT stepped care service in Leeds
Therapists (n=18): high intensity CBT (n=14), high intensity IPT (n=2) and low intensity CBT (n=2)
Clients (n=594)
Measures: sessional GAD-7 and PHQ-9
Computerised outcome feedback platform: PC-MIS (Patient Case Management Information System)
Study design: Quantitative analysis of outcome and cost, and qualitative study of acceptability and feasibility of active routine outcome monitoring. Before and after training intervention study, with concurrent cost evaluation and assessment of acceptability
What did the study find?
Records were collected for clients seen by the team in the six months before (control group = 349) the application of OF technology, and six months after (OF cases = 245). ETR curves were calculated (see below for an example) for subgroups of clients with the same baseline of severity, using a large dataset of clients seen in IAPT.
Therapists were alerted to clients whose scores fell significantly outside of the ETR (i.e. NOT clients), making this group of clients relatively more visible.
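The alert logic described above can be sketched in a few lines. This is a hypothetical illustration, not the study’s actual algorithm: the function name, the fixed tolerance band, and the example numbers are all mine, and the study derived its boundary statistically from a reference dataset rather than from a constant.

```python
def is_not_on_track(scores, etr, band=5):
    """Return True if the latest observed score sits above the expected
    treatment response (ETR) curve by more than `band` points.

    scores: observed severity scores per session (higher = more severe)
    etr:    expected score for each session, derived from a large reference
            dataset of clients with the same baseline severity
    band:   illustrative tolerance; the study's boundary was derived
            statistically, not from a fixed constant
    """
    session = len(scores) - 1            # latest completed session (0-indexed)
    return scores[session] > etr[session] + band

# Example: a client whose scores plateau while the ETR curve keeps falling
etr = [20, 17, 14, 12, 10, 8]            # illustrative expected trajectory
scores = [20, 19, 19, 18]                # observed: little change by session 4
print(is_not_on_track(scores, etr))      # → True (flagged for review)
```

In the study, a flag like this is what made NOT clients “relatively more visible” to their therapists.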

Screenshot from OpenFIT reproduced with grateful thanks to Scott Miller. Read Scott’s excellent blog here
From quantitative analysis of the outcome data, the following findings are noteworthy:

There was no statistically significant difference in the degree of symptom change (on both PHQ-9 and GAD-7) between the control group and OF clients.

There was no statistically significant difference in the rates of reliable and clinically significant improvement (RCSI) between controls and OF clients

No significant differences were found in dropout rates between the controls and OF clients

Clients in the control group were significantly more likely to be classed as NOT in comparison to OF clients.

The mean number of treatment contacts for the control group was significantly higher than the OF group (10.25 and 6.59 respectively)
In summary, then, no difference in degree of symptom change, RCSI or dropout between the two groups, but a significantly higher rate of NOT clients and session utilisation in the control group.
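For readers unfamiliar with RCSI, the classification can be sketched briefly. The thresholds below are the conventions commonly cited for these measures in IAPT services, not figures taken from this study, and the function name is hypothetical:

```python
# Sketch of a reliable and clinically significant improvement (RCSI) check.
# Thresholds are commonly cited IAPT conventions and are assumptions here,
# not taken from the paper itself.

CUTOFF = {"PHQ-9": 10, "GAD-7": 8}           # 'caseness' threshold
RELIABLE_CHANGE = {"PHQ-9": 6, "GAD-7": 4}   # change unlikely to be noise

def rcsi(measure, first, last):
    """True if improvement is both reliable and clinically significant:
    the drop exceeds likely measurement error AND the client moves from
    above to below the clinical cutoff."""
    reliable = (first - last) >= RELIABLE_CHANGE[measure]
    crossed = first >= CUTOFF[measure] and last < CUTOFF[measure]
    return reliable and crossed

print(rcsi("PHQ-9", first=18, last=7))    # → True: reliable drop, below cutoff
print(rcsi("PHQ-9", first=18, last=14))   # → False: improved, still a 'case'
```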
Clients in the OF group used just over one-third fewer sessions than those in the control group.
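The “one-third” figure follows directly from the reported session means; a quick check (figures from the text, variable names mine):

```python
# Mean treatment contacts per client, as reported in the study
control_sessions, of_sessions = 10.25, 6.59

reduction = (control_sessions - of_sessions) / control_sessions
print(f"{reduction:.1%}")  # → 35.7%, i.e. just over one-third fewer sessions
```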
Cost reductions in the OF client group
The researchers calculated the respective costs of treatment for the control and OF groups, taking into consideration the average hourly rate for each professional group, including overheads.
The average cost of treatment for the control group was £246.43, against £148.90 for the OF group.
The standardised mean difference (i.e. cost saving) was £97.54. Using a slightly more conservative savings figure of £65.88 per client, the researchers estimated the cost saving across the 245 clients in the OF group to be £16,140.60.
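The headline saving can be reproduced from the figures quoted above; a minimal sketch using the researchers’ more conservative per-client figure (variable names mine):

```python
# Reproducing the reported cost-saving estimate from the figures in the text
clients = 245               # clients in the OF group
saving_per_client = 65.88   # the researchers' conservative figure, GBP

total_saving = clients * saving_per_client
print(f"£{total_saving:,.2f}")  # → £16,140.60
```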
Therapist and client experiences of the OF framework
Few topics in the therapy field arouse more passion than the use of outcome measures with clients, whether pre- and post-therapy or sessionally. Advocates argue that it has the potential to reduce dropout, improve outcomes and deliver therapy more efficiently.
Those against argue that measures capture only a very partial picture of wider change, that the measures themselves may be flawed, or that they serve the interests of administration and accountability more than those of practice. This study sought the experiences of both therapists and clients involved in the OF framework.
Overwhelmingly, their experiences were positive. Therapists reported that the ETR charts alerted them to unnoticed difficulties, enabled them to review treatment plans in collaboration with clients, and supported, and sometimes challenged, their clinical judgement.
In the words of two therapists:
“It’s been a useful tool in getting me to think… I can sometimes blindly continue thinking ‘this is going to work, we’ll get some effect’, and it’s allowed me to actually say no – we need to take stock”
“For me the biggest thing is about the increase in the collaboration with yourself and the patient, also creating a transparency of what these measures are for, I found that helpful, and it boosts the relationship”
Clients’ experiences were also generally positive. In the words of one:
“It’s definitely a useful tool because sometimes you don’t realise you’ve made progress, but if you’ve got something on screen showing you what your scores were and what they are, it quantifies your progress”
The power of paying attention, and collaboration
What I find remarkable about this study’s findings is that the apparent efficiencies achieved were made in a setting where sessional use of measures was already the norm. It would appear that what made the difference was adding ETR curves to the existing tracking graphs, asking therapists to review progress collaboratively with clients, and considering adjustment of the focus of their work where clients were NOT.
The therapists in the study were self-selecting. This ‘coalition of the willing’ may have been more predisposed to using the OF methodology, which may have boosted its impact. There was no follow-up after therapy finished, so we do not know to what extent clients in the control and OF groups maintained their respective gains.
For all these caveats, this study suggests what may be possible when therapists and clients have access to ETR curves, and where existing treatment approaches can be reviewed and revised as necessary, as part of a collaborative effort.
References
[i] Delgadillo, J., Overend, K., Lucock, M., Groom, M., Kirby, N., McMillan, D., Gilbody, S., Lutz, W., Rubel, J.A., & de Jong, K. (in press). Improving the efficiency of psychological treatment using outcome feedback technology. Behaviour Research and Therapy. DOI: 10.1016/j.brat.2017.09.011
Interesting article
I do think the client’s comment that it provides tangible evidence of progress being made is picking up on an important point. Anything we can do to show clients they are progressing tends to be very motivating for them (and dare I say, us too!). As a therapist, I see all the data monitoring we do as just another way of the clients “talking” to us. All the evidence I have read over the years suggests that regularly checking in with clients to ask questions such as “how do they think things are going?”, “what’s working well and what isn’t?”, “is the therapy going in the right direction for them to achieve their goals?” etc. really helps to shape the therapy positively. So monitoring provides us with more formalised ways to open that conversation and look at things in detail. In a way it’s about being more consumer focused, or let’s say more in touch with clients’ experience if that language doesn’t “fit”.
Regards Graeme Butler
Graeme – ‘just another way of talking to us.’ I love that. We can think of it as measurement if we want, but we can also think of it as feedback, and that’s my preference. Thanks!