What does high quality therapy look like and how should we measure it? We suggest adopting two key performance indicators and making every other one fight for its place at the table. In the process you will do yourself, your service, your commissioners and your clients a huge favour.
When it comes to collecting data on therapy quality, more really is less.
Cast even a cursory glance over the latest quarterly Improving Access to Psychological Therapies (IAPT) programme activity data from NHS Digital [i] and you begin to get a sense, not only of the staggering volume of data, but also the collective effort its collection, organisation and publication requires.
The spreadsheet versions profile the performance of IAPT providers across key performance indicators (KPIs), as well as significant demographic and other programme variables. If you want to know, for example, how many people of the Bahá’í faith were referred to services nationally over the period, and what proportion showed reliable improvement, you can drill down to that level of detail. The answers, by the way, are 55, and 11.1%.
In total, the spreadsheet contains an eye-popping 3,123,117 potential data points (cells, in other words).
From data analysis to data paralysis?
The IAPT Annual Report on the use of services for 2015–16 [ii] summary datasheet lists 39 separate indicators, including the charmingly titled Number of referrals received, entering treatment, and finishing a course of treatment in the year by Indices of Deprivation decile, 2015-16, counts, Clinical Commissioning Groups (CCGs).
I don’t know about your experience, but a service that I’m currently working with has been asked by commissioners to report regularly on 37 separate indicators. Of these, just seven refer to how therapy ends or whether pre- and post-therapy outcome measures exist. Bewilderingly, none ask what proportion of clients achieve improvements in their psychological health.
Who is this all for? Does this level of data collection support commissioners, managers, supervisors and practitioners in using data to reflect meaningfully on the experience of their service users and provide better services, or does it actually obstruct? When all data is important then perhaps none is important, and data collection becomes like a cruel exercise in ‘feeding the beast’.
If I could see a link between this volume of data collection and improving outcomes for clients I wouldn’t be writing this blog. The fact is I neither believe it nor see any evidence of it – as I’ve outlined in another post, the overall proportion of IAPT clients reaching recovery, as a proportion of those entering treatment, remained static across 2013/14, 2014/15 and 2015/16. Are we in danger of losing sight of what’s important and ending up with data paralysis?
Getting back to the basics
I want to propose that we go back to basics and ask ourselves what effective therapy looks like. For me, that’s quite simple. It’s about helping the client to remain engaged in the process of therapy long enough that we can help them achieve a demonstrable improvement in their psychological health – be that through increased wellbeing, reduction in problems or symptoms, improved functioning, reduced risk, or other specific outcomes. Admittedly, some clients come seeking personal growth or development in which case this may be less relevant, but most come because they are suffering.
For me, the key characteristic of high quality provision is the ability to enable clients to maintain engagement in therapy to the point where we and the client can point to a tangible and lasting improvement.
If this holds true, then what and how we measure needs to start with data that captures two critical areas – are we able to hold clients in the process despite the inevitable challenges, and can we demonstrate that our intervention has made a difference? Rather than measure simple activity, we need to measure outcomes. Put another way, at the point at which therapy ends, do we still have an engaged client, and can we show that we’ve made a difference?
Measuring endings, and measuring outcomes
The two charts below illustrate the difference between simply measuring outcomes, and the process that I use when looking at service data, which is looking at outcomes alongside client endings.
The scatterplot shows the pre and post-therapy scores for a group of service clients. Each dot represents a client with a first and at least one subsequent measure (in this case the CORE Outcome Measure). Clients’ pre-therapy scores are plotted on the horizontal axis and their last score on the vertical axis.
CORE Net screenshot showing pre and post-therapy CORE scores for a group of service clients
The client circled started therapy showing a high level of distress on the CORE-OM, with a score of 30 out of a possible 40. By the end of their contact with the service their score had reduced to 7, a significant and demonstrable improvement. All clients who appear in the green area have shown a reliable improvement, while all those in the red have reliably deteriorated. Those whose pre-to-post scores fall in the white ‘tram lines’ show no reliable change.
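To make the green/red/tram-line logic concrete, here is a minimal sketch of classifying a client by reliable change on their pre and post scores. The reliable change criterion depends on a measure’s reliability; the 5-point threshold used here for the CORE-OM clinical score (0–40) is an illustrative assumption, not a definitive value for any particular service.

```python
RELIABLE_CHANGE = 5  # assumed reliable change criterion, in score points


def classify(pre: float, post: float, rc: float = RELIABLE_CHANGE) -> str:
    """Return 'improved', 'deteriorated', or 'no reliable change'."""
    change = pre - post  # positive change = lower distress at last measure
    if change >= rc:
        return "improved"        # the green area of the scatterplot
    if change <= -rc:
        return "deteriorated"    # the red area
    return "no reliable change"  # within the white 'tram lines'


# The circled client: pre-therapy score 30, last score 7
print(classify(30, 7))  # improved
```

Run over every client with a first and at least one subsequent measure, this yields the proportions of improved, deteriorated and unchanged clients that the scatterplot shows visually.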
This is only part of the story, however. While it is essential to know the proportions of clients that have improved, deteriorated or shown no change, this does not tell us what proportion they form of the total number of clients we have engaged with in therapy.
The chart below plots the proportion of clients that have reliably improved, against clients with a measured ending to their therapy. This time, each circle represents a service.
The horizontal axis shows the percentage of each service’s clients that have reliably improved, and the vertical axis the proportion of clients with a measured ending – in other words, of those clients that started therapy, those who have a first and at least one subsequent measure. The service circled in red has captured ending data for 84% of its clients, and of these, 85% show a reliable improvement. The service circled in green has ending data for 58% of its clients, and of those, just 39% show a reliable improvement.
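The two proposed KPIs can be computed from very little data. Here is a minimal sketch, assuming each client record notes whether a first and at least one subsequent measure exist, and whether the change between them was reliable; the field names are illustrative assumptions, not a real dataset schema.

```python
def service_kpis(clients):
    """Return (measured_ending_rate, improvement_rate) for one service.

    clients: list of dicts with 'has_ending_measures' (bool) and
    'reliably_improved' (bool, meaningful only when measures exist).
    """
    started = len(clients)  # all clients who entered therapy
    measured = [c for c in clients if c["has_ending_measures"]]
    improved = [c for c in measured if c["reliably_improved"]]
    measured_ending_rate = len(measured) / started if started else 0.0
    improvement_rate = len(improved) / len(measured) if measured else 0.0
    return measured_ending_rate, improvement_rate
```

Plotting `improvement_rate` against `measured_ending_rate` for each service reproduces the second chart: a service can only claim a high improvement rate meaningfully if its measured ending rate is also high.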
What of other indicators of service quality?
I’m not suggesting that measured endings and improvement are the only indicators that we should measure. I am proposing, however, that they are the foundation of our measurement activity, a solid base upon which we can build.
Focusing on these indicators first makes it more likely that we will also focus on how we keep more clients engaged to the point where we can finish, confident in the knowledge that we can also demonstrate an impact, whether to our clients, ourselves or our commissioners. I address the issue of dropout in therapy in a separate blog.
To be sure, other indicators are available. I can’t deny the importance of measuring waiting times, for example, or the extent to which our clients are representative of the wider client group we serve. Important though these are, however, if we can’t hold clients in therapy and show that we have an impact, then this is surely where we should start.
So where might you go from here? Here are some closing thoughts I hope will be of help.
Don’t assume that your commissioners, purchasers, managers or others to whom you are responsible know what to do with the data they ask you to collect.
Make it your business to educate your commissioners and other top-level stakeholders.
Make it also your business to collect this kind of data regularly and reflect on what it may be telling you about your clients’ experience.
Review your current data collection. Is it fit for purpose or just obscuring what’s really important? Start with measuring measured endings and outcomes, and build from there.
Don’t measure anything just because you can. Don’t measure anything you will not use. Make every indicator you consider argue for its place at the table.
I’ve made the case for starting again with the basic essentials and measuring what’s important. If you’re persuaded but still struggling with data collection demands then use the arguments here to try and make the case for change. And remember that in this case, less really is more!