If we have any interest in shaping the world that we inhabit, we need to play by the rules as we find them, not as we wish them to be. For psychological therapy, that means engaging with research and measurement.
This article was published in Therapy Today in April 2018 and is reproduced with permission.
What is your pain? As a therapist, I mean. What aspects of your experience as a therapy provider would you change if you could? Maybe your pain lies in getting client referrals, or in keeping clients engaged so that they don’t drop out. Maybe you feel compromised by the setting you work in, or the rules you must abide by. Perhaps it’s that you can’t find a way of making your chosen profession pay? Perhaps your pain lies in delivering or managing a therapy service? There’s more than enough pain here to keep you awake at night: managing independently minded therapists, meeting funders’ expectations for activity data and evidence of impact, encouraging client engagement, containing failures to attend and dropout rates, dealing with funding cycles, commissioning structures, lead provider models, outcomes-based commissioning – the list goes on. There are no easy solutions, but I do believe it doesn’t have to be this way. We encourage our clients to locate and use their power, and we can do the same for ourselves.
Imagine it’s 30th April 2019, and you are reflecting on your past year’s work. You started the year by calculating your percentage client dropout rate for the past year, which you’ve never done before. You discovered that your dropout rate was at the higher end for practitioners in your setting. You explored what the literature says about minimising attrition and adopted a couple of simple strategies that looked promising.
By the end of the year, your dropout rate has halved compared with the previous year, from 30% to 15%, and the 15% of clients who might previously have dropped out have attended an average of 10 additional sessions each. All those added sessions have made a visible difference to your income.
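If you’ve never calculated a dropout rate, the arithmetic is nothing to fear. Here’s a minimal sketch in Python; the client records, the 40-client caseload and the £50 session fee are all invented for illustration.

```python
# Toy client records: 'completed' means therapy reached a planned ending.
clients = [
    {"id": "A", "completed": True,  "sessions": 12},
    {"id": "B", "completed": False, "sessions": 3},
    {"id": "C", "completed": True,  "sessions": 8},
    {"id": "D", "completed": True,  "sessions": 10},
]

# Dropout rate = unplanned endings as a share of all endings.
dropout_rate = sum(not c["completed"] for c in clients) / len(clients)
print(f"Dropout rate: {dropout_rate:.0%}")  # 25% in this toy example

# The scenario above: halving dropout from 30% to 15% of a 40-client year
# retains 6 clients; at 10 extra sessions each and an illustrative £50
# fee, that is 6 * 10 * £50 = £3,000 of additional income.
retained_clients = (0.30 - 0.15) * 40
extra_income = retained_clients * 10 * 50
print(f"Additional income: £{extra_income:,.0f}")
```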
In managing your therapy service, you see similar changes. In the face of real resistance from some of your team, you’ve moved from an open-ended model of therapy to a more time-limited form, and from using outcome measures only pre- and post-therapy to using shorter measures in every session. Your dropout rates have fallen significantly compared with the previous year. Despite therapists having fewer sessions to work with, the proportion of your service’s clients showing significant improvement has risen slightly – fewer sessions, better outcomes.
You have been able to demonstrate that your service is operating more effectively and efficiently than ever before. Your commissioners are delighted and have been happy to agree that more than half the activity-based indicators they asked you to report on are redundant. What’s more, your service has been invited to bid for a new contract, in collaboration with two other services.
Am I dreaming? Definitely not. These scenarios are all possible. I’ve either been there myself, am working towards them, or have supported others in the process. I believe that bringing about these kinds of changes has three prerequisites: openness, in the sense of a willingness to adapt how we work in the light of evidence; knowledge (or evidence, if you prefer) of what practices lead to improved client engagement and outcomes; and wisdom, in the sense of applying that knowledge to our own practice or services.
Openness to change
‘Love yourself as a person, doubt yourself as a therapist?’ is the title of a paper by Helene A Nissen-Lie and colleagues [1] that reports on a study of the interaction between therapists’ personal self-regard, professional self-regard and client outcomes. They found that being a little too professionally pleased with ourselves may not be good for our clients’ outcomes, and that a healthy measure of professional self-doubt, combined with high personal self-regard, was optimal for client outcomes.
By contrast, the combination of low professional self-doubt and positive self-regard was more detrimental. Overconfidence also seems to be a common trait across the ‘psy’ professions. Walfish and colleagues conducted a study of 129 psychiatrists, psychologists, professional counsellors, clinical social workers and marriage and family therapists. [2]
Among other questions, the researchers asked: ‘Compared to other mental health professionals within your field (with similar credentials), how would you rate your overall clinical skills and performance in terms of a percentile (0–100%, e.g., 25% = below average, 50% = average, 75% = above average)?’
On average, respondents rated themselves at the 80th percentile – in other words, as more skilled than 80% of their peers. Just 8.4% rated themselves below the 75th percentile, and none rated themselves below the 50th percentile – that is, below average. The same study also found that participants estimated the proportion of their clients who improved to be significantly greater than is generally shown by the evidence from both controlled and naturalistic settings.
The lesson from this, for me, is that, if we’re going to claim to be effective therapists, we need to base it on something other than our own opinion. To do this we need first to define what we mean by effective. My working definition of effective therapy is simple: it is the degree to which we can keep clients engaged in therapy to its conclusion and are able to show a demonstrable impact on their concerns.
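Before turning to the evidence, a minimal sketch in Python shows how that working definition might be turned into two numbers: a completion rate for engagement, and an average pre-to-post change for impact. The episodes are invented, and the lower-is-better scoring is an assumption in the style of measures such as the PHQ-9.

```python
import statistics

# Hypothetical episodes of therapy with pre- and post-therapy scores on a
# measure where lower is better.
episodes = [
    {"completed": True,  "pre": 18, "post": 7},
    {"completed": True,  "pre": 15, "post": 11},
    {"completed": False, "pre": 20, "post": 19},
    {"completed": True,  "pre": 12, "post": 5},
]

# Engagement: how many clients stayed to a planned conclusion?
completion_rate = sum(e["completed"] for e in episodes) / len(episodes)

# Impact: average improvement among clients who completed.
improvements = [e["pre"] - e["post"] for e in episodes if e["completed"]]

print(f"Completion rate: {completion_rate:.0%}")                        # 75%
print(f"Mean improvement: {statistics.mean(improvements):.1f} points")  # 7.3
```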
Second, we need evidence to support our claim. Research clearly shows significant variability between therapists in both process (for example, alliance formation), and outcome (for example, symptom reduction). [3]
I see the same variability in services with which I have worked as a consultant. The simple fact is that some of us are better at keeping our clients engaged in the process, and better able to help them resolve the issues that trouble them.
Okiishi and colleagues in 2003 [4] compared the outcome data of 56 therapists and found that those whose clients showed the fastest rate of improvement had an average rate of change 10 times greater than the average among their colleagues in the same service. The clients of the therapists with the slowest rates of improvement, on average, deteriorated.
Do psychotherapists improve with time and experience? It appears not. One study tracked 170 therapists at a counselling centre at a large US university and analysed their outcome data, which spanned on average 4.73 years per therapist. [5] There were exceptions, but on average, therapists tended to obtain slightly poorer outcomes as their experience increased.
Not only do we over-estimate our skills and performance; it seems we are not very good at predicting which clients will have positive outcomes and which not. Hannan and colleagues [6] used data from over 11,000 clients to generate a linear model to predict treatment outcomes. Based on this model, the researchers devised a test to predict, early in therapy, which clients might be at risk of ‘treatment failure’, and compared its reliability with the predictions of the centre’s therapists, based solely on their clinical judgement. Of 550 clients attending at least one session, only three were predicted by the therapists to deteriorate. According to the outcome data, however, of the 40 clients who had deteriorated by the end of therapy, only one had been predicted by the therapists. The test did have a tendency to over-predict potential treatment failure, but it was far more accurate than the therapists’ unaided clinical judgement.
Therapy Meets Numbers

When was the last time you read a piece of research, or about a research study, that made a difference to how you practise? For most of us, it may have been a while. And where are we to find this type of evidence? The relevant research is all out there, but it can be hard to access. Getting hold of an academic paper usually requires us to part with a hefty slice of our hard-won income. And assuming we are prepared to go beyond the paywall, interpreting research can be heavy work, especially if we don’t ‘do’ numbers.
I spend a great deal of my time these days immersed in research. I’m no researcher, but over time I’ve learned enough to work out how to find what’s useful and interpret it. It’s not as difficult as it seems, and it’s hugely gratifying when you can see it making a difference.
To help ease the pain of transition we (that’s me and my tech-supremo colleague Giles) are creating a new resource that provides a bridge between research, evaluation, and practice.
Therapy Meets Numbers will initially be a blog space where we will highlight relevant research and put it across in a way that you can translate
into practice. We’ll seek to demystify the dark arts of routine measurement, and we’ll share our own stories of success and failure, as well as the stories of other practitioners and services.
The focus will primarily be on research whose findings broadly generalise across all therapies, such as those I’ve cited here. It will not be about demonstrating the efficacy of a particular model for a common mental health problem. We’ve travelled beyond the Dodo and its verdict that ‘all have won so all must have prizes’, although we may reiterate the broad equivalence argument from time to time. In time, we hope to supplement the blogs with podcasts and other resources, and for Therapy Meets Numbers to become less of a resource and more of a community – one dedicated to creating the best therapy experience for every client. To find out more about what we’re up to, head on over to www.therapymeetsnumbers.com. It will cost you nothing more than your time to join in, and you can sign up for regular blog and other updates while you’re there.
Knowledge – or reasons to be cheerful
So maybe we need to be a little more humble and realistic about our own skills and performance. But here is some good news: there is a wealth of evidence that there are things we can do to improve our clients’ outcomes.
First, a simple but highly effective way in which we can reduce dropout and enhance clients’ engagement is by addressing their expectations about how much therapy they are likely to require. Swift and Callahan [7] tested the impact of providing information to clients about the number of sessions normally required to achieve improvement.
Clients in the study were randomly allocated to one of two conditions. One group received treatment as usual, and the second (the education group) were provided with information about the typical trajectory of improvement in therapy and the number of sessions likely to be required to achieve improvement.
Those in the education group stayed in therapy significantly longer, and were more than 3.5 times more likely to complete therapy.
Two further key factors that have been shown to be strong early indicators of a successful outcome to therapy are signs of improvement early in the work and the client’s rating of the therapeutic alliance.
Numerous studies have shown that most improvement occurs in the early stages of therapy. [8-10] For example, a study by Howard and colleagues [8] found that up to 40% of clients improve in the first three sessions, 65% within seven sessions, 75% in around six months and 85% within 12 months. They also found that clients who don’t show early improvement are significantly less likely to improve later on.
In a large study of outcomes in managed care, Brown and colleagues [11] found that clients who showed no improvement by the third session did not, on average, improve over the entire course of therapy. Those who showed deterioration by the third session were twice as likely to drop out as they were to progress. From this we can conclude that, if improvement is going to happen, there are likely to be signs of it early in the process, and that early deterioration or a lack of early progress is a potential predictor of dropout.
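To show the shape of that conclusion in code, here is a toy ‘not on track’ check. The session-3 checkpoint echoes Brown and colleagues’ finding, but the minimum-improvement threshold and the lower-is-better scoring are assumptions of this sketch, not values taken from the cited studies.

```python
def flag_at_risk(scores: list, min_improvement: float = 2.0) -> bool:
    """Flag a client whose score has not improved by session 3.

    `scores` holds one score per session, lower = better.
    """
    if len(scores) < 3:
        return False  # too early to judge
    improvement = scores[0] - scores[2]  # intake score minus session-3 score
    return improvement < min_improvement

print(flag_at_risk([20, 19, 20]))  # True: no early gains, worth attention
print(flag_at_risk([20, 16, 13]))  # False: clear early improvement
```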
Recent years have seen a growing body of evidence that, by using brief measures of outcome and alliance at each session, we can identify clients at risk and dramatically improve their chances of a beneficial outcome. Lambert and colleagues [12], for example, summarised the effects of providing progress feedback for clients at risk of treatment failure. The highest rates of improvement were evident when progress feedback was given to both clients and therapists.
The researchers concluded: ‘It seems likely that therapists become more attentive to a patient when they receive a signal that the patient is not progressing. Evidence across studies suggests that therapists tend to keep “not on track” cases in treatment for more sessions when they receive feedback, further reinforcing the notion that feedback increases interest and investment in a patient.’ Similar results have been found in other studies.
Whipple and colleagues [13] found that clients at risk of a negative outcome were less likely to deteriorate, more likely to stay in treatment longer and twice as likely to achieve clinically significant change when their therapists had access to information on outcome and alliance.
Miller and colleagues [14] explored the impact of introducing two short measures of outcome and alliance into an international employee assistance programme. During the early phase of the study, 20% of clients at intake had outcome data but no alliance data. Those clients were three times less likely to return for a second session and had significantly poorer outcomes. Improving a poorly rated alliance early in therapy was correlated with significantly better outcomes by the end of therapy.
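If you want a defensible line between real change and measurement noise when tracking scores session by session, one widely used yardstick is the Jacobson–Truax reliable change index (RCI), the kind of calculation that sits behind phrases such as ‘clinically significant change’ above. A minimal sketch follows; the standard deviation and reliability figures are placeholders, and in practice they come from the manual of whichever measure you use.

```python
import math

def is_reliable_improvement(pre: float, post: float,
                            sd: float = 7.5, reliability: float = 0.85) -> bool:
    """True if improvement exceeds measurement error (RCI > 1.96).

    sd and reliability are placeholder values; substitute the published
    figures for your chosen outcome measure. Lower scores = better.
    """
    s_diff = sd * math.sqrt(2 * (1 - reliability))  # SD of the difference
    rci = (pre - post) / s_diff
    return rci > 1.96  # change larger than plausible noise at p < .05

print(is_reliable_improvement(20, 9))   # True: well beyond measurement error
print(is_reliable_improvement(20, 17))  # False: within measurement error
```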
Then there’s the emerging subject of deliberate practice. [15,16] A study by Chow and colleagues [17] demonstrated that the amount of time spent improving therapeutic skills was a significant predictor of client outcomes. As part of a small subset of a larger study, 17 therapists were grouped into quartiles based on their client outcome data.
Pragmatic Tracker

BACP is committed to supporting you in your practice and to building the evidence base for the counselling professions.
We also want to support you to work within the Ethical Framework, and collecting routine outcome measures is one of the ways in which you can meet the commitment to ‘monitor how clients experience your work together’. That’s not to say that we expect everyone to collect outcome measures, but it is one way that may already fit with how you work, or that you’d be open to exploring. For the last 18 months, BACP’s research department has been looking into several potential online outcomes monitoring systems. At the end of this process we entered into an agreement with Pragmatic Tracker (www.pragmatictracker.com), which not only allows clients to complete outcome measures online, but can also be used as a case management system.
We’re currently undertaking a small-scale pilot study of the system to understand more about its acceptability and usability for practitioners and their clients. As this is a pilot study, we have had to limit the study to therapists working with adult clients in private
practice. In the long run, we want to support the development of a platform that any member working in any setting and with any client group can choose to use, but for now, we have to be – forgive the pun – pragmatic.
We have now recruited enough participants; data collection starts next month and will run until May 2019. We’ll be evaluating the pilot in three ways:
- the feasibility of using an online platform to collect outcomes data and to contribute to the evidence base about the effectiveness of counselling in private practice
- the feasibility and acceptability of the platform to practitioners, in terms of how they experience using it as a case management system
- the feasibility of rolling out the platform to the wider membership, in terms of the level of input required from BACP staff to support members to use it.
If you’d like to be kept informed about the study as it progresses, please email us at research@bacp.co.uk. We’ll also be providing updates via Therapy Today and our newsletters.

Charlie Duncan, BACP research fellow
Therapists in the top-performing quartile spent, on average, 7.39 hours a week engaged in activities designed to enhance their performance. This was nearly three times as much as the average spent by therapists across the other three quartiles. While I believe that deliberate practice should be about more than simply the mastery of ‘micro-skills’, as some of the literature on the topic would suggest,
there appears to be something in the idea that it pays to spend lots of quality time not with our clients.
And so to wisdom
Lies, damn lies and statistics. Therapy Today recently carried a short news report entitled ‘Improved access to talking therapies’. [18] If we were to take the piece at face value, we could be forgiven for seeing the IAPT programme as an almost unqualified success story (patient numbers increased, recovery rates up, waiting times down etc). For me, however, the bigger story is that barely one in four people who enter therapy with IAPT, and fewer than one in five people who are referred to IAPT, recover. [19] It’s the same data, but looked at through a different lens.
Playing by the rules
My interest in research and evaluation lies in trying to be a more-than-average therapist for my clients. I’m possessed of a ‘Be Perfect’ driver, so for me the idea of being a ‘good-enough’ therapist means being the best I’m capable of, every time, secure in the knowledge that I’m unlikely ever to reach that lofty summit.
The meek will not inherit the earth, at least not in my lifetime. It will be given to those who know the research, have the data, and know how to use it. If we have any interest in shaping the world that we inhabit, we need to play by the rules as we find them, not as we would wish them to be. Let’s be wise enough to recognise that there is power in research, and in routine practice and service delivery data, and use it to further both our practice and our wider aims.
Let’s also be open-minded and wise enough to move beyond seeing CORE, GAD, PHQ, SRS and so on as only about measuring, and embrace ‘measures’ as another valuable means by which we can elicit feedback from our clients. If we are smart in how we go about it, there’s a lot we can do to ease the things that currently cause us pain.
That is why I’m excited by the initiative being developed by BACP to pilot the Pragmatic Tracker outcome monitoring system with members (see box). I look forward to watching the pilot’s progress over the coming months and to supporting it in any way that I can.
- Nissen-Lie HA et al. Love yourself as a person, doubt yourself as a therapist? Clinical Psychology and Psychotherapy 2015; 24(1): 48–60.
- Walfish S et al. An investigation of self-assessment bias in mental health providers. Psychological Reports 2012; 110(2): 639–644.
- http://societyforpsychotherapy.org/clinicians-self-judgment-of-effectiveness (accessed 2 February 2018).
- Okiishi J et al. Waiting for supershrink: An empirical analysis of therapist effects. Clinical Psychology and Psychotherapy 2003; 10: 361–373.
- Goldberg S et al. Do psychotherapists improve with time and experience? A longitudinal analysis of outcomes in a clinical setting. Journal of Counseling Psychology 2016; 63(1): 1–11.
- Hannan C et al. A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology 2005; 61(2): 155–163.
- Swift J, Callahan J. Decreasing treatment dropout by addressing expectations for treatment length. Psychotherapy Research 2011; 21(2): 193–200.
- Howard KI et al. The dose-effect relationship in psychotherapy. The American Psychologist 1986; 41: 159–164.
- Anderson E, Lambert M. A survival analysis of clinically significant change in outpatient psychotherapy. Journal of Clinical Psychology 2001; 57: 875–888.
- Harnett P et al. The dose response relationship in psychotherapy: Implications for social policy. Clinical Psychologist 2010; 14(2): 39–44.
- Brown J et al. What really makes a difference in psychotherapy outcome? Why does managed care want to know? In: Hubble MA, Duncan BL, Miller SD (eds). The heart and soul of change: what works in therapy. Washington, DC: American Psychological Association Press; 1999 (pp389–406).
- Lambert MJ et al. Providing feedback to psychotherapists on their patients’ progress: clinical results and practice suggestions. Journal of Clinical Psychology 2005; 61(2): 165–174.
- Whipple JL et al. Improving the effects of psychotherapy: the use of early identification of treatment and problem-solving strategies in routine practice. Journal of Counseling Psychology 2003; 50: 59–68.
- Miller SD et al. Using formal client feedback to improve retention and outcome: making ongoing, real-time assessment feasible. Journal of Brief Therapy 2006; 5(2): 5–22.
- Rousmaniere T et al (eds). The cycle of excellence: using deliberate practice to improve supervision and training. Chichester: Wiley; 2017.
- Rousmaniere T. Deliberate practice for psychotherapists: a guide to improving clinical effectiveness. Abingdon: Taylor & Francis; 2017.
- Chow DL et al. The role of deliberate practice in the development of highly effective psychotherapists. Psychotherapy 2015; 52(3): 337–345.
- www.bacp.co.uk/media/2453/bacp-therapy-today-feb18.pdf (p6) (accessed 2 February 2018).
- http://therapymeetsnumbers.com/iapt-2017-just-one-four-enter-therapy-reach-recovery (accessed 2 February 2018).
About the author
Barry McInnes is an independent therapist, service consultant, writer and blogger, passionate Hispanophile and co-creator of Therapy Meets Numbers. barrymcinnes@virginmedia.com