Estimated reading time: 10 minutes

It’s a provocative question, isn’t it? It’s also the title of a recent blog post, more of which below. Looking back at my outcomes over 25 years, the answer depends on what period I’m looking at. I know that when I’ve become complacent, or stopped looking at my data, it has shown in my outcomes. So, what have I learned?

First, a shout out to fellow traveller Dr Jordan Harris. Jordan is a therapist based in Arkansas, USA, where he trains and coaches therapists in using deliberate practice and outcome measures to improve their clinical work. He reached out recently to ask if I’d pen a blog for his excellent website.

This blog is the result. It’s replicated on his site, along with a range of other great blog and podcast material. Jordan’s piece Maybe We’re Trying Too Hard is a good example, in which he brings together seemingly disparate themes of investment strategy, the lure of the ‘guru’ and a reminder that the biggest factor in change is the client. Do go and check out his site. Thanks Jordan!

“I just need an evidence base…”

Back in the day (the mid-1990s, to be precise), I managed the counselling service for the Royal College of Nursing (RCN). I loved the role, but it used to keep me awake at night wondering how I was going to provide a service to the (then) 330,000 members of the RCN with a complement of eight staff.

Thankfully, there wasn’t really an expectation that we would. There was an expectation, however, that we would promote the value of counselling for NHS staff sufficiently well that NHS employers would be falling over themselves to provide their own. We just needed an evidence base.

Back then, there wasn’t an evidence base for staff counselling as such. Our subsequent efforts took us in two directions. First, we worked with the BACP Research Committee to develop BACP’s first review of sector-based counselling (John McLeod’s systematic review of the research evidence for counselling in the workplace). Second, we set about growing the evidence base for our own service, based on the CORE system.

When it comes to impact, we’re far from equal

We started using the CORE measures (or more accurately the 34-item CORE-OM) with service clients. We learned to introduce it into the work in a non-clunky way. Reviewing their responses became part of our standard assessment.

When it came to processing the data, we’d wait a few months and send batches of forms several inches thick to be scanned and reported on by the University of Leeds. Other than as part of assessment, we had no real relationship with our data.

Extracts from our very first CORE system data report in 1999

Everything changed when we adopted CORE-PC. We input our data in real time, and we took charge of its analysis. It was so feature-rich that initially it felt like flying a spacecraft. Features included an appraisal function, commonly known as the ‘scary button’. This allowed me to look beneath the overall service data at markers such as dropout and improvement and identify individual rates among my team.
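
I can’t reproduce how CORE-PC actually computed these markers, but for readers who like to see the mechanics, here is a minimal sketch of the kind of per-therapist breakdown an appraisal function like this might produce. The records, field names, and figures are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical episode records: (therapist, planned_ending, reliably_improved).
# These names and values are illustrative, not CORE-PC's actual data model.
episodes = [
    ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, True), ("B", True, False),
]

stats = defaultdict(lambda: {"n": 0, "dropouts": 0, "improved": 0})
for therapist, planned, improved in episodes:
    s = stats[therapist]
    s["n"] += 1
    s["dropouts"] += 0 if planned else 1
    s["improved"] += 1 if improved else 0

# Per-therapist rates -- the view beneath the service-level totals.
for therapist, s in sorted(stats.items()):
    print(f"Therapist {therapist}: dropout {s['dropouts'] / s['n']:.0%}, "
          f"improvement {s['improved'] / s['n']:.0%}")
```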

Gaining that level of insight wasn’t a comfortable experience. Once seen, it can’t be unseen. One of the most uncomfortable aspects of this was the discovery that I had my own problem with dropout. It was a major problem. More than half of my clients were dropping out, and my figure was the highest of my team by some distance. That discovery completely blindsided me. How on earth could that happen, and I not be aware?

This experience left a lasting legacy. I learned that my judgement alone isn’t a reliable witness to what’s really going on in my practice. Even when my judgement is engaged, it’s easy to reassure myself that I’m doing OK while my numbers are telling a different story. I need to give those numbers proper attention.

Improvement doesn’t happen by chance

By the early 2000s we’d become very familiar with using the CORE measures actively with clients, and by then I’d clearly taken my wayward dropout rates in hand. We were building our evidence base and routinely using CORE-PC to analyse our service data. There wasn’t much by way of benchmarks with which to contrast our performance, but what there was seemed to suggest we were doing OK by our clients.

By 2002, rates of unplanned endings across the service (for clients accepted for therapy) stood at 31%, and 79% of completers were showing clinical and/or reliable change. We challenged ourselves to do better, and it’s to the credit of my brilliant team that we did. Over the next two years (just prior to my departure), we nearly halved our rate of unplanned endings and raised our rate of improvement to 86%.
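
A brief aside for readers unfamiliar with the terminology: ‘reliable change’ is conventionally assessed with the Jacobson-Truax reliable change index, and ‘clinical change’ by movement across a clinical cutoff. The sketch below shows the arithmetic; the SD, reliability, and cutoff constants are illustrative placeholders rather than the published CORE-OM norms:

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd_pre: float, reliability: float) -> float:
    """Jacobson-Truax RCI: raw change scaled by the SE of the difference."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * se_measurement
    return (post - pre) / s_diff

# Illustrative constants only -- not the published CORE-OM norms.
SD_PRE, RELIABILITY, CLINICAL_CUTOFF = 7.0, 0.94, 10.0

pre, post = 18.0, 8.0
rci = reliable_change_index(pre, post, SD_PRE, RELIABILITY)
reliably_improved = rci <= -1.96                     # lower scores = better
clinically_changed = pre >= CLINICAL_CUTOFF > post   # crossed the cutoff
print(f"RCI = {rci:.2f}; reliable improvement: {reliably_improved}; "
      f"clinical change: {clinically_changed}")
```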

That improvement didn’t happen by chance. It came from a combination of challenging ourselves to do better, focused actions, and continuously measuring and monitoring the results. These days, I suspect it would probably fit the description of deliberate practice. It was certainly deliberate.

The art of the possible

I left the RCN in 2005 and started working for CORE-IMS, the organisation established to support CORE system users. Much of the next five years was spent roaming the length and breadth of the UK providing CORE system implementation training and supporting services in using their data as part of a service development strategy.

During that time I saw performance data for dozens of services, hundreds of therapists, and thousands of clients (all anonymised, I should add). It’s said that a picture paints a thousand words. After a while I came to see data in a similar way. It’s no exaggeration to say that I could form a tentative (and usually accurate) picture of a service’s strengths and shortcomings from a two-minute tour of their data.

I was privileged to work with some truly exceptional services and provide independent reporting of their service quality. I saw what great therapy looks like, provided by services I’d be confident in referring any loved one of mine to. Many in primary care, sadly, lost their funding as IAPT was rolled out. A few are still going strong, such as My Sister’s Place (MSP) in Middlesbrough. You can read more about MSP and their achievements here.

How did I forget everything I learned?

Given all I’ve just said, you might imagine my commitment to using measures in my practice would have been unshakeable. You’d be wrong. Between 2005 and 2010, when I left CORE-IMS to go independent, I didn’t see one client. I was great at talking the talk, but when I went into private practice, not so good at walking the walk.

As anyone who’s set up in private practice will attest, it’s tough, especially at the start. I relied heavily on EAP referrals. Of the four EAPs whose books I’ve been on, one uses the GAD-7 and PHQ-9; one the GHQ-28; one CORE (sporadically). The other doesn’t use measures at all. It’s an utter mess, and to date I don’t know of one EAP that’s successfully managed to use its data purposefully. Consider that an open invitation to tell me different.

So, in the early years of my private practice I was using measures, but very erratically. I’d also lost the habit of paying attention. And, once again, it was starting to show up in my numbers. In 2017, on a hunch, I set about looking at dropout rates for my EAP and non-EAP (private) clients. I discovered that while 95% of EAP referrals for the previous year had reached a planned end to therapy, for non-EAP clients the figure was just 38%. Not only that, the average number of sessions attended by non-EAP clients was one. Many just weren’t engaging.

I started focusing assiduously on the goal and means elements of the working alliance, and on using measures more systematically in my work. By 2019 my overall dropout rate was just 8%, with no dropout among the non-EAP clients who finished in that year. It had taken feedback from the data to galvanise me once again, but the measures I was taking seemed to be making a difference.

You need an evidence base, but not always for the reasons you think

Over my years of consultancy I’ve heard many variations on the theme of “I need an evidence base so that I can show commissioners the great work that we do.” To which my response is some variation of “I’m a great fan of building an evidence base, but perhaps more to test our assumptions that we are doing great work?”

As I’ve discovered to my cost more than once, it’s unwise to make any assumptions about your therapeutic impact in the absence of evidence. As research has shown, in common with other professions, we tend to overestimate the level of our professional abilities. While it may be “common to think of ourselves as somewhat remarkable compared to others”, not all of us can fit into the remarkable category. We need to approach this area with a little humility. Outcomes can go down as well as up.

Am I Any Good…as a Therapist?

Are You Any Good…as a Therapist? is the somewhat provocative title of a recent post on the Society for the Advancement of Psychotherapy website. In the context of my own reflections on my journey with measurement and evaluation of my own practice, it feels like a timely question.

If I’m really objective, the truth is probably that there have been times when I’ve been consistently impactful, and times when I’ve been less so. Along the way I’ve picked up what feel to me like some simple but powerful truths.

I’ve learned that someone else’s evidence base is a poor substitute for my own. Just because the average effect size for therapy is on the order of d = 0.8, it doesn’t follow that mine will be anything like that. It’s an average. Some of us will do better, and some worse. And it won’t be consistent over time.  
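
If you want to run your own numbers, a pre-post effect size is straightforward to compute. This is a minimal sketch using one common formulation (mean change divided by the pre-treatment SD); the caseload scores are invented for illustration:

```python
import statistics

def pre_post_effect_size(pre_scores: list[float], post_scores: list[float]) -> float:
    """Cohen's d for a pre-post design: mean change over the pre-treatment SD.
    One common formulation; pooled-SD variants also exist."""
    mean_change = statistics.mean(pre_scores) - statistics.mean(post_scores)
    return mean_change / statistics.stdev(pre_scores)

# Invented caseload scores (lower = better, as on the CORE-OM).
pre = [18.0, 22.0, 15.0, 20.0, 17.0]
post = [10.0, 14.0, 12.0, 9.0, 13.0]
print(f"My caseload effect size: d = {pre_post_effect_size(pre, post):.2f}")
```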

I’ve discovered that when it comes to assessing my performance, my judgement alone isn’t reliable. Twice in my professional career I’ve discovered that I’ve gone off the boil and not realised until I’ve run my numbers. That’s not happening again.

I’ve also learned that attending to some simple evidence-based therapy practices, together with paying systematic attention to my numbers, significantly improves my ending and outcome data. It’s not that I slavishly adopt a ‘one size fits all’ approach to measurement; rather, my approach to monitoring progress with each client is considered.

I need no convincing about the value of an evidence base. Twenty-two years ago, at the end of an organisation-wide review of services, it was the strength of our evidence base that saved my service from being terminated or contracted out. Now, I’m no longer obliged to collect data, but I know that the very process of doing so is part of what helps me to guard against complacency.  

Just like you, we thrive on feedback.

Please leave your thoughts on what you’ve read in the comments section below.


Posted by: Barry McInnes

7 replies on “Are You Any Good…as a Therapist?”

  1. This post makes a lot of sense, both evidentially and experientially.

    I use FIT OUTCOMES and got a bit slack recently on collecting the Session Rating Scale data.

    Unsurprisingly, I picked up (in hindsight) a drop in the therapeutic alliance with some clients, and in the absence of measurement, I couldn’t pinpoint why this was.

    It was the jolt I needed to re-focus on being more diligent with administering the Outcome Tools of the trade.

    Still kicking myself for taking my eye off the ball.

  2. Really interesting article. Personally, I welcome CORE, which helps me assess my performance as a therapist. I think a key to performing well is regular client reviews, where I ask questions such as “Are we moving towards the goals we discussed?”, “Have the goals changed?”, “Are we talking about what you want to cover?”, “How do you feel it’s going?” and “How was talking today?”. I found the Mick Cooper book on setting goals in therapy really helpful, particularly the analogy that goals are like the stars sailors navigate by: we can make great use of them to ensure the therapy is on course.

    It’s also important for me to be open enough to accept that I might not always get things “right”, that I might make mistakes and not fully get clients. So goals also help me to maintain my humility. Training at university, I was highly suspicious of a tutor who just wouldn’t accept that on any occasion in a long (and distinguished) career there may have been a time when they “didn’t get it right” with a client. It was always the client’s fault! (They weren’t ‘ready’, it was a ‘game’, etc.)

  3. I’ve come to use one measure regularly, the CORE-OM. When working online, going through this questionnaire with the client, often stopping to allow further dialogue and comment, has allowed me to incorporate it into my first-session assessment. This questionnaire alone, administered in this way, will often take 45 minutes. I have no idea if this way of working skews the results, but it’s very valuable. And it also provides a rough guide to the client of how long we are likely to be working together.
    In the past I have also used the Session Rating Scale mentioned by Stephen, above. This scale takes around 5 minutes to administer (when things go well 😉), and having read this article I will be reinstating it for regular use at the end of each session. Aside from allowing the client an opportunity to gently point out what I’ve been missing, it’s an excellent and efficient way of bringing the session to a timely close!

    1. Thanks David… “This questionnaire alone, administered in this way, will often take 45 minutes. I have no idea if this way of working skews the results, but it’s very valuable. And it also provides a rough guide to the client of how long we are likely to be working together.” I couldn’t agree more! No such thing as bad feedback!
