It’s generally accepted that routine outcome monitoring in therapy has the potential to reduce dropout rates and improve outcomes. Rarely in academic papers, however, do we get much insight into what this actually looks like at the therapy coalface. A recent paper provides a refreshing exception, highlighting five individual client cases in which measures of outcome and alliance were used at every session.
There’s enough compelling evidence that using outcome and alliance measures sessionally has the potential, repeat…potential…to significantly reduce dropout and improve outcomes that I don’t need to rehearse it here. If you’re not aware of the evidence, or would like a refresher, I’ve written about the topic previously here.
Despite the evidence, however, routine use of outcome measures, except where it is mandated, seems relatively rare. Rarer still, in my experience, are situations in which outcome and alliance measures are used together routinely. More often than not, we prefer to rely on our judgement alone, despite the evidence that it may not always be reliable.
If the case for routinely monitoring outcome, and outcome and alliance together, is so strong, then why haven’t we all embraced it? I wonder if part of the reason (leaving aside those of us who are research-phobic or simply deceiving ourselves) is that it’s hard to see past the numbers to how the evidence might actually apply to us.
It’s a familiar story. Researchers generally speak to researchers, and practitioners speak to other practitioners. While I sense that may be beginning to change, much of the material written for academic publication still isn’t awfully digestible. Therapy Meets Numbers is all about bridging that divide, so I was delighted to come across a recent paper which attempts to do the same.
The stories behind the numbers
The paper I’m referring to is a Spanish study [i] centred on five case study clients from the caseload of the first author. The case studies illustrate how the feedback from outcome and alliance measures helped the practitioner to foster the therapeutic alliance and steer the direction of therapy towards a positive outcome.
In most cases the measures used were the ORS (Outcome Rating Scale) and SRS (Session Rating Scale), though in one case the CORE-10 was favoured over the ORS. The ORS and SRS each have just four items, each scored from 0 to 10, giving each measure a scoring range of 0 to 40. On the ORS, lower scores indicate poorer functioning. On the SRS, the higher the score, the better the client’s rating of the session (note that three of the SRS items are based on the key alliance domains of therapeutic goals, tasks and bond).
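For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of how sessional totals might be tallied and flagged. It is not taken from the paper: the item scores are illustrative, and the cut-off values (25 for the ORS, 36 for the SRS) are commonly cited defaults that I am assuming here rather than figures reported in the study.

```python
# A toy sketch, not from the paper: tallying ORS/SRS item scores for one session
# and flagging anything that might be worth discussing with the client.

ORS_CUTOFF = 25   # commonly cited clinical cut-off for the ORS (assumed here)
SRS_CUTOFF = 36   # commonly cited alliance cut-off for the SRS (assumed here)

def total_score(items):
    """Sum four item scores (each 0-10) into a 0-40 total."""
    if len(items) != 4 or not all(0 <= s <= 10 for s in items):
        raise ValueError("Expected four item scores between 0 and 10")
    return sum(items)

def review_session(ors_items, srs_items):
    """Return the totals plus two simple prompts for the therapist."""
    ors, srs = total_score(ors_items), total_score(srs_items)
    return {
        "ors_total": ors,
        "srs_total": srs,
        "below_clinical_cutoff": ors < ORS_CUTOFF,   # low functioning on the ORS
        "discuss_alliance": srs < SRS_CUTOFF,        # SRS dip worth exploring
    }

# Illustrative numbers only: an SRS total of 26.5 with the goal/topic items
# scored 4, loosely echoing the second session of case study 1 below.
print(review_session(ors_items=[5, 6, 5.5, 6], srs_items=[9, 4, 4, 9.5]))
```

In practice, of course, the point is not the arithmetic but the conversation the numbers prompt.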
Below are snapshots of three of the five case studies presented in the paper. Each illustrates, in some way, how feedback from the measures informed the way the therapy developed. For each case study I’ve suggested a key message I would take away. You may disagree, or be able to offer a better one in the comments section below.
Case study 1
This concerns a 29-year-old man suffering low mood, lack of motivation and low self-esteem, whose main objective for therapy was to increase his level of motivation.
Of particular note in the chart below is the reduction in the client’s SRS rating of the therapeutic alliance in the second session (from 36 to 26.5). The items relating to the relevance of session goals and topics were scored particularly low (4 out of 10).

This reflected the fact that the therapist had spent most of the session reviewing the client’s history in order to complete the assessment begun in the first session, whereas the client had hoped they would focus more on the problem he wished to resolve. Presumably prompted by the SRS feedback, the pair were able to discuss the reason for the low rating and agreed to refocus on the client’s concerns in the next session. Therapy concluded after five sessions, with the ORS scores indicating an improvement in the client’s functioning and very healthy ratings of the partnership.
In the absence of feedback from the SRS, would the therapist have picked up on the client’s concerns about the focus of their work? Perhaps the work would have proceeded successfully anyway, but the fact that the therapist noticed the SRS scores and responded positively to the client’s concerns would surely have strengthened the alliance, if nothing else.
Key message: The client’s rating of the therapeutic alliance may yield clues that our therapeutic efforts are not focused where the client wants them, potentially undermining a cornerstone of successful outcomes.
Case study 2
A 31-year-old man sought therapy during a depressive episode of several months’ duration, hoping to break out of a ‘vicious circle’ he felt himself to be caught in. He received a total of eight sessions.
Of particular significance in this case was that, despite the SRS scores suggesting a strong therapeutic alliance, the client’s ORS scores deteriorated slightly over the first three sessions.

Noticing this trend apparently prompted the therapist to change therapeutic strategy, which was thought to have brought about an improvement in the client’s functioning and in the overall direction of the work.
Key message: However highly the client rates the therapeutic alliance, we cannot assume that they will eventually benefit. If the evidence suggests they are not benefiting, we need to consider adjusting our therapeutic focus.
Case study 3
A 19-year-old woman sought help for what she described as a ‘lack of self-esteem’. She had previously been seen by another therapist, who had attributed her problem to her difficulty in becoming independent and separating from her family of origin. She had apparently not been happy with this focus; she and the therapist had become increasingly distant, and the therapy was eventually terminated.
In the episode under study she apparently had eight sessions. Initially, her new therapist came to a similar conclusion to her previous one, i.e. that the core of her problem was connected to a dependency issue with her family.

Despite the therapist’s initial conclusions, however, it was decided that the main focus of the work should be the client’s reported lack of self-esteem, and this was where the work was directed.
As can be seen from the chart above, the client’s initial SRS ratings were low. Although it is not specifically stated, this would be understandable given her previous experience and the early conclusion her new therapist had reached about the basis of her problems: trust may have taken a little time to establish. By the end of the process, however, the SRS scores indicated a strong working relationship, and the client’s psychological health showed significant improvement.
Key message: How we interpret the core of a client’s problem will count for little if the client holds a different view. We pursue our own agenda at the risk of the client eventually disengaging.
What is the story behind the numbers?
I suspect that every therapist has a story, for each of his or her clients, about how their therapy is proceeding. I certainly do, and hopefully, a reasonable amount of the time, I’m somewhere in the right ballpark.
Even with the same client, however, my story is going to differ from yours. To illustrate, consider the progress chart below for an ongoing client who has had 15 sessions booked, of which two were cancelled at short notice and one was not attended (DNA’d) before the client resumed (each dot in the graph represents an attended session at which a measure was completed).
What do you think is happening here in the therapy process? What do you think needs to happen next? What factors are you considering in forming this view? What other options would be possible?

Just how much our stories may differ was brought home to me quite powerfully recently. The graph above, along with the four questions, was presented to candidates at interviews for counsellor posts at an agency I provide consultancy to. The range of perceptions, interpretations and suggested courses of action was as rich and varied as the candidates themselves.
In truth, we can only properly make sense of this kind of feedback in collaboration with the clients themselves. Not with this particular client, however, whose case is fictitious. But I hope it helps to illustrate how creative we can sometimes be!
References
[i] Gimeno-Peón, A., Barrio-Nespereira, A., & Prado-Abril, J. (2018). Routine outcome monitoring and feedback in psychotherapy. Psychologist Papers, 39(3), 174-182. http://www.psychologistpapers.com/English/2872.pdf