How do we define dropout, and how can we use those definitions to create a simple framework to monitor and reflect on our own practice or service levels of dropout?
How can we use benchmarking to get a relative sense of how our dropout rates compare to others? What solutions work in reducing dropout, and how can we ask the right questions to find solutions that will work for us in our particular contexts?
This, the second of a two-part blog, aims to show you where to start.
In the last blog, I explored the issue of client drop out from therapy in general terms, using examples from the Improving Access to Psychological Therapies (IAPT) programme and national CORE data for primary care to illustrate the potential scale of the problem. Given that the most common number of sessions that clients attend is one, I also made the case for knowing our own level of drop out, especially if we think we don’t have a problem.
Here, I want to get practical, and offer some simple tools to help you monitor and reflect on your own experience of drop out.
I shared my own experience in the last blog of discovering, to my horror, that at one point back in my Royal College of Nursing (RCN) days I had the highest level of drop out in my service. I made that discovery almost by accident through routine audit.
My key message to you? Drop out may be inevitable to some degree, but we should not accept that there is little we can do about it. There is plenty we can do, starting now. Here are the areas I want to share with you:
Common definitions of drop out and why we need to use them
A very simple framework that you can use to monitor your rates of planned and unplanned endings
How to get an immediate snapshot of ending types for your clients in the past year, as well as your most commonly attended number of sessions
How to do very simple internal and external benchmarking of your data to get a sense of relative performance
Practical solutions that have been shown to work to reduce drop out
A simple framework that will help you to ask the right questions to find the right solutions for your own clients
What is drop out?
We all think we know what is meant by dropout. Experience tells me, however, that there are subtle differences in what we may mean by the term. If we are to talk about dropout meaningfully, and be able to compare our experiences, it helps to work from a shared understanding.
In the wider literature, dropout is referred to in several ways, including attrition, early withdrawal, unplanned ending, premature termination, and client-initiated unilateral termination [i]. Hatchett & Park (2003) [ii] provide what I feel is a helpful description, which is that dropout occurs when:
…the client has left therapy before obtaining a requisite level of improvement or completing therapy goals

The CORE System records how contact ended under two groups of categories:

Unplanned – Due to loss of contact
Unplanned – Due to crisis
Unplanned – Client did not wish to continue
Unplanned – Other unplanned ending
Planned – Planned from outset
Planned – Agreed during therapy
Planned – Agreed at the end of therapy
Planned – Other planned ending
The CORE System User Manual [iii] provides accompanying definitions to ensure consistency. It distinguishes, for example, between situations in which the client may feel therapy isn’t helping and chooses unilaterally not to attend further sessions (Unplanned – Client did not wish to continue), and those where a decision is jointly made to finish during the last session attended (Planned – Agreed at the end of therapy).
A framework for capturing planned and unplanned ending rates
As I mentioned in the last blog in this series, I was surprised to discover that the most common number of sessions my clients attend is one (which in the past year shares the top spot with five sessions). How did I discover this? I simply tweaked the Excel spreadsheet that I use for recording clients’ contact details to capture an additional field. Since then I’ve added fields for each client’s status, and, for those with whom I’ve had an assessment session, whether we committed to further contact. I’ve also added, using the categories above for planned and unplanned endings, a field for how contact ended. A final column simply records the number of contact sessions I had with each client.
The screenshot below shows my amended spreadsheet, with some columns hidden to protect client identifiable data. If you already use a spreadsheet to record client data it may be simple enough to amend yours to capture any additional data you need. Alternatively, you’re welcome to use and amend mine, which I’m happy to send you if you get in touch.
Having a Source column allows me to filter out data for each of my main referral sources. So, if I want to select data for just clients that come via my website, for example, then I can.
The aim here is simple. I want to know two things. First, the relative rates of planned and unplanned endings, for each of my main referral sources, for clients with whom I agree to do therapy and have now finished. I find this by using the Excel filter function (as shown below) to select only clients whose cases are now closed, and with whom I agreed to work beyond an initial assessment session. I then sort the ending type entries alphabetically for easier counting. In case you think from the screenshot below that all my clients achieved planned endings, I can tell you that as U comes after P in the alphabet, the unplanned endings are further down!
Second, I want to know how many contact sessions I had with each client. These are recorded in the final columns in each screenshot. I want the number unfiltered so that it includes all clients with whom I’ve had only one session, because I want to try and better understand why those clients may not have progressed into therapy. From this I quickly learn a number of things:
The average number of sessions for my EAP clients is five, while the average for clients who contact me through my website is 2.6 (bear in mind that some clients who come through my website are still in progress, as their contact is not time limited, so this number will rise).
For clients who find me through my website the most commonly attended number of sessions is one. In other words, they are less likely than EAP clients to progress into therapy.
All EAP clients who had at least one session subsequently achieved a planned ending to their therapy. In contrast, clients who came through my website had higher rates of unplanned endings, most commonly where there was an assessment session and an agreement to have more, which were subsequently cancelled or not attended.
Now that I’m equipped with this data, I can start to make what sense of it I can, as well as form some ideas about how I might improve early engagement with clients who come via my website. My hunch is that it’s something to do with the significance of clients taking the first steps in help seeking. By the time potential clients are referred to me via an EAP, they have taken the first, and probably most important, step in the process. They have had an initial conversation with someone on a helpline and they are in the process of being referred. Clients who come via my website are taking that first step with me.
I also imagine that approaching an EAP that has been contracted by an employer will carry an implicit prior endorsement. As for me, other than my BACP accreditation and entry on the BACP Register of Counsellors & Psychotherapists, I really am an unknown for the client.
What I need to do now is work with these hunches to provide potential non-EAP clients with a sufficient sense of safety and motivation to commit to further work, so far as that is possible. I can think of one evidence based solution that I need to use more consistently than I currently do, and I’ll share that with you later when I come to solutions.
Now that I’m working at this level of detail, I’m also curious to know what difference the medium I use for delivering therapy makes. Unlike my work with EAP clients, which is almost exclusively face to face, a substantial amount of the work I do with clients who come via my website is phone based. So that I can explore that further in the future and see if there’s an issue here too, I’ve added a column to my spreadsheet which allows me to record how therapy was delivered.
Where am I now? Take a snapshot of your caseload
If you’re not already systematically tracking how your clients end therapy, and how many sessions they attend, it will take a bit of effort to get started in capturing that data routinely. As I’ve done, you can go back over your past year’s clients and populate a spreadsheet. This will enable you to look at previous clients in detail and also serve you well in the future.
You need to ask yourself whether the effort of doing this retrospectively is worth it, however, as there is a quicker and dirtier way of getting the basic data. Here’s how…
1. Gather together your summary of client records for the past year, if you keep such a thing. If they show how clients ended and the number of sessions they had, go to step 3.
2. Gather together your client notes, or whatever will allow you to know how each client ended and how many sessions they had.
3. Take paper and pencil and make five headings on the paper: Assessed; Therapy agreed; Planned ending; Unplanned ending; Sessions. Space them out like the image below.
4. Now, for each client you saw for an assessment (or first contact session, if you prefer), make a mark. You’re looking to make the classic ‘five-bar gates’, as I’ve done in the example below. If you didn’t agree to see the client again, record them as having had just one session under Sessions.
5. If you agreed to see the client again, make a mark against Therapy agreed. Then indicate whether the client reached a planned ending or dropped out, using the CORE System definitions I outlined earlier. The total of your Planned and Unplanned endings should equal the number of clients with whom you agreed to do therapy. Finally, record the number of sessions you had with the client.
From this you can work out the proportion of clients with whom you had an assessment that you agreed to work with further (in the illustrative example above, that’s 47/63 = 75%). You can also work out the relative proportions of planned and unplanned endings. In this case, unplanned endings represent 19 of the 47 endings, or 40%.
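The arithmetic behind the tally sheet is simple enough to sketch in a few lines of Python, using the illustrative counts from the worked example above:

```python
# Turning the five-bar-gate tallies into the two headline proportions.
# Counts are the illustrative ones from the worked example above.
assessed = 63          # clients seen for an assessment / first contact
therapy_agreed = 47    # of those, clients you agreed to work with further
unplanned = 19         # agreed clients whose ending was unplanned

conversion_rate = therapy_agreed / assessed   # assessment -> therapy
unplanned_rate = unplanned / therapy_agreed   # dropout among finished clients

print(f"Progressed into therapy: {conversion_rate:.0%}")
print(f"Unplanned endings:       {unplanned_rate:.0%}")
```

Note that the unplanned ending rate is calculated against the clients you agreed to work with, not against everyone assessed; the one-session-only clients show up in the conversion rate instead.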
Recall from the last blog that the mean rate of declared unplanned endings was 22.5% and the estimated rate was 52.4%. While direct comparison with any benchmark data needs to be made with caution, such comparisons are nonetheless better than no comparators at all, and may prompt important questions about both the similarities and the differences.
How do your rates of planned and unplanned ending compare, and what questions or concerns do those comparisons prompt? I’ve mentioned that at one point during my time at the RCN I had worrying levels of drop out, but towards the end of my time there in 2005, across the service, our rates of unplanned ending had dropped to just 16%.
I identified earlier that my EAP-referred clients, almost without exception, reached planned endings and used their allocated sessions. This phenomenon seems to be echoed in other research into workplace counselling outcomes. A study of the outcomes of six UK-based EAP providers [iv] found mean unplanned ending rates of 16% (declared) and 27% (estimated). It would seem, then, that just as I found with my own clients, EAP-referred clients tend to remain engaged with greater frequency.
Finally, from your recording of the number of sessions each of your clients attended, what did you find is your most commonly attended number of sessions? Is it, like mine, one? Are you able to segment your clients further and determine any differences in session attendance, as I did between clients who are EAP-referred and those who come via my website? What sense can you make of any differences?
How am I doing? Assessing relative performance using benchmarking
Benchmarking is simply the assessment of relative performance, using a standard performance indicator. Performance indicators in psychological therapy include waiting times, risk assessment, ending types and improvement rates. Once you know your own baseline for an indicator, such as unplanned ending rates, you can use comparative or benchmark data to get a sense of the answer to the question ‘How am I doing?’
The data for unplanned endings for primary care and EAPs I mentioned earlier and in the last blog can be used as benchmarks against which you can compare your own data. I’ve drawn together the declared and estimated unplanned ending data for both into the table below. The chances are that the data you choose for comparison will be the data which puts your own performance in the best light. While that’s human nature, please try to be honest with yourself!
If you’ve been through the process of estimating your ending rates outlined above, you’ll have an ending type for every client you agreed to do therapy with who has now finished. That’s as good as you can get, and it’s much better than most of the services that contributed their data to help build the benchmarks you can see above. The benchmarks aren’t perfect, but they are a start.
In the example above we’ve been talking about external benchmarking. In other words, comparing your data to an external reference source. Below is an example of internal benchmarking. The data is drawn from my own service at the RCN, and compares a range of indicators, including unplanned endings, across three time points. You can see how, over a period of three years, we were able to reduce our rate of unplanned endings from 31% to 16%. Without that first figure of 31%, however, we would never have had a baseline upon which we could attempt to improve. You can benchmark your own ending rates year on year, or, if you see a sufficient volume of clients, perhaps quarter by quarter.
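Internal benchmarking of this kind amounts to tracking the same indicator over successive periods against a baseline. A minimal sketch: the 31% baseline and 16% final figure are the RCN numbers quoted above, while the intermediate rate and the period labels are purely illustrative:

```python
# Year-on-year internal benchmarking of unplanned ending rates.
# 31% (baseline) and 16% (final) are the figures quoted above;
# the Year 2 rate is illustrative only.
rates = {"Year 1": 0.31, "Year 2": 0.23, "Year 3": 0.16}

baseline = rates["Year 1"]
for period, rate in rates.items():
    print(f"{period}: unplanned endings {rate:.0%} "
          f"({rate - baseline:+.0%} vs baseline)")
```

The same loop works unchanged for quarterly figures, or for any other indicator (waiting times, improvement rates) you choose to track against a baseline.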
Another example of internal benchmarking would be the comparison of the same indicators, this time across the service’s therapists. Such analyses always highlight variations in performance across therapists, and these may provoke considerable anxiety. We need to take care in exploring therapist variance.
I recall one workshop I delivered for a team’s managers and supervisors where we focused on client improvement rates across the team. It appeared that one team member was achieving significantly lower rates of improvement than most of her colleagues, much to the surprise of everyone present. That was, until it became apparent that almost half this therapist’s clients were below the clinical cut-off at assessment. This meant that relative to other therapists’ clients, her clients had much less scope to show improvement.
Provided care is taken in exploring therapist variance, however, and we are open to learning and challenging ourselves, the potential benefits to both therapists and their clients can be considerable. We are not all the same, and we should always aspire to keep learning.
Given that benchmarking is not exactly new, it is sad that so few benchmarks exist across the different settings and sectors in which we practice. That needn’t stop you from using those that do exist, however, or from partnering with other practitioners and services in similar settings to create your own networks. The prize seems clear: using this data to reflect on how clients experience us, and how that experience might be improved, even slightly. More engaged clients, more likely to complete therapy, with better outcomes.
What works in reducing dropout?
Any strategy that you adopt in the attempt to reduce levels of drop out in your practice or service needs to be based on your analysis of the factors which may be contributing to drop out. Only then can you identify appropriate remedies. As I’ve already said, context is all. Shortly, I’ll be outlining a simple framework of questions to help you focus your exploration.
That said, others have trodden this path before. Below are three examples from those that have, each slightly different, that may provide a useful background to your consideration.
If you’re looking for a menu of options, this meta-analysis of research [v] is a good starting point. The authors identified 31 randomised controlled trials (RCTs) that tested attendance strategies and assessed their relative success. The impact of each of the strategies employed is expressed as an effect size. Choice of appointment time or therapist, motivational interventions, preparation for psychotherapy, informational interventions, attendance reminders, and case management were found to be the most effective.
To what extent do client expectations about the length of therapy affect their engagement? A deceptively simple strategy employed in a study by Swift and Callaghan [i] tested the impact of providing information to clients about the number of sessions normally required to achieve improvement.
Clients in the study were randomly allocated to one of two conditions. One received treatment as usual, whereas the second (the education group) was provided with information about the typical trajectory of improvement in therapy and the number of sessions likely to be required to achieve improvement. Those in the education group were found to stay in treatment significantly longer, and were more than three and a half times more likely to complete therapy.
Premature Termination in Psychotherapy: Strategies for Engaging Clients and Improving Outcomes [vi] is a comprehensive guide that provides exactly what it says on the cover. In the book’s own words, it ‘… helps therapists and clinical researchers identify the common factors that lead to premature termination, and… presents eight strategies to address these factors and reduce client dropout rates. Such evidence-based techniques will help therapists establish proper roles and behaviours, work with client preferences, educate clients on patterns of change, and plan for appropriate termination within the first few sessions.’
I can’t recommend this resource highly enough. It doesn’t come cheap, but if it helps you to maintain engagement with one client who would otherwise have dropped out, then it’s arguably worth the cover price.
Reflecting on your own practice or service
Over the course of this and the last blogs, I’ve tried to outline the potential size of the problem of dropout, and to encourage you to establish a baseline for your own practice and critically reflect on what you find. I want to conclude this blog with a simple framework that I hope will help you to reflect on your own experience of client drop out and identify some solutions that may help to reduce it.
Step into your client’s shoes. It’s easy to forget just how much it takes for clients to start the process of seeking help and engaging in therapy. At best, they are likely to be ambivalent about coming to see us, at worst downright apprehensive. We should assume they know nothing about the process of therapy and are taking what for them feels like a leap into the unknown.
Be clear about the stages at which you lose clients. If your most commonly attended number of sessions is one, or you’re losing clients in the very early stages, then try and work out what clients are not hearing or experiencing in those first sessions that would make the difference. Are you able to instil a sense of hope that continuing with you will make a difference to the concerns that have brought them? Have you explored with them how therapy might unfold in the early stages? Have you explained the patterns of change that therapy typically shows? Are you helping them to be clear about what to expect?
Review your marketing. Stand back from your publicity and marketing material, including your website, and reflect on how clients might experience it. Your marketing should speak to your clients as people, not as fellow therapists. Ask for honest feedback from people you know who are not therapists. Rarely do clients understand the finer points of therapy modalities, the core conditions, etc., so why would we want to include anything other than the most minimal detail about these?
Critically review a selection of clients with whom you’ve recently finished. Include clients who dropped out in the early stages as well as those that reached a planned ending. Can you identify any factors that distinguish one group from another? Reflect on any factors that may be relevant, such as clarity over respective roles, the client’s expectations of therapy, and their levels of hope and motivation. Were you and the client clearly agreed on the goals or purpose of therapy and how you would work together to achieve these? Was there a good sense of ‘fit’ between you? Were the client’s preferences accommodated, for example in terms of the model of therapy you practice?
Work out your ‘best guesses’ and make a plan. For each client that finished prematurely, formulate your best guess as to why they chose not to continue. For example: This client chose not to continue because they were sceptical about whether therapy would help them with the problems they faced. We agreed to proceed in an open, exploratory way, without sufficient agreement about where that exploration would lead. After a couple of sessions, the client was probably lacking sufficient hope that continuing therapy would lead to any relief of the symptoms and problems they were experiencing.
Once you have identified where you think the problem lies, then you can begin working on what might have made the outcome different. In this case, a greater focus on the goals and tasks of therapy in the early stages, making it clear to the client that it might take several sessions before they could expect to see real improvement, and a commitment to monitoring both progress and process, might have made a difference.
It’s possible that if you review a handful of clients, you may notice some themes emerging. Are there any critical elements that you may be routinely missing that may be undermining your clients’ early engagement? We all have habitual and preferred ways of working, but sometimes these benefit from being tested. Identify what you think you need to routinely incorporate into your early conversations and contracting with clients, as well as what you need to hold in mind with particular clients.
There’s only one way to test whether your hunches and possible solutions are correct, and that’s to apply them and review the results.
Review and benchmark. If your hunches and solutions are correct, then you should see this making a difference to the level of engagement you experience in your clients as your work with them progresses, and longer term in your levels of dropout. Here are three simple suggestions for monitoring this. First, with every new client, especially in the early stages, ask yourself two questions after each session.
- On a scale of 1 – 10, how engaged is this client in the process?
- What else might I need to do to ensure they remain engaged?
There’s nothing, of course, to stop you from finding a way of asking those questions directly of the client as well.
Second, when clients disengage unilaterally and you are unable to get feedback directly from them, work through the best guess process above as soon as you’re able, and before your memory of events begins to fade.
Lastly, make a date in your diary for a time when you will have sufficient new clients to confidently determine whether your new way of working or adjusted practice is making a difference to your level of dropout. If you completed the baseline analysis of dropout as outlined earlier, you’ll be able to determine what changes have taken place. It may be difficult to know for certain whether you can directly attribute any changes to your new practice, but if you are also monitoring client progress and disengagement in real time, then when you come to benchmark your new level of dropout, you can have reasonable confidence that it’s making a difference.
You can read the first blog in this series here.
[i] Decreasing treatment dropout by addressing expectations for treatment length. Joshua Swift & Jennifer Callaghan. Psychotherapy Research · March 2011
[ii] Comparison of Four Operational Definitions of Premature Termination. Gregory Hatchett & Heather Park. Psychotherapy: Theory, Research, Practice, Training, Vol 40(3), 2003, 226-231
[iii] CORE System User Manual. CORE System Group.
[iv] Benchmarking key service quality indicators in UK Employee Assistance Programme Counselling: A CORE System data profile. John Mellor-Clark, Elspeth Twigg, Eugene Farrell, and Andrew Kinder. Counselling And Psychotherapy Research Vol. 13 , Iss. 1, 2013
[v] Interventions to Increase Attendance at Psychotherapy: A Meta-Analysis of Randomized Controlled Trials. Kellett, S., et al. Journal of Consulting and Clinical Psychology · August 2012
[vi] Premature Termination in Psychotherapy: Strategies for Engaging Clients and Improving Outcomes. Joshua Swift & Roger Greenberg. 2012. APA, Washington. https://www.amazon.co.uk/gp/product/1433818019/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
2 replies on “What level of drop out in therapy is OK (part 2)”
Hi Barry, some belated comments from me and the team, having left the blog in the office for them to read.
‘Assessment – I am always feeling something about the paucity of training around assessment in colleges and services and how it is vital for first understanding what clients wants and needs are.’
‘Client matching – vital in my book and undervalued. We value spending time in this service thinking about it.’
‘What’s the impact of wait times?’
‘The manner of the therapist/assessor in how they introduce counselling themselves, the process and ensuring client has the control and security to feel safe and that we have the luxury of working open ended here which means clients don’t have to feel a time pressure if they don’t want to and they don’t have to do a lot of box ticking, I think this is why our clients stick around.’
Rachel – my thanks to you and your team once again and great to have them involved too.
For me the common theme which runs through all of those comments, other than waiting times, is how critical the quality of the connection is between us and the client, especially in the early stages. Strong evidence that if we don’t get that right from the off, we won’t have a client for long, let alone an engaged one.
As for waiting times, I’m sure there is a relationship between length of wait and likelihood of first session attendance, but that will be moderated by how wait lists are managed. Maybe you can explore this within your own data? Interestingly, Trusler et al found no impact of waiting times on rates of unplanned endings (Trusler et al. 2006. Waiting times for primary care psychological therapy and counselling services. Counselling and Psychotherapy Research, March 2006).