When it comes to the role of research in psychotherapy practice, there seems to be a gap between what we say we want from research and what we actually do with it in practice. We explore this rather ambivalent relationship a little further and ponder what the future might look like.

Research in psychotherapy practice: our ambivalent relationship

My sense is that among therapists there is often a deep ambivalence about research and the role that it plays in our profession and practice. This view is based more on experience than anything else, but recently I’ve read a couple of papers which seem to nicely illustrate this rather ambivalent relationship.

I’m in pretty good company in holding this view. It was eloquently expressed by Louis Castonguay and colleagues in a paper from 2010. In it they state: “It is well established that the practice of many full-time psychotherapists is rarely or nonsubstantially influenced by research.” Don’t you love that? “…nonsubstantially influenced….”  It feels like a polite way of saying that many psychotherapists would rather walk over hot coals than sully their practice with research.

My journey with research in therapy practice

I will state now that I have absolutely no interest in research as a solely academic endeavour. My brain isn’t built that way. I really struggled with maths at school. The only explanation I have for my maths ‘O’ level is that I must have been mistaken for someone else.

My interest lies in the application of research in psychotherapy practice. It started when I was manager of the counselling service for the Royal College of Nursing (RCN). It was part of my role to promote the wider development of staff counselling provision for health service staff. To put it bluntly, the RCN couldn’t and wouldn’t pick up the damage to staff from poor employment practice elsewhere.

We needed health service employers to attend to their own damage. An evidence base would have helped greatly in our task. At the time, there wasn’t one. So, we decided to try and grow our own. In this way began my relationship with research in psychotherapy practice. I’ve written about that formative journey at the RCN in an earlier blog.

The role of research in psychotherapy practice: what we say and what we do

I offer you two contrasting perspectives on how therapists seem to view and value the place of research in psychotherapy practice. The first is from a paper published in 2006 by Mike Lucock and colleagues titled A Survey of Influences on the Practice of Psychotherapists and Clinical Psychologists in Training in the UK.

The authors surveyed 95 qualified psychotherapists of various therapeutic orientations and 69 psychologists in clinical training to determine the main influences on their practice. The survey used the Questionnaire of Influencing Factors on Clinical Practice in Psychotherapies (QuIF-CliPP), which contains 39 items under four categories: training (6 items), literature (11), practice (12) and personal factors (10). Each item was scored on a seven-point rating scale from 0 = ‘not at all’ to 6 = ‘a great deal’.

The three main areas addressed by the study were:

The extent to which evidence-based practice (EBP) is an influencing factor in clinical practice

The extent to which other factors play a part, and

What therapists see as the most influential factors on their practice.

The table below shows the scores, for qualified therapists and trainee psychologists, for those factors with a mean score of 4.5 or higher. When asked to identify the most influential factors, 16 of the qualified therapists rated current supervision as the single most influential factor, and 39 rated it as one of the three most influential. Among trainees, no single factor stood out clearly, though professional training was the most frequently rated.

Comparing the responses of the two groups revealed statistically significant differences for 13 of the factors. Trainees rated textbooks, electronic journals and databases, current and past supervision, professional training, client characteristics and psychological formulation more highly. Qualified therapists rated major life events, personal therapy, supervision provided, conferences and providing teaching and training as more influential.

Theory-based journal articles received mean ratings of 3.7 and 3.4 from qualified therapists and trainee psychologists respectively. Otherwise, the lack of prominence of research and evidence-based practice resources is striking. I’m in two minds about the higher ratings given by trainee psychologists to electronic journals and databases. Is this down to the academic rigour of their psychology training, or is it that as we accrue experience as practitioners, we pay less attention to research? Maybe it’s a combination of both.

The second perspective is from a Canadian study from 2014 which surveyed a total of 1,019 participants, mostly practising clinicians, about the importance to their clinical work of 41 psychotherapy research topics. Participants were asked “How important is it to you to have practice-based research on…” and items were rated on a five-point scale from 0 = ‘not important’ to 4 = ‘extremely important’.

The topics were ranked by the mean level of importance attributed to them by respondents. The 10 highest-rated topics are shown in the table above. Contrast these with the items at the other end of the table: ranked 39, 40 and 41 respectively were ‘Using manualized psychotherapies’, ‘Adherence to manualized treatments’ and ‘The practice of matching client and therapist characteristics (e.g., based on culture, gender, sexual orientation)’.

Am I alone in sensing that the top- and bottom-ranking items have a rather different quality about them? To me, it seems that many of those ranked highest are concerned more with the wider common or contextual factors that underlie therapeutic efficacy than with the specifics of manualised therapies or whether adherence to manualised treatments gets better results.

The practitioners in this study had an average of 17.5 years’ practice experience. It might therefore be that they were strict model adherents who felt no need for further research on their preferred model or the value of adhering to it, and so were more interested in general questions about what works.

It might be that, but I get a sense there’s something wider going on: a move away from model purity and towards greater integration; a proper acknowledgement that no matter how much we refine our models, they will still deliver pretty much the same outcomes; and a greater valuing of the relationship factors increasingly shown to underpin change.

Integrating research into therapy practice: the next frontier?

Sadly, despite all we have learned about what makes for effective therapy, it would appear that, overall, therapy outcomes haven’t improved in the last forty years. For all that we may think we know, our results still seem to be flatlining. Surely we can do better than this?

We know that, in common with other professions, we are prone to self-assessment bias and can easily over-estimate our abilities. We also know that some therapists are more effective than others. I wonder if a potential starting point to the next stage of our evolution as a profession might lie in a collective commitment to benchmarking our individual and collective impacts?

Clearly, not all therapy is about distress or symptom reduction or the achievement of goals that can be described in behavioural terms. But an awful lot is, and a great deal of that can be measured. I’d like to think that instead of viewing relative performance as threatening, we might instead see it as an opportunity to learn and develop.

Over the past three to four years, one of the things I’ve been focusing on is trying to reduce my level of dropout. When I last wrote about this a few months back, I was able to report that I’d significantly reduced dropout among my private practice clients over the past two years. Because of the pandemic, I’ve had rather fewer clients this year than in preceding years, but looking back at clients who started in 2020, of the 24 who have now finished, only one dropped out.

In the coming year, I want to get my rather patchy outcome data into better shape. I’d like to start benchmarking my own ending and outcome data and start having conversations with practitioners who are interested in doing the same. If that describes you, then get in touch and we can maybe start sharing what we’re doing behind our therapy room doors that actually works.

Wishing you a safe and rewarding festive season, whether you celebrate or not!

Just like you, we thrive on feedback. Please leave your thoughts on what you’ve read in the comments section below.

Posted by: Barry McInnes
