SDQ, YP-CORE and SWEMWBS. You’d be hard pressed to find three more different measures to test for their acceptability, applicability and appropriateness in an online counselling service for young people. So how did the young people who contacted Kooth experience them? And what do we know about the acceptability of measures to clients more generally?

About Kooth, and about the research question

Kooth is a leading provider of digital mental health services for young people in the UK. Launched in 2004, it has grown to the extent that it is now commissioned by over two thirds of Clinical Commissioning Groups (CCGs) in England, with over 100,000 users in 2018.

Online provision has come a long way since 2004, and so has the use in therapy of measures of outcome and therapy process. But while measures like the YP-CORE and the Strengths and Difficulties Questionnaire (SDQ) are in relatively routine use in counselling settings for young people, they remain relatively untested in online environments.

It was with this in mind that Kooth sought to answer the following research question:

Are there discernible differences in the acceptability, applicability and appropriateness between three selected measures completed at registration for an online counselling service for young people?

SDQ, YP-CORE and SWEMWBS: The three measures in question

 

Strengths and Difficulties Questionnaire (SDQ)

A 25-item emotional and behavioural screening questionnaire for children and young people, widely used in CAMHS settings, comprising 5 sub-scales of 5 items each: emotional symptoms, conduct problems, hyperactivity/inattention, peer relationship problems and prosocial behaviour.

Young Person’s Clinical Outcomes in Routine Evaluation (YP-CORE)

A brief (10-item) scale developed for use in school and youth counselling services for young people aged 11–16. It covers the domains of well-being, problems and symptoms, functioning and risk.

Short Warwick–Edinburgh Mental Wellbeing Scale (SWEMWBS)

The Warwick–Edinburgh Mental Wellbeing Scale was developed to enable monitoring of mental wellbeing in the general population and evaluation of wellbeing projects, programmes and policies. The shorter 7-item version is derived from the original 14-item measure.

Determining acceptability, applicability and appropriateness

To be properly useful, measures need to meet basic standards of acceptability, applicability and appropriateness. They should be sufficiently acceptable to responders (as well as practitioners where appropriate) to ensure they are completed. They should be relevant to responders’ concerns, and appropriate to the circumstances and setting in which they are delivered.

One of the three measures was randomly allocated to each new registrant to the online service over a 10-week period between September and November 2019. In all, 7,235 young people signed up, consented to share their data, and were offered their randomly assigned measure on completion of registration.
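
To make the design concrete, here is a purely illustrative sketch of that allocation step: each new registrant is offered one of the three measures, chosen uniformly at random. This is not Kooth’s implementation; the function name, the seed and the simulated registrant count are assumptions for illustration only.

```python
import random
from collections import Counter

MEASURES = ("SDQ", "YP-CORE", "SWEMWBS")

def assign_measure(rng: random.Random) -> str:
    """Return one of the three measures, chosen uniformly at random (illustrative only)."""
    return rng.choice(MEASURES)

# Simulate the allocation for 7,235 consenting registrants
rng = random.Random(2019)  # seeded only so this illustration is reproducible
allocations = [assign_measure(rng) for _ in range(7235)]
print(Counter(allocations))  # expect roughly equal thirds across the measures
```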

Acceptability, applicability and appropriateness of the measures were determined as follows:

Acceptability

Registrants were given up to 5 days after registration, and up to three refusals, to complete the measure. Acceptability was determined by capturing how many young people chose to complete the measure they were assigned.

Applicability

Applicability of the measure was determined by asking the follow-up question: ‘Did you understand and relate to those questions?’ Respondents could answer yes or no, or skip the question.

Appropriateness

The level of appropriateness of each measure was determined by the follow-up question: ‘How are you feeling after answering those questions?’ Respondents could select from the following options: Better, Same, Worse, Unsure. If I’m honest, I was left uncertain about how the question addressed the issue of appropriateness, but maybe I’m missing something.

What was young people’s experience of the three measures?

Tests were conducted to determine whether there were significant differences between the measures on each of the three variables: acceptability, applicability and appropriateness. Effect sizes were also calculated to establish the magnitude of any differences.

Acceptability (measure completion)

In general, each measure achieved relatively high levels of completion. Completion rates for the SDQ, SWEMWBS and YP-CORE were 74.9%, 77.9% and 77.4% respectively.

[Chart showing differential completion rates between measures]

Differences in completion rates between the SWEMWBS and the SDQ were statistically significant. The difference between the YP-CORE and the SDQ did not reach significance.
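
For readers who like to see the arithmetic, here is a minimal sketch of how differences in completion rates of this kind might be tested, using a chi-square test with Cohen’s h as a simple effect size. It is not the authors’ analysis code, and the per-group sample sizes are assumed to be roughly equal thirds of the 7,235 registrants, purely for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

n_per_group = 7235 // 3                                      # assumption: ~2,411 offered each measure
rates = {"SDQ": 0.749, "SWEMWBS": 0.779, "YP-CORE": 0.774}   # reported completion rates
completed = {m: round(r * n_per_group) for m, r in rates.items()}

# Omnibus test: does completion depend on which measure was offered?
table = np.array([[completed[m], n_per_group - completed[m]] for m in rates])
chi2, p, dof, _ = chi2_contingency(table)
print(f"Omnibus chi-square = {chi2:.2f} (df = {dof}), p = {p:.4f}")

# Pairwise comparison (e.g. SWEMWBS vs SDQ) via a 2x2 chi-square test
pair = np.array([table[list(rates).index(m)] for m in ("SWEMWBS", "SDQ")])
chi2_pair, p_pair, _, _ = chi2_contingency(pair)

# Cohen's h: effect size for the difference between two proportions
def cohens_h(p1: float, p2: float) -> float:
    return 2 * np.arcsin(np.sqrt(p1)) - 2 * np.arcsin(np.sqrt(p2))

h = cohens_h(rates["SWEMWBS"], rates["SDQ"])
print(f"SWEMWBS vs SDQ: chi-square = {chi2_pair:.2f}, p = {p_pair:.4f}, Cohen's h = {h:.2f}")
```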

Applicability (Did you understand and relate to the questions?)

The degree to which respondents understood and related to the questions was almost identical for the SDQ and YP-CORE, and rather lower for the SWEMWBS. The differences between the SWEMWBS and each of the other two measures were statistically significant.

[Chart showing relatability of each measure]

Appropriateness

I won’t go into too much detail here, except to say that in response to the question ‘How are you feeling after answering those questions?’ between 27 and 33% responded with ‘Unsure’. In the words of the authors, ‘The level of ‘unsure’ suggested a lack of clarity around the question, with a clear sense that the majority did not report any significant change in mood after completing the measure.’

Who is more reluctant to use outcome measures – clients or therapists?

It often seems we attribute non-engagement with measures to reluctance on the part of clients to complete them. But is that true? Or is that a convenient projection on our part? Is it more the case that it’s our attitudes, prejudices even, that determine whether and how clients engage with measures?

This study involved users engaging with measures at the point of registration with Kooth’s online service. Even at this early stage there seems to have been a high degree of measure acceptability. Without the involvement of a therapist or other person to offer a rationale, at least three-quarters of registrants were prepared to complete one of the three measures offered.

In 2019 BACP commissioned YouGov to conduct a survey of UK adults’ perceptions of therapy. Of some 5,731 respondents, approximately 50% indicated that they had previously received therapy. Of these, 80% said they were happy to complete questionnaires during therapy, and two-thirds felt that completing measures helped both them and their therapists to track their progress during therapy.

A study by the Association for Counselling and Therapy Online (ACTO) in 2019 found a similar picture. A small sample of clients were asked about their experience of completing sessional measures during online therapy. As their responses show, all were broadly positive or raised no objections.

In other feedback the same clients were overwhelmingly positive about their experience, one going so far as to say:

“I was really happy to answer all the questions in the first session and was a little disappointed when these were reduced to only 10”.

I’ve long been convinced that it’s therapists who tend to have a bigger problem with using measures than clients do. It’s hardly surprising. Most of us were never trained in their use, and most of our work isn’t directly observed. Using measures can feel exposing. We may work within services where we are required to use measures and may feel our performance is being judged. There are lots of reasons why we might feel ambivalent.

Despite all those reasons, however, a recently published meta-analysis has shown that feedback can improve outcomes and reduce dropout. Across its final sample of 58 individual studies, it found that using measures to track clients’ progress in therapy improves symptom reduction and reduces dropout by 20%.

In the end, however, it’s not getting feedback that makes the difference, it’s what we do with it. All being well, it may simply serve to confirm that our work with the client is progressing as we would expect. If that’s the case, we keep on doing what we’re doing. If not, then perhaps we need to start paying a little more attention.

I leave you with the words of Mike Lambert and colleagues in a paper from 2004, which we profiled in an article in Therapy Today in 2018:

‘It seems likely that therapists become more attentive to a patient when they receive a signal that the patient is not progressing. Evidence across studies suggests that therapists tend to keep “not on track” cases in treatment for more sessions when they receive feedback, further reinforcing the notion that feedback increases interest and investment in a patient.’

What’s been your experience? Where has your journey with measures taken you? We’d love to hear your thoughts. 

Leave a comment

Just like you we thrive on feedback. Please leave your thoughts on what you’ve read in the comments section below.


Posted by: Barry McInnes

4 replies on “How acceptable are outcome measures to young people?”

  1. The Kooth research is, to me, odd. Could acceptability not be measuring compliance? That doesn’t make it acceptable. Applicability: yes, I can understand and relate to the questions, but that doesn’t mean they are relevant to the situation I’m here to address. That links to appropriateness; I too fail to understand how this question, about how it makes me feel, can be a measure of appropriateness.
    As I’ve said before, I fail to see how “standardised” outcome measures are the best tool to help an individual monitor their progress. I do think they’re a service or therapist need. If it’s a research project, fine, but be honest and offer payment.

    1. Michael, I imagine (though I don’t know) that the research is a first step to determining a standard measure for use by the service. It doesn’t explicitly say.
      I think I’ve previously said that I also don’t think standardised measures are the best way to monitor progress. I think I’ve also said that they are one way, and one that, used well, can support the process of therapy. But their use can also be clumsy and crass, and we both know that’s not in the service of the client. 🙂

  2. Hi Barry,
    My thought about this is that our society is becoming much more capitalistic, in the sense that you don’t get much for nothing. So people expect to have to fill in forms and undertake an assessment to access a service, even one they are paying for.
    My experience has been that most clients are okay about completing the forms, particularly the full CORE – which, considering it is 34 questions long and the language can be a bit formal, surprised me.
    As you say, the important thing is what we do with the results and this is where a lot of clients do seem to respond positively. If I get a client to complete the CORE 34, I take them through an analysis of what they said. I am sure that some are not interested and may think that I am just telling them things they already know, but for most it seems to be a positive experience and sometimes therapeutically helpful for them to see their thoughts in words on paper.
    My general concern is about the number of different measures available. Whereas CORE is a measure of general distress, which I can see makes sense to most clients, I worry that the PHQ-9 and GAD-7 are not always applicable and risk encouraging clients to consider the idea that they require medication for their condition.
    I would be interested in your thoughts about how measures proliferate and if this is a good thing or not, if you have not already done a blog on it?

    1. Hi David – great question!

      I’m a firm believer that we can learn to use measures therapeutically if we put our minds to it. I’m reminded of a client I had a first session with yesterday. Running through the items with him, the one he scored highest on was “I have thought I am to blame for my problems and difficulties”. That led to a most illuminating conversation about his capacity for self-blame and punishment, which will undoubtedly form part of our therapeutic work together.

      On the issue of measure proliferation, I’m thinking about what we might call the parallel process of therapy model proliferation. It seems that we always feel that what we have can be improved upon in some way. So we now have hundreds of brand-name therapy models. Is one more effective than another? I think we know the answer to that. I guess there’s also the question of measure developers wanting to make a name for themselves. Just as in the case of therapy models, maybe there’s gold in those hills.

      So I’m not sure that I have a definitive answer to your question, but I hope nonetheless that’s a small contribution!
