SDQ, YP-CORE and SWEMWBS. You’d work hard to find three more different measures to test for their acceptability, applicability and appropriateness in an online counselling service for young people. So how did the young people who contacted Kooth experience them? And what do we know about the acceptability of measures to clients more generally?
About Kooth, and about the research question
Kooth is a leading provider of digital mental health services for young people in the UK. Launched in 2004, it has grown to the extent that it is now commissioned by over two thirds of Clinical Commissioning Groups (CCGs) in England, with over 100,000 users in 2018.
Online provision has come a long way since 2004, and so has the use in therapy of measures of outcome and therapy process. While measures like YP-CORE and the Strengths and Difficulties Questionnaire (SDQ) are relatively routine in counselling settings for young people, they remain relatively untested in online environments.
It was with this in mind that Kooth sought to answer the following research question:
Are there discernible differences in the acceptability, applicability and appropriateness between three selected measures completed at registration for an online counselling service for young people?
SDQ, YP-CORE and SWEMWBS: the three measures in question
Strengths and Difficulties Questionnaire (SDQ)
A 25-item emotional and behavioural screening questionnaire for children and young people widely used in CAMHS settings, comprising 5 sub-scales of 5 items each. The sub-scales are emotional symptoms, conduct problems, hyperactivity/inattention, peer relationship problems and prosocial behaviour.
Young Person’s Clinical Outcomes in Routine Evaluation (YP-CORE)
A brief (10 item) scale developed for use in school and youth counselling services for young people aged 11 – 16. It covers the domains of well-being, problems and symptoms, functioning and risk.
Short Warwick–Edinburgh Mental Well-Being Scale (SWEMWBS)
The Warwick-Edinburgh Mental Wellbeing scale was developed to enable monitoring of mental wellbeing in the general population and evaluation of wellbeing projects, programmes and policies. The shorter 7-item version of the measure is derived from the original 14-item measure.
Determining acceptability, applicability and appropriateness
To be properly useful, measures need to meet basic standards of acceptability, applicability and appropriateness. They should be sufficiently acceptable to responders (as well as practitioners where appropriate) to ensure they are completed. They should bear relevance to the concerns of responders. They should also be relevant to the circumstances and setting in which they are delivered.
Each new registrant to the online service was randomly allocated one of the three measures over a period of 10 weeks between September and November 2019. In all, 7,235 young people signed up, consented to share their data, and were offered their allocated measure at completion of registration.
Acceptability, applicability and appropriateness of the measures were determined as follows:
Registrants had up to 5 days after registration, and up to three refusals, to complete the measure. Acceptability was determined by capturing how many young people chose to complete the measure they were assigned.
Applicability of the measure was determined by asking the follow-up question: ‘Did you understand and relate to those questions?’ Respondents could answer yes or no, or skip the question.
The level of appropriateness of each measure was determined by the follow-up question: ‘How are you feeling after answering those questions?’ They were able to select from the following options: Better, Same, Worse, Unsure. If I’m honest I was left uncertain about how the question addressed the issue of appropriateness, but maybe I’m missing something.
What was young people’s experience of the three measures?
Tests were conducted to determine if there were significant differences between the three variables of acceptability, applicability and appropriateness. Additionally, effect sizes were calculated to establish the magnitude of any differences.
Acceptability (measure completion)
In general, each measure achieved relatively high levels of completion. Completion rates for the SDQ, SWEMWBS and YP-CORE were 74.9%, 77.9% and 77.4% respectively.
Differences in completion rates between the SWEMWBS and SDQ were statistically significant. The difference between the YP-CORE and the SDQ did not reach the level of significance.
Applicability (Did you understand and relate to the questions?)
The degree to which respondents understood and related to the questions was almost identical for the SDQ and YP-CORE, and rather lower for the SWEMWBS. The differences between the SWEMWBS and the SDQ and YP-CORE were statistically significant.
Appropriateness (How are you feeling after answering those questions?)
I won’t go into too much detail here, except to say that in response to the question ‘How are you feeling after answering those questions?’ between 27 and 33% responded with ‘Unsure’. In the words of the authors, ‘The level of ‘unsure’ suggested a lack of clarity around the question, with a clear sense that the majority did not report any significant change in mood after completing the measure.’
Who is more reluctant to use outcome measures – clients or therapists?
It often seems we attribute non-engagement with measures to reluctance on the part of clients to complete them. But is that true? Or is that a convenient projection on our part? Is it more the case that it’s our attitudes, prejudices even, that determine whether and how clients engage with measures?
This study involved users engaging with measures at the point of registration with Kooth’s online service. Even at this early stage there seems to have been a high degree of measure acceptability. Without the involvement of a therapist or other person to offer a rationale, at least three-quarters of registrants were prepared to complete one of the three measures offered.
In 2019 BACP commissioned YouGov to conduct a survey of UK adults’ perceptions of therapy. Of some 5,731 respondents, approximately 50% indicated that they had previously received therapy. Of these, 80% said they were happy to complete questionnaires during therapy, and two-thirds felt that completing measures helped both them and their therapists to track their progress during therapy.
A study by the Association for Counselling and Therapy Online (ACTO) in 2019 found a similar picture. A small sample of clients was asked about their experience of completing sessional measures during online therapy. As their responses show, all were broadly positive or raised no objections.
In other feedback the same clients were overwhelmingly positive about their experience, one going so far as to say:
“I was really happy to answer all the questions in the first session and was a little disappointed when these were reduced to only 10”.
I’ve long been convinced that it’s therapists, rather than clients, who tend to have the bigger problem with using measures. It’s hardly surprising. Most of us were never trained in their use, and most of our work isn’t directly observed. Using measures can feel exposing. We may work within services where we are required to use measures and may feel our performance is being judged. There are lots of reasons why we might feel ambivalent.
Despite all those reasons, however, a recently published meta-analysis has shown that feedback can improve outcomes and reduce dropout. Drawing on a final sample of 58 individual studies, it found that using measures to track clients’ progress in therapy improves symptom reduction and reduces dropout by 20%.
In the end, however, it’s not getting feedback that makes the difference, it’s what we do with it. All being well, it may simply serve to confirm that our work with the client is progressing as we would expect. If that’s the case, we keep on doing what we’re doing. If not, then perhaps we need to start paying a little more attention.
I leave you with the words of Mike Lambert and colleagues in a paper from 2004, which we profiled in an article in Therapy Today in 2018:
‘It seems likely that therapists become more attentive to a patient when they receive a signal that the patient is not progressing. Evidence across studies suggests that therapists tend to keep “not on track” cases in treatment for more sessions when they receive feedback, further reinforcing the notion that feedback increases interest and investment in a patient.’
What’s been your experience? Where has your journey with measures taken you? We’d love to hear your thoughts.
Leave a comment
Just like you, we thrive on feedback. Please leave your thoughts on what you’ve read in the comments section below.