Estimated reading time: 7 minutes
Table of contents
- Why all the noise about progress feedback?
- What exactly are routine outcome measurement and clinical feedback systems?
- Fast forward to 2021: A new meta-analysis of studies into progress feedback
- What difference did progress feedback make to outcomes?
- What moderated the impact of feedback, and what didn’t?
- Beyond the mean effects, what’s the potential for progress feedback?
- Why we shouldn’t expect instant results
Why all the noise about progress feedback?
Back in 2003, Professor Mike Lambert and colleagues published a paper titled Is it time for clinicians to routinely track patient outcome? A meta-analysis. The paper was a meta-analytic review of three large-scale studies whose findings suggested that formally monitoring client progress had a significant impact on outcomes for clients showing a poor initial response to therapy.
Back then, the application of routine outcome measurement (ROM) and progress monitoring was still in its relative infancy. Eighteen years on, and after dozens more studies on the subject (some summarised previously), the question posed in the paper’s title is still a hot topic of debate in therapeutic circles. Indeed, few issues seem as guaranteed to divide opinion as the place of ROM in routine practice.
A further meta-analysis, taking account of 58 studies in all, has recently been published. In the paragraphs that follow, we consider its findings and wider implications.
What exactly are routine outcome measurement and clinical feedback systems?
The terms routine outcome measurement and clinical feedback systems (CFS) appear widely in psychological therapy research. While their definitions are often confused and conflated, the two are distinctly different. A recent paper by Christian Moltu and colleagues provides a helpful clarification of the terms:
“ROM is the practice of collecting patient reported data on mental health status, symptoms and development throughout treatment in naturalistic settings. In CFS, information provided by the patient through ROM is made immediately available and actionable in the sessions to support clinical conversations, establish and evaluate treatment focus and goals.”
So, ROM refers to the practice of collecting data on a routine basis, and CFS refers to how that data is made available to therapists and clients to support conversations about progress and the focus of therapeutic work.
Fast forward to 2021: A new meta-analysis of studies into progress feedback
Bringing the work of Mike Lambert up to date, Kim de Jong and colleagues set out to conduct the most comprehensive meta-analysis on the effectiveness of progress feedback in psychological treatments in curative care to date. You can read their findings in full here.
The study includes data from 58 studies in all, resulting in 110 effect sizes for 21,699 clients in both randomised controlled trial and naturalistic settings. For 27 studies, data were also available for clients who were ‘not on track’ (NOT), i.e. those whose actual progress was at odds with their expected treatment response.
The primary focus was the effect of progress feedback on symptom reduction. This was based on the difference in post-therapy symptom reduction between patients who received treatment as usual (TAU; the control group) and those who received therapy supplemented with progress feedback (the feedback group). Additionally, the study examined the effects of feedback on dropout rates, the percentage of clients who deteriorated, and treatment duration.
The study also examined a range of variables that might moderate the impact of feedback. These included the outcome measure used, feedback type, feedback timing and frequency, country and year in which the study was conducted, and a range of treatment variables that included intensity and setting.
What difference did progress feedback make to outcomes?
Across a total of 58 studies, a small but significant effect (d = 0.15) was found in favour of progress feedback versus TAU control groups. A marginally larger effect (d = 0.17) was found for clients who were not on track.
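For readers less familiar with effect sizes, a brief aside (mine, not the authors’): Cohen’s d expresses the difference between two group means in units of their pooled standard deviation,

d = (M_feedback − M_control) / SD_pooled

so d = 0.15 means the average client receiving feedback ended treatment roughly 0.15 standard deviations better off than the average client in treatment as usual. Against Cohen’s conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), this sits at the lower end of “small”.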
Regarding dropout, the overall dropout rate for control groups was 24.5%, compared with 20.9% for feedback groups. This corresponds to an approximately 20% greater chance of dropping out in the control condition than in the feedback condition.
No significant effect was found for feedback on the rate of deteriorated cases, nor the number of sessions used by clients.
What moderated the impact of feedback, and what didn’t?
In the full sample (i.e. including on-track and NOT clients) four moderators were found to significantly affect outcome:
- Outcome instrument. Studies using the Outcome Rating Scale (part of the PCOMS system; d = 0.34) had significantly larger overall effect sizes than those using the Outcome Questionnaire-45 (OQ-45; d = 0.11) or other outcome instruments (d = 0.12). [1]
- Feedback system. Studies using the PCOMS feedback system (d = 0.24) also had larger effect sizes than those using the OQ System (d = 0.13) or other feedback systems (d = 0.07).
- Study location. Studies conducted in the US (d = 0.23) resulted in larger effect sizes than those conducted elsewhere (d = 0.11).
- Year of publication. Studies published in later years reported smaller effects: on average, effect sizes declined by 0.02 per year since the first study in 2001.
Feedback type was found to be a significant moderator in the NOT sample. Studies using clinical support tools (e.g. measures of the working alliance or social support) were more effective (d = 0.36) than feedback systems that presented expected treatment response (ETR) curves (d = 0.12) or raw scores (d = 0.04).
Beyond the mean effects, what’s the potential for progress feedback?
The mean effect size found across the studies examined by de Jong and her colleagues was, at d = 0.15, a modest one. The problem with mean effect sizes, of course, is that they tell us everything about the average and nothing about the range and potential. To understand the potential of progress feedback to improve outcomes we need to look at individual studies.
In 2005 Mike Lambert summarised the findings of four large-scale studies evaluating the effects of providing feedback about clients’ progress under four conditions:
- TAU – therapists received no feedback on clients who were NOT
- T-Fb – therapists received feedback on clients who were NOT
- T-Fb+CST – therapists received feedback for NOT clients, plus the option of using a range of clinical support tools, e.g. an alliance measure or a measure of clients’ social support
- T/P-Fb – both therapists and clients received feedback when clients were NOT
Progressive increases in the rate of clinical and/or reliable change were seen across the conditions: TAU (21%), T-Fb (34.9%), T-Fb+CST (49.1%) and T/P-Fb (56%). The improvement rate achieved when feedback was given to both clients and therapists was more than 2.6 times the rate for treatment as usual (56% versus 21%).

A study by Heidi Brattland and colleagues published in 2018 investigated the effects of the PCOMS system in an adult outpatient setting, including whether the effects differed with the timing of the treatment within the 4-year implementation period of PCOMS.
Clients (n = 170) were randomised to TAU or ROM, delivered by twenty therapists who provided therapy in both conditions. Clients in the ROM condition were 2.5 times more likely to demonstrate improvement than those in the TAU condition. Of particular note, the superiority of ROM over TAU increased significantly over the duration of the study.
“The increasing superiority of ROM compared with TAU over time was due to a significant improvement in the ROM condition, and a corresponding nonsignificant deterioration in the TAU condition.”
Why we shouldn’t expect instant results
Implementation of ROM and CFS is a process, not an event. The benefits are likely to be cumulative, as the Brattland study illustrates. Practitioners need to familiarise themselves with introducing measures to clients, with interpreting the feedback from CFS, and with using that feedback collaboratively with clients. We are not simply putting a new administrative procedure in place. Rather, we’re learning a new set of skills.
My own experience at the Royal College of Nursing echoes this. It took a considerable period, but over time the benefits of ROM and CFS became clear. Starting in 2000 with a reliable improvement rate of 62% and an unplanned ending rate of 45%, those figures improved to the point where, in 2004, 85% of our clients were reliably improved and our unplanned ending rate had fallen to 16% (figures for 2001 are sadly not available).

While the study by Kim de Jong and colleagues is valuable in confirming the overall contribution of progress feedback to improving outcomes and reducing dropout, we need to look at individual service and practitioner examples to get a sense of its true potential.
[1] As the authors speculate, this could be caused by a higher sensitivity to change, but could also be the result of bias, for instance because the instrument is often completed by the patient in the presence of the therapist.
Can I suggest that you do NOT use acronyms and spell out what were not particularly long full names? I found I was constantly going backwards and forwards to work out what you were saying.
Hi Louise, I’m sorry that the use of acronyms got in the way of the flow for you!
I have to agree, I was quite confused near the end. If this could be changed I would definitely re-read and try to understand the takeaway points.
Kim de Jong is one of the most balanced and systematic reviewers in this meta-analytic literature. As a long-time user of the OQ system, I have followed with interest her evaluations of PROMS over the years. Her reviews have served as a benchmark to evaluate clinical efficacy of the various systems.
Given your summary of her results above, may I ask if you concur with her inference (page 14)?
“It appears that not all feedback systems are equally effective in all patients. Studies using PCOMS have larger effect sizes in the full sample, but have negligible effect sizes in the NOT subgroup. For this meta-analysis, Duncan and Reese conducted new analyses on the NOT subgroup for six studies using PCOMS. Consequently, the number of studies with NOT cases using PCOMS is higher in our meta-analysis than in previous ones, which may dilute the effect in NOT cases. The OQ System seems more effective in NOT cases, especially when it is used in combination with CSTs, but seems to be doing less well in the full sample. Thus, PCOMS seems to be more effective in OT cases, whereas the OQ System works better in NOT cases. This is in line with how these feedback systems have been designed. The OQ System aims to give feedback signals for patients that did not progress well and strives to improve treatment outcomes for these patients (Lambert, 2007), whereas PCOMS has been constructed to be completed and discussed in session, thereby promoting better communication between patient and therapist.”
Hi David and thanks for your comments. Broadly speaking, assuming that I’ve understood correctly, her inference would seem to make sense. It makes sense to me that a system which promotes a dialogue between clients and therapists for all cases, whether on track or not, would have the potential to have a greater impact across the board than one focusing only on clients who are not on track.
So, even for clients who are on track, those routine conversations might have the effect of making the intervention even more effective than it would have been without, if that makes sense? I hope that’s answered your question?
Question answered. Thank you for your considered response – and for keeping this site current – it really helps keep those outside of the NHS informed on lessons learned in optimization through measurement and feedback.
Gotta love that dashboard!
David – many thanks for taking the time to comment. Feedback like yours is what powers us and is much appreciated!
Cheers
Barry