Blind trials are experiments in which the subjects do not know what kind of treatment they are receiving, so their expectations about the treatment cannot affect the results. Unsurprisingly, researchers who aim to prove some pet theory or other rarely use a blind trial.
Instead, they deliberately allow subjects’ expectations to influence the outcome.
In many cases, subjects in a control group receive what is euphemistically known as ‘treatment as usual’ (TAU), which means they are pretty much ignored by the researchers and can be guaranteed to have low expectations. At the same time, subjects in a treatment group get lots of attention from the researchers, guaranteeing that their expectations will be higher. In this way, almost any treatment can appear to be supported by research evidence, even if the treatment is in fact quite useless.
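The confound can be illustrated with a toy simulation (all numbers here are hypothetical, chosen only to make the point): even a treatment with zero true effect looks effective when the treatment group's raised expectations feed into the measured outcome.

```python
import random

random.seed(42)

TRUE_TREATMENT_EFFECT = 0.0   # the therapy itself does nothing
EXPECTATION_BOOST = 1.0       # extra improvement from attention and raised expectations
N = 1000                      # subjects per group

def outcome(expectation_boost):
    # baseline improvement is just noise around zero
    return random.gauss(0, 1) + TRUE_TREATMENT_EFFECT + expectation_boost

tau_group = [outcome(0.0) for _ in range(N)]                      # 'treatment as usual': ignored, low expectations
treatment_group = [outcome(EXPECTATION_BOOST) for _ in range(N)]  # lots of attention, high expectations

mean_diff = sum(treatment_group) / N - sum(tau_group) / N
print(f"apparent effect of a useless treatment: {mean_diff:.2f}")
```

The group difference comes entirely from the expectation term, yet an unblinded trial would report it as a treatment effect.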
To receive the highest accolade, ‘randomized controlled trial’ (RCT), researchers need only allocate subjects to the two groups randomly. They are allowed to hand-pick the subjects for the experiment (described in the published paper as ‘screened for suitability’) as long as they do so before the randomization takes place.
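A minimal sketch of how little ‘randomized’ guarantees (function and data names are hypothetical): the screening step is free to be as selective as it likes, so long as the coin-flip happens afterwards.

```python
import random

def run_rct_allocation(candidates, screen, n_per_group, seed=0):
    """Allocate screened subjects to two groups at random.

    Any amount of hand-picking can happen inside `screen`; as long as
    it runs *before* this random split, the trial is still an 'RCT'.
    """
    eligible = [c for c in candidates if screen(c)]  # 'screened for suitability'
    rng = random.Random(seed)
    rng.shuffle(eligible)
    treatment = eligible[:n_per_group]
    control = eligible[n_per_group:2 * n_per_group]
    return treatment, control

# Hypothetical example: screen out anyone aged 40 or over, then randomize.
candidates = [{"id": i, "age": 20 + i % 50} for i in range(100)]
treatment, control = run_rct_allocation(candidates, lambda c: c["age"] < 40, 20)
print(len(treatment), len(control))  # 20 20
```

The randomization is genuine, but the population it randomizes was chosen first.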
Carrying out experiments like this, with hidden factors such as expectations free to influence the results, is a widespread abuse of scientific method. And the abuse is supported by widespread collusion from others, who hold up the RCT as a pinnacle of scientific achievement.
This will not last.
A recent paper in the journal Psychological Medicine reports on the evidence base for CBT when the quality of the research methodology is taken into account. The relatively few blind trials that have been carried out have yielded some disappointing results:
Conclusions: CBT is…effective in major depression but the size of the effect is small…
The paper is already provoking comment. For example, at the science news service PhysOrg.com (quoting one of the paper’s authors):
“The results of this review are important because in March NICE re-approved CBT for use in all people with schizophrenia. The Government is also investing millions of pounds to provide CBT for depression and anxiety in 250 dedicated therapy centres across England,” said Professor Laws. “Yet the evidence here is that the effectiveness of this form of therapy may be less than previously thought, to the point of being non-existent in schizophrenia.”
It looks like CBT is not really as evidence-based as we have been told. It was all a mistake (or a hoax, depending on how cynical you are), brought about by incompetent (or malevolent) academics who couldn’t (or wouldn’t) design their experiments properly.
So…what to do next? I think I might start a blog about cookery, perhaps open a little restaurant when the recession starts to ease.
Hang on a minute, what about that chap who left here this morning after just his third session of CBT, already virtually cured of a disabling condition that had blighted his life for more than ten years? Does that not mean anything? And it’s not just him — what about all the others whose lives CBT has permanently transformed in only weeks or months?
Hmmm…perhaps the cookery blog can wait a while.
What’s really going on is worse than just an argument over scientific methodology.
It’s true, as that paper points out, that research studies should be blind and use properly-constructed controls, but those are not the only requirements of good research into CBT. The studies should also contain quality checks on the treatment itself — checks to ensure that the CBT being used really is CBT.
If you were doing a drug trial, you would certainly assay the pills to ensure that the ingredients are what they are claimed to be. That has to be done in trials of CBT, too.
When it’s not done, an experiment might show that “CBT” had very little effect, but no one knows what kind of CBT that was — no one knows what was in the pills. That’s no use at all, even if the trial was an RCT, even if it was blind, and even if it had properly-constructed controls.
The challenge that the scientists have to step up to is this. Beck and many others after him observed that certain techniques have a dramatic beneficial effect on people with certain psychological problems. There are many, many people (including some accredited CBT therapists) who can reproduce that effect, and many, many people (including some accredited CBT therapists) who can’t. Why is that?
We need terminology, backed by science, that reflects our understanding of the world. But what we have now is this: when a highly skilled, empathic and forensic psychotherapist carefully unravels the origins of a patient’s problems, we call that CBT. And when a dimwit with an MSc does little more than hand out handouts and make encouraging noises, we call that CBT too. It’s time to start discriminating.
The almost universally sloppy research methods that we get at present will never answer the questions that need to be answered. This paper shows that the game is almost up. The appeal of ‘evidence-based’ treatments depended on collective blindness to the poor quality of much of the evidence. People are beginning to see through it.