Medication is often a useful adjunct to CBT, relieving symptoms in the short term and allowing patients to focus on underlying issues in therapy. And doctors — both psychiatrists and GPs — are usually very willing to collaborate directly with a psychotherapist in the patient’s interest. This does not mean that a psychotherapist can prescribe medication, but it often comes close…
A different view
Canadian psychologist Dr. Brian Grady has a different view of this, however, and he writes some surprising things in his recent post, Psychotherapy vs. medication for depression.
One surprise is that he doesn’t advise people about their use of medication. For someone with 15 years’ experience in therapy and counselling not to share his undoubted expertise in this matter with his patients seems strange to me. Stranger still, he doesn’t seem to realize that his approach might seem strange to anyone, and he offers no explanation for it.
Another surprise is that he passes on research reports for patients to discuss with their physicians. Research findings are statistical generalizations that might not apply to a particular patient. What’s more, research reports are notoriously difficult to evaluate. Journalists, the public, doctors and even other researchers frequently draw faulty conclusions from them.
A case in point
The research that Dr. Grady quotes is a case in point: Antidepressant medications v. cognitive therapy in people with depression with or without personality disorder
The abstract concludes:
Comorbid personality disorder was associated with differential initial response rates and sustained response rates for two well-validated treatments for depression.
But the Results section tells us (my italics):
For people with personality disorder, sustained response rates over the 12-month follow-up were *nearly identical* (38%) in the prior cognitive therapy and continuation-medication treatment arms.
How can “nearly identical” in the results become “differential…sustained response” in the conclusion? And how can patients in discussion with their physicians be expected to make sense of this?
An additional obstacle in interpreting research is how to evaluate funding bias. This particular research study was funded by the makers of the controversial antidepressant Seroxat (Paxil).
There is widespread debate about the bias that the source of funding may introduce into published research findings; indeed, that’s why funding sources have to be declared when papers are published. But evaluating bias of this kind is very difficult.
How can patients in discussion with their physicians be expected to evaluate the context in which research was conducted, and the possibility of bias resulting from that context?
Finally, there’s a spectrum of effects that routinely complicate medical research, and that are usually lumped together and labelled “the placebo effect”. In this example, the abstract does not discuss this pervasive effect. (It uses the term “placebo” only to describe a form of treatment.)
Before discussing how the placebo effect might change the interpretation, here’s a summary of the original results (PD means personality disorder):
[Table: original response rates at 4 months and 12 months, by treatment arm and personality-disorder status — data not recoverable from this copy]
A possible interpretation of these results is that there’s an overall placebo effect of 38% — that is, 38% of people improve no matter what kind of treatment they receive. Antidepressants have a real 11% effect, and cognitive therapy a real 32% effect (three times as effective as antidepressants).
Patients with personality disorder (PD) particularly like medication (perhaps because being included in an experiment and being given pills makes them feel valued and supported), and this causes a 17% improvement, but the effect is not sustained.
Patients with PD dislike cognitive therapy (perhaps because it involves interacting with a challenging person), and this causes a 26% relapse. The effect disappears when therapy ceases.
Patients with PD particularly dislike having their medication withdrawn (perhaps because they feel tricked and let down, activating fears of abandonment), and this causes a 32% relapse.
Patients with PD do not sustain short-term improvements in depression, because the underlying personality disorder has not been addressed.
Now we can use this interpretation to reconstruct the original results:
[Table: reconstructed response rates at 4 months and 12 months — data not recoverable from this copy]
You can check that these numbers add up to the same as the original results.
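As a rough illustration of the arithmetic (not from the paper — the component sizes below are the hypothetical figures from the interpretation sketched above, and the group labels are my own), the reconstruction amounts to adding and subtracting a few percentages:

```python
# Illustrative arithmetic only: these component sizes come from the
# hypothetical interpretation in the text, not from the published study.
PLACEBO = 38          # improvement assumed to occur regardless of treatment
DRUG = 11             # hypothesized real effect of antidepressants
CT = 32               # hypothesized real effect of cognitive therapy
PD_PILL_BOOST = 17    # extra short-term boost for PD patients given medication
PD_CT_PENALTY = 26    # short-term penalty for PD patients in cognitive therapy

# Hypothetical response rates at 4 months, built from the components
four_months = {
    ("no PD", "medication"): PLACEBO + DRUG,
    ("no PD", "cognitive therapy"): PLACEBO + CT,
    ("PD", "medication"): PLACEBO + DRUG + PD_PILL_BOOST,
    ("PD", "cognitive therapy"): PLACEBO + CT - PD_CT_PENALTY,
}

# On this interpretation, every non-placebo effect is transient for PD
# patients, leaving only the placebo baseline at 12 months -- which matches
# the "nearly identical (38%)" figure quoted from the Results section.
twelve_months_pd = {arm: PLACEBO for arm in ("medication", "cognitive therapy")}

for group, rate in four_months.items():
    print(group, f"{rate}%")
print(twelve_months_pd)
```

The point is not that these particular numbers are right — they are part of the fantasy — but that such a simple additive model can be made to fit whatever fragments of the results are published.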
This interpretation is pure fantasy, but it seems to fit the facts. When research allows fantastical interpretations to fit the facts, it indicates that the experiments were badly designed.
This is a particularly bad case because, in addition to using a badly designed experiment, some of the results have been withheld. Withholding results makes it more likely that fantastical interpretations will seem to fit the facts.
In theory, bad experimental design should be eliminated by stringent checks at various stages of the research process, so that badly designed research never reaches publication, but in practice these checks often fail. I just don’t believe that patients and physicians can make good choices based on this kind of flaky evidence.