Probably one of the most important books about CBT in a long while was published at the end of last year, but I’ve been putting off reviewing it because of a feeling that there was something about it I was not able to grasp. There’s an irony about that.
The book is Collaborative Case Conceptualization by Willem Kuyken, Christine A. Padesky and Robert Dudley. The irony is that ‘conceptualization’ is the process of creating a rational explanatory framework in relation to the patient’s difficulties, and that — a rational explanatory framework — is exactly what I was having difficulty with in relation to the book.
Christine Padesky is a Distinguished Founding Fellow of the Academy of Cognitive Therapy (in the US), former Assistant Clinical Professor in the Department of Psychiatry and Human Behavior at the University of California, Irvine, and co-founder of the Center for Cognitive Therapy, Huntington Beach, California.
So this is a high-powered book by high-powered authors. I think part of the difficulty is that it contains a lot of complex material. But it is not just that. I have a feeling that some of the material is not as well integrated and thought out as it could be. I wonder if this is because there are three authors, and I wonder if separate books (or separate sections in the book) by each author might not have worked better.
To illustrate, I’ll focus on the concept of reliability. This is not because reliability is very important in practice — it’s not — but it’s a fairly simple concept that should be easy to explain.
The first thing to say about reliability is that it’s ambiguous. In ordinary language the word ‘reliable’ means something like ‘trustworthy’. But in statistics it has a specialized meaning: consistency, or lack of variation between repeated measurements.
For example, suppose you decide to check your tyre pressures using a little pressure gauge you got free with a motoring magazine. You find they are all exactly 22 PSI, which is too low. So you get out a foot pump, pump the tyres up for a while, and try again. Still 22 PSI. After repeating this a few times, you realise what’s going on. The pressure gauge is broken — it always reads 22.
Statistically, this pressure gauge is completely reliable. It always gives the same reading. But in ordinary language it’s completely unreliable. You cannot trust the reading to tell you whether to pump the tyres up or to let air out.
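The two senses can be made concrete with a small sketch (my illustration, with invented readings, not anything from the book): statistical reliability is about the spread of repeated readings, while everyday trustworthiness is about how far the readings sit from the truth.

```python
import statistics

# Invented true pressures at each of five checks (in PSI)
true_pressures = [22, 26, 30, 32, 32]

# The broken gauge reads 22 every time, whatever the true pressure is
gauge_readings = [22 for _ in true_pressures]

# Statistical reliability: zero variation across repeated readings
print(statistics.pstdev(gauge_readings))  # 0.0 -> perfectly "reliable"

# Everyday trustworthiness: how far the readings are from the truth
errors = [abs(r - t) for r, t in zip(gauge_readings, true_pressures)]
print(statistics.mean(errors))  # 6.4 PSI off, on average -> untrustworthy
```

A gauge that jittered slightly around the true pressure would score worse on the first measure and far better on the second.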
A consideration of reliability in case conceptualization requires some clear thinking. Is it important for case conceptualization to be statistically reliable, or reliable in the everyday sense, or both, or neither?
The discussion of reliability in the book seems to go with the statistical sense. Reliability is said to be achieved when therapists agree on the conceptualization (p. 19):
These studies converge in suggesting that therapists generally agree on the descriptive aspects of the conceptualization (e.g. clients’ problem list) but reliability breaks down as more inference is required…
This subtle merging of the concepts of statistical reliability and everyday agreement skims over the need for clear thinking on the subject.
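To pin down what ‘reliability’ amounts to in studies like those the book cites, here is a sketch with invented codes (the data and categories are my own, not taken from the book): inter-rater reliability is, at its simplest, the rate at which two raters assign the same code to the same material.

```python
# Invented codes from two therapists for the same ten client statements.
# Descriptive level: classifying what the client actually said.
descriptive_a = ["problem", "problem", "strength", "problem", "strength",
                 "problem", "problem", "strength", "problem", "problem"]
descriptive_b = ["problem", "problem", "strength", "problem", "strength",
                 "problem", "strength", "strength", "problem", "problem"]

# Inferential level: hypothesizing an underlying belief.
inferential_a = ["failure", "abandonment", "failure", "control", "failure",
                 "abandonment", "control", "failure", "abandonment", "control"]
inferential_b = ["control", "failure", "failure", "abandonment", "control",
                 "abandonment", "failure", "control", "failure", "control"]

def agreement(a, b):
    """Fraction of items on which the two raters give the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(agreement(descriptive_a, descriptive_b))  # 0.9 -> high agreement
print(agreement(inferential_a, inferential_b))  # 0.3 -> agreement breaks down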
Towards the end of the book there is a proposal that agreement between therapist and client is more important than agreement between therapists, but at this late stage in the book the tyre-pressure-gauge problem suddenly appears (p. 322):
Within our model, the most appropriate test of reliability is whether a therapist and client agree on the content of the conceptualization and whether the level of agreement is maintained over the course of therapy.
Rather like an archer whose arrows land in a tight cluster well away from the bullseye, it is quite possible for the therapist and the client to show high levels of agreement on an erroneous conceptualization.
I suspect that this confused approach is revealing. To understand more, it is useful to think about what conceptualization means.
Conceptualization is when you perceive some complex data, and you make a mental model that simplifies it so as to make thinking about it manageable. Suppose you have to make a copy of this line drawing using pencil and paper:
One way of looking at it is to see it as thirteen (or fourteen) straight lines. You could even measure the lengths of the lines. That’s quite complicated, and it is insufficient for reproducing the drawing. Even so, the idea has a significance that I’ll return to.
Here’s another way to look at it:
Now there are five groups of lines. (I added colour and shifted the groups a little only to make them clearer.) This breaks the problem down into smaller chunks. Each group contains a more manageable number of lines. Chunking is a common method in conceptualization.
Someone else might see pairs of lines:
This has more chunks — seven — but the chunks are simpler and more consistent.
A completely different way to see the drawing is as three shapes (two rectangles and a triangle), with an extra line at the left:
The point to notice about these conceptualizations is that they are all completely accurate representations of the original drawing. In the everyday sense, they are completely reliable.
But in the statistical sense they are completely unreliable, because they do not agree with one another. There is no agreement even between the two conceptualizations that both chunk the drawing into groups of lines.
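This disagreement can itself be quantified. One way (my sketch, with made-up chunk labels, since the original drawing is not reproduced here) is to ask, for every pair of lines, whether a conceptualization puts that pair in the same chunk, and then measure the overlap between the two sets of ‘grouped together’ pairs:

```python
from itertools import combinations

# Made-up chunk labels for thirteen lines, standing in for two of the
# conceptualizations: one rater's five groups, another's pairs of lines.
five_groups = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4]
pairs       = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6]

def co_member_pairs(labels):
    """Set of line pairs that a conceptualization puts in the same chunk."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

a = co_member_pairs(five_groups)
b = co_member_pairs(pairs)

# Overlap (Jaccard index) of the two sets of grouped-together pairs:
# well below 1, even though both conceptualizations are accurate.
print(len(a & b) / len(a | b))
```

Two accurate conceptualizations, and yet the agreement score is low. That is the statistical sense of unreliability at work.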
Your conceptualization of the problem might be different from all of these. Even so, it can be completely accurate without agreeing with any of them. This illustrates that an accurate and useful conceptualization does not have to agree with any other accurate and useful conceptualization.
As an aside, I want to dismiss the idea that just one conceptualization is ‘true’ and the others ‘false’. Truth is not the issue in this discussion. (As an aside within an aside, truth is related to the slippery concept of validity in statistics.)
The truth is that my original drawing has been mangled by the process of converting it to an image that I can publish here, so the true nature of my original drawing has been lost. In place of whatever it was that I drew, there are now only shaded dots. You can see them if you magnify part of the image:
Thus none of the conceptualizations is true, or all of them are equally true. It is not necessary to believe in the unique truth of a particular conceptualization in order to complete the task.
The logic is the same in therapy. The actual events that led the patient to have this condition are irretrievable. To travel back in time and participate in those events is not possible. Instead, you have to work with the shaded dots of memory and emotion, and see the patterns in them. The patterns are not the events, they are only conceptualizations. They are not unique truth, but they enable you to complete the task.
It is, however, quite possible to conceptualize wrongly, so that when you draw your copy of the diagram it does not match the original. For example, you might see a rectangle, a triangle and some extra lines — and draw this:
This is not wrong for the reason that you cannot draw very well. It is not wrong for the reason that you do not properly understand what rectangles and triangles are. The errors here are not errors of procedure or knowledge. The errors are errors of conceptualization.
This is where the book is powerful and thorough. It emphasises that procedure and knowledge combine with conceptualization to make therapy effective. And it emphasises collaboration with the patient and constant checking of facts (what the book calls ‘collaborative empiricism’) as ways to avoid faulty conceptualizations.
Returning to the book’s treatment of reliability, I think this illustrates how the book fits into the current politics of CBT. In some circles there is an over-emphasis on anything “evidence-based”, to the extent that bad evidence is given more weight than rational thought.
In the example of the line drawing, an evidence-based approach might be to count the lines and measure them. Then, assuming that that’s all you need to know because it’s evidence-based, you can reproduce the diagram:
An evidence-based approach must be right. Right? Not quite, because it rested on the assumption that the evidence was sufficient. The reason the evidence is insufficient is not peculiar to this diagram. The evidence-based approach here has a fundamental limitation: it is limited to counting and measuring.
In CBT there is often the same assumption. You assess a patient and look for an evidence-based formulation and treatment plan. You are assuming that the evidence is sufficient. It rarely is. The reason for this is not that this particular patient is unusual, but rather that evidence-based formulations are fundamentally limited.
It seems to me that the book tries to sneak around this problem, refusing to tackle it head-on. Instead, it tries to work with the faulty assumption that where evidence exists, the evidence is sufficient. It tries to argue that evidence-based formulations are not perfect because patients are unusual, but this is a weak argument that ignores fundamental limitations in evidence-based approaches.
I guess that is why the concept of reliability is not dissected in the book in the way it could have been. Readers who think only in terms of research might be uncomfortable with any way of thinking about reliability other than the statistical one, and so the question is quietly sidestepped.
So my impression of this book is that it aims to appease, and perhaps thereby influence, those who have fallen in love with the idea of “evidence-based” treatments and who do not like to think beyond them. I think this approach weakens the book and makes it somewhat confusing, though its analysis of conceptualization is in itself important and powerful.