Back in 2005, an article accurately critiqued the (still current in 2009) fashion for evidence-based medicine. It sounds harmless, doesn’t it? How could medicine based on evidence possibly be a bad thing?
Originally published in Perspectives in Biology and Medicine, the article examines the intertwined theoretical, practical, and philosophical dimensions of evidence-based medicine (EBM), expressing skepticism and frustration at the way EBM is used to stifle enquiry and critical thinking. It was written by Ross E. G. Upshur, associate professor and director of the University of Toronto Joint Centre for Bioethics:
Noting that EBM is essentially a belief system, a creed, Upshur points out that EBM is not itself evidence-based. That is, EBM requires clinicians to base their decisions about treatment on scientific research, but the decision to believe in EBM is not to be based on scientific research.
So EBM is fundamentally irrational. In my view, Upshur does not place enough emphasis on this central point. Perhaps that’s because the point matters much more when dealing with mental health issues.
In an interaction between a mentally ill patient and a psychotherapist, the defining feature of the interaction is that the therapist is rational and the patient is not (entirely). EBM requires the therapist to be irrational too. What happens next in psychotherapy is anyone’s guess.
Implementing EBM leads to enormous practical difficulties, too. For example, the questions that EBM answers are questions posed by researchers, not by clinicians or patients. The volume of information required to determine evidence-based treatment in anything other than the most routine of cases is daunting. And in complex cases, determining which, if any, of many evidence-based treatments might be appropriate for a particular patient is often impossible:
There are vast areas of care and decision making for which appropriately high-quality evidence does not or will not exist… Unfortunately, I live in a world where single problems and single therapies rarely present themselves. When one adds complexity to the mix, the number of possible options increases dramatically
This is illustrated by many recent clinical discussions I have come across (but which, alas, I cannot provide public links to). Therapists looking for evidence-based treatments frequently resort to asking each other for help, and the help they get is often random and patchy. They almost always discover that they do not know enough about their patient to apply the available evidence, and when they get to know their patient better, they discover that the available evidence does not always apply.
Evidence for sale
EBM is based on several assumptions about scientific research.
One of the assumptions is that the quality (and truthfulness) of research is easy to assess. For example, the kind of research known as a randomized controlled trial (RCT) is often assumed to be of adequate quality to make clinical decisions. But RCTs suffer from many problems that can make their results useless.
Just one of the problems is the bias caused by financial interest. It’s easy to find examples of this in, for example, the decisions made by the National Institute for Clinical Excellence (NICE) based on RCTs carried out and submitted to NICE by the very folks who stand to profit from NICE’s decision.
RCTs can serve potent economic interests, and the ascendancy of the randomized trial as the most reliable form of evidence detracts from considering other, equally cogent forms of evidence as informative or having standing in debates about the safety and harm of treatments.
An inferential gap is where logic (inference) fails and instead you have to guess. This is what happens when you try to apply evidence about the average effect of a treatment to an individual patient. The evidence-based average does give you an expectation about how the treatment will affect the patient, but it does not tell you what will actually happen. It could be that nothing will happen, or it could be that the patient will be harmed.
This means that logic fails at the point where you try to apply research evidence to an individual case. EBM enthusiasts tend to think that this inferential gap does not matter.
However, the same EBM enthusiasts make much of the failure of critical reason to address all possible cases. That inferential gap, they tend to think, does matter.
As EBM is not itself based on either evidence or reason — it’s just a belief — there’s nothing much anyone can do about this inconsistency.
Evidence and CBT
CBT is a system for treating mental illness that is based on reason. It is also capable of being validated in scientific experiments. In the years since CBT was invented, however, the experiments that were originally intended to validate CBT have taken over, so to speak, in some people’s minds.
A false way of thinking has emerged. Anything that seems true in a research study is held to be universally true, even if it makes no sense and even if the research was biased; and anything that makes sense is held to be untrue unless a research study has validated it. This puts the cart before the horse.
CBT was never designed to work that way, and indeed it doesn’t work that way. No successful therapist really thinks like that, not even those therapists who claim to think like that when asked about their beliefs.
Conversely, any therapist who really does think like that will be unsuccessful. EBM is just fine as a creed. It’s no worse for a therapist to believe in EBM than for a therapist to have religious beliefs. But a CBT therapist who tries to do therapy based on research studies and no more will harm patients just as much as a CBT therapist who tries to help patients through the power of prayer and no more.
Hat tip to TM, who brought this important article to the attention of so many CBT therapists.