A couple of months ago, at around 11:30pm one evening, I had an idea for a journal article. The next morning, I started writing it, and by 11:30pm that evening, working non-stop, I’d finished. The article was in the form of a systematic review: a type of study that usually takes months of painstaking study of a multitude of medical databases. This one was somewhat less grand in scale: it took a mere 24 hours from inception to first draft. I then told the delightfully helpful Adam Jacobs from Dianthus Medical Ltd about it on Twitter, and he agreed to take a look at it. Less than a week later, following his suggestions, I had a manuscript ready to be submitted for publication in a scientific journal.
The reason this particular review took so little time to complete was that, instead of painstakingly poring through large numbers of scientific articles, my analysis took place on a grand total of zero articles. This was because I searched for literature on an utterly implausible intervention for treating a completely fictional disease, whose supposed mechanism of action was based on a ridiculous, made-up theory with absolutely no basis in reality.
My guess is that this is also why my manuscript was rejected, which is fair enough. But oddly enough, systematic reviews of this type do exist, and do get published in medical journals. Mostly, to be fair, the ailments involved in these studies are real. But the interventions, and the underlying theories that describe how they are supposed to work, are real only in the sense that some people have been known to try them; they are based on little more than an odd combination of folklore and belligerence. I am, of course, talking about systematic reviews of complementary and alternative medical products and services.
Edzard Ernst, from the Peninsula Medical School at Exeter University, is involved in the writing of many of these. In fact, it was a message on his Twitter account drawing attention to one such review that gave me the idea to write this article. In conducting these reviews, he and his colleagues are doing a fantastic job of synthesising the scientific literature on a whole range of products and ailments. Occasionally, for very particular pairings of intervention and ailment (such as osteopathy for lower back pain, or St. John’s Wort for depression), they find that the evidence supports the claims. More often than not, however, the evidence does not back up the claims made—even if individual studies, when cherry-picked out of the body of literature, appear to be supportive.
Overall, though, there is one criticism I have of systematic reviews of CAM treatments, and it is what I aimed to highlight with this article: they tend to conclude that “more research is needed” when faced with negative evidence, when in fact the reverse is true. The root of the problem usually lies in the introductory paragraphs, which describe the disease, the treatment, and how the treatment is supposed to work in theory. This last part is crucial, yet very little attention is paid to it in the writing of the review. This strikes me as odd: anyone could make up a theory out of thin air, perform a systematic review, find no clinical trials investigating that made-up theory, and conclude that more clinical trials are needed. That is exactly what I have done in this article.
I argue that this is also exactly what has been done in some of these systematic reviews, and that this approach risks conferring a sense of legitimacy (via a “the jury is still out” message) onto products and services that really don’t deserve it.
Enough of the introductions, though. Here is my article, in self-published form. Thoughts and comments welcome.