Why I Accepted a PLOS One Article about Homeopathy for Depression

PLOS One recently published Homeopathy for Depression: A Randomized, Partially Double-Blind, Placebo-Controlled, Four-Armed Study (DEP-HOM). I’m proud to have been the Academic Editor who accepted this paper.

I wrote this blog post as an independent blogger to present my personal views about my decision to recommend acceptance, in the context of some larger issues.

The same day, PLOS Medicine published a paper evaluating acupuncture for depression in primary care. I tweeted (@coyneoftherealm) that if PLOS Medicine was going to keep publishing clinical trials of acupuncture without suitable sham acupuncture controls, I might have to resign as an Academic Editor at PLOS One.

Besides blogging at PLOS Mind the Brain, I’m an occasional blogger at Science Based Medicine. The heavily accessed blog site is well known for not mincing words in expressing contempt for complementary and alternative medicine (CAM) approaches, often referred to as SCAM in its blog posts. A couple of my posts (1,2) there were scathing criticisms of a PLOS Medicine article claiming acupuncture had effects equivalent to antidepressants and psychotherapy for depression.

I have not asked them, but I doubt many of my Science Based Medicine colleagues would approve of my accepting the homeopathy paper, especially if they were unaware of my rationale.

And then there is the inconsistency. Why, if I accepted a homeopathy paper, did I so strongly object to the publication of an acupuncture paper? My recovery from a momentary lapse of reason?

I will explain, but offer no apologies.

Thanks to the open access afforded by PLOS One, you can get the article here.

A Clinical Trial of Homeopathy for Depression in PLOS One

The article describes an attempt to recruit patients into a four-armed randomized trial. A homeopathic remedy was compared to placebo. A homeopathic interview, which involves a lot of history taking to personalize the choice of medication, was compared to a more conventional, shorter interview. Thus, the trial had a 2 x 2 placebo-controlled design, with practitioners and patients blind to whether placebo or remedy was being administered.

The investigators intended that 224 patients would be randomized. However, despite extensive efforts, they were only able to recruit 44 patients. They abandoned their efforts and wrote up the study.

The investigators acknowledged that there was a lack of scientific rationale for homeopathic medicine. They reported finding only one previous study in which its efficacy for major depression had been examined. There are actually more studies, but they are of poor quality.

Their interest in conducting a trial was pragmatic.

Many depressed persons in the community are drawn to this treatment because of their belief that it is effective and lacks the side effects of conventional medication or the extensive time commitment required by psychotherapy.

Homeopathy has been recommended by both Prince Charles and Mother Teresa. A controversial Swiss Health Technology Assessment concluded that homeopathy was safe and effective and resulted in continued reimbursement for treatment by Swiss insurance companies.

The German government funded this trial, presumably assuming that results of a well-designed clinical trial could settle the issue of efficacy in a way that could be persuasively communicated to the lay public and professional community.

Based on this inability to recruit patients, the investigators concluded

Although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.

They went on to explain why a further trial was not recommended.

How Is Homeopathy Supposed to Work?

Practitioners claim homeopathic medicine works by stimulating the body’s self-healing defense mechanisms. This is accomplished by administering a substance that would ordinarily cause the symptoms, except that it is provided in very diluted form.

In the case of this clinical trial, the remedy was prepared at a standard quinquagintamillesimal (Q or LM) potency, a serial dilution of 1:50,000 per step. But it’s important that homeopathic preparations not simply be diluted; they must be violently shaken between dilutions. Dilution alone just reduces potency, but dilution plus shaking, or succussion as it is called, is believed to increase it.

It is possible that there is not even a single molecule of the original substance left in the final, diluted remedy. So homeopathic remedies may consist of nothing but water. That does not bother homeopaths because they believe that because of dilution and succussion, the original compound leaves an “imprint” in the water that no longer depends on the substance still being physically present.
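The arithmetic behind the “not even a single molecule” claim is easy to check. Here is a rough back-of-envelope sketch (assuming, for illustration, that the starting solution contains a full mole of the original substance):

```python
# Back-of-envelope estimate: expected molecules of the original substance
# remaining after serial 1:50,000 (Q/LM) dilution steps, assuming the
# starting solution contains one mole of it.

AVOGADRO = 6.022e23        # molecules per mole
DILUTION_FACTOR = 50_000   # one Q/LM potency step

def molecules_remaining(steps: int, starting_molecules: float = AVOGADRO) -> float:
    """Expected number of original molecules after `steps` serial dilutions."""
    return starting_molecules / DILUTION_FACTOR ** steps

for q in range(1, 7):
    print(f"Q{q}: ~{molecules_remaining(q):.3g} molecules expected")
```

By the fifth dilution step the expected count is already down to a couple of molecules, and by the sixth it is far below one, so a remedy at Q6 or higher potency is plausibly nothing but water.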

Saving Thousands of Lives With Homeopathy

A spoof video posted on YouTube by Myles Power announced that he was going to save thousands of lives by dumping small quantities of homeopathic remedies into Scottish streams that flowed into the North Sea. He obtained the remedies from a homeopathic first aid kit advertised on Amazon that promised cures for stroke, heart attacks, poisoning, and drowning. The remedies would be appropriately diluted, and people all over Europe, and perhaps eventually the rest of the world, would obtain the protection of homeopathy against these conditions.

The problem with this spoof is that the joke had impeccable logic from a homeopathic perspective, except maybe that being rolled around in North Sea storms did not provide sufficient succussion.

Many users of homeopathy are drawn to claims that it is safe and draws upon the body’s natural healing potential. I doubt many users understand the dilution. Dana Ullman notes that Alexa Ray Joel, daughter of Billy Joel and model Christie Brinkley, attempted suicide by taking an “overdose” of her homeopathic medicine.

Homeopathy as Evidence-Based Medicine

A Dana Ullman 2010 article in Huffington Post, Homeopathy: A Healthier Way to Treat Depression drew over 500 “likes.”  Ullman bills himself as an evidence-based homeopath.

Another Ullman article, the 2012 The Homeopathic Alternative to Antidepressants, is a spirited defense of the advantages of homeopathy over conventional antidepressants. Ullman is obviously aware of the scientific literature and draws freely, if selectively, on articles that have appeared in New England Journal of Medicine and (ugh) PLOS Medicine to argue that antidepressants are no more effective than a placebo. Ullman also argues that even if antidepressants are effective in relieving the symptoms of depression, their effectiveness comes at the cost of frustrating the body’s natural reactions to depression, so any improvement obtained with them cannot be expected to continue after stopping antidepressants.

A United Kingdom National Health Service (NHS) webpage notes the lack of a scientific basis for homeopathy and cites the authoritative 2010 UK House of Commons Science and Technology Committee report on homeopathy to argue that

The ideas that underpin homeopathy are not accepted by mainstream science, and are not consistent with long-accepted principles on the way that the physical world works.

As for the succussion process, the NHS further quotes the 2010 report

We consider the notion that ultra-dilutions can maintain an imprint of substances previously dissolved in them to be scientifically implausible.

However, the NHS article then wimped out, indicating that the NHS does not take a stand against homeopathic medicine, and offering web links for referrals.

Why I Liked the PLOS Article

The article had a number of strengths in terms of trial design and transparent reporting of what actually happened. No confirmatory bias here, nor a Barnum conclusion that further research is needed.

  • The protocol for the study had been pre-registered and was publicly available.
  • The patients and the whole study team remained blinded to the identity of the four treatment groups until the end of the study.
  • The use of both a placebo control for the medication and a shorter, more conventional interview as a control for the longer homeopathic interview.

This latter feature allowed for some control of the ritual, attention, and support with which homeopathic medications are delivered. Without the interview, homeopathic practitioners could argue that the medication was administered without appropriate personalization. Yet, knowing that depression is responsive to support and attention delivered with positive expectations, it was imperative to control for these elements of the treatment.

  • The write up of the trial complied with CONSORT in its transparent report of rationale, methods, and results.
  • The frank admission that the investigators failed in their effort to recruit sufficient numbers of patients, and that this failure suggests another attempt might not be warranted.

PLOS One is Not Just Any Journal

The PLOS One website notes

PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).

PLOS ONE publication criteria are

  1. The study presents the results of primary scientific research.
  2. Results reported have not been published elsewhere.
  3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail.
  4. Conclusions are presented in an appropriate fashion and are supported by the data.
  5. The article is presented in an intelligible fashion and is written in standard English.
  6. The research meets all applicable standards for the ethics of experimentation and research integrity.
  7. The article adheres to appropriate reporting guidelines and community standards for data availability.

Science-Based Medicine Instead of Evidence-Based Medicine?

Paul Ingraham, an editor at Science Based Medicine asked

Why “Science”-Based Instead of “Evidence”-Based?

And summarized a recurring theme going all the way back to the first post at the blog

The idea of emphasizing science in general instead of evidence in particular was first publicly proposed by Yale neurologist Dr. Steven Novella and infamous medical blogger and surgical oncologist Dr. David Gorski in early 2008, along with several other physician co-authors:

EBM is a vital and positive influence on the practice of medicine, but it has its limitations. Most relevant to this blog is the focus on evidence to the exclusion of scientific plausibility. The focus on evidence has its utility, but fails to properly deal with medical modalities that lie outside the scientific paradigm, or for which the scientific plausibility ranges from very little to nonexistent.


It is not that we are opposed to EBM, nor is it that we believe EBM and SBM to be mutually exclusive. On the contrary: EBM is currently a subset of SBM, because EBM by itself is incomplete. We eagerly await the time that EBM considers all the evidence and will have finally earned its name. When that happens, the two terms will be interchangeable.

A comment left on the inaugural SBM blog post

Why is homeopathy implausible? Among other matters, its signature proposition is implausible mainly because never in the whole of human experience or research has dilution of solutions been found to enhance their intrinsic physical, chemical or biological properties (Hormesis is a property of a few biological systems, not the consistent behavior of solutions that homeopathy requires). Thus, dilution doesn’t make our coffee taste stronger and we don’t expect otherwise no matter how much we shake or stir it.

You can find lots of posts at Science Based Medicine concerning homeopathy, including Harriet Hall’s fine discussion of homeopathy first aid kits and Steven Novella’s expression of upset over the Swiss endorsement of homeopathy.

What if

  • …I had rejected the article?

The authors could have gone elsewhere and presented the results with a more confirmatory spin and a call for further research.

Maybe they wouldn’t have been published anywhere that would attract attention from anybody but homeopaths. But if so, maybe the German government would have been tempted to finance another trial, one less responsibly conducted and less well reported.

  • …The trial had recruited a sufficient number of patients and found a significant effect favoring homeopathic medication when it was administered based on the extensive interview?

I might still have accepted the article, but would not be persuaded of the efficacy of homeopathic medication. I’m enough of a Bayesian to be unshaken by one trial in my belief that a scientifically absurd mechanism could not produce effects. I would require the authors to acknowledge the lack of a scientific basis for the efficacy of homeopathy for depression and to propose other mechanisms, perhaps the greater ritual and positive expectations of the longer interview.

One trial does not undo 200 years of claims that are scientific nonsense.

  • … I had been in a position to participate in the grant review that resulted in funding of the study?

I would say there is not a sufficient scientific basis for homeopathy to justify the resources required for a well-designed study.

Just because I would accept this article doesn’t mean that I would approve funding of the study, or should be construed as collaborating in the study.

So why was I indignant that PLOS Medicine published a clinical trial comparing acupuncture to antidepressants?

Acupuncture similarly lacks a credible scientific explanation for its effects beyond the rituals in which it is administered. Appeals to ancient Chinese medicine are not scientific.

I would expect a sham treatment having the same ritual, with provider and patient blinded, to produce the same effect, unless some risk of bias had been introduced.

I think the lack of evidence for the mechanisms proposed by practitioners of acupuncture is sufficient to require that the role of rituals be tested with an appropriate control group, such as sham acupuncture delivered by someone blind to the purposes and hypotheses of the study.

The PLOS Medicine article in question did not have an appropriate comparison group controlling for ritual. The authors were allowed to interpret the results with a confirmatory bias.

The article should not have been published in PLOS Medicine because it was scientifically flawed and the authors did not acknowledge the flaws. It greatly embarrasses me that this article got published, and it should embarrass the editor who accepted it.

My reaction to publication of the article is to make a determined effort to educate PLOS editors about the necessity of insisting on appropriate control groups, and about the need to protect the journal from those who would want to exploit its interest in a broader range of articles to promote fake treatments based on bad science. I am also going to seek some sort of general recommendations from PLOS management to prevent this from happening in the future.




What I learned as an Academic Editor for PLOS ONE

Open access week is just around the corner, and I thought I’d take the opportunity to share my experience as an Academic Editor for PLOS ONE.

I was invited to join the team following a conversation at Science Online 2010 with, I think, Steve Koch, who recommended me to PLOS ONE, and before I knew it I was receiving lots of emails asking me to handle manuscripts.

The nice thing about PLOS ONE is that I get to choose which articles I handle, and I am very picky. I think that my role is not just to ‘handle’ the manuscript but also to make sure that the review process is fair. To do this, I need to understand the manuscript myself. I read every article that I take on and write a ‘mini-review’ of it for myself. When I get the external peer reviews, I go through every comment against the submitted version, compare the different reviews, and revisit my first impression of the manuscript. I have learned a lot from the reviewers; they see things I have missed, and they miss things I have detected. It has been a great insight into the peer review process. And I love not having to pull my crystal ball out to determine whether the article is ‘important’ but just having to decide whether it is scientifically solid.


If the science is fundamentally good, the article is sent back to the authors for either minor or major changes, and then it falls back into my inbox. I have found it really interesting to see how authors deal with the reviewers’ comments. The re-submission is also a lot of work. I need to compare the original and new versions, make sure that the authors have done what they say they have done, and make sure that all the reviewers’ comments have been addressed. And then I decide whether or not to send it back for re-review. One thing that I found interesting in this second phase is when authors respond to the reviewers’ comments in the letter but do not incorporate the responses into the article. It is almost as if the responses are for my and the reviewers’ benefit only. So back it goes, asking them to incorporate that rationale into the actual manuscript. Oh well. That means another round. Luckily this does not happen that often.

And then it is time to ‘accept’ the paper – and so back to the manuscript, where I go through commas, colons, paragraphs, spelling mistakes, in-text citations, reference lists, formatting, image quality, figure legends, etc. This I normally send to the authors together with their acceptance letter, but I don’t ask for the article to be re-submitted.

The main challenge I find with the process is time management.

When I get the request to handle an article, I accept or not based on how much time I have to process the article. That is all good. Except that I cannot predict when the reviews, resubmissions, etc. will eventually happen – and many times these articles ‘ready for decision’ show up in my inbox at a time when I cannot give them the full attention they deserve. Let alone being able to predict when the revised version will be submitted! I find it impossible to plan ahead for this, especially since I have very little control over a lot of my time commitments (like the days I need to lecture, submit exam questions, or mark exams). So if an article arrives while I am somewhere at a conference with limited internet connection… How can I plan for this?

Finding reviewers is another challenge. Sometimes they are hard to find. Nothing is as discouraging as finding the “reviewer declined…” emails in my inbox, indicating that it is back to the system to do something that I thought was done and dusted. The other day someone asked what a reasonable amount of reviewing to do in a year is. My answer was that one should probably, at minimum, return the number of reviews provided for one’s own articles. Say I publish 3 articles a year, each with 3 reviews; then I should not start complaining about reviewing until I have reviewed at least 9 articles. (Of course, one can factor in rejection rate, number of authors, etc.) A tit-for-tat trade-off seems like a fair expectation. So then why is it so hard to find reviewers? Come on, people – if it was your paper getting delayed, you’d be sending letters to the journal asking how come the article shows as still sitting with the editor!

And that is the other thing I learned. Editors don’t just sit on papers because they are lazy. There are many reasons why handling an article may take more or less time. In some cases, after receiving the reviews I feel that something has been raised that needs a specialist to look at a specific aspect of the paper. Sometimes I need a second opinion because there is too little agreement between reviewers. Sometimes the reviewers don’t submit in the agreed time. There are many reasons why an article can be delayed, and so what I learned is to be patient with the editors when I send my papers for publication.

But despite the headaches, the stress and the struggle of being an Academic Editor, it is also an extremely rewarding experience. I keep learning more about science because I see a range of articles before they take their final shape, because I get to look into the discussion of what is good and what is weak. And I get to be part of what makes science great: trying to put out the best we can produce.

It is unfortunate that this process is locked up. I think that there is a lot to learn from it. I think that students and early career scientists would really benefit from seeing the process in articles that are not their own: how variable the quality of the reviews is, and what dealing well with reviewers’ comments and suggestions looks like. And the public, too, would benefit from seeing what this peer review is all about – what the strengths and weaknesses of the process are and what having been peer reviewed really means.

So, back to Open Access week. Access to the final product is really good. Access to the process of peer review can make understanding the literature even better, because it exposes a part of the process of science that is also worth sharing.