McMindful: Make money as a mindfulness trainer, no background or weekend retreat required

Can a clinical psychologist ethically offer a product claiming to quickly turn anyone into a mindfulness trainer, regardless of background or previous training?


With an interview with Lynette Monteiro, PhD, co-founder of the Ottawa Mindfulness Clinic and, with Jane Compson and Frank Musten, editor of Practitioner’s Guide to Ethics and Mindfulness-Based Interventions.

A web-based training package promises to turn anyone quickly into a mindfulness trainer, regardless of background or previous training.

Can a clinical psychologist ethically offer a product with such improbable claims that it can be applied to patients by persons who have not been vetted for competence or fitness to treat patients?

Promoters of the package claim it is backed by more science than its competitors.

There are no legal restraints in most jurisdictions on someone calling themselves a mindfulness trainer, coach, or therapist. No training requirements, no background checks.

There are no enforceable ethical codes applicable to such persons once they hang out their shingles.

Many treatment settings are replacing therapists with mindfulness trainers.

Many persons with serious mental health problems seek mindfulness training, but this training does not prepare trainers to recognize and refer such persons.

I didn’t act quickly enough on a series of frantic emails from Seph Fontane Pennock of the Positive Psychology Program, and so I missed out on a deep discount for an exciting offer to become his next success story.

If I had been quicker, I could have received a 40% discount on a $750 downloadable training package that promised to turn anyone into a money-making mindfulness trainer, without their having to acquire any background or participate in a weekend retreat. It did not matter if a purchaser did not have any clinical background, because the program would release “the real trainer, teacher and coach in yourself that you’ll be proud of.”

My final invitation to become a mindfulness trainer came in a breathless, gushy, seemingly personalized email that began: “Hey Jim, I’m blown away by all the emails about the success our members have started to see…”

The email continued with testimonials from purchasers who were impressed that they could customize the materials to appear to be their own, including by putting their company logo on them.

The wannabe trainer doesn’t even need to study the package before slapping on a new label and selling it to clients and industry.

The website makes it clear that this training is superior to its competitors because it is better rooted in science. But just what does “rooted in science” mean? Is that as vague and meaningless as saying that the performance of your automobile is rooted in physics? Claims about the efficacy of interventions need to be rooted in randomized trials or program evaluation, and there is no evidence that this package has been put to these kinds of tests. And such “evidence” would still not establish that similar results will be achieved by trainers without training or supervision.

The package is billed as instantly turning purchasers into mindfulness trainers.

You can simply take this, go out and teach mindfulness …

No longer will you have to go from A to B, from B to C, etc. Instead, you can go straight from A to Z. Mindfulness X is the ultimate shortcut.

It is claimed that professionals will be able to “instantly and successfully teach mindfulness.”

Who is the mastermind behind Mindfulness X?

Dr. Hugo Alberts (Ph.D.) describes himself as a “professor, entrepreneur and coach” who has touched the lives of thousands. With Mindfulness X, he had become a sought-after trainer, but decided to stop live presentations in order to touch even more lives with this downloadable product.

When I checked, I found that Hugo (H.J.E.M.) Alberts, Ph.D. is an Assistant Professor in the Clinical Psychological Science Department at Maastricht University. Web of Science lists 19 publications for him, including a couple of low-quality, underpowered studies of mindfulness.

Most importantly, I find no evidence of any peer-reviewed evaluation of Mindfulness X. The key issue is that Alberts is claiming extraordinary efficacy for this program. If his claims are true, it is more effective than any psychotherapy. Extraordinary claims require….

Elsewhere I have provided continually updated evaluations of mindfulness-based training and therapies. There is still a lack of evidence of any advantage of mindfulness over other active treatments. Claims about mechanism depend on low-quality studies that do not rule out anything beyond nonspecific (placebo) effects. There may be no specific mechanism beyond that.

Mindfulness training is a mostly benign treatment, often delivered to persons without moderate to severe psychological problems. But it can have adverse effects on persons suffering from simple or complex PTSD, ruminative chronic depression, or psychosis.

An increasing proportion of the treatment or coaching of persons with serious psychological problems is being done by persons lacking any protected title or independent certification of qualifications.

Such providers are not bound by enforceable ethics codes.

My advice to Dr. Alberts: You are quite junior. If you are serious about your scientific career, concentrate on producing quality research, not so much on making money in ways that threaten perceptions of your integrity. I assume you are a clinical psychologist. You have a responsibility to stick to evidence-based claims and to avoid the harm of turning loose on the community ill-trained or untrained promoters of mindfulness, particularly with vulnerable clients.

I sent Dr. Lynette Monteiro some questions and she kindly responded.

“How much should consumers be concerned about the qualifications and competence of a counselor with a certificate on the wall claiming completion of an internet course in mindfulness?” 

Consumers should be very concerned if the provider is not trained in a specifically-identified program (MBSR, MBCT, MBSM, etc.) and trained by an accredited certified organization. A general “mindfulness” training is not a guarantee of knowledge or skill. Completing a post-graduate degree without evidence of specialized training is insufficient to guarantee competence or even necessarily consumer protection.

How could they tell if the counselor knows what they are doing?

Any individual that makes promises that go beyond reasonable expectations (and the person’s skills) should be suspect. Facilitators should be open to pointed questions about the program: how it was developed, what is the support for it, how were they trained, what are the safeguards in case of negative reactions to meditations. Any suggestion to “just stay with the negative feelings” warrants serious concern. The credentials and training of the facilitator should be transparently stated and available on their website or upon request.

 “These promoters say their product is 100% evidence-based. Is that reassuring?”

Even if the program itself was evidence-based, it would/should not be reassuring because the efficacy of the delivery is contingent on the skills of the facilitator and their sensitivity to interaction effects with the participant. IOW, the efficacy of specific facilitator’s form of delivery is not evidence-based and confounded with demand characteristics.

I had planned to ask Dr. Monteiro about what she thought about a package that promised that those who purchased it would be ready to go out and “instantly and successfully teach mindfulness.” But I know she is very busy and I think we know what she would say.

Thanks, Dr. Monteiro.

Want to see more of Lynette’s thoughts on the need for standards in training and certifying mindfulness instructors? Check out her article in Tricycle, a “unique and independent public forum for exploring Buddhism, establishing a dialogue between Buddhism and the broader culture, and introducing Buddhist thinking to Western disciplines.”

Opinion: Why the New International Mindfulness Teachers Association Falls Short

Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.


This blog post provides an alternative press briefing to compare and contrast with what was provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

The press release attached at the bottom of this post announces the publication of results of a highly controversial trial that many would argue should never have been conducted. The trial exposed children to an untested treatment with a quack explanation delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements from osteopathy, life coaching and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American with decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves loss of opportunity to obtain less risky treatment or simply not endure the inconvenience and burden of a treatment for which there is no scientific basis to expect would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not disclosed in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment in order to rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, have created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability of receiving the treatment for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment for free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement, but those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This serves to stack the deck in any evaluation of the Lightning Process and inflate differences with the patients who didn’t get into this group.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. They were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well [7]. Such coercion can interfere with patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patient and family-member activists who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly since the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, patients have been joined in their concerns by non-patient scientists and clinicians.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?


Notes

[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP. Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but that there was a large superiority for the established medical treatment on objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat who provided me with an advance copy of the press release from the Science Media Centre.


Creating illusions of wondrous effects of yoga and meditation on health: A skeptic exposes tricks

The tour of the sausage factory is starting; here’s your brochure telling you what you’ll see.


A recent review has received a lot of attention, being used to claim that mind-body interventions have distinct molecular signatures that point to potentially dramatic health benefits for those who take up these practices.

What Is the Molecular Signature of Mind–Body Interventions? A Systematic Review of Gene Expression Changes Induced by Meditation and Related Practices.  Frontiers in Immunology. 2017;8.

Few who are tweeting about this review or its press coverage are likely to have read it or to understand it, if they read it. Most of the new agey coverage in social media does nothing more than echo or amplify the message of the review’s press release.  Lazy journalists and bloggers can simply pass on direct quotes from the lead author or even just the press release’s title, ‘Meditation and yoga can ‘reverse’ DNA reactions which cause stress, new study suggests’:

“These activities are leaving what we call a molecular signature in our cells, which reverses the effect that stress or anxiety would have on the body by changing how our genes are expressed.”

And

“Millions of people around the world already enjoy the health benefits of mind-body interventions like yoga or meditation, but what they perhaps don’t realise is that these benefits begin at a molecular level and can change the way our genetic code goes about its business.”

[The authors of this review actually identified some serious shortcomings to the studies they reviewed. I’ll be getting to some excellent points at the end of this post that run quite counter to the hype. But the lead author’s press release emphasized unwarranted positive conclusions about the health benefits of these practices. That is what is most popular in media coverage, especially from those who have stuff to sell.]

Interpretation of the press release and review authors’ claims requires going back to the original studies, which most enthusiasts are unlikely to do. If readers do go back, they will have trouble interpreting some of the deceptive claims that are made.

Yet, a lot is at stake. This review is being used to recommend mind-body interventions for people having or who are at risk of serious health problems. In particular, unfounded claims that yoga and mindfulness can increase the survival of cancer patients are sometimes hinted at, but occasionally made outright.

This blog post is written with the intent of protecting consumers from such false claims and providing tools so they can spot pseudoscience for themselves.

Discussion of the review in the media speaks broadly of alternative and complementary interventions. The coverage is aimed at inspiring confidence in this broad range of treatments and at encouraging people who are facing health crises to invest time and money in outright quackery. Seemingly benign recommendations for yoga, tai chi, and mindfulness (after all, what’s the harm?) often become the entry point to more dubious and expensive treatments that substitute for established treatments. Once they are drawn to centers for integrative health care for classes, cancer patients are likely to spend hundreds or even thousands of dollars on other products and services that are unlikely to benefit them. One study reported:

More than 72 oral or topical, nutritional, botanical, fungal and bacterial-based medicines were prescribed to the cohort during their first year of IO care…Costs ranged from $1594/year for early-stage breast cancer to $6200/year for stage 4 breast cancer patients. Of the total amount billed for IO care for 1 year for breast cancer patients, 21% was out-of-pocket.

Coming up, I will take a skeptical look at the six randomized trials that were highlighted by this review.  But in this post, I will provide you with some tools and insights so that you do not have to make such an effort in order to make an informed decision.

Like many of the other studies cited in the review, these randomized trials were quite small and underpowered. But I will focus on the six because they are as good as it gets. Randomized trials are considered a higher form of evidence than simple observational studies or case reports. [It is too bad the authors of the review don’t even highlight which studies are randomized trials. They are lumped with others as “longitudinal studies.”]

As a group, the six studies do not actually add any credibility to the claims that mind-body interventions – specifically yoga, tai chi, and mindfulness training or retreats – improve health by altering DNA. We can be no more confident with what the trials provide than we would be without their ever having been done.

I found the task of probing and interpreting the studies quite labor-intensive and ultimately unrewarding.

I had to get past poor reporting of what was actually done in the trials, to which patients, and with what results. My task often involved seeing through cover-ups, with authors exercising considerable flexibility in reporting which measures they actually collected and which analyses they attempted, before arriving at the best possible tale of the wondrous effects of these interventions.

Interpreting clinical trials should not be so hard, because they should be honestly and transparently reported, with a registered protocol that is adhered to. These reports of trials were sorely lacking. The full extent of the problems took some digging to uncover, but some things emerged before I got to the methods and results.

The introductions of these studies consistently exaggerated the strength of existing evidence for the effects of these interventions on health, even while somehow coming to the conclusion that this particular study was urgently needed and might even be the “first ever.” The introductions of the six papers typically cross-referenced each other, without giving any indication of how poor the quality of the evidence from the other papers was. What a mutual admiration society these authors are.

One giveaway is how the introductions referred to the biggest, most badass, comprehensive and well-done review, that of Goyal and colleagues.

That review clearly states that the evidence for the effects of mindfulness is poor quality because of the lack of comparisons with credible active treatments. The typical randomized trial of mindfulness involves a comparison with no treatment, a waiting list, or patients remaining in routine care where the target problem is likely to be ignored. If we depend on the bulk of the existing literature, we cannot rule out the likelihood that any apparent benefits of mindfulness are due to having more positive expectations, attention, and support rather than simply getting nothing. Only a handful of the hundreds of trials of mindfulness include appropriate, active treatment comparison/control groups. The results of those studies are not encouraging.

One of the first things I do in probing the introduction of a study claiming health benefits for mindfulness is see how they deal with the Goyal et al review. Did the study cite it, and if so, how accurately? How did the authors deal with its message, which undermines claims of the uniqueness or specificity of any benefits to practicing mindfulness?

For yoga, we cannot yet rule out that any benefits are no greater than those of regular exercise – in groups or alone – combined with relaxing routines. The literature concerning tai chi is even smaller and poorer quality, but there is the same need to show that practicing tai chi has any benefits over exercising in groups with comparable positive expectations and support.

Even more than mindfulness, yoga and tai chi attract a lot of pseudoscientific mumbo jumbo about integrating Eastern wisdom and Western science. We need to look past that and insist on evidence.

Like their introductions, the discussion sections of these articles are quite prone to exaggerating how strong and consistent the evidence from existing studies is. The discussion sections cherry-pick positive findings in the existing literature, sometimes recklessly distorting them. The authors then discuss how their own positively spun findings fit with what is already known, while minimizing or outright neglecting any of their negative findings. I was not surprised to see one trial of mindfulness for cancer patients obtain no effects on depressive symptoms or perceived stress, with the authors nonetheless going on to explain how mindfulness might powerfully affect the expression of DNA.

If you want to dig into the details of these studies, the going can get rough and the yield for doing a lot of mental labor is low. For instance, these studies involved drawing blood and analyzing gene expression. Readers will inevitably encounter passages like:

In response to KKM treatment, 68 genes were found to be differentially expressed (19 up-regulated, 49 down-regulated) after adjusting for potentially confounded differences in sex, illness burden, and BMI. Up-regulated genes included immunoglobulin-related transcripts. Down-regulated transcripts included pro-inflammatory cytokines and activation-related immediate-early genes. Transcript origin analyses identified plasmacytoid dendritic cells and B lymphocytes as the primary cellular context of these transcriptional alterations (both p < .001). Promoter-based bioinformatic analysis implicated reduced NF-κB signaling and increased activity of IRF1 in structuring those effects (both p < .05).

Intimidated? Before you defer to the “experts” doing these studies, I will show you some things I noticed in the six studies and how you can debunk the relevance of these studies for promoting health and dealing with illness. Actually, I will show that even if these six studies had gotten the results the authors claimed – and they did not – the effects would at best be trivial and lost among the other things going on in patients’ lives.

Fortunately, there are lots of signs that you can dismiss such studies and go on to something more useful, if you know what to look for.

Some general rules:

  1. Don’t accept claims of efficacy/effectiveness based on underpowered randomized trials. Dismiss them. A reliable rule of thumb is to dismiss trials that have fewer than 35 patients in the smallest group. Over half the time, true moderate-sized effects will be missed in such studies, even if they are actually there.

Due to publication bias, most of the positive effects published from trials of this size will be false positives and won’t hold up in well-designed, larger trials.

When significant positive effects from such trials are reported in published papers, they have to be large to have reached significance. If not outright false, these effect sizes won’t be matched in larger trials. So significant, positive effect sizes from small trials are likely to be false positives, exaggerated, and unlikely to replicate. For that reason, we can consider small studies to be pilot or feasibility studies, but not as providing estimates of how large an effect we should expect from a larger study. Investigators do it all the time, but they should not: they run power calculations estimating how many patients they need for a larger trial from the results of such small studies. No, no, no!

Having spent decades examining clinical trials, I am generally comfortable dismissing effect sizes that come from trials with fewer than 35 patients in the smaller group. I agree with the suggestion that if two larger trials are available in a given literature, go with those and ignore the smaller studies. If there are not at least two larger studies, keep the jury out on whether there is a significant effect.

Applying the Rule of 35, five of the six trials can be dismissed, and the sixth is ambiguous because of loss of patients to follow-up. If promoters of mind-body interventions want to convince us that they have beneficial effects on physical health by conducting trials like these, they have to do better. None of the individual trials should increase our confidence in their claims. Collectively, the trials collapse in a mess without providing a single credible estimate of effect size. This attests to the poor quality of evidence and disrespect for methodology that characterizes this literature.
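The arithmetic behind the Rule of 35 is easy to check for yourself. The sketch below is my own illustration (not taken from the review or any of the six trials): it approximates the power of a two-arm trial with a simple normal-approximation z-test, then simulates the "winner's curse," the inflation of effect sizes among small trials that happen to reach significance. The particular numbers (a true Cohen's d of 0.3 or 0.5, 20 patients per arm) are illustrative choices, not figures from the reviewed studies.

```python
import math
import random
import statistics

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_arm_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided z-test (alpha = .05) comparing two
    group means, given a true standardized effect size d."""
    noncentrality = d * math.sqrt(n_per_group / 2)
    return normal_cdf(noncentrality - z_crit)

# A "moderate" effect (Cohen's d = 0.5) is detected only slightly more
# than half the time with 35 patients per group.
for n in (15, 25, 35, 64):
    print(f"n = {n:2d} per group: power = {two_arm_power(0.5, n):.2f}")

# Winner's curse: among small trials that do cross the significance
# threshold, the observed effect is inflated well beyond the true effect.
random.seed(1)
true_d, n, sig_effects = 0.3, 20, []
for _ in range(5000):
    a = [random.gauss(0, 1) for _ in range(n)]       # control arm
    b = [random.gauss(true_d, 1) for _ in range(n)]  # treatment arm
    sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    d_obs = (statistics.mean(b) - statistics.mean(a)) / sd
    if d_obs / math.sqrt(2 / n) > 1.96:              # "significant" trial
        sig_effects.append(d_obs)

print(f"true d = {true_d}; mean d among significant trials = "
      f"{statistics.mean(sig_effects):.2f}")
```

With a true d of 0.5, power at 35 per group comes out around 55%, and the conventional 80% is reached only at roughly 64 per group; in the simulation, the small trials that do reach significance report effects roughly double the true one or more. That is exactly why small-trial effect sizes should not be taken as estimates for planning larger studies.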

  2. Don’t be taken in by titles of peer-reviewed articles that are themselves an announcement that these interventions work. Titles may not be telling the truth.

What I found extraordinary is that five of the six randomized trials had a title indicating that a positive effect was found. I suspect that most people encountering these titles will not actually go on to read the studies. So, they will be left with the false impression that positive results were indeed obtained. It’s quite a clever trick to make the title of an article, by which most people will remember it, into a false advertisement for what was actually found.

For a start, we can simply remind ourselves that with these underpowered studies, investigators should not even be making claims about efficacy/effectiveness. So, one trick of the developing skeptic is to check whether the claims being made in the title fit with the size of the study. Actually going to the results section, one can find other evidence of discrepancies between what was found and what is being claimed.

I think it’s a general rule of thumb that we should be careful with titles for reports of randomized trials that declare results. Even when what is claimed in the title fits with the actual results, it often creates the illusion of a greater consistency with what already exists in the literature. Furthermore, even when future studies inevitably fail to replicate what is claimed in the title, the false claim lives on, because failing to replicate key findings is almost never a condition for retracting a paper.

  3. Check the institutional affiliations of the authors. These six trials serve as a depressing reminder that we can’t go on researchers’ institutional affiliations, or their having federal grants, to reassure us of the validity of their claims. These authors are not from Quack-Quack University, and they get funding for their research.

In all cases, the investigators had excellent university affiliations, mostly in California. Most studies were conducted with some form of funding, often federal grants. A quick check of Google would reveal that at least one of the authors on a study, usually more, had federal funding.

  4. Check the conflicts of interest, but don’t expect the declarations to be informative, and be skeptical of what you find. It is disappointing that a check of the conflict of interest statements for these articles would be unlikely to arouse suspicion that the claimed results might have been influenced by financial interests. One cannot readily see that the studies were generally done in settings promoting alternative, unproven treatments that would benefit from the publicity generated by the studies. One cannot see that some of the authors have lucrative book contracts and speaking tours that require making claims for dramatic effects of mind-body treatments, claims that could not possibly be supported by transparent reporting of the results of these studies. As we will see, one of the studies was actually conducted in collaboration with Deepak Chopra and with money from his institution. That would definitely raise flags in the skeptic community. But the dubious tie might be missed by patients and their families vulnerable to unwarranted claims and unrealistic expectations of what can be obtained outside of conventional medicine, like chemotherapy, surgery, and pharmaceuticals.

Based on what I found probing these six trials, I can suggest some further rules of thumb. (1) Don’t assume for articles about the health effects of alternative treatments that all relevant conflicts of interest are disclosed. Check the setting in which the study was conducted and whether an integrative [complementary and alternative, meaning mostly unproven] care setting was used for recruiting or running the trial. Not only would this represent potential bias on the part of the authors, it would represent selection bias in the recruitment of patients and in their responsiveness to placebo effects consistent with the marketing themes of these settings. (2) Google the authors and see if they have lucrative pop psychology book contracts, TED talks, or speaking gigs at positive psychology or complementary and alternative medicine gatherings. None of these lucrative activities is typically expected to be disclosed as a conflict of interest, but all require making strong claims that are not supported by available data. Such rewards are perverse incentives for authors to distort and exaggerate positive findings and to suppress negative findings in peer-reviewed reports of clinical trials. (3) Check and see if known quacks have prepared recruitment videos for the study, informing patients what will be found. (Seriously, I was tipped off to look, and I found exactly that.)

  5. Look for the usual suspects. A surprisingly small, tight, interconnected group is generating this research. You could look the authors up on Google or Google Scholar, or browse through my previous blog posts and see what I have said about them. As I will point out in my next blog post, one got withering criticism for her claim that drinking carbonated sodas, but not sweetened fruit drinks, shortened your telomeres, so that drinking soda was worse than smoking. My colleagues and I re-analyzed the data of another of the authors, who had claimed that pursuing meaning, rather than pleasure, in your life affected gene expression related to immune function. We found that it did not. We also showed that substituting randomly generated data worked as well as what he got from blood samples in replicating his original results. I don’t think it is ad hominem to point out a history for both of these authors of making implausible claims. It speaks to source credibility.
  6. Check and see if there is a trial registration for a study, but don’t stop there. You can quickly check with PubMed whether a report of a randomized trial is registered. Trial registration is intended to ensure that investigators commit themselves in advance to one or maybe two primary outcomes, so readers can check whether those are what they emphasized in their paper. You can then check to see if what is said in the report of the trial fits with what was promised in the protocol. Unfortunately, I could find only one of these trials registered. The trial registration was vague on what outcome variables would be assessed and did not mention the outcome emphasized in the published paper (!). The registration also said the sample would be larger than what was reported in the published study. When researchers have difficulty in recruitment, their study is often compromised in other ways. I’ll show how this study was compromised.

Well, it looks like applying these generally useful rules of thumb is not always so easy with these studies. I think the small sample sizes across all of the studies would be enough to decide that this research has yet to yield meaningful results and certainly does not support the claims that are being made.

But readers who are motivated to put in the time probing deeper will come up with strong signs of p-hacking and questionable research practices.

  7. Check the report of the randomized trial and see if you can find a declaration of one or two primary outcomes and a limited number of secondary outcomes. What you will find instead is that these studies always have more outcome variables than patients receiving the interventions. The opportunities for cherry-picking positive findings and discarding the rest are huge, especially because it is so hard to assess what data were collected but not reported.
  8. Check and see if you can find tables of unadjusted primary and secondary outcomes. Honest and transparent reporting involves giving readers a look at simple statistics so they can decide whether the results are meaningful. For instance, if effects on stress and depressive symptoms are claimed, are the results impressive and clinically relevant? In almost all cases, no peeking is allowed. Instead, the authors provide analyses and statistics with lots of adjustments made. They break lots of rules in doing so, especially with such small samples. These authors are virtually assured of getting results to crow about.
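
The cherry-picking arithmetic is worth making concrete. Under the null hypothesis, each outcome’s p-value is uniformly distributed, so with many uncorrected outcomes the chance of at least one “significant” finding climbs quickly. A sketch; the assumption of independent outcomes is mine, and correlated outcomes would change the numbers somewhat:

```python
import numpy as np

rng = np.random.default_rng(1)

def chance_of_false_positive(n_outcomes, alpha=0.05, n_sims=20000):
    """Probability that at least one of n_outcomes truly null
    outcome measures reaches p < alpha in a single trial."""
    # under the null, each p-value is uniform on [0, 1]
    p_values = rng.uniform(size=(n_sims, n_outcomes))
    return float((p_values < alpha).any(axis=1).mean())

for k in (1, 10, 20):
    print(k, round(chance_of_false_positive(k), 2))
# analytically 1 - (1 - alpha) ** k: about 0.05, 0.40, and 0.64
```

With twenty outcome measures and no correction, an intervention with no effect at all has roughly a two-in-three chance of producing something to crow about.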

Famously, Joe Simmons and Leif Nelson hilariously published the claim that briefly listening to the Beatles’ “When I’m 64” left students a year and a half younger than if they had been assigned to listen to “Kalimba.” Simmons and Nelson knew this was nonsense, but their intent was to show what researchers can do if they have free rein in how they analyze their data and what they report. They revealed the tricks they used, but those tricks were minor league and amateurish compared to what the authors of these trials consistently did in claiming that yoga, tai chi, and mindfulness modified expression of DNA.

Stay tuned for my next blog post, where I go through the six studies. But consider this if you or a loved one have to make an immediate decision about whether to plunge into the world of woo-woo unproven medicine in hopes of altering DNA expression: I will show that the authors of these studies did not get the results they claimed. But who should care even if they had? The effects were laughably trivial. As the authors of the review about which I have been complaining noted:

One other problem to consider are the various environmental and lifestyle factors that may change gene expression in similar ways to MBIs [Mind-Body Interventions]. For example, similar differences can be observed when analyzing gene expression from peripheral blood mononuclear cells (PBMCs) after exercise. Although at first there is an increase in the expression of pro-inflammatory genes due to regeneration of muscles after exercise, the long-term effects show a decrease in the expression of pro-inflammatory genes (55). In fact, 44% of interventions in this systematic review included a physical component, thus making it very difficult, if not impossible, to discern between the effects of MBIs from the effects of exercise. Similarly, food can contribute to inflammation. Diets rich in saturated fats are associated with pro-inflammatory gene expression profile, which is commonly observed in obese people (56). On the other hand, consuming some foods might reduce inflammatory gene expression, e.g., drinking 1 l of blueberry and grape juice daily for 4 weeks changes the expression of the genes related to apoptosis, immune response, cell adhesion, and lipid metabolism (57). Similarly, a diet rich in vegetables, fruits, fish, and unsaturated fats is associated with anti-inflammatory gene profile, while the opposite has been found for Western diet consisting of saturated fats, sugars, and refined food products (58). Similar changes have been observed in older adults after just one Mediterranean diet meal (59) or in healthy adults after consuming 250 ml of red wine (60) or 50 ml of olive oil (61). However, in spite of this literature, only two of the studies we reviewed tested if the MBIs had any influence on lifestyle (e.g., sleep, diet, and exercise) that may have explained gene expression changes.

How about taking tango lessons instead? You would at least learn dance steps, get exercise, and decrease any social isolation. And so what if there were no more benefits than from taking up these other activities?

The Prescription Pain Pill Epidemic: A Conversation with Dr. Anna Lembke

My colleague, Dr. Anna Lembke is the Program Director for the Stanford University Addiction Medicine Fellowship, and Chief of the Stanford Addiction Medicine Dual Diagnosis Clinic. She is the author of a newly released book on the prescription pain pill epidemic: “Drug Dealer, MD: How Doctors Were Duped, Patients Got Hooked, and Why It’s So Hard to Stop” (Johns Hopkins University Press, October 2016).

I spoke with her recently about the scope of this public health tragedy, how we got here and what we need to do about it.

Dr. Jain: About 15-20 years ago American medicine underwent a radical cultural shift in its attitude towards pain, a shift that ultimately culminated in a public health tragedy. Can you comment on factors that contributed to that shift occurring in the first place?
Dr. Lembke: Sure. So the first thing that happened (and it was really more like the early 1980’s when this shift occurred) was that there were more people with daily pain. Overall, our population is getting healthier, but we also have more people with more pain conditions. No one really knows exactly the reason for that, but it probably involves people living longer with chronic illnesses, and more people getting surgical interventions for all types of conditions. Any time you cut into the body, you cut across the nerves and you create the potential for some kind of neuropathic pain problem.
The other thing that happened in the 1980’s was the beginning of the hospice movement. This movement helped people at the very end of life (the last month to weeks to days of their lives) to transition to death in a more humane and peaceful way. There was growing recognition that we weren’t doing enough for people at the end of life. As part of this movement, many doctors began advocating for using opioids more liberally at the end of life.
There was also a broader cultural shift regarding the meaning of pain. Prior to 1900 people viewed pain as having positive value: “what does not kill you makes you stronger” or “after darkness comes the dawn”. There were spiritual and biblical connotations and positive meaning in enduring suffering. What arose, through the 20th century, was this idea that pain is actually something that you need to avoid because pain itself can lead to a psychic scar that contributes to future pain. Today, not only is pain painful, but pain begets future pain. By the 1990’s, pain was viewed as a very bad thing and something that had to be eliminated at all cost.
Growing numbers of people experiencing chronic pain, the influence of the hospice movement, and a shifting paradigm about the meaning and consequences of experiencing pain, led to increased pressures within medicine for doctors to prescribe more opioids. This shift was a departure from prior practice, when doctors were loath to prescribe opioids, for fear of creating addiction, except in cases of severe trauma, cases involving surgery, or cases of the very end of life.
Dr. Jain: The American Pain Society had introduced “pain as the 5th vital sign,” a term which suggested physicians, who were not taking their patients’ pain seriously, were being neglectful. What are your thoughts about this term?
Dr. Lembke: “Pain is the 5th vital sign” is a slogan. It’s kind of an advertising campaign. We use slogans all the time in medicine, many times to good effect, to raise awareness both inside and outside the profession about a variety of medical issues. The reason that “pain is the 5th vital sign” went awry, however, has to do with the ways in which professional medical societies, like the American Pain Society, and so-called “academic thought leaders”, began to collaborate and cooperate with the pharmaceutical industry. That’s where “pain is the 5th vital sign” went from being an awareness campaign to being a brand for a product, namely prescription opioids.
So the good intentions in the early 1980’s turned into something really quite nefarious when it came to the way that we started treating patients. To really understand what happened, you have to understand the ways in which the pharmaceutical industry, particularly the makers of opioid analgesics, covertly collaborated with various institutions within what I’ll call Big Medicine, in order to promote opioid prescribing.
Dr. Jain: So by Big Medicine what do you mean?
Dr. Lembke: I mean the Federation of State Medical Boards, The Joint Commission (JCAHO), pain societies, academic thought leaders, and the Food and Drug Administration (FDA). These are the leading organizations within medicine whose job it is to guide and regulate medicine. None of these are pharmaceutical companies per se, but what happened around opioid pain pills was that Big Pharma infiltrated these various organizations in order to use false evidence to encourage physicians to prescribe more opioids. They used a Trojan Horse approach. They didn’t come out and say we want you to prescribe more opioids because we’re Big Pharma and we want to make more money; instead, what they said was we want you to prescribe more opioids because that’s what the scientific evidence supports.
The story of how they did that is really fascinating. Let’s take The Joint Commission (JCAHO) as an example. In 1996, when oxycontin was introduced to the market, JCAHO launched a nationwide pain management educational program where they sold educational materials to hospitals, which they acquired for free from Purdue Pharma. These materials included statements which we now know to be patently false. JCAHO sold the Purdue Pharma videos and literature on pain to hospitals.
These educational materials perpetuated four myths about opioid prescribing. The first myth was that opioids work for chronic pain. We have no evidence to support that. The second was that no dose is too high. So if your patient responds to opioids initially and then develops tolerance, just keep going up. And that’s how we got patients on astronomical amounts of opioids. The third myth was about pseudo addiction. If you have a patient who appears to be demonstrating drug seeking behavior, they’re not addicted. They just need more pain meds. The fourth and most insidious myth was that there is a halo effect when opioids are prescribed by a doctor, that is, they’re not addictive as long as they’re being used to treat pain.
So getting back to JCAHO, not only did they use material propagating myths about the use of opioids to treat pain, but they also did something that was very insidious and, ultimately, very bad for patients. They made pain a “quality measure”. By The Joint Commission’s own definition of a quality measure, it must be something that you can count. So what they did was they created this visual analog scale, also known as the “pain scale”. The scale consists of numbers from one to ten describing pain, with sad and happy faces to match. JCAHO told doctors they needed to use this pain scale in order to assess a patient’s pain. What we know today is that this pain scale has not led to improved treatment or functional outcomes for patients with pain. The only thing that it has been correlated with is increased opioid prescribing.
This sort of stealth maneuver by Big Pharma to use false evidence or pseudo-science to infiltrate academic medicine, regulatory agencies, and academic societies in order to promote more opioid prescribing: that’s an enduring theme throughout any analysis of this epidemic.
Dr. Jain: Can you comment specifically on the breadth and depth of the opioid epidemic in the US? What were the key factors involved?
Dr. Lembke: Drug overdose is now the leading cause of accidental death in this country, exceeding death due to motor vehicle accidents or firearms. Driving this statistic is opioid deaths and driving opioid deaths is opioid pain prescription deaths, which in turn correlates with excessive opioid prescribing. There are more than 16,000 deaths per year due to prescription opioid overdoses.
What’s really important to understand is that an opioid overdose is not a suicide attempt. The vast majority of these people are not trying to kill themselves, and many of them are not even taking the medication in excess. They’re often taking it as prescribed, but over time are developing a low grade hypoxia. They may get a minor cold, let’s say a pneumonia, then they’ll take the pills and they’ll fall asleep and won’t wake up again because their tolerance to the euphorigenic and pain effects of the opioids is very robust, but their tolerance to the respiratory suppressant effect doesn’t keep pace with that. You can feel like you need to take more in order to eliminate the pain, but at the same time the opioid is suppressing your respiratory drive, so you eventually become hypoxemic and can’t breathe anymore and just fall into a gradual sleep that way.
There are more than two million people today who are addicted to prescription opioids. So not only is there this horrible risk of accidental death, but there’s obviously the risk of addiction. We also have heroin overdose deaths and heroin addiction on the rise, most likely on the coattails of the prescription opioid epidemic, driven largely by young people who don’t have reservations about switching from pills to heroin.
Dr. Jain: I was curious about meds like oxycontin, vicodin, and percocet. Are they somehow more addictive than other opioid pills?
Dr. Lembke: All opioids are addictive, especially if you’re dealing with an opioid naive person. But it is certainly true that some of the opioids are more addictive than others because of pharmacology. Let’s consider oxycontin. The major ingredient in oxycontin is oxycodone. Oxycodone is a very potent synthetic opioid. When Purdue formulated it into oxycontin, what they wanted to create was a twice daily pain medication for cancer patients. So they put a hard shell around a huge dose: 12 hours’ worth of oxycodone. That hard shell was intended to release oxycodone slowly over the course of the day. But what people discovered is that if they chewed the oxycontin and broke that hard shell, then they got a whole day’s worth of very potent oxycodone at once. With that came the typical rush that people who are addicted to opioids describe, as well as this long and powerful and sustained high. So that is why oxycontin was really at the center of the prescription opioid epidemic. It basically was more addictive because of the quantity and potency once that hard shell was cracked.
Dr. Jain: So has the epidemic plateaued? And if so, why?
Dr. Lembke: The last year for which we have CDC data is 2014, when there were more prescription opioid-related deaths, and more opioid prescriptions written by doctors, than in any year prior. This is remarkable when you think that by 2014, there was already wide-spread awareness of the problem. Yet doctors were not changing their prescribing habits, and patients were dying in record numbers.
I’m really looking forward to the next round of CDC data to come out and tell us what 2015 looked like. I do not believe we have reached the end or even the waning days of this epidemic. Doctors continue to write over 250 million opioid prescriptions annually, many times what was written three decades ago.
Also, the millions of people who have been taking opioids for years are not easily weaned from opioids. They now have neuroadaptive changes in their brains which are very hard to undo. I can tell you from clinical experience that even when I see patients motivated to get off of their prescription opioids, it can take weeks, months, and even years to make that happen.
So I don’t think that the epidemic has plateaued, and this is one of the major points that I try to make in my book. The prescription drug epidemic is the canary in the coal mine. It speaks to deeper problems within medicine. Doctors get reimbursed for prescribing a pill or doing a procedure, but not for talking to our patients and educating them. That’s a problem. With the turmoil in the insurance system, we can’t even establish long-term relationships with our patients. So, as a proxy for real healing and attachment, we prescribe opioids. Those kinds of endemic issues within medicine have not changed, and until they do, I believe this prescription drug problem will continue unabated.

Hans Eysenck’s contribution to cognitive behavioral therapy for physical health problems: fraudulent data

  • The centenary of the birth of Hans Eysenck is being marked by honoring his role in bringing clinical psychology to the UK and pioneering cognitive behavior therapy (CBT).
  • There is largely silence about his publishing fraudulent data, editorial misconduct, and substantial undeclared conflicts of interest.
  • The articles in which Eysenck used fraudulent data are no longer cited much, but the influence of his claims which depended on these data remains profound.
  • Eysenck used fraudulent data to argue that CBT could prevent cancer and cardiovascular disease and extend the lives of persons with advanced cancer.
  • He similarly used fraudulent data to advance the claim that psychoanalysis is, unlike smoking, carcinogenic and has other adverse effects on health.
  • Ironically, Eysenck incorporated into his explanations for how CBT works elements of the psychoanalytic thinking that he seemingly detested.

If there is sufficient interest, a follow-up blog post will discuss:

  • Because of Eysenck’s influence, CBT in the UK exaggerates the role of early childhood adversity and gives much less attention to functional behavioral analysis than American behavior therapy and cognitive behavior therapy do.
  • Both CBT in the UK and some quack therapy approaches make assumptions about mechanism tied to Eysenck’s use of fraudulent data.
  • Consistent with Eysenck’s influence, CBT for physical problems in the UK largely focuses on self-report questionnaire assessments of mechanism of change and of outcome, rather than functional behavioral and objective physical health outcome variables.

Happy Birthday, Hans Eysenck

March 4, 2016 was the centenary of the birth of psychologist Hans Eysenck. The British Psychological Society’s The Psychologist marked the occasion with release of a free app by which BPS members can access a collection of articles about Hans Eysenck from the archives. Nonmembers can access the articles here.

The introduction to the collection, Philip Corr’s The centenary of a maverick, states:

Eysenck’s contributions were many, varied and significant, including: the professional development of clinical psychology; the slaying of the psychoanalytical dragon; pioneering behaviour therapy and, thus, helping to usher in the era of cognitive behavioural therapy…

Corr also wrote in defence of Eysenck in the Times Higher Education of March 30, 2016.

The articles collected in The Psychologist were written over many years. Together they present an unflattering picture of a controversial man who was shunned by his colleagues, blocked from getting awards, and who would humiliate those with whom he disagreed rather than acknowledge any contradictory evidence. Particularly revealing are Roderick Buchanan’s Looking back: The controversial Hans Eysenck and a review by Eysenck’s son Michael of Buchanan’s book, Playing with fire: The controversial career of Hans J. Eysenck.

However, the collection stops short of acknowledging what was revealed in the early 90s in The BMJ: Eysenck knowingly published fraudulent data to back outrageous claims that CBT prevented cancer and extended the lives of patients with terminal cancer, whereas psychoanalysis was carcinogenic. He published his claims in journals he had founded, liberally self-plagiarizing and duplicate publishing with undeclared conflicts of interest. Eysenck received salary supplements and cash awards from German tobacco companies and from lawyers for the American tobacco companies for these activities.

The BMJ gave psychiatrists Anthony Pelosi and Louis Appleby a forum in the early nineties for criticizing Eysenck, even though the articles they attacked had been published elsewhere. BMJ editor Richard Smith followed up, citing Eysenck as an example in raising the question of whether editors should publish research articles in their own journals. Pelosi filed formal charges against Eysenck with the British Psychological Society. But, according to Buchanan’s book:

The BPS investigatory committee deemed it “inappropriate” to set up an investigatory panel to look into the material Pelosi had sent them, and henceforth considered the matter closed. Pelosi disagreed, of course, but was left with little recourse.

In an editorial in The Times Simon Wessely acknowledged Pelosi and Appleby’s criticism of Eysenck, but said “It would take more than a couple of psychiatrists to ruffle Eysenck.”

Wessely suggested that the matter be dropped: the controversy was distracting everyone from the real progress being made in psychological approaches to cancer, like showing that a fighting spirit extends the lives of cancer patients. There was apparently no further mention in the UK press. Read more here.

Eysenck’s articles involving fraudulent data are seldom cited in the contemporary literature, but the claims the data were used to back remain quite influential. For instance, Eysenck claimed psychological factors posed more risk for cancer than many well-established biological factors. Including Eysenck’s data probably allowed one of the most cited meta-analyses of psychological factors in cancer to pass the threshold of hazard ratios strong enough for publication in the prestigious journal Nature Clinical Practice Oncology. Without the inclusion of Eysenck’s data, hazard ratios from methodologically weak studies cluster only slightly higher than 1.0, suggesting little association that cannot be explained by confounds. A later blog post will document the broader influence of the Eysenck fraud on psychoneuroimmunology.
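
To see how a single set of implausibly strong results can drag a pooled estimate over a publication-worthy threshold, consider standard inverse-variance (fixed-effect) pooling of log hazard ratios. The numbers below are hypothetical, chosen only to illustrate the mechanism, and are not taken from the meta-analysis in question:

```python
import numpy as np

def pooled_hazard_ratio(hrs, ses):
    """Fixed-effect inverse-variance pooling on the log scale.
    hrs: study hazard ratios; ses: standard errors of log(HR)."""
    log_hrs = np.log(hrs)
    weights = 1.0 / np.asarray(ses) ** 2
    return float(np.exp(np.sum(weights * log_hrs) / np.sum(weights)))

# hypothetical weak studies clustering just above HR = 1.0
hrs = [1.10, 1.05, 1.15, 0.95]
ses = [0.10, 0.12, 0.15, 0.20]

print(round(pooled_hazard_ratio(hrs, ses), 2))  # about 1.08

# add one Eysenck-style outlier claiming a huge effect
print(round(pooled_hazard_ratio(hrs + [6.0], ses + [0.30]), 2))  # about 1.16
```

Even one extreme study among several near-null ones can pull the pooled hazard ratio well away from 1.0, which is why removing fraudulent data can flip a meta-analytic conclusion.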

Eysenck’s claims concerning effects of CBT on physical health conditions now similarly go uncited.  However, the idiosyncratic definition he gave to CBT and his claims about the presumed mechanism by which it improved physical health pervade both CBT as defined in the UK and a number of quack treatments in the UK and elsewhere.

It is important to establish the connection between fraudulent data, distinctive features of CBT in the UK, and presumed mechanisms of action in order to open for re-examination the forms that CBT for physical health problems take in the UK and the way in which claims of efficacy are evaluated.

Fraudulent Data

Eysenck repeated tables and text in a number of places, but I will mainly draw on the data as he presented them in the journal he founded, Behaviour Research and Therapy [1, 2], which correspond with what he presented elsewhere.

Eysenck’s Croatian collaborator Grossarth-Maticek conducted the therapy and collected the predictor and outcome data. A personality inventory was used to classify participants receiving therapy into four types: a cancer-prone type (Type 1), a coronary heart disease (CHD)-prone type (Type 2), and two healthy types (Types 3 and 4). The typology was derived from quadrants in a 2×2 dichotomization of high versus low rationality and anti-emotionality, quite different from the dimensions and item content of the Eysenck Personality Questionnaire. Indeed, Roderick Buchanan noted in his biography that “Eysenck had struggled to banish typological concepts in favour of continuous dimensions for most of his career.” Grossarth-Maticek’s questionnaire and typology were later sharply criticized by Eysenck’s son Michael, among many others.

Eysenck and Grossarth-Maticek reported results of individually delivered “creative novation behaviour therapy”:

… Effects of prophylactic behaviour therapy on the cancer-prone and the CHD-prone probands respectively after 13 yr. It will be clear that treatment by means of creative novation behaviour therapy has had a highly significant prophylactic effect, preventing deaths from cancer in probands of Type 1, and death from coronary heart disease in probands of Type 2.

[Table 3: prophylactic effects]

For creative novation behaviour therapy delivered in a group format:

It will be seen that both cancer and CHD mortality are very significantly higher in the control group, as is death from other causes. Incidence rates are also very significantly higher in the control group for cancer, but with a difference below our selected P = 0.01 level of significance for CHD. Most telling is the difference regarding those ‘still living’-79.9% in the therapy group, 23.9% in the control group. The results of the group therapy study support those of the individual therapy group in demonstrating the value of behaviour therapy in preventing death from cancer and CHD, and in lowering the incidence from cancer and possibly from CHD.

[Table 4: group therapy]

Strong effects were reported even when the treatment was delivered as a discussion of a brief pamphlet. The companion paper described this bibliotherapy and provided the pamphlet as an appendix, which is reproduced here.

This statement is given to the proband, who also receives an introductory 1-hr treatment in which the meaning of the statement is explained, application considered, and likely advantages discussed. After the patient has been given time to consider the statement, and apply it to his/her own problems, the therapist spends a further 3-5 hr with the patient, suggesting specific applications of the principles in the statement to the needs of the patient, and his/her particular circumstances.

Six hundred probands received the bibliotherapy, while a control group of 500, matched for personality type, smoking, age and sex, received no treatment. Another 100 matched patients received a placebo condition in which they met with interviewers to discuss a pamphlet with “psychoanalytic explanation and suggestions.”

I encourage readers to take a look at the pamphlet, which is less than a page long. It ends with:

The most important aims of autonomous self-activation: your aim should always be to produce conditions [which] would make it possible for you to lead a happy and contented life.

The results were:

There are no statistically significant differences between the control group and the placebo group, which may therefore be combined and considered a single control group. Compared with this control group, the treatment group fared significantly better. In the control group, 128 died of cancer, 176 of CHD; in the treatment group only 27 died of cancer, and 47 of CHD. For ‘death from other causes’, the figures are 192 and 115. Clearly the bibliographic method had a very strong prophylactic effect.

[Table 5: group and bibliotherapy]

Eysenck and Grossarth-Maticek reported numerous other studies, including one in which 24 matched pairs of patients with inoperable cancer were assigned to either creative novation behaviour therapy or a control group. The patients receiving the behaviour therapy lived five years versus three years for those in the control group, a difference which was highly significant.

Keep in mind that in these studies all of the creative novation behaviour therapy sessions were provided solely by Grossarth-Maticek.

But let’s jump to the last in a series of tables constructed to make the argument that psychoanalysis was harmful to physical health.

We are here dealing with three groups. Group I is constituted of patients who terminated their  psychoanalytical treatment after 2 yr or less, and were then treated with behaviour therapy.

Group 2 is a control group matched with the members of group I on age, sex, smoking and personality type. Group 3 is a control group which discontinued psychoanalysis, like Group I, but did not receive behaviour therapy. Members of Group I and 2 do not differ significantly in mortality, but Group 3 has significantly greater mortality than either. Looking again at the percentage of patients still living, we find for Group 1 92, 95 and 95%, for Group 2 96, 89 and 95%, for Group 3 the figures are: 72, 63 and 61%. Clearly behaviour therapy can reverse the negative impact psychoanalysis has on survival.

[Table 15: psychoanalysis]

In a number of places, this is explained in identical words:

Theoretically, this conclusion is not unreasonable. We have shown that stress is a powerful factor in causing cancer and CRD, and it is widely agreed, even among psychoanalysts, that their treatment imposes a considerable strain on patients. The hope is often expressed that finally the treatment will resolve these strains, but there is no evidence to suggest that this is true (Rachman & Wilson, 1980; Eysenk & Martin, 1987). Indeed, there is good evidence that even in cases of mental disorder psychoanalysis often does considerable harm (Mays & Franks, 1985). A theoretical model to account for these negative outcomes of psychoanalysis and psychotherapy generally has been presented elsewhere (Eysenck, 1985); it would apply equally well in the psychosomatic as in the purely psychiatric field.

CBT for physical health problems: a dog’s breakfast approach

Grossarth-Maticek had already formulated his approach and delivered all of the psychotherapy before Eysenck began co-authoring papers with him and promoting him. In a 1982 article without Eysenck as an author, Grossarth-Maticek is quite explicit about the psychoanalytic theory behind his approach:

A central proposition of our research program is that cancer patients are either preoccupied with traumatic events of early childhood or with excessive expectations of the parents during their whole life. They are characterized by intensive internal inhibitions toward expressing feelings and desires. Therefore, we speak of a chronic blockade of expression of feelings and desires. We assume that parents of cancer patients did not respond adequately to the child’s cries for help and these children were obliged very early to do non-conforming daily task. Cancer patients have never learned to express persistent cries for help…

The specific family dynamics in the special educational pattern which block hysterical reactions determine the behavior, which in turn is characterized by excessive persistence of performance of the daily task, disregard of symptoms and lack of aggressiveness in behavior. Through the currents of negative life events (i.e., death of closely connected persons) expressions of loneliness and reactive depression can appear intensively and chronically.

If this is not clear enough:

In our approach we try not to deny the psycho analytic propositions but to integrate the psychoanalytic research program with social psychological and sociological factors, hereby assuming that they have interactive effects on carcinogenesis.

Strangely, Grossarth-Maticek suggests in this article that the psychoanalytic factors interact with “organic risk factors such as cigarette smoking in the case of lung cancer.” Grossarth-Maticek and Eysenck would soon be receiving tens of thousands of dollars in support from German tobacco companies and from lawyers for American tobacco companies to promote the idea that personality caused both smoking and lung cancer, and that any connection between smoking and lung cancer was therefore spurious, so product liability suits against tobacco companies should be dismissed.

In the articles co-authored by Grossarth-Maticek and Eysenck, these roots of what Eysenck repackaged as creative novation behaviour therapy are only hinted at, but are noticeable to the observant reader in references to the role of dependency and autonomy. Fraudulent data are mustered to show the powerful positive effects of this behaviour therapy versus the toxicity of psychoanalysis.

On page 8 of this article, ten explicitly labeled behavioural techniques are identified as occurring across individual, group, and bibliotherapy:

  • Training for reduction of the planned behaviors initiation of autonomous behavior.
  • Training for cognitive alteration under conditions of relaxation.
  • Training for alternative reactions.
  • Training for the integration of cognition, emotionality and intuition.
  • Training to achieve stable expression of feelings.
  • Training for potentiating social behavioral control.
  • Training to suppress stress-creating ideas.
  • Training to achieve a behavior-directing hierarchic value structure.
  • Training in the suppression of stress-creating thought.
  • Abolition of dependence reactions.

This approach bears only a superficial resemblance to American behavioral therapy and CBT. The emphasis on expression of emotional feelings and the abolition of dependent reactions is incomprehensible when detached from its psychoanalytic roots. The paper refers to behavioral analysis, but interviews about the past, including childhood experiences, are emphasized, rather than applied behavioral analysis. The hierarchies of behavior do not correspond to operant approaches, but to a value structure of autonomy versus dependence.

There is also considerable reference to the use of hypnosis to achieve these goals.

In short, neither the goals nor the methods have much relationship to learning theory at the time that Eysenck was writing nor to contemporary developments in operant conditioning. His approach is a tortured extension of classical conditioning. Outside of the fraudulent data that Grossarth-Maticek developed and that he published with Eysenck, there is little basis for assuming that psychological factors were related to physical health in the way the treatment approach postulated.

It should be kept in mind that Eysenck was not a psychotherapist. He actually detested psychotherapy and had generated considerable controversy earlier by arguing that any apparent effects of psychotherapy were due to spontaneous remission. It should also be noted that Eysenck was claiming that creative novation behaviour therapy modified personality traits, even when delivered via a brief pamphlet, in ways that could not be anticipated from his other writings about personality. Finally, the particular personality characteristics that Eysenck was talking about modifying were very different from what he assessed with the Eysenck Personality Inventory.

Only “controversial” and “too good to be true,” or fraud?

Before Eysenck began collaborating with Grossarth-Maticek, there were widespread doubts about the validity of Grossarth-Maticek’s work. In 1973, Grossarth-Maticek’s work had been submitted to the University of Heidelberg as a Habilitation, a second doctoral degree required for a full professorship. It was rejected. One member of the committee, Manfred Amelung, declared the results “too good to be true.” He retained a copy and would later put his knowledge of its details into a devastating critique. According to Buchanan’s biography, Eysenck demanded of Grossarth-Maticek: “you must let me check your data, for if you deceive me I will never forgive you.”

Eysenck gained access to the data set, sometimes directing reanalyses by Grossarth-Maticek and his statistician. Other analyses were done by Eysenck’s statisticians in London. Eysenck’s biographer Buchanan noted “there were ample opportunities to select, tease out, or redirect attention – given a data set that was apparently sprawling chaotic but rich and ambitious….From the mid-1980s, Eysenck did virtually all of the writing for publication in English and presumably exerted a strong editorial control.” Buchanan also notes that the tobacco companies became skeptical not only of the strength of the findings being reported but also of their inconsistency. They refused to continue supporting Eysenck unless an independent team was set up to check the analyses and the conclusions that Eysenck was drawing from them.

Eysenck single-authored a target article for Psychological Inquiry that reproduced many of the tables that we have been discussing. More than a dozen commentators included the members of the independent team, but also others who did not have access to the data but who examined the tables with forensic attention. The commentary started off with Manfred Amelung, who made use of what he had learned from Grossarth-Maticek’s doctoral work.

Many of the commentators suggested that the intervention studies presented results that were “too good to be true,” not only in terms of the efficacy claimed for the intervention, but also in terms of the negative outcomes claimed for the control group. Other commentators pointed to gross inconsistencies across different reports in methods and results and to clear evidence of manipulation of data, including some patients being counted multiple times, other patients dying twice, Eysenck and Grossarth-Maticek’s improbable ability to obtain matching of intervention patients and controls, and too-perfect predictions. In the end, even Grossarth-Maticek’s Heidelberg statistician expressed concerns that there had been tampering with the data.

Both Grossarth-Maticek and Eysenck got opportunities to respond and were defensive and dismissive of the overwhelming evidence of exaggeration of the results and even fraud.

The exchanges in Psychological Inquiry occurred over two issues. Taken together, the critical commentaries are devastating, but the criticisms became diffuse because commentators focused on different problems. It took a more succinct, pithy critique by Anthony Pelosi and Louis Appleby in The BMJ to bring the crisis of credibility to a head.

Anthony Pelosi and Louis Appleby in The BMJ

In the first round of their two-part attack, Pelosi and Appleby centered on Eysenck and Grossarth-Maticek’s two articles in Behaviour Research and Therapy, but referenced the critiques in Psychological Inquiry. The remarkable effectiveness of these two psychiatrists’ critique depended largely on their pointing out what was hiding in plain sight in the two Behaviour Research and Therapy articles. For instance:

After 13 years, 16 of 50 untreated type 1 subjects had died of a carcinoma. Not one of the 50 cancer prone subjects receiving the psychotherapy died of cancer. The therapy was a genuine panacea, giving equivalent results for type 2 subjects and heart disease. The all cause mortality was over 60% in untreated and 15% in treated subjects. The death rate in the untreated subjects was truly alarming as they began the trial healthy and most were between 40 and 60 years of age.

I encourage readers to compare the Pelosi and Appleby paper to the tables I presented here and see what they missed.

Pelosi and Appleby calculated the effort required of Grossarth-Maticek if he had, as Eysenck insisted, single-handedly carried out all of the treatment.

It is striking that all the individual and group therapy was given by Professor Grossarth-Maticek. The trials were undertaken between 1972 and 1974 and involved 96 subjects (or perhaps 192 subjects, see below) in at least 20 hours of individual work, and at least 10 groups (245 subjects with 20-25 in each) for six to 15 sessions each. Add to this Grossarth-Maticek’s explanatory introduction to bibliotherapy for 600 people, and it can be seen that the amount of time spent by this single senior academic on his experimental psychotherapies is huge and certainly unprecedented.
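For concreteness, the workload implied by these figures can be tallied. The sketch below is not Pelosi and Appleby’s own calculation, only a rough lower bound under stated assumptions: one hour per group session (session length is not given in the quote) and the 1-hour introduction plus at least 3 further hours per bibliotherapy proband described in the companion paper.

```python
# Lower-bound tally of the therapy hours Grossarth-Maticek would have
# had to deliver personally, using the figures Pelosi and Appleby cite.
# Group-session length is not stated; one hour per session is assumed
# here purely for illustration.

individual_hours = 96 * 20   # 96 subjects, at least 20 hours each
group_hours = 10 * 6 * 1     # at least 10 groups x 6 sessions, 1 h assumed
biblio_hours = 600 * (1 + 3) # 1 h introduction + at least 3 h follow-up each

total = individual_hours + group_hours + biblio_hours
print(f"lower-bound hours of therapist contact: {total}")
```

Even on these conservative assumptions, the total exceeds 4,000 hours of one-on-one and group contact by a single academic, before counting the higher subject count of 192 that Pelosi and Appleby also flag.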

They summarized the inconsistencies and contradictions reported in Psychological Inquiry, but then added their own observation that a matching of 192 pairs of intervention and control patients had somehow produced a sample of only 192! They suggested that in the two Behaviour Research and Therapy articles there were at least “10 elaborate misprints or misstatements in the description of the methods” that the editor or reviewers should have caught.

At no point does the word “fraud” or “fraudulent” appear in Pelosi and Appleby’s first article. Rather, they suggest that Eysenck and Grossarth-Maticek are:

making claims which, if correct, would make creative novation therapy a vital part of public health policy throughout the world.

They conclude with:

For these reasons there should be a total reexamination and proper analysis of the original data from this research in an attempt to answer the questions listed above. The authors give their address as the Institute of Psychiatry in London, which must be concerned about protecting its reputation. Therefore the institute should, in our view, assist in this clarification of the meaning of the various studies. There should also be some stern questions asked of the editors of the various journals involved, especially those concerned among the editorial staff of Behaviour Research and Therapy who, in our opinion, have done a disservice to their scientific disciplines, and indeed to Professors Eysenck and Grossarth-Maticek, in allowing this ill considered presentation of research on such a serious topic.

Eysenck’s reply and Pelosi and Appleby’s response

Readers can consult Eysenck’s reply for themselves, but it strikes me as evasive and dismissive. Specific criticisms are not directly answered; instead, Eysenck points to consistency between his results and those of David Spiegel, who had claimed to obtain even stronger effects in his small study of supportive-expressive therapy for women with metastatic breast cancer. Eysenck argues that, rather than demolishing the credibility of his work with Grossarth-Maticek, Pelosi and Appleby only point to the need for funding of a replication. Eysenck closes with:

Their critical review, however incorrect, full of errors and misunderstandings, and lacking in objectivity, may have been useful in drawing attention to a large body of work, of both scientific and social relevance, that has been overlooked for too long.

Pelosi and Appleby took Eysenck’s reply as an opportunity to get even more specific in their criticisms:

We are accused of being vague in mentioning many errors, inappropriate analyses, and missing details in the publications on this research programme. We value this opportunity to be more specific, to clarify just a few of the questions raised by ourselves and others, which Eysenck has failed to answer, and to outline additional findings from these authors’ investigations.

After a detailed reply, they wrap up with references to the criticisms that Eysenck received in Psychological Inquiry, in an ironic note, turning Eysenck’s attacks on proponents of the link between smoking and lung cancer on to Eysenck himself:

Our concern has been to clarify the methods and analyses of a body of research which, if accurate, would profoundly influence public health policies on cancer and heart disease. Other critics have been more challenging in what they have alleged, and in our opinion the controversy which now surrounds one of academic psychology’s most influential figures constitutes a crisis for the subject itself. The seriousness of the detailed allegations by van der Ploeg, although refuted by Eysenck and Grossarth-Maticek, should in themselves prompt these authors to reexamine their own findings after appropriate further training in the methodology of medical research. Perhaps the most skilfully worded criticism on this subject was made not about Eysenck but by him in a debate on the relation between smoking and cancer. In disputing the findings of Doll and Hill’s epidemiological studies on this association he comments: “What we have found are serious methodological weaknesses in the design of the studies quoted in favour of these theories, statistical errors, and unsubstantiated extrapolations from dubious data to unconfirmed conclusions.” Eysenck owes it to himself and to his discipline to reconsider critically his own work on this subject.

In the more than 20 years since this exchange, Pelosi and Appleby and their ally, editor Richard Smith of The BMJ, failed to get an appropriate response from the British Psychological Society, King’s College London and the Institute of Psychiatry, the journal Behaviour Research and Therapy, or the Committee on Publication Ethics (COPE). This situation demonstrates the inability of British academia to correct bad and even fraudulent science. It stands as a cautionary note to those of us now attempting to correct what we perceive as bad science: efforts are likely to be futile. On the other hand, the editorship of Behaviour Research and Therapy has passed to an American, Michelle Craske, a professor at UCLA. Perhaps she can be persuaded to make a long overdue correction to the scientific record and remove a serious blemish on the credibility of that journal.

If there is sufficient interest, I will survey the profound influence of the fraudulent work of Eysenck and Grossarth-Maticek in a future blog post.

  • Because of their influence, CBT in the UK gives an exaggerated emphasis to early childhood adversity and much less emphasis to functional behavioural analysis than American behavior therapy and CBT do.
  • Consistent with Eysenck’s influence, CBT for physical problems in the UK largely focuses on self-report questionnaire assessments of mechanisms of change and of outcome, rather than on functional behavioral variables and objective physical health outcomes.

Influences can also be seen in:

Contemporary CBT for physical conditions as practiced in the UK, including CBT for irritable bowel syndrome (IBS), fibromyalgia, and other “all in the head” conditions that are deemed Medically Unexplained Symptoms (MUS) in the UK, as in the PRINCE trial of Trudie Chalder and Simon Wessely.

The “psychosomatic” approach, as seen in neurologist Suzanne O’Sullivan’s recent editorial in The Lancet and in her book It’s All in Your Head, which won the 2016 Wellcome Book Prize.

Quack treatments, such as Phil Parker’s Lightning Process, whose advertised effectiveness in treating chronic fatigue syndrome/myalgic encephalopathy, multiple sclerosis, and irritable bowel syndrome/digestive issues was ruled against by the UK’s Advertising Standards Authority (ASA). The Lightning Process is nonetheless implemented in the UK NHS under the direction of University of Bristol Professor Esther Crawley.

Quack cancer treatments, such as the Simonton visualization method.

More mainstream, but unproven, psychological treatments for cancer, including David Spiegel’s supportive-expressive therapy. Neither Spiegel nor anyone else has ever been able to replicate the finding praised by Eysenck, but Spiegel repeats his claims in a recent non-peer-reviewed article in the UK-based Psycho-Oncology and in a closely related article in the BPS’s British Journal of Health Psychology.

More mainstream, but unproven, psychological approaches to cancer that claim to improve immune functioning by reducing stress.

Some Scottish readers will understand this message concerning Eysenck’s fraud: The ice cream man cometh.

My usual disclaimer: All views that I express are my own and do not necessarily reflect those of PLOS or other institutional affiliations.

Pay $1000 to criticize a bad ‘blood test for depression’ article?

No way. Call for retraction.

Would you pay $1,000 for the right to criticize bad science in the journal in which it originally appeared? That is what it costs to participate in postpublication peer review at the online Nature Publishing Group (NPG) journal, Translational Psychiatry.

Damn, NPG is a high-fashion brand, but peer review is quite fallible, even at an NPG journal. Should we have to pay to point out the flawed science that even NPG inevitably delivers? You’d think we were doing them a favor in terms of quality control.

Put differently, should the self-correction on which scientific progress so thoroughly depends require that critics be willing to pay, presumably out of their own personal funds? Sure, granting agencies now reimburse publication costs for the research they fund, but a critique is unlikely to qualify.

Take another perspective: suppose you have a small data set of patients for whom you have blood samples. The limited value of the data set was further compromised by substantial, nonrandom loss to follow-up. But you nonetheless want to use it to solicit industry funding for a “blood test for depression.” Would you be willing to pay a premium of $3,600-$3,900 to publish your results in a prestigious NPG journal, with the added knowledge that the article would be insulated from critics?

I was curious just who would get so worked up about an article that they would pay $1,000 to complain.

So I entered Translational Psychiatry in the PUBLICATION NAME field at Web of Science. That yielded 379 entries. I then applied the restriction CORRESPONDENCE, which left only two entries.

Both presented original data and did not even cite another article in Translational Psychiatry. Maybe the authors were trying to get a publication into an NPG journal on the cheap, at a discount of $2,600.

It appears that nobody has ever published a letter to the editor in Translational Psychiatry. Does that mean that there has never ever been anything about which to complain? Is everything we find in Translational Psychiatry perfectly trustworthy?

I recently posted at Mind the Brain and elsewhere about a carefully-orchestrated media campaign promoting some bad science published in Translational Psychiatry. An extraordinary publicity effort disseminated a Northwestern University press release and video to numerous media outlets. There was an explicit appeal for industry funding for the development of what was supposedly a nearly clinic-ready inexpensive blood test for depression.

The Translational Psychiatry website where I learned of these publication costs displays the standard NPG message, which is made a mockery of by a paywall that effectively blocks critics:

“A key strength of NPG is its close relationship with the scientific community. Working closely with scientists, listening to what they say, and always placing emphasis on quality rather than quantity, has made NPG the leading scientific publisher at finding innovative solutions to scientists’ information needs.”

The website also contains the standard NPG assurances about authors’ disclosures of conflicts of interest:

“The statement must contain an explicit and unambiguous statement describing any potential conflict of interest, or lack thereof, for any of the authors as it relates to the subject of the report”

The authors of this particular paper declared:

“EER is named as an inventor on two pending patent applications, filed and owned by Northwestern University. The remaining authors declare no conflict of interest.”

Does this disclosure give readers much clarity concerning the authors’ potential financial conflict of interest? Check out this marketing effort exploiting the Translational Psychiatry article.

Northwestern Researchers Develop RT-qPCR Assay for Depression Biomarkers, Seek Industry Partners

I have also raised questions about a lack of disclosures of conflicts of interest from promoters of Triple P Parenting. The developers claimed earlier that their program was owned by the University of Queensland, so there was no conflict of interest to declare. Further investigation of the university website revealed that the promoters got a lucrative third of the proceeds. Once that was revealed, a flood of erratum notices disclosing the financial conflicts of interest of Triple P promoters followed – at least 10 so far. For instance:

[Image: Triple P erratum notice]

How bad is the bad science?

You can find the full Translational Psychiatry article here. The abstract provides a technical but misleading summary of results:

“Abundance of the DGKA, KIAA1539 and RAPH1 transcripts remained significantly different between subjects with MDD and ND controls even after post-CBT remission (defined as PHQ-9 <5). The ROC area under the curve for these transcripts demonstrated high discriminative ability between MDD and ND participants, regardless of their current clinical status. Before CBT, significant co-expression network of specific transcripts existed in MDD subjects who subsequently remitted in response to CBT, but not in those who remained depressed. Thus, blood levels of different transcript panels may identify the depressed from the nondepressed among primary care patients, during a depressive episode or in remission, or follow and predict response to CBT in depressed individuals.”

This was simplified in a press release that echoed in shamelessly churnalized media coverage. For instance:

“If the levels of five specific RNA markers line up together, that suggests that the patient will probably respond well to cognitive behavioral therapy, Redei said. “This is the first time that we can predict a response to psychotherapy,” she added.”

The unacknowledged problems of the article began with the authors having only 32 depressed primary-care patients at baseline, whose diagnostic status had not been confirmed by gold-standard semi-structured interviews administered by professionals.

But the problems get worse. The critical comparison of patients who recovered with cognitive behavioral therapy versus those who did not occurred in the subsample of nine recovered versus 13 unrecovered patients remaining after a loss to follow-up of 10 patients. Baseline results for the 9 + 13 = 22 patients in the follow-up sample did not even generalize back to the original full sample. How, then, could the authors argue that the results apply to the 23 million or so depressed patients in the United States? Well, they apparently felt they could better generalize back to the original sample, if not the United States, by introducing an analysis of covariance that controlled for age, race and sex. (For those of you who are tracking the more technical aspects of this discussion, contemplate the implications of controlling for three variables in a between-groups comparison of nine versus 13 patients. Apparently the authors believed that readers would accept the adjusted analyses in place of the unadjusted analyses, which had obvious problems of generalizability. The reviewers apparently accepted this.)
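To see why that parenthetical matters, here is a minimal illustration using hypothetical random data, not the study’s data: with only 22 observations, a model with a group indicator plus three covariates leaves 22 − 5 = 17 residual degrees of freedom, and adding covariates mechanically inflates the apparent fit even for pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_recovered, n_unrecovered = 9, 13
n = n_recovered + n_unrecovered

# Hypothetical data: a group indicator, three nuisance covariates
# (standing in for age, race, and sex), and a random "biomarker" outcome.
group = np.r_[np.ones(n_recovered), np.zeros(n_unrecovered)]
covariates = rng.normal(size=(n, 3))
y = rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_unadjusted = r_squared(group.reshape(-1, 1), y)
r2_adjusted = r_squared(np.column_stack([group, covariates]), y)

# Extra regressors can never lower R^2 in nested OLS models, so the
# "adjusted" model always looks at least as good, noise or not.
print(f"unadjusted R^2: {r2_unadjusted:.3f}")
print(f"adjusted R^2:   {r2_adjusted:.3f}")
```

The point of the sketch is simply that in a sample this small, an “improved” covariate-adjusted fit is exactly what one expects by construction, so it cannot rescue results that failed to generalize unadjusted.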

Finally, treatment with cognitive behavior therapy was confounded with uncontrolled treatment with antidepressants.

I won’t discuss here the other problems of the study noted in my earlier blog posts. But I think you can see that these are analyses of a small data set truly unsuitable for publication in Translational Psychiatry and serving as a basis for seeking industry funding for a blood test for depression.

As I sometimes do, I tried to move from blog posts about what I considered problematic to a formal letter to the editor to which the authors would have an opportunity to reply. It was then that I discovered the publication costs.

So what are the alternatives to a letter to the editor?

Letters to the editor are a particularly weak form of post-publication peer review. There is little evidence that they serve as an effective self-correction mechanism for science. Letters to the editor seldom alter the patterns of citations of the articles about which they complain.

Even if I paid the $1,000 fee, I would only have been entitled to 700 words to make my case that the article is scientifically flawed and misleading. I’m not sure that a similar fee would be required of the authors to reply. Maybe responding to critics is part of the original package they purchased from NPG. We cannot tell from what appears in the journal, because the necessity of responding to a critic has not yet arisen.

It is quite typical across journals, even those not charging for a discussion of published papers, to limit the exchanges to a single letter per correspondent and a single response from the authors. And the window for acceptance of letters is typically limited to a few weeks or months after an article has appeared. While letters to the editor are often peer-reviewed, replies from authors typically do not receive peer review.

A different outcome, maybe

I recently followed up my blogging about the serious flaws of a paper published in PNAS by Fredrickson and colleagues with a letter to the editor. They in turn responded. Compare the two letters and you will see why an uninformed reader might conclude that the exchange generated only confusion. But stay tuned…

The two letters would have normally ended any exchange.

However, this time my co-authors and I thoroughly re-analyzed the Fredrickson et al. data, and PNAS allowed us to publish our results. This time, we did not mince words:

“Not only is Fredrickson et al.’s article conceptually deficient, but more crucially statistical analyses are fatally flawed, to the point that their claimed results are in fact essentially meaningless.”

In the supplementary materials, we provided in excruciating detail our analytic strategy and results. The authors’ response was again dismissive and confusing.

The authors next refused our offer of an adversarial collaboration, in which both parties would lay out responses to each other before a mediator, in order to allow readers to reach some resolution. However, the strengths of our arguments and reanalysis – which included thousands of regression equations, some with randomly generated data – are such that others are now calling for a retraction of the original Fredrickson and Cole paper. If that occurs, it would be an extraordinarily rare event.
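The general idea behind checking an analytic pipeline against randomly generated data is easy to illustrate. The sketch below is not our actual reanalysis; it simply regresses pure noise on pure noise many times and counts how often a nominally “significant” association appears by chance alone. The critical correlation value of 0.361 is the standard two-tailed p < .05 threshold for n − 2 = 28 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_trials = 30, 1000
# Two-tailed p < .05 critical value for a Pearson correlation with
# 28 degrees of freedom, taken from standard tables.
r_crit = 0.361

# Regress pure noise on pure noise, many times over, and count how
# often the slope would be declared "statistically significant."
hits = 0
for _ in range(n_trials):
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > r_crit:
        hits += 1

print(f"nominally significant noise regressions: {hits}/{n_trials}")
```

Roughly 5% of the noise-on-noise regressions come out “significant,” which is why running an analytic procedure on random data is such a telling check: if the procedure produces impressive-looking results from noise, its impressive-looking results from real data mean little.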

The limits journals impose on post-peer-review commentary severely constrain the ability of science to self-correct.

The Reproducibility Project: Psychology is widely being hailed as a needed corrective for the crisis of credibility in science. But replications of studies such as this one, involving pre-post sampling of genomic expression from an intervention trial, are costly and unlikely to be undertaken. And why attempt a “replication” of findings that have no merit in the first place? After all, the authors’ results for baseline assessments did not replicate in the baseline results of the patients still available at follow-up. That suggests a problem with the data themselves, and that attempts at replication would be futile.

The PLOS journals have introduced the innovation of allowing comments to be placed directly on the journal article’s webpage, with their existence acknowledged on the article itself. Anyone can respond and participate in a post-publication peer review process that can go on for the life of the interest in a particular article. The next stage in furthering post-publication peer review is for such comments to be indexed, citable, and counted in traditional metrics as well as altmetrics. This would recognize citizen scientists’ contributions to cleaning up what appears to be a high rate of false positives and outright nonsense in the current literature.

PubMed Commons offers the opportunity to post comments on any of the over 23 million entries in PubMed, expanding the PLOS initiative to all journals, even those of the Nature Publishing Group. Currently, the only restriction is that anyone attempting to place a comment must have authored at least one of the 23,000,000+ entries in PubMed, even if only a letter to the editor. This represents progress.

But similar to the PLOS initiative, PubMed Commons will get more traction when it can provide conventional academic credit – countable citations – to contributors who identify and critique bad science. Currently, authors can get credit for putting bad science into the literature, but no one can get credit for helping to get it recognized as such.

So, the authors of this particular article have made indefensibly bad claims about having made substantial progress toward developing an inexpensive blood test for depression. It’s not unreasonable to assume their motive is to cultivate financial support from industry for further development. What’s a critic to do?

In this case, the science is bad enough, and the damage to the public’s and professionals’ perception of the state of the science of a ‘blood test for depression’ sufficient, that a retraction is warranted. Stay tuned – unless Nature Publishing Group requires a $1,000 payment for investigating whether an article warrants retraction.

Postscript: As I was finishing this post, I discovered that the journals published by the Modern Language Society require payment of a $3,000 membership fee to publish a letter to the editor in one of their journals. I guess they need to keep the discussion within the club.

Views expressed in this blog post are entirely those of the author and not necessarily those of PLOS or its staff.

Special thanks to Skeptical Cat.