A science-based medicine skeptic struggles with his as-yet medically unexplained pain and resists alternative quack treatments

Paul: “For three years I kept my faith that relief had to be just around the corner, but my disappointment is now as chronic as my pain. Hope has become a distraction.”


Chronic pain and tragic irony…


Paul Ingraham is an important figure in the science-based skepticism movement and played a large part in my becoming involved in it. He emailed me after a long spell without contact, wanting to explain why he had been out of touch: his life had been devastated by as-yet medically unexplained pain and other mysterious symptoms.

Paul modestly describes himself at his blog site as “a health writer in Vancouver, Canada, best known for my work debunking common myths about treating common pain problems on PainScience.com. I actually make a living doing that. On this blog, I just mess around. ~ Paul Ingraham (@painsci, Facebook).”

Some of Paul’s posts at his own blog site

massage · on fire · stretching

Paul’s Big Self-Help Tutorials for Pain Problems are solidly tied to the best peer-reviewed evidence.

Detailed, readable tutorials about common stubborn pain problems & injuries, like back pain or runner’s knee.

Many common painful problems are often misunderstood, misdiagnosed, and mistreated. Made for patients, but strong enough for professionals, these book-length tutorials are crammed with tips, tricks, and insights about what works, what doesn’t, and why. No miracle cures are for sale here — just sensible information, scientifically current, backed up by hundreds of free articles and a huge pain and injury science bibliography.

 


Paul offered me invaluable assistance and support when I began blogging at the prestigious Science-Based Medicine. See, for instance, my:

Systematic Review claims acupuncture as effective as antidepressants: Part 1: Checking the past literature

And

Is acupuncture as effective as antidepressants? Part 2. Blinding readers who try to get an answer

I have not blogged there consistently, because my topics don’t always fit. Whenever I do blog there, I learn a lot from the wealth of thoughtful comments I receive.

I have great respect for Science-Based Medicine’s authoritative, well-documented, and evidence-based analyses. I highly recommend the blog for those who are looking for sophistication delivered in a way that an intelligent layperson can understand.

What’s the difference between science-based medicine (SBM) and evidence-based medicine (EBM)?

I encounter some puzzlement every time I bring up this important distinction. Bloggers at SBM frequently make a distinction between science-based and evidence-based medicine. They offer careful analyses of unproven treatments like acupuncture and homeopathy. Proponents of these treatments increasingly sell them as evidence-based, citing randomized trials that do not involve an active comparison treatment. The illusion of efficacy is often created by the positive expectations and mysterious rituals with which these treatments are delivered. Comparison treatments in these studies often lack this boost, particularly when tested in unblinded comparisons.

The SBM bloggers like to point out that there are no plausible, tested scientific mechanisms by which these treatments might conceivably work. The name of the blog, Science-Based Medicine, calls attention to its higher standard for considering treatments efficacious: to count as science-based medicine, treatments have to be proven as effective as evidence-based active treatments, and have to have a mechanism beyond nonspecific placebo effects.

Paul Ingraham reappears from a disappearance.

Paul mysteriously disappeared for a while. Now he’s reemerged with a tale that is getting a lot of attention. He gave me permission to blog about excerpts. I enclose a link to the full story that I strongly recommend.


http://www.paulingraham.com/chronic-pain-tragic-irony.html

A decade ago I devoted myself to helping people with chronic pain, and now it’s time to face my ironic new reality: I have serious unexplained chronic pain myself. It may never stop, and I need to start learning to live with it rather than trying to fix it.

I have always been “prone” to aches and pains, and that’s why I became a massage therapist and then moved on to publishing PainScience.com. But that tendency was a pain puppy humping my leg compared to the Cerberus of suffering that’s mauling me now. I’ve graduated to the pain big leagues.

For three years I kept my faith that relief had to be just around the corner, but my disappointment is now as chronic as my pain. Hope has become a distraction. I’ve been like a blind man waiting for my sight to return instead of learning braille. It’s acceptance time.

Paul describes how his pain drove him into hiding.

… why I’ve become one of those irritating people who answers every invitation with a “maybe” and bails on half the things I commit to. I never know what I’m going to be able to cope with on a given day until it’s right in front of me.

He struggled to define the problem:

Mostly widespread soreness and joint pain like the early stages of the flu, a parade of agonizing hot spots that are always on the verge of breaking my spirit, and a lot of sickly fatigue. All of which is easily provoked by exercise.

But there was a dizzying array of other symptoms…

Any diagnosis would be simply a label, not an explanation.

Nothing turned up in a few phases of medical investigation in 2015 and 2016. My “MS hug” is not caused by MS. My thunderclap headaches are not brain bleeds. My tremors are not Parkinsonian. I am not deficient in vitamins B or D. There is no tumour lurking in my chest or skull, nor any markers of inflammation in my blood. My heart beats as steadily as an atomic clock, and my nerves conduct impulses like champs.

Paul was not seriously tempted by alternative and complementary medicine

I am not tempted to try alternative medicine. The best of alt-med is arguably not alternative at all — e.g. nutrition, mindfulness, relaxation, massage, and so on — and the rest of what alt-med offers ranges from dubious at best to insane bollocks at the worst. You can’t fool a magician with his own tricks, and you can’t give false hope to an alt-med apostate like me: I’ve seen how the sausage is made, and I feel no surge of false hope when someone tells me (and they have) “it’s all coming from your jaw, you should see this guy in Seattle, he’s a Level 17 TMJ Epic Master, namaste.” Most of what sounds promising to the layperson just sounds like a line of bull to me.

It is fascinating how many people clearly think Paul’s story is almost identical to their own.

All these seemingly “identical” cases have got me pondering: syndromes consist of non-specific symptoms by definition, and batches of such symptoms will always seem more similar than they actually are… because blurry pictures look more alike than sharp and clear ones. Non-specific symptoms are generalized biological reactions to adversity. Anxiety can cause any of them, and so can cancer. Any complex cases without pathognomonic (specific, defining) symptoms are bound to have extensive overlap of their non-specific symptoms.

There are many ways to be sick, and relatively few ways to feel bad.

Do check out his full blog post. http://www.paulingraham.com/chronic-pain-tragic-irony.html

Power pose: I. Demonstrating that replication initiatives won’t salvage the trustworthiness of psychology

An ambitious multisite initiative showcases how inefficient and ineffective replication is in correcting bad science.

 


Bad publication practices keep good scientists unnecessarily busy, as in replicability projects. – Bjoern Brembs

Psychologists need to reconsider the pitfalls of an exclusive reliance on this strategy to improve laypersons’ trust in their field.

Despite the consistency of null findings across seven attempted replications of the original power pose study, editorial commentaries in Comprehensive Results in Social Psychology left some claims intact and called for further research.

Editorial commentaries on the seven null studies set the stage for continued marketing of self-help products, mainly to women, grounded in junk psychological pseudoscience.

Watch for repackaging and rebranding in next year’s new and improved model. Marketing campaigns will undoubtedly include direct quotes from the commentaries as endorsements.

We need to re-examine the basic assumptions behind replication initiatives. Currently, these efforts suffer from prioritizing the reputations and egos of those misusing psychological science to market junk and quack claims over protecting the consumers whom these gurus target.

In the absence of a critical response from within the profession to these persons prominently identifying themselves as psychologists, it is inevitable that the void will be filled by those outside the field who have no investment in preserving the image of psychology research.

In the case of power posing, watchdog critics might be recruited from:

Consumer advocates concerned about just another effort to defraud consumers.

Science-based skeptics who see in the marketing of power posing familiar quackery, in the same category as hawkers using pseudoscience to promote homeopathy, acupuncture, and detox supplements.

Feminists who decry the message that women need to get some balls (testosterone) if they want to compete with men and overcome gender disparities in pay. Feminists should be further outraged by the marketing of junk science to vulnerable women with an ugly message of self-blame: It is so easy to meet and overcome social inequalities that they have only themselves to blame if they do not do so by power posing.

As reported in Comprehensive Results in Social Psychology, a coordinated effort to examine the replicability of results reported in Psychological Science concerning power posing left the phenomenon a candidate for future research.

I will be blogging more about that later, but for now let’s look at a commentary from three of the more than 20 authors that reveals an inherent limitation of such ambitious initiatives in tackling the untrustworthiness of psychology.

Cesario J, Jonas KJ, Carney DR. CRSP special issue on power poses: what was the point and what did we learn? Comprehensive Results in Social Psychology. 2017.

 

Let’s start with the wrap up:

The very costly expense (in terms of time, money, and effort) required to chip away at published effects, needed to attain a “critical mass” of evidence given current publishing and statistical standards, is a highly inefficient use of resources in psychological science. Of course, science is to advance incrementally, but it should do so efficiently if possible. One cannot help but wonder whether the field would look different today had peer-reviewed preregistration been widely implemented a decade ago.

We should consider the first sentence with some recognition of just how much untrustworthy psychological science is out there. Must we mobilize similar resources in every instance, or can we develop some criteria to decide what is worthy of replication? As I have argued previously, there are excellent reasons for deciding that the original power pose study could not contribute a credible effect size to the literature. There is no there there to replicate.

The authors assume preregistration of the power pose study would have solved these problems. In clinical and health psychology, long-standing recommendations to preregister trials are acquiring new urgency. But the record shows that motivated researchers routinely ignore requirements to preregister and depart from the primary outcomes and analytic plans to which they have committed themselves. Editors and journals let them get away with it.

What measures do the replicationados have to ensure the same things are not being said about bad psychological science a decade from now? Rather than urging uniform adoption and enforcement of preregistration, the replicationados urged the gentle nudge of badges for studies that are preregistered.

Just prior to the last passage:

Moreover, it is obvious that the researchers contributing to this special issue framed their research as a productive and generative enterprise, not one designed to destroy or undermine past research. We are compelled to make this point given the tendency for researchers to react to failed replications by maligning the intentions or integrity of those researchers who fail to support past research, as though the desires of the researchers are fully responsible for the outcome of the research.

There are multiple reasons not to give the authors of the power pose paper such a break. There is abundant evidence of undeclared conflicts of interest in the huge financial rewards for publishing false and outrageous claims. Psychological Science allowed the abstract of the original paper to leave out any embarrassing details of the study design and results and to end with a marketing slogan:

That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.

 Then the Association for Psychological Science gave a boost to the marketing of this junk science with a Rising Star Award to two of the authors of this paper for having “already made great advancements in science.”

As seen in this special issue of Comprehensive Results in Social Psychology, the replicationados share responsibility with Psychological Science and APS for keeping this system of perverse incentives intact. At least they are guaranteeing plenty of junk science in the pipeline to replicate.

But in the next installment on power posing I will raise the question of whether early career researchers are hurting their prospects for advancement by getting involved in such efforts.

How many replicationados does it take to change a lightbulb? Who knows, but a multisite initiative can be combined with a Bayesian meta-analysis to give a tentative and unsatisfying answer.

Coyne JC. Replication initiatives will not salvage the trustworthiness of psychology. BMC Psychology. 2016 May 31;4(1):28.

The following can be interpreted as a declaration of financial interests or a sales pitch:

I will soon be offering e-books providing skeptical looks at positive psychology and mindfulness, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.

 

“ACT: The best thing [for pain] since sliced bread or the Emperor’s new clothes?”

Reflections on the debate with David Gillanders about Acceptance and Commitment Therapy at the British Pain Society, Glasgow, September 15, 2017



David Gillanders and I held our debate “ACT: best thing since sliced bread or the Emperor’s new clothes?” at the British Pain Society meeting on September 15, 2017 in Glasgow. We will eventually make our slides and a digital recording of the debate available.

I enjoyed hanging out with David Gillanders. He is a great guy who talks the talk, but also walks the walk. He lives ACT as a life philosophy. He was an ACT trainer speaking before a sympathetic audience, many of whom had been trained by him.

Some reflections from a few days later.

I was surprised how much Acceptance and Commitment Therapy (along with #mindfulness) has taken over UK pain services. A pre-debate poll showed most of the audience came convinced that, indeed, ACT was the best thing since sliced bread.

I was confident that my skepticism was firmly rooted in the evidence. I don’t think there is debate about that. David Gillanders agreed that higher quality studies were needed.

But in the end, even if I did not convert many, I came away quite pleased with the debate.

Standards for evaluating the evidence for ACT for pain

I recently wrote that ACT may have moved into a post-evidence phase, with its chief proponents switching from citing evidence to making claims about love, suffering, and the meaning of life. Seriously.

Steve Hayes prompted me on Twitter to take a closer look at the most recent evidence for ACT. As reported in an earlier blog, I took a close look. I was not impressed: proponents of ACT are not making much progress in developing evidence anywhere near as strong as their claims. We need a lot less ACT research that adds no quality evidence despite ACT being promoted enthusiastically as if it does. We need more sobriety from the promoters of ACT, particularly those in academia, like Steve Hayes and Kelly Wilson, who know something about how to evaluate evidence. They should not patronize workshop goers with fanciful claims.

David Gillanders talked a lot about the philosophy and values that are expressed in ACT, but he also made claims about its research base, echoing the claims made by Steve Hayes and other prominent ACT promoters.

Standards for evaluating research exist independent of any discussion of ACT

There are standards for interpreting clinical trials and integrating their results in meta-analyses that exist independent of the ACT literature. It is not a good idea to challenge these standards in the context of defending ACT against unfavorable evaluations, although that is exactly how Hayes and his colleagues often respond. I will get around to blogging about the most recent example of this.

Atkins PW, Ciarrochi J, Gaudiano BA, Bricker JB, Donald J, Rovner G, Smout M, Livheim F, Lundgren T, Hayes SC. Departing from the essential features of a high quality systematic review of psychotherapy: A response to Öst (2014) and recommendations for improvement. Behaviour Research and Therapy. 2017 May 29.

Within-group (pre-post) differences in outcome. David Gillanders echoed Hayes in using within-group effect sizes to describe the effectiveness of ACT. Results presented in this way may look impressive, but they are exaggerated when compared to results obtained between groups. I am not making that up. Changes within the group of patients who received ACT reflect the specific effects of ACT plus whatever nonspecific factors were operating. That is why we need an appropriate comparison-control group to examine between-group differences, which are always more modest than the within-group effects alone.
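The inflation is easy to demonstrate with a toy simulation (the numbers here are made up purely for illustration, not taken from any ACT trial): both groups improve because of nonspecific factors, but only the between-group comparison isolates the treatment’s specific effect.

```python
import random
import statistics

random.seed(42)

def simulate(n=200):
    """Simulated change scores: both groups improve 5 points from
    nonspecific factors (expectations, attention, regression to the
    mean); the treatment adds only 2 points of specific benefit."""
    nonspecific, specific, sd = 5.0, 2.0, 10.0
    treated = [nonspecific + specific + random.gauss(0, sd) for _ in range(n)]
    control = [nonspecific + random.gauss(0, sd) for _ in range(n)]

    # Within-group effect size for the treated group: mean change / SD of change
    d_within = statistics.mean(treated) / statistics.stdev(treated)

    # Between-group effect size: difference in mean change / pooled SD
    pooled_sd = ((statistics.variance(treated) + statistics.variance(control)) / 2) ** 0.5
    d_between = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    return d_within, d_between

d_within, d_between = simulate()
print(f"within-group d:  {d_within:.2f}")   # inflated by nonspecific change
print(f"between-group d: {d_between:.2f}")  # the specific effect of treatment
```

The within-group figure here is several times larger than the between-group one, even though the treatment’s specific contribution is small.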

Compared to what? Most randomized trials of ACT involve a wait list, no treatment, or ill-described standard care (which often represents no treatment). Such comparisons are methodologically weak, especially when patients and providers know what is going on (a so-called unblinded trial) and when outcomes are subjective self-report measures.

A clever study in the New England Journal of Medicine showed that with such subjective self-report measures, one cannot distinguish between a proven effective inhaled medication for asthma, an inert substance simply inhaled, and sham acupuncture. In contrast, objective measures of breathing clearly distinguish the medication from the comparison-control conditions.

So, it is not an exaggeration to say that most evaluations of ACT are conducted under circumstances in which even sham acupuncture or homeopathy would look effective.

Not superior to other treatments. There are no trials comparing ACT to a credible active treatment in which ACT proves superior, either for pain or other clinical problems. So, we are left saying ACT is better than doing nothing, at least in trials where any nonspecific effects are concentrated among the patients receiving ACT.

Rampant investigator bias. A lot of trials of ACT are conducted by researchers having an investment in showing that ACT is effective. That is a conflict of interest. Sometimes it is called investigator allegiance, or a promoter or originator bias.

Regardless, when drugs are being evaluated in a clinical trial, it is recognized that there will be a bias toward the drug favored by the manufacturer conducting the trial. It is increasingly recognized that meta-analyses conducted by promoters should also be viewed with extra skepticism, and that trials conducted by researchers having such conflicts of interest should be considered separately to see if they produced exaggerated estimates of effectiveness.

ACT desperately needs randomized trials conducted by researchers who don’t have a dog in the fight, who lack the motivation to torture findings to give positive results when they are simply not present. There’s a strong confirmation bias in current ACT trials, with promoter/researchers embarrassing themselves in their maneuvers to show strong, positive effects when only weak or null findings are available. I have documented [1, 2] how this trend started with Steve Hayes’s study with Patricia Bach of the effects of brief ACT on re-hospitalization of inpatients, from which two patients were dropped. One patient had died by suicide and another was in jail, so they could not be rehospitalized, and they were dropped from the analyses. The deed could only be noticed by comparing the published paper with Patricia Bach’s dissertation. It allowed an otherwise nonsignificant finding in a small trial to become significant.

Trials that are too small to matter. A lot of ACT trials have too few patients to produce a reliable, generalizable effect size. Lots of us in situations far removed from ACT trials have justified the rule of thumb that we should distrust effect sizes from trials having fewer than 35 patients per treatment or comparison cell. Even this standard is quite liberal. Even if a moderate effect would be significant in a larger trial, there is less than a 50% probability it will be detected in a trial this small. To be significant with such a small sample size, differences between treatments have to be large, and they are probably either due to chance or to something dodgy that the investigators did.
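The arithmetic behind that rule of thumb can be checked with a quick power calculation. This sketch uses the normal approximation to the two-sample t-test (exact t-based power is slightly lower at these sample sizes), taking d = 0.5 as the conventional “moderate” effect:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    for standardized effect size d, via the normal approximation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# A "moderate" effect (d = 0.5) at sample sizes typical of small trials:
for n in (15, 25, 35, 64):
    print(f"n = {n:2d} per group -> power ~ {power_two_sample(0.5, n):.2f}")
```

Below 35 per cell, power for a moderate effect sits under (or barely at) a coin flip; roughly 64 per group are needed for the conventional 80%.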

Many claims for the effectiveness of ACT for particular clinical problems come from trials too small to generate a reliable effect size. I invite readers to undertake the simple exercise of looking at the sample sizes in any study cited as support for the effectiveness of ACT. If you exclude such small studies, there is not much research left to talk about.

Too much flexibility in what researchers report in publications. Many trials of ACT involve researchers administering a whole battery of outcome measures and then emphasizing those that make ACT look best, while downplaying or never mentioning the rest. Similarly, many trials of ACT deemphasize whether the time × treatment interaction is significant, simply ignoring it if it is not and focusing on the within-group differences. I know, we’re getting a bit technical here. But another way of saying this is that many trials of ACT give researchers too much latitude in choosing which variables to report and which statistics are used to evaluate them.

Under similar circumstances, Simmons, Nelson, and Simonsohn showed that listening to the Beatles song When I’m Sixty-Four left undergraduates 18 months younger than when they listened to the song Kalimba. Of course, the researchers knew damn well that the Beatles song didn’t have this effect, but they indicated they were doing what lots of investigators do to get significant results, what they call p-hacking.

Many randomized trials of ACT are conducted with the same researcher flexibility that would allow a demonstration that listening to a Beatles song drops the age of undergraduates 18 months.
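The outcome-switching version of this flexibility has simple arithmetic behind it. Assuming (for illustration) independent outcome measures each tested at α = .05, the probability that a completely ineffective treatment yields at least one “significant” result is 1 − 0.95^k:

```python
# False-positive arithmetic for a treatment that truly does nothing:
# each of k outcomes is tested at alpha = .05, and the researchers
# are free to report whichever one comes up "significant".
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcome measures -> P(at least one false positive) = {p_any:.2f}")
```

With ten outcomes, the chance of a reportable “effect” for a useless treatment is about 40%; with twenty, it is closer to two in three. Correlated outcomes lower these figures somewhat, but the direction of the problem is the same.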

Many of the problems with ACT research could be avoided if researchers were required to publish ahead of time their primary outcome variables and plans for analyzing them. Such preregistration is increasingly recognized as best research practice, including by NIMH. There is no excuse not to do it.

My take away message?

ACT gurus have been able to dodge the need to develop quality data to support their claims that their treatment is effective (and their occasional claim that it is more effective than other approaches). A number of them are university-based academics who have ample resources to develop better-quality evidence.

Workshop and weekend retreat attendees are convinced that ACT works on the strength of experiential learning and a lot of theoretical mumbo jumbo.

But the ACT promoters also make a lot of dodgy claims that there is strong evidence that the specific ingredients of ACT, its techniques and values, account for the power of ACT. Some of the ACT gurus, Steve Hayes and Kelly Wilson at least, are academics and should limit their claims of being “evidence-based” to what is supported by strong, quality evidence. They don’t. I think they are being irresponsible in throwing “evidence-based” in with all the hype.

What should I do as an evidence-based skeptic wanting to improve the conversation about ACT?

Earlier in my career, I spent six years in live supervision with some world-renowned therapists behind the one-way mirror, including John Weakland, Paul Watzlawick, and Dick Fisch. I gave workshops worldwide on how to do brief strategic therapies with individuals, couples, and families. I chose not to continue because (1) I didn’t like the pressure for drama and exciting interventions when I interviewed patients in front of large groups; (2) even when there was a logic and appearance of effectiveness to what I did, I didn’t believe it could be manualized; and (3) my group didn’t have the resources to conduct proper outcome studies.

But I got it that workshop attendees like drama, exciting interventions, and emotional experiences. They go to trainings expecting to be entertained, as much as informed. I don’t think I can change that.

Many therapists have not had the training to evaluate claims about research, even if they accept that being backed by research findings is important. They depend on presenters to tell them about research and tend to trust what they say. Even therapists who know something about research can lose critical judgment when caught up in the emotionality provided by some training experiences. Experiential learning can be powerful, even when it is used to promote interventions that are not supported by evidence.

I can’t change the training of therapists nor the culture of workshops and training experiences. But I can reach out to therapists who want to develop skills to evaluate research for themselves. I think some of the things that I point out in this blog post are quite teachable as things to look for.

I hope I can connect with therapists who want to become citizen scientists who are skeptical about what they hear and want to become equipped to think for themselves and look for effective resources when they don’t know how to interpret claims.

This is certainly not all therapists and may only be a minority. But such opinion leaders can be champions for the others in facilitating intelligent discussions of research concerning the effectiveness of psychotherapies. And they can prepare their colleagues to appreciate that most change in psychotherapy is not as dramatic or immediate as seen in therapy workshops.


 

Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.



The press release attached at the bottom of this post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment with a quack explanation, delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that Lightning Process could not be advertised as a treatment. [ 1 ]

The Lightning Process is billed as mixing elements from osteopathy, life coaching, and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American who has decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial was time-consuming and involved loss of the opportunity to obtain less risky treatment, or simply to avoid the inconvenience and burden of a treatment for which there was no scientific basis to expect it would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not disclosed in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment and rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability that the treatment would be received for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement, but those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This stacks the deck in any evaluation of the Lightning Process and inflates differences from the patients who did not receive it.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well [7]. Such coercion can interfere with the patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patient activists and patient family members who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as being a disturbed minority of patients and patients’ family members. Smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, the patients have been joined in their concerns by non-patient scientists and clinicians.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing

Notes

[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for Neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP: Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment (an albuterol inhaler) for asthma when judged with subjective measures, but a large superiority for the established medical treatment was obtained with objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat, who provided me with an advance copy of the press release from the Science Media Centre.

No, JAMA Internal Medicine, acupuncture should not be considered an option for preventing migraines.

…And no further research is needed.

These 3 excellent articles provide some background for my blog, but their titles alone are worth leading with:

Acupuncture is astrology with needles.

Acupuncture: 3000 studies and more research is not needed.

Acupuncture is a theatrical placebo.

Each of these articles helps highlight an important distinction, discussed here, between evidence-based medicine and science-based medicine perspectives on acupuncture.

A recent article in the prestigious JAMA Internal Medicine concluded:

“Acupuncture should be considered as one option for migraine prophylaxis in light of our findings.”

The currently freely accessible article can be found here.

A pay-walled editorial by Dr. Amy Gelfand can be found here.

The trial was registered long after patient recruitment had started, and the trial protocol can be found here.

[Aside: What is the value of registering a trial long after recruitment commenced? Do journals have a responsibility to acknowledge that a trial registration they link to occurred after the trial was underway? Is trial registration another ritual, like acupuncture?]

Uncritical reports of the results of the trial as interpreted by the authors echoed through both the lay and physician-aimed media.

news coverage

Coverage by Reuters was somewhat more interesting than the rest. The trial authors’ claim that acupuncture for preventing migraines was ready for prime time was paired with some reservations expressed in the accompanying editorial.

reuters coverage

“Placebo response is strong in migraine treatment studies, and it is possible that the Deqi sensation . . . that was elicited in the true acupuncture group could have led to a higher degree of placebo response because there was no attempt made to elicit the Deqi sensation in the sham acupuncture group,” Dr. Amy Gelfand writes in an accompanying editorial.

Come on, Dr. Gelfand: if you checked the article, you would have seen that Deqi was not measured. If you checked the literature, even proponents concede that Deqi remains a vague, highly subjective judgment, in this case being made by an unblinded acupuncturist. Basically, the acupuncturist persisted in whatever was being done until a sensation of soreness, numbness, distention, or radiating seemed to be elicited from the patient. What part of a subjective response to acupuncture, with or without Deqi, would you consider NOT a placebo response?

Dr. Gelfand also revealed some reasons why she might bother to write an editorial for a treatment with an incoherent, implausible, nonscientific rationale.

“When I’m a researcher, placebo response is kind of a troublesome thing, because it makes it difficult to separate signal from noise,” she said. But when she’s thinking as a doctor about the patient in front of her, placebo response is welcome, Gelfand said.

“You know, what I really want is my patient to feel better, and to be improved and not be in pain. So, as long as something is safe, even if it’s working through a placebo mechanism, it may still be something that some patients might want to use,” she said.

Let’s contemplate the implications of this. This editorial in JAMA Internal Medicine accompanies an article in which the trial authors suggest acupuncture is ready to become a standard treatment for migraine. There is nothing in the article that suggests the unscientific basis of acupuncture has been addressed, only that it might have achieved a placebo response. Is Dr. Gelfand suggesting that would be sufficient, despite the problems in the trial? What if that became the standard for recommending medications and medical procedures?

With increasing success in getting acupuncture and other so-called “integrative medicine” approaches ensconced in cancer centers and reimbursed by insurance, we will be facing again and again some of the issues that prompted this blog post. Is acupuncture doing no obvious harm a sufficient reason for reimbursing it? Trials like this one can be cited in support of reimbursement.

The JAMA Internal Medicine report of an RCT of acupuncture for preventing migraines

Participants were randomly assigned to one of three groups: true acupuncture, sham acupuncture, or a waiting-list control group.

Participants in the true acupuncture and sham acupuncture groups received treatment 5 days per week for 4 weeks for a total of 20 sessions.

Participants in the waiting-list group did not receive acupuncture but were informed that 20 sessions of acupuncture would be provided free of charge at the end of the trial.

As the editorial comment noted, this is incredibly intensive treatment that burdens patients, who must come in five days a week for four weeks. Yet the effects were quite modest in terms of the number of migraine attacks, even if statistically significant:

The mean (SD) change in frequency of migraine attacks differed significantly among the 3 groups at 16 weeks after randomization (P < .001); the mean (SD) frequency of attacks decreased in the true acupuncture group by 3.2 (2.1), in the sham acupuncture group by 2.1 (2.5), and the waiting-list group by 1.4 (2.5); a greater reduction was observed in the true acupuncture than in the sham acupuncture group (difference of 1.1 attacks; 95%CI, 0.4-1.9; P = .002) and in the true acupuncture vs waiting-list group (difference of 1.8 attacks; 95%CI, 1.1-2.5; P < .001). Sham acupuncture was not statistically different from the waiting-list group (difference of 0.7 attacks; 95%CI, −0.1 to 1.4; P = .07).
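To put those numbers in perspective, here is a rough back-of-envelope check in Python. The per-group sample size below is an assumption for illustration (the quoted excerpt does not report group sizes), so treat this only as a sketch of how the reported difference and confidence interval relate to the means and SDs:

```python
import math

# Reported mean (SD) decreases in monthly migraine attacks at 16 weeks
true_mean, true_sd = 3.2, 2.1   # true acupuncture
sham_mean, sham_sd = 2.1, 2.5   # sham acupuncture

# Per-group sample size: an ASSUMED illustrative value, not from the paper
n = 80

diff = true_mean - sham_mean
se = math.sqrt(true_sd**2 / n + sham_sd**2 / n)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.1f} attacks, 95% CI ({ci_low:.1f} to {ci_high:.1f})")
```

With n = 80 per group this yields a difference of 1.1 attacks with a CI of roughly 0.4 to 1.8, in the neighborhood of the published 0.4 to 1.9: a difference of about one attack, bought with twenty clinic visits.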

There were no group-by-time differences in use of medication for migraine. Receiving “true” versus sham acupuncture did not matter.

Four acupoints were used per treatment. All patients received acupuncture on 2 obligatory points, including GB20 and GB8. The 2 other points were chosen according to the syndrome differentiation of meridians in the headache region. The potential acupoints included SJ5, GB34, BL60, SI3, LI4, ST44, LR3, and GB40. The use of additional acupoints other than the prescribed ones was not allowed. We chose the prescriptions as a result of a systematic review of ancient and modern literature, consensus meetings with clinical experts, and experience from our previous study.

Note that the “headache region” is not the region of the head where headaches occur, and there is no scientific basis for the selection of these points. Since when does such a stir fry of ancient and contemporary wisdom, consensus meetings with experts, and the clinical experience of the investigators become an acceptable justification for the mechanism studied in a clinical trial published in a prestigious American medical journal?

What was sham about the sham acupuncture (SA) treatment?

The number of needles, electric stimulation, and duration of treatment in the SA group were identical to those in the TA group, except that no attempt was made to induce the Deqi sensation. Four nonpoints were chosen according to our previous studies.

From the trial protocol, we learn that the effort to induce the Deqi sensation involves the acupuncturist twirling and rotating the needles.

In a manner that can easily escape notice, the authors indicate that the acupuncture was administered with electrostimulation.

In the methods section, they abruptly state:

Electrostimulation generates an analgesic effect, as manual acupuncture does.21

I wonder if the reviewers or the editorialist checked this reference. It is to an article that provides the insight that “meridians” (the 365 designated acupuncture points) are identified on a particular patient by

feeling for 12 organ-specific pulses located on the wrists and with cosmological interpretations including a representation of five elements: wood, water, metal, earth, and fire.

The authors further state that they undertook a program of research to counter the perception in the United States in the 1970s that acupuncture was quackery and even “Oriental hypnosis.” Their article describes some of the experiments they conducted, including one in which the benefits of a rabbit having received finger-pressure acupuncture were transferred to another rabbit via a transfusion of cerebrospinal fluid.

In discussing the results of the present study in JAMA Internal Medicine, the authors again comment in passing:

We added electrostimulation to manual acupuncture because manual acupuncture requires more time until it reaches a similar analgesic effect as electrical stimulation.27 Previous studies have reported that electrostimulation is better than manual acupuncture in relieving pain27-30 and could induce a longer lasting effect.28

The citations are to methodologically poor laboratory studies in which dramatic results are often obtained with very small cell sizes (n = 10).

Can we dispense with the myth that the acupuncture provided in this study is an extension of traditional Chinese needle therapy?

It is high time that we dispense with the notion that acupuncture applied to migraines and other ailments represents a traditional Chinese medicine that is therefore not subject to any critique of its plausibility and status as a science-based treatment. Even if we dispense with that idea, we still have to confront how unscientific and nonsensical the rationale is for the highly ritualized treatment provided in this study.

An excellent article by Ben Kavoussi offers a carefully documented debunking of:

 reformed and “sanitized” acupuncture and the makeshift theoretical framework of Maoist China that have flourished in the West as “Traditional,” “Chinese,” “Oriental,” and most recently as “Asian” medicine.

Kavoussi, who studied to become an acupuncturist, notes that:

Traditional theories for selecting points and means of stimulation are not based on an empirical rationale, but on ancient cosmology, astrology and mythology. These theories significantly resemble those that underlined European and Islamic astrological medicine and bloodletting in the Middle-Ages. In addition, the alleged predominance of acupuncture amongst the scholarly medical traditions of China is not supported by evidence, given that for most of China’s long medical history, needling, bloodletting and cautery were largely practiced by itinerant and illiterate folk-healers, and frowned upon by the learned physicians who favored the use of pharmacopoeia.

In the early 1930s a Chinese pediatrician by the name of Cheng Dan’an (承淡安, 1899-1957) proposed that needling therapy should be resurrected because its actions could potentially be explained by neurology. He therefore repositioned the points towards nerve pathways and away from blood vessels-where they were previously used for bloodletting. His reform also included replacing coarse needles with the filiform ones in use today.38 Reformed acupuncture gained further interest through the revolutionary committees in the People’s Republic of China in the 1950s and 1960s along with a careful selection of other traditional, folkloric and empirical modalities that were added to scientific medicine to create a makeshift medical system that could meet the dire public health and political needs of Maoist China while fitting the principles of Marxist dialectics. In deconstructing the events of that period, Kim Taylor in her remarkable book on Chinese medicine in early communist China, explains that this makeshift system has achieved the scale of promotion it did because it fitted in, sometimes in an almost accidental fashion, with the ideals of the Communist Revolution. As a result, by the 1960s acupuncture had passed from a marginal practice to an essential and high-profile part of the national health-care system under the Chinese Communist Party, who, as Kim Taylor argues, had laid the foundation for the institutionalized and standardized format of modern Chinese medicine and acupuncture found in China and abroad today.39 This modern construct was also a part of the training of the “barefoot doctors,” meaning peasants with an intensive three- to six-month medical and paramedical training, who worked in rural areas during the nationwide healthcare disarray of the Cultural Revolution era.40 They provided basic health care, immunizations, birth control and health education, and organized sanitation campaigns. 
Chairman Mao believed, however, that ancient natural philosophies that underlined these therapies represented a spontaneous and naive dialectical worldview based on social and historical conditions of their time and should be replaced by modern science.41 It is also reported that he did not use acupuncture and Chinese medicine for his own ailments.42

What is a suitable comparison/control group for a theatrical administration of a placebo?

A randomized double-blind crossover pilot study published in NEJM highlights some of the problems arising from poorly chosen control groups. The study compared an inhaled albuterol bronchodilator to one of three control conditions: a placebo inhaler, sham acupuncture, or no intervention. Subjective self-report measures of perceived improvement in asthma symptoms and perceived credibility of the treatments revealed only that the no-intervention condition was inferior to the active treatment and the two placebo conditions; no difference was found between the active treatment and the placebo conditions. However, strong differences were found between the active treatment and the three comparison/control conditions on an objective measure of physiological response: improvement in forced expiratory volume (FEV1), measured with spirometry.

One take-away lesson is that we should be careful about accepting subjective self-report measures when objective measures are available. One objective measure in the present study was the taking of medication for migraines, and there were no differences between groups. This point is missed in both the target article in JAMA Internal Medicine and the accompanying editorial.

The editorial does comment on the acupuncturists being unblinded: they clearly knew when they were providing the preferred “true” acupuncture and when they were providing sham. They had instructions to avoid creating a Deqi sensation in the sham group, but some latitude in working until it was achieved in the “true” group. Unblinded treatment providers are always a serious risk of bias in clinical trials, but here we have a trial in which the primary outcomes are subjective, the scientific status of Deqi is dubious, and the providers might be seen as highly motivated to promote the “true” treatment.

I’m not sure why the editorialist was not stopped in her tracks by the unblinded acupuncturists, or for that matter why the journal published this article. But let’s ponder the difficulties of coming up with a suitable comparison/control group for what is, until proven otherwise, a theatrical and highly ritualized placebo. If a treatment has no scientifically valid crucial ingredient, how do we construct a comparison/control group that differs only in the absence of the active ingredient but is otherwise equivalent?

There is a long history of futile efforts to apply sham acupuncture, defined by what practitioners consider the inappropriate meridians. An accumulation of failures to distinguish such sham from “true” acupuncture in clinical trials has led to arguments that the distinction may not be valid: the efficacy of acupuncture may depend only on the procedure, not on the choice of a correct meridian. Other studies would seem to show some advantage for the active or “true” treatment. These are generally clinical trials with high risk of bias, especially the inability to blind practitioners as to which treatment they are providing.

There have been some clever efforts to develop sham acupuncture techniques that can fool even experienced practitioners. A recent PLOS One article tested needles that collapse into themselves.

Up to 68% of patients and 83% of acupuncturists correctly identified the treatment, but for patients the distribution was not far from 50/50. Also, there was a significant interaction between actual or perceived treatment and the experience of de qi (p = 0.027), suggesting that the experience of de qi and possible non-verbal clues contributed to correct identification of the treatment. Yet, of the patients who perceived the treatment as active or placebo, 50% and 23%, respectively, reported de qi. Patients’ acute pain levels did not influence the perceived treatment. In conclusion, acupuncture treatment was not fully double-blinded which is similar to observations in pharmacological studies. Still, the non-penetrating needle is the only needle that allows some degree of practitioner blinding. The study raises questions about alternatives to double-blind randomized clinical trials in the assessment of acupuncture treatment.

This PLOS One study is supplemented by a recent review in PLOS One, Placebo Devices as Effective Control Methods in Acupuncture Clinical Trials:

Thirty-six studies were included for qualitative analysis while 14 were in the meta-analysis. The meta-analysis does not support the notion of either the Streitberger or the Park Device being inert control interventions while none of the studies involving the Takakura Device was included in the meta-analysis. Sixteen studies reported the occurrence of adverse events, with no significant difference between verum and placebo acupuncture. Author-reported blinding credibility showed that participant blinding was successful in most cases; however, when blinding index was calculated, only one study, which utilised the Park Device, seemed to have an ideal blinding scenario. Although the blinding index could not be calculated for the Takakura Device, it was the only device reported to enable practitioner blinding. There are limitations with each of the placebo devices and more rigorous studies are needed to further evaluate their effects and blinding credibility.
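For readers who have not met a “blinding index” before: the most commonly used one, due to Bang and colleagues, is computed separately for each trial arm and is essentially the proportion of correct guesses minus the proportion of incorrect guesses. A minimal sketch of the simplified version follows; the patient counts are illustrative assumptions, not figures from either PLOS One paper:

```python
def bang_blinding_index(n_correct, n_incorrect, n_dont_know):
    """Simplified Bang blinding index for a single treatment arm.

    Proportion of correct guesses minus proportion of incorrect
    guesses: ranges from -1 (all wrong) to 1 (all correct), with 0
    meaning guessing at chance (the ideal for a blinded trial).
    'Don't know' responses dilute the index toward 0.
    """
    n = n_correct + n_incorrect + n_dont_know
    return (n_correct - n_incorrect) / n

# Illustrative counts only (NOT taken from either PLOS One paper):
# 68 of 100 patients in one arm guess their assignment correctly.
print(bang_blinding_index(68, 32, 0))   # 0.36 -> blinding partly broken
print(bang_blinding_index(50, 50, 0))   # 0.0  -> consistent with chance
```

An index near zero in every arm is what an “ideal blinding scenario” means in the quoted review; values well above zero indicate that patients or practitioners are seeing through the sham.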

Really, must we await better technology that more successfully fools acupuncturists and their patients as to whether the skin is actually being penetrated?

Results of the present study in JAMA Internal Medicine are seemingly contradicted by the results of a large German trial that found:

Results Between baseline and weeks 9 to 12, the mean (SD) number of days with headache of moderate or severe intensity decreased by 2.2 (2.7) days from a baseline of 5.2 (2.5) days in the acupuncture group, compared with a decrease of 2.2 (2.7) days from a baseline of 5.0 (2.4) days in the sham acupuncture group, and by 0.8 (2.0) days from a baseline of 5.4 (3.0) days in the waiting list group. No difference was detected between the acupuncture and the sham acupuncture groups (0.0 days, 95% confidence interval, −0.7 to 0.7 days; P = .96) while there was a difference between the acupuncture group compared with the waiting list group (1.4 days; 95% confidence interval, 0.8-2.1 days; P<.001). The proportion of responders (reduction in headache days by at least 50%) was 51% in the acupuncture group, 53% in the sham acupuncture group, and 15% in the waiting list group.

Conclusion Acupuncture was no more effective than sham acupuncture in reducing migraine headaches although both interventions were more effective than a waiting list control.
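Putting those responder rates side by side makes the point starkly: “true” acupuncture gains nothing over sham, while both handily beat doing nothing. A quick sketch of the arithmetic, using only the percentages quoted above:

```python
# Responder rates quoted from the German trial (at least a 50%
# reduction in headache days)
acupuncture, sham, waiting_list = 0.51, 0.53, 0.15

# Absolute differences in responder rates
print(f"acupuncture vs sham:         {acupuncture - sham:+.2f}")
print(f"acupuncture vs waiting list: {acupuncture - waiting_list:+.2f}")

# Number needed to treat versus waiting list (reciprocal of the
# absolute difference): about 3 patients per additional responder
print(f"NNT vs waiting list: {1 / (acupuncture - waiting_list):.1f}")
```

The ritual of needling, real or sham, accounts for essentially all of the apparent benefit over no treatment.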

I welcome someone with more time on their hands to compare and contrast the results of these two studies and decide which one has more credibility.

Maybe we should step back and ask why anyone cares about such questions when there is such doubt that a plausible scientific mechanism is in play.

Time for JAMA Internal Medicine to come clean

The JAMA Internal Medicine article on acupuncture for prophylaxis of migraines is yet another example of a publication for which revelation of earlier drafts, reviewer critiques, and author responses would be enlightening. Just what standard are the authors being held to? What issues were raised in the review process? Beyond resolving crucial limitations like blinding of acupuncturists, under what conditions would the journal conclude that studies of acupuncture in general are too scientifically unsound and medically irrelevant to warrant publication in a prestigious JAMA journal?

Alternatively, is the journal willing to go on record that it is sufficient to establish that patients are satisfied with a pain treatment in terms of self-reported subjective experiences? Could we then simply close the issue of whether a plausible scientific mechanism is involved where its existence can be seriously doubted? If so, why stop with evaluations that take subjective pain or days without pain as the primary outcome?

We must question the wisdom of JAMA Internal Medicine in inviting Dr. Amy Gelfand for editorial comment. She is apparently willing, as a clinician, to accept demonstration of a placebo response as sufficient. She is also affiliated with the University of California, San Francisco Headache Center, which offers “alternative medicine, such as acupuncture, herbs, massage and meditation for treating headaches.” Endorsement of acupuncture as effective in a prestigious journal becomes part of the evidence considered for its reimbursement. I think there are enough editorial commentators out there without such conflicts of interest.

 

I will soon be offering scientific writing courses on the web, as I have been doing face-to-face for almost a decade. Sign up at my new website to get notified about these courses, as well as upcoming blog posts at this and other blog sites. Get advance notice of forthcoming e-books and web courses. Lots to see at CoyneoftheRealm.com.