Headspace mindfulness training app no better than a fake mindfulness procedure for improving critical thinking, open-mindedness, and well-being.


The Headspace app increased users’ critical thinking and open-mindedness. So did practicing a sham mindfulness procedure. Participants simply sat with their eyes closed, but thought they were meditating.

Results call into question claims about Headspace coming from other studies that did not have such a credible, active control group comparison.

Results also call into question the widespread use of standardized self-report measures of mindfulness to establish whether someone is in the state of mindfulness. These measures don’t distinguish between the practice of standard versus fake mindfulness.

Results can be seen as further evidence that the effects of practicing mindfulness depend on nonspecific factors (aka the placebo effect), rather than any active, distinctive ingredient.

Hopefully this study will prompt better studies evaluating the Headspace App, as well as evaluations of mindfulness training more generally, using credible active treatments, rather than no treatment or waitlist controls.

Maybe it is time for a moratorium on trials of mindfulness without such an active control, or at least a tempering of claims based on poorly controlled trials.

This study points to the need for development of more psychometrically sophisticated measures of mindfulness that are not so vulnerable to experimenter expectations and demand characteristics.

Until the accumulation of better studies with better measures, claims about the effects of practicing mindfulness ought to be recognized as based on relatively weak evidence.

The study

Noone, C., & Hogan, M. Randomised active-controlled trial of effects of online mindfulness intervention on executive control, critical thinking and key thinking dispositions. BMC Psychology, 2018.

Trial registration

The study was initially registered in the AEA Social Science Registry before the recruitment was initiated (RCT ID: AEARCTR-0000756; 14/11/2015) and retrospectively registered in the ISRCTN registry (RCT ID: ISRCTN16588423) in line with requirements for publishing the study protocol.

Excerpts from the Abstract

The aim of this study was…investigating the effects of an online mindfulness intervention on executive function, critical thinking skills, and associated thinking dispositions.


Participants recruited from a university were randomly allocated, following screening, to either a mindfulness meditation group or a sham meditation group. Both the researchers and the participants were blind to group allocation. The intervention content for both groups was delivered through the Headspace online application, an application which provides guided meditations to users.


Primary outcome measures assessed mindfulness, executive functioning, critical thinking, actively open-minded thinking, and need for cognition. Secondary outcome measures assessed wellbeing, positive and negative affect, and real-world outcomes.


Significant increases in mindfulness dispositions and critical thinking scores were observed in both the mindfulness meditation and sham meditation groups. However, no significant effects of group allocation were observed for either primary or secondary measures. Furthermore, mediation analyses testing the indirect effect of group allocation through executive functioning performance did not reveal a significant result and moderation analyses showed that the effect of the intervention did not depend on baseline levels of the key thinking dispositions, actively open-minded thinking, and need for cognition.

The authors conclude

While further research is warranted, claims regarding the benefits of mindfulness practice for critical thinking should be tempered in the meantime.

Headspace being used on an iPhone

The active control condition

The sham treatment control condition was embarrassingly straightforward and simple. But as we will see, participants found it credible.

This condition presented the participants with guided breathing exercises. Each session began by inviting the participants to sit with their eyes closed. These exercises were referred to as meditation but participants were not given guidance on how to control their awareness of their body or breath. This approach was designed to control for the effects of expectations surrounding mindfulness and physiological relaxation to ensure that the effect size could be attributed to mindfulness practice specifically. This content was also delivered by Andy Puddicombe and was developed based on previous work by Zeidan and colleagues [55, 57, 58].

What can we conclude about the standard self-report measures of the state of mindfulness?

The study used the Five Facet Mindfulness Questionnaire, which is widely used to assess whether people are in a state of mindfulness. It has been cited almost 4000 times.

Participants assigned to the mindfulness condition had significant changes for all five facets from baseline to follow-up: observing, non-reactivity, non-judgment, acting with awareness, and describing. In the absence of a comparison with change in the sham mindfulness group, these pre-post results would seem to suggest that the measure was sensitive to whether participants had practiced mindfulness. However, the changes did not differ from those observed for participants who were simply asked to sit with their eyes closed.

I asked Chris Noone about the questionnaires his group used to assess mindfulness:

The participants genuinely thought they were meditating in the sham condition so I think both non-specific and demand characteristics were roughly equivalent across both groups. I’m also skeptical regarding the ability of the Five-Facet Mindfulness Questionnaire (or any mindfulness questionnaire for that matter) to capture anything other than “perceived mindfulness”. The items used in these questionnaires feature similar content to the scripts used by the people delivering the mindfulness (and sham) guided meditations. The improvement in critical thinking across both groups is just a mix of learning across a semester and habituation to the task (as the same problems were posed at both measurements).

What I like about this trial

The trial provides a critical test of a key claim for mindfulness:

Mindfulness should facilitate critical thinking in higher-education, based on early Buddhist conceptualizations of mindfulness as clarity of thought.

The trial was registered before recruitment and departures from protocol were noted.

Sample size was determined by power analysis.

The study had a closely matched, active control condition, a sham mindfulness treatment.

The credibility and equivalence of this sham condition versus the active treatment under study was repeatedly assessed.

“Manipulation checks were carried out to assess intervention acceptability, technology acceptance and meditation quality 2 weeks after baseline and 4 weeks after baseline.”

The study tested some a priori hypotheses about mediation and moderation.

Analyses were intention to treat.

How the study conflicts with past studies

Previous studies claimed to show positive effects of mindfulness on aspects of executive functioning [25, 26].

How the contradiction of past studies by these results is resolved

 “There are many studies using guided meditations similar to those in our mindfulness meditation condition, delivered through smartphone applications [49, 50, 52, 90, 91], websites [92, 93, 94, 95, 96, 97] and CDs [98, 99], which show effects on measures of outcomes reliably associated with increases in mindfulness such as depression, anxiety, stress, wellbeing and compassion. There are two things to note about these studies – they tend not to include a measure of dispositional mindfulness (e.g. only 4% of all mindfulness intervention studies reviewed in a recent meta-analysis include such measures at baseline and follow-up; [54]) and they usually employ a weak form of control group such as a no-treatment control or waitlist control [54]. Therefore, even when change in mindfulness is assessed in mindfulness meditation intervention studies, it is usually overestimated and this must be borne in mind when comparing the results of this study with those of previous studies. This combined with generally only moderate correlations with behavioural outcomes [54] suggests that when mindfulness interventions are effective, dispositional measures do not fully capture what has changed.”

The broader take away messages

“Our results show that, for most outcomes, there were significant changes from baseline to follow-up but none which can be specifically attributed to the practice of mindfulness.”

This creative use of a sham mindfulness control condition is a breakthrough that should be widely followed. First, it allowed a fair test of whether mindfulness is any better than another active, credible treatment. Second, because the active treatment was a sham, results provide a challenge to the notion that apparent effects of mindfulness on critical thinking are anything more than a placebo effect.

The Headspace App is enormously popular and successful, based on claims about the benefits its use will provide. Some of these claims may need to be tempered, not only regarding critical thinking, but also regarding effects on well-being.

The Headspace App platform lends itself to such critical evaluations with respect to a sham treatment with a degree of standardization that is not readily possible with face-to-face mindfulness training. This opportunity should be exploited further with other active control groups constructed on the basis of specific hypotheses.

There is far too much research on the practice of mindfulness being done that does not advance understanding of what works or how it works. We need a lot fewer studies, and more with adequate control/comparison groups.

Perhaps we should have a moratorium on evaluations of mindfulness without adequate control groups.

Perhaps articles aimed at general audiences that make enthusiastic claims for the benefits of mindfulness should routinely note whether those claims are based on adequately controlled studies. Most are not.

Better days: When PLOS Blogs honored my post about fatal flaws in the PACE chronic fatigue syndrome follow-up study (2015)

The back story on my receiving this honor was that PLOS Blogs only days before had shut down the blog site because of complaints from someone associated with the PACE trial. I was asked to resign. I refused. PLOS Blogs relented when I said it would be a publicity disaster for PLOS Blogs.


A Facebook memory of what I was posting two years ago reminded me of better days when PLOS Blogs honored my post about the PACE trial.

Your Top 15 in ’15: Most popular on PLOS BLOGS Network

I was included in a list of the most popular blog posts in a network that received over 2.3 million visitors reading more than 600 new posts. [It is curious that the sixth and seventh most popular posts were omitted from this list, but that’s another story]

I was mentioned for number 11:

11) Uninterpretable: Fatal flaws in PACE Chronic Fatigue Syndrome follow-up study Mind the Brain 10/29/15

Investigating and sharing potential errors in scientific methods and findings, particularly involving psychological research, is the primary reason Clinical Health Psychologist (and PLOS ONE AE) Jim Coyne blogs on Mind the Brain and elsewhere. This closely followed post is one such example.

Earlier decisions by the investigator group preclude valid long-term follow-up evaluation of CBT for chronic fatigue syndrome (CFS). At the outset, let me say that I’m skeptical whether we can hold the PACE investigators responsible… Read more

The back story was that only days before, I had gotten complaints from readers of Mind the Brain who found they were blocked from leaving comments at my blog site. I checked and found that I couldn’t even access the blog as an author.

I immediately emailed Victoria Costello and asked her what had happened. We agreed to talk by telephone, even though it was already late night where I was in Philadelphia. She was in the San Francisco PLOS office.

In the telephone conversation, I was reminded that there were some topics about which I was not supposed to blog. Senior management at PLOS had found me in violation of that prohibition and wanted me to stop blogging.

As is often the case with communication with the senior management of PLOS, no specifics had been given.  There was no formal notice or disclosure about what topics I couldn’t blog or who had complained. And there had been no warning when my access to the blog site was cut. Anything that I might say publicly could be met with a plausible denial.

I reminded Victoria that I had never received any formal specification of what I could not blog about, nor of whom the complaint had come from. There had been a vague communication from her about not blogging about certain topics. I knew that complaints from either Gabriele Oettingen or her family members had led to a request that I stop blogging about the flaws in her book, Rethinking Positive Thinking. That was easy to honor because I was not planning another post about that dreadful self-help book. Any other prohibition was left so vague that I had no idea I couldn’t blog about the PACE trial. I had known that the authors of the British Psychological Society’s Understanding Psychosis were quite upset with what I had said in heavily accessed blog posts. Maybe that was the source of the other prohibition, but no one made that clear. And I wasn’t sure I wanted to honor it, anyway.

I pressed Victoria Costello for details. She said an editor had complained. When I asked if it was Richard Horton, she paused and mumbled something that I took as an affirmative. Victoria then suggested that it would be best for the blog network and for me if we had a mutually agreed-upon parting of ways. I told her that I would probably publicly comment that the breakup was not mutual, and that it would be a publicity disaster for the blog.

Why was I even blogging for PLOS Blogs? Victoria Costello had recruited me after I expressed discontent with the censorship I was experiencing at Psychology Today. The editors there had complained that some of my blogging about antidepressants might discourage ads from the pharmaceutical companies on which they depended for revenue. They had insisted on the right to approve my posts before I uploaded them. In inviting me to PLOS Blogs, Victoria told me that she too was a refugee from blogging at Psychology Today. I wouldn’t have to worry about restrictions on what I could say at Mind the Brain, beyond avoiding libel.

I ended the conversation accepting the prohibition on blogging about the PACE trial. This was despite disagreeing with the rationale that it would be a conflict of interest for me to blog about the trial after requesting the data from the PLOS One paper.

Since then, I have repeatedly requested that PLOS management acknowledge the prohibition on my blogging, or at least put it in writing. My requests were met with repeated refusals from Managing Editor Iratxe Puebla, who always cited my conflict of interest.

In early 2017, I began publicly tweeting about the issue, stimulating some curiosity in others about whether there was a prohibition. In July 2017, the entire Mind the Brain site, not just my blog, was shut down.

In early 2018, I will provide more backstory on that shutdown and dispute what was said in the blog post below. I will also say more about the collusion between PLOS One senior management and the PACE investigators that has kept the data unavailable two years after I requested it.

Message for Mind the Brain readers from PLOSBLOGS

This strange thumbnail is the default for when no preferred image is provided. It could indicate the haste with which this blog was posted.

Posted July 31, 2017 by Victoria Costello in Uncategorized

After five years and over a hundred posts, PLOSBLOGS is retiring its psychology blog, Mind the Brain, from our PLOS-hosted blog network. By mutual agreement with the primary Mind the Brain blogger, James Coyne, Professor Coyne will retain the name of this blog and will take his archive of posts for reuse on his independent website, http://www.coyneoftherealm.com.

According to PLOSBLOGS’ policy for all our retired (inactive) blogs, any and all original posts published on Mind the Brain will retain their PLOS web addresses as intact urls, so links made previously from other sites will not be broken. In addition, PLOS will supply the archive of his posts directly to Prof Coyne so that he may repost them anywhere he may wish.

PLOS honors James Coyne’s voice as an important one in peer-to-peer scientific criticism. As discussed with Professor Coyne in recent days, after careful consideration PLOSBLOGS has concluded that it does not have the staff resources required to vet the sources, claims and tone contained in his posts, to assure they are aligned with our PLOSBLOGS Community Guidelines. This has led us to the conclusion that Professor Coyne and his content would be better served on his own independent blog platform. We wish James Coyne the best with his future blogging.

—Victoria Costello, Senior Editor, PLOSBLOGS & Communities


The SMILE Trial Lightning Process for Children with CFS: Results too good to be true?

The SMILE trial holds many anomalies and leaves us with more questions than answers.


A guest post by Dr. Keith Geraghty

Honorary Research Fellow at the University of Manchester, Centre for Primary Care, Division of Population Health and Health Services Research

ASA ruling left some awkward moments in Phil Parker’s videos promoting his Lightning Process.

The Advertising Standards Authority previously ruled that the Lightning Process (LP) should not be advertised as a treatment for CFS/ME. So how, then, did LP end up being tested as a treatment in a clinical trial involving adolescents with CFS/ME? Publication of the trial sparked controversy after it was claimed that LP, in addition to specialist medical care, out-performed specialist medical care alone. This blog attempts to shed light on just how a quack alternative online teaching programme ended up in a costly clinical trial, and discusses how the SMILE trial exemplifies all that is wrong with contemporary psycho-behavioural trials, which are clearly vulnerable to bias and spin.

The SMILE trial compared LP plus specialist medical care (SMC) to SMC alone (commonly a mix of cognitive behavioural therapy and graded exercise therapy). LP is a trademarked training programme created by Phil Parker from osteopathy, life coaching and neuro-linguistic programming. It costs over £600 and after assessment and telephone briefings, clients attend group sessions over three days. While there is much secrecy about what exactly these sessions involve, a cursory search online shows us that past clients were told to ‘block out all negative thoughts’ and to consider themselves well, not sick. A person with an illness is said to be ‘doing illness’ (LP spells doing as duing, to signify LP means more than just doing). LP appears to attempt to get a participant to ‘stop doing’ by blocking negative thoughts and making positive affirmations.

Leading psychologists have raised concerns. Professor James Coyne called LP “quackery” and said neuro-linguistic programming “…has been thoroughly debunked for its pseudoscience”. In an expert reaction to the SMILE trial for the Science Media Centre, Professor Dorothy Bishop of Oxford University stated: “the intervention that was assessed is commercial and associated with a number of warning signs. The Lightning Process appears based on neuro-linguistic programming, which, despite its scientific-sounding name, has long been recognised as pseudoscience“.

The first and most obvious question is why the SMILE trial took place at all. Trial lead Professor Esther Crawley, who runs an NHS paediatric CFS/ME clinic, says she undertook the trial after many of her patients and their parents asked about LP. Patients with CFS/ME often report a lack of support from doctors and health care providers, and some turn to the internet seeking help; some are drawn to try alternative approaches, such as LP. But is that justification enough for spending over £160,000 on testing LP on children? I think not. Should we test every quack approach peddled online: herbs, crystals, spiritual healing – particularly when funding in CFS/ME research is currently so limited? There must also be compelling scientific plausibility to justify a trial. Simply wanting to see if something helps does not constitute adequate justification.

The SMILE trial has a fundamental design flaw. The trial compared specialist medical care alone (SMC) against SMC plus LP (SMC&LP). To the novice observer this may appear acceptable, but clinical trials are used to test item x against item y. For example, imagine trying to see which drug works better, drug A or drug B, you would not give drug A to one group and both drugs A and B to another group – yet this is exactly what happened in SMILE. In seeking to test LP, Prof. Crawley gave LP&SMC together – rendering any findings from this trial arm as pretty meaningless. The proper controls were missing. In addition, a trial of this magnitude would normally have a third arm, a do-nothing or usual care group, or another talk therapy control – yet such controls were missing.

Next we turn to the trial’s primary outcome measures. These were subjective self-reports of changes in physical function (using the SF-36). Secondary outcomes were quality of life, anxiety and school attendance. These outcomes were assessed at 6 months with a follow-up at 12 months. It is reported that SMC+LP outperformed SMC alone on these measures at 6 months, with the difference maintained at 12 months. However, there is no way to determine whether any claimed improvements came from LP alone, given that LP was mixed with SMC. We could assume that LP+SMC meant more support, positive expectations and increased contact time. Here we see how farcical SMILE is as a trial: one group got two treatments (possible double help) while the other got one treatment (possible half help).

Of particular concern is how few of the available patients enrolled in and completed the trial: 637 children aged 12-18 attended screening or an appointment at a specialist CFS/ME clinic; fewer than half (310) were deemed eligible; just 136 consented to receiving trial information, and then only 100 were randomised (less than a third of the eligible group). 49 had SMC and 51 had SMC+LP. Overall, 207 patients either declined to participate or were not sufficiently interested to return the consent form. Were patients self-selecting? Were those less likely to respond to nonspecific factors choosing not to participate, leaving a group already interested in LP, given that Prof. Crawley said many patients had asked about it?

As the trial progressed, patients dropped out: of the 51 participants allocated to SMC+LP, only 39 received full SMC+LP. At the 6-month assessment, just 38 of the 48 allocated to SMC and 46 of the 51 in SMC+LP are fully recorded. At 12 months there are further losses to follow-up in both cohorts: 14% in LP and 24% in SMC. The reasons for participant loss are not fully clear, though the paper reports 5 adverse events (3 in the SMC+LP arm). It is worth noting that physical function at 6 months deteriorated in 9 participants (roughly 10% overall), 8 of them in the SMC arm, with 5 participants having a fall of ≤10 on the SF-36 physical function subscale (deemed not clinically important). Again, this raises the question of whether some degree of self-selection took place. The fact that 3 of the participants assigned to SMC alone appear to have received LP reflects possible contamination of research cohorts that are meant to be kept apart.

 Seven problems stand out in SMILE:

  1. The use of the SF-36 physical function test was questionable. This self-report instrument is not designed or adequately validated for use in children.
  2. Many of the participants appear to have had symptoms of anxiety and depression at the start of the trial. SMILE defined anxiety and depression as a score of ≥12 out of 22 on the self-report HADS. Usually a score of 8 or above is considered positive for mild anxiety and depression, and a score above 12 for moderate anxiety and depression[1]. The mean HADS score at trial entry was 9.6 (meaning that, using standard cut-offs, most participants met criteria for anxiety and depression). On the Spence Anxiety Scale (SCAS) the average entry score was 35, with above 33 indicative of anxiety in this age group. Such mild to moderate elevations in depression and anxiety symptoms are very responsive to nonspecific support.
  3. There is an anomaly in the data on improvement: in the physical function test, the average base level of the children at entry into the trial was 54.5 (n=99), considered severely physically impaired. Only 52.5% of participants had been able to attend at least 3 days of school in the week prior to their entry into the study. Yet those assigned to SMC+LP were well enough to attend 3 consecutive days of sessions lasting 4 hours. The reports of severe physical disablement do not match the capabilities of those who participated in the course. Were the children’s self-reported poor physical abilities exaggerated to justify enrolment in the trial? Were the children’s elevated depression and anxiety symptoms responsive to the nonspecific elements in extra time of being assigned to LP plus standard care?
  4. If subjective self-report is accepted as a recovery criterion, then LP, just 12 hours of talk therapy added to SMC, would cure the majority of children with CFS. Such an effect would be astonishing, if true. In randomized controlled trials in adults with CFS/ME, such dramatic restoration of physical function (a wholesale return to near normal) is universally not seen. The SMILE trial is clearly unbelievable.
  5. SMILE’s reliance on the broad NICE criteria means there is a clear risk that patients were included in the trial who would not have met stricter definitions of the illness. There is a growing concern that loose entry criteria in clinical trials in ME/CFS allow enrolment of many participants who do not in fact have ME/CFS. A detailed study of CFS prevalence found many children are wrongly diagnosed with CFS, when they may just be suffering from general fatigue and/or mental health complaints (Jones et al., 2004). SMILE uses NICE guidelines to diagnose CFS: fatigue must be present for at least 3 months with one or more of four other symptoms, which can be as general as sleep disturbance[2]. In contrast, Jones et al. showed that using the Centers for Disease Control criteria of at least four specific symptoms alongside detailed clinical examination, many children believed to have CFS are diagnosed with other exclusionary disorders, often general fatigue, mental health complaints, drug and alcohol abuse or eating disorders (that are often not readily disclosed to parents or doctors)[3].
  6. LP involves attempting to coerce clients into thinking that they have control over their symptoms and to block out symptoms. This alone would distort any response by a participant in a follow-on questionnaire about symptoms.
  7. LP was delivered by people from the Lightning Process Company. Phil Parker and his employees held a clear financial interest in a positive outcome in SMILE. Such an obvious conflict of interest is hard to disentangle and totally nullifies any outcomes from this trial.

Final Thoughts

The SMILE trial holds many anomalies and leaves us with more questions than answers.

It is not clear whether the children enrolled in the trial, diagnosed with CFS using NICE criteria, might have been deemed non-CFS using more stringent clinical screening (e.g. CDC or IOM criteria).

There is no way of determining whether any effect following SMC+LP was anything more than the result of non-specific factors, psychological tricks and persuasion.

The fact LP+SMC appears to have cured the majority of participants with as little as 12 hours talk therapy is a big flashing red light that this trial is clearly fundamentally flawed.

There is a very real danger in promoting LP as a treatment for CFS/ME: the UK ME Association conducted a survey of members (4,217 members) and found that 20% of those who tried LP reported feeling worse (7.9% slightly worse, 12.9% much worse). SMILE cannot be, and should not be, used to justify LP as a treatment for CFS/ME.

The Lightning Process has no scientific credibility and this trial highlights a fundamental flaw in contemporary clinical trials: they are susceptible to suggestion, bias and spin. The SMILE trial appears to draw paediatric CFS/ME clinical care for children into a swamp of pseudoscience and mysticism. This is a clear step backward. There is little to smile about after reviewing the SMILE trial.

Dr. Geraghty is currently an Honorary Research Fellow within the Centre for Primary Care, Division of Population Health and Health Services Research at the University of Manchester. He previously worked as a research associate at Cardiff University and Imperial College London. He left a career in clinical medicine after becoming ill with ME/CFS. The main themes of his work are doctor-patient relationships, medically unexplained symptoms, quality and safety in health care delivery, physician well-being and evidence-based medicine. He has a special interest in medically unexplained symptoms (MUS), and Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. 

Although only recently published, his paper ‘PACE-Gate’: When clinical trial evidence meets open data access is already ranked #2 out of 1,350 papers in altmetrics in the Journal of Health Psychology.

A recent Times article cited Dr Geraghty on reasons why NICE needs to update its recommendations for ME/CFS.

Special thanks to John Peters and David Marks for their feedback.

Coyne, J. (2017) Mind the Brain Blog, https://www.coyneoftherealm.com/blogs/mind-the-brain/embargo-broken-bristol-university-professor-to-discuss-trial-of-quack-chronic-fatigue-syndrome-treatment
Dorothy Bishop, Expert Commentary to the SMC (2017) http://www.sciencemediacentre.org/expert-reaction-to-controversial-treatment-for-cfsme/

1. Crawley, E., et al., Chronic disabling fatigue at age 13 and association with family adversity. Pediatrics, 2012. 130(1): p. e71-e79.
2. Crawley, E.M., et al., Clinical and cost-effectiveness of the Lightning Process in addition to specialist medical care for paediatric chronic fatigue syndrome: randomised controlled trial. Archives of Disease in Childhood, 2017.
3. Jones, J.F., et al., Chronic fatigue syndrome and other fatiguing illnesses in adolescents: a population-based study. Journal of Adolescent Health, 2004. 35(1): p. 34-40.

Science Media Centre concedes negative reaction from scientific community to coverage of Esther Crawley’s SMILE trial.

“It was the criticism from within the scientific community that we had not anticipated.”


Editorial from the Science Media Centre

September 28, 2017

Inconvenient truths



“It was the criticism from within the scientific community that we had not anticipated.”

“This time the SMC also came under fire from our friends in science…Quack buster extraordinaire David Colquhoun tweeted, ‘More reasons to be concerned about @SMC_London?’

Other friends wrote to us expressing concern about the unintended consequences of SMC briefings – with one saying that policy makers were furious at having to deal with the fallout from our climate briefing and others worried that the briefing on the CFS/ME trial would allow the only private company offering the treatment to profit by over-egging preliminary findings.

Those of us who are accustomed to the Science Media Centre UK’s (SMC) highly slanted coverage of select topics can detect a familiar defensive, yet self-congratulatory tone in an editorial put out by the SMC in reaction to its broad coverage of Esther Crawley’s SMILE trial of the quack treatment, Phil Parker’s Lightning Process. Once again, critics of ineffectual treatments being offered for chronic fatigue syndrome/myalgic encephalomyelitis, both patients and professionals, are lumped with climate change deniers. Ho-hum, this comparison is getting so clichéd.

Perhaps even better, the SMC editorial’s concessions of poor coverage of the SMILE trial drew sharp comments from commentators amplifying the point that the SMC had botched the job.

Here are some comments below, with emphases added. But let’s not be lulled by the SMC into assuming that these intelligent, highly articulate comments are necessarily from the professional community. I wouldn’t be surprised if hiding behind the pseudonyms are some of the excellent citizen scientists that the patient community has had to grow in the face of vilification and stigmatization led by the SMC.

I actually think I recognize a spokesperson from the patient community writing under the pseudonym ‘Scary vocal critic.’

Scary vocal critic says:

September 29, 2017 at 5:59 am

The way that this blog glosses over important details in order to promote a simplistic narrative is just another illustration of why so many are concerned by Fiona Fox’s work, and the impact [of] the Science Media Centre.

Let’s look in a bit more detail at the SMILE trial, from Esther Crawley at Bristol University. This trial was intended to assess the efficacy of Phil Parker’s Lightning Process©. Phil Parker has a history of outlandish medical claims about his ability to heal others, selling training in “the use of divination medicine cards and tarot as a way of making predictions” and providing a biography which claimed: “Phil Parker is already known to many as an inspirational teacher, therapist, healer and author. His personal healing journey began when, whilst working with his patients as an osteopath. He discovered that their bodies would suddenly tell him important bits of information about them and their past, which to his surprise turned out to be factually correct! He further developed this ability to step into other people’s bodies over the years to assist them in their healing with amazing results. After working as a healer for 20 years, Phil Parker has developed a powerful and magical program to help you unlock your natural healing abilities. If you feel drawn to these courses then you are probably ready to join.” https://web.archive.org/web/20070615014926/http://www.healinghawk.com/prospectushealing.htm

While much of the teaching materials for the Lightning Process are not available for public scrutiny (LP being copyrighted and controlled by Phil Parker), it sells itself as being founded on neurolinguistic programming and osteopathy, which are themselves forms of quackery. Those who have been on the course have described a combination of strange rituals, intensive positive affirmations, and pseudoscientific neuro-babble; all adding up to promote the view that an individual’s ill-health can be controlled if only they are sufficiently committed to the Lightning Programme. Bristol University appears to have embraced the neurobabble, and in their press release about the SMILE results they describe LP thus: “It is a three-day training programme run by registered practitioners and designed to teach individuals a new set of techniques for improving life and health, through consciously switching on health promoting neurological pathways.”


Unsurprisingly, many patients have complained about paying for LP and receiving manipulative quackery. This can have unpredictable consequences. This article reports a child attempting to kill themselves after going on the Lightning Process. Before conducting a trial, the researchers involved had a responsibility to examine the course and training materials and remove all pseudo-science, yet this was not done. Instead, those patient groups raising concerns about the trial were smeared, and presented as being opposed to science.

The SMILE trial was always an unethical use of research funding, but if it had followed its original protocol, it would have been less likely to generate misleading results and headlines. The Skeptics Dictionary’s page on the Lightning Process features a contribution which explains that: “the Lightning Process RCT being carried out by Esther Crawley changed its primary outcome measure from school attendance to scores on a self-report questionnaire. Given that LP involves making claims to patients about their own ability to control symptoms in exactly the sort of way likely to lead to response bias, it seems very likely that this trial will now find LP to be ‘effective’. One of the problems with EBM is that it is often difficult to reliably measure the outcomes that are important to patients and account for the biases that occur in non-blinded trials, allowing for exaggerated claims of efficacy to be made to patients.”

The SMILE trial was a nonblinded, A vs A+B design, testing a ‘treatment’ which included positive affirmations, and then used subjective self-report questionnaires as a primary outcome. This is not a sensible way of conducting a trial, as anyone who has looked at how junk-science can be used to promote quackery will be aware.

You can see the original protocol for the SMILE trial here (although this protocol refers to merely a feasibility study, this is the same research, with the same ethical review code, the feasibility study having seemingly been converted to a full trial a year into the research):

The protocol stated that: “The primary outcome measure for the interventions will be school attendance/home tuition at 6 months.” It is worth noting that the new SMILE paper reported that there was no significant difference between groups for what was the trial’s primary outcome. There was a significant difference at 12 months, but by this point data on school attendance was missing for one third of the participants of the LP arm. The SMC failed to inform journalists of this outcome switching, instead presenting Prof Crawley as a critic converted by a rigorous examination of the evidence, despite her having told the ethics review board in 2010 that “she has worked before with the Bath [LP] practitioner who is good”. https://meagenda.wordpress.com/2011/01/06/letter-issued-by-nres-following-scrutiny-of-complaints-in-relation-to-smile-lighting-process-pilot-study/

Also, while the original protocol, and a later analysis plan, refer to verifying self-reported school attendance with school records, I could see no mention of this in the final paper, so it may be that even this more objective outcome measure has been rendered less useful and more prone to problems with response bias.

Back to Fiona Fox’s blog: “If you had only read the headlines for the CFS/ME story you may conclude that the treatment tested at Bristol might be worth a try if you are blighted by the illness, when in truth the author said repeatedly that the findings would first have to be replicated in a bigger trial.”

How terrible of sloppy headline writers to misrepresent research findings. This is from the abstract of Esther Crawley’s paper: “Conclusion The LP is effective and is probably cost-effective when provided in addition to SMC for mild/moderately affected adolescents with CFS/ME.” http://adc.bmj.com/content/early/2017/09/20/archdischild-2017-313375

Fox complains of “vocal critics of research” in the CFS and climate change fields. There has been a prolonged campaign from the SMC to smear those patients and academics who have been pointing out the problems with poor quality UK research into CFS, attempting to lump them with climate change deniers, anti-vaccinationists and animal rights extremists. The SMC used this campaign as an example of when they had “engineered the coverage” by “seizing the agenda”:


Despite dramatic claims of a fearsome group of dangerous extremists (“It’s safer to insult the Prophet Mohammed than to contradict the armed wing of the ME brigade”), a Freedom of Information request helped us gain some valuable information about exactly what behaviour most concerned victimised researchers such as Esther Crawley:

“Minutes from a 2013 meeting held at the Science Media Centre, an organisation that played an important role in promoting misleading claims about the PACE trial to the UK media, show these CFS researchers deciding that “harassment is most damaging in the form of vexatious FOIs [Freedom of Information requests]”.[13,16, 27-31] The other two examples of harassment provided were “complaints” and “House of Lords debates”.[13] It is questionable whether such acts should be considered forms of harassment.


[A full copy of the minutes is included at the above address.]

Since then, a seriously ill patient managed to win a legal battle against researchers, forcing the release of key trial data, picking apart the prejudices that were promoted and leading the Judge to state that “assessment of activist behaviour was, in our view, grossly exaggerated and the only actual evidence was that an individual at a seminar had heckled Professor Chalder.” http://www.informationtribunal.gov.uk/DBFiles/Decision/i1854/Queen%20Mary%20University%20of%20London%20EA-2015-0269%20(12-8-16).PDF

So why would there be an attempt to present requests for information, complaints, and mere debate as forms of harassment? Rather embarrassingly for Fiona and the SMC, it has since become clear. Following the release of (still only some of) the data from the £5 million PACE trial, it is now increasingly recognised within the academic community that patients were right to be concerned about the quality of these researchers’ work, and the way in which people had been misled about the trial’s results. The New York Times reported on calls for the retraction of a key PACE paper (Robin Murray, the journal’s editor and a close friend of Simon Wessely’s, does not seem keen to discuss and debate the problems with this work): https://www.nytimes.com/2017/03/18/opinion/sunday/getting-it-wrong-on-chronic-fatigue-syndrome.html The Journal of Health Psychology has published a special issue devoted to the PACE trial debacle: http://journals.sagepub.com/doi/full/10.1177/1359105317722370 The CDC has dropped promotion of CBT and GET: https://www.statnews.com/2017/09/25/chronic-fatigue-syndrome-cdc/ And NICE has decided that a full review of its guidelines for CFS is necessary, citing concerns about research such as PACE as one of the key reasons for this: https://www.nice.org.uk/guidance/cg53/resources/surveillance-report-2017-chronic-fatigue-syndromemyalgic-encephalomyelitis-or-encephalopathy-diagnosis-and-management-2007-nice-guideline-cg53-4602203537/chapter/how-we-made-the-decision https://www.thetimes.co.uk/edition/news/mutiny-by-me-sufferers-forces-a-climbdown-on-exercise-treatment-npj0spq0w

The SMC’s response to this has not been impressive.

Fox writes: “Both briefings fitted the usual mould: top quality scientists explaining their work to smart science journalists and making technical and complex studies accessible to readers.”

I’d be interested to know how it was Fox decided that Crawley was a top quality scientist. Also, it is worrying that the culture of UK science journalism seems to assume that making technical and complex studies (like SMILE?!) accessible for readers is their highest goal. It is not a surprise that it is foreign journalists who have produced more careful and accurate coverage of the PACE trial scandal.

Unlike the SMC and some CFS researchers, I do not consider complaints or debate to be a form of harassment, and would be quite happy to respond to anyone who disagrees with the concerns I have laid out here. I have had to simplify things, but believe that I have not done so in a way which favours my case. It seems that there are few people willing to try to publicly defend the PACE trial anymore, and I have never seen anyone from the SMC attempt to respond to anything other than a straw-man representation of their critics. Let’s see what response these inconvenient truths receive.


Michael Emmans-Dean says:

October 2, 2017 at 8:22 am

The only point I would add to this excellent post is to ask why on earth the SMC decided to feature such a small, poorly-designed trial as SMILE. The most likely explanation is that it was intended as a smokescreen for an inconvenient truth. NICE’s retrieval of their CFS guideline from the long grass (the “static list”) is a far bigger story and it was announced in the same week that SMILE was published.


Fiona Roberts says:

September 29, 2017 at 9:03 am

Hear hear!

Power pose: I. Demonstrating that replication initiatives won’t salvage the trustworthiness of psychology

An ambitious multisite initiative showcases how inefficient and ineffective replication is in correcting bad science.



Bad publication practices keep good scientists unnecessarily busy, as in replicability projects. – Bjoern Brembs

An ambitious multisite initiative showcases how inefficient and ineffective replication is in correcting bad science. Psychologists need to reconsider the pitfalls of an exclusive reliance on this strategy to improve lay persons’ trust in their field.

Despite the consistency of null findings across seven attempted replications of the original power pose study, editorial commentaries in Comprehensive Results in Social Psychology left some claims intact and called for further research.

Editorial commentaries on the seven null studies set the stage for continued marketing of self-help products, mainly to women, grounded in junk psychological pseudoscience.

Watch for repackaging and rebranding in next year’s new and improved model. Marketing campaigns will undoubtedly include direct quotes from the commentaries as endorsements.

We need to re-examine basic assumptions behind replication initiatives. Currently, these efforts suffer from prioritizing the reputations and egos of those misusing psychological science to market junk and quack claims over protecting the consumers whom these gurus target.

In the absence of a critical response from within the profession to these persons prominently identifying themselves as psychologists, it is inevitable that the void will be filled by those outside the field who have no investment in preserving the image of psychology research.

In the case of power posing, watchdog critics might be recruited from:

Consumer advocates concerned about just another effort to defraud consumers.

Science-based skeptics who see in the marketing of power posing familiar quackery in the same category as hawkers using pseudoscience to promote homeopathy, acupuncture, and detox supplements.

Feminists who decry the message that women need to get some balls (testosterone) if they want to compete with men and overcome gender disparities in pay. Feminists should be further outraged by the marketing of junk science to vulnerable women with an ugly message of self-blame: It is so easy to meet and overcome social inequalities that they have only themselves to blame if they do not do so by power posing.

As reported in Comprehensive Results in Social Psychology,  a coordinated effort to examine the replicability of results reported in Psychological Science concerning power posing left the phenomenon a candidate for future research.

I will be blogging more about that later, but for now let’s look at a commentary from three of the over 20 authors that reveals an inherent limitation of such ambitious initiatives in tackling the untrustworthiness of psychology.

Cesario J, Jonas KJ, Carney DR. CRSP special issue on power poses: what was the point and what did we learn? Comprehensive Results in Social Psychology. 2017.


Let’s start with the wrap up:

The very costly expense (in terms of time, money, and effort) required to chip away at published effects, needed to attain a “critical mass” of evidence given current publishing and statistical standards, is a highly inefficient use of resources in psychological science. Of course, science is to advance incrementally, but it should do so efficiently if possible. One cannot help but wonder whether the field would look different today had peer-reviewed preregistration been widely implemented a decade ago.

We should consider the first sentence with some recognition of just how much untrustworthy psychological science is out there. Must we mobilize similar resources in every instance, or can we develop some criteria to decide what is worthy of replication? As I have argued previously, there are excellent reasons for deciding that the original power pose study could not contribute a credible effect size to the literature. There is no there there to replicate.

The authors assume preregistration of the power pose study would have solved problems. In clinical and health psychology, long-standing recommendations to preregister trials are acquiring new urgency. But the record shows that motivated researchers routinely ignore requirements to preregister and deviate from the primary outcomes and analytic plans to which they have committed themselves. Editors and journals let them get away with it.

What measures do the replicationados have to ensure the same things are not being said about bad psychological science a decade from now? Rather than urging uniform adoption and enforcement of preregistration, replicationados urged the gentle nudge of badges for studies which are preregistered.

Just prior to the last passage:

Moreover, it is obvious that the researchers contributing to this special issue framed their research as a productive and generative enterprise, not one designed to destroy or undermine past research. We are compelled to make this point given the tendency for researchers to react to failed replications by maligning the intentions or integrity of those researchers who fail to support past research, as though the desires of the researchers are fully responsible for the outcome of the research.

There are multiple reasons not to give the authors of the power pose paper such a break. There is abundant evidence of undeclared conflicts of interest in the huge financial rewards for publishing false and outrageous claims. Psychological Science allowed the abstract of the original paper to leave out any embarrassing details of the study design and results and end with a marketing slogan:

That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.

 Then the Association for Psychological Science gave a boost to the marketing of this junk science with a Rising Star Award to two of the authors of this paper for having “already made great advancements in science.”

As seen in this special issue of Comprehensive Results in Social Psychology, the replicationados share responsibility with Psychological Science and APS for keeping this system of perverse incentives intact. At least they are guaranteeing plenty of junk science in the pipeline to replicate.

But in the next installment on power posing I will raise the question of whether early career researchers are hurting their prospects for advancement by getting involved in such efforts.

How many replicationados does it take to change a lightbulb? Who knows, but a multisite initiative can be combined with a Bayesian meta-analysis to give a tentative and unsatisfying answer.

Coyne JC. Replication initiatives will not salvage the trustworthiness of psychology. BMC Psychology. 2016 May 31;4(1):28.

The following can be interpreted as a declaration of financial interests or a sales pitch:

I will soon be offering e-books providing skeptical looks at positive psychology and mindfulness, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.


“ACT: The best thing [for pain] since sliced bread or the Emperor’s new clothes?”

Reflections on the debate with David Gillanders about Acceptance and Commitment Therapy at the British Pain Society, Glasgow, September 15, 2017



David Gillanders and I held our debate “ACT: best thing since sliced bread or the Emperor’s new clothes?” at the British Pain Society meeting on Thursday, September 15, 2017 in Glasgow. We will eventually make our slides and a digital recording of the debate available.

I enjoyed hanging out with David Gillanders. He is a great guy who talks the talk, but also walks the walk. He lives ACT as a life philosophy. He was an ACT trainer speaking before a sympathetic audience, many who had been trained by him.

Some reflections from a few days later.

I was surprised how much Acceptance and Commitment Therapy (along with #mindfulness) has taken over UK pain services. A pre-debate poll showed most of the audience came convinced that indeed, ACT was the best thing since sliced bread.

I was confident that my skepticism was firmly rooted in the evidence. I don’t think there is debate about that. David Gillanders agreed that higher quality studies were needed.

But in the end, even if I did not convert many, I came away quite pleased with the debate.

Standards for evaluating the evidence for ACT for pain

 I recently wrote that ACT may have moved into a post-evidence phase, with its chief proponents switching from citing evidence to making claims about love, suffering, and the meaning of life. Seriously.

Steve Hayes prompted me on Twitter to take a closer look at the most recent evidence for ACT. As reported in an earlier blog, I took a close look. I was not impressed: proponents of ACT are not making much progress in developing evidence anywhere near as strong as their claims. We need a lot less ACT research that adds no quality evidence despite ACT being promoted enthusiastically as if it does. We need more sobriety from the promoters of ACT, particularly those in academia, like Steve Hayes and Kelly Wilson, who know something about how to evaluate evidence. They should not patronize workshop goers with fanciful claims.

David Gillanders talked a lot about the philosophy and values that are expressed in ACT, but he also made claims about its research base, echoing the claims made by Steve Hayes and other prominent ACT promoters.

Standards for evaluating research exist independent of any discussion of ACT

There are standards for interpreting clinical trials and integration of their results in meta analysis that exist independent of the ACT literature. It is not a good idea to challenge these standards in the context of defending ACT against unfavorable evaluations, although that is exactly how Hayes and his colleagues often respond. I will get around to blogging about the most recent example of this.

Atkins PW, Ciarrochi J, Gaudiano BA, Bricker JB, Donald J, Rovner G, Smout M, Livheim F, Lundgren T, Hayes SC. Departing from the essential features of a high quality systematic review of psychotherapy: A response to Öst (2014) and recommendations for improvement. Behaviour Research and Therapy. 2017 May 29.

Within-group (pre-post) differences in outcome. David Gillanders echoed Hayes in using within-group effect sizes to describe the effectiveness of ACT. Results presented in this way may look impressive, but they are exaggerated when compared to results obtained between groups. I am not making that up. Changes within the group of patients who received ACT reflect the specific effects of ACT plus whatever nonspecific factors were operating. That is why we need an appropriate comparison-control group to examine between-group differences, which are always more modest than the within-group effects.
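The arithmetic behind this point is simple enough to sketch. The numbers below are hypothetical, chosen only for illustration (not from any ACT trial): both arms improve because of nonspecific factors, so the within-group effect size comes out twice as large as the between-group one.

```python
# Hypothetical symptom scores (lower = better); not from any real trial.

def cohens_d_within(pre_mean, post_mean, pre_sd):
    """Within-group effect size: pre-post change scaled by baseline SD."""
    return (pre_mean - post_mean) / pre_sd

def cohens_d_between(change_tx, change_ctrl, pooled_sd):
    """Between-group effect size: difference in change between arms."""
    return (change_tx - change_ctrl) / pooled_sd

pre, sd = 20.0, 5.0
post_tx, post_ctrl = 14.0, 17.0   # both arms improve; treatment improves more

d_within = cohens_d_within(pre, post_tx, sd)                       # 1.2, "large"
d_between = cohens_d_between(pre - post_tx, pre - post_ctrl, sd)   # 0.6, half as big

print(d_within, d_between)  # → 1.2 0.6
```

The within-group number bundles the treatment effect together with regression to the mean, attention, and expectation; only the between-group contrast isolates what the treatment itself added.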

Compared to what? Most randomized trials of ACT involve a wait list, no-treatment, or ill-described standard care (which often represents no treatment). Such comparisons are methodologically weak, especially when patients and providers know what is going on (an unblinded trial) and when outcomes are subjective self-report measures.

A clever study in the New England Journal of Medicine showed that with such subjective self-report measures, one cannot distinguish between a proven effective inhaled medication for asthma, an inert substance simply inhaled, and sham acupuncture. In contrast, objective measures of breathing clearly distinguish the medication from the comparison-control conditions.

So, it is not an exaggeration to say that most evaluations of ACT are conducted under circumstances in which even sham acupuncture or homeopathy would look effective.

Not superior to other treatments. There are no trials comparing ACT to a credible active treatment in which ACT proves superior, either for pain or other clinical problems. So, we are left saying ACT is better than doing nothing, at least in trials where any nonspecific effects are concentrated among the patients receiving ACT.

Rampant investigator bias. A lot of trials of ACT are conducted by researchers having an investment in showing that ACT is effective. That is a conflict of interest. Sometimes it is called investigator allegiance, or a promoter or originator bias.

Regardless, when drugs are being evaluated in a clinical trial, it is recognized that there will be a bias toward the drug favored by the manufacturer conducting the trial. It is increasingly recognized that meta analyses conducted by promoters should also be viewed with extra skepticism, and that trials conducted by researchers with such conflicts of interest should be considered separately to see whether they produce exaggerated results.

ACT desperately needs randomized trials conducted by researchers who don’t have a dog in the fight, who lack the motivation to torture findings to give positive results when they are simply not present. There’s a strong confirmation bias in current ACT trials, with promoter/researchers embarrassing themselves in their maneuvers to show strong, positive effects when only weak or null findings are available. I have documented [ 1, 2 ] how this trend started with Steve Hayes dropping two patients from his study with Patricia Bach of the effects of brief ACT on re-hospitalization of inpatients. One patient had died by suicide and another was in jail, so they couldn’t be rehospitalized and were dropped from the analyses. The deed could only be noticed by comparing the published paper with Patricia Bach’s dissertation. It turned an otherwise nonsignificant finding in a small trial into a significant one.

Trials that are too small to matter. A lot of ACT trials have too few patients to produce a reliable, generalizable effect size. Lots of us in situations far removed from ACT trials have shown justification for the rule of thumb that we should not consider effect sizes from trials having fewer than 35 patients per treatment or comparison cell. Even this standard is quite liberal. Even if a moderate effect would be significant in a larger trial, there is less than a 50% probability it would be detected in a trial this small. To be significant with such a small sample size, differences between treatments have to be large, and then they are probably due either to chance or to something dodgy that the investigators did.

Many claims for the effectiveness of ACT for particular clinical problems come from trials too small to generate a reliable effect size. I invite readers to undertake the simple exercise of looking at the sample sizes in any study cited as support for the effectiveness of ACT. If you exclude such small studies, there is not much research left to talk about.
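Readers can check the rule of thumb for themselves with a back-of-the-envelope power calculation. This is my own sketch under assumptions I am supplying (two-arm trial, a "moderate" true standardized effect of d = 0.5, two-sided alpha of .05, and a normal approximation to the two-sample t-test); it is not taken from any ACT paper.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n_per_arm, z_crit=1.96):
    """Approximate power of a two-sample test to detect effect size d."""
    return norm_cdf(d * sqrt(n_per_arm / 2.0) - z_crit)

# Power for a moderate effect (d = 0.5) at various per-arm sample sizes:
for n in (15, 25, 35, 64):
    print(n, round(approx_power(0.5, n), 2))
```

Below 35 patients per arm, power for a moderate effect sits at or below a coin flip; roughly 64 per arm are needed to reach the conventional 80%. A "significant" result from a far smaller trial is therefore more likely to reflect chance or flexible analysis than a real, moderate effect.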

Too much flexibility in what researchers report in publications. Many trials of ACT involve researchers administering a whole battery of outcome measures and then emphasizing those that make ACT look best, either downplaying or not mentioning the rest. Similarly, many trials of ACT deemphasize whether the time × treatment interaction is significant, and simply ignore it if it is not, focusing instead on the within-group differences. I know, we’re getting a bit technical here. But another way of saying this is that many trials of ACT give researchers too much latitude in choosing what variables to report and what statistics are used to evaluate them.

Under similar circumstances, researchers showed that listening to the Beatles song When I’m Sixty-Four left undergraduates 18 months younger than when they listened to the song Kalimba. Of course, the researchers knew damn well that the Beatles song didn’t have this effect, but they indicated they were doing what lots of investigators do to get significant results, what they call p-hacking.

Many randomized trials of ACT are conducted with the same researcher flexibility that would allow a demonstration that listening to a Beatles song drops the age of undergraduates 18 months.
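How much does that flexibility buy? A toy simulation, entirely my own sketch and not drawn from any ACT trial, makes the point: run a null-effect "trial" with several outcome measures and declare success if any one of them happens to reach p < .05.

```python
# Simulating outcome-measure cherry-picking when the true effect is zero.
import random
import statistics
from math import sqrt, erf

def p_two_sample(a, b):
    """Crude normal-approximation p-value for a two-sample mean difference."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def trial(rng, n=30, n_outcomes=5):
    """Null trial with 5 outcome measures; 'success' = any p < .05."""
    return any(
        p_two_sample([rng.gauss(0, 1) for _ in range(n)],
                     [rng.gauss(0, 1) for _ in range(n)]) < 0.05
        for _ in range(n_outcomes)
    )

rng = random.Random(1)
false_positive_rate = sum(trial(rng) for _ in range(2000)) / 2000
print(false_positive_rate)  # well above the nominal 5%
```

With five outcomes and freedom to report whichever "works," the chance of a spurious positive finding climbs from the nominal 5% to over 20%, which is roughly 1 − 0.95⁵. Preregistering a single primary outcome is precisely what removes this latitude.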

Many of the problems with ACT research could be avoided if researchers were required to publish ahead of time their primary outcome variables and plans for analyzing them. Such preregistration is increasingly recognized as a best research practice, including by NIMH. There is no excuse not to do it.

My take away message?

ACT gurus have been able to dodge the need to develop quality data to support their claims that their treatment is effective (and their occasional claim that it is more effective than other approaches). A number of them are university-based academics and have ample resources to develop better quality evidence.

Workshop and weekend retreat attendees are convinced that ACT works on the strength of experiential learning and a lot of theoretical mumbo jumbo.

But the ACT promoters also make a lot of dodgy claims that there is strong evidence that the specific ingredients of ACT, techniques and values, account for the power of ACT. But some of the ACT gurus, Steve Hayes and Kelly Wilson at least, are academics and should limit their claims of being ‘evidence-based’ to what is supported by strong, quality evidence. They don’t. I think they are being irresponsible in throwing ‘evidence-based’ in with all the rest of their promotional claims.

What should I do as an evidence-based skeptic wanting to improve the conversation about ACT?

Earlier in my career, I spent six years in live supervision with some world-renowned therapists behind the one-way mirror, including John Weakland, Paul Watzlawick, and Dick Fisch. I gave workshops worldwide on how to do brief strategic therapies with individuals, couples, and families. I chose not to continue because (1) I didn’t like the pressure for drama and exciting interventions when I interviewed patients in front of large groups; (2) even when there was a logic and appearance of effectiveness to what I did, I didn’t believe it could be manualized; and (3) my group didn’t have the resources to conduct proper outcome studies.

But I got it that workshop attendees like drama, exciting interventions, and emotional experiences. They go to trainings expecting to be entertained, as much as informed. I don’t think I can change that.

Many therapists have not had the training to evaluate claims about research, even if they accept that being backed by research findings is important. They depend on presenters to tell them about research and tend to trust what they say. Even therapists who know something about research tend to lose critical judgment when caught up in the emotionality provided by some training experiences. Experiential learning can be powerful, even when it is used to promote interventions that are not supported by evidence.

I can’t change the training of therapists nor the culture of workshops and training experiences. But I can reach out to therapists who want to develop skills to evaluate research for themselves. I think some of the things that I point out in this blog post are quite teachable as things to look for.

I hope I can connect with therapists who want to become citizen scientists who are skeptical about what they hear and want to become equipped to think for themselves and look for effective resources when they don’t know how to interpret claims.

This is certainly not all therapists and may only be a minority. But such opinion leaders can be champions for the others in facilitating intelligent discussions of research concerning the effectiveness of psychotherapies. And they can prepare their colleagues to appreciate that most change in psychotherapy is not as dramatic or immediate as seen in therapy workshops.

I will soon be offering e-books providing skeptical looks at positive psychology and mindfulness, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.


Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.



The press release attached at the bottom of this post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment with a quack explanation delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by practitioners who are unqualified, otherwise untrained, and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements from osteopathy, life coaching, and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American with decades of experience serving on Committees for the Protection of Human Subjects and on Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves loss of the opportunity to obtain less risky treatment, or simply to avoid the inconvenience and burden of a treatment for which there is no scientific basis to expect it would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not indicated in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment and rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability of receiving the treatment for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment for free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement; those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This serves to stack the deck in any evaluation of the Lightning Process and inflate differences from the patients who were not assigned to that group.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would benefit financially from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and coerces them to fake that they are getting well [7]. Such coercion can interfere with patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patient activists and their family members who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, patients have been joined in their concerns by scientists and clinicians who are not patients.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?



[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for Neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment (an albuterol inhaler) for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment obtained with objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat, who provided me with an advance copy of the press release from the Science Media Centre.