Stop using the Adverse Childhood Experiences Checklist to make claims about trauma causing physical and mental health problems

Scores on the adverse childhood experiences (ACE) checklist (or ACC) are widely used in making claims about the causal influence of childhood trauma on mental and physical health problems. Does anyone making these claims bother to look at how the checklist is put together and consider what a summary score might mean?


In this issue of Mind the Brain, we begin taking a skeptical look at the ACE checklist. We ponder some of the assumptions implicit in what items were included and how summary scores of the number of items checked are interpreted. Readers will be left with profound doubts that the ACE is suitable for making claims about trauma.

This blog will eventually be followed by another that presents the case that scores on the ACE checklist do not represent a risk factor for health problems, only a relatively uninformative risk marker. In contrast to potentially modifiable risk factors, risk markers are best interpreted as calling attention to the influence of some combination of other risk factors, many as yet unspecified, but undoubtedly of an entirely different nature than what is being studied. What?!! You will have to stay tuned, but I’ll give some hints about what I am talking about in the current blog post.

Summary of key points

 The ACE checklist is a collection of very diverse and ambiguous items that cannot be presumed to necessarily represent traumatic experiences.

Items variously

  • Represent circumstances that are not typically traumatic.
  • Reflect the respondent’s past or current psychopathology.
  • Treat as equivalent, and as traumatic, vastly different experiences, many neutral and some positive.
  • Reflect a personal vulnerability due to familial transmission of psychopathology, direct or indirect, rather than simply an exposure to events.
  • Ignore crucial contextual information, including timing of events.

There is reason not to assume that higher summed scores for the ACE represent more exposure to trauma than lower scores.

Are professionals who misinterpret the ACE checklist just careless, or are they ideologues selectively identifying “evidence” for positions that don’t depend on evidence at all?

Witness claims based on research with the ACE that migraines are caused by sexual abuse and that psychotherapy addressing that abuse should be first-line treatment. Or claims that childhood trauma is as strong a risk factor for psychosis and schizophrenia as smoking is for lung cancer [*] and so psychotherapy is equivalent to medication in its effects. Or claims that myalgic encephalomyelitis, formerly known as chronic fatigue syndrome, is caused by childhood trauma and that psychological treatments can be recommended as the treatment of choice. These claims share a speculative, vague, neo-cryptic pseudopsychoanalytic set of assumptions that is seldom articulated or explicitly confronted with evidence. Authors typically leap from claims about childhood trauma causing later problems to non sequitur claims about the efficacy of psychological intervention in treating these problems by addressing trauma. These claims about the efficacy of trauma-focused treatment are not borne out when one actually examines effects observed in randomized controlled trials.

Rather than attempting to address a provocative question about investigator motivation without a ready way of answering it, I will show that most claims about trauma causing mental and physical health problems are, at best, based on very weak evidence if they depend solely on the ACE checklist.

I will leave it for my readers to decide whether some authors who make such a fuss about the ACE have bothered to look at the instrument, or care that it is so inappropriate for the purposes to which they put it.

The ACE is reproduced at the bottom of this post and it is a good idea to compare what I’m saying about it to the actual checklist.

What “science” is behind such speculations?

The ACE was originally intended for educational purposes, not as a scientific instrument. Perhaps that explains its gross deficiencies as a key measure of psychological and epidemiological constructs.

The ACE checklist is a collection of very different and ambiguous items that cannot be presumed to represent traumatic experiences.

The ACE consists of ten dichotomous items for which the respondent is asked to indicate no/yes whether an experience occurred before the age of 18. However, for six of the ten items, the respondent is given further choices that often differ greatly in the kind of experience to which the items refer. Scoring of the instrument does not take into account which of these experiences is the basis of a response. For example,

5. Did you often feel that … You didn’t have enough to eat, had to wear dirty clothes, and had no one to protect you? or

Your parents were too drunk or high to take care of you or take you to the doctor if you needed it?

Yes   No     If yes enter 1     ________

This item treats some very different circumstances as equivalent. The first half is complex: it largely covers the experience of living in poverty, but combines that with “having no one to protect you.” In contrast, the second half refers to substance abuse on the part of parents. In neither case is there any room for interpreting what mitigating circumstances in the respondent’s life might have influenced the effects of exposure. Presumably, the timing of this exposure would be important. If the exposure occurred only at the end of the 18-year period covered by the checklist, its effects could be mitigated by other individual and social resources the respondent had.

These are single items that are added together in a summary score. We have to ask whether there is an equivalency between the two halves of an item that will be treated as the same. This will be an accumulating concern as we go through the ten-item questionnaire.

The items vary greatly in the likelihood that they refer to an experience that was traumatic. Seldom do any of the researchers who use the ACE explain what they mean by trauma. If they did, I doubt that they could make a good argument that endorsing many of these items would indicate that a respondent had faced a trauma.

From the third edition of the American Psychiatric Association Diagnostic and Statistical Manual (DSM-III) onward to DSM-5, the assumption has been that a traumatic event is a catastrophic stressor outside the range of usual human experience.

With that criterion in mind, we have to ask whether items are likely to represent a traumatic experience for most people. In answering this question, we also have to ask how willing we are to consider a particular item equivalent to other items in arriving at an overall score reflecting exposure to trauma before age 18. Yet, if summary scores are to be meaningful, the assumption has to be made that items contribute equally if they are endorsed.

6. Were your parents ever separated or divorced?

Yes   No     If yes enter 1     ________

The item refers to a highly prevalent and complex event, the nature and consequences of which are likely to unfold over time. Importantly, we need a sense of context to judge whether the event was traumatic and, if so, how severe. Presumably, it would matter greatly when, across the 18-year span, the event occurred. No timing or other information is asked of the respondent, only whether or not this event occurred. Neither the respondent nor anyone interpreting a score on the inventory has further information as to what is meant.

Other problems with ambiguous items.

Questions can be raised about the validity of all the individual items and the wisdom of combining them as equivalent in creating a summary score.

Items 1 and 2: These items raise questions about what role the respondent played in eliciting the event.

 Did an event simply befall a respondent? Was it related to some pre-existing characteristic of the respondent? Or did the respondent have an active role in generating the event?

Did a parent or other adult member of the household often…

Swear at you, insult you, put you down, or humiliate you?


Act in a way that made you afraid that you might be physically hurt?

Yes   No     If yes enter 1     ________


Did a parent or other adult in the household often …

Push, grab, slap, or throw something at you?


Ever hit you so hard that you had marks or were injured?

Yes   No     If yes enter 1     ________

Here, as throughout the rest of the checklist, questions can be raised about whether these items refer simply to an environmental exposure in epidemiological terms, say, equivalent to asbestos or tobacco. We don’t know the frequency, intensity, or context of the behavior in question, all of which may be crucial in evaluating whether a trauma occurred. For instance, it matters greatly whether the behavior happened frequently when the respondent was a toddler or was limited to a struggle that occurred when the respondent was a teen, high on drugs, attempting to take the car keys and go for an after-midnight drive.

Like most of the rest of the questionnaire, there is the question of timing.

Item 3: There is so much ambiguity in endorsements of (ostensible) sexual abuse. Maybe it was a positive, liberating experience.

This is a crucial item, and discussions of the ACE often assume that, when it is endorsed, it represents a traumatic experience:

Did an adult or person at least 5 years older than you ever…

Touch or fondle you or have you touch their body in a sexual way?


Try to or actually have oral, anal, or vaginal sex with you?

Note that this is a complex item for which endorsement could be based on a single instance of a person at least 5 years older touching or fondling the respondent. What if the presumed “perpetrator” is the 20-year-old boyfriend or girlfriend of a 14-year-old?

Are we willing to treat as equivalent “touch or fondle you” and “having anal sex” in all instances?

Arguably, the event construed as trauma could actually be quite positive, as in the respondent forming a secure attachment with a somewhat older, but nonetheless appropriate, partner. All that is unconventional is not traumatic. What if the respondent and alleged “perpetrator” were in a deeply intimate relationship or already married?

The research that attempts to link endorsement of such an item to lasting mental and physical health problems is remarkably contradictory and inconsistent.

Item 4: Does this item reflect the respondent’s serious clinical depression or other mental disorder before age 18, or currently, when the checklist is being completed?

Did you often feel that …  No one in your family loved you or thought you were important or special?    or

Your family didn’t look out for each other, feel close to each other, or support each other?

Yes   No     If yes enter 1     ________

As elsewhere in the checklist, there is no place for the respondent or someone interpreting a “yes” response for taking into account timing or contextual factors that might mitigate or compound effects of this “exposure.”

Item 5: Is this a  traumatic exposure or an enduring set of circumstances conferring multiple known risks to mental and physical health?

Did you often feel that …

You didn’t have enough to eat, had to wear dirty clothes, and had no one to protect you?


Your parents were too drunk or high to take care of you or take you to the doctor if you needed it?

Yes   No     If yes enter 1     ________

This item has already been discussed above, but it is worth revisiting in terms of whether particular items refer, directly or indirectly, to enduring sets of circumstances that pose their own threats. The relevant question is whether items that ostensibly represent “traumatic events” and risk for subsequent problems are not risk factors but only risk indicators, and not particularly informative ones.

Item 7: Could an ostensibly traumatic exposure actually be no exposure at all?

Was your mother or stepmother:

Often pushed, grabbed, slapped, or had something thrown at her?    or

Sometimes or often kicked, bitten, hit with a fist, or hit with something hard?    or

Ever repeatedly hit over at least a few minutes or threatened with a gun or knife?

Yes   No     If yes enter 1     ________

Like item 3, which refers to ostensible sexual abuse, this item seems to be one of the least ambiguous in terms of representing exposure to risk. But does it? We don’t know the timing, duration, or context. For instance, the mother might no longer be in the home, and the respondent might not have known what happened at the time. There is even the possibility that the respondent was the “perpetrator” of such violence against the mother.

Items 8 and 9: Are these traumatic exposures or indications of familial transmission of psychopathology?

Did you live with anyone who was a problem drinker or alcoholic or who used street drugs?

Yes   No

If yes enter 1     ________


Was a household member depressed or mentally ill or did a household member attempt suicide?    Yes   No     If yes enter 1     ________

These items are highly ambiguous. They don’t take into consideration whether the person was a biological relative, a parent, a sibling, or someone not related at all. They don’t take into account timing. There may not even have been any direct exposure to the substance misuse or the attempted suicide; the respondent may only later have learned of something that was closeted.

Item 10: Traumatic exposure or relief from exposure?

Did a household member go to prison?

Yes   No

If yes enter 1     ________

The implications of endorsing this item depend greatly on who the household member was and the circumstances of their going to prison.

There may have been a familial relationship with this person, but it could have been an abusive stepparent or stepsibling, with the incarceration representing lasting relief from an oppressive situation. Or the person who became incarcerated may not have been an immediate family member but someone more transient, perhaps someone who was just renting a room or given a place to stay. We just don’t know.

Does adding up all these endorsements in a summary score clarify or confuse further?

Now add up your “Yes” answers:   _______   This is your ACE Score

 It would be useful to briefly review the assumptions involved in summing across items of a checklist and entering the summary score as a continuous variable in statistical analyses.

Classical test theory recognizes that individual items may imperfectly reflect the underlying construct, in this case traumatic exposure. However, in constructing a sum, the expectation is that the imperfections, or errors of measurement, in particular items cancel each other out. The summed score becomes a purer representation of the underlying construct than any of the original items. Thus, the summary score will be more reliable and valid than any of the individual items would be.

There are a number of problems in applying this assumption to a summary ACE score. The items are quite heterogeneous, i.e., they vary wildly in whether they are likely to represent a traumatic exposure and, if so, in the severity of that exposure. More importantly, there is a huge amount of variation in what these brief items would represent for particular individuals in the contexts in which they found themselves during the first 18 years of their lives. Undoubtedly, most endorsements of these items would represent false positives if we hold ourselves to any strict definition of trauma. If we don’t, we risk equating merely normative experiences that may have had neutral or even positive effects on the respondent with serious exposures to traumatic events with lasting consequences.

We are not in a position to know whether a score of five or even eight necessarily represents more traumatic exposure than a score of one.
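To make this concrete, here is a minimal sketch of how the scoring works. The respondents and the shorthand item labels are hypothetical, not the instrument’s actual wording; the point is simply that summing ten yes/no answers discards exactly the information at issue, namely which items were endorsed.

```python
# Shorthand labels for the ten ACE items (paraphrased, not official wording).
ITEMS = [
    "emotional_abuse", "physical_abuse", "sexual_abuse",
    "emotional_neglect", "physical_neglect", "parental_separation",
    "mother_treated_violently", "household_substance_abuse",
    "household_mental_illness", "household_member_imprisoned",
]

def ace_score(responses: dict) -> int:
    """Sum of yes (1) / no (0) answers across the ten items."""
    return sum(responses.get(item, 0) for item in ITEMS)

# Hypothetical respondent A: parents divorced, a household member imprisoned,
# a relative with depression -- circumstances that need not be traumatic at all.
a = {"parental_separation": 1, "household_member_imprisoned": 1,
     "household_mental_illness": 1}

# Hypothetical respondent B: repeated physical and sexual abuse plus neglect.
b = {"physical_abuse": 1, "sexual_abuse": 1, "physical_neglect": 1}

# Identical scores, radically different exposures -- the sum cannot tell them apart.
assert ace_score(a) == ace_score(b) == 3
```

Once the two profiles are collapsed to the same integer, no downstream statistical analysis can recover the difference between them.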

Moreover, there is important empirical research on the clustering of events. We certainly cannot consider them random and unrelated. One classic study found:

In our data, total CCA was related to depressive symptoms, drug use, and antisocial behavior in a quadratic manner. Without further elucidation, this higher order relationship could have been interpreted as support for a sensitization process in which the long-term impact of each additional adversity on mental health compounds as childhood adversity accumulates. However, further analysis revealed that this acceleration effect was an artifact of the confounding of high cumulative adversity scores with the experience of more severe events. Thus, respondents with higher total CCA had disproportionately poorer emotional and behavioral functioning because of both the number and severity of the adversities they were exposed to, not the cumulative number of different types of adversities experienced.


Because low-impact adversities did not present a cumulative hazard to young adult mental health, they functioned as suppressor events in the total sum score, consistent with Turner and Wheaton’s (1997) expectation. Their inclusion increased the “noise” in the score and greatly watered down the influence of high-impact events. Thus, in addition to decreasing efficiency, total scores may seriously underestimate the cumulative effects of severe forms of childhood adversity, such as abuse and serious neglect.

But what if many or most of the high scores in a particular sample represent only a clustering of low- or no-impact adversities?
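A toy simulation can illustrate the suppression point made in the quoted passage. Everything here is an invented assumption for illustration (one high-impact item at 10% prevalence that actually drives the outcome, and nine low-impact items at 30% prevalence that contribute nothing), not the study’s data:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

N = 5000
# One high-impact adversity (10% prevalence) ...
severe = [int(random.random() < 0.10) for _ in range(N)]
# ... and nine low-impact items (30% prevalence each) unrelated to the outcome.
low_items = [[int(random.random() < 0.30) for _ in range(N)] for _ in range(9)]

# The outcome depends only on the severe item, plus noise.
outcome = [2.0 * s + random.gauss(0, 1) for s in severe]

# The checklist-style total lumps all ten items together.
total = [severe[i] + sum(item[i] for item in low_items) for i in range(N)]

r_severe = corr(severe, outcome)
r_total = corr(total, outcome)
assert r_severe > r_total  # the summed score carries a much weaker signal
```

Under these assumptions, the correlation between the severe item alone and the outcome is substantial, while the correlation for the ten-item total is heavily watered down, exactly the “noise” problem Turner and Wheaton anticipated.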

Another large-sample, key study cautioned:

Significant effects of parental separation/divorce in predicting subsequent mood disorders and addictive disorders are powerfully affected by whether or not there was parental violence and psychopathology in the household prior to the break-up and whether exposure to these adversities was reduced as a result of the separation (Kessler et al. 1997a). There are some situations – such as one in which the father was a violent alcoholic – where our data suggest that parental divorce and subsequent removal of the respondent from exposure to the father might actually be associated with a significant improvement in the respondent’s subsequent disorder risk profile, a possibility that has important social policy implications.



*Richard Bentall commonly interprets summed ACE scores in peer-reviewed articles as having a traditional dose-response association with mental health outcomes, and therefore as representing a modifiable causal factor in psychosis. In books and on social media, his claims become simply absurd.


I don’t think his interpretations withstand scrutiny of the items and of what a summed score might conceivably represent.

Preorders are being accepted for e-books providing skeptical looks at mindfulness and positive psychology, and arming citizen scientists with critical thinking skills.

I will also be offering scientific writing courses on the web as I have been doing face-to-face for almost a decade. I want to give researchers the tools to get into the journals where their work will get the attention it deserves.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites.



The SMILE Trial of the Lightning Process for Children with CFS: Results too good to be true?

The SMILE trial holds many anomalies and leaves us with more questions than answers.


A guest post by Dr. Keith Geraghty

Honorary Research Fellow at the University of Manchester, Centre for Primary Care, Division of Population Health and Health Services Research

ASA ruling left some awkward moments in Phil Parker’s videos promoting his Lightning Process.

The Advertising Standards Authority previously ruled that the Lightning Process (LP) should not be advertised as a treatment for CFS/ME. So how, then, did LP end up being tested as a treatment in a clinical trial involving adolescents with CFS/ME? Publication of the trial sparked controversy after it was claimed that LP, in addition to specialist medical care, out-performed specialist medical care alone. This blog attempts to shed light on just how a quack alternative online teaching programme ended up in a costly clinical trial, and discusses how the SMILE trial exemplifies all that is wrong with contemporary psycho-behavioural trials, which are clearly vulnerable to bias and spin.

The SMILE trial compared LP plus specialist medical care (SMC) to SMC alone (commonly a mix of cognitive behavioural therapy and graded exercise therapy). LP is a trademarked training programme created by Phil Parker out of osteopathy, life coaching and neuro-linguistic programming. It costs over £600 and, after assessment and telephone briefings, clients attend group sessions over three days. While there is much secrecy about what exactly these sessions involve, a cursory search online shows that past clients were told to ‘block out all negative thoughts’ and to consider themselves well, not sick. A person with an illness is said to be ‘doing illness’ (LP spells doing as duing, to signify that LP means more than just doing). LP appears to attempt to get a participant to ‘stop doing illness’ by blocking negative thoughts and making positive affirmations.

Leading psychologists have raised concerns. Professor James Coyne called LP “quackery” and said neuro-linguistic programming “…has been thoroughly debunked for its pseudoscience”. In an expert reaction to the SMILE trial for the Science Media Centre, Professor Dorothy Bishop of Oxford University stated: “the intervention that was assessed is commercial and associated with a number of warning signs. The Lightning Process appears based on neuro-linguistic programming, which, despite its scientific-sounding name, has long been recognised as pseudoscience”.

The first and most obvious question is why the SMILE trial took place at all. Trial lead Professor Esther Crawley, who runs an NHS paediatric CFS/ME clinic, says she undertook the trial after many of her patients and their parents asked about LP. Patients with CFS/ME often report a lack of support from doctors and health care providers, and some turn to the internet seeking help; some are drawn to try alternative approaches, such as LP. But is that justification enough for spending over £160,000 on testing LP on children? I think not. Should we test every quack approach peddled online: herbs, crystals, spiritual healing – particularly when funding in CFS/ME research is currently so limited? There must also be compelling scientific plausibility to justify a trial. Simply wanting to see if something helps does not constitute adequate justification.

The SMILE trial has a fundamental design flaw. The trial compared specialist medical care alone (SMC) against SMC plus LP (SMC+LP). To the novice observer this may appear acceptable, but clinical trials are used to test item x against item y. For example, in trying to see which drug works better, drug A or drug B, you would not give drug A to one group and both drugs A and B to another group – yet this is exactly what happened in SMILE. In seeking to test LP, Prof. Crawley gave LP and SMC together, rendering any findings from this trial arm pretty meaningless. The proper controls were missing. In addition, a trial of this magnitude would normally have a third arm, a do-nothing or usual-care group, or another talk-therapy control – yet such controls were absent.

Next we turn to the trial’s primary outcome measures. These were subjective self-reports of changes in physical function (using the SF-36). Secondary outcomes were quality of life, anxiety and school attendance. These outcomes were assessed at 6 months, with a follow-up at 12 months. It is reported that SMC+LP outperformed SMC alone on these measures at 6 months, and that the difference was maintained at 12 months. However, there is no way to determine whether any claimed improvements came from LP itself, given that LP was mixed with SMC. We could assume that LP+SMC meant more support, positive expectations and increased contact time. Here we see how farcical SMILE is as a trial. We have one group getting two treatments (possibly double the help) and one group getting one treatment (possibly half the help).

Of particular concern is how few of the available patients enrolled in and completed the trial: 637 children aged 12-18 attended screening or an appointment at a specialist CFS/ME clinic; fewer than half (310) were deemed eligible; just 136 consented to receiving trial information, and then only 100 were randomised (less than a third of the eligible group). 49 had SMC and 51 had SMC+LP. Overall, 207 patients either declined to participate or were not sufficiently interested to return the consent form. Were patients self-selecting? Were those less likely to respond to nonspecific factors choosing not to participate, leaving a group interested in LP, given that Prof. Crawley said many patients asked about it?

As the trial progressed, patients dropped out: of the 51 participants allocated to SMC+LP, only 39 received full SMC+LP. At the 6-month assessment, just 38 of the 48 allocated to SMC and 46 of the 51 in SMC+LP are fully recorded. At 12 months there were further losses to follow-up in both cohorts: 14% in SMC+LP and 24% in SMC. The reasons for participant loss are not fully clear, though the paper reports 5 adverse events (3 in the SMC+LP arm). It is worth noting that physical function at 6 months deteriorated in 9 participants (roughly 10% overall), 8 of them in the SMC arm, with 5 participants having a fall of ≤10 on the SF-36 physical function subscale (deemed not clinically important). Again, questions are raised as to whether some degree of self-selection took place. The fact that 3 of the participants assigned to SMC alone appear to have received LP reflects possible contamination of research cohorts that are meant to be kept apart.

 Seven problems stand out in SMILE:

  1. The use of the SF-36 physical function test was questionable. This self-report instrument is not designed or adequately validated for use in children.
  2. Many of the participants appear to have had symptoms of anxiety and depression at the start of the trial. SMILE defined anxiety and depression as a score of ≥12 out of 22 on the self-report HADS. Usually a score of 8 or above is considered positive for mild anxiety and depression, and above 12 for moderate anxiety and depression[1]. The mean HADS score at trial entry was 9.6 (meaning that, using standard cut-offs, most participants met criteria for anxiety and depression). On the Spence Anxiety Scale (SCAS) the average entry score was 35, with above 33 indicative of anxiety in this age group. Such mild to moderate elevations in depression and anxiety symptoms are very responsive to nonspecific support.
  3. There is an anomaly in the data on improvement: on the physical function test, the average baseline level of the children at entry into the trial was 54.5 (n=99), considered severely physically impaired. Only 52.5% of participants had been able to attend at least 3 days of school in the week prior to their entry into the study. Yet those assigned to SMC+LP were well enough to attend 3 consecutive days of sessions lasting 4 hours. The reports of severe physical disablement do not match the capabilities of those who participated in the course. Were the children’s self-reported poor physical abilities exaggerated to justify enrolment in the trial? Were the children’s elevated depression and anxiety symptoms responsive to the nonspecific elements and extra contact time of being assigned to LP plus standard care?
  4. If the subjective self-report is accepted as a recovery criterion, then LP, just 12 hours of talk therapy added to SMC, would cure the majority of children with CFS. Such an effect would be astonishing, if true. In randomized controlled trials in adults with CFS/ME, such dramatic restoration of physical function (a wholesale return to near normal) is simply not seen. The SMILE trial is clearly unbelievable.
  5. SMILE’s reliance on the broad NICE criteria means there is a clear risk that patients were included in the trial who would not have met stricter definitions of the illness. There is growing concern that loose entry criteria in clinical trials in ME/CFS allow enrolment of many participants who do not in fact have ME/CFS. A detailed study of CFS prevalence found many children are wrongly diagnosed with CFS, when they may just be suffering from general fatigue and/or mental health complaints (Jones et al., 2004). SMILE uses NICE guidelines to diagnose CFS: fatigue must be present for at least 3 months with one or more of four other symptoms, which can be as general as sleep disturbance[2]. In contrast, Jones et al. showed that using the Centers for Disease Control criteria of at least four specific symptoms alongside detailed clinical examination, many children believed to have CFS are instead diagnosed with other, exclusionary disorders, often general fatigue, mental health complaints, drug and alcohol abuse or eating disorders (that are often not readily disclosed to parents or doctors)[3].
  6. LP involves attempting to coerce clients into thinking that they have control over their symptoms and to block out symptoms. This alone would distort any response by a participant in a follow-on questionnaire about symptoms.
  7. LP was delivered by people from the Lightning Process Company. Phil Parker and his employees held a clear financial interest in a positive outcome in SMILE. Such an obvious conflict of interest is hard to disentangle and totally nullifies any outcomes from this trial.
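The cutoff issue in point 2 above comes down to simple arithmetic; a sketch (with `hads_caseness` as a hypothetical helper, and the cutoff and mean values as reported above) makes the contrast explicit:

```python
# Hypothetical helper: does a HADS score meet a chosen caseness cutoff?
def hads_caseness(score: float, cutoff: int) -> bool:
    """Return True if the score meets or exceeds the cutoff."""
    return score >= cutoff

mean_entry_score = 9.6  # reported mean HADS score at trial entry

# Under the conventional cutoff (>= 8), the average entrant counts as a case;
# under SMILE's stricter definition (>= 12), the same score does not.
assert hads_caseness(mean_entry_score, cutoff=8)
assert not hads_caseness(mean_entry_score, cutoff=12)
```

That is, the trial's choice of cutoff determines whether the typical participant is counted as having anxiety/depression at entry.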

Final Thoughts

The SMILE trial holds many anomalies and leaves us with more questions than answers.

It is not clear whether the children enrolled in the trial, diagnosed with CFS using NICE criteria, might have been deemed non-CFS using more stringent clinical screening (e.g. CDC or IOM criteria).

There is no way of determining whether any effect following SMC+LP was anything more than the result of non-specific factors, psychological tricks and persuasion.

The fact that LP+SMC appears to have cured the majority of participants with as little as 12 hours of talk therapy is a big flashing red light that this trial is fundamentally flawed.

There is a very real danger in promoting LP as a treatment for CFS/ME: the UK ME Association conducted a survey of 4,217 members and found that 20% of those who tried LP reported feeling worse (7.9% slightly worse, 12.9% much worse). SMILE cannot be, and should not be, used to justify LP as a treatment for CFS/ME.

The Lightning Process has no scientific credibility and this trial highlights a fundamental flaw in contemporary clinical trials: they are susceptible to suggestion, bias and spin. The SMILE trial appears to draw paediatric CFS/ME clinical care for children into a swamp of pseudoscience and mysticism. This is a clear step backward. There is little to smile about after reviewing the SMILE trial.

Dr. Geraghty is currently an Honorary Research Fellow within the Centre for Primary Care, Division of Population Health and Health Services Research at the University of Manchester. He previously worked as a research associate at Cardiff University and Imperial College London. He left a career in clinical medicine after becoming ill with ME/CFS. The main themes of his work are doctor-patient relationships, medically unexplained symptoms, quality and safety in health care delivery, physician well-being and evidence-based medicine. He has a special interest in medically unexplained symptoms (MUS), and Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. 

Although only recently published, his paper ‘PACE-Gate’: When clinical trial evidence meets open data access is already ranked #2 by altmetrics out of 1,350 papers in the Journal of Health Psychology.

A recent Times article cited Dr Geraghty on reasons why NICE needs to update its recommendations for ME/CFS.

Special thanks to John Peters and David Marks for their feedback.

Coyne, J. (2017) Mind the Brain Blog.
Dorothy Bishop, Expert Commentary to the SMC (2017).

1. Crawley, E., et al., Chronic disabling fatigue at age 13 and association with family adversity. Pediatrics, 2012. 130(1): p. e71-e79.
2. Crawley, E.M., et al., Clinical and cost-effectiveness of the Lightning Process in addition to specialist medical care for paediatric chronic fatigue syndrome: randomised controlled trial. Archives of Disease in Childhood, 2017.
3. Jones, J.F., et al., Chronic fatigue syndrome and other fatiguing illnesses in adolescents: a population-based study. Journal of Adolescent Health, 2004. 35(1): p. 34-40.

Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

The press release attached at the bottom of this post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment, with a quack explanation, delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email

This trial involved quackery delivered by unqualified practitioners, untrained in recognizing or responding to harm to patients.

The UK Advertising Standards Authority had previously ruled that Lightning Process could not be advertised as a treatment. [ 1 ]

The Lightning Process is billed as mixing elements of osteopathy, life coaching and neuro-linguistic programming. That is far from a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American with decades of experience serving on Committees for the Protection of Human Subjects and on Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves loss of opportunity to obtain less risky treatment or simply not endure the inconvenience and burden of a treatment for which there is no scientific basis to expect would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not indicated in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to rule out placebo effects when evaluating the efficacy of a treatment; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial offered a 50% probability of receiving the treatment for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment for free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement; those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This stacks the deck in any evaluation of the Lightning Process and inflates differences from the patients who did not get into this group.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would benefit financially from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well [7]. Such coercion can interfere with the patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patient and family-member activists who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as being a disturbed minority of patients and patients’ family members. Smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, the patients have been joined by non-patient scientists and clinicians in their concerns.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing


[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for Neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment on objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat, who provided me with an advance copy of the press release from the Science Media Centre.

Danish RCT of cognitive behavior therapy for whatever ails your physician about you

I was asked by a Danish journalist to examine a randomized controlled trial (RCT) of cognitive behavior therapy (CBT) for functional somatic symptoms. I had not previously given the study a close look.

I was dismayed by how highly problematic the study was in so many ways.

I doubted that the results of the study showed any benefits to the patients or have any relevance to healthcare.

I then searched and found the website for the senior author’s clinical offerings.  I suspected that the study was a mere experimercial or marketing effort of the services he offered.

Overall, I think what I found hiding in plain sight has broader relevance to scrutinizing other studies claiming to evaluate the efficacy of CBT for what are primarily physical illnesses, not psychiatric disorders. Look at the other RCTs. I am confident you will find similar problems. But then there is the bigger picture…

[A controversial assessment ahead? You can stop here and read the full text of the RCT and its trial registration before continuing with my analysis.]

Schröder A, Rehfeld E, Ørnbøl E, Sharpe M, Licht RW, Fink P. Cognitive–behavioural group treatment for a range of functional somatic syndromes: randomised trial. The British Journal of Psychiatry. 2012.

A summary overview of what I found:

 The RCT:

  • Was unblinded to patients, interventionists, and to the physicians continuing to provide routine care.
  • Had a grossly unmatched, inadequate control/comparison group that leads to any benefit from nonspecific (placebo) factors in the trial counting toward the estimated efficacy of the intervention.
  • Relied on subjective self-report measures for primary outcomes.
  • With such a familiar trio of design flaws, even an inert homeopathic treatment would be found effective, if it were provided with the same positive expectations and support as the CBT in this RCT. [This may seem a flippant comment that reflects on my credibility, not the study. But please keep reading to my detailed analysis where I back it up.]
  • The study showed an inexplicably high rate of deterioration in both the treatment and control groups. Apparent improvement in the treatment group might only reflect less deterioration than in the control group.
  • The study is focused on unvalidated psychiatric diagnoses being applied to patients with multiple somatic complaints, some of whom may not yet have a medical diagnosis, but most clearly had confirmed physical illnesses.

But wait, there is more!

  • It’s not CBT that was evaluated, but a complex multicomponent intervention in which what was called CBT is embedded in a way that its contribution cannot be evaluated.

The “CBT” did not map well onto international understandings of the assumptions and delivery of CBT. The complex intervention included weeks of indoctrinating the patient with an understanding of their physical problems that incorporated simplistic pseudoscience before any CBT was delivered. It focused on goals imposed by a psychiatrist that did not necessarily fit patients’ sense of their most pressing problems and solutions.

And the kicker.

  • The authors switched primary outcomes – reconfiguring the scoring of their subjective self-report measures years into the trial, based on peeking at the results with the original scoring.

The investigators have a website marketing their services. Rather than a quality contribution to the literature, this study can be seen as an experimercial, doomed to bad science and questionable results before the first patient was enrolled. An undeclared conflict of interest in play? There is another serious undeclared conflict of interest for one of the authors.

For the uninformed and gullible, the study handsomely succeeds as an advertisement for the investigators’ services to professionals and patients.

Personally, I would be indignant if a primary care physician tried to refer me or a friend or family member to this trial. In the absence of overwhelming evidence to the contrary, I assume that people around me who complain of physical symptoms have legitimate physical concerns. If they do not yet have a confirmed diagnosis, it serves little purpose to stop the probing and refer them to psychiatrists. This trial operates with an anachronistic Victorian definition of a psychosomatic condition.

But why should we care about a patently badly conducted trial with switched outcomes? Is it only a matter of something being rotten in the state of Denmark? Aside from the general impact on the existing literature concerning CBT for somatic conditions, results of this trial were entered into a Cochrane review of nonpharmacological interventions for medically unexplained symptoms. I previously complained about one of the authors of this RCT also being listed as an author on another Cochrane review protocol. Prior to that, I complained to Cochrane about this author’s larger research group influencing a decision to include switched outcomes in another Cochrane review. A lot of us rightfully depend heavily on the verdict of Cochrane reviews for deciding best evidence. That trust is being put into jeopardy.

Detailed analysis

1. This is an unblinded trial, a particularly weak methodology for examining whether a treatment works.

The letter that alerted physicians to the trial had essentially encouraged them to refer patients they were having difficulty managing.

‘Patients with a long-term illness course due to medically unexplained or functional somatic symptoms who may have received diagnoses like fibromyalgia, chronic fatigue syndrome, whiplash associated disorder, or somatoform disorder.

Patients and the physicians who referred them subsequently got feedback about to which group patients were assigned, either routine care or what was labeled as CBT. This information could have had a strong influence on the outcomes that were reported, particularly for the patients left in routine care.

Patients’ learning that they had not been assigned to the intervention group was undoubtedly disappointing and demoralizing. The information probably did nothing to improve the positive expectations and support available to patients in routine care. This could have had a nocebo effect. The feedback may have contributed to the otherwise inexplicably high rates of subjective deterioration [noted below] reported by patients left in the routine care condition. In contrast, the disclosure that patients had been assigned to the intervention group undoubtedly boosted the morale of both patients and physicians and increased the gratitude of the patients. This would be reflected in responses to the subjective outcome measures.

The gold standard alternative to an unblinded trial is a double-blind, placebo-controlled trial in which neither providers, nor patients, nor even the assessors rating outcomes know to which group particular patients were assigned. Of course, this is difficult to achieve in a psychotherapy trial. Yet a fair alternative is a psychotherapy trial in which patients and those who refer them are blind to the nature of the different treatments, and in which an effort is made to communicate credible positive expectations about the comparison control group.

Conclusion: A lack of blinding seriously biases this study toward finding a positive effect for the intervention, regardless of whether the intervention has any active, effective component.

2. A claim that this is a randomized controlled trial depends on the adequacy of the control offered by the comparison group, enhanced routine care. Just what is being controlled by the comparison? In evaluating a psychological treatment, it’s important that the comparison/control group offers the same frequency and intensity of contact, positive expectations, attention and support. This trial decidedly did not.

There were large differences between the intervention and control conditions in the amount of contact time. Patients assigned to the cognitive therapy condition received an additional nine group sessions of 3.5 hours’ duration with a psychiatrist, plus the option of further consultations. The more than 30 hours of contact time with a psychiatrist should be very attractive to patients who wanted it and could not otherwise obtain it. For some, it undoubtedly represented an opportunity to have someone listen to their complaints of pain and suffering in a way that had not previously happened. This is also more than the intensity of psychotherapy typically offered in clinical trials, which is closer to 10 to 15 sessions of 50 minutes each.

The intervention group thus received substantially more support and contact time, which was delivered with more positive expectations. This wealth of nonspecific factors favoring the intervention group compromises an effort to disentangle the specific effects of any active ingredient in the CBT intervention package. From what has been said so far, the trials’ providing a fair and generalizable evaluation of the CBT intervention is nigh impossible.

Conclusion: This is a methodologically poor choice of control groups with the dice loaded to obtain a positive effect for CBT.

3. The primary outcomes, both as originally scored and after switching, are subjective self-report measures that are highly responsive to nonspecific treatment, to alleviation of mild depressive symptoms, and to demoralization. They are not consistently related to objective changes in functioning. They are particularly problematic when used as outcome measures in an unblinded clinical trial with an inadequate control group.

There have been consistent demonstrations that assigning patients to inert treatments and measuring the outcomes with subjective measures may register improvements that will not correspond to what would be found with objective measures.

For instance, a provocative New England Journal of Medicine study showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment on objective measures.

There have been a number of demonstrations that treatments such as the one offered in the present study to patient populations similar to those in the study produce changes in subjective self-report that are not reflected in objective measures.

Much of the improvement in primary outcomes occurred before the first assessment after baseline and not very much afterwards. The early response is consistent with a placebo response.

The study actually included one largely unnoticed objective measure, utilization of routine care. Presumably if the CBT was effective as claimed, it would have produced a significant reduction in healthcare utilization. After all, isn’t the point of this trial to demonstrate that CBT can reduce health-care utilization associated with (as yet) medically unexplained symptoms? Curiously, utilization of routine care did not differ between groups.

The combination of the choice of subjective outcomes, unblinded nature of the trial, and poorly chosen control group bring together features that are highly likely to produce the appearance of positive effects, without any substantial benefit to the functioning and well-being of the patients.

Conclusion: Evidence for the efficacy of a CBT package for somatic complaints that depends solely on subjective self-report measures is unreliable, and unlikely to generalize to more objective measures of meaningful impact on patients’ lives.

4. We need to take into account the inexplicably high rates of deterioration in both groups, but particularly in the control group receiving enhanced care.

There was unexplained deterioration in 50% of the control group and 25% of the intervention group. Rates of deterioration are given only a one-sentence mention in the article, but deserve much more attention. These rates need to qualify and dampen any generalizable clinical interpretation of other claims about outcomes attributed to the CBT. We need to keep in mind that clinical trials cannot determine how effective treatments are, only how different a treatment is from a control group. An effect claimed for a treatment can therefore largely or entirely come from deterioration in the control group, not from what the treatment offers. The claim of success for CBT probably depends largely on the deterioration in the control group.

One interpretation of this trial is that spending an extraordinary 30 hours with a psychiatrist leads to only half the deterioration experienced with nothing more than routine care. But this begs the question of why such a large proportion of the patients left in routine care are deteriorating. What could possibly be going on?

Conclusion: Unexplained deterioration in the control group may explain apparent effects of the treatment, but both groups are doing badly.
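To make the arithmetic concrete, here is a minimal sketch. Only the two deterioration rates above come from the trial report; the one-point drop on a subjective scale is a made-up assumption for illustration. It shows how a between-group "effect" can emerge even if no patient improves:

```python
# Illustration only: the 50% / 25% deterioration rates are the figures
# reported above; the one-point drop per deteriorating patient is a
# hypothetical scoring assumption, not trial data.

control_deteriorated = 0.50   # proportion deteriorating in enhanced usual care
treated_deteriorated = 0.25   # proportion deteriorating in the CBT package

# Assume nobody improves; each deteriorating patient simply loses one
# point on a subjective outcome scale.
mean_change_control = -1.0 * control_deteriorated   # -0.50
mean_change_treated = -1.0 * treated_deteriorated   # -0.25

# The between-group comparison credits the "treatment" with a benefit,
# even though both groups got worse on average.
difference = mean_change_treated - mean_change_control
print(difference)  # 0.25 "in favour of" CBT, though no one got better
```

The point: a trial's headline between-group difference says nothing about whether the treated group actually improved; in this sketch the entire "effect" is control-group deterioration.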

5. The diagnosis of “functional somatic symptoms” or, as the authors prefer, Severe Bodily Distress Syndromes, is considered by the authors to be a psychiatric diagnosis. It is not accepted as a valid diagnosis internationally. Its validation is limited to work done almost entirely within the author group, which is explicitly labeled as “preliminary.” This biased sample of patients is quite heterogeneous, beyond their physicians’ having difficulty managing them. They have a full range of subjective complaints and documented physical conditions. Many of these patients would not be considered as primarily having a psychiatric disorder internationally, and certainly not within the US, except where they had major depression or an anxiety disorder. Such psychiatric disorders were not exclusion criteria.

Once sent on the pathway to a psychiatric diagnosis by their physicians’ making a referral to the study, patients had to meet additional criteria:

To be eligible for participation individuals had to have a chronic (i.e. of at least 2 years duration) bodily distress syndrome of the severe multi-organ type, which requires functional somatic symptoms from at least three of four bodily systems, and moderate to severe impairment in daily living.

The condition identified in the title of the article is not validated as a psychiatric diagnosis. The two papers to which the authors refer are their own studies (1, 2), drawn from a single sample. The title of one of these papers makes a rather immodest claim:

Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. Journal of Psychosomatic Research. 2010 May 31;68(5):415-26.

In neither the two papers nor the present RCT is there sufficient effort to rule out a physical basis for the complaints qualifying these patients for a psychiatric diagnosis. There is also a lack of follow-up to see if physical diagnoses were later applied.

Citation patterns of these papers strongly suggest the authors have not gotten much traction internationally. The criterion of symptoms from three out of four bodily systems is arbitrary and unvalidated. Many patients with known physical conditions would meet these criteria without any psychiatric diagnosis being warranted.

The authors relate their essentially homegrown diagnosis to the functional somatic syndromes, diagnoses which are themselves subject to serious criticism. See, for instance, the work of Allen Frances MD, who chaired the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM-IV) Task Force. He became a harsh critic of the shortcomings of the next DSM and of the APA’s failure to correct its coverage of functional somatic syndromes.

Mislabeling Medical Illness As Mental Disorder

Unless DSM-5 changes these incredibly over inclusive criteria, it will greatly increase the rates of diagnosis of mental disorders in the medically ill – whether they have established diseases (like diabetes, coronary disease or cancer) or have unexplained medical conditions that so far have presented with somatic symptoms of unclear etiology.


The diagnosis of mental disorder will be based solely on the clinician’s subjective and fallible judgment that the patient’s life has become ‘subsumed’ with health concerns and preoccupations, or that the response to distressing somatic symptoms is ‘excessive’ or ‘disproportionate,’ or that the coping strategies to deal with the symptom are ‘maladaptive’.


These are inherently unreliable and untrustworthy judgments that will open the floodgates to the overdiagnosis of mental disorder and promote the missed diagnosis of medical disorder.

The DSM-5 Task Force refused to adopt the changes proposed by Dr. Frances.

Bad News: DSM 5 Refuses to Correct Somatic Symptom Disorder

Leading Frances to apologize to patients:

My heart goes out to all those who will be mislabeled with this misbegotten diagnosis. And I regret and apologize for my failure to be more effective.

The chair of the DSM-5 Somatic Symptom Disorder work group has delivered a scathing critique of the very concept of medically unexplained symptoms.

Dimsdale JE. Medically unexplained symptoms: a treacherous foundation for somatoform disorders?. Psychiatric Clinics of North America. 2011 Sep 30;34(3):511-3.

Dimsdale noted that applying this psychiatric diagnosis sidesteps the quality of medical examination that led up to it. Furthermore:

Many illnesses present initially with nonspecific signs such as fatigue, long before the disease progresses to the point where laboratory and physical findings can establish a diagnosis.

And such diagnoses may encompass far too varied a group of patients for any intervention to make sense:

One needs to acknowledge that diseases are very heterogeneous. That heterogeneity may account for the variance in response to intervention. Histologically, similar tumors have different surface receptors, which affect response to chemotherapy. Particularly in chronic disease presentations such as irritable bowel syndrome or chronic fatigue syndrome, the heterogeneity of the illness makes it perilous to diagnose all such patients as having MUS and an underlying somatoform disorder.

I tried making sense of a table of the additional diagnoses that the patients in this study had been given. A considerable proportion of patients had physical conditions that would not be considered psychiatric problems in the United States. Many patients could be suffering from multiple symptoms arising not only from these conditions, but from side effects of the medications they were offered. It is very difficult to manage the multiple medications required by multiple comorbidities. Physicians from the community found these patients taxed their competence and their ability to spend time with them.

[Table: functional somatic symptom diagnoses]

Most patients had a diagnosis of “functional headaches.” It’s not clear what this designation means, but conceivably it could include migraine headaches, which are accompanied by multiple physical complaints. CBT is not an evidence-based treatment of choice for functional headaches, much less migraines.

Over a third of the patients had irritable bowel syndrome (IBS). A systematic review of the comorbidity of irritable bowel syndrome concluded that physical comorbidity is the norm in IBS:

The nongastrointestinal nonpsychiatric disorders with the best-documented association are fibromyalgia (median of 49% have IBS), chronic fatigue syndrome (51%), temporomandibular joint disorder (64%), and chronic pelvic pain (50%).

In the United States, many patients and specialists would consider it offensive and counterproductive to classify irritable bowel syndrome as a psychiatric condition. There is growing evidence that irritable bowel syndrome involves a disturbance in the gut microbiota. It involves a gut-brain interaction, but the primary direction of influence is from the disturbance in the gut to the brain: anxiety and depression symptoms are secondary manifestations, a product of activity in the gut influencing the nervous system.

Most of the patients in the sample had a diagnosis of fibromyalgia and over half of all patients in this study had a diagnosis of chronic fatigue syndrome.

Other patients had diagnosable anxiety and depressive disorders, which, particularly at the lower end of severity, are responsive to nonspecific treatments.

Undoubtedly many of these patients, perhaps most, are demoralized by not being able to get a diagnosis for what they have good reason to believe is a medical condition, quite apart from the discomfort, pain, and interference with their lives that they are experiencing. They could be experiencing demoralization secondary to physical illness.

These patients presented with pain, fatigue, general malaise, and demoralization. I have trouble imagining how their most pressing specific concerns could be addressed in group settings. Such patients make it particularly difficult to draw substantive clinical interpretations from outcomes that are highly general and subjective.

Conclusion: Diagnosing patients with multiple physical symptoms as having a psychiatric condition is highly controversial. Results will not generalize to countries and settings where the practice is not accepted. Many of the patients in the study had recognizable physical conditions, yet they were shunted to psychiatrists who focused only on their attitudes toward their symptoms. They were denied the specialist care and treatments that might conceivably reduce the impact of their conditions on their lives.

6. The “CBT” offered in this study is as part of a complex, multicomponent treatment that does not resemble cognitive behavior therapy as it is practiced in the United States.

As seen in figure 1 of the article, the multicomponent intervention is quite complex and consists of more than cognitive behavior therapy. Moreover, at least in the United States, CBT has a distinctive element of collaborative empiricism: patients and therapists work together to select issues on which to focus and to develop strategies, with patients reporting back on their efforts to implement them. From the details available in the article, the treatment sounded much more like exhortation or indoctrination, even arguing with the patients if necessary. An English version of the educational material used in the initial sessions, available on the web, confirms that a lot of condescending pseudoscience was presented to convince the patients that their problems were largely in their heads.

Without a clear application of learning theory, behavioral analysis, or cognitive science, the “CBT” offered in this RCT has much more in common with the creative novation therapy offered by Hans Eysenck, which is now known to have been justified with fraudulent data. Indeed, comparing the educational materials for this study with those used in Eysenck’s work reveals striking similarities. Eysenck advanced the claim that his intervention could prevent cardiovascular disease and cancer and even overcome iatrogenic effects. I know, this sounds really crazy, but see my careful documentation elsewhere.

Conclusion: The embedding of an unorthodox “CBT” in a multicomponent intervention in this study does not allow isolating any specific, active component of CBT that might be at work.

7. The investigators disclose having altered their scoring of their primary outcome years after the trial began, and probably after a lot of outcome data had been collected.

I found unsettling a casual disclosure in the methods section of this article, particularly in light of what the original trial registration specified:

We found an unexpected moderate negative correlation of the physical and mental component summary measures, which are constructed as independent measures. According to the SF-36 manual, a low or zero correlation of the physical and mental components is a prerequisite of their use.23 Moreover, three SF-36 scales that contribute considerably to the PCS did not fulfil basic scaling assumptions.31 These findings, together with a recent report of problems with the PCS in patients with physical and mental comorbidity,32 made us concerned that the PCS would not reliably measure patients’ physical health in the study sample. We therefore decided before conducting the analysis not to use the PCS, but to use instead the aggregate score as outlined above as our primary outcome measure. This decision was made on 26 February 2009 and registered as a protocol change at clinical trials. gov on 11 March 2009. Only baseline data had been analysed when we made our decision and the follow-up data were still concealed.

Switching outcomes, particularly after some results are known, constitutes a serious violation of best research practices and raises the suspicion that the investigators refined their hypotheses after peeking at the data. See How researchers dupe the public with a sneaky practice called “outcome switching”.

The authors had originally proposed a scoring consistent with a very large body of literature. Dropping the original scoring precludes any direct comparison with that body of research, including basic norms. They claim that they switched scoring because two key subscales were correlated in the opposite direction from what is reported in the larger literature. That is a troubling indication that something went terribly wrong in the authors’ recruitment of a sample. It should not be swept under the rug.
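For readers who want to see what the manual’s prerequisite amounts to, here is a minimal sketch of checking the correlation between physical and mental component summary scores. The numbers are invented for illustration; nothing here comes from the trial’s data.

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation; no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical PCS and MCS summary scores for five respondents.
# In a sample where higher physical scores coincide with lower
# mental scores (and vice versa), r comes out clearly negative,
# violating the manual's low-or-zero prerequisite.
pcs = [30, 45, 50, 55, 70]
mcs = [62, 48, 55, 40, 45]
print(round(pearson_r(pcs, mcs), 2))  # about -0.76
```

A finding like this in baseline data is exactly the kind of red flag the authors describe; the question is what it says about the sample, not just about the scoring.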

The authors claim that they switched outcomes based only on an examination of baseline data from their study. However, one of the authors, Michael Sharpe, is also an author on the controversial PACE trial, in which a parallel switch was made to the scoring of the subjective self-reports. When those data were eventually re-analyzed using the original scoring, the positive findings for the trial were substantially reduced and arguably disappeared.

Even if the authors of the present RCT did not peek at their outcome data before deciding to switch the scoring of the primary outcome, they certainly had strong indications from other sources that the original scoring would produce weak or null findings. In 2009, one of the authors, Michael Sharpe, had access to the results of a relevant trial: the FINE trial had null findings, which affected decisions to switch outcomes in the PACE trial. Is it just a coincidence that the scoring of the outcomes was then switched for the present RCT?

Conclusion: The outcome switching in the present trial represents bad research practice. For the trial to have any credibility, the investigators should make their data publicly available so that they can be independently re-analyzed with the original scoring of the primary outcomes.

The senior author’s clinic

I invite readers to take a virtual tour of the website for the senior author’s clinical services; much of it is available in English. Recently, I blogged about dubious claims of a health care system in Detroit achieving a goal of “zero suicide.” I suggested that the evidence for the claim was quite dubious, but that it served as a powerful advertisement for the health care system. I think the present report of an RCT can similarly be seen as an infomercial for the training and clinical services available in Denmark.

Conflict of interest

 No conflict of interest is declared for this RCT. Under somewhat similar circumstances, I formally complained about undeclared conflicts of interest in a series of papers published in PLOS One. A correction has been announced, but not yet posted.

The senior author needed to declare a conflict of interest, and the same can be said for another of the authors, Michael Sharpe.

Apart from his professional and reputational interest (his whole career has been built on making strong claims about such interventions), Sharpe works for insurance companies and publishes on the subject. He declared a conflict of interest for the PACE trial:

MS has done voluntary and paid consultancy work for government and for legal and insurance companies, and has received royalties from Oxford University Press.

Here’s Sharpe’s report written for the social benefits reinsurance company UnumProvident.

If the results of this trial are accepted at face value, they will lend credibility to claims that effective interventions are available to reduce social disability. It does not even matter whether the intervention is effective: persons receiving social disability payments can be disqualified because they are not enrolled in such treatment.

Effects on the credibility of a Cochrane Collaboration review

The switched outcomes of the trial were entered into a Cochrane systematic review, to which primary care health professionals look for guidance in dealing with a complex clinical situation. The review gives no indication of the host of problems that I exposed here. Furthermore, I have glanced at some of the other trials included and I see similar difficulties.

I have been unable to convince Cochrane to clean up the conflicts of interest attached to switched outcomes being entered into its reviews. Perhaps some of my readers will want to approach Cochrane to revisit this issue.
I think this post raises larger issues about whether Cochrane has any business conducting and disseminating reviews of such a bogus psychiatric diagnosis as medically unexplained symptoms. These reviews do patients no good and may sidetrack them from getting the medical care they deserve. They do, however, serve special interests, including disability insurance companies.

Special thanks to John Peters and to Skeptical Cat for their assistance with my writing this blog. However, I have sole responsibility for any excesses or distortions.


Before you enroll your child in the MAGENTA chronic fatigue syndrome study: Issues to be considered

[October 3 8:23 AM Update: I have now inserted Article 21 of the Declaration of Helsinki below, which is particularly relevant to discussions of the ethical problems of Dr. Esther Crawley’s previous SMILE trial.]

Petitions are calling for shutting down the MAGENTA trial. Those who organized the effort and signed the petition are commendably brave, given past vilification of any effort by patients and their allies to have a say about such trials.

Below I identify a number of issues that parents should consider in deciding whether to enroll their children in the MAGENTA trial or to withdraw them if they have already been enrolled. I take a strong stand, but I believe I have adequately justified and documented my points. I welcome discussion to the contrary.

This is a long read but to summarize the key points:

  • The MAGENTA trial does not promise any health benefits for the children participating in the trial. The information sheet for the trial was recently modified to suggest they might benefit. However, earlier versions clearly stated that no benefit was anticipated.
  • There is inadequate disclosure of likely harms to children participating in the trial.
  • An estimate of a health benefit can be evaluated from the existing literature concerning the effectiveness of the graded exercise therapy intervention with adults. Obtaining funding for the MAGENTA trial depended on a misrepresentation of the strength of evidence that it works in adult populations.  I am talking about the PACE trial.
  • Beyond any direct benefit to their children, parents might be motivated by the hope of contributing to science and the availability of effective treatments. However, these possible benefits depend on publication of results of a trial after undergoing peer review. The Principal Investigator for the MAGENTA trial, Dr. Esther Crawley, has a history of obtaining parents’ consent for participation of their children in the SMILE trial, but then not publishing the results in a timely fashion. Years later, we are still waiting.
  • Dr. Esther Crawley exposed children to unnecessary risk without likely benefit in her conduct of the SMILE trial. This clinical trial involved inflicting a quack treatment on children. Parents were not adequately informed of the nature of the treatment and the absence of evidence for any mechanism by which the intervention could conceivably be effective. This reflects on the due diligence that Dr. Crawley can be expected to exercise in the MAGENTA trial.
  • The consent form for the MAGENTA trial involves parents granting permission for the investigator to use children and parents’ comments concerning effects of the treatment for its promotion. Insufficient restrictions are placed on how the comments can be used. There is the clear precedent of comments made in the context of the SMILE trial being used to promote the quack Lightning Process treatment in the absence of evidence that treatment was actually effective in the trial. There is no guarantee that any comments collected from children and parents in the MAGENTA trial would not similarly be misused.
  • Dr. Esther Crawley participated in a smear campaign against parents having legitimate concerns about the SMILE trial. Parents making legitimate use of tools provided by the government such as Freedom of Information Act requests, appeals of decisions of ethical review boards and complaints to the General Medical Council were vilified and shamed.
  • Dr. Esther Crawley has provided direct, self-incriminating quotes in the newsletter of the Science Media Centre about how she was coached and directed by its staff to slam the patient community. She played a key role in a concerted and orchestrated attack on the credibility not only of parents of participants in the MAGENTA trial, but of all patients with chronic fatigue syndrome/myalgic encephalomyelitis, as well as their advocates and allies.

I am not a parent of a child eligible for recruitment to the MAGENTA trial. I am not even a citizen or resident of the UK. Nonetheless, I have considered the issues and lay out some of my considerations below. On this basis, I signed the global support version of the UK petition to suspend all trials of graded exercise therapy in children and adults with ME/CFS. I encourage readers who are similarly situated outside the UK to join me in signing the global support petition.

If I were a parent of an eligible child or a resident of the UK, I would not enroll my child in MAGENTA. I would immediately withdraw my child if he or she were currently participating in the trial. I would request all the child’s data be given back or evidence that it had been destroyed.

I recommend my PLOS Mind the Brain post, What patients should require before consenting to participate in research…  as either a prelude or epilogue to the following blog post.

What you will find here is a discussion of matters that parents should consider before enrolling their children in the MAGENTA trial of graded exercise for chronic fatigue syndrome. The previous blog post is rich in links to an ongoing initiative from The BMJ to promote broader involvement of patients (and, implicitly, parents of patients) in the design, implementation, and interpretation of clinical trials. The views put forth by The BMJ are quite progressive, even if there is a gap between the expression of those views and their actual implementation. Overall, that post presents a good set of standards for patients (and parents) making informed decisions about enrollment in clinical trials.

Late-breaking update: See also

Simon McGrath: PACE trial shows why medicine needs patients to scrutinise studies about their health

Basic considerations.

Patients are under no obligation to participate in clinical trials. It should be recognized that any participation typically involves burden and possibly risk over what is involved in receiving medical care outside of a clinical trial.

It is a deprivation of their human rights and a violation of the Declaration of Helsinki to coerce patients to participate in medical research without freely given, fully informed consent.

Patients cannot be denied any medical treatment or attention to which they would otherwise be entitled if they fail to enroll in a clinical trial.

Issues are compounded when consent from parents is sought for participation of vulnerable children and adolescents for whom they have legal responsibility. Although assent to participate in clinical trials is sought from children and adolescents, it remains for their parents to consent to their participation.

Parents can at any time withdraw their consent for their children’s or adolescents’ participation in a trial and have their data removed, without requiring approval from any authority or having to give a reason for doing so.

Declaration of Helsinki

The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data.

It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

[October 3 8:23 AM Update]: I have now inserted Article 21 of the Declaration of Helsinki which really nails the ethical problems of the SMILE trial:

21. Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.

There was clearly inadequate scientific justification for testing the quack Lightning Process treatment.

What Is the Magenta Trial?

The published MAGENTA study protocol states

This study aims to investigate the acceptability and feasibility of carrying out a multicentre randomised controlled trial investigating the effectiveness of graded exercise therapy compared with activity management for children/teenagers who are mildly or moderately affected with CFS/ME.

Methods and analysis 100 paediatric patients (8–17 years) with CFS/ME will be recruited from 3 specialist UK National Health Service (NHS) CFS/ME services (Bath, Cambridge and Newcastle). Patients will be randomised (1:1) to receive either graded exercise therapy or activity management. Feasibility analysis will include the number of young people eligible, approached and consented to the trial; attrition rate and treatment adherence; questionnaire and accelerometer completion rates. Integrated qualitative methods will ascertain perceptions of feasibility and acceptability of recruitment, randomisation and the interventions. All adverse events will be monitored to assess the safety of the trial.

The first of two treatments being compared is:

Arm 1: activity management

This arm will be delivered by CFS/ME specialists. As activity management is currently being delivered in all three services, clinicians will not require further training; however, they will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). Clinicians therefore have flexibility in delivering the intervention within their National Health Service (NHS) setting. Activity management aims to convert a ‘boom–bust’ pattern of activity (lots 1 day and little the next) to a baseline with the same daily amount before increasing the daily amount by 10–20% each week. For children and adolescents with CFS/ME, these are mostly cognitive activities: school, schoolwork, reading, socialising and screen time (phone, laptop, TV, games). Those allocated to this arm will receive advice about the total amount of daily activity, including physical activity, but will not receive specific advice about their use of exercise, increasing exercise or timed physical exercise.

So, the first arm of the trial is a comparison condition consisting of standard care delivered without further training of providers. The treatment is flexibly delivered, expected to vary between settings, and thus largely uncontrolled. It is a methodologically weak condition that does not adequately control for attention and positive expectations, dimensions on which control conditions should be equivalent to the intervention being evaluated.
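A side note on the arithmetic of that activity-management prescription: a 10–20% weekly increase compounds quickly. A minimal sketch, using an invented baseline of 300 minutes of daily activity (the protocol sets each child’s baseline individually and specifies no particular number):

```python
# Compound growth of daily activity under the protocol's 10-20%
# weekly increase. The 300-minute baseline is invented for
# illustration only.
baseline = 300.0   # minutes of daily activity at baseline
weeks = 8

for rate in (0.10, 0.20):
    level = baseline
    for _ in range(weeks):
        level *= 1 + rate   # weekly compound increase
    print(f"{rate:.0%}/week for {weeks} weeks: {level:.0f} min/day")
# 10%/week roughly doubles the baseline within 8 weeks;
# 20%/week more than quadruples it.
```

Whether children with CFS/ME can sustain that trajectory is, of course, exactly what is in dispute.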

The second arm of the study:

Arm 2: graded exercise therapy (GET)

This arm will be delivered by referral to a GET-trained CFS/ME specialist who will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). They will be encouraged to deliver GET as they would in their NHS setting.20 Those allocated to this arm will be offered advice that is focused on exercise with detailed assessment of current physical activity, advice about exercise and a programme including timed daily exercise. The intervention will encourage children and adolescents to find a baseline level of exercise which will be increased slowly (by 10–20% a week, as per NICE guidance5 and the Pacing, graded Activity and Cognitive behaviour therapy – a randomised Evaluation (PACE)12 ,21). This will be the median amount of daily exercise done during the week. Children and adolescents will also be taught to use a heart rate monitor to avoid overexertion. Participants will be advised to stay within the target heart rate zones of 50–70% of their maximum heart rate.5 ,7
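The heart-rate ceiling in the GET arm is simple arithmetic. Here is a sketch assuming the common age-based estimate of maximum heart rate (220 minus age); that estimate is my assumption, and the trial’s clinics may well compute maximum heart rate differently:

```python
def target_zone(age):
    """50-70% of estimated maximum heart rate, per the protocol's
    target zone. The 220-minus-age estimate is an assumption here,
    not something specified in the quoted MAGENTA protocol."""
    max_hr = 220 - age
    return 0.50 * max_hr, 0.70 * max_hr

low, high = target_zone(12)          # a 12-year-old participant
print(f"{low:.0f}-{high:.0f} bpm")   # 104-146 bpm under this estimate
```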

The outcome of the trial will be evaluated in terms of

Quantitative analysis

The percentage recruited of those eligible will be calculated …Retention will be estimated as the percentage of recruited children and adolescents reaching the primary 6-month follow-up point, who provide key outcome measures (the Chalder Fatigue Scale and the 36-Item Short-Form Physical Functioning Scale (SF-36 PFS)) at that assessment point.

Objective data will be collected in the form of physical activity measured by accelerometers. These are:

Small, matchbox-sized devices that measure physical activity. They have been shown to provide reliable indicators of physical activity among children and adults.

However, actual evaluation of the outcome of the trial will focus on recruitment and retention and on subjective, self-report measures of fatigue and physical functioning. These subjective measures have been shown to be less valid than objective measures. Scores are vulnerable to participants knowing which condition they were assigned to (called ‘being unblinded’) and to their perception of which intervention the investigators prefer.

It is notable that in the PACE trial of CBT and GET for chronic fatigue syndrome in adults, the investigators manipulated participants’ self-reports with praise in newsletters sent out during the trial. The investigators also switched their scoring of the self-report measures, producing results that they later conceded had been exaggerated by the change in scoring.

Tom Kindlon, Irish ME/CFS Association Officer

See an excellent commentary by Tom Kindlon at PubMed Commons [What’s that? ]

The validity of using subjective outcome measures as primary outcomes in such a trial is questionable.

The bottom line is that the investigators have a poorly designed study with an inadequate control condition. They have chosen subjective self-reports that are prone to invalidity and manipulation over objective measures like actual changes in activity, or practical real-world measures like school attendance. Not very good science here. But they are asking parents to sign their children up.

What is promised to parents consenting to have the children enrolled in the trial?

The published protocol to which the investigators supposedly committed themselves stated

What are the possible benefits and risks of participating?
Participants will not benefit directly from taking part in the study although it may prove enjoyable contributing to the research. There are no risks of participating in the study.

Version 7 of the information sheet provided to parents, states

Your child may benefit from the treatment they receive, but we cannot guarantee this. Some children with CFS/ME like to know that they are helping other children in the future. Your child may also learn about research.

Survey assessments conducted by the patient community strongly contradict the suggestion that there is no risk of harm with GET.

Alem Matthees, the patient activist who obtained release of the PACE data and participated in its reanalysis, has commented:

“Given that post-exertional symptomatology is a hallmark of ME/CFS, it is premature to do trials of graded exercise on children when safety has not first been properly established in adults. The assertion that graded exercise is safe in adults is generally based on trials where harms are poorly reported or where the evidence of objectively measured increases in total activity levels is lacking. Adult patients commonly report that their health was substantially worsened after trying to increase their activity levels, sometimes severely and permanently, therefore this serious issue cannot be ignored when recruiting children for research.”

See also

Kindlon T. Reporting of harms associated with graded exercise therapy and cognitive behavioural therapy in myalgic encephalomyelitis/chronic fatigue syndrome. Bulletin of the IACFS/ME. 2011;19(2):59-111.

This thorough systematic review documents inadequate reporting of harms in clinical trials, but:

Exercise-related physiological abnormalities have been documented in recent studies and high rates of adverse  reactions  to exercise have been  recorded in  a number of  patient surveys. Fifty-one percent of  survey respondents (range 28-82%, n=4338, 8 surveys) reported that GET worsened their health while 20% of respondents (range 7-38%, n=1808, 5 surveys) reported similar results for CBT.

The unpublished results of Dr. Esther Crawley’s SMILE trial

A Bristol University website indicates that recruitment for the SMILE trial was completed in 2013. The published protocol for the SMILE trial:

[Note the ® in the title below, indicating a test of trademarked commercial product. The significance of that is worthy of a whole other blog post. ]

Crawley E, Mills N, Hollingworth W, Deans Z, Sterne JA, Donovan JL, Beasant L, Montgomery A. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial). Trials. 2013 Dec 26;14(1):1.


The data monitoring group will receive notice of serious adverse events (SAEs) for the sample as whole. If the incidence of SAEs of a similar type is greater than would be expected in this population, it will be possible for the data monitoring group to receive data according to trial arm to determine any evidence of excess in either arm.

Primary outcome data at six months will be examined once data are available from 50 patients, to ensure that neither arm is having a detrimental effect on the majority of patients. An independent statistician with no other involvement in the study will investigate whether more than 20 participants in the study sample as a whole have experienced a reduction of ≥ 30 points on the SF-36 at six months. In this case, the data will then be summarised separately by trial arm, and sent to the data monitoring group for review. This process will ensure that the trial team will not have access to the outcome data separated by treatment arm.
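The trigger rule in that monitoring plan is easy to state in code. A sketch with invented scores; nothing below comes from the trial’s data:

```python
# The SMILE protocol's data-monitoring trigger, as I read it: send
# arm-separated data for independent review if more than 20
# participants in the whole sample drop 30 or more points on the
# SF-36 at six months.
def review_triggered(baseline, month6, drop=30, limit=20):
    big_drops = sum(1 for b, m in zip(baseline, month6) if b - m >= drop)
    return big_drops > limit

# Toy example: 50 participants, 3 of whom dropped 30+ points.
baseline_scores = [60] * 50
month6_scores = [25, 30, 20] + [65] * 47
print(review_triggered(baseline_scores, month6_scores))  # False
```

Note that the rule is blind to which arm the deteriorating participants are in until the threshold is crossed for the sample as a whole.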

The trial was thus completed a number of years ago, but these valuable data have never been published.

The only publication from the trial so far uses selective quotes from child participants that cannot be independently evaluated. Readers are not told how representative these quotes are, the outcomes for the children being quoted, or the overall outcomes of the trial.

Parslow R, Patel A, Beasant L, Haywood K, Johnson D, Crawley E. What matters to children with CFS/ME? A conceptual model as the first stage in developing a PROM. Archives of Disease in Childhood. 2015 Dec 1;100(12):1141-7.

The “evaluation” of the quack Lightning Treatment in the SMILE trial and quotes from patients have also been used to promote Parker’s products as being used in NHS clinics.

How can I say the Lightning Process is quackery?

Dr. Crawley describes the Lightning Process in the Research Ethics Application Form for the SMILE study as “combining the principles of neurolinguistic programming, osteopathy, and clinical hypnotherapy.”

That is an amazing array of three different frameworks from different disciplines. You would be hard pressed to find an example other than the Lightning Process that claims to integrate them. Yet a mechanism explaining a therapeutic intervention cannot be a creative stir-fry of whatever happens to be on hand. For a treatment to be considered science-based, there has to be a solid basis of evidence that these presumably complex processes fit together as assumed and work as assumed. I challenge Dr. Crawley or anyone else to produce a shred of credible, peer-reviewed evidence for the basic mechanism of the Lightning Process.

The entry for Neuro-linguistic programming (NLP) in Wikipedia states

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

 The Hampshire (UK) County Council Trading Standards Office filed a formal complaint against Phil Parker for claims made on the Lightning Process website concerning effects on CFS/ME:

The “CFS/ME” page of the website included the statements “Our survey found that 81.3 %* of clients report that they no longer have the issues they came with by day three of the LP course” and “The Lightning Process is working with the NHS on a feasibility study, please click here for further details, and for other research information click here”.

Seeming endorsements on Parker’s website. Two of them, Northern Ireland and NHS Suffolk, subsequently complained that use of their insignias was unauthorized, and the insignias were quickly removed.

The “working with the NHS” refers to the collaboration with Dr. Esther Crawley.

The UK Advertising Standards Authority upheld this complaint, as well as complaints about Parker’s claims of effectiveness for other conditions, including multiple sclerosis, irritable bowel syndrome, and fibromyalgia.

 Another complaint in 2013 about claims on Phil Parker’s website was similarly upheld:

 The claims must not appear again in their current form. We welcomed the decision to remove the claims. We told Phil Parker Group not to make claims on websites within their control that were directly connected with the supply of their goods and services if those claims could not be supported with robust evidence. We also told them not to refer to conditions for which advice should be sought from suitably qualified health professionals.

As we will see, these upheld charges of quackery occurred while parents of children participating in the SMILE trial were being vilified in the BMJ and elsewhere. Dr. Crawley was prominently featured in this vilification and was quoted in a celebration of its success by the Science Media Centre, which had orchestrated the campaign.


The Research Ethics Committee approval of the SMILE trial and the aftermath

I was not very aware of the CFS/ME literature, and certainly not of all its controversies, when the South West Research Ethics Committee (REC) reviewed the application for the SMILE trial and ultimately approved it on September 8, 2010.

Had I been following it, I would have had strong opinions about it; I only started blogging shortly afterwards. But I was already concerned about patients being exposed to alternative, unproven medical treatments in other contexts, and even more so to treatments for which promoters claimed implausible mechanisms. I would not have felt it appropriate to inflict the Lightning Process on unsuspecting children. The mere fact that a treatment has not been evaluated is insufficient justification for putting children into a clinical trial of it.

Prince Charles once advocated organic coffee enemas to treat advanced cancer. His endorsement generated a lot of curiosity among cancer patients, but that would not justify a randomized trial of coffee enemas. By analogy, I don’t think Dr. Esther Crawley had sufficient justification to conduct her trial, especially without warnings that there was no scientific basis to expect the Lightning Process to work or any assurance that it would not hurt the children.

I am concerned about clinical trials that have little likelihood of producing evidence that a treatment is effective, but that seem designed to get these treatments into routine clinical care. It is now appreciated that some clinical trials have little scientific value but serve as experimercials, a means of placing products in clinical settings. Pharmaceutical companies notoriously do this.

As it turned out, the SMILE trial succeeded admirably as a promotion for the Lightning Process, earning Phil Parker unknown but substantial fees, both through its use in the SMILE trial and through successful marketing throughout the NHS afterwards.

In short, I would have been concerned about the judgment of Dr. Esther Crawley in organizing the SMILE trial. I would have been quite curious about conflicts of interest and whether patients were adequately informed of how Phil Parker was benefiting.

The ethics review of the SMILE trial gave short shrift to these important concerns.

When the patient community and its advocate, Dr. Charles Shepherd, became aware of the SMILE trial’s approval, there were protests leading to re-evaluations all the way up to the National Patient Safety Agency. Examining an extract of minutes from the South West 2 REC meeting held on 2 December 2010, I see many objections to the approval being raised, and I am unsatisfied by the way in which they were discounted.

Patient, parent, and advocate protests escalated. If some acted inappropriately, this did not undermine the legitimacy of others’ protest. By analogy, I feel strongly about police violence against African-Americans and racist policies that disproportionately target African-Americans for police scrutiny and stopping. I’m upset when agitators and provocateurs become violent at protests, but that does not delegitimize my concerns about the way black people are treated in America.

Dr. Esther Crawley undoubtedly experienced considerable stress and unfair treatment, but I don’t understand why she was not responsive to patient concerns, or why she failed to honor her responsibility to protect child patients from exposure to unproven and likely harmful treatments.

Dr. Crawley is extensively quoted in a British Medical Journal opinion piece authored by a freelance journalist, Nigel Hawkes:

Hawkes N. Dangers of research into chronic fatigue syndrome. BMJ. 2011 Jun 22;342:d3780.

If I had been on the scene, Dr. Crawley might well have been describing me, given how I would have reacted, including my exercising of appropriate, legally provided means of protest and complaint:

Critics of the method opposed the trial, first, Dr Crawley says, by claiming it was a terrible treatment and then by calling for two ethical reviews. Dr Shepherd backed the ethical challenge, which included the claim that it was unethical to carry out the trial in children, made by the ME Association and the Young ME Sufferers Trust. After re-opening its ethical review and reconsidering the evidence in the light of the challenge, the regional ethical committee of the NHS reiterated its support for the trial.

There was arguably some smearing of Dr. Shepherd, even amid some distancing of him from the actions of others:

This point of view, if not the actions it inspires, is defended by Charles Shepherd, medical adviser to and trustee of the ME Association. “The anger and frustration patients have that funding has been almost totally focused on the psychiatric side is very justifiable,” he says. “But the way a very tiny element goes about protesting about it is not acceptable.”

This article escalated with unfair comparisons to animal rights activists and with condemnation of the appropriate use of channels of complaint, such as reporting physicians to the General Medical Council.

The personalised nature of the campaign has much in common with that of animal rights activists, who subjected many scientists to abuse and intimidation in the 1990s. The attitude at the time was that the less said about the threats the better. Giving them publicity would only encourage more. Scientists for the most part kept silent and journalists desisted from writing about the subject, partly because they feared anything they wrote would make the situation worse. Some journalists have also been discouraged from writing about CFS/ME, such is the unpleasant atmosphere it engenders.

While the campaigners have stopped short of the violent activities of the animal rights groups, they have another weapon in their armoury—reporting doctors to the GMC. Willie Hamilton, an academic general practitioner and professor of primary care diagnostics at Peninsula Medical School in Exeter, served on the panel assembled by the National Institute for Health and Clinical Excellence (NICE) to formulate treatment advice for CFS/ME.

Simon Wessely and the Principal Investigator of the PACE trial, Peter White, were given free rein to dramatize the predicament posed by the protest. Much later, in a 2016 Lower Tribunal hearing, PACE co-investigator Trudie Chalder would cast doubt on whether the harassment was as severe or violent as it had been portrayed. Before that, the financial conflicts of interest of Peter White that were denied in the article would be exposed.

In response to her testimony, the UK Information Officer stated:

Professor Chalder’s evidence when she accepts that unpleasant things have been said to and about PACE researchers only, but that no threats have been made either to researchers or participants.

But in 2012, a pamphlet celebrating the success of the Science Media Centre, which Wessely had helped start, was rich in indiscreet quotes from Esther Crawley. The BMJ article was revealed to be part of a much larger orchestrated campaign to smear, discredit, and silence patients, parents, advocates, and their allies.

Dr. Esther Crawley’s participation in a campaign organized by the Science Media Centre to discredit patients, parents, advocates and supporters.

The SMC would later organize a letter-writing campaign to Parliament in support of Peter White and his refusal to release the PACE data to Alem Matthees, who had made a request under the Freedom of Information Act. The letter-writing campaign was an effort to get scientific data excluded from the provisions of the Act. The effort failed, and the data were subsequently released.

But here is how Esther Crawley described her assistance:

The SMC organised a meeting so we could discuss what to do to protect researchers. Those who had been subject to abuse met with press officers, representatives from the GMC and, importantly, police who had dealt with the  animal rights campaign. This transformed my view of  what had been going on. I had thought those attacking us were “activists”; the police explained they were “extremists”.


We were told that we needed to make better use of the law and consider using the press in our favour – as had researchers harried by animal rights extremists. “Let the public know what you are trying to do and what is happening to you,” we were told. “Let the public decide.”


I took part in quite a few interviews that day, and have done since. I was also inundated with letters, emails and phone calls from patients with CFS/ME all over the world asking me to continue and not “give up”. The malicious, they pointed out, are in a minority. The abuse has stopped completely. I never read the activists’ blogs, but friends who did told me that they claimed to be “confused” and “upset” – possibly because their role had been switched from victim to abuser. “We never thought we were doing any harm…”

The patient community and its allies are still burdened by the damage of this effort and are only slowly rebuilding their credibility. Only now are they beginning to get an audience as suffering human beings with significant, legitimate unmet needs. Only now are they escaping the stigmatization that occurred at this time, with Esther Crawley playing a key role.

Where does this leave us?

Parents are being asked to enroll their children in a clinical trial without clear benefit to the children but with the possibility of considerable risk from graded exercise. They are being asked by Esther Crawley, a physician who previously inflicted a quack treatment on children with CFS/ME in the guise of a clinical trial, for which she has never published the resulting data. She has played an effective role in damaging the legitimacy and capacity of patients and parents to complain.

Given this history and these factors, why would a parent possibly want to enroll their children in the MAGENTA trial? Somebody please tell me.

Special thanks to all the patient citizen-scientists who contributed to this blog post. Any inaccuracies or excesses are entirely my own, but these persons gave me substantial help. Some are named in the blog, but others prefer anonymity.

All opinions expressed are solely those of James C Coyne. The blog post in no way conveys any official position of Mind the Brain, PLOS blogs, or the larger PLOS community. I appreciate the free expression of personal opinion that I am allowed.


What patients should require before consenting to participate in research…

A bold BMJ editorial calls for more patient involvement in the design, implementation, and interpretation of research – but ends on a sobering note: The BMJ has so little such involvement to report.

In this edition of Mind the Brain, I suggest how patients, individually and collectively, can take responsibility for advancing this important initiative themselves.

I write in a context defined by recent events.

  • Government-funded researchers offered inaccurate interpretations of their results [1, 2].
  • An unprecedented number of patients have judged the researchers’ interpretation of their results as harmful to their well-being.
  • The researchers then violated government-supported data sharing policies in refusing to release their data for independent analysis.
  • Patients were vilified in the investigators’ efforts to justify their refusal to release the data.

These events underscore the need for patients to require certain documentation before deciding whether to participate in research.

Declining to participate in clinical research is a patient’s inalienable right that must not jeopardize the receipt of routine treatment or lead to retaliation.

A simple step: in deciding whether to participate in research, patients can insist that any consent form they sign contains documentation of patient involvement at all phases of the research. If there is no detailing of how patients were involved in the design of this study and how they will be involved in the interpretation, patients should consider not consenting.

Similarly, patients should consider refusing to sign consent forms that do not expressly indicate that the data will be readily available for further analyses, preferably by placing the data in a publicly accessible depository.

Patients exercising their rights in these ways will make for better and more useful biomedical research, as well as research that is more patient-oriented.

The BMJ editorial

The editorial Research Is the Future, Get Involved declares:

More than three million NHS patients took part in research over the past five years. Bravo. Now let’s make sure that patients are properly involved, not just as participants but in trial conception, design, and conduct and the analysis, reporting, and dissemination of results.

But in the next sentences, the editorial describes how The BMJ’s laudable efforts to get researchers to demonstrate how patients were involved have not produced impressive results:

You may have noticed the new “patient involvement” box in The BMJ’s research articles. Sadly, all too often the text reads something like, “No patients were involved in setting the research question or the outcome measures; nor were they involved in the design and implementation of the study. There are no plans to involve patients in the dissemination of results.” We hope that the shock of such statements will stimulate change. Examples of good patient involvement will also help: see the multicentre randomised trial on stepped care for depression and anxiety (doi:10.1136/bmj.h6127).

Our plan is to shine a light on the current state of affairs and then gradually raise the bar. Working with other journals, research funders, and ethics committees, we hope that at some time in the future only research in which patients have been fully involved will be considered acceptable.

In their instructions to authors, The BMJ includes a section Reporting patients’ involvement in research which states:

As part of its patient partnership strategy, The BMJ is encouraging active patient involvement in setting the research agenda.

We appreciate that not all authors of research papers will have done this, and we will still consider your paper if you did not involve patients at an early stage. We do, however, request that all authors provide a statement in the methods section under the subheading Patient involvement.

This should provide a brief response to the following questions:

How was the development of the research question and outcome measures informed by patients’ priorities, experience, and preferences?

How did you involve patients in the design of this study?

Were patients involved in the recruitment to and conduct of the study?

How will the results be disseminated to study participants?

For randomised controlled trials, was the burden of the intervention assessed by patients themselves?

Patient advisers should also be thanked in the contributorship statement/acknowledgements.

If patients were not involved please state this.

If this information is not in the submitted manuscript we will ask you to provide it during the peer review process.

Please also note that The BMJ now sends randomised controlled trials and other relevant studies for peer review by patients.

Recent events suggest that these instructions should be amended with the following question:

How were patients involved in the interpretation of results?

The instructions to authors should also elaborate that the intent is to require a description of how results were shared with patients before publication and dissemination to the news media. This process should be interactive, with the possibility of corrective feedback, rather than a simple presentation of the results to patients without opportunity for comment or for suggesting qualifications of the interpretations to be made. The process should be described in the article.

Material offered by The BMJ in support of its initiative includes an editorial, Patient Partnership, which explains:

The strategy brings landmark changes to The BMJ’s internal processes, and seeks to place the journal at the forefront of the international debate on the science, art, and implementation of meaningful, productive partnership with patients. It was “co –produced” with the members of our new international patient advisory panel, which was set up in January 2014. It’s members continue to inform our thinking and help us with implementation of our strategy.

For its efforts, The BMJ was the first medical journal to receive the “Patients Included” certificate from Lucien Engelen’s Radboud REshape Academy. For his part, Lucien had previously announced:

I will ‘NO-SHOW’ at healthcare conferences that do not add patients TO or IN their programme or invite them to be IN the audience. Also I will no longer give lectures/keynotes at ‘NO-SHOW’ conferences.

But strong words need an action plan to become more than mere words. Although laudable exceptions can be noted, they are few and far between.

In Beyond rhetoric: we need a strategy for patient involvement in the health service, NHS user Sarah Thornton has called the UK government to task for being heavy on the hyperbole of empowering patients but lacking a robust strategy for implementing it. The same could be said for the floundering effort of The BMJ to support patient empowerment in research.

So, should patients just remain patient, keep signing up for clinical trials and hope that funders eventually get more patient oriented in the decisions about grants and that researchers eventually become more patient-oriented?

Recent events suggest that is unwise.

The BMJ patient-oriented initiative versus the PACE investigators’ refusal to share data and the vilification of patients who object to their interpretation of the data

As previously detailed here, the PACE investigators have steadfastly refused to provide the data for independent evaluation of their claims. In doing so, they are defying numerous published standards from governmental and funding agencies that dictate sharing of data. Ironically, in justifying this refusal, the investigators cite possible repercussions of releasing the data for their ability to conduct future research.

Fortunately, in a decision against the PACE investigators, the UK Information Commissioner’s Office (ICO) rejected this argument because

He is also not convinced that there is sufficient evidence for him to determine that disclosure would be likely to deter significant numbers of other potential participants from volunteering to take part in future studies so as to affect the University’s ability to undertake such research. As a result, the Commissioner is reluctant to accept that disclosure of the withheld information would be likely to have an adverse effect on the University’s future ability to attract necessary funding and to carry out research in this area, with a consequent effect on its reputation and ability to recruit staff and students.

But the PACE investigators have appealed this decision and continue to withhold their data. Moreover, in their initial refusal to share the data, they characterized patients who objected to the possible harm of their interpretations as a small vocal minority:

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Physician Charles Shepherd, himself a sufferer of myalgic encephalomyelitis (ME), notes:

  • Over 10,000 people signed a petition calling for claims of the PACE investigators relating to so-called recovery to be retracted.
  • In a survey of 1,428 people with ME, 73 per cent reported that CBT had no effect on symptoms while 74 per cent reported that GET had made their condition worse.

The BMJ’s position on data sharing

A May 15, 2015 editorial spelled out a new policy at The BMJ concerning data sharing, The BMJ requires data sharing on request for all trials:

Heeding calls from the Institute of Medicine, WHO, and the Nordic Trial Alliance, we are extending our policy

The movement to make data from clinical trials widely accessible has achieved enormous success, and it is now time for medical journals to play their part. From 1 July The BMJ will extend its requirements for data sharing to apply to all submitted clinical trials, not just those that test drugs or devices.1 The data transparency revolution is gathering pace.2 Last month, the World Health Organization (WHO) and the Nordic Trial Alliance released important declarations about clinical trial transparency.3 4

Note that The BMJ was extending the data sharing requirement to all trials, not just drug and medical device trials.

But The BMJ was simply following the lead of the family of PLOS journals, which had made an earlier, broader, and simpler commitment to making data from clinical trials available to others.

The PLOS journals’ policy on data sharing

On December 12, 2013, the PLOS journals scooped other major publishers with:

PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception.

When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS’s policy. The data availability statement will be published with the article if accepted.

Refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection. PLOS journal editors encourage researchers to contact them if they encounter difficulties in obtaining data from articles published in PLOS journals. If restrictions on access to data come to light after publication, we reserve the right to post a correction, to contact the authors’ institutions and funders, or in extreme cases to retract the publication

This requirement took effect on March 1, 2014. However, one of the most stringent data sharing policies in the industry was already in effect:

Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.

Even the earlier requirement for publication in PLOS journals would have forestalled the delays, struggles, and complicated quasi-legal maneuvering that characterized the PACE investigators’ refusal to release their data.

Why medically ill people agree to be in clinical research

Patients are not obligated to participate in research; they should freely choose whether to participate based on a weighing of the benefits and risks. Consent in clinical research must be voluntary and fully informed.

Medically ill patients often cannot expect direct personal benefit from participating in a research trial. This is particularly true when a trial offers a treatment that patients want and cannot otherwise obtain, but with the risk of being randomized to poorly defined and inadequate routine care. Their needs continue to be neglected, now burdened by multiple and sometimes intrusive assessments. The same is true of descriptive observational research, and particularly of phase 1 clinical studies, which provide no direct benefit to participating patients, only the prospect of improving the care of future patients.

In recognition that many research projects do not directly benefit individual patients, consent forms identify possible benefits to other current and future patients and to society at large.

Protecting the rights of participants in research

The World Medical Association (WMA) Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects spells out a set of principles protecting the rights of human subjects. It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

Can patients pick up the challenge of realizing the promise of The BMJ editorial, Research Is the Future, Get Involved?

One patient to whom I showed an earlier draft objected that this is just another burden thrust on medical patients, who already have their condition and difficult treatment decisions to contend with. She pointed out that patient empowerment strategies so often end up leaving patients with responsibilities that they cannot shoulder and that the medical system should have met for them.

I agree that not every patient can take up this burden of promoting both more patient involvement in research and data sharing, but groups of patients can. And when individual patients are willing to make the sacrifice of insisting on these conditions for their consent, they should be recognized and supported by others. This is not a matter only for patients with particular illnesses or for members of patient organizations organized around a particular illness. Rather, it is a contribution to the well-being of society that should be applauded and supported across the artificial boundaries drawn around particular conditions, race, or class.

The mere possibility that patients will refuse to participate in research that lacks plans for patient involvement or data sharing can have a powerful effect. It is difficult enough for researchers to accrue sufficient numbers of patients for their studies. If researchers risk recruitment problems because they do not adequately involve patients, they will be proactive in redesigning their research strategies and reflecting this in their consent forms, if they are serious about getting their research done.

Patients are looking after the broader society by participating in medical research. However, if researchers do not take steps to ensure that society gets the greatest possible benefit, patients can just say no, we won’t consent to participate.

Acknowledgments: I benefited from discussions with numerous patients and some professionals in writing and revising this blog. Because some of the patients desired anonymity, I will simply give credit to the group. However, I am responsible for any excesses or inaccuracies that may have escaped their scrutiny.


Why the scientific community needs the PACE trial data to be released

University and clinical trial investigators must release data to a citizen-scientist patient, according to a landmark decision in the UK. But the decision could still be overturned if the University and investigators appeal. The scientific community needs the decision to be upheld. I’ll argue that it would be unwise for any appeal to be made. The reasons for withholding the data in the first place were archaic. Overturning the decision would set a bad precedent and would remove another tooth from the almost toothless requirements for data sharing.

We didn’t need Francis Collins, Director of the National Institutes of Health, to tell us what we already knew: the scientific and biomedical literature is untrustworthy.

And there is the new report from the UK Academy of Medical Sciences, Reproducibility and reliability of biomedical research: improving research practice.

There has been a growing unease about the reproducibility of much biomedical research, with failures to replicate findings noted in high-profile scientific journals, as well as in the general and scientific media. Lack of reproducibility hinders scientific progress and translation, and threatens the reputation of biomedical science.

Among the report’s recommendations:

  • Journals mandating that the data underlying findings are made available in a timely manner. This is already required by certain publishers such as the Public Library of Science (PLOS) and it was agreed by many participants that it should become more common practice.
  • Funders requiring that data be released in a timely fashion. Many funding agencies require that data generated with their funding be made available to the scientific community in a timely and responsible manner

A consensus has been reached: the crisis in the trustworthiness of science can be overcome only if scientific data are routinely available for reanalysis. Independent replication of socially significant findings is often unfeasible, and it is unnecessary if original data are fully available for inspection.

Numerous governmental funding agencies and regulatory bodies are endorsing routine data sharing.

The UK Medical Research Council (MRC) 2011 policy on data sharing and preservation endorsed principles laid out by Research Councils UK, including:

Publicly funded research data are a public good, produced in the public interest, which should be made openly available with as few restrictions as possible in a timely and responsible manner.

To enable research data to be discoverable and effectively re-used by others, sufficient metadata should be recorded and made openly available to enable other researchers to understand the research and re-use potential of the data. Published results should always include information on how to access the supporting data.

The Wellcome Trust Policy On Data Management and Sharing opens with

The Wellcome Trust is committed to ensuring that the outputs of the research it funds, including research data, are managed and used in ways that maximise public benefit. Making research data widely available to the research community in a timely and responsible manner ensures that these data can be verified, built upon and used to advance knowledge and its application to generate improvements in health.

The Cochrane Collaboration has weighed in that there should be ready access to all clinical trial data:

Summary results for all protocol-specified outcomes, with analyses based on all participants, to become publicly available free of charge and in easily accessible electronic formats within 12 months after completion of planned collection of trial data;

Raw, anonymised, individual participant data to be made available free of charge; with appropriate safeguards to ensure ethical and scientific integrity and standards, and to protect participant privacy (for example through a central repository, and accompanied by suitably detailed explanation).

Many similar statements can be found on the web. I’m unaware of credible counterarguments gaining wide acceptance.

Yet, endorsements of routine sharing of data are only a promissory reform and depend on enforcement that has been spotty, at best. Those of us who request data from previously published clinical trials quickly realize that requirements for sharing data have no teeth. In light of that, scientists need to watch closely whether a landmark decision concerning sharing of data from a publicly funded trial is appealed and overturned.

The Decision requiring release of the PACE data

The UK’s Information Commissioner’s Office (ICO) ordered Queen Mary University of London (QMUL) on October 27, 2015 to release anonymized data from the PACE chronic fatigue syndrome trial to an unnamed complainant. QMUL has 28 days to appeal.

Even if scientists don’t know enough to care about Chronic Fatigue Syndrome/Myalgic Encephalomyelitis, they should be concerned about the reasons that were given in a previous refusal to release the data.

I took a critical look at the long-term follow-up results for the PACE trial in a previous Mind the Brain blog post and found fatal flaws in the authors’ self-congratulatory interpretation of results. Despite the authors’ claims to the contrary and their extraordinary efforts to encourage patients to report the intervention was helpful, there were simply no differences between groups at follow-up.

Background on the request for release of PACE data

  • A complainant requested release of specific PACE data from QMUL under the Freedom of Information Act.
  • QMUL refused the request.
  • The complainant requested an internal review but QMUL maintained its decision to withhold the data.
  • The complainant contacted the ICO with concerns about how the request had been handled.
  • On October 27, 2015, the ICO sided with the complainant and ordered the release of the data.

A report outlines Queen Mary’s arguments for refusing to release the data and the Commissioner’s justification for siding with the patient requesting the data be released.

Reasons the request for release of the data was initially refused

The QMUL PACE investigators claimed

  • They were entitled to withhold data prior to publication of planned papers.
  • They were exempt from having to share data because the data contained sensitive medical information from which it was possible to identify trial participants.
  • Release of the data might harm their ability to recruit patients for research studies in the future.

The QMUL PACE researchers specifically raised concerns about a motivated intruder being able to facilitate re-identification of participants, arguing that:

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Bizarre. This is obviously a talented masked motivated intruder. Do they have evidence that Magneto is at it again? Mostly he is now working with the good guys, as seen in the help he gave Neurocritic and me.

Let’s think about this novel argument. I checked with University of Pennsylvania bioethicist Jon Merz, an expert who has worked internationally to train researchers and establish committees for the protection of human subjects. His opinion was clear:

The litany of excuses – not reasons – offered by the researchers and Queen Mary University is a bald attempt to avoid transparency and accountability, hiding behind legal walls instead of meeting their critics on a level playing field.  They should be willing to provide the data for independent analyses in pursuit of the truth.  They of course could do this willingly, in a way that would let them contractually ensure that data would be protected and that no attempts to identify individual subjects would be made (and it is completely unclear why anyone would care to undertake such an effort), or they can lose this case and essentially lose any hope for controlling distribution.

The ‘orchestrated response trying to undermine the credibility of the study’ claimed by QMUL and the PACE investigators, along with the questioning of the “integrity of the scientists who conducted the study”, sounds all too familiar. It’s the kind of defense heard from scientists under the scrutiny of the likes of Open Science Collaborations, as in psychology and cancer. Reactionaries resisting post-publication peer review say we must be worried about harassment from

“replication police” “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.”

Far-fetched? Compare QMUL’s statement to the April 18, 2011 interview on the Australian Broadcasting Corporation’s Radio National with Richard Horton and PACE investigator Michael Sharpe, in which Lancet editor Horton condemned:

A fairly small, but highly organised, very vocal and very damaging group of individuals who have…hijacked this agenda and distorted the debate…

‘Distorted the debate’? Was someone so impertinent as to challenge the investigators’ claims about their findings? Sounds like PubPeer. We have seen what they can do.

All scientific findings should be scrutinized, and all data relevant to the claims that are made should be available for reanalysis. Investigators just need to live with the possibility that their claims will be proven wrong or exaggerated. This is all the more true for claims that have substantial impact on public policy and clinical services, and ultimately, patient welfare.

[It is fascinating to note that Richard Horton spoke at the meeting that produced the UK Academy of Medical Sciences report to which I provided a link above. Horton covered the meeting in a Lancet editorial in which he amplified the sentiment of the meeting: “The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world.” His editorial echoed a number of recommendations of the meeting report, but curiously omitted any mention of data sharing.]

Fortunately, the ICO has rejected the arguments of QMUL and the PACE investigators. The Commissioner found that QMUL and the PACE investigators incorrectly interpreted regulations in withholding the data and should provide the complainant with the data or risk being viewed as in contempt of court.

The 30-page decision is a fascinating read, but here’s an accurate summary from elsewhere:

In his decision, the Commissioner found that QMUL failed to provide any plausible mechanism through which patients could be identified, even in the case of a “motivated intruder.” He was also not convinced that there is sufficient evidence to determine that releasing the data would result in the mass exodus of a significant number of the trial’s 640 participants nor that it would deter significant numbers of participants from volunteering to take part in future research.

Requirements for data sharing in the United States have no teeth, and the situation would be worsened by reversal of the ICO decision

Like the UK, the United States supposedly has requirements for sharing of data from publicly funded trials. But good luck in getting support from regulatory agencies associated with funding sources for obtaining data. Here’s my recent story, still unfolding – or maybe, sadly, over, at least for now.

For a long time I’ve fought my own battles about researchers making unwarranted claims that psychotherapy extends the lives of cancer patients. Research simply does not support the claim. The belief that psychological factors have such influence on the course and outcome of cancer sets up cancer patients to be blamed, and to blame themselves, when they don’t overcome their disease by some sort of mind control. Our systematic review concluded

“No randomized trial designed with survival as a primary endpoint and in which psychotherapy was not confounded with medical care has yielded a positive effect.”

Investigators who conducted some of the most ambitious, well-designed trials to test the efficacy of psychological interventions on cancer, but who obtained null results, echoed our assessment. Their commentaries were entitled “Letting Go of Hope” and “Time to Move on.”

I provided an extensive review of the literature concerning whether psychotherapy and support groups increased survival time in an earlier blog post. Hasn’t the issue of mind-over-cancer been laid to rest? I was recently contacted by a science journalist interested in writing an article about this controversy. After a long discussion, he concluded that the issue was settled — no effect had been found — and he could not succeed in pitching his idea for an article to a quality magazine.

But as detailed here, one investigator has persisted in claims that a combination of relaxation exercises, stress reduction, and nutritional counseling increases survival time. My colleagues and I gave this 2008 study a careful look. We ran chi-square analyses of the basic data presented in the paper’s tables, but none of our analyses of the effect of group assignment on mortality or disease recurrence was significant. The investigators’ claim of an effect depended on dubious multivariate analyses with covariates that could not be independently evaluated without a look at the data.
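As a rough illustration of the kind of bivariate check involved, a Pearson chi-square test on a 2x2 table of group assignment by outcome can be computed directly from published counts. The sketch below uses invented numbers for illustration; they are not the trial’s actual data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table
    [[a, b], [c, d]] -- e.g. rows = intervention/control,
    cols = event (death or recurrence) / no event."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, the chi-square statistic is the square of a
    # standard normal, so the two-sided p-value is a normal tail area.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Invented, illustrative counts -- NOT the published trial's numbers.
stat, p = chi2_2x2(25, 87, 30, 85)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```

The point of such a check is that a reader needs nothing beyond the published table to run it; the multivariate analyses on which the survival claim rests cannot be reproduced without the raw data.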

The investigator group initially attempted to block publication of a letter to the editor, citing a policy of the journal Cancer that critical letters could not be published unless the investigators agreed to respond, and they were refusing to respond. We appealed, and the journal changed its policy and allowed additional length for our letter.

We then requested from the investigator’s University Research Integrity Officer the specific data needed to replicate the multivariate analyses in which the investigators claimed an effect on survival. The request was denied:

The data, if disclosed, would reveal pending research ideas and techniques. Consequently, the release of such information would put those using such data for research purposes in a substantial competitive disadvantage as competitors and researchers would have access to the unpublished intellectual property of the University and its faculty and students.

Recall that we were requesting in 2014 specific data needed to evaluate analyses published in 2008.

I checked with statistician Andrew Gelman whether my objections to the multivariate analyses were well-founded and he agreed they were.

Since then, another eminent statistician, Helena Kraemer, has published an incisive critique of reliance on multivariate analyses in a randomized controlled trial when simple bivariate analyses do not support the efficacy of interventions. She labeled adjustment with covariates a “source of false-positive findings.”
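The arithmetic behind that concern is simple: if an analyst is free to try several covariate-adjustment specifications and report whichever one crosses p < .05, the chance of a false positive under the null climbs well above the nominal 5%. The back-of-the-envelope calculation below assumes, for simplicity, that the k specifications behave like independent tests; in practice they are correlated, so the inflation is smaller, but it is still real:

```python
# Probability of at least one "significant" result among k analysis
# specifications, each with a 5% false-positive rate, assuming
# (for illustration only) that the specifications are independent.
def familywise_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 3, 5, 10):
    print(f"{k:2d} specifications -> false-positive rate ~ {familywise_rate(k):.1%}")
```

This is why an effect that appears only after covariate adjustment, and cannot be checked against the raw data, deserves skepticism.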

We appealed to the US Health and Human Services Office of Research Integrity (ORI), but it indicated it had no ability to enforce data sharing.

Meanwhile, the principal investigator who claimed an effect on survival accompanied National Cancer Institute program officers to conferences in Europe and the United States, where she promoted her intervention as effective. I complained to Robert Croyle, Director of the NCI Division of Cancer Control and Population Sciences, who has twice been one of the program officers co-presenting with her. Ironically, in his capacity as director he is supposedly facilitating data sharing for the division. Professionals were being misled to believe that this intervention would extend the lives of cancer patients, and the claim seemingly had the endorsement of NCI.

I told Robert Croyle that if only the data for the specific analyses were released, it could be demonstrated that the claims were false. Croyle did not disagree, but indicated that there was no way to compel release of the data.

The National Cancer Institute recently offered to pay the conference fees for the International Psycho-Oncology Congress in Washington, DC for any professionals willing to sign up for free training in this intervention.

I don’t think I could get any qualified professional, including Croyle, to debate me publicly as to whether psychotherapy increases the survival of cancer patients. Yet the promotion of the idea persists because it is consistent with the power of mind over body and disease, an attractive talking point.

I have not given up in my efforts to get the data to demonstrate that this trial did not show that psychotherapy extends the survival of cancer patients, but I am blocked by the unwillingness of authorities to enforce data sharing rules that they espouse.

There are obvious parallels between the politics behind persistence of the claim in the US for psychotherapy increasing survival time for cancer patients and those in the UK about cognitive behavior therapy being sufficient treatment for schizophrenia in the absence of medication or producing recovery from the debilitating medical condition, Chronic Fatigue Syndrome/Myalgic Encephalomyelitis. There are also parallels to investigators making controversial claims based on multivariate analyses, but not allowing access to data to independently evaluate the analyses. In both cases, patient well-being suffers.

If the ICO order to release the PACE trial data is upheld in the UK, it will put pressure on the US NIH to stop hypocritically endorsing data sharing while rewarding investigators whose credibility depends on not sharing their data.

As seen in a PLOS One study, unwillingness to share data in response to formal requests is

associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Why the PACE investigators should not appeal

In the past, the PACE investigators have been quite dismissive of criticism, appearing to assume that being afflicted with Chronic Fatigue Syndrome/Myalgic Encephalomyelitis precludes a critic being taken seriously, even when the criticism is otherwise valid. However, with publication of the long-term follow-up data in Lancet Psychiatry, they are now contending with accomplished academics whose criticisms cannot be so easily brushed aside. Yes, the credibility of the investigators’ interpretations of their data is being challenged. And even if they do not believe they need to be responsive to patients, they need to be responsive to colleagues. Releasing the data is the only acceptable response; not doing so risks damage to their reputations.

QMUL, Professors White and Sharpe, let the People’s data go.