Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

The press release attached at the bottom of this post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment, with a quack explanation, delivered by unqualified persons. The promoters of the quack treatment earned a great deal of money from the trial, beyond the boost in credibility it gave their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements of osteopathy, life coaching and neuro-linguistic programming. That is far from a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American with decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves the loss of the opportunity to obtain less risky treatment, or simply to avoid the inconvenience and burden of a treatment that there is no scientific basis to expect would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not disclosed in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective. [5] Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment and rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability of receiving the treatment for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll in the trial in order to obtain the treatment free for their children.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement; those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This serves to stack the deck in any evaluation of the Lightning Process and inflate differences from the patients who did not receive it.
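To see concretely why concentrating nonspecific factors in one arm stacks the deck, here is a minimal toy simulation. All numbers are hypothetical, not trial data: both arms get zero specific benefit from treatment, but the unblinded intervention arm receives an assumed placebo-type boost to its subjective self-report scores.

```python
import random

random.seed(1)

# Hypothetical simulation (not trial data): neither arm gets any specific
# treatment benefit, but the unblinded intervention arm's subjective scores
# are shifted upward by nonspecific factors (enthusiasm, contact, support).
N = 100
nonspecific_boost = 5  # assumed placebo-type shift, arbitrary units

control = [random.gauss(50, 10) for _ in range(N)]
treated = [random.gauss(50, 10) + nonspecific_boost for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"control mean: {mean(control):.1f}")
print(f"treated mean: {mean(treated):.1f}")
# The arms separate on the subjective outcome even though the treatment
# itself contributed nothing: the design, not the therapy, made the "effect".
```

The point of the sketch is that no blinding plus subjective outcomes is sufficient to manufacture a between-group difference from nonspecific factors alone.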

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well. [7] Such coercion can interfere with patients getting appropriate help when they need it, with establishing appropriate expectations with parental and school authorities, and even with responding honestly to outcome assessments.

It’s not just patient activists and patients’ family members who object to the trial. As professionals have become better informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, the patients have been joined in their concerns by non-patient scientists and clinicians.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing

Notes

[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

The entry for neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP: Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment obtained with objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat who provided me with an advance copy of the press release from the Science Media Centre.

Danish RCT of cognitive behavior therapy for whatever ails your physician about you

I was asked by a Danish journalist to examine a randomized controlled trial (RCT) of cognitive behavior therapy (CBT) for functional somatic symptoms. I had not previously given the study a close look.

I was dismayed by how highly problematic the study was in so many ways.

I doubted that the results of the study showed any benefits to the patients or have any relevance to healthcare.

I then searched and found the website for the senior author’s clinical offerings. I suspected that the study was a mere experimercial, a marketing effort for the services he offered.

Overall, I think what I found hiding in plain sight has broader relevance to scrutinizing other studies claiming to evaluate the efficacy of CBT for what are primarily physical illnesses, not psychiatric disorders. Look at the other RCTs. I am confident you will find similar problems. But then there is the bigger picture…

[A controversial assessment ahead? You can stop here and read the full text of the study and its trial registration before continuing with my analysis.]

Schröder A, Rehfeld E, Ørnbøl E, Sharpe M, Licht RW, Fink P. Cognitive–behavioural group treatment for a range of functional somatic syndromes: randomised trial. The British Journal of Psychiatry. 2012.

A summary overview of what I found:

 The RCT:

  • Was unblinded to patients, interventionists, and to the physicians continuing to provide routine care.
  • Had a grossly unmatched, inadequate control/comparison group, which allows any benefit from nonspecific (placebo) factors in the trial to count toward the estimated efficacy of the intervention.
  • Relied on subjective self-report measures for primary outcomes.
  • With such a familiar trio of design flaws, even an inert homeopathic treatment would be found effective, if it were provided with the same positive expectations and support as the CBT in this RCT. [This may seem a flippant comment that reflects on my credibility, not the study. But please keep reading to my detailed analysis where I back it up.]
  • The study showed an inexplicably high rate of deterioration in both treatment and control group. Apparent improvement in the treatment group might only reflect less deterioration than in the control group.
  • The study is focused on unvalidated psychiatric diagnoses being applied to patients with multiple somatic complaints, some of whom may not yet have a medical diagnosis, but most clearly had confirmed physical illnesses.

But wait, there is more!

  • It’s not CBT that was evaluated, but a complex multicomponent intervention in which what was called CBT is embedded in a way that its contribution cannot be evaluated.

The “CBT” did not map well onto international understandings of the assumptions and delivery of CBT. The complex intervention included weeks of indoctrinating the patient with an understanding of their physical problems that incorporated simplistic pseudoscience before any CBT was delivered. The treatment focused on goals imposed by a psychiatrist that did not necessarily fit with patients’ sense of their most pressing problems and solutions.

And the kicker.

  • The authors switched primary outcomes – reconfiguring the scoring of their subjective self-report measures years into the trial, based on peeking at the results with the original scoring.

The investigators have a website marketing their services. Rather than a quality contribution to the literature, this study can be seen as an experimercial, doomed to bad science and questionable results before the first patient was enrolled. An undeclared conflict of interest in play? There is another serious undeclared conflict of interest for one of the authors.

For the uninformed and gullible, the study handsomely succeeds as an advertisement for the investigators’ services to professionals and patients.

Personally, I would be indignant if a primary care physician tried to refer me or a friend or family member to this trial. In the absence of overwhelming evidence to the contrary, I assume that people around me who complain of physical symptoms have legitimate physical concerns. If they do not yet have a confirmed diagnosis, it serves little purpose to stop the probing and refer them to psychiatrists. This trial operates with an anachronistic Victorian definition of a psychosomatic condition.

But why should we care about a patently badly conducted trial with switched outcomes? Is it only a matter of something being rotten in the state of Denmark? Aside from the general impact on the existing literature concerning CBT for somatic conditions, results of this trial were entered into a Cochrane review of nonpharmacological interventions for medically unexplained symptoms. I previously complained about one of the authors of this RCT also being listed as an author on another Cochrane review protocol. Prior to that, I complained to Cochrane about this author’s larger research group influencing a decision to include switched outcomes in another Cochrane review. A lot of us rightfully depend heavily on the verdict of Cochrane reviews for deciding best evidence. That trust is being put into jeopardy.

Detailed analysis

1. This is an unblinded trial, a particularly weak methodology for examining whether a treatment works.

The letter that alerted physicians to the trial had essentially encouraged them to refer patients they were having difficulty managing.

‘Patients with a long-term illness course due to medically unexplained or functional somatic symptoms who may have received diagnoses like fibromyalgia, chronic fatigue syndrome, whiplash associated disorder, or somatoform disorder.’

Patients and the physicians who referred them subsequently got feedback about to which group patients were assigned, either routine care or what was labeled as CBT. This information could have had a strong influence on the outcomes that were reported, particularly for the patients left in routine care.

Patients’ learning that they had not been assigned to the intervention group was undoubtedly disappointing and demoralizing. The information probably did nothing to improve the positive expectations and support available to patients in routine care. This could have had a nocebo effect. The feedback may have contributed to the otherwise inexplicably high rates of subjective deterioration [noted below] reported by patients left in the routine care condition. In contrast, the disclosure that patients had been assigned to the intervention group undoubtedly boosted the morale of both patients and physicians and increased the gratitude of the patients. This would be reflected in responses to the subjective outcome measures.

The gold standard alternative to an unblinded trial is a double-blind, placebo-controlled trial in which neither providers, nor patients, nor even the assessors rating outcomes know to which group particular patients were assigned. Of course, this is difficult to achieve in a psychotherapy trial. Yet a fair alternative is a psychotherapy trial in which patients and those who refer them are blind to the nature of the different treatments, and in which an effort is made to communicate credible positive expectations about the comparison control group.

Conclusion: A lack of blinding seriously biases this study toward finding a positive effect for the intervention, regardless of whether the intervention has any active, effective component.

2. A claim that this is a randomized controlled trial depends on the adequacy of the control offered by the comparison group, enhanced routine care. Just what is being controlled by the comparison? In evaluating a psychological treatment, it’s important that the comparison/control group offers the same frequency and intensity of contact, positive expectations, attention and support. This trial decidedly did not.

There were large differences between the intervention and control conditions in the amount of contact time. Patients assigned to the cognitive therapy condition received an additional nine 3.5-hour group sessions with a psychiatrist, plus the option of even more consultations. The more than 30 hours of contact time with a psychiatrist should be very attractive to patients who wanted such attention and could not otherwise obtain it. For some, it undoubtedly represented an opportunity to have someone listen to their complaints of pain and suffering in a way that had not previously happened. This is also more intensive than the psychotherapy typically offered in clinical trials, which is closer to 10 to 15 fifty-minute sessions.

The intervention group thus received substantially more support and contact time, which was delivered with more positive expectations. This wealth of nonspecific factors favoring the intervention group compromises any effort to disentangle the specific effects of an active ingredient in the CBT intervention package. From what has been said so far, the trial’s providing a fair and generalizable evaluation of the CBT intervention is nigh impossible.

Conclusion: This is a methodologically poor choice of control groups with the dice loaded to obtain a positive effect for CBT.

3. The primary outcomes, both as originally scored and after switching, are subjective self-report measures that are highly responsive to nonspecific treatments and to the alleviation of mild depressive symptoms and demoralization. They are not consistently related to objective changes in functioning. They are particularly problematic when used as outcome measures in an unblinded clinical trial with an inadequate control group.

There have been consistent demonstrations that assigning patients to inert treatments and measuring the outcomes with subjective measures may register improvements that will not correspond to what would be found with objective measures.

For instance, a provocative New England Journal of Medicine study showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment obtained with objective measures.

There have been a number of demonstrations that treatments such as the one offered in the present study to patient populations similar to those in the study produce changes in subjective self-report that are not reflected in objective measures.

Much of the improvement in primary outcomes occurred before the first assessment after baseline and not very much afterwards. The early response is consistent with a placebo response.

The study actually included one largely unnoticed objective measure, utilization of routine care. Presumably if the CBT was effective as claimed, it would have produced a significant reduction in healthcare utilization. After all, isn’t the point of this trial to demonstrate that CBT can reduce health-care utilization associated with (as yet) medically unexplained symptoms? Curiously, utilization of routine care did not differ between groups.

The combination of the choice of subjective outcomes, unblinded nature of the trial, and poorly chosen control group bring together features that are highly likely to produce the appearance of positive effects, without any substantial benefit to the functioning and well-being of the patients.

Conclusion: Evidence for the efficacy of a CBT package for somatic complaints that depends solely on subjective self-report measures is unreliable, and unlikely to generalize to more objective measures of meaningful impact on patients’ lives.

4. We need to take into account the inexplicably high rates of deterioration in both groups, but particularly in the control group receiving enhanced care.

There was unexplained deterioration in 50% of the control group and 25% of the intervention group. Rates of deterioration are given only a one-sentence mention in the article, but deserve much more attention. These rates need to qualify and dampen any generalizable clinical interpretation of other claims about outcomes attributed to the CBT. We need to keep in mind that clinical trials cannot determine how effective treatments are, only how different a treatment is from a control group. An effect claimed for a treatment can therefore largely or entirely come from deterioration in the control group, not from anything the treatment offers. The claim of success for CBT probably depends largely on the deterioration in the control group.
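The arithmetic of that point can be made explicit. A minimal sketch using only the deterioration rates quoted above (50% vs. 25%); the 10-point worsening per deteriorating patient is an assumed value for illustration, not a figure from the trial:

```python
# Illustrative arithmetic: the deterioration rates come from the text above;
# the per-patient worsening of 10 points is an assumed value.
deteriorated_control = 0.50
deteriorated_treated = 0.25
worsening = 10  # assumed points lost by each deteriorating patient

# Mean change per arm, supposing nobody improves at all:
mean_change_control = -deteriorated_control * worsening  # -5.0
mean_change_treated = -deteriorated_treated * worsening  # -2.5

between_group_effect = mean_change_treated - mean_change_control
print(between_group_effect)  # 2.5 points in favor of "treatment"
# A positive between-group "effect" emerges even though the average
# patient in BOTH arms got worse and nobody improved.
```

Whatever the actual worsening per patient, the sign of the result is the same: less deterioration in one arm is indistinguishable, in a between-group comparison, from genuine improvement.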

One interpretation of this trial is that spending an extraordinary 30 hours with a psychiatrist leads to only half the deterioration experienced by those receiving nothing more than routine care. But this raises the question of why such a large proportion of the patients left in routine care were deteriorating. What possibly could be going on?

Conclusion: Unexplained deterioration in the control group may explain apparent effects of the treatment, but both groups are doing badly.

5. The diagnosis of “functional somatic symptoms” or, as the authors prefer, Severe Bodily Distress Syndromes, is considered by the authors to be a psychiatric diagnosis. It is not accepted as a valid diagnosis internationally. Its validation is limited to work done almost entirely within the author group, which is explicitly labeled as “preliminary.” This biased sample of patients is quite heterogeneous, beyond their physicians having difficulty managing them. They have a full range of subjective complaints and documented physical conditions. Many of these patients would not be considered as primarily having a psychiatric disorder internationally, and certainly not within the US, except where they had major depression or an anxiety disorder. Such psychiatric disorders were not exclusion criteria.

Once sent on the pathway to a psychiatric diagnosis by their physicians’ making a referral to the study, patients had to meet additional criteria:

To be eligible for participation individuals had to have a chronic (i.e. of at least 2 years duration) bodily distress syndrome of the severe multi-organ type, which requires functional somatic symptoms from at least three of four bodily systems, and moderate to severe impairment in daily living.

The condition identified in the title of the article is not validated as a psychiatric diagnosis. The two papers the authors cite are their own studies (1, 2), drawn from a single sample. The title of one of these papers makes a rather immodest claim:

Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. Journal of Psychosomatic Research. 2010 May 31;68(5):415-26.

In neither the two papers nor the present RCT is there sufficient effort to rule out a physical basis for the complaints qualifying these patients for a psychiatric diagnosis. There is also a lack of follow-up to see if physical diagnoses were later applied.

The citation patterns of these papers strongly suggest the authors have not gotten much traction internationally. The criterion of symptoms from three of four bodily systems is arbitrary and unvalidated. Many patients with known physical conditions would meet these criteria without any psychiatric diagnosis being warranted.

The authors relate their essentially homegrown diagnosis to functional somatic syndromes, diagnoses which are themselves subject to serious criticism. See, for instance, the work of Allen Frances, M.D., who had been the chair of the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM-IV) Task Force. He became a harsh critic of the shortcomings of the next DSM and of the APA’s failure to correct its coverage of functional somatic syndromes.

Mislabeling Medical Illness As Mental Disorder

Unless DSM-5 changes these incredibly over inclusive criteria, it will greatly increase the rates of diagnosis of mental disorders in the medically ill – whether they have established diseases (like diabetes, coronary disease or cancer) or have unexplained medical conditions that so far have presented with somatic symptoms of unclear etiology.

And:

The diagnosis of mental disorder will be based solely on the clinician’s subjective and fallible judgment that the patient’s life has become ‘subsumed’ with health concerns and preoccupations, or that the response to distressing somatic symptoms is ‘excessive’ or ‘disproportionate,’ or that the coping strategies to deal with the symptom are ‘maladaptive’.

And:

“These are inherently unreliable and untrustworthy judgments that will open the floodgates to the overdiagnosis of mental disorder and promote the missed diagnosis of medical disorder.”

The DSM-5 Task Force refused to adopt the changes proposed by Dr. Frances.

Bad News: DSM 5 Refuses to Correct Somatic Symptom Disorder

Leading Frances to apologize to patients:

My heart goes out to all those who will be mislabeled with this misbegotten diagnosis. And I regret and apologize for my failure to be more effective.

The chair of The DSM Somatic Symptom Disorder work group has delivered a scathing critique of the very concept of medically unexplained symptoms.

Dimsdale JE. Medically unexplained symptoms: a treacherous foundation for somatoform disorders? Psychiatric Clinics of North America. 2011 Sep 30;34(3):511-3.

Dimsdale noted that applying this psychiatric diagnosis sidesteps the quality of medical examination that led up to it. Furthermore:

Many illnesses present initially with nonspecific signs such as fatigue, long before the disease progresses to the point where laboratory and physical findings can establish a diagnosis.

And such diagnoses may encompass far too varied a group of patients for any intervention to make sense:

One needs to acknowledge that diseases are very heterogeneous. That heterogeneity may account for the variance in response to intervention. Histologically, similar tumors have different surface receptors, which affect response to chemotherapy. Particularly in chronic disease presentations such as irritable bowel syndrome or chronic fatigue syndrome, the heterogeneity of the illness makes it perilous to diagnose all such patients as having MUS and an underlying somatoform disorder.

I tried making sense of a table of the additional diagnoses that the patients in this study had been given. A considerable proportion of patients had physical conditions that would not be considered psychiatric problems in the United States. Many patients could be suffering from multiple symptoms arising not only from these conditions but from side effects of the medications they were taking. It is very difficult to manage the multiple medications required by multiple comorbidities. Community physicians found that these patients taxed their competence and the time they had available.

[Table: additional diagnoses of functional somatic symptoms]

Most patients had a diagnosis of “functional headaches.” It’s not clear what this designation means, but conceivably it could include migraine headaches, which are accompanied by multiple physical complaints. CBT is not an evidence-based treatment of choice for functional headaches, much less migraines.

Over a third of the patients had irritable bowel syndrome (IBS). A systematic review of the comorbidity of irritable bowel syndrome concluded that physical comorbidity is the norm in IBS:

The nongastrointestinal nonpsychiatric disorders with the best-documented association are fibromyalgia (median of 49% have IBS), chronic fatigue syndrome (51%), temporomandibular joint disorder (64%), and chronic pelvic pain (50%).

In the United States, many patients and specialists would consider treating irritable bowel syndrome as a psychiatric condition offensive and counterproductive. There is growing evidence that irritable bowel syndrome involves a disturbance in the gut microbiota. It involves a gut-brain interaction, but the primary direction of influence is from the disturbance in the gut to the brain. Anxiety and depression symptoms are secondary manifestations, a product of activity in the gut influencing the nervous system.

Most of the patients in the sample had a diagnosis of fibromyalgia and over half of all patients in this study had a diagnosis of chronic fatigue syndrome.

Other patients had diagnosable anxiety and depressive disorders, which, particularly at the lower end of severity, are responsive to nonspecific treatments.

Undoubtedly many of these patients, perhaps most of them, are demoralized by not being able to get a diagnosis for what they have good reason to believe is a medical condition, quite apart from the discomfort, pain, and interference with their lives that they are experiencing. They could be experiencing demoralization secondary to physical illness.

These patients presented with pain, fatigue, general malaise, and demoralization. I have trouble imagining how their most pressing concerns could be addressed in group settings. Such patients pose particular problems for making substantive clinical interpretations of outcomes that are highly general and subjective.

Conclusion: Diagnosing patients with multiple physical symptoms as having a psychiatric condition is highly controversial. Results will not generalize to countries and settings where the practice is not accepted. Many of the patients involved in the study had recognizable physical conditions, and yet they were being shunted to psychiatrists who focused only on their attitudes towards the symptoms. They were being denied the specialist care and treatments that might conceivably reduce the impact of their conditions on their lives.

6. The “CBT” offered in this study is part of a complex, multicomponent treatment that does not resemble cognitive behavior therapy as it is practiced in the United States.

As seen in figure 1 in the article, the multicomponent intervention is quite complex and consists of more than cognitive behavior therapy. Moreover, at least in the United States, CBT has distinctive elements of collaborative empiricism: patients and therapist work together selecting issues on which to focus and developing strategies, with the patients reporting back on their efforts to implement them. From the details available in the article, the treatment sounded much more like exhortation or indoctrination, even arguing with the patients if necessary. An English version of the educational material used in initial sessions, available on the web, confirmed that a lot of condescending pseudoscience was presented to convince the patients that their problems were largely in their heads.

Without a clear application of learning theory, behavioral analysis, or cognitive science, the “CBT” treatment offered in this RCT has much more in common with the creative novation therapy offered by Hans Eysenck, which is now known to have been justified with fraudulent data. Indeed, comparing the educational materials for this study with what was offered in Eysenck’s study reveals striking similarities. Eysenck advanced the claim that his intervention could prevent cardiovascular disease and cancer and overcome iatrogenic effects. I know, this sounds really crazy, but see my careful documentation elsewhere.

Conclusion: The embedding of an unorthodox “CBT” in a multicomponent intervention in this study does not allow isolating any specific, active component of CBT that might be at work.

7. The investigators disclose having altered their scoring of their primary outcome years after the trial began, and probably after a lot of outcome data had been collected.

I found a casual disclosure in the methods section of this article unsettling, particularly in light of what the original trial registration had specified:

We found an unexpected moderate negative correlation of the physical and mental component summary measures, which are constructed as independent measures. According to the SF-36 manual, a low or zero correlation of the physical and mental components is a prerequisite of their use.23 Moreover, three SF-36 scales that contribute considerably to the PCS did not fulfil basic scaling assumptions.31 These findings, together with a recent report of problems with the PCS in patients with physical and mental comorbidity,32 made us concerned that the PCS would not reliably measure patients’ physical health in the study sample. We therefore decided before conducting the analysis not to use the PCS, but to use instead the aggregate score as outlined above as our primary outcome measure. This decision was made on 26 February 2009 and registered as a protocol change at clinical trials. gov on 11 March 2009. Only baseline data had been analysed when we made our decision and the follow-up data were still concealed.

Switching outcomes, particularly after some results are known, constitutes a serious violation of best research practices and leads to suspicion of the investigators refining their hypotheses after they had peeked at the data. See How researchers dupe the public with a sneaky practice called “outcome switching”.

The authors had originally proposed a scoring consistent with a very large body of literature. Dropping the original scoring precludes any direct comparison with this body of research, including basic norms. They claim that they switched scoring because two key subscales were correlated in the opposite direction from what is reported in the larger literature. This is a troubling indication that something went terribly wrong in the authors’ recruitment of a sample. It should not be swept under the rug.

The authors claim that they switched outcomes based only on an examination of baseline data from their study. However, one of the authors, Michael Sharpe, is also an author on the controversial PACE trial. A parallel switch was made to the scoring of the subjective self-reports in that trial. When the data were eventually re-analyzed using the original scoring, any positive findings for the trial were substantially reduced and arguably disappeared.

Even if the authors of the present RCT did not peek at their outcome data before deciding to switch scoring of the primary outcome, they certainly had strong indications from other sources that the original scoring would produce weak or null findings. In 2009, one of the authors, Michael Sharpe, had access to results of a relevant trial: the so-called FINE trial had null findings, which affected decisions to switch outcomes in the PACE trial. Is it just a coincidence that the scoring of the outcomes was then switched for the present RCT?

Conclusion: The outcome switching for the present trial  represents bad research practices. For the trial to have any credibility, the investigators should make their data publicly available so these data could be independently re-analyzed with the original scoring of primary outcomes.

The senior author’s clinic

I invite readers to take a virtual tour of the website for the senior author’s clinical services. Much of it is available in English. Recently, I blogged about dubious claims of a health care system in Detroit achieving a goal of “zero suicide.” I suggested that the evidence for this claim was quite dubious, but that it was a powerful advertisement for the health care system. I think the present report of an RCT can similarly be seen as an infomercial for training and clinical services available in Denmark.

Conflict of interest

 No conflict of interest is declared for this RCT. Under somewhat similar circumstances, I formally complained about undeclared conflicts of interest in a series of papers published in PLOS One. A correction has been announced, but not yet posted.

Aside from the senior author’s need to declare a conflict of interest, the same can be said for one of the authors, Michael Sharpe.

Apart from his professional and reputational interest (his whole career has been built on making strong claims about such interventions), Sharpe works for insurance companies and publishes on the subject. He declared a conflict of interest for the PACE trial.

MS has done voluntary and paid consultancy work for government and for legal and insurance companies, and has received royalties from Oxford University Press.

Here’s Sharpe’s report written for the social benefits reinsurance company UnumProvident.

If the results of this trial are accepted at face value, they will lend credibility to claims that effective interventions are available to reduce social disability. It does not matter that the intervention is not effective. Rather, persons receiving social disability payments can be disqualified because they are not enrolled in such treatment.

Effects on the credibility of the Cochrane Collaboration report

The switched outcomes of the trial were entered into a Cochrane systematic review, to which primary care health professionals look for guidance in dealing with a complex clinical situation. The review gives no indication of the host of problems that I exposed here. Furthermore, I have glanced at some of the other trials included and I see similar difficulties.

I have been unable to convince Cochrane to clean up the conflicts of interest attached to switched outcomes being entered into reviews. Perhaps some of my readers will want to approach Cochrane to revisit this issue.

I think this post raises larger issues about whether Cochrane has any business conducting and disseminating reviews of such a bogus psychiatric diagnosis as medically unexplained symptoms. These reviews do patients no good, and may sidetrack them from getting the medical care they deserve. The reviews do, however, serve special interests, including disability insurance companies.

Special thanks to John Peters and to Skeptical Cat for their assistance with my writing this blog. However, I have sole responsibility for any excesses or distortions.

 

Before you enroll your child in the MAGENTA chronic fatigue syndrome study: Issues to be considered

[October 3 8:23 AM Update: I have now inserted Article 21 of the Declaration of Helsinki below, which is particularly relevant to discussions of the ethical problems of Dr. Esther Crawley’s previous SMILE trial.]

Petitions are calling for shutting down the MAGENTA trial. Those who organized the effort and signed the petition are commendably brave, given past vilification of any effort by patients and their allies to have a say about such trials.

Below I identify a number of issues that parents should consider in deciding whether to enroll their children in the MAGENTA trial or to withdraw them if they have already been enrolled. I take a strong stand, but I believe I have adequately justified and documented my points. I welcome discussion to the contrary.

This is a long read but to summarize the key points:

  • The MAGENTA trial does not promise any health benefits for the children participating in the trial. The information sheet for the trial was recently modified to suggest they might benefit. However, earlier versions clearly stated that no benefit was anticipated.
  • There is inadequate disclosure of likely harms to children participating in the trial.
  • The likelihood of any health benefit can be gauged from the existing literature concerning the effectiveness of the graded exercise therapy intervention with adults. Obtaining funding for the MAGENTA trial depended on a misrepresentation of the strength of evidence that it works in adult populations. I am talking about the PACE trial.
  • Beyond any direct benefit to their children, parents might be motivated by the hope of contributing to science and the availability of effective treatments. However, these possible benefits depend on publication of results of a trial after undergoing peer review. The Principal Investigator for the MAGENTA trial, Dr. Esther Crawley, has a history of obtaining parents’ consent for participation of their children in the SMILE trial, but then not publishing the results in a timely fashion. Years later, we are still waiting.
  • Dr. Esther Crawley exposed children to unnecessary risk without likely benefit in her conduct of the SMILE trial. This clinical trial involved inflicting a quack treatment on children. Parents were not adequately informed of the nature of the treatment and the absence of evidence for any mechanism by which the intervention could conceivably be effective. This reflects on the due diligence that Dr. Crawley can be expected to exercise in the MAGENTA trial.
  • The consent form for the MAGENTA trial involves parents granting permission for the investigator to use children and parents’ comments concerning effects of the treatment for its promotion. Insufficient restrictions are placed on how the comments can be used. There is the clear precedent of comments made in the context of the SMILE trial being used to promote the quack Lightning Process treatment in the absence of evidence that treatment was actually effective in the trial. There is no guarantee that any comments collected from children and parents in the MAGENTA trial would not similarly be misused.
  • Dr. Esther Crawley participated in a smear campaign against parents having legitimate concerns about the SMILE trial. Parents making legitimate use of tools provided by the government such as Freedom of Information Act requests, appeals of decisions of ethical review boards and complaints to the General Medical Council were vilified and shamed.
  • Dr. Esther Crawley has provided direct, self-incriminating quotes in the newsletter of the Science Media Centre about how she was coached and directed by their staff to slam the patient community.  She played a key role in a concerted and orchestrated attack on the credibility of not only parents of participants in the MAGENTA trial, but of all patients having chronic fatigue syndrome/ myalgic encephalomyelitis , as well as their advocates and allies.

I am not a parent of a child eligible for recruitment to the MAGENTA trial. I am not even a citizen or resident of the UK. Nonetheless, I have considered the issues and lay out some of my considerations below. On this basis, I signed the global support version  of the UK petition to suspend all trials of graded exercise therapy in children and adults with ME/CFS. I encourage readers who are similarly in my situation outside the UK to join me in signing the global support petition.

If I were a parent of an eligible child or a resident of the UK, I would not enroll my child in MAGENTA. I would immediately withdraw my child if he or she were currently participating in the trial. I would request all the child’s data be given back or evidence that it had been destroyed.

I recommend my PLOS Mind the Brain post, What patients should require before consenting to participate in research…  as either a prelude or epilogue to the following blog post.

What you will find here is a discussion of matters that parents should consider before enrolling their children in the MAGENTA trial of graded exercise for chronic fatigue syndrome. The previous blog post [http://blogs.plos.org/mindthebrain/2015/12/09/what-patients-should-require-before-consenting-to-participate-in-research/ ]  is rich in links to an ongoing initiative from The BMJ to promote broader involvement of patients (and implicitly, parents of patients) in the design, implementation, and interpretation of clinical trials. The views put forth by The BMJ are quite progressive, even if there is a gap between their expression of views and their actual implementation. Overall, that blog post presents a good set of standards for patients (and parents) making informed decisions concerning enrollment in clinical trials.

Late-breaking update: See also

Simon McGrath: PACE trial shows why medicine needs patients to scrutinise studies about their health

Basic considerations.

Patients are under no obligation to participate in clinical trials. It should be recognized that any participation typically involves burden and possibly risk over what is involved in receiving medical care outside of a clinical trial.

It is a deprivation of their human rights and a violation of the Declaration of Helsinki to coerce patients to participate in medical research without freely given, fully informed consent.

Patients cannot be denied any medical treatment or attention to which they would otherwise be entitled if they fail to enroll in a clinical trial.

Issues are compounded when consent from parents is sought for participation of vulnerable children and adolescents for whom they have legal responsibility. Although assent to participate in clinical trials is sought from children and adolescents, it remains for their parents to consent to their participation.

Parents can at any time withdraw their consent for their children and adolescents participating in trials and have their data removed, without requiring the approval of any authorities of their reason for doing so.

Declaration of Helsinki

The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data.

It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

[October 3 8:23 AM Update]: I have now inserted Article 21 of the Declaration of Helsinki which really nails the ethical problems of the SMILE trial:

21. Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.

There is clearly inadequate scientific justification for testing the quack Lightning Process treatment.

What Is the Magenta Trial?

The published MAGENTA study protocol states

This study aims to investigate the acceptability and feasibility of carrying out a multicentre randomised controlled trial investigating the effectiveness of graded exercise therapy compared with activity management for children/teenagers who are mildly or moderately affected with CFS/ME.

Methods and analysis 100 paediatric patients (8–17 years) with CFS/ME will be recruited from 3 specialist UK National Health Service (NHS) CFS/ME services (Bath, Cambridge and Newcastle). Patients will be randomised (1:1) to receive either graded exercise therapy or activity management. Feasibility analysis will include the number of young people eligible, approached and consented to the trial; attrition rate and treatment adherence; questionnaire and accelerometer completion rates. Integrated qualitative methods will ascertain perceptions of feasibility and acceptability of recruitment, randomisation and the interventions. All adverse events will be monitored to assess the safety of the trial.

The first of two treatments being compared is:

Arm 1: activity management

This arm will be delivered by CFS/ME specialists. As activity management is currently being delivered in all three services, clinicians will not require further training; however, they will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). Clinicians therefore have flexibility in delivering the intervention within their National Health Service (NHS) setting. Activity management aims to convert a ‘boom–bust’ pattern of activity (lots 1 day and little the next) to a baseline with the same daily amount before increasing the daily amount by 10–20% each week. For children and adolescents with CFS/ME, these are mostly cognitive activities: school, schoolwork, reading, socialising and screen time (phone, laptop, TV, games). Those allocated to this arm will receive advice about the total amount of daily activity, including physical activity, but will not receive specific advice about their use of exercise, increasing exercise or timed physical exercise.
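To make the prescribed progression concrete: here is a minimal sketch (my own illustration, not code or figures from the trial; the 60 minutes/day starting point is hypothetical) of how a 10-20% weekly increase compounds from a stabilized baseline.

```python
# Hedged illustration of the activity-management arithmetic quoted above.
# The baseline value is hypothetical; the trial sets a baseline per child.
def weekly_targets(baseline_minutes, weekly_increase, weeks):
    """Daily-activity target for each week, compounding by weekly_increase."""
    targets = []
    current = float(baseline_minutes)
    for _ in range(weeks):
        targets.append(round(current))
        current *= 1 + weekly_increase
    return targets

# A hypothetical child stabilized at 60 minutes/day of (mostly cognitive) activity:
print(weekly_targets(60, 0.10, 6))  # 10%/week, lower bound of the stated range
print(weekly_targets(60, 0.20, 6))  # 20%/week, upper bound of the stated range
```

Even the lower bound implies an increase of roughly 60% over six weeks, and the upper bound more than doubles the baseline, which is worth bearing in mind when weighing patient reports of post-exertional worsening.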

So, the first arm of the trial is a comparison condition consisting of standard care delivered without further training of providers. The treatment is flexibly delivered, expected to vary between settings, and thus largely uncontrolled. The treatment represents a methodologically weak condition that does not adequately control for attention and positive expectations. Control conditions should be equivalent to the intervention being evaluated in these dimensions.

The second arm of the study:

Arm 2: graded exercise therapy (GET)

This arm will be delivered by referral to a GET-trained CFS/ME specialist who will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). They will be encouraged to deliver GET as they would in their NHS setting.20 Those allocated to this arm will be offered advice that is focused on exercise with detailed assessment of current physical activity, advice about exercise and a programme including timed daily exercise. The intervention will encourage children and adolescents to find a baseline level of exercise which will be increased slowly (by 10–20% a week, as per NICE guidance5 and the Pacing, graded Activity and Cognitive behaviour therapy – a randomised Evaluation (PACE)12 ,21). This will be the median amount of daily exercise done during the week. Children and adolescents will also be taught to use a heart rate monitor to avoid overexertion. Participants will be advised to stay within the target heart rate zones of 50–70% of their maximum heart rate.5 ,7
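The target heart-rate zone quoted above is simple arithmetic once a maximum heart rate is estimated. A minimal sketch, assuming the common "220 minus age" estimate of maximum heart rate (an assumption on my part; the protocol may specify a different method):

```python
# Hedged illustration of the 50-70% heart-rate zone used in the GET arm.
# Assumption: maximum heart rate estimated as 220 - age, a common rule of
# thumb that is not necessarily the formula the trial itself uses.
def target_hr_zone(age, low=0.50, high=0.70):
    """Return the (low, high) target heart-rate zone in beats per minute."""
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

# For a hypothetical 14-year-old participant:
print(target_hr_zone(14))  # (103, 144) beats per minute
```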

The outcome of the trial will be evaluated in terms of

Quantitative analysis

The percentage recruited of those eligible will be calculated… Retention will be estimated as the percentage of recruited children and adolescents reaching the primary 6-month follow-up point, who provide key outcome measures (the Chalder Fatigue Scale and the 36-Item Short-Form Physical Functioning Scale (SF-36 PFS)) at that assessment point.

Objective data will be collected in the form of physical activity measured by accelerometers. These are

Small, matchbox-sized devices that measure physical activity. They have been shown to provide reliable indicators of physical activity among children and adults.

However, the actual evaluation of the outcome of the trial will focus on recruitment and retention and on subjective, self-report measures of fatigue and physical functioning. These subjective measures have been shown to be less valid than objective measures. Scores are vulnerable to participants knowing which condition they have been assigned to (called ‘being unblinded’) and to their perception of which intervention the investigators prefer.

It is notable that in the PACE trial of CBT and GET for chronic fatigue syndrome in adults, the investigators manipulated participants’ self-reports with praise in newsletters sent out during the trial. The investigators also switched their scoring of the self-report measures and produced results that they later conceded to have been exaggerated by the change in scoring [http://www.wolfson.qmul.ac.uk/current-projects/pace-trial#news ].

Tom Kindlon, Irish ME/CFS Association Officer

See an excellent commentary by Tom Kindlon at PubMed Commons [What’s that? ]

The validity of using subjective outcome measures as primary outcomes is questionable in such a trial

The bottom line is that the investigators have a poorly designed study with an inadequate control condition. They have chosen subjective self-reports that are prone to invalidity and manipulation over objective measures like actual changes in activity, or practical real-world measures like school attendance. Not very good science here. But they are asking parents to sign their children up.

What is promised to parents consenting to have the children enrolled in the trial?

The published protocol to which the investigators supposedly committed themselves stated

What are the possible benefits and risks of participating?
Participants will not benefit directly from taking part in the study although it may prove enjoyable contributing to the research. There are no risks of participating in the study.

Version 7 of the information sheet provided to parents states

Your child may benefit from the treatment they receive, but we cannot guarantee this. Some children with CFS/ME like to know that they are helping other children in the future. Your child may also learn about research.

Survey assessments conducted by the patient community strongly contradict the suggestion that there is no risk of harm with GET.

Alem Matthees, the patient activist who obtained release of the PACE data and participated in its reanalysis, has commented:

“Given that post-exertional symptomatology is a hallmark of ME/CFS, it is premature to do trials of graded exercise on children when safety has not first been properly established in adults. The assertion that graded exercise is safe in adults is generally based on trials where harms are poorly reported or where the evidence of objectively measured increases in total activity levels is lacking. Adult patients commonly report that their health was substantially worsened after trying to increase their activity levels, sometimes severely and permanently, therefore this serious issue cannot be ignored when recruiting children for research.”

See also

Kindlon T. Reporting of harms associated with graded exercise therapy and cognitive behavioural therapy in myalgic encephalomyelitis/chronic fatigue syndrome. Bulletin of the IACFS/ME. 2011;19(2):59-111.

This thorough systematic review reports inadequate harm reporting in clinical trials, but:

Exercise-related physiological abnormalities have been documented in recent studies and high rates of adverse reactions to exercise have been recorded in a number of patient surveys. Fifty-one percent of survey respondents (range 28-82%, n=4338, 8 surveys) reported that GET worsened their health while 20% of respondents (range 7-38%, n=1808, 5 surveys) reported similar results for CBT.

The unpublished results of Dr. Esther Crawley’s SMILE trial

 A Bristol University website indicates that recruitment of the SMILE trial was completed in 2013. The published protocol for the SMILE trial

[Note the ® in the title below, indicating a test of trademarked commercial product. The significance of that is worthy of a whole other blog post. ]

Crawley E, Mills N, Hollingworth W, Deans Z, Sterne JA, Donovan JL, Beasant L, Montgomery A. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial). Trials. 2013 Dec 26;14(1):1.

states:

The data monitoring group will receive notice of serious adverse events (SAEs) for the sample as whole. If the incidence of SAEs of a similar type is greater than would be expected in this population, it will be possible for the data monitoring group to receive data according to trial arm to determine any evidence of excess in either arm.

Primary outcome data at six months will be examined once data are available from 50 patients, to ensure that neither arm is having a detrimental effect on the majority of patients. An independent statistician with no other involvement in the study will investigate whether more than 20 participants in the study sample as a whole have experienced a reduction of ≥ 30 points on the SF-36 at six months. In this case, the data will then be summarised separately by trial arm, and sent to the data monitoring group for review. This process will ensure that the trial team will not have access to the outcome data separated by treatment arm.

The trial was thus completed a number of years ago, but these valuable data have never been published.

The only publication from the trial so far uses selective quotes from child participants that cannot be independently evaluated. Readers are not told how representative these quotes are, the outcomes for the children being quoted, or the overall outcomes of the trial.

Parslow R, Patel A, Beasant L, Haywood K, Johnson D, Crawley E. What matters to children with CFS/ME? A conceptual model as the first stage in developing a PROM. Archives of Disease in Childhood. 2015 Dec 1;100(12):1141-7.

The “evaluation” of the quack Lightning Process in the SMILE trial and quotes from patients have also been used to promote Parker’s products as being used in NHS clinics.

How can I say the Lightning Process is quackery?

Dr. Crawley describes the Lightning Process in the Research Ethics Application Form for the SMILE study as combining the principles of neurolinguistic programming, osteopathy, and clinical hypnotherapy.

That is an amazing array of three different frameworks from different disciplines. You would be hard pressed to find an example other than the Lightning Process that claims to integrate them. Yet a proposed mechanism for a therapeutic intervention cannot be a creative stir fry of whatever is on hand thrown together. For a treatment to be considered science-based, there has to be a solid basis of evidence that these presumably complex processes fit together as assumed and work as assumed. I challenge Dr. Crawley or anyone else to produce a shred of credible, peer-reviewed evidence for the basic mechanism of the Lightning Process.

The entry for Neuro-linguistic programming (NLP) in Wikipedia states

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

 The Hampshire (UK) County Council Trading Standards Office filed a formal complaint against Phil Parker for claims made on the Lightning Process website concerning effects on CFS/ME:

The “CFS/ME” page of the website included the statements “Our survey found that 81.3 %* of clients report that they no longer have the issues they came with by day three of the LP course” and “The Lightning Process is working with the NHS on a feasibility study, please click here for further details, and for other research information click here”.

Seeming endorsements on Parker’s website. Two of them, Northern Ireland and NHS Suffolk, subsequently complained that use of their insignias was unauthorized, and they were quickly removed.

The “working with the NHS” refers to the collaboration with Dr. Esther Crawley.

The UK Advertising Standards Authority upheld this complaint, as well as complaints about Parker’s claims of effectiveness with other conditions, including multiple sclerosis, irritable bowel syndrome, and fibromyalgia.

 Another complaint in 2013 about claims on Phil Parker’s website was similarly upheld:

 The claims must not appear again in their current form. We welcomed the decision to remove the claims. We told Phil Parker Group not to make claims on websites within their control that were directly connected with the supply of their goods and services if those claims could not be supported with robust evidence. We also told them not to refer to conditions for which advice should be sought from suitably qualified health professionals.

 As we will see, these upheld charges of quackery occurred when parents of children participating in the SMILE trial were being vilified in the BMJ and elsewhere. Dr. Crawley was prominently featured in this vilification and was quoted in a celebration of its success by the Science Media Centre, which had orchestrated the vilification.


The Research Ethics Committee approval of the SMILE trial and the aftermath

I was not very aware of the CFS/ME literature, and certainly not of all its controversies, when the South West Research Ethics Committee (REC) reviewed the application for the SMILE trial and ultimately approved it on September 8, 2010.

I would have had strong opinions about it, but I only started blogging a little afterwards. Even then, I was very concerned about patients being exposed, in other contexts, to alternative and unproven medical treatments that were not evidence-based – even more so to treatments for which promoters claimed implausible mechanisms of action. I would not have felt it appropriate to inflict the Lightning Process on unsuspecting children. It is insufficient justification to put them in a clinical trial simply because a particular treatment has not been evaluated.

Prince Charles once advocated organic coffee enemas to treat advanced cancer. His endorsement generated a lot of curiosity from cancer patients, but that would not justify a randomized trial of coffee enemas. By analogy, I don't think Dr. Esther Crawley had sufficient justification to conduct her trial, especially without warnings that there was no scientific basis to expect the Lightning Process to work and no assurance that it would not hurt the children.

I am concerned about clinical trials that have little likelihood of producing evidence that a treatment is effective, but that seem designed to get these treatments into routine clinical care. It is now appreciated that some clinical trials have little scientific value but serve as "experimercials," a means of placing products in clinical settings. Pharmaceutical companies notoriously do this.

As it turned out, the SMILE trial succeeded admirably as a promotion for the Lightning Process, earning Phil Parker unknown but substantial fees, not only through its use in the trial but also through successful marketing throughout the NHS afterwards.

In short, I would have been concerned about the judgment of Dr. Esther Crawley in organizing the SMILE trial. I would have been quite curious about conflicts of interest and whether patients were adequately informed of how Phil Parker was benefiting.

The ethics review of the SMILE trial gave short shrift to these important concerns.

When the patient community and its advocate, Dr. Charles Shepherd, became aware of the SMILE trial's approval, there were protests leading to re-evaluations all the way up to the National Patient Safety Agency. Examining an Extract of Minutes from the South West 2 REC meeting held on 2 December 2010, I see many objections to the approval being raised, and I am unsatisfied by the way in which they were discounted.

Patient, parent, and advocate protests escalated. If some acted inappropriately, this did not undermine the legitimacy of others' protest. By analogy, I feel strongly about police violence aimed at African-Americans and racist policies that disproportionately target African-Americans for police scrutiny and stopping. I'm upset when agitators and provocateurs become violent at protests, but that does not delegitimize my concerns about the way black people are treated in America.

Dr. Esther Crawley undoubtedly experienced considerable stress and unfair treatment, but I don't understand why she was not responsive to patient concerns, nor why she failed to honor her responsibility to protect child patients from exposure to unproven and likely harmful treatments.

Dr. Crawley is extensively quoted in a British Medical Journal opinion piece authored by a freelance journalist,  Nigel Hawkes:

Hawkes N. Dangers of research into chronic fatigue syndrome. BMJ. 2011 Jun 22;342:d3780.

If I had been on the scene, Dr. Crawley might well have been describing me and how I would have reacted, including my exercising of appropriate, legally provided means of protest and complaint:

Critics of the method opposed the trial, first, Dr Crawley says, by claiming it was a terrible treatment and then by calling for two ethical reviews. Dr Shepherd backed the ethical challenge, which included the claim that it was unethical to carry out the trial in children, made by the ME Association and the Young ME Sufferers Trust. After re-opening its ethical review and reconsidering the evidence in the light of the challenge, the regional ethical committee of the NHS reiterated its support for the trial.

There was arguably some smearing of Dr. Shepherd, even amid some distancing of him from the actions of others:

This point of view, if not the actions it inspires, is defended by Charles Shepherd, medical adviser to and trustee of the ME Association. “The anger and frustration patients have that funding has been almost totally focused on the psychiatric side is very justifiable,” he says. “But the way a very tiny element goes about protesting about it is not acceptable.

The article escalated matters with unfair comparisons to animal rights activists and condemnation of the appropriate use of channels of complaint – reporting physicians to the General Medical Council.

The personalised nature of the campaign has much in common with that of animal rights activists, who subjected many scientists to abuse and intimidation in the 1990s. The attitude at the time was that the less said about the threats the better. Giving them publicity would only encourage more. Scientists for the most part kept silent and journalists desisted from writing about the subject, partly because they feared anything they wrote would make the situation worse. Some journalists have also been discouraged from writing about CFS/ME, such is the unpleasant atmosphere it engenders.

While the campaigners have stopped short of the violent activities of the animal rights groups, they have another weapon in their armoury—reporting doctors to the GMC. Willie Hamilton, an academic general practitioner and professor of primary care diagnostics at Peninsula Medical School in Exeter, served on the panel assembled by the National Institute for Health and Clinical Excellence (NICE) to formulate treatment advice for CFS/ME.

Simon Wessely and the Principal Investigator of the PACE trial, Peter White, were given free rein to dramatize the predicament posed by the protest.

Much later, in the 2016 Lower Tribunal hearing, PACE co-investigator Trudie Chalder would give testimony casting doubt on whether the harassment was as severe or violent as it had been portrayed. Before that, the financial conflicts of interest of Peter White that were denied in the article would be exposed.

In response to her testimony, the Information Tribunal stated:

Professor Chalder’s evidence when she accepts that unpleasant things have been said to and about PACE researchers only, but that no threats have been made either to researchers or participants.

But in 2012, a pamphlet celebrating the success of the Science Media Centre, started by Wessely, would be rich in indiscreet quotes from Esther Crawley. The BMJ article was revealed to be part of a much larger orchestrated campaign to smear, discredit, and silence patients, parents, advocates, and their allies.

Dr. Esther Crawley's participation in a campaign organized by the Science Media Centre to discredit patients, parents, advocates and supporters.

The SMC would later organize a letter-writing campaign to Parliament in support of Peter White and his refusal to release the PACE data to Alem Matthees, who had made a request under the Freedom of Information Act. The campaign was an effort to get scientific data excluded from the provisions of the Act. The effort failed, and the data were subsequently released.

But here is how Esther Crawley described her assistance:

The SMC organised a meeting so we could discuss what to do to protect researchers. Those who had been subject to abuse met with press officers, representatives from the GMC and, importantly, police who had dealt with the  animal rights campaign. This transformed my view of  what had been going on. I had thought those attacking us were “activists”; the police explained they were “extremists”.

And

We were told that we needed to make better use of the law and consider using the press in our favour – as had researchers harried by animal rights extremists. “Let the public know what you are trying to do and what is happening to you,” we were told. “Let the public decide.”

And

I took part in quite a few interviews that day, and have done since. I was also inundated with letters, emails and phone calls from patients with CFS/ME all over the world asking me to continue and not “give up”. The malicious, they pointed out, are in a minority. The abuse has stopped completely. I never read the activists’ blogs, but friends who did told me that they claimed to be “confused” and “upset” – possibly because their role had been switched from victim to abuser. “We never thought we were doing any harm…”

The patient community and its allies are still burdened by the damage of this campaign and are rebuilding their credibility only slowly. Only now are they beginning to get an audience as suffering human beings with significant, legitimate unmet needs. Only now are they escaping the stigmatization that occurred at this time, with Esther Crawley playing a key role.

Where does this leave us?

Parents are being asked to enroll their children in a clinical trial without clear benefit to the children but with the possibility of considerable risk from graded exercise. They are being asked by Esther Crawley, a physician who previously inflicted a quack treatment on children with CFS/ME in the guise of a clinical trial, for which she has never published the resulting data. She has played an effective role in damaging the legitimacy and capacity of patients and parents to complain.

Given this history and these factors, why would a parent possibly want to enroll their children in the MAGENTA trial? Somebody please tell me.

Special thanks to all the patient citizen-scientists who contributed to this blog post. Any inaccuracies or excesses are entirely my own, but these persons gave me substantial help. Some are named in the blog, but others prefer anonymity.

 All opinions expressed are solely those of James C Coyne. The blog post in no way conveys any official position of Mind the Brain, PLOS blogs or the larger PLOS community. I appreciate the free expression of  personal opinion that I am allowed.


What patients should require before consenting to participate in research…

A bold BMJ editorial calls for more patient involvement in the design, implementation, and interpretation of research – but ends on a sobering note: The BMJ has so little such involvement to report.

In this edition of Mind the Brain, I suggest how patients, individually and collectively, can take responsibility for advancing this important initiative themselves.

I write in a context defined by recent events.

  • Government-funded researchers offered inaccurate interpretations of their results [1, 2].
  • An unprecedented number of patients have judged the researchers’ interpretation of their results as harmful to their well-being.
  • The researchers then violated government-supported data sharing policies in refusing to release their data for independent analysis.
  • Patients were vilified in the investigators’ efforts to justify their refusal to release the data.

These events underscore the need for patients to require certain documentation before deciding whether to participate in research.

Declining to participate in clinical research is a patient’s inalienable right that must not jeopardize the receipt of routine treatment or lead to retaliation.

A simple step: in deciding whether to participate in research, patients can insist that any consent form they sign contains documentation of patient involvement at all phases of the research. If there is no detailing of how patients were involved in the design of this study and how they will be involved in the interpretation, patients should consider not consenting.

Similarly, patients should consider refusing to sign consent forms that do not expressly indicate that the data will be readily available for further analyses, preferably by placing the data in a publicly accessible depository.

Patients exercising their rights in these ways will make for better and more useful biomedical research, as well as research that is more patient-oriented.

The BMJ editorial

The editorial Research Is the Future, Get Involved declares:

More than three million NHS patients took part in research over the past five years. Bravo. Now let’s make sure that patients are properly involved, not just as participants but in trial conception, design, and conduct and the analysis, reporting, and dissemination of results.

But in the next sentences, the editorial describes how The BMJ’s laudable efforts to get researchers to demonstrate how patients were involved have not produced impressive results:

You may have noticed the new “patient involvement” box in The BMJ’s research articles. Sadly, all too often the text reads something like, “No patients were involved in setting the research question or the outcome measures; nor were they involved in the design and implementation of the study. There are no plans to involve patients in the dissemination of results.” We hope that the shock of such statements will stimulate change. Examples of good patient involvement will also help: see the multicentre randomised trial on stepped care for depression and anxiety (doi:10.1136/bmj.h6127).

Our plan is to shine a light on the current state of affairs and then gradually raise the bar. Working with other journals, research funders, and ethics committees, we hope that at some time in the future only research in which patients have been fully involved will be considered acceptable.

In their instructions to authors, The BMJ includes a section Reporting patients’ involvement in research which states:

As part of its patient partnership strategy, The BMJ is encouraging active patient involvement in setting the research agenda.

We appreciate that not all authors of research papers will have done this, and we will still consider your paper if you did not involve patients at an early stage. We do, however, request that all authors provide a statement in the methods section under the subheading Patient involvement.

This should provide a brief response to the following questions:

How was the development of the research question and outcome measures informed by patients’ priorities, experience, and preferences?

How did you involve patients in the design of this study?

Were patients involved in the recruitment to and conduct of the study?

How will the results be disseminated to study participants?

For randomised controlled trials, was the burden of the intervention assessed by patients themselves?

Patient advisers should also be thanked in the contributorship statement/acknowledgements.

If patients were not involved please state this.

If this information is not in the submitted manuscript we will ask you to provide it during the peer review process.

Please also note that The BMJ now sends randomised controlled trials and other relevant studies for peer review by patients.

Recent events suggest that these instructions should be amended with the following question:

How were patients involved in the interpretation of results?

The instructions to authors should also elaborate that the intent is to require a description of how results were shared with patients before publication and dissemination to the news media. This process should be interactive, with the possibility of corrective feedback, rather than a simple presentation of the results to the patients without opportunity for comment or for suggesting qualifications of the interpretations that will be made. The process should be described in the article.

Material offered by The BMJ in support of their initiative includes an editorial, Patient Partnership, which explains:

The strategy brings landmark changes to The BMJ’s internal processes, and seeks to place the journal at the forefront of the international debate on the science, art, and implementation of meaningful, productive partnership with patients. It was “co-produced” with the members of our new international patient advisory panel, which was set up in January 2014. Its members continue to inform our thinking and help us with implementation of our strategy.

For its efforts, The BMJ was the first medical journal to receive the “Patients Included” certificate from Lucien Engelen’s Radboud REshape Academy. For his part, Engelen had previously announced:

I will ‘NO-SHOW’ at healthcare conferences that do not add patients TO or IN their programme or invite them to be IN the audience. Also I will no longer give lectures/keynotes at ‘NO-SHOW’ conferences.

But strong words need an action plan to become more than mere words. Although laudable exceptions can be noted, they are few and far between.

In Beyond rhetoric: we need a strategy for patient involvement in the health service, NHS user Sarah Thornton has called the UK government to task for being heavy on the hyperbole of empowering patients but lacking a robust strategy for implementing it. The same could be said for the floundering effort of The BMJ to support patient empowerment in research.

So, should patients just remain patient, keep signing up for clinical trials and hope that funders eventually get more patient oriented in the decisions about grants and that researchers eventually become more patient-oriented?

Recent events suggest that is unwise.

The BMJ patient-oriented initiative versus the PACE investigators’ refusal to share data and the vilification of patients who object to their interpretation of the data

As previously detailed here, the PACE investigators have steadfastly refused to provide their data for independent evaluation of their claims. In doing so, they are defying numerous published standards from governmental and funding agencies that dictate sharing of data. Ironically, in justifying this refusal, the investigators cite possible repercussions of releasing the data for their ability to conduct future research.

Fortunately, in a decision against the PACE investigators, the UK Information Commissioner’s Office (ICO) rejected this argument because

He is also not convinced that there is sufficient evidence for him to determine that disclosure would be likely to deter significant numbers of other potential participants from volunteering to take part in future studies so as to affect the University’s ability to undertake such research. As a result, the Commissioner is reluctant to accept that disclosure of the withheld information would be likely to have an adverse effect on the University’s future ability to attract necessary funding and to carry out research in this area, with a consequent effect on its reputation and ability to recruit staff and students.

But the PACE investigators have appealed this decision and continue to withhold their data. Moreover in their initial refusal to share the data, they characterized patients who objected to the possible harm of their interpretations as a small vocal minority.

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Physician Charles Shepherd, himself a sufferer of myalgic encephalomyelitis (ME), notes:

  • Over 10,000 people signed a petition calling for claims of the PACE investigators relating to so-called recovery to be retracted.
  • In a survey of 1,428 people with ME, 73 per cent reported that CBT had no effect on symptoms while 74 per cent reported that GET had made their condition worse.

The BMJ’s position on data sharing

A May 15, 2015 editorial spelled out a new policy at The BMJ concerning data sharing, The BMJ requires data sharing on request for all trials:

Heeding calls from the Institute of Medicine, WHO, and the Nordic Trial Alliance, we are extending our policy

The movement to make data from clinical trials widely accessible has achieved enormous success, and it is now time for medical journals to play their part. From 1 July The BMJ will extend its requirements for data sharing to apply to all submitted clinical trials, not just those that test drugs or devices.1 The data transparency revolution is gathering pace.2 Last month, the World Health Organization (WHO) and the Nordic Trial Alliance released important declarations about clinical trial transparency.3 4

Note that The BMJ extended the data sharing requirement to all trials, not just drug and medical device trials.

But The BMJ was simply following the lead of the family of PLOS journals, which had made an earlier, broader, and simpler commitment to making data from clinical trials available to others.

plosThe PLOS journals’ policy on data sharing

On December 12, 2013, the PLOS journals scooped other major publishers with:

PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception.

When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS’s policy. The data availability statement will be published with the article if accepted.

Refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection. PLOS journal editors encourage researchers to contact them if they encounter difficulties in obtaining data from articles published in PLOS journals. If restrictions on access to data come to light after publication, we reserve the right to post a correction, to contact the authors’ institutions and funders, or in extreme cases to retract the publication.

This requirement took effect on March 1, 2014. However, one of the most stringent data sharing policies in the industry was already in effect:

Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.

Even the earlier requirement for publication in PLOS journals would have forestalled the delays, struggles, and complicated quasi-legal maneuvering that characterized the PACE investigators' refusal to release their data.

Why medically ill people agree to be in clinical research

Patients are not obligated to participate in research; they should freely choose whether to participate based on a weighing of the benefits and risks. Consent to participation in clinical research must be voluntary and fully informed.

Medically ill patients often cannot expect direct personal benefit from participating in a research trial. This is particularly true when a trial compares a treatment that they want, and that is not otherwise available, with poorly defined and inadequate routine care to which they risk being randomized. Their needs continue to be neglected, now burdened by multiple and sometimes intrusive assessments. This is also the case with descriptive observational research, and particularly with phase 1 clinical studies, which provide no direct benefit to participating patients, only the prospect of improving the care of future patients.

In recognition that many research projects do not directly benefit individual patients, consent forms identify possible benefits to other current and future patients and to society at large.

Protecting the rights of participants in research

The World Medical Association (WMA) Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects spells out a set of principles protecting the rights of human subjects. It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

Can patients pick up the challenge of realizing the promise of The BMJ editorial, Research Is the Future, Get Involved?

One patient to whom I showed an earlier draft objected that this is just another burden being thrust on medical patients, who already have their condition and difficult treatment decisions with which to contend. She pointed out that patient empowerment strategies so often end up leaving patients with responsibilities that they cannot shoulder and that the medical system should have met for them.

I agree that not every patient can take up this burden of promoting both more patient involvement in research and data sharing, but groups of patients can. And when individual patients are willing to take on the sacrifice of insisting on these conditions for their consent, they should be recognized and supported by others. This is not a matter only for patients with particular illnesses or for members of patient organizations organized around a particular illness. Rather, this is a contribution to the well-being of society that should be applauded and supported across the artificial boundaries drawn around particular conditions, race, or class.

The mere possibility that patients are going to refuse to participate in research that does not have plans for patient involvement or data sharing can have a powerful effect. It is difficult enough for researchers to accrue sufficient numbers of patients for their studies. If the threat is that they will run into problems because they don’t adequately involve patients, they will be proactive in redesigning the research strategies and reflecting it in their consent forms, if they are serious about getting their research done.

Patients are looking after the broader society in participating in medical research. However, if researchers do not take steps to ensure that society gets the greatest possible benefit, patients can just say no – we won't consent to participate.

Acknowledgments: I benefited from discussions with numerous patients and some professionals in writing and revising this blog. Because some of the patients desired anonymity, I will simply give credit to the group. However, I am responsible for any excesses or inaccuracies that may have escaped their scrutiny.

 

Why the scientific community needs the PACE trial data to be released

University and clinical trial investigators must release data to a citizen-scientist patient, according to a landmark decision in the UK. But the decision could still be overturned if the university and investigators appeal. The scientific community needs the decision to be upheld. I'll argue that it would be unwise for any appeal to be made. The reasons for withholding the data in the first place were archaic. Overturning the decision would set a bad precedent and would remove another tooth from the almost toothless requirements for data sharing.

We didn't need Francis Collins, Director of the National Institutes of Health, to tell us what we already knew: the scientific and biomedical literature is untrustworthy.

And there is the new report from the UK Academy of Medical Sciences, Reproducibility and reliability of biomedical research: improving research practice.

There has been a growing unease about the reproducibility of much biomedical research, with failures to replicate findings noted in high-profile scientific journals, as well as in the general and scientific media. Lack of reproducibility hinders scientific progress and translation, and threatens the reputation of biomedical science.

Among the report’s recommendations:

  • Journals mandating that the data underlying findings are made available in a timely manner. This is already required by certain publishers such as the Public Library of Science (PLOS) and it was agreed by many participants that it should become more common practice.
  • Funders requiring that data be released in a timely fashion. Many funding agencies require that data generated with their funding be made available to the scientific community in a timely and responsible manner.

A consensus has been reached: the crisis in the trustworthiness of science can be overcome only if scientific data are routinely available for reanalysis. Independent replication of socially significant findings is often unfeasible, and unnecessary if the original data are fully available for inspection.

Numerous governmental funding agencies and regulatory bodies are endorsing routine data sharing.

The UK Medical Research Council (MRC) 2011 policy on data sharing and preservation endorses principles laid out by the Research Councils UK, including:

Publicly funded research data are a public good, produced in the public interest, which should be made openly available with as few restrictions as possible in a timely and responsible manner.

To enable research data to be discoverable and effectively re-used by others, sufficient metadata should be recorded and made openly available to enable other researchers to understand the research and re-use potential of the data. Published results should always include information on how to access the supporting data.

The Wellcome Trust Policy On Data Management and Sharing opens with

The Wellcome Trust is committed to ensuring that the outputs of the research it funds, including research data, are managed and used in ways that maximise public benefit. Making research data widely available to the research community in a timely and responsible manner ensures that these data can be verified, built upon and used to advance knowledge and its application to generate improvements in health.

The Cochrane Collaboration has weighed in that there should be ready access to all clinical trial data

Summary results for all protocol-specified outcomes, with analyses based on all participants, to become publicly available free of charge and in easily accessible electronic formats within 12 months after completion of planned collection of trial data;

Raw, anonymised, individual participant data to be made available free of charge; with appropriate safeguards to ensure ethical and scientific integrity and standards, and to protect participant privacy (for example through a central repository, and accompanied by suitably detailed explanation).

Many similar statements can be found on the web. I’m unaware of credible counterarguments gaining wide acceptance.

Yet endorsements of routine sharing of data are only a promissory reform, and they depend on enforcement that has been spotty at best. Those of us who request data from previously published clinical trials quickly realize that requirements for sharing data have no teeth. In light of that, scientists need to watch closely whether a landmark decision concerning sharing of data from a publicly funded trial is appealed and overturned.

The Decision requiring release of the PACE data

The UK’s Information Commissioner’s Office (ICO) ordered Queen Mary University of London (QMUL) on October 27, 2015 to release anonymized data from the PACE chronic fatigue syndrome trial to an unnamed complainant. QMUL has 28 days to appeal.

Even if scientists don’t know enough to care about Chronic Fatigue Syndrome/Myalgic Encephalomyelitis, they should be concerned about the reasons that were given in a previous refusal to release the data.

I took a critical look at the long-term follow-up results for the PACE trial in a previous Mind the Brain blog post and found fatal flaws in the authors’ self-congratulatory interpretation of the results. Despite the authors’ claims to the contrary and their extraordinary efforts to encourage patients to report that the intervention was helpful, there were simply no differences between groups at follow-up.

Background on the request for release of PACE data

  • A complainant requested release of specific PACE data from QMUL under the Freedom of Information Act.
  • QMUL refused the request.
  • The complainant requested an internal review but QMUL maintained its decision to withhold the data.
  • The complainant contacted the ICO with concerns about how the request had been handled.
  • On October 27, 2015, the ICO sided with the complainant and ordered the release of the data.

A report outlines Queen Mary’s arguments for refusing to release the data and the Commissioner’s justification for siding with the patient requesting the data be released.

Reasons the request for release of the data was initially refused

The QMUL PACE investigators claimed:

  • They were entitled to withhold data prior to publication of planned papers.
  • They were entitled to an exemption from sharing because the data contained sensitive medical information from which it was possible to identify trial participants.
  • Release of the data might harm their ability to recruit patients for research studies in the future.

The QMUL PACE researchers specifically raised concerns about a “motivated intruder” being able to facilitate re-identification of participants. The University argued that:

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Bizarre. This is obviously a talented masked motivated intruder. Do they have evidence that Magneto is at it again? Mostly he is now working with the good guys, as seen in the help he gave Neurocritic and me.

Let’s think about this novel argument. I checked with University of Pennsylvania bioethicist Jon Merz, an expert who has worked internationally to train researchers and establish committees for the protection of human subjects. His opinion was clear:

The litany of excuses – not reasons – offered by the researchers and Queen Mary University is a bald attempt to avoid transparency and accountability, hiding behind legal walls instead of meeting their critics on a level playing field.  They should be willing to provide the data for independent analyses in pursuit of the truth.  They of course could do this willingly, in a way that would let them contractually ensure that data would be protected and that no attempts to identify individual subjects would be made (and it is completely unclear why anyone would care to undertake such an effort), or they can lose this case and essentially lose any hope for controlling distribution.

The ‘orchestrated response trying to undermine the credibility of the study’ claimed by QMUL and the PACE investigators, as well as the issue raised about the “integrity of the scientists who conducted the study,” sounds all too familiar. It is the kind of defense heard from scientists under the scrutiny of Open Science Collaborations, as in psychology and cancer. Reactionaries resisting post-publication peer review say we must be worried about harassment from

“replication police” “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.”

Far-fetched? Compare this to an April 18, 2011 interview on the Australian Broadcasting Corporation’s Radio National with Richard Horton and PACE investigator Michael Sharpe, in which Lancet editor Richard Horton condemned:

A fairly small, but highly organised, very vocal and very damaging group of individuals who have…hijacked this agenda and distorted the debate…

‘Distorted the debate’? Was someone so impertinent as to challenge the investigators’ claims about their findings? Sounds like PubPeer. We have seen what they can do.

All scientific findings should be scrutinized, and all data relevant to the claims being made should be available for reanalysis. Investigators just need to live with the possibility that their claims will be proven wrong or exaggerated. This is all the more true for claims that have substantial impact on public policy, clinical services, and, ultimately, patient welfare.

[It is fascinating to note that Richard Horton spoke at the meeting that produced the UK Academy of Medical Sciences report to which I provided a link above. Horton covered the meeting in a Lancet editorial in which he amplified its sentiment: “The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world.” His editorial echoed a number of the meeting report’s recommendations but curiously omitted any mention of data sharing.]

Fortunately, the ICO has rejected the arguments of QMUL and the PACE investigators. The Commissioner found that QMUL and the PACE investigators incorrectly interpreted the regulations in withholding the data and should provide the complainant with the data or risk being held in contempt of court.

The 30-page decision is a fascinating read, but here’s an accurate summary from elsewhere:

In his decision, the Commissioner found that QMUL failed to provide any plausible mechanism through which patients could be identified, even in the case of a “motivated intruder.” He was also not convinced that there is sufficient evidence to determine that releasing the data would result in the mass exodus of a significant number of the trial’s 640 participants nor that it would deter significant numbers of participants from volunteering to take part in future research.

Requirements for data sharing in the United States have no teeth, and the situation would be worsened by reversal of the ICO decision

Like the UK, the United States supposedly has requirements for sharing of data from publicly funded trials. But good luck in getting support from regulatory agencies associated with funding sources for obtaining data. Here’s my recent story, still unfolding – or maybe, sadly, over, at least for now.

For a long time I’ve fought my own battles about researchers making unwarranted claims that psychotherapy extends the lives of cancer patients. The research simply does not support the claim. The belief that psychological factors have such influence on the course and outcome of cancer sets up cancer patients to be blamed, and to blame themselves, when they don’t overcome their disease by some sort of mind control. Our systematic review concluded:

“No randomized trial designed with survival as a primary endpoint and in which psychotherapy was not confounded with medical care has yielded a positive effect.”

Investigators who conducted some of the most ambitious, well-designed trials testing the efficacy of psychological interventions on cancer survival, but who obtained null results, echoed our assessment. Their commentaries were entitled “Letting Go of Hope” and “Time to Move On.”

I provided an extensive review of the literature concerning whether psychotherapy and support groups increased survival time in an earlier blog post. Hasn’t the issue of mind-over-cancer been laid to rest? I was recently contacted by a science journalist interested in writing an article about this controversy. After a long discussion, he concluded that the issue was settled — no effect had been found — and he could not succeed in pitching his idea for an article to a quality magazine.

But as detailed here, one investigator has persisted in claiming that a combination of relaxation exercises, stress reduction, and nutritional counseling increases survival time. My colleagues and I gave this 2008 study a careful look. We ran chi-square analyses of the basic data presented in the paper’s tables. None of our analyses of the effect of group assignment on mortality or disease recurrence was significant. The investigators’ claim of an effect depended on dubious multivariate analyses with covariates that could not be independently evaluated without a look at the data.
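For readers who want to see what such a reanalysis involves, here is a minimal sketch of a Pearson chi-square test on a 2x2 table of outcome counts by trial arm. The counts are hypothetical, invented for illustration, not the study’s actual numbers; for one degree of freedom the p-value has a closed form via the complementary error function, so no statistics library is needed.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test (no continuity correction) for a 2x2 table.

    With 1 degree of freedom, chi2 = Z**2, so the p-value has the closed
    form erfc(sqrt(chi2 / 2)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Hypothetical counts (NOT the study's actual numbers): deaths vs. survivors
# in the intervention and control arms.
chi2, p = chi_square_2x2([[20, 93], [24, 90]])
```

With these illustrative counts the p-value lands well above 0.05, which is the pattern our letter described: no bivariate evidence of a survival effect.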

The investigator group initially attempted to block publication of a letter to the editor, citing a policy of the journal Cancer that critical letters could not be published unless the investigators agreed to respond, and they were refusing to respond. We appealed, and the journal changed its policy and granted our letter additional length.

We then requested from the investigator’s University Research Integrity Officer the specific data needed to replicate the multivariate analyses in which the investigators claimed an effect on survival. The request was denied:

The data, if disclosed, would reveal pending research ideas and techniques. Consequently, the release of such information would put those using such data for research purposes in a substantial competitive disadvantage as competitors and researchers would have access to the unpublished intellectual property of the University and its faculty and students.

Recall that in 2014 we were requesting the specific data needed to evaluate analyses published in 2008.

I checked with statistician Andrew Gelman whether my objections to the multivariate analyses were well-founded and he agreed they were.

Since then, another eminent statistician, Helena Kraemer, has published an incisive critique of reliance on multivariate analyses in a randomized controlled trial when the simple bivariate analyses do not support the efficacy of the interventions. She labeled adjustment with covariates a “source of false-positive findings.”
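Kraemer’s point can be made concrete with a toy example. The sketch below is illustrative only (invented data, not the trial’s): treatment t is randomized and truly has no effect on outcome y, so the unadjusted group difference is exactly zero; but “adjusting” for a covariate x that is itself influenced by treatment manufactures an apparent treatment effect. Ordinary least squares is fit by hand via the normal equations so the example is self-contained.

```python
def solve3(a_mat, b_vec):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a_mat, b_vec)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def ols(predictors, outcome):
    """OLS for y ~ 1 + t + x via the normal equations (X'X) beta = X'y."""
    X = [[1.0, t, x] for t, x in predictors]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, outcome)) for i in range(3)]
    return solve3(XtX, Xty)

# Toy data: t is randomized treatment, u an unobserved trait, outcome y = u
# (so t truly has NO effect), and x = t + u is a covariate influenced by both.
data = [(0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]  # (t, u) pairs
y = [u for _, u in data]
predictors = [(t, t + u) for t, u in data]

# Unadjusted analysis: the simple difference in group means is exactly zero.
g1 = [yi for (t, _), yi in zip(predictors, y) if t == 1]
g0 = [yi for (t, _), yi in zip(predictors, y) if t == 0]
raw = sum(g1) / len(g1) - sum(g0) / len(g0)

# "Adjusted" analysis: controlling for x manufactures a treatment
# coefficient of -1 out of thin air.
beta = ols(predictors, y)  # [intercept, effect of t, effect of x]
```

This is exactly why unadjusted results should be reported alongside adjusted ones: when the two disagree, the covariates, not the treatment, may be doing the work.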

We appealed to the US Health and Human Services Office of Research Integrity (ORI), but it indicated it had no ability to enforce data sharing.

Meanwhile, the principal investigator who claimed an effect on survival accompanied National Cancer Institute program officers to conferences in Europe and the United States, where she promoted her intervention as effective. I complained to Robert Croyle, Director of the NCI Division of Cancer Control and Population Sciences, who has twice been one of the program officers co-presenting with her. Ironically, in his capacity as director he is supposedly facilitating data sharing for the division. Professionals were being misled to believe that this intervention would extend the lives of cancer patients, and the claim seemingly had the endorsement of NCI.

I told Robert Croyle  that if only the data for the specific analyses were released, it could be demonstrated that the claims were false. Croyle did not disagree, but indicated that there was no way to compel release of the data.

The National Cancer Institute recently offered to pay the conference fees to the International Psycho-Oncology Congress in Washington DC of any professionals willing to sign up for free training in this intervention.

I don’t think I could get any qualified professional, including Croyle, to debate me publicly on whether psychotherapy increases the survival of cancer patients. Yet the promotion of the idea persists because it is consistent with the power of mind over body and disease, an attractive talking point.

I have not given up in my efforts to get the data to demonstrate that this trial did not show that psychotherapy extends the survival of cancer patients, but I am blocked by the unwillingness of authorities to enforce data sharing rules that they espouse.

There are obvious parallels between the politics behind persistence of the claim in the US for psychotherapy increasing survival time for cancer patients and those in the UK about cognitive behavior therapy being sufficient treatment for schizophrenia in the absence of medication or producing recovery from the debilitating medical condition, Chronic Fatigue Syndrome/Myalgic Encephalomyelitis. There are also parallels to investigators making controversial claims based on multivariate analyses, but not allowing access to data to independently evaluate the analyses. In both cases, patient well-being suffers.

If the ICO decision ordering release of the PACE trial data is upheld in the UK, it will put pressure on the US NIH to stop hypocritically endorsing data sharing while rewarding investigators whose credibility depends on not sharing their data.

As seen in a PLOS One study, unwillingness to share data in response to formal requests is

associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Why the PACE investigators should not appeal

In the past, PACE investigators have been quite dismissive of criticism, appearing to have assumed that being afflicted with Chronic Fatigue Syndrome/Myalgic Encephalomyelitis precludes a critic being taken seriously, even when the criticism is otherwise valid. However, with publication of the long-term follow-up data in Lancet Psychiatry, they are now contending with accomplished academics whose criticisms cannot be so easily brushed aside. Yes, the credibility of the investigators’ interpretations of their data is being challenged. And even if they do not believe they need to be responsive to patients, they need to be responsive to colleagues. Releasing the data is the only acceptable response; not doing so risks damage to their reputations.

QMUL, Professors White and Sharpe, let the People’s data go.

 

Uninterpretable: Fatal flaws in PACE Chronic Fatigue Syndrome follow-up study

Earlier decisions by the investigator group preclude valid long-term follow-up evaluation of CBT for chronic fatigue syndrome (CFS).

At the outset, let me say that I’m skeptical whether we can hold the PACE investigators responsible for the outrageous headlines that have been slapped on their follow-up study and on the comments they have made in interviews.

The Telegraph screamed

Chronic Fatigue Syndrome sufferers ‘can overcome symptoms of ME with positive thinking and exercise’

Oxford University has found ME is not actually a chronic illness

My own experience critiquing media interpretation of scientific studies suggests that neither researchers nor even journalists necessarily control shockingly inaccurate headlines placed on otherwise unexceptional media coverage. On the other hand, much distorted and exaggerated media coverage starts with statements made by researchers and by press releases from their institutions.

The one specific quote attributed to a PACE investigator is unfortunate because of its potential to be misinterpreted by professionals, persons who suffer from chronic fatigue syndrome, and the people around them affected by their functioning.

“It’s wrong to say people don’t want to get better, but they get locked into a pattern and their life constricts around what they can do. If you live within your limits that becomes a self-fulfilling prophesy.”

It suggests that willfulness causes CFS sufferers’ impaired functioning. This is as ridiculous as the application of the discredited concept of “fighting spirit” to cancer patients’ failure to triumph over their life-altering and life-threatening condition. Let’s practice the principle of charity and assume this was not the intention of the PACE investigator, particularly when there is so much more for which we should hold them responsible.

Go here for a fuller evaluation, which I endorse, of the Telegraph coverage of the PACE follow-up study.

Having read the PACE follow-up study carefully, my assessment is that the data presented are uninterpretable. Even if we temporarily suspend critical thinking and some basic rules for conducting randomized controlled trials (RCTs), follow-up studies, and the analysis of the resulting data, we should still reject some of the interpretations offered by the PACE investigators as unfairly spun to fit what is already a distorted, positive interpretation of the results.

It is important to note that the PACE follow-up study can only be as good as the original data it’s based on. And in the case of the PACE study itself, a recent longread critique by UC Berkeley journalism and public health lecturer David Tuller has arguably exposed such indefensible flaws that any follow-up is essentially meaningless. See it for yourself [1, 2, 3 ].

This week’s report of the PACE long-term follow-up study and a commentary are available free at the Lancet Psychiatry website after registration. I encourage everyone to download a copy before reading further. Unfortunately, parts of the article are highly technical, and some details crucial to interpreting the results are not presented.

I will provide practical interpretations of the most crucial technical details so that they are more understandable to the nonspecialist. Let me know where I fail.

To encourage proceeding with this longread, but to satisfy those who are unwilling or unable to proceed, I’ll reveal my main points here:

  • The PACE investigators sacrificed any possibility of meaningful long-term follow-up by breaking protocol and issuing patient testimonials about CBT before accrual was even completed.
  • This already fatal flaw was compounded by a loose recommendation for treatment after the intervention phase of the trial ended. The investigators provide poor documentation of which treatment was taken up by which patients and of whether there was crossover in the treatment received during follow-up.
  • The investigators’ attempts to correct methodological issues with statistical strategies lapse into voodoo statistics.
  • The primary self-report outcome variables are susceptible to manipulation, investigator preferences for particular treatments, peer pressure, and confounding with mental health variables.
  • The PACE investigators exploited ambiguities in the design and execution of their trial with self-congratulatory confirmatory bias.

The Lancet Psychiatry summary/abstract of the article

Background. The PACE trial found that, when added to specialist medical care (SMC), cognitive behavioural therapy (CBT), or graded exercise therapy (GET) were superior to adaptive pacing therapy (APT) or SMC alone in improving fatigue and physical functioning in people with chronic fatigue syndrome 1 year after randomisation. In this pre-specified follow-up study, we aimed to assess additional treatments received after the trial and investigate long-term outcomes (at least 2 years after randomisation) within and between original treatment groups in those originally included in the PACE trial.

Findings Between May 8, 2008, and April 26, 2011, 481 (75%) participants from the PACE trial returned questionnaires. Median time from randomisation to return of long-term follow-up assessment was 31 months (IQR 30–32; range 24–53). 210 (44%) participants received additional treatment (mostly CBT or GET) after the trial; with participants originally assigned to SMC alone (73 [63%] of 115) or APT (60 [50%] of 119) more likely to seek treatment than those originally assigned to GET (41 [32%] of 127) or CBT (36 [31%] of 118; p<0·0001). Improvements in fatigue and physical functioning reported by participants originally assigned to CBT and GET were maintained (within-group comparison of fatigue and physical functioning, respectively, at long-term follow-up as compared with 1 year: CBT –2·2 [95% CI –3·7 to –0·6], 3·3 [0·02 to 6·7]; GET –1·3 [–2·7 to 0·1], 0·5 [–2·7 to 3·6]). Participants allocated to APT and to SMC alone in the trial improved over the follow-up period compared with 1 year (fatigue and physical functioning, respectively: APT –3·0 [–4·4 to –1·6], 8·5 [4·5 to 12·5]; SMC –3·9 [–5·3 to –2·6], 7·1 [4·0 to 10·3]). There was little evidence of differences in outcomes between the randomised treatment groups at long-term follow-up.

Interpretation The beneficial effects of CBT and GET seen at 1 year were maintained at long-term follow-up a median of 2·5 years after randomisation. Outcomes with SMC alone or APT improved from the 1 year outcome and were similar to CBT and GET at long-term follow-up, but these data should be interpreted in the context of additional therapies having being given according to physician choice and patient preference after the 1 year trial final assessment. Future research should identify predictors of response to CBT and GET and also develop better treatments for those who respond to neither.

Note the contradiction here, which will persist throughout the paper, the official Oxford University press release, quotes from the PACE investigators to the media, and media coverage. On the one hand we are told:

Improvements in fatigue and physical functioning reported by participants originally assigned to CBT and GET were maintained…

Yet we are also told:

There was little evidence of differences in outcomes between the randomised treatment groups at long-term follow-up.

Which statement is to be given precedence? To the extent that features of a randomized trial have been preserved in the follow-up (which, as we will see, is not actually the case), a lack of between-group differences at follow-up should be given precedence over any persistence of change within groups from baseline. That is not a controversial point for interpreting clinical trials.

A statement about group differences at follow-up should precede and qualify any statement about within-group change during follow-up. Otherwise, why bother with an RCT in the first place?

The statement in the Interpretation section of the summary/abstract has an unsubstantiated spin in favor of the investigators’ preferred intervention.

Outcomes with SMC alone or APT improved from the 1 year outcome and were similar to CBT and GET at long-term follow-up, but these data should be interpreted in the context of additional therapies having being given according to physician choice and patient preference after the 1 year trial final assessment.

If we’re going to be cautious and qualified in our statements, there are other, more plausible explanations for the similar outcomes in the intervention and control groups. Simply put, and without unsubstantiated assumptions: any group differences observed earlier have dissipated. Poof! Any advantages of CBT and GET were not sustained.

How the PACE investigators destroyed the possibility of an interpretable follow-up study

Neither the Lancet Psychiatry article nor any recent statements by the PACE investigators acknowledge how the investigators destroyed any possibility of analyzing meaningful follow-up data.

Before the intervention phase of the trial was completed, indeed before accrual of patients was complete, the investigators published a newsletter in December 2008 directed at trial participants. One article appropriately reminds participants of the upcoming two-and-one-half-year follow-up. It then acknowledges difficulty accruing patients and notes that additional funding has been received from the MRC to extend recruitment. And then glowing testimonials about the effects of the intervention appear on p. 3 of the newsletter.

“Being included in this trial has helped me tremendously. (The treatment) is now a way of life for me, I can’t imagine functioning fully without it. I have nothing but praise and thanks for everyone involved in this trial.”

“I really enjoyed being a part of the PACE Trial. It helped me to learn more about myself, especially (treatment), and control factors in my life that were damaging. It is difficult for me to gauge just how effective the treatment was because 2007 was a particularly strained, strange and difficult year for me but I feel I survived and that the trial armed me with the necessary aids to get me through. It was also hugely beneficial being part of something where people understand the symptoms and illness and I really enjoyed this aspect.”

These testimonials are a horrible breach of protocol. Taken together with the acknowledgment of the difficulty accruing patients, they solicit expressions of gratitude and apply pressure on participants to endorse the trial by providing a positive account of their outcome. Some minimal effort is made to disguise the conditions from which the testimonials come. However, references to a therapist and, in the final quote above, to “control factors in my life that were damaging” leave no doubt that the CBT and GET favored by the investigators are having positive results.

Probably more than in most chronic illnesses, CFS sufferers turn to each other for support in the face of bewildering and often stigmatizing responses from the medical community. These testimonials represent a form of peer pressure for positive evaluations of the trial.

Any investigator group that would deliberately violate protocol in this manner deserves further scrutiny for other violations and threats to the validity of their results. I challenge defenders of the PACE study to cite other precedents for this kind of manipulation of clinical trial participants. What would they have thought if a drug company had done this in the evaluation of its medication?

The breakdown of randomization as further destruction of the interpretability of follow-up results

Returning to the Lancet Psychiatry article itself, note the following:

After completing their final trial outcome assessment, trial participants were offered an additional PACE therapy if they were still unwell, they wanted more treatment, and their PACE trial doctor agreed this was appropriate. The choice of treatment offered (APT, CBT, or GET) was made by the patient’s doctor, taking into account both the patient’s preference and their own opinion of which would be most beneficial. These choices were made with knowledge of the individual patient’s treatment allocation and outcome, but before the overall trial findings were known. Interventions were based on the trial manuals, but could be adapted to the patient’s needs.

Readers who are methodologically inclined might be interested in a paper in which I discuss incorporating patient preference into randomized trials, as well as another paper describing a clinical trial conducted with German colleagues in which we incorporated patient preference into the evaluation of antidepressants and psychotherapy for depression in primary care. Patient preference can certainly be accommodated in a clinical trial in ways that preserve the benefits of randomization, but not as the PACE investigators have done.

Following completion of the treatment to which patients were randomly assigned, the PACE trial offered a complex negotiation between patient and trial physician about further treatment. This represents a thorough breakdown of the benefits of a randomized controlled trial for the evaluation of treatments. Any focus on the long-term effects of initial randomization is sacrificed by what could be substantial departures from that randomization. Any attempt at statistical correction will fail.

Of course, investigators cannot ethically prevent research participants from seeking additional treatment. But in the case of PACE, the investigators encouraged departures from the randomized treatment yet did not adequately take into account the decisions that were made. An alternative would have been to continue with the randomized treatment, quantifying and taking into account any crossover into another treatment arm.

Voodoo statistics in dealing with incomplete follow-up data

Between May 8, 2008, and April 26, 2011, 481 (75%) participants from the PACE trial returned questionnaires.

This is a very good rate of retention of participants for follow-up. The serious problem is that none of the following is random:

  • loss to follow-up;
  • whether there was further treatment;
  • whether there was crossover between the treatment received during follow-up and the treatment received in the actual trial.

Furthermore, any follow-up data are biased by the exhortations of the newsletter.

No statistical controls can restore the quality of the follow-up data to what would’ve been obtained with preservation of the initial randomization. Nothing can correct for the exhortation.

Nonetheless, the investigators tried to correct for loss of participants to follow-up and subsequent treatment. They described their effort in a technically complex passage, which I will subsequently interpret:

We assessed the differences in the measured outcomes between the original randomised treatment groups with linear mixed-effects regression models with the 12, 24, and 52 week, and long-term follow-up measures of outcomes as dependent variables and random intercepts and slopes over time to account for repeated measures.

We included the following covariates in the models: treatment group, trial stratification variables (trial centre and whether participants met the international chronic fatigue syndrome criteria,3 London myalgic encephalomyelitis criteria,4 and DSM IV depressive disorder criteria),18,19 time from original trial randomisation, time by treatment group interaction term, long-term follow-up data by treatment group interaction term, baseline values of the outcome, and missing data predictors (sex, education level, body-mass index, and patient self-help organisation membership), so the differences between groups obtained were adjusted for these variables.

Nearly half (44%; 210 of 479) of all the follow-up study participants reported receiving additional trial treatments after their final 1 year outcome assessment (table 2; appendix p 2). The number of participants who received additional therapy differed between the original treatment groups, with more participants who were originally assigned to SMC alone (73 [63%] of 115) or to APT (60 [50%] of 119) receiving additional therapy than those assigned to GET (41 [32%] of 127) or CBT (36 [31%] of 118; p<0·0001).

In the trial analysis plan we defined an adequate number of therapy sessions as ten of a maximum possible of 15. Although many participants in the follow-up study had received additional treatment, few reported receiving this amount (table 2). Most of the additional treatment that was delivered to this level was either CBT or GET.

The “linear mixed-effects regression models” are rather standard techniques for compensating for missing data by using all of the available data to estimate what is missing. The problem is that this approach assumes the data are missing at random—an untested assumption that is unlikely to be true in this study.
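To make the stakes concrete, here is a toy simulation of my own (illustrative numbers only, not the PACE data): when participants with worse outcomes are more likely to be lost to follow-up, the observed sample looks healthier than the full cohort, and nothing in the observed data flags the bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical follow-up fatigue scores for 1,000 participants (higher = worse).
true_scores = rng.normal(loc=20, scale=5, size=1000)

# Missingness depends on the outcome itself -- NOT missing at random:
# the worse the score, the more likely the participant is lost to follow-up.
p_missing = 1 / (1 + np.exp(-(true_scores - 20) / 2))
observed = true_scores[rng.random(1000) > p_missing]

print(f"mean score, full cohort:    {true_scores.mean():.1f}")
print(f"mean score, observed cases: {observed.mean():.1f}")  # looks healthier
```

Mixed-effects models can use every available observation, but they assume the unobserved cases resemble the observed ones; here that assumption fails by construction, and the estimate from the observed cases is simply wrong.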

The inclusion of “covariates” is an effort to control for possible threats to the validity of the overall analyses by taking into account what is known about participants. There are numerous problems here. We can’t be assured that the results are any more robust and reliable than what would be obtained without these efforts at statistical control. The best publishing practice is to make the unadjusted outcome variables available and let readers decide. Greatest confidence in results is obtained when there is no difference between the results in the adjusted and unadjusted analyses.

Methodologically inclined readers should consult an excellent recent article by clinical trial expert Helena Kraemer, “A Source of False Findings in Published Research Studies: Adjusting for Covariates.”

The effectiveness of statistical controls depends on certain assumptions being met about patterns of variation within the control variables. There is no indication that any diagnostic analyses were done to determine whether possible candidate control variables should be eliminated in order to avoid a violation of assumptions about the multivariate distribution of covariates. With so many control variables, spurious results are likely. Apparent results could change radically with the arbitrary addition or subtraction of control variables. See here for a further explanation of this problem.

We don’t even know how this set of covariate/control variables, rather than some other set, was established. Notoriously, investigators often try out various combinations of control variables and present only those that make their trial look best. Readers are protected from this questionable research practice only with pre-specification of analyses before investigators know their results—and in an unblinded trial, researchers often know the result trends long before they see the actual numbers.

See JP Simmons’ hilarious demonstration that briefly listening to the Beatles’ “When I’m 64” can leave research participants a year and a half younger than listening to “Kalimba” – at least when investigators have free rein to manipulate the results they want in a study without pre-registration of analytic plans.
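In the same spirit, a small simulation (hypothetical numbers, not tied to any real trial) shows how even modest analytic flexibility inflates false positives: an analyst who can choose between two outcome measures, and can adjust or not adjust for a covariate, gets to run four correlated tests under the null and report whichever one “works.”

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sims = 100, 2000
hits = 0

def group_t(y, X):
    """|t| statistic for the group coefficient (column 1) in an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(beta[1] / se)

for _ in range(n_sims):
    group = rng.integers(0, 2, n).astype(float)      # treatment; NO true effect
    z = rng.normal(size=n)                           # an optional covariate
    y1, y2 = rng.normal(size=n), rng.normal(size=n)  # two candidate outcomes

    base = np.column_stack([np.ones(n), group])
    adj = np.column_stack([np.ones(n), group, z])
    # The flexible analyst tries every combination of outcome measure and
    # covariate adjustment, and reports a "finding" if any model succeeds.
    ts = [group_t(y, X) for y in (y1, y2) for X in (base, adj)]
    hits += max(ts) > 1.98                           # ~.05 two-sided threshold

print(f"false-positive rate: {hits / n_sims:.3f}")  # roughly double the nominal 0.05
```

With more outcomes, more covariates, and more subgroup choices—as in a trial with many secondary analyses—the inflation only grows.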

Finally, the efficacy of complex statistical controls is widely overestimated and depends on unrealistic assumptions. First, it is assumed that all relevant variables that need to be controlled have been identified. Second, even when this unrealistic assumption has been met, it is assumed that all statistical control variables have been measured without error. When that is not the case, results can appear significant when they actually are not. See a classic paper by Andrew Phillips and George Davey Smith for further explanation of the problem of measurement error producing spurious findings.
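The Phillips and Davey Smith point can be demonstrated in a few lines (again a hypothetical sketch, not trial data): a confounder drives both an “exposure” and an outcome, the exposure has no causal effect at all, and adjusting for an error-laden measurement of the confounder leaves a spurious exposure effect that adjusting for the true confounder would have removed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# A confounder drives both the exposure and the outcome; the exposure
# itself has NO causal effect on the outcome.
confounder = rng.normal(size=n)
exposure = confounder + rng.normal(size=n)
outcome = confounder + rng.normal(size=n)

# The analyst can only adjust for an error-laden measurement of the confounder.
measured = confounder + rng.normal(size=n)

def adjusted_slope(x, y, c):
    """Slope of x in an OLS regression of y on [intercept, x, c]."""
    X = np.column_stack([np.ones(n), x, c])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(f"adjusting for true confounder:   {adjusted_slope(exposure, outcome, confounder):.3f}")  # ~0
print(f"adjusting for noisy measurement: {adjusted_slope(exposure, outcome, measured):.3f}")    # clearly positive
```

The noisy covariate removes only part of the confounding; a substantial fraction of the confounded association survives adjustment and masquerades as a treatment effect.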

What the investigators claim the study shows

In an intact clinical trial, investigators can analyze outcome data with and without adjustments and readers can decide which to emphasize. However, this is far from an intact clinical trial and these results are not interpretable.

The investigators nonetheless make the following claims in addition to what was said in the summary/abstract.

In the results the investigators state

The improvements in fatigue and physical functioning reported by participants allocated to CBT or GET at their 1 year trial outcome assessment were sustained.

This was followed by

The improvements in impairment in daily activities and in perceived change in overall health seen at 1 year with these treatments were also sustained for those who received GET and CBT (appendix p 4). Participants originally allocated to APT reported further improvements in fatigue, physical functioning, and impairment in daily activities from the 1 year trial outcome assessment to long-term follow-up, as did those allocated to SMC alone (who also reported further improvements in perceived change in overall health; figure 2; table 3; appendix p 4).

If the investigators are taking their RCT design seriously, they should give precedence to the null findings for group differences at follow-up. They should not be emphasizing the sustaining of benefits within the GET and CBT groups.

The investigators increase their positive spin on the trial in the opening sentence of the Discussion

The main finding of this long-term follow-up study of the PACE trial participants is that the beneficial effects of the rehabilitative CBT and GET therapies on fatigue and physical functioning observed at the final 1 year outcome of the trial were maintained at long-term follow-up 2·5 years from randomisation.

This is incorrect. The main finding is that any reported advantages of CBT and GET at the end of the trial were lost by long-term follow-up. Because an RCT is designed to focus on between-group differences, the statement about sustaining of benefits is post-hoc.

The Discussion further states

In so far as the need to seek additional treatment is a marker of continuing illness, these findings support the superiority of CBT and GET as treatments for chronic fatigue syndrome.

This makes unwarranted and self-serving assumptions that treatment choice was mainly driven by the need for further treatment, when decision-making was contaminated by investigative preference, as stated in the newsletter. Note also that CBT is a novel treatment for research participants and more likely to be chosen on the basis of novelty alone in the face of overall modest improvement rates for the trial and lack of improvements in objective measures. Whether or not the investigators designate a limited range of self-report measures as primary, participant decision-making may be driven by other, more objective measures.

Regardless, investigators have yet to present any data concerning how decisions for further treatment were made, if such data exist.

The investigators further congratulate themselves with

There was some evidence from an exploratory analysis that improvement after the 1 year trial final outcome was not associated with receipt of additional treatment with CBT or GET, given according to need. However this finding must be interpreted with caution because it was a post-hoc subgroup analysis that does not allow the separation of patient and treatment factors that random allocation provides.

However, why is this analysis singled out as exploratory and to be interpreted with caution because it is a post-hoc subgroup analysis, when similarly post-hoc subgroup analyses are offered elsewhere without such caution?

The investigators finally get around to depicting what should be their primary finding, but do so in a dismissive fashion.

Between the original groups, few differences in outcomes were seen at long-term follow-up. This convergence in outcomes reflects the observed improvement in those originally allocated to SMC and APT, the possible reasons for which are listed above.

The discussion then discloses a limitation of the study that should have informed earlier presentation and discussion of results

First, participant response was incomplete; some outcome data were missing. If these data were not missing at random it could have led to either overestimates or underestimates of the actual differences between the groups.

This minimizes the implausibility of the assumption of random missing variables, as well as the problems introduced by the complex attempts to control confounds statistically.

And then there is an unsubstantiated statement that is sure to upset persons who suffer from CFS and those who care for them.

the outcomes were all self-rated, although these are arguably the most pertinent measures in a condition that is defined by symptoms.

I could double the length of this already lengthy blog post if I fully discussed this. But let me raise a few issues.

  1. The self-report measures do not necessarily capture subjective experience, only forced choice responses to a limited set of statements.
  2. One of the two outcome measures, the physical health scale of the SF-36, requires forced choice responses to a limited set of statements selected for general utility across all mental and physical conditions. Despite its wide use, the SF-36 suffers from problems in internal consistency and confounding with mental health variables. Anyone inclined to get excited about it should examine its items and response options closely. Ask yourself: do differences in scores reliably capture clinically and personally significant changes in the experience and functioning associated with the full range of symptoms of CFS?
  3. The validity of the other primary outcome measure, the Chalder Fatigue Scale, depends heavily on research conducted by this investigator group, and the scale has inadequate validation of its sensitivity to change in objective measures of functioning.
  4. Such self-report measures are inexorably confounded with morale and nonspecific mental health symptoms; they carry a large, unwanted correlation with the tendency to endorse negative self-statements, a tendency that is not necessarily reflected in objective measures.

Although it was a long time ago, I recall well my first meeting with Professor Simon Wessely. It was at a closed retreat sponsored by NIH to develop a consensus about the assessment of fatigue by self-report questionnaire. I listened to a lot of nonsense that was not well thought out. Then, I presented slides demonstrating a history of failed attempts to distinguish somatic complaints from mental health symptoms by self-report. Much later, this would become my “Stalking bears, finding bear scat in the woods” slide show.

But then Professor Wessely arrived at the meeting late, claiming to be grumbly because of jet lag and flight delays. Without slides and with devastating humor, he upstaged me in completing the demolition of any illusions that we could create more refined self-report measures of fatigue.

I wonder what he would say now.

But alas, people who suffer from CFS have to contend with a lot more than fatigue. Just ask them.

[To be continued later if there is interest in my doing so. If there is, I will discuss the disappearance of objective measures of functioning from the PACE study and you will find out why you should find some 3-D glasses if you are going to search for reports of these outcomes.]