Before you enroll your child in the MAGENTA chronic fatigue syndrome study: Issues to be considered

[October 3 8:23 AM Update: I have now inserted Article 21 of the Declaration of Helsinki below, which is particularly relevant to discussions of the ethical problems of Dr. Esther Crawley’s previous SMILE trial.]

Petitions are calling for shutting down the MAGENTA trial. Those who organized the effort and signed the petition are commendably brave, given past vilification of any effort by patients and their allies to have a say about such trials.

Below I identify a number of issues that parents should consider in deciding whether to enroll their children in the MAGENTA trial or to withdraw them if they have already been enrolled. I take a strong stand, but I believe I have adequately justified and documented my points. I welcome discussion to the contrary.

This is a long read but to summarize the key points:

  • The MAGENTA trial does not promise any health benefits for the children participating in the trial. The information sheet for the trial was recently modified to suggest they might benefit. However, earlier versions clearly stated that no benefit was anticipated.
  • There is inadequate disclosure of likely harms to children participating in the trial.
  • An estimate of the likely health benefit can be derived from the existing literature concerning the effectiveness of graded exercise therapy in adults. Obtaining funding for the MAGENTA trial depended on a misrepresentation of the strength of evidence that it works in adult populations. I am talking about the PACE trial.
  • Beyond any direct benefit to their children, parents might be motivated by the hope of contributing to science and the availability of effective treatments. However, these possible benefits depend on publication of results of a trial after undergoing peer review. The Principal Investigator for the MAGENTA trial, Dr. Esther Crawley, has a history of obtaining parents’ consent for participation of their children in the SMILE trial, but then not publishing the results in a timely fashion. Years later, we are still waiting.
  • Dr. Esther Crawley exposed children to unnecessary risk without likely benefit in her conduct of the SMILE trial. This clinical trial involved inflicting a quack treatment on children. Parents were not adequately informed of the nature of the treatment and the absence of evidence for any mechanism by which the intervention could conceivably be effective. This reflects on the due diligence that Dr. Crawley can be expected to exercise in the MAGENTA trial.
  • The consent form for the MAGENTA trial involves parents granting permission for the investigator to use children's and parents' comments concerning effects of the treatment for its promotion. Insufficient restrictions are placed on how the comments can be used. There is the clear precedent of comments made in the context of the SMILE trial being used to promote the quack Lightning Process treatment in the absence of evidence that the treatment was actually effective in the trial. There is no guarantee that comments collected from children and parents in the MAGENTA trial would not similarly be misused.
  • Dr. Esther Crawley participated in a smear campaign against parents having legitimate concerns about the SMILE trial. Parents making legitimate use of tools provided by the government such as Freedom of Information Act requests, appeals of decisions of ethical review boards and complaints to the General Medical Council were vilified and shamed.
  • Dr. Esther Crawley has provided direct, self-incriminating quotes in the newsletter of the Science Media Centre about how she was coached and directed by their staff to slam the patient community. She played a key role in a concerted and orchestrated attack on the credibility of not only parents of participants in the MAGENTA trial, but of all patients having chronic fatigue syndrome/myalgic encephalomyelitis, as well as their advocates and allies.

I am not a parent of a child eligible for recruitment to the MAGENTA trial. I am not even a citizen or resident of the UK. Nonetheless, I have considered the issues and lay out some of my considerations below. On this basis, I signed the global support version of the UK petition to suspend all trials of graded exercise therapy in children and adults with ME/CFS. I encourage readers who are similarly in my situation outside the UK to join me in signing the global support petition.

If I were a parent of an eligible child or a resident of the UK, I would not enroll my child in MAGENTA. I would immediately withdraw my child if he or she were currently participating in the trial. I would request all the child’s data be given back or evidence that it had been destroyed.

I recommend my PLOS Mind the Brain post, What patients should require before consenting to participate in research…, as either a prelude or epilogue to the following blog post.

What you will find here is a discussion of matters that parents should consider before enrolling their children in the MAGENTA trial of graded exercise for chronic fatigue syndrome. The previous blog post [http://blogs.plos.org/mindthebrain/2015/12/09/what-patients-should-require-before-consenting-to-participate-in-research/ ]  is rich in links to an ongoing initiative from The BMJ to promote broader involvement of patients (and implicitly, parents of patients) in the design, implementation, and interpretation of clinical trials. The views put forth by The BMJ are quite progressive, even if there is a gap between their expression of views and their actual implementation. Overall, that blog post presents a good set of standards for patients (and parents) making informed decisions concerning enrollment in clinical trials.

Late-breaking update: See also

Simon McGrath: PACE trial shows why medicine needs patients to scrutinise studies about their health

Basic considerations

Patients are under no obligation to participate in clinical trials. It should be recognized that any participation typically involves burden and possibly risk over what is involved in receiving medical care outside of a clinical trial.

It is a deprivation of their human rights and a violation of the Declaration of Helsinki to coerce patients to participate in medical research without freely given, fully informed consent.

Patients cannot be denied any medical treatment or attention to which they would otherwise be entitled if they fail to enroll in a clinical trial.

Issues are compounded when consent from parents is sought for participation of vulnerable children and adolescents for whom they have legal responsibility. Although assent to participate in clinical trials is sought from children and adolescents, it remains for their parents to consent to their participation.

Parents can at any time withdraw their consent for their children's and adolescents' participation in trials and have their data removed, without needing to justify their decision to any authority.

Declaration of Helsinki

The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data.

It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

[October 3 8:23 AM Update]: I have now inserted Article 21 of the Declaration of Helsinki which really nails the ethical problems of the SMILE trial:

21. Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.

There was clearly inadequate scientific justification for testing the quack Lightning Process treatment.

What is the MAGENTA trial?

The published MAGENTA study protocol states:

This study aims to investigate the acceptability and feasibility of carrying out a multicentre randomised controlled trial investigating the effectiveness of graded exercise therapy compared with activity management for children/teenagers who are mildly or moderately affected with CFS/ME.

Methods and analysis 100 paediatric patients (8–17 years) with CFS/ME will be recruited from 3 specialist UK National Health Service (NHS) CFS/ME services (Bath, Cambridge and Newcastle). Patients will be randomised (1:1) to receive either graded exercise therapy or activity management. Feasibility analysis will include the number of young people eligible, approached and consented to the trial; attrition rate and treatment adherence; questionnaire and accelerometer completion rates. Integrated qualitative methods will ascertain perceptions of feasibility and acceptability of recruitment, randomisation and the interventions. All adverse events will be monitored to assess the safety of the trial.

The first of two treatments being compared is:

Arm 1: activity management

This arm will be delivered by CFS/ME specialists. As activity management is currently being delivered in all three services, clinicians will not require further training; however, they will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). Clinicians therefore have flexibility in delivering the intervention within their National Health Service (NHS) setting. Activity management aims to convert a ‘boom–bust’ pattern of activity (lots 1 day and little the next) to a baseline with the same daily amount before increasing the daily amount by 10–20% each week. For children and adolescents with CFS/ME, these are mostly cognitive activities: school, schoolwork, reading, socialising and screen time (phone, laptop, TV, games). Those allocated to this arm will receive advice about the total amount of daily activity, including physical activity, but will not receive specific advice about their use of exercise, increasing exercise or timed physical exercise.

So, the first arm of the trial is a comparison condition consisting of standard care delivered without further training of providers. The treatment is flexibly delivered, expected to vary between settings, and thus largely uncontrolled. The treatment represents a methodologically weak condition that does not adequately control for attention and positive expectations. Control conditions should be equivalent to the intervention being evaluated in these dimensions.

The second arm of the study:

Arm 2: graded exercise therapy (GET)

This arm will be delivered by referral to a GET-trained CFS/ME specialist who will receive guidance on the mandatory, prohibited and flexible components (see online supplementary appendix 1). They will be encouraged to deliver GET as they would in their NHS setting. Those allocated to this arm will be offered advice that is focused on exercise with detailed assessment of current physical activity, advice about exercise and a programme including timed daily exercise. The intervention will encourage children and adolescents to find a baseline level of exercise which will be increased slowly (by 10–20% a week, as per NICE guidance and the Pacing, graded Activity and Cognitive behaviour therapy – a randomised Evaluation (PACE) trial). This will be the median amount of daily exercise done during the week. Children and adolescents will also be taught to use a heart rate monitor to avoid overexertion. Participants will be advised to stay within the target heart rate zones of 50–70% of their maximum heart rate.
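
To make the prescribed arithmetic concrete, here is a minimal sketch – entirely my own illustration, not trial software – of the 10–20% weekly increase and the 50–70% heart rate zone. The 220 − age estimate of maximum heart rate is an assumption on my part; the protocol excerpt above does not specify a formula.

```python
# A rough sketch of the arithmetic described above; my own illustration,
# not trial software. The 220 - age rule for maximum heart rate is assumed.

def weekly_exercise_plan(baseline_minutes, weeks, rate=0.15):
    """Minutes of daily exercise in each week under a fixed percentage increase."""
    return [round(baseline_minutes * (1 + rate) ** w, 1) for w in range(weeks)]

def target_heart_rate_zone(age):
    """50-70% of an estimated maximum heart rate (220 - age, assumed)."""
    hr_max = 220 - age
    return round(0.5 * hr_max), round(0.7 * hr_max)

print(weekly_exercise_plan(10, 6))   # [10.0, 11.5, 13.2, 15.2, 17.5, 20.1]
print(target_heart_rate_zone(14))    # (103, 144) beats/minute for a 14-year-old
```

Note how a fixed percentage increase compounds: at 15% per week, the daily amount roughly doubles within five weeks, which is precisely what alarms patients who report post-exertional worsening.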

The outcome of the trial will be evaluated in terms of:

Quantitative analysis

The percentage recruited of those eligible will be calculated …Retention will be estimated as the percentage of recruited children and adolescents reaching the primary 6-month follow-up point, who provide key outcome measures (the Chalder Fatigue Scale and the 36-Item Short-Form Physical Functioning Scale (SF-36 PFS)) at that assessment point.

Objective data will be collected in the form of physical activity measured by accelerometers. These are

Small, matchbox-sized devices that measure physical activity. They have been shown to provide reliable indicators of physical activity among children and adults.

However, actual evaluation of the outcome of the trial will focus on recruitment and retention and on subjective, self-report measures of fatigue and physical functioning. These subjective measures have been shown to be less valid than objective measures. Scores are vulnerable to participants knowing which condition they are assigned to (called ‘being unblinded’) and to their perception of which intervention the investigators prefer.

It is notable that in the PACE trial of CBT and GET for chronic fatigue syndrome in adults, the investigators manipulated participants’ self-reports with praise in newsletters sent out during the trial. The investigators also switched their scoring of the self-report measures, producing results that they later conceded had been exaggerated by the change in scoring [http://www.wolfson.qmul.ac.uk/current-projects/pace-trial#news].

Tom Kindlon, Irish ME/CFS Association Officer

See an excellent commentary by Tom Kindlon at PubMed Commons [What’s that?]

The validity of using subjective outcome measures as primary outcomes is questionable in such a trial.

The bottom line is that the investigators have a poorly designed study with an inadequate control condition. They have chosen subjective self-reports that are prone to invalidity and manipulation over objective measures like actual changes in activity, or practical real-world measures like school attendance. Not very good science here. But they are asking parents to sign their children up.

What is promised to parents consenting to have the children enrolled in the trial?

The published protocol to which the investigators supposedly committed themselves stated:

What are the possible benefits and risks of participating?
Participants will not benefit directly from taking part in the study although it may prove enjoyable contributing to the research. There are no risks of participating in the study.

Version 7 of the information sheet provided to parents states:

Your child may benefit from the treatment they receive, but we cannot guarantee this. Some children with CFS/ME like to know that they are helping other children in the future. Your child may also learn about research.

Survey assessments conducted by the patient community strongly contradict the suggestion that there is no risk of harm with GET.

Alem Matthees, the patient activist who obtained release of the PACE data and participated in its reanalysis, has commented:

“Given that post-exertional symptomatology is a hallmark of ME/CFS, it is premature to do trials of graded exercise on children when safety has not first been properly established in adults. The assertion that graded exercise is safe in adults is generally based on trials where harms are poorly reported or where the evidence of objectively measured increases in total activity levels is lacking. Adult patients commonly report that their health was substantially worsened after trying to increase their activity levels, sometimes severely and permanently, therefore this serious issue cannot be ignored when recruiting children for research.”

See also

Kindlon T. Reporting of harms associated with graded exercise therapy and cognitive behavioural therapy in myalgic encephalomyelitis/chronic fatigue syndrome. Bulletin of the IACFS/ME. 2011;19(2):59-111.

This thorough systematic review reports inadequate reporting of harms in clinical trials, but:

Exercise-related physiological abnormalities have been documented in recent studies and high rates of adverse reactions to exercise have been recorded in a number of patient surveys. Fifty-one percent of survey respondents (range 28-82%, n=4338, 8 surveys) reported that GET worsened their health while 20% of respondents (range 7-38%, n=1808, 5 surveys) reported similar results for CBT.
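
For readers unfamiliar with how such pooled percentages arise, here is a toy calculation. The survey counts below are invented for illustration; the real per-survey figures are in Kindlon (2011). The point is simply that the headline number is a respondent-weighted average across surveys.

```python
# A toy illustration of how a pooled percentage like the 51% figure is
# computed: a respondent-weighted average across surveys. The counts below
# are made up; the real figures are in Kindlon (2011).
surveys = [(420, 250), (1100, 520), (300, 90)]  # (respondents, reported worse)

total_n = sum(n for n, _ in surveys)
total_worse = sum(w for _, w in surveys)
print(f"pooled: {100 * total_worse / total_n:.0f}% of {total_n} respondents")
# Individual surveys here range from 30% to ~60%, pooling to about 47%; the
# largest surveys dominate the pooled figure, which is why the range matters.
```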

The unpublished results of Dr. Esther Crawley’s SMILE trial

The published protocol for the SMILE trial

[Note the ® in the title below, indicating a test of a trademarked commercial product. The significance of that is worthy of a whole other blog post.]

Crawley E, Mills N, Hollingworth W, Deans Z, Sterne JA, Donovan JL, Beasant L, Montgomery A. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial). Trials. 2013 Dec 26;14(1):1.

states:

The data monitoring group will receive notice of serious adverse events (SAEs) for the sample as a whole. If the incidence of SAEs of a similar type is greater than would be expected in this population, it will be possible for the data monitoring group to receive data according to trial arm to determine any evidence of excess in either arm.

Primary outcome data at six months will be examined once data are available from 50 patients, to ensure that neither arm is having a detrimental effect on the majority of patients. An independent statistician with no other involvement in the study will investigate whether more than 20 participants in the study sample as a whole have experienced a reduction of ≥ 30 points on the SF-36 at six months. In this case, the data will then be summarised separately by trial arm, and sent to the data monitoring group for review. This process will ensure that the trial team will not have access to the outcome data separated by treatment arm.
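
The monitoring rule quoted above is mechanical enough to express in a few lines. The sketch below is my paraphrase of the protocol text in code, with hypothetical scores – not the trial's actual procedure.

```python
# A sketch of the monitoring trigger described above, as I read the protocol
# text; my paraphrase in code, not the trial's actual procedure.
def monitoring_trigger(baseline_sf36, six_month_sf36, drop=30, max_cases=20):
    """True once more than `max_cases` patients have fallen >= `drop` points
    on the SF-36, at which point per-arm data go to the data monitoring group."""
    falls = (b - f for b, f in zip(baseline_sf36, six_month_sf36))
    return sum(1 for d in falls if d >= drop) > max_cases

# Hypothetical scores for five patients; three have dropped >= 30 points.
print(monitoring_trigger([70, 60, 80, 55, 65], [35, 58, 45, 20, 66], max_cases=2))  # True
```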

A Bristol University website indicates that recruitment of the SMILE trial was completed in 2013. The trial was thus completed a number of years ago, but these valuable data have never been published.

The only publication from the trial so far uses selective quotes from child participants that cannot be independently evaluated. Readers are not told how representative these quotes are, the outcomes for the children being quoted, or the overall outcomes of the trial.

Parslow R, Patel A, Beasant L, Haywood K, Johnson D, Crawley E. What matters to children with CFS/ME? A conceptual model as the first stage in developing a PROM. Archives of Disease in Childhood. 2015 Dec 1;100(12):1141-7.

The “evaluation” of the quack Lightning Process in the SMILE trial and quotes from patients have also been used to promote Parker’s products as being used in NHS clinics.

How can I say the Lightning Process is quackery?

Dr. Crawley describes the Lightning Process in the Research Ethics Application Form for the SMILE study as “combining the principles of neurolinguistic programming, osteopathy, and clinical hypnotherapy.”

That is an amazing array of three different frameworks from different disciplines. You would be hard pressed to find an example other than the Lightning Process that claims to integrate them. Yet the mechanism proposed for a therapeutic intervention cannot be a creative stir-fry of whatever is on hand thrown together. For a treatment to be considered science-based, there has to be a solid basis of evidence that these presumably complex processes fit together as assumed and work as assumed. I challenge Dr. Crawley or anyone else to produce a shred of credible, peer-reviewed evidence for the basic mechanism of the Lightning Process.

The entry for Neuro-linguistic programming (NLP) in Wikipedia states

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

 The Hampshire (UK) County Council Trading Standards Office filed a formal complaint against Phil Parker for claims made on the Lightning Process website concerning effects on CFS/ME:

The “CFS/ME” page of the website included the statements “Our survey found that 81.3 %* of clients report that they no longer have the issues they came with by day three of the LP course” and “The Lightning Process is working with the NHS on a feasibility study, please click here for further details, and for other research information click here”.

Seeming endorsements on Parker’s website. Two of them – Northern Ireland and NHS Suffolk – subsequently complained that use of their insignias was unauthorized, and they were quickly removed.

The “working with the NHS” refers to the collaboration with Dr. Esther Crawley.

The UK Advertising Standards Authority upheld this complaint, as well as complaints about Parker’s claims of effectiveness for other conditions, including multiple sclerosis, irritable bowel syndrome, and fibromyalgia.

 Another complaint in 2013 about claims on Phil Parker’s website was similarly upheld:

 The claims must not appear again in their current form. We welcomed the decision to remove the claims. We told Phil Parker Group not to make claims on websites within their control that were directly connected with the supply of their goods and services if those claims could not be supported with robust evidence. We also told them not to refer to conditions for which advice should be sought from suitably qualified health professionals.

 As we will see, these upheld charges of quackery occurred when parents of children participating in the SMILE trial were being vilified in the BMJ and elsewhere. Dr. Crawley was prominently featured in this vilification and was quoted in a celebration of its success by the Science Media Centre, which had orchestrated the vilification.


The Research Ethics Committee approval of the SMILE trial and the aftermath

I was not very aware of the CFS/ME literature, and certainly not of all its controversies, when the South West Research Ethics Committee (REC) reviewed the application for the SMILE trial and ultimately approved it on September 8, 2010.

Had I been aware, I would have had strong opinions about it; I only started blogging a little afterwards. But I was very concerned about patients being exposed to alternative and unproven medical treatments in other contexts that were not evidence-based – even more so to treatments for which promoters claimed implausible mechanisms. I would not have felt it appropriate to inflict the Lightning Process on unsuspecting children. It is insufficient justification to put them in a clinical trial simply because a particular treatment has not been evaluated.

Prince Charles once advocated organic coffee enemas to treat advanced cancer. His endorsement generated a lot of curiosity from cancer patients. But that would not justify a randomized trial of coffee enemas. By analogy, I don’t think Dr. Esther Crawley had sufficient justification to conduct her trial, especially without warning parents that there was no scientific basis to expect the Lightning Process to work, or any assurance that it would not hurt the children.

I am concerned about clinical trials that have little likelihood of producing evidence that a treatment is effective, but that seem designed to get these treatments into routine clinical care. It is now appreciated that some clinical trials have little scientific value but serve as experimercials or as a means of placing products in clinical settings. Pharmaceutical companies notoriously do this.

As it turned out, the SMILE trial succeeded admirably as a promotion for the Lightning Process, earning Phil Parker unknown but substantial fees through its use in the SMILE trial and through subsequent successful marketing throughout the NHS.

In short, I would have been concerned about the judgment of Dr. Esther Crawley in organizing the SMILE trial. I would have been quite curious about conflicts of interest and whether patients were adequately informed of how Phil Parker was benefiting.

The ethics review of the SMILE trial gave short shrift to these important concerns.

When the patient community and its advocate, Dr. Charles Shepherd, became aware of the SMILE trial’s approval, there were protests leading to re-evaluations all the way up to the National Patient Safety Agency. Examining an Extract of Minutes from South West 2 REC meeting held on 2 December 2010, I see many objections to the approval being raised and I am unsatisfied by the way in which they were discounted.

Patient, parent, and advocate protests escalated. If some acted inappropriately, this did not undermine the legitimacy of others’ protest. By analogy, I feel strongly about police violence aimed against African-Americans and racist policies that disproportionately target African-Americans for police scrutiny and stopping. I’m upset when agitators and provocateurs become violent at protests, but that does not delegitimize my concerns about the way black people are treated in America.

Dr. Esther Crawley undoubtedly experienced considerable stress and unfair treatment, but I don’t understand why she was not responsive to patient concerns, or why she failed to honor her responsibility to protect child patients from exposure to unproven and likely harmful treatments.

Dr. Crawley is extensively quoted in a British Medical Journal opinion piece authored by a freelance journalist, Nigel Hawkes:

Hawkes N. Dangers of research into chronic fatigue syndrome. BMJ. 2011 Jun 22;342:d3780.

If I had been on the scene, Dr. Crawley might well have been describing me and how I would have reacted, including my exercise of appropriate, legally provided means of protest and complaint:

Critics of the method opposed the trial, first, Dr Crawley says, by claiming it was a terrible treatment and then by calling for two ethical reviews. Dr Shepherd backed the ethical challenge, which included the claim that it was unethical to carry out the trial in children, made by the ME Association and the Young ME Sufferers Trust. After re-opening its ethical review and reconsidering the evidence in the light of the challenge, the regional ethical committee of the NHS reiterated its support for the trial.

There was arguably some smearing of Dr. Shepherd, even in the distancing of him from the actions of others:

This point of view, if not the actions it inspires, is defended by Charles Shepherd, medical adviser to and trustee of the ME Association. “The anger and frustration patients have that funding has been almost totally focused on the psychiatric side is very justifiable,” he says. “But the way a very tiny element goes about protesting about it is not acceptable.”

This article escalated with unfair comparisons to animal rights activists, with condemnation of appropriate use of channels of complaint – reporting physicians to the General Medical Council.

The personalised nature of the campaign has much in common with that of animal rights activists, who subjected many scientists to abuse and intimidation in the 1990s. The attitude at the time was that the less said about the threats the better. Giving them publicity would only encourage more. Scientists for the most part kept silent and journalists desisted from writing about the subject, partly because they feared anything they wrote would make the situation worse. Some journalists have also been discouraged from writing about CFS/ME, such is the unpleasant atmosphere it engenders.

While the campaigners have stopped short of the violent activities of the animal rights groups, they have another weapon in their armoury—reporting doctors to the GMC. Willie Hamilton, an academic general practitioner and professor of primary care diagnostics at Peninsula Medical School in Exeter, served on the panel assembled by the National Institute for Health and Clinical Excellence (NICE) to formulate treatment advice for CFS/ME.

Simon Wessely and the Principal Investigator of the PACE trial, Peter White, were given free rein to dramatize the predicament posed by the protest. Much later, in testimony at the 2016 Lower Tribunal hearing, PACE co-investigator Trudie Chalder would cast doubt on whether the harassment was as severe or violent as it had been portrayed. Before that, the financial conflicts of interest of Peter White that were denied in the article would be exposed.

In response to her testimony, the UK Information Tribunal stated:

Professor Chalder’s evidence when she accepts that unpleasant things have been said to and about PACE researchers only, but that no threats have been made either to researchers or participants.

But in 2012, a pamphlet celebrating the success of the Science Media Centre, with which Wessely was closely involved, would prove rich in indiscreet quotes from Esther Crawley. The article in the BMJ was revealed to be part of a much larger orchestrated campaign to smear, discredit, and silence patients, parents, advocates and their allies.

Dr. Esther Crawley’s participation in a campaign organized by the Science Media Centre to discredit patients, parents, advocates and supporters

The SMC would later organize a letter-writing campaign to Parliament in support of Peter White and his refusal to release the PACE data to Alem Matthees, who had made a request under the Freedom of Information Act. The letter-writing campaign was an effort to get scientific data excluded from the provisions of the Freedom of Information Act. The effort failed and the data were subsequently released.

But here is how Esther Crawley described her assistance:

The SMC organised a meeting so we could discuss what to do to protect researchers. Those who had been subject to abuse met with press officers, representatives from the GMC and, importantly, police who had dealt with the animal rights campaign. This transformed my view of what had been going on. I had thought those attacking us were “activists”; the police explained they were “extremists”.

And

We were told that we needed to make better use of the law and consider using the press in our favour – as had researchers harried by animal rights extremists. “Let the public know what you are trying to do and what is happening to you,” we were told. “Let the public decide.”

And

I took part in quite a few interviews that day, and have done since. I was also inundated with letters, emails and phone calls from patients with CFS/ME all over the world asking me to continue and not “give up”. The malicious, they pointed out, are in a minority. The abuse has stopped completely. I never read the activists’ blogs, but friends who did told me that they claimed to be “confused” and “upset” – possibly because their role had been switched from victim to abuser. “We never thought we were doing any harm…”

The patient community and its allies are still burdened by the damage of this effort and are rebuilding their credibility only slowly. Only now are they beginning to get an audience as suffering human beings with significant, legitimate unmet needs. Only now are they escaping the stigmatization that occurred at this time, with Esther Crawley playing a key role.

Where does this leave us?

Parents are being asked to enroll their children in a clinical trial without clear benefit to the children but with the possibility of considerable risk from the graded exercise. They are being asked by Esther Crawley, a physician who previously inflicted a quack treatment on children with CFS/ME in the guise of a clinical trial, for which she has never published the resulting data. She has played an effective role in damaging the legitimacy and capacity of patients and parents to complain.

Given this history and these factors, why would a parent possibly want to enroll their children in the MAGENTA trial? Somebody please tell me.

Special thanks to all the patient citizen-scientists who contributed to this blog post. Any inaccuracies or excesses are entirely my own, but these persons gave me substantial help. Some are named in the blog, but others prefer anonymity.

All opinions expressed are solely those of James C Coyne. The blog post in no way conveys any official position of Mind the Brain, PLOS blogs or the larger PLOS community. I appreciate the free expression of personal opinion that I am allowed.


Stalking a Cheshire cat: Figuring out what happened in a psychotherapy intervention trial

John Ioannidis, the “scourge of sloppy science,” has documented again and again that the safeguards being introduced into the biomedical literature against untrustworthy findings are usually ineffective. In Ioannidis’ most recent report, his group:

…Assessed the current status of reproducibility and transparency addressing these indicators in a random sample of 441 biomedical journal articles published in 2000–2014. Only one study provided a full protocol and none made all raw data directly available.

As reported in a recent post in Retraction Watch, Did a clinical trial proceed as planned? New project finds out, psychiatrist Ben Goldacre has a new project with

…The relatively straightforward task of comparing reported outcomes from clinical trials to what the researchers said they planned to measure before the trial began. And what they’ve found is a bit sad, albeit not entirely surprising.

Ben Goldacre specifically excludes psychotherapy studies from this project. But there are reasons to believe that the psychotherapy literature is less trustworthy than the biomedical literature because psychotherapy trials are less frequently registered, adherence to CONSORT reporting standards is less strict, and investigators more routinely refuse to share data when requested.

Untrustworthiness of information provided in the psychotherapy literature can have important consequences for patients, clinical practice, and public health and social policy.

The study that I will review twice switched outcomes in its reports, had a poorly chosen comparison control group and flawed analyses, and its protocol was registered after the study started. Yet, the study will likely provide data for decision-making about what to do with primary care patients with a few unexplained medical symptoms. The recommendation of the investigators is to deny these patients medical tests and workups and instead provide them with an unvalidated psychiatric diagnosis and a treatment that encourages them to believe that their concerns are irrational.

In this post I will attempt to track what should have been an orderly progression from (a) registration of a psychotherapy trial to (b) publishing of its protocol to (c) reporting of the trial’s results in the peer-reviewed literature. This exercise will show just how difficult it is to make sense of studies in a poorly documented psychological intervention literature.

  • I find lots of surprises, including outcome switching in both reports of the trial.
  • The second article reporting results of the trial does not acknowledge registration, minimally cites the first report of outcomes, and hides important shortcomings of the trial. Yet the authors inadvertently expose crucial new shortcomings without comment.
  • Detecting important inconsistencies between registration and protocols and reports in the journals requires an almost forensic attention to detail to assess the trustworthiness of what is reported. Some problems hide in plain sight if one takes the time to look, but others require a certain clinical connoisseurship, a well-developed appreciation of the subtle means by which investigators spin outcomes to get novel and significant findings.
  • Outcome switching and inconsistent cross-referencing of published reports of a clinical trial will bedevil any effort to integrate the results of the trial into the larger literature in a systematic review or meta-analysis.
  • Two journals – Psychosomatic Medicine and particularly Journal of Psychosomatic Research– failed to provide adequate peer review of articles based on this trial, in terms of trial registration, outcome switching, and allowing multiple reports of what could be construed as primary outcomes from the same trial into the literature.
  • Despite serious problems in their interpretability, results of this study are likely to be cited and influence far-reaching public policies.
  • The generalizability of results of my exercise is unclear, but my findings encourage skepticism more generally about published reports of results of psychotherapy interventions. It is distressing that more alarm bells have not been sounded about the reports of this particular study.

The publicly accessible registration of the trial is:

Cognitive Behaviour Therapy for Abridged Somatization Disorder (Somatic Symptom Index [SSI] 4,6) patients in primary care. Current controlled trials ISRCTN69944771

The publicly accessible full protocol is:

Magallón R, Gili M, Moreno S, Bauzá N, García-Campayo J, Roca M, Ruiz Y, Andrés E. Cognitive-behaviour therapy for patients with Abridged Somatization Disorder (SSI 4, 6) in primary care: a randomized, controlled study. BMC Psychiatry. 2008 Jun 22;8(1):47.

The second report of treatment outcomes in Journal of Psychosomatic Research

Readers can more fully appreciate the problems that I uncovered if I work backwards from the second published report of outcomes from the trial. Published in the Journal of Psychosomatic Research, the article is behind a paywall, but readers can write to the corresponding author for a PDF: mgili@uib.es. This person is also the corresponding author for the other paper, in Psychosomatic Medicine, and so readers might want to request both papers.

Gili M, Magallón R, López-Navarro E, Roca M, Moreno S, Bauzá N, García-Campayo J. Health related quality of life changes in somatising patients after individual versus group cognitive behavioural therapy: A randomized clinical trial. Journal of Psychosomatic Research. 2014 Feb 28;76(2):89-93.

The title is misleading in its ambiguity because “somatising” does not refer to an established diagnostic category. In this article, it refers to an unvalidated category that encompasses a considerable proportion of primary care patients, usually those with comorbid anxiety or depression. More about that later.

PubMed, which usually reliably attaches a trial registration number to abstracts, doesn’t do so for this article.

The article does not list the registration, and does not provide the citation when indicating that a trial protocol is available. The only subsequent citations of the trial protocol are ambiguous:

More detailed design settings and study sample of this trial have been described elsewhere [14,16], which explain the effectiveness of CBT reducing number and severity of somatic symptoms.

The above quote is also the sole citation of a key previous paper that presents outcomes for the trial. Only an alert and motivated reader would catch this. No opportunity within the article is provided for comparing and contrasting results of the two papers.

The brief introduction displays a decided puffer fish phenomenon, exaggerating the prevalence and clinical significance of the unvalidated “abridged somatization disorder.” Essentially, the authors invoke the problematic but accepted psychiatric diagnostic categories of somatoform and somatization disorders in claiming validity for a diagnosis with much less stringent criteria. Oddly, the category has different criteria when applied to men and women: men require four unexplained medical symptoms, whereas women require six.

I haven’t previously encountered the term “abridged” in psychiatric diagnosis. Maybe the authors mean “subsyndromal,” as in “subsyndromal depression.” This is a dubious labeling because it suggests that not all characteristics needed for diagnosis are present, some of which may be crucial. Think of it: is a persistent cough subsyndromal lung cancer, or maybe emphysema? References to symptoms being “subsyndromal” often occur in contexts where exaggerated claims about prevalence are being made, with inappropriate, non-evidence-based inferences about treatment of milder cases from the more severe.

A casual reader might infer that the authors are evaluating a psychiatric treatment with wide applicability to as many as 20% of primary care patients. As we will see, the treatment focuses on discouraging any diagnostic medical tests and trying to convince the patient that their concerns are irrational.

The introduction identifies the primary outcome of the trial:

The aim of our study is to assess the efficacy of a cognitive behavioural intervention program on HRQoL [health-related quality of life] of patients with abridged somatization disorder in primary care.

This primary outcome is inconsistent with what was reported in the registration, the published protocol, and the first article reporting outcomes. The earlier report does not even mention the inclusion of a measure of HRQoL, measured by the SF-36. It is listed in the study protocol as a “secondary variable.”

The opening of the methods section declares that the trial is reported in this paper consistent with the Consolidated Standards of Reporting Clinical Trials (CONSORT). This is not true because the flowchart describing patients from recruitment to follow-up is missing. We will see that when it is reported in another paper, some important information is contained in that flowchart.

The methods section reports only three measures were administered: a Standardized Polyvalent Psychiatric Interview (SPPI), a semistructured interview developed by the authors with minimal validation; a screening measure for somatization administered by primary care physicians to patients whom they deemed appropriate for the trial, and the SF-36.

Crucial details are withheld about the screening and diagnosis of “abridged somatization disorder.” If these details had been presented, a reader would further doubt the validity of this unvalidated and idiosyncratic diagnosis.

Few readers, even primary care physicians or psychiatrists, will know what to make of Smith’s guidelines (Googling them won’t yield much), which are essentially a matter of simply sending a letter to the referring GP. Sending such a letter is a notoriously ineffective intervention in primary care. It mainly indicates that patients referred to a trial did not get assigned to an active treatment. As I will document later, the authors were well aware that this would be an ineffectual control/comparison intervention, but using it as such guarantees that their preferred intervention would look quite good in terms of effect size.

The two active interventions are individual- and group-administered CBT which is described as:

Experimental or intervention group: implementation of the protocol developed by Escobar [21,22] that includes ten weekly 90-min sessions. Patients were assessed at 4 time points: baseline, post-treatment, 6 and 12 months after finishing the treatment. The CBT intervention mainly consists of two major components: cognitive restructuring, which focuses on reducing pain-specific dysfunctional cognitions, and coping, which focuses on teaching cognitive and behavioural coping strategies. The program is structured as follows. Session 1: the connection between stress and pain. Session 2: identification of automated thoughts. Session 3: evaluation of automated thoughts. Session 4: questioning the automatic thoughts and constructing alternatives. Session 5: nuclear beliefs. Session 6: nuclear beliefs on pain. Session 7: changing coping mechanisms. Session 8: coping with ruminations, obsessions and worrying. Session 9: expressive writing. Session 10: assertive communication.

There is sparse presentation of data from the trial in the results section, but some fascinating details await a skeptical, motivated reader.

Table 1 displays social demographic and clinical variables. Psychiatric comorbidity is highly prevalent. Readers can’t tell exactly what is going on, because the authors’ own interview schedule is used to assess comorbidity. But it appears that all but a small minority of patients diagnosed with “abridged somatization disorder” have substantial anxiety and depression. Whether these symptoms meet formal criteria cannot be determined. There is no mention of physical comorbidities.

But there is something startling awaiting an alert reader in Table 2.

[Table 2 from Gili et al.: SF-36 scores by trial arm]

There is something very odd going on here, and very likely a breakdown of randomization. Baseline differences in the key outcome measure, the SF-36, are substantially greater between groups than any within-group change. The treatment as usual condition (TAU) has much lower functioning [lower scores mean lower functioning] than the group CBT condition, which in turn is substantially below the individual CBT condition.

If we compare the scores to adult norms, all three groups of patients are poorly functioning, but those “randomized” to TAU are unusually impaired, strikingly more so than the other two groups.

Keep in mind that evaluations of active interventions, in this case CBT, in randomized trials always involve a difference between groups, not just the difference observed within a particular group. That’s because a comparison/control group is supposed to be equivalent for nonspecific factors, including natural recovery. This trial is going to be very biased in its evaluation of individual CBT, a group in which patients started much higher in physical functioning and ended up much higher. Statistical controls fail to correct for such baseline differences. We simply do not have an interpretable clinical trial here.
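
To see why baseline imbalance plus statistical adjustment cannot rescue a trial, consider a minimal simulation – entirely my own construction with made-up numbers – in which there is no treatment effect at all, yet the arm that starts lower appears to “improve” through regression to the mean alone.

```python
# A minimal simulation, my own construction, of why baseline imbalance
# matters: with noisy measurement and NO treatment effect, an arm that
# starts low "improves" through regression to the mean alone.
import numpy as np

rng = np.random.default_rng(0)
n = 60
stable_level = rng.normal(50, 10, size=2 * n)           # each patient's true level

baseline = stable_level + rng.normal(0, 8, size=2 * n)  # noisy baseline measure
order = np.argsort(baseline)
tau, cbt = order[:n], order[n:]                         # mimic broken randomization

followup = stable_level + rng.normal(0, 8, size=2 * n)  # zero treatment effect
for label, idx in [("TAU", tau), ("individual CBT", cbt)]:
    change = followup[idx].mean() - baseline[idx].mean()
    print(f"{label}: baseline {baseline[idx].mean():.1f}, change {change:+.1f}")
# The low-baseline arm shows a positive "change" and the high-baseline arm a
# negative one, purely as an artifact of how the groups were formed.
```

ANCOVA-style adjustment cannot fully repair this, because adjustment assumes the groups differ only by chance at baseline – which is exactly what is in doubt here.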

The first report of treatment outcomes in Psychosomatic Medicine

Moreno S, Gili M, Magallón R, Bauzá N, Roca M, del Hoyo YL, Garcia-Campayo J. Effectiveness of group versus individual cognitive-behavioral therapy in patients with abridged somatization disorder: a randomized controlled trial. Psychosomatic Medicine. 2013 Jul 1;75(6):600-8.

The title indicates that the patients are selected on the basis of “abridged somatization disorder.”

The abstract prominently indicates the trial registration number (ISRCTN69944771), which can be plugged into Google to reach the publicly accessible registration.

If a reader is unaware of the lack of validation for “abridged somatization disorder,” they probably won’t infer that from the introduction. The rationale given for the study is that

A recently published meta-analysis (18) has shown that there has been ongoing research on the effectiveness of therapies for abridged somatization disorder in the last decade.

Checking that meta-analysis, I found that it included only a single null trial for treatment of abridged somatization disorder. This seems like a gratuitous, ambiguous citation.

I was surprised to learn that in three of the five provinces in which the study was conducted, patients

…Were not randomized on a one-to-one basis but in blocks of four patients to avoid a long delay between allocation and the onset of treatment in the group CBT arm (where the minimal group size required was eight patients). This has produced, by chance, relatively big differences in the sizes of the three arms.

This departure from one-to-one randomization was not mentioned in the second article reporting results of the study, and seems an outright contradiction of what is presented there. Nor is it mentioned in the study protocol. This allocation strategy may have been the source of the lack of baseline equivalence between the TAU and the two intervention groups.
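
To illustrate how allocating whole blocks of four patients at a time can unbalance a three-arm trial, here is a toy sketch. The paper gives no algorithmic detail, so this is my guess at the mechanics, not the trial's actual method.

```python
# A toy sketch of block-of-four allocation; my own guess at the mechanics,
# since the paper gives no algorithmic detail. Arm sizes drift apart
# instead of staying 1:1:1.
import random

random.seed(1)
arms = {"TAU": 0, "group CBT": 0, "individual CBT": 0}
remaining = 132                      # hypothetical recruitment target
while remaining > 0:
    arm = random.choice(list(arms))  # a whole block of four goes to one arm
    block = min(4, remaining)
    arms[arm] += block
    remaining -= block
print(arms)  # arm sizes typically end up noticeably unequal in any one run
```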

For the vigilant skeptic, the authors’ calculation of sample size is an eye-opener. Sample size estimation was based on the effectiveness of TAU in primary care visits, which has been assumed to be very low (approximately 10%).

Essentially, the authors are justifying a modest sample size because they expect the TAU intervention to be nearly ineffective. How, then, could the authors believe there was equipoise – that the comparison/control and active treatments could be expected to be equally effective? The authors seem to say that they don’t believe this. Yet equipoise is an ethical and practical requirement for a clinical trial for which human subjects are being recruited. In terms of trial design, do the authors really think this poor treatment provides an adequate comparison/control?

In the methods section, the authors also provide a study flowchart, which was required for the other paper to adhere to CONSORT standards but was missing there. Note the flow at the end of the study for the TAU comparison/control condition at the far right: there was substantially more dropout in this group. The authors chose to estimate the missing scores with the Last Observation Carried Forward (LOCF) method, which assumes the last available observation can be substituted for every subsequent one. This is a discredited technique and particularly inappropriate in this context. Think about it: the TAU condition was expected by the authors to be quite poor care. Not surprisingly, more patients assigned to it dropped out. But they may have dropped out while deteriorating, and so carrying their last observation forward is particularly inappropriate. Certainly it cannot be assumed that the smaller number of dropouts from the other conditions left for the same reasons. We have a methodological and statistical mess on our hands, but it was hidden from us in the second report discussed above.

[CONSORT-style study flowchart from Moreno et al., showing greater dropout in the TAU arm]
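
A few lines of code make the LOCF problem concrete. The numbers below are hypothetical – mine, not the trial's; the point is that patients who leave while declining are frozen at whatever score they had when they left, as if the decline had stopped there.

```python
# A minimal illustration, with hypothetical numbers rather than trial data,
# of how LOCF flatters an arm whose patients drop out while deteriorating.
def locf(series):
    """Replace None (missed visits) with the last observed value."""
    filled, last = [], None
    for value in series:
        last = value if value is not None else last
        filled.append(last)
    return filled

patients = {
    "dropout_1": [40, 35, None, None],  # left follow-up while getting worse
    "dropout_2": [45, 38, None, None],
    "completer": [42, 41, 41, 40],
}
for name, series in patients.items():
    print(name, locf(series))
# dropout_1 is scored 35 and dropout_2 is scored 38 at every later visit,
# even though both were declining when they disappeared from follow-up.
```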

Six measures are mentioned: (1) the Othmer-DeSouza screening instrument used by clinicians to select patients; (2) the Screening for Somatoform Disorders (SOMS), a 39-item questionnaire that includes all bodily symptoms and criteria relevant to somatoform disorders according to either DSM-IV or ICD-10; (3) a Visual Analog Scale of somatic symptoms (Severity of Somatic Symptoms scale) that patients use to assess changes in severity in each of 40 symptoms; (4) the authors’ own SPPI semistructured psychiatric interview for diagnosis of psychiatric morbidity in primary care settings; (5) the clinician-administered Hamilton Anxiety Rating Scale; and (6) the Hamilton Depression Rating Scale.

We are never actually told what the primary outcome is for the study, but it can be inferred from the opening of the discussion:

The main finding of the trial is a significant improvement regardless of CBT type compared with no intervention at all. CBT was effective for the relief of somatization, reducing both the number of somatic symptoms (Fig. 2) and their intensity (Fig. 3). CBT was also shown to be effective in reducing symptoms related to anxiety and depression.

But I noticed something else here, after a couple of readings. The instruments used to select patients and identify them with “abridged somatization disorder” reference 39 or 40 symptoms, with men needing only four and women only six for a diagnosis. That means that most pairs of patients receiving a diagnosis will not have a symptom in common. Whatever “abridged somatization disorder” means, patients who received this diagnosis are likely to differ from each other in terms of somatic symptoms, but probably have other characteristics in common. They are basically depressed and anxious patients, but these mood problems are not being addressed directly.
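
A quick combinatorial check – my own toy calculation, under the crude and unrealistic assumption that each patient's symptoms are drawn uniformly at random from the 40-item list – shows how plausible this is at the four-symptom threshold, though less so at six.

```python
# A toy combinatorial check, assuming symptoms are drawn uniformly at random
# from the 40-item list (crude, but enough to show the scale of the problem).
from math import comb

def p_no_shared_symptom(total=40, k1=4, k2=4):
    """P(two patients with k1 and k2 random symptoms share none)."""
    return comb(total - k1, k2) / comb(total, k2)

print(f"two men (4 symptoms each):   {p_no_shared_symptom(40, 4, 4):.0%}")  # ~64%
print(f"two women (6 symptoms each): {p_no_shared_symptom(40, 6, 6):.0%}")  # ~35%
```

Real symptoms surely cluster rather than occurring at random, but the arithmetic shows how little two patients given this diagnosis need to have in common.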

Comparison of this report to the outcomes paper reviewed earlier shows that none of these measures is mentioned there as being assessed, and certainly not as outcomes.

Comparison of this report to the published protocol reveals that number and intensity of somatic symptoms are two of the three main outcomes, but this article makes no mention of the third, utilization of healthcare.

Readers can find something strange in Table 2, which presents what seems to be one of the primary outcomes, severity of symptoms. In this table the order is TAU, group CBT, and individual CBT. Note the large difference in baseline symptoms, with group CBT being much more severe. It’s difficult to make sense of the 12-month follow-up because there was differential dropout and reliance on an inappropriate LOCF imputation of missing data. But if we accept the imputation as the authors did, it appears that there were no differences between TAU and group CBT. That is what the authors reported with inappropriate analyses of covariance.

[Table 2 from Moreno et al.: severity of somatic symptoms by trial arm]

The authors’ cheerful take away message?

This trial, based on a previous successful intervention proposed by Sumathipala et al. (39), presents the effectiveness of CBT applied at individual and group levels for patients with abridged somatization (somatic symptom indexes 4 and 6).

But hold on! In the introduction, the authors’ justification for their trial was:

Evidence for the group versus individual effectiveness of cognitive-behavioral treatment of medically unexplained physical symptoms in the primary care setting is not yet available.

And let’s take a look at Sumathipala et al.

Sumathipala A, Siribaddana S, Hewege S, Sumathipala K, Prince M, Mann A. Understanding the explanatory model of the patient on their medically unexplained symptoms and its implication on treatment development research: a Sri Lanka Study. BMC Psychiatry. 2008 Jul 8;8(1):54.

The article presents speculations based on an observational study, not an intervention study, so there is no “successful intervention” being reported.

The formal registration 

The registration of psychotherapy trials typically provides sparse details. The curious must consult the more elaborate published protocol. Nonetheless, the registration can often provide grounds for skepticism, particularly when it is compared to any discrepant details in the published protocol, as well as subsequent publications.

The registration declares

Study hypothesis

Patients randomized to cognitive behavioural therapy significantly improve in measures related to quality of life, somatic symptoms, psychopathology and health services use.

Primary outcome measures

Severity of Clinical Global Impression scale at baseline, 3 and 6 months and 1-year follow-up

Secondary outcome measures

The following will be assessed at baseline, 3 and 6 months and 1-year follow-up:
1. Quality of life: 36-item Short Form health survey (SF-36)
2. Hamilton Depression Scale
3. Hamilton Anxiety Scale
4. Screening for Somatoform Symptoms [SOMS]

Overall trial start date

15/01/2008

Overall trial end date

01/07/2009

The published protocol 

Primary outcome

Main outcome variables:

– SSS (Severity of somatic symptoms scale) [22]: a scale of 40 somatic symptoms assessed by a 7-point visual analogue scale.

– SSQ (Somatic symptoms questionnaire) [22]: a scale made up of 40 items on somatic symptoms and patients’ illness behaviour.

When I searched the published protocol for the Severity of Clinical Global Impression scale, the primary outcome declared in the registration, I could find no reference to it.

The protocol was submitted on May 14, 2008 and published on June 22, 2008. Given the registered overall trial start date of 15 January 2008, this suggests that the protocol was submitted after the start of the trial.

On sample size, the protocol states:

To calculate the sample size we consider that the effectiveness of usual treatment (Smith’s norms) is rather low, estimated at about 20% in most of the variables [10,11]. We aim to assess whether the new intervention is at least 20% more effective than usual treatment.
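
For context, here is the standard two-proportion sample-size formula applied to the protocol's figures – 20% "effectiveness" for TAU versus a hoped-for 40% for CBT. This is my own back-of-envelope arithmetic, not the authors' calculation.

```python
# The standard two-proportion sample-size formula applied to the protocol's
# figures (20% response for TAU vs 40% for CBT); my own back-of-envelope
# arithmetic, not the authors' calculation.
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_arm(0.20, 0.40))  # ~79 per arm for 80% power at alpha = 0.05
```

The lower the response assumed for the comparator, the larger the expected difference and the smaller the required sample – one reason an expected-to-fail TAU arm is so convenient for trialists.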

Comparison group

Control group or standardized recommended treatment for somatization disorder in primary care (Smith’s norms) [10,11]: standardized letter to the family doctor with Smith’s norms that includes: 1. Provide brief, regularly scheduled visits. 2. Establish a strong patient-physician relationship. 3. Perform a physical examination of the area of the body where the symptom arises. 4. Search for signs of disease instead of relying on symptoms. 5. Avoid diagnostic tests and laboratory or surgical procedures. 6. Gradually move the patient to being “referral ready”.

Basically, TAU, the comparison/control group, involves simply sending a letter to referring physicians encouraging them to meet regularly with the patients but discouraging diagnostic tests or medical procedures. Keep in mind that patients for this study were selected by the physicians because they found them particularly frustrating to treat. Despite the authors’ repeated claims about the high prevalence of “abridged somatization disorder,” they relied on a large number of general practice settings, each contributing only a few patients. These patients are very heterogeneous in terms of somatic symptoms, but most share anxiety or depressive symptoms.

There is an uncontrolled selection bias here that makes generalization from the results of the study problematic. Just who are these patients? I wonder if they have some similarity to the frustrating GOMERs (Get Out Of My Emergency Room) in the classic House of God, a book described by Amazon as “an unvarnished, unglorified, and amazingly forthright portrait revealing the depth of caring, pain, pathos, and tragedy felt by all who spend their lives treating patients and stand at the crossroads between science and humanity.”

Imagine the disappointment of the referring physicians and the patients when consenting to participate in this study simply left the patients back in routine care provided by the same physicians. It’s no wonder that the patients deteriorated and that those assigned to this condition were more likely to drop out.

Whatever active ingredients the individual and group CBT have, they also include nonspecific factors missing from the TAU comparison group: frequency and intensity of contact, reassurance and support, attentive listening, and positive expectations. These nonspecific factors can readily be confused with active ingredients and may account for any differences between the active treatments and the TAU comparison. What a terrible study.

The two journals publishing reports of this study failed in their responsibility to the readership and to the larger audience seeking clinical and public policy relevance. Authors have ample incentive to engage in questionable publication practices, including ignoring and even suppressing registration, switching outcomes, and exaggerating the significance of their results. Journals of necessity must protect authors from their own inclinations, as well as protect readers and the larger medical community from untrustworthy reports. Psychosomatic Medicine and Journal of Psychosomatic Research failed miserably in their peer review of these articles. Neither journal is likely to be the first choice for authors seeking to publish findings from well-designed and well-reported trials. Who knows, maybe the journals’ standards are compromised by the need to attract randomized trials for what is construed, at least by the psychiatric community, as a psychosomatic condition.

Regardless, it’s futile to require registration and posting of protocols for psychotherapy trials if editors and reviewers ignore these resources in evaluating articles for publication.

Postscript: imagine what will be done with the results of this study

You can’t fix with a meta-analysis what investigators bungled by design.

In a recent blog post, I examined a registration for a protocol for a systematic review and meta-analysis of interventions to address medically unexplained symptoms. The review protocol was inadequately described, had undisclosed conflicts of interest, and one of the senior investigators had a history of switching outcomes in his own study and refusing to share data for independent analysis. Undoubtedly, the study we have been discussing meets the vague criteria for inclusion in this meta-analysis. But which outcomes will be chosen, particularly when there should be only one outcome per study? And will it be recognized that these two reports describe the same study? Will the key problems in the designation of the TAU control group, with its likely inflation of treatment effects, be recognized when effect sizes are calculated?

As you can see, it took a lot of effort to compare and contrast documents that should have been in alignment. Do you really expect those who conduct subsequent meta-analyses to make these multiple comparisons, or will they simply extract multiple effect sizes from the two papers so far reporting results?

Obviously, every time we encounter a report of a psychotherapy trial in the literature, we won’t have the time or inclination to undertake such a cross-comparison of articles, registration, and protocol. But maybe we should be skeptical of authors’ conclusions without such checks.

I’m curious what a casual reader would infer from encountering in a literature search one of the two reports of this clinical trial, but not the other.

 

 


What patients should require before consenting to participate in research…

A bold BMJ editorial calls for more patient involvement in the design, implementation, and interpretation of research – but ends on a sobering note: The BMJ has so little such involvement to report.

In this edition of Mind the Brain, I suggest how patients, individually and collectively, can take responsibility for advancing this important initiative themselves.

I write in a context defined by recent events.

  • Government-funded researchers offered inaccurate interpretations of their results [1, 2].
  • An unprecedented number of patients have judged the researchers’ interpretation of their results as harmful to their well-being.
  • The researchers then violated government-supported data sharing policies in refusing to release their data for independent analysis.
  • Patients were vilified in the investigators’ efforts to justify their refusal to release the data.

These events underscore the need for patients to require certain documentation before deciding whether to participate in research.

Declining to participate in clinical research is a patient’s inalienable right that must not jeopardize the receipt of routine treatment or lead to retaliation.

A simple step: in deciding whether to participate in research, patients can insist that any consent form they sign contains documentation of patient involvement at all phases of the research. If there is no detailing of how patients were involved in the design of this study and how they will be involved in the interpretation, patients should consider not consenting.

Similarly, patients should consider refusing to sign consent forms that do not expressly indicate that the data will be readily available for further analyses, preferably by placing the data in a publicly accessible depository.

Patients exercising their rights in these ways will make for better and more useful biomedical research, as well as research that is more patient-oriented.

The BMJ editorial

The editorial Research Is the Future, Get Involved declares:

More than three million NHS patients took part in research over the past five years. Bravo. Now let’s make sure that patients are properly involved, not just as participants but in trial conception, design, and conduct and the analysis, reporting, and dissemination of results.

But in the next sentences, the editorial describes how The BMJ’s laudable efforts to get researchers to demonstrate how patients were involved have not produced impressive results:

You may have noticed the new “patient involvement” box in The BMJ’s research articles. Sadly, all too often the text reads something like, “No patients were involved in setting the research question or the outcome measures; nor were they involved in the design and implementation of the study. There are no plans to involve patients in the dissemination of results.” We hope that the shock of such statements will stimulate change. Examples of good patient involvement will also help: see the multicentre randomised trial on stepped care for depression and anxiety (doi:10.1136/bmj.h6127).

Our plan is to shine a light on the current state of affairs and then gradually raise the bar. Working with other journals, research funders, and ethics committees, we hope that at some time in the future only research in which patients have been fully involved will be considered acceptable.

In its instructions to authors, The BMJ includes a section Reporting patients’ involvement in research, which states:

As part of its patient partnership strategy, The BMJ is encouraging active patient involvement in setting the research agenda.

We appreciate that not all authors of research papers will have done this, and we will still consider your paper if you did not involve patients at an early stage. We do, however, request that all authors provide a statement in the methods section under the subheading Patient involvement.

This should provide a brief response to the following questions:

How was the development of the research question and outcome measures informed by patients’ priorities, experience, and preferences?

How did you involve patients in the design of this study?

Were patients involved in the recruitment to and conduct of the study?

How will the results be disseminated to study participants?

For randomised controlled trials, was the burden of the intervention assessed by patients themselves?

Patient advisers should also be thanked in the contributorship statement/acknowledgements.

If patients were not involved please state this.

If this information is not in the submitted manuscript we will ask you to provide it during the peer review process.

Please also note that The BMJ now sends randomised controlled trials and other relevant studies for peer review by patients.

Recent events suggest that these instructions should be amended with the following question:

How were patients involved in the interpretation of results?

The instructions to authors should also elaborate that the intent is to require a description of how results were shared with patients before publication and dissemination to the news media. This process should be interactive, with the possibility of corrective feedback, rather than a simple presentation of the results to the patients without opportunity for comment or for suggesting qualification of the interpretations that will be made. This process should be described in the article.

Materials offered by The BMJ in support of its initiative include an editorial, Patient Partnership, which explains:

The strategy brings landmark changes to The BMJ’s internal processes, and seeks to place the journal at the forefront of the international debate on the science, art, and implementation of meaningful, productive partnership with patients. It was “co –produced” with the members of our new international patient advisory panel, which was set up in January 2014. It’s members continue to inform our thinking and help us with implementation of our strategy.

For its efforts, The BMJ was the first medical journal to receive the “Patients Included” certificate from Lucien Engelen’s Radboud REshape Academy. For his part, Lucien had previously announced:

I will ‘NO-SHOW’ at healthcare conferences that do not add patients TO or IN their programme or invite them to be IN the audience. Also I will no longer give lectures/keynotes at ‘NO-SHOW’ conferences.

But strong words need an action plan to become more than mere words. Although laudable exceptions can be noted, they are few and far between.

In Beyond rhetoric: we need a strategy for patient involvement in the health service, NHS user Sarah Thornton has called the UK government to task for being heavy on the hyperbole of empowering patients but lacking a robust strategy for implementing it. The same could be said for the floundering effort of The BMJ to support patient empowerment in research.

So, should patients just remain patient, keep signing up for clinical trials, and hope that funders and researchers eventually become more patient-oriented in their decisions about grants and research?

Recent events suggest that is unwise.

The BMJ patient-oriented initiative versus the PACE investigators’ refusal to share data and the vilification of patients who object to their interpretation of the data

As previously detailed here, the PACE investigators have steadfastly refused to provide the data for independent evaluation of their claims. In doing so, they are defying numerous published standards from governmental and funding agencies that dictate sharing of data. Ironically, in justifying this refusal, the investigators cite possible repercussions of releasing the data for their ability to conduct future research.

Fortunately, in a decision against the PACE investigators, the UK Information Commissioner’s Office (ICO) rejected this argument because

He is also not convinced that there is sufficient evidence for him to determine that disclosure would be likely to deter significant numbers of other potential participants from volunteering to take part in future studies so as to affect the University’s ability to undertake such research. As a result, the Commissioner is reluctant to accept that disclosure of the withheld information would be likely to have an adverse effect on the University’s future ability to attract necessary funding and to carry out research in this area, with a consequent effect on its reputation and ability to recruit staff and students.

But the PACE investigators have appealed this decision and continue to withhold their data. Moreover, in their initial refusal to share the data, they characterized patients who objected to the possible harm of their interpretations as a small vocal minority.

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Physician Charles Shepherd, himself a sufferer of myalgic encephalomyelitis (ME), notes:

  • Over 10,000 people signed a petition calling for claims of the PACE investigators relating to so-called recovery to be retracted.
  • In a survey of 1,428 people with ME, 73 per cent reported that CBT had no effect on symptoms while 74 per cent reported that GET had made their condition worse.

The BMJ’s position on data sharing

A May 15, 2015 editorial spelled out a new policy at The BMJ concerning data sharing, The BMJ requires data sharing on request for all trials:

Heeding calls from the Institute of Medicine, WHO, and the Nordic Trial Alliance, we are extending our policy

The movement to make data from clinical trials widely accessible has achieved enormous success, and it is now time for medical journals to play their part. From 1 July The BMJ will extend its requirements for data sharing to apply to all submitted clinical trials, not just those that test drugs or devices.1 The data transparency revolution is gathering pace.2 Last month, the World Health Organization (WHO) and the Nordic Trial Alliance released important declarations about clinical trial transparency.3 4

Note that The BMJ was applying the data sharing requirement to all trials, not just drug and medical device trials.

But The BMJ was simply following the lead of the family of PLOS journals, which made an earlier, broader, and simpler commitment to making data from clinical trials available to others.

The PLOS journals’ policy on data sharing

On December 12, 2013, the PLOS journals scooped other major publishers with:

PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception.

When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS’s policy. The data availability statement will be published with the article if accepted.

Refusal to share data and related metadata and methods in accordance with this policy will be grounds for rejection. PLOS journal editors encourage researchers to contact them if they encounter difficulties in obtaining data from articles published in PLOS journals. If restrictions on access to data come to light after publication, we reserve the right to post a correction, to contact the authors’ institutions and funders, or in extreme cases to retract the publication

This requirement took effect on March 1, 2014. However, one of the most stringent data sharing policies in the industry was already in effect:

Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.

Even the earlier requirement for publication in PLOS journals would have forestalled the delays, struggles, and complicated quasi-legal maneuvering that characterized the PACE investigators’ refusal to release their data.

Why medically ill people agree to be in clinical research

Patients are not obligated to participate in research, but should freely choose whether to participate based on a weighing of the benefits and risks. Consent to treatment in clinical research needs to be voluntary and fully informed.

Medically ill patients often cannot expect direct personal benefit from participating in a research trial. This is particularly true when trials offer a treatment that they want and that is not otherwise available, but with the risk of being randomized to poorly defined and inadequate routine care. Their needs continue to be neglected, but now with the added burden of multiple and sometimes intrusive assessments. This is also the case with descriptive observational research and particularly phase 1 clinical studies, which provide no direct benefit to participating patients, only the prospect of improving the care of future patients.

In recognition that many research projects do not directly benefit individual patients, consent forms identify possible benefits to other current and future patients and to society at large.

Protecting the rights of participants in research

The World Medical Association (WMA) Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects spells out a set of principles protecting the rights of human subjects. It includes:

In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information.

Can patients pick up the challenge of realizing the promise of The BMJ editorial, Research Is the Future, Get Involved?

One patient to whom I showed an earlier draft objected that this is just another burden being thrust on medical patients, who already have their condition and difficult treatment decisions with which to contend. She pointed out that patient empowerment strategies so often end up leaving patients with responsibilities they cannot shoulder and that the medical system should have met for them.

I agree that not every patient can take up this burden of promoting both more patient involvement in research and data sharing, but groups of patients can. And when individual patients are willing to take on the sacrifice of insisting on these conditions for their consent, they should be recognized and supported by others. This is not a matter only for patients with particular illnesses or members of patient organizations organized around a particular illness. Rather, it is a contribution to the well-being of society that should be applauded and supported across the artificial boundaries drawn around particular conditions, race, or class.

The mere possibility that patients will refuse to participate in research that lacks plans for patient involvement or data sharing can have a powerful effect. It is difficult enough for researchers to accrue sufficient numbers of patients for their studies. If researchers risk recruitment problems because they do not adequately involve patients, those serious about getting their research done will be proactive in redesigning their research strategies and reflecting this in their consent forms.

Patients are looking after the broader society in participating in medical research. However, if researchers do not take steps to ensure that society gets the greatest possible benefit, patients can just say no, we won’t consent to participation.

Acknowledgments: I benefited from discussions with numerous patients and some professionals in writing and revising this blog. Because some of the patients desired anonymity, I will simply give credit to the group. However, I am responsible for any excesses or inaccuracies that may have escaped their scrutiny.

 

Why the scientific community needs the PACE trial data to be released

University and clinical trial investigators must release data to a citizen-scientist patient, according to a landmark decision in the UK. But the decision could still be overturned if the university and investigators appeal. The scientific community needs the decision to be upheld. I’ll argue that it’s unwise for any appeal to be made. The reasons for withholding the data in the first place were archaic. Overturning the decision would set a bad precedent and would remove another tooth from almost toothless requirements for data sharing.

We didn’t need Francis Collins, Director of the National Institutes of Health, to tell us what we already knew: the scientific and biomedical literature is untrustworthy.

And there is the new report from the UK Academy of Medical Sciences, Reproducibility and reliability of biomedical research: improving research practice.

There has been a growing unease about the reproducibility of much biomedical research, with failures to replicate findings noted in high-profile scientific journals, as well as in the general and scientific media. Lack of reproducibility hinders scientific progress and translation, and threatens the reputation of biomedical science.

Among the report’s recommendations:

  • Journals mandating that the data underlying findings are made available in a timely manner. This is already required by certain publishers such as the Public Library of Science (PLOS) and it was agreed by many participants that it should become more common practice.
  • Funders requiring that data be released in a timely fashion. Many funding agencies require that data generated with their funding be made available to the scientific community in a timely and responsible manner

A consensus has been reached: the crisis in the trustworthiness of science can be overcome only if scientific data are routinely available for reanalysis. Independent replication of socially significant findings is often unfeasible, and it is unnecessary if original data are fully available for inspection.

Numerous governmental funding agencies and regulatory bodies are endorsing routine data sharing.

The UK Medical Research Council (MRC) 2011 policy on data sharing and preservation has endorsed principles laid out by the Research Councils UK, including:

Publicly funded research data are a public good, produced in the public interest, which should be made openly available with as few restrictions as possible in a timely and responsible manner.

To enable research data to be discoverable and effectively re-used by others, sufficient metadata should be recorded and made openly available to enable other researchers to understand the research and re-use potential of the data. Published results should always include information on how to access the supporting data.

The Wellcome Trust Policy On Data Management and Sharing opens with

The Wellcome Trust is committed to ensuring that the outputs of the research it funds, including research data, are managed and used in ways that maximise public benefit. Making research data widely available to the research community in a timely and responsible manner ensures that these data can be verified, built upon and used to advance knowledge and its application to generate improvements in health.

The Cochrane Collaboration has weighed in that there should be ready access to all clinical trial data:

Summary results for all protocol-specified outcomes, with analyses based on all participants, to become publicly available free of charge and in easily accessible electronic formats within 12 months after completion of planned collection of trial data;

Raw, anonymised, individual participant data to be made available free of charge; with appropriate safeguards to ensure ethical and scientific integrity and standards, and to protect participant privacy (for example through a central repository, and accompanied by suitably detailed explanation).

Many similar statements can be found on the web. I’m unaware of credible counterarguments gaining wide acceptance.

Yet endorsements of routine sharing of data are only a promissory reform and depend on enforcement that has been spotty at best. Those of us who request data from previously published clinical trials quickly realize that requirements for sharing data have no teeth. In light of that, scientists need to watch closely whether a landmark decision concerning sharing of data from a publicly funded trial is appealed and overturned.

The Decision requiring release of the PACE data

The UK’s Information Commissioner’s Office (ICO) ordered Queen Mary University of London (QMUL) on October 27, 2015 to release anonymized data from the PACE chronic fatigue syndrome trial to an unnamed complainant. QMUL has 28 days to appeal.

Even if scientists don’t know enough to care about Chronic Fatigue Syndrome/Myalgic Encephalomyelitis, they should be concerned about the reasons that were given in a previous refusal to release the data.

I took a critical look at the long-term follow-up results for the PACE trial in a previous Mind the Brain blog post and found fatal flaws in the authors’ self-congratulatory interpretation of results. Despite the authors’ claims to the contrary and their extraordinary efforts to encourage patients to report that the intervention was helpful, there were simply no differences between groups at follow-up.

Background on the request for release of PACE data

  • A complainant requested release of specific PACE data from QMUL under the Freedom of Information Act.
  • QMUL refused the request.
  • The complainant requested an internal review but QMUL maintained its decision to withhold the data.
  • The complainant contacted the ICO with concerns about how the request had been handled.
  • On October 27, 2015, the ICO sided with the complainant and ordered the release of the data.

A report outlines Queen Mary’s arguments for refusing to release the data and the Commissioner’s justification for siding with the patient requesting the data be released.

Reasons the request for release of the data was initially refused

The QMUL PACE investigators claimed:

  • They were entitled to withhold data prior to publication of planned papers.
  • They were exempt from having to share the data because it contained sensitive medical information from which it was possible to identify the trial participants.
  • Release of the data might harm their ability to recruit patients for research studies in the future.

The QMUL PACE researchers specifically raised concerns about a motivated intruder being able to re-identify participants:

In relation to a motivated intruder being able to facilitate re-identification of participants, the University argued that:

“The PACE trial has been subject to extreme scrutiny and opponents have been against it for several years. There has been a concerted effort by a vocal minority whose views as to the causes and treatment of CFS/ME do not comport with the PACE trial and who, it is QMUL’s belief, are trying to discredit the trial. Indeed, as noted by the editor of the Lancet, after the 2011 paper’s publication, the nature of this comprised not a ‘scientific debate’ but an “orchestrated response trying to undermine the credibility of the study from patient groups [and]… also the credibility of the investigators and that’s what I think is one of the other alarming aspects of this. This isn’t a purely scientific debate; this is going to the heart of the integrity of the scientists who conducted this study.”

Bizarre. This is obviously a talented, masked, motivated intruder. Do they have evidence that Magneto is at it again? Mostly he now works with the good guys, as seen in the help he gave Neurocritic and me.

Let’s think about this novel argument. I checked with University of Pennsylvania bioethicist Jon Merz, an expert who has worked internationally to train researchers and establish committees for the protection of human subjects. His opinion was clear:

The litany of excuses – not reasons – offered by the researchers and Queen Mary University is a bald attempt to avoid transparency and accountability, hiding behind legal walls instead of meeting their critics on a level playing field.  They should be willing to provide the data for independent analyses in pursuit of the truth.  They of course could do this willingly, in a way that would let them contractually ensure that data would be protected and that no attempts to identify individual subjects would be made (and it is completely unclear why anyone would care to undertake such an effort), or they can lose this case and essentially lose any hope for controlling distribution.

The ‘orchestrated response to undermine the credibility of the study’ claimed by QMUL and the PACE investigators, as well as the issue raised of the “integrity of the scientists who conducted the study,” sounds all too familiar. It’s the kind of defense heard from scientists under the scrutiny of the likes of Open Science Collaborations, as in psychology and cancer. Reactionaries resisting post-publication peer review say we must be worried about harassment from

“replication police” “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.”

Far-fetched? Compare this to the quote QMUL drew from the April 18, 2011 Australian Broadcasting Corporation Radio National interview with Lancet editor Richard Horton and PACE investigator Michael Sharpe, in which Horton condemned:

A fairly small, but highly organised, very vocal and very damaging group of individuals who have…hijacked this agenda and distorted the debate…

‘Distorted the debate’? Was someone so impertinent as to challenge the investigators’ claims about their findings? Sounds like Pubpeer. We have seen what they can do.

Alas, all scientific findings should be scrutinized, and all data relevant to the claims that are made should be available for reanalysis. Investigators just need to live with the possibility that their claims will be proven wrong or exaggerated. This is all the more true for claims that have substantial impact on public policy and clinical services, and ultimately, patient welfare.

[It is fascinating to note that Richard Horton spoke at the meeting that produced the UK Academy of Medical Sciences report to which I provided a link above. Horton covered the meeting in a Lancet editorial in which he amplified its sentiment: “The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world.” His editorial echoed a number of the recommendations of the meeting report, but curiously omitted any mention of data sharing.]

Fortunately, the ICO has rejected the arguments of QMUL and the PACE investigators. The Commissioner found that QMUL and the PACE investigators incorrectly interpreted regulations in withholding the data and should provide the complainant with the data or risk being viewed as in contempt of court.

The 30-page decision is a fascinating read, but here’s an accurate summary from elsewhere:

In his decision, the Commissioner found that QMUL failed to provide any plausible mechanism through which patients could be identified, even in the case of a “motivated intruder.” He was also not convinced that there is sufficient evidence to determine that releasing the data would result in the mass exodus of a significant number of the trial’s 640 participants nor that it would deter significant numbers of participants from volunteering to take part in future research.

Requirements for data sharing in the United States have no teeth, and the situation would be worsened by reversal of the ICO decision

Like the UK, the United States supposedly has requirements for sharing of data from publicly funded trials. But good luck getting support from the regulatory agencies associated with funding sources when requesting data. Here’s my recent story, still unfolding – or maybe, sadly, over, at least for now.

For a long time I’ve fought my own battles about researchers making unwarranted claims that psychotherapy extends the lives of cancer patients. Research simply does not support the claim. The belief that psychological factors have such influence on the course and outcome of cancer sets up cancer patients to be blamed, and to blame themselves, when they don’t overcome their disease by some sort of mind control. Our systematic review concluded:

“No randomized trial designed with survival as a primary endpoint and in which psychotherapy was not confounded with medical care has yielded a positive effect.”

Investigators who conducted some of the most ambitious, well-designed trials testing the efficacy of psychological interventions on cancer survival but obtained null results echoed our assessment. Their commentaries were entitled “Letting Go of Hope” and “Time to Move on.”

I provided an extensive review of the literature concerning whether psychotherapy and support groups increased survival time in an earlier blog post. Hasn’t the issue of mind-over-cancer been laid to rest? I was recently contacted by a science journalist interested in writing an article about this controversy. After a long discussion, he concluded that the issue was settled — no effect had been found — and he could not succeed in pitching his idea for an article to a quality magazine.

But as detailed here, one investigator has persisted in claims that a combination of relaxation exercises, stress reduction, and nutritional counseling increases survival time. My colleagues and I gave this 2008 study a careful look. We ran chi-square analyses of basic data presented in the paper’s tables. But none of our analyses of the effect of group assignment on mortality or disease recurrence was significant. The investigators’ claim of an effect depended on dubious multivariate analyses with covariates that could not be independently evaluated without a look at the data.
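
For readers who want to see what such a check involves, here is a minimal sketch with made-up counts (not the trial’s actual numbers):

```python
# A minimal sketch of a chi-square test of group assignment against mortality.
# The counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# Rows: intervention vs. control arm; columns: died vs. survived.
table = [[19, 95],
         [25, 87]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```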

The investigator group initially attempted to block publication of a letter to the editor, citing a policy of the journal Cancer that critical letters could not be published unless the investigators agreed to respond, and they were refusing to respond. We appealed, and the journal changed its policy and allowed us additional length for our letter.

We then requested from the investigator’s University Research Integrity Officer the specific data needed to replicate the multivariate analyses in which the investigators claimed an effect on survival. The request was denied:

The data, if disclosed, would reveal pending research ideas and techniques. Consequently, the release of such information would put those using such data for research purposes in a substantial competitive disadvantage as competitors and researchers would have access to the unpublished intellectual property of the University and its faculty and students.

Recall that we were requesting in 2014 specific data needed to evaluate analyses published in 2008.

I checked with statistician Andrew Gelman whether my objections to the multivariate analyses were well-founded and he agreed they were.

Since then, another eminent statistician, Helena Kraemer, has published an incisive critique of reliance on multivariate analyses in randomized controlled trials when simple bivariate analyses do not support the efficacy of interventions. She labeled adjustment with covariates a “source of false-positive findings.”
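
To see how this can happen, here is a minimal simulation (my illustration, not Kraemer’s analysis): there is no true treatment effect, but an analyst who tries several single-covariate adjustments and reports the most favorable one will exceed the nominal 5% false-positive rate.

```python
# A minimal sketch of covariate adjustment as a source of false positives:
# no true treatment effect, but the analyst picks whichever single-covariate
# adjustment yields the smallest p-value for the treatment term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_covs, n_sims = 120, 10, 1000
hits = 0

for _ in range(n_sims):
    arm = rng.integers(0, 2, n)                      # randomized arm
    covs = rng.normal(size=(n, n_covs))              # baseline covariates
    y = covs.sum(axis=1) * 0.5 + rng.normal(size=n)  # outcome; arm plays no role
    best_p = 1.0
    for j in range(n_covs):
        X = sm.add_constant(np.column_stack([arm, covs[:, j]]))
        p = sm.OLS(y, X).fit().pvalues[1]            # p-value for the arm term
        best_p = min(best_p, p)
    if best_p < 0.05:
        hits += 1

# Each individual test is valid, but choosing the "best" adjustment pushes
# the false-positive rate above the nominal 5%.
print(f"False-positive rate with cherry-picked covariates: {hits / n_sims:.2f}")
```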

We appealed to the US Health and Human Services Office of Research Integrity (ORI), but they indicated that they had no ability to enforce data sharing.

Meanwhile, the principal investigator who claimed an effect on survival accompanied National Cancer Institute program officers to conferences in Europe and the United States where she promoted her intervention as effective. I complained to Robert Croyle, Director of the NCI Division of Cancer Control and Population Sciences, who has twice been one of the program officers co-presenting with her. Ironically, in his capacity as director he is supposedly facilitating data sharing for the division. Professionals were being misled to believe that this intervention would extend the lives of cancer patients, and the claim seemingly had the endorsement of NCI.

I told Robert Croyle that if only the data for the specific analyses were released, it could be demonstrated that the claims were false. Croyle did not disagree, but indicated that there was no way to compel release of the data.

The National Cancer Institute recently offered to pay the conference fees for the International Psycho-Oncology Congress in Washington, DC, for any professionals willing to sign up for free training in this intervention.

I don’t think I could get any qualified professional, including Croyle, to debate me publicly on whether psychotherapy increases the survival of cancer patients. Yet the promotion of the idea persists because it is consistent with the power of mind over body and disease, an attractive talking point.

I have not given up in my efforts to get the data to demonstrate that this trial did not show that psychotherapy extends the survival of cancer patients, but I am blocked by the unwillingness of authorities to enforce data sharing rules that they espouse.

There are obvious parallels between the politics behind the persistence of the claim in the US that psychotherapy increases survival time for cancer patients and those in the UK about cognitive behavior therapy being sufficient treatment for schizophrenia in the absence of medication or producing recovery from the debilitating medical condition, Chronic Fatigue Syndrome/Myalgic Encephalomyelitis. There are also parallels in investigators making controversial claims based on multivariate analyses while not allowing access to the data needed to independently evaluate those analyses. In both cases, patient well-being suffers.

If the ICO decision ordering release of the PACE trial data is upheld in the UK, it will put pressure on the US NIH to stop hypocritically endorsing data sharing while rewarding investigators whose credibility depends on not sharing their data.

As seen in a PLOS One study, unwillingness to share data in response to formal requests is

associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Why the PACE investigators should not appeal

In the past, PACE investigators have been quite dismissive of criticism, appearing to have assumed that being afflicted with Chronic Fatigue Syndrome/Myalgic Encephalomyelitis precludes a critic being taken seriously, even when the criticism is otherwise valid. However, with publication of the long-term follow-up data in Lancet Psychiatry, they are now contending with accomplished academics whose criticisms cannot be so easily brushed aside. Yes, the credibility of the investigators’ interpretations of their data is being challenged. And even if they do not believe they need to be responsive to patients, they need to be responsive to colleagues. Releasing the data is the only acceptable response, and not doing so risks damage to their reputations.

QMUL, Professors White and Sharpe, let the People’s data go.