School-based Mindfulness-Based Stress Reduction (MBSR) program fails to deliver positive results

No positive effects found for Jon Kabat-Zinn’s Mindfulness-Based Stress Reduction program with middle and high school students. Evidence of deterioration was found in some subgroup analyses.

mind the brain logo

We should be cautious about interpreting negative effects that are confined to subgroup analyses; they may well be due to chance. But we should be concerned about the lack of positive findings across measures in the primary analyses. MBSR (a mindfulness training product trademarked and controlled by Jon Kabat-Zinn) and other mindfulness programs have been heavily promoted as having wondrous benefits and have been mandated in many school settings.

The study [with link to the PDF]

Johnson C, Burke C, Brinkman S, Wade T. Effectiveness of a school-based mindfulness program for transdiagnostic prevention in young adolescents. Behaviour Research and Therapy. 2016 Jun 30;81:1-11.

Abstract

Anxiety, depression and eating disorders show peak emergence during adolescence and share common risk factors. School-based prevention programs provide a unique opportunity to access a broad spectrum of the population during a key developmental window, but to date, no program targets all three conditions concurrently. Mindfulness has shown promising early results across each of these psychopathologies in a small number of controlled trials in schools, and therefore this study investigated its use in a randomised controlled design targeting anxiety, depression and eating disorder risk factors together for the first time. Students (M age 13.63; SD = .43) from a broad band of socioeconomic demographics received the eight lesson, once weekly .b (“dot be”) mindfulness in schools curriculum (N = 132) or normal lessons (N = 176). Anxiety, depression, weight/shape concerns and wellbeing were the primary outcome factors. Although acceptability measures were high, no significant improvements were found on any outcome at post-intervention or 3-month follow-up. Adjusted mean differences between groups at post-intervention were .03 (95% CI: -.06 to .11) for depression, .01 (-.07 to .09) for anxiety, .02 (-.05 to .08) for weight/shape concerns, and .06 (-.08 to .21) for wellbeing. Anxiety was higher in the mindfulness than the control group at follow-up for males, and those of both genders with low baseline levels of weight/shape concerns or depression. Factors that may be important to address for effective dissemination of mindfulness-based interventions in schools are discussed. Further research is required to identify active ingredients and optimal dose in mindfulness-based interventions in school settings.
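The null result can be read directly off the confidence intervals quoted in the abstract: every adjusted mean difference has a 95% CI that spans zero. A minimal sketch (values transcribed from the abstract; upper bounds taken as positive, since a 95% CI must contain its point estimate, and the printed minus signs appear to be an encoding artifact):

```python
# Adjusted mean differences (intervention minus control) at post-intervention,
# transcribed from the Johnson et al. abstract. Upper bounds are assumed
# positive so that each interval contains its point estimate.
outcomes = {
    "depression":            (0.03, -0.06, 0.11),
    "anxiety":               (0.01, -0.07, 0.09),
    "weight/shape concerns": (0.02, -0.05, 0.08),
    "wellbeing":             (0.06, -0.08, 0.21),
}

for name, (diff, lo, hi) in outcomes.items():
    # A two-sided test at p < .05 corresponds to the 95% CI excluding zero.
    significant = not (lo <= 0.0 <= hi)
    print(f"{name}: diff={diff:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}], "
          f"significant={significant}")
```

Every interval includes zero, so none of the four primary outcomes reaches conventional significance.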

The discussion noted:

The design of this study addresses several shortcomings identified in the literature (Britton et al., 2014; Burke, 2010; Felver et al., 2015; Meiklejohn et al., 2012; Tan, 2015; Waters et al., 2014). First, it was a multi-site, randomised controlled design with a moderately large sample size based on a priori power calculations. Second, it included follow-up (three months). Third, it sought to replicate an existing mindfulness-based intervention for youth. Fourth, socioeconomic status was not only reported but a broad range of socioeconomic bands included, although it was unfortunate that poor opt-in consent rates resulted in high data wastage in the lower range schools. Use of the same instructor for all classes in the intervention arm represents a strength (consistency) and a limitation (generalisability of findings).

Coverage in Scientific American

Mindfulness Training for Teens Fails Important Test

A large trial in schools showed no evidence of benefits, and hints it could even cause problems

The fact that this carefully-controlled investigation showed no benefits of mindfulness for any measure, and furthermore indicated an adverse effect for some participants, indicates that mindfulness training is not a universal solution for addressing anxiety or depression in teens, nor does it qualify as a replacement for more traditional psychotherapy or psychopharmacology, at least not as implemented in this school-based paradigm.

Preorders are being accepted for e-books providing skeptical looks at mindfulness and positive psychology, and arming citizen scientists with critical thinking skills. Right now there is a special offer of free access to a Mindfulness Master Class. But hurry, it won’t last.

I will also be offering scientific writing courses on the web as I have been doing face-to-face for almost a decade. I want to give researchers the tools to get into the journals where their work will get the attention it deserves.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.
 

“It’s certainly not bareknuckle:” Comments to a journalist about a critique of mindfulness research

We can’t assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results.


I recently had a Skype interview with science journalist Peter Hess concerning an article in Psychological Science.

Peter was exceptionally well prepared and had a definite point of view, but was open to what I said. In the end he seemed to be persuaded by me on a number of points. The resulting article in Inverse faithfully conveyed my perspective and juxtaposed quotes from me with those from an author of the Psych Science piece in a kind of debate.

My point of view

When evaluating an article about mindfulness in a peer-reviewed journal, we need to take into account that authors may not necessarily be striving to do the best science, but to maximally benefit their particular brand of mindfulness, their products, or the settings in which they operate. Many studies of mindfulness are little more than infomercials, weak research intended only to get mindfulness promoters’ advertisements of themselves into print or to allow the labeling of claims as “peer-reviewed”. Caveat lector.

We cannot assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results. Rather, they may simply be trying to get the strongest possible claims through peer review, ignoring best research practices and best publication practices.

Psychologists Express Growing Concern With Mindfulness Meditation

“It’s not bare-knuckle, that’s for sure.”

There was much from the author of the Psych Science article with which  I would agree:

“In my opinion, there are far too many organizations, companies, and therapists moving forward with the implementation of ‘mindfulness-based’ treatments, apps, et cetera before the research can actually tell us whether it actually works, and what the risk-reward ratio is,” corresponding author and University of Melbourne research fellow Nicholas Van Dam, Ph.D. tells Inverse.

Bravo! And

“People are spending a lot of money and time learning to meditate, listening to guest speakers about corporate integration of mindfulness, and watching TED talks about how mindfulness is going to supercharge their brain and help them live longer. Best case scenario, some of the advertising is true. Worst case scenario: very little to none of the advertising is true and people may actually get hurt (e.g., experience serious adverse effects).”

But there were some statements that renewed the discomfort and disappointment I experienced when I read the original article in Psychological Science:

 “I think the biggest concern among my co-authors and I is that people will give up on mindfulness and/or meditation because they try it and it doesn’t work as promised,” says Van Dam.

“There may really be something to mindfulness, but it will be hard for us to find out if everyone gives up before we’ve even started to explore its best potential uses.”

So, how long before we “give up” on thousands of studies pouring out of an industry? In the meantime, should consumers act on what seem to be extravagant claims?

The Inverse article segued into some quotes from me after delivering another statement from the author with which I could agree:

The authors of the study make their attitudes clear when it comes to the current state of the mindfulness industry: “Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed,” they write. And while this comes off as unequivocal, some think they don’t go far enough in calling out specific instances of quackery.

“It’s not bare-knuckle, that’s for sure. I’m sure it got watered down in the review process,” James Coyne, Ph.D., an outspoken psychologist who’s extensively criticized the mindfulness industry, tells Inverse.

Coyne agrees with the conceptual issues outlined in the paper, specifically the fact that many mindfulness therapies are based on science that doesn’t really prove their efficacy, as well as the fact that researchers with copyrights on mindfulness therapies have financial conflicts of interest that could influence their research. But he thinks the authors are too concerned with tone policing.

“I do appreciate that they acknowledged other views, but they kept out anybody who would have challenged their perspective,” he says.

Regarding Coyne’s criticism about calling out individuals, Van Dam says the authors avoided doing that so as not to alienate people and stifle dialogue.

“I honestly don’t think that my providing a list of ‘quacks’ would stop people from listening to them,” says Van Dam. “Moreover, I suspect my doing so would damage the possibility of having a real conversation with them and the people that have been charmed by them.” If you need any evidence of this, look at David “Avocado” Wolfe, whose notoriety as a quack seems to make him even more popular as a victim of “the establishment.” So yes, this paper may not go so far as some would like, but it is a first step toward drawing attention to the often flawed science underlying mindfulness therapies.

To whom is the dialogue directed about unwarranted claims from the mindfulness industry?

As one of the authors of an article claiming to be an authoritative review from a group of psychologists with diverse expertise, Van Dam says he is speaking to consumers. Why won’t he and his co-authors provide citations and name names so that readers can evaluate for themselves what they are being told? Is the risk of reputational damage and embarrassment to the psychologists so great as to cause Van Dam to protect them rather than protecting consumers from the exaggerated and even fraudulent claims of psychologists hawking their products branded as ‘peer-reviewed psychological and brain science’?

I use the term ‘quack’ sparingly outside of discussing unproven and unlikely-to-be-proven products supposed to promote physical health and well-being or to prevent or cure disease and distress.

I think Harvard psychologist Ellen Langer deserves the term “quack” for her selling of expensive trips to spas in Mexico to women with advanced cancer so that they can change their mind set to reverse the course of their disease. Strong evidence, please! Given that this self-proclaimed mother of mindfulness gets her claims promoted through the Association for Psychological Science website, I think it particularly appropriate for Van Dam and his coauthors to name her in their publication in an APS journal. Were they censored or only censoring themselves?

Let’s put aside psychologists who can be readily named as quacks. How about Van Dam and co-authors naming names of psychologists claiming to alter the brains and immune systems of cancer patients with mindfulness practices so that they improve their physical health and fight cancer, not just cope better with a life-altering disease?

I simply don’t buy Van Dam’s suggestion that to name names promotes quackery any more than I believe exposing anti-vaxxers promotes the anti-vaccine cause.

Is Van Dam only engaged in a polite discussion with fellow psychologists that needs to be strictly tone-policed to avoid offense or is he trying to reach, educate, and protect consumers as citizen scientists looking after their health and well-being? Maybe that is where we parted ways.

Reflections on my tour of the Soteria Project at St Hedwig Hospital, Berlin

A fabulous, enlightened experiment in Berlin with humane treatment of patients suffering severe mental disorder that we cannot reproduce in the United States.

 


I visited the Soteria project at St Hedwig Hospital, Berlin at the invitation of Professor Andreas Heinz, Director and Chair of the Department of Psychiatry and Psychotherapy at the Charité— Universitätsmedizin Berlin.

I was actually coming to St Hedwig Hospital, Berlin to give a talk on scientific writing, and was surprised by an offer of a tour of their Soteria Project.

I came away with great respect for a wonderful experiment in the treatment of psychosis that must be protected.

I was also saddened to realize that such treatment could not conceivably be offered in the United States, even for patients with families who could pay large expenses out of pocket.

In Germany, financial arrangements allow months for the stabilization of acutely psychotic patients. The question is how best to use these resources.

 

In contrast, newly admitted patients in the United States are generally allowed stays of only 48 to 72 hours at most to stabilize. Inpatient psychiatric beds are in short supply, and often unavailable even to those who can afford to pay out of pocket.

The largest inpatient psychiatric facility in the United States is the Los Angeles County jail, where patients are thrown in with criminal populations or forced into anti-suicide smocks and isolated. Access to mental care in the jail is highly restricted.

In the United States, the challenge is to get minimal resources to a vulnerable, severely disturbed population. Efforts to do so must compete with the diversion of mental health funds to populations much less in need but more amenable to outpatient psychotherapy.

It takes a mass killing to activate calls for better psychiatric care for the severely disturbed, on the false promise that better and more accessible care will measurably reduce mass killings. Of course, this is all a distraction from the need to restrict the firearms used in mass killings.

Professor Heinz and I became friends when I critiqued his study of open versus locked inpatient psychiatric wards, Why Lancet Psychiatry study didn’t show locked inpatient wards ineffective in reducing suicide. We can still agree to disagree about the interpretation of complex observational/administrative data, but we came to appreciate differences in our sociocultural perspectives.

In my blog I was actually taking aim at Mental Elf’s pandering to the anti-psychiatry crowd with the goofy claim of the lack of “any compelling evidence that locking people up actually increases safety.” Sometimes vulnerable psychotic and suicidal persons need to be protected from themselves.

Furthermore, experimentation with unlocked wards frequently comes to an end with the suicide of a single absconding patient.

In Germany, better staffing and time to develop better relationships with patients allow much more respect for patient autonomy and self-responsibility. But open wards are always vulnerable to these adverse events.

The original Soteria, Palo Alto Project

I came to St Hedwig with negative feelings about the original Soteria Project. I was Director of Research at MRI Palo Alto in the 1980s when it was housed there. I came away thinking its strong anti-psychiatry attitude was disastrous and led to much harm when it got disseminated.

Loren Mosher and Alma Menn were determined to demonstrate that antipsychotic medication was unnecessary in treating psychotic patients.

Frankly, Mosher and Menn were so committed to their ideological position that they distorted the presentation of their data. They misrepresented comparisons between disparate community mental health and Soteria samples as randomized trials. They relied on huge selection bias and unreliable diagnoses that lumped acutely manic patients and personality disorders together with patients with schizophrenia. They tortured their data with a variety of p-hacking techniques and still didn’t come up with much.

After Soteria Palo Alto closed, an effort to get an NIMH grant for follow-up failed because the initial presentations of patients were so badly recorded that no retrospective diagnosis was possible.

Subsequent Soteria projects around the world have had a full range of attitudes towards the role of medication in the treatment of vulnerable and highly disorganized patients.

St Hedwig has an enlightened, evidence-informed approach that of course includes judicious use of antipsychotics. Antipsychotic medication is provided to acutely psychotic patients, but at an appropriate dosage. Patient response is closely monitored, and tapering is attempted when there is improvement. Importantly, decisions about medication prioritize patient well-being, not staff convenience.

The best evidence is that patients who experience extended episodes of unmedicated psychosis face increasingly poor recovery of social and personal functioning. On the other hand, particularly in the treatment of ambiguous acute first episodes, there has to be a lot of monitoring and reconsideration of medication. In understaffed and underresourced American psychiatric settings, there is little monitoring of antipsychotic medication and little effort at tapering. Furthermore, dosages are often excessively high because overmedicated patients are easier for overwhelmed staff to manage.

Unfortunately, the quality of care offered in Berlin is unimaginable in the US even for those who can afford to pay out of pocket.

With Professor Heinz’s permission, here is a refined Google translation of the Project website.

See also  an excellent discussion of the thinking that went into the architecture of Soteria, aimed at maximizing its potential as a therapeutic environment.


Special thanks also to psychiatrists Dr med Felix Bermpohl and Dr med Martin Voss, Oberarzt (senior physician).

SOTERIA

Soteria’s program at the Charité’s Psychiatric University Clinic in the St. Hedwig Hospital is aimed at young people who are in an acute psychotic crisis, who fear the onset of a psychosis, or who still need a professional inpatient environment after a psychotic crisis.

There are 12 treatment places in the Soteria. Since the Soteria operates within the scope of compulsory (catchment-area) care, these places are intended exclusively for people from the districts of Wedding, Mitte, Tiergarten and Moabit.

[note from Prof Heinz: The difficult to translate passage refers to our hospital having a catchment area, from which we have to take every patient who wishes to be admitted and particularly every compulsory admission. We serve one of the poorest areas in Berlin, so we do not do “raisin picking” of easy to treat patients.]

“Soteria” (ancient Greek: healing, well-being, preservation, salvation) denotes a special treatment approach for people in psychotic crises with the so-called “milieutherapy”.

The residential environment, the co-patients, the attitude of the therapists, and the orientation towards normality and “real life” outside the clinic constitute the therapeutic milieu. Patients and staff meet in therapeutic communities on an equal footing and shape the day together, with the involvement of the social environment.

The psychosis treatment takes place in the form of actively “being yourself”, if necessary also in continuous 1:1 care in the so-called “soft room”. The healing therapeutic milieu provides protection, calming, and relief of tension, so that psychopharmaceuticals can be used very cautiously. This medication-sparing effect of Soteria treatment is scientifically well documented, among other positive effects. (1)

1) Calton, T. et al. (2008): A Systematic Review of the Soteria Paradigm for the Treatment of People Diagnosed With Schizophrenia. Schizophrenia Bulletin 34(1):181–192.

2) L. Ciompi, H. Hoffmann, M. Broccard (Eds.), Wie wirkt Soteria? [How does Soteria work?] Online edition (2011), Heidelberg: Carl-Auer-System-Verlag.

3) St. Thérèse of Lisieux: nun, mystic, Doctor of the Church. Born 2 January 1873 in Alençon, Normandy, France; died 30 September 1897 in Lisieux, France.

The reports on the original Soteria, Palo Alto project

Mosher LR, Menn AZ, Matthews SM. Soteria: evaluation of a home-based treatment for schizophrenia. Am J Orthopsychiatry. 1975;45:455–467.

Mosher LR. Implications of family studies for the treatment of schizophrenia. Ir Med J. 1976;69:456–463.

Mosher LR, Menn AZ. Soteria: an alternative to hospitalisation for schizophrenia. Curr Psychiatr Ther. 1975;15:287–296.

Mosher LR, Menn AZ. Soteria House: one year outcome data. Psychopharmacol Bull. 1977;13:46–48.

Mosher LR, Menn AZ. Community residential treatment for schizophrenia: two-year follow-up. Hosp Community Psychiatry. 1978;29:715–723.

Mosher LR, Menn AZ. Soteria: an alternative to hospitalisation for schizophrenics. Curr Psychiatr Ther. 1982;21:189–203.

Matthews SM, Roper MT, Mosher LR, Menn AZ. A non-neuroleptic treatment for schizophrenia: analysis of the two-year post-discharge risk of relapse. Schizophr Bull. 1979;5:322–333.

Mosher LR, Vallone R, Menn AZ. The treatment of acute psychosis without neuroleptics: six-week psychopathology outcome data from the Soteria project. Int J Soc Psychiatry. 1995;41:157–173.

Mosher LR. Soteria and other alternatives to acute psychiatric hospitalisation. J Nerv Ment Dis. 1999;187:142–149.

About Professor Heinz

Andreas Heinz is Director and Chair of the Department of Psychiatry and Psychotherapy at the Charité— Universitätsmedizin Berlin.

He is the author of the just released A New Understanding of Mental Disorders: Computational Models for Dimensional Psychiatry, MIT Press, 2017.

 

 

Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.


The press release attached at the bottom of the post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment with a quack explanation delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements from osteopathy, life coaching and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American who has decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves the lost opportunity to obtain less risky treatment, or simply to avoid the inconvenience and burden of a treatment for which there is no scientific basis to expect it would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not indicated in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective. [5] Experts know to reject such results because (a) more rigorous designs are required to evaluate the efficacy of a treatment in order to rule out placebo effects; and (b) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability that the treatment would be received for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll in the trial in order to obtain the treatment free for their children.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement, but those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This serves to stack the deck in any evaluation of the Lightning Process and to inflate differences from the patients who didn’t get into this group.
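The problem with unblinded trials and subjective outcomes can be made concrete with a toy simulation (all numbers hypothetical, for illustration only): even when a treatment does nothing, a modest reporting bias among patients who know they got the hyped intervention manufactures a consistent apparent benefit.

```python
import random

random.seed(1)

def trial(n=100, reporting_bias=0.5):
    """Simulate one unblinded trial of an inert treatment.

    True symptom change is identical in both arms (mean 0, SD 1). Patients
    who know they received the hyped treatment shade their self-reports
    upward by `reporting_bias` points; controls do not. All numbers are
    hypothetical.
    """
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) + reporting_bias for _ in range(n)]
    # Difference in mean self-reported improvement, treated minus control.
    return sum(treated) / n - sum(control) / n

# Average the apparent "effect" over many replications of the trial.
apparent_effect = sum(trial() for _ in range(200)) / 200
print(f"apparent benefit of an inert treatment: {apparent_effect:.2f}")
```

The reported "benefit" converges on the reporting bias itself, which is why objective outcomes and blinding are needed to separate a real treatment effect from expectation and demand characteristics.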

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well. [7] Such coercion can interfere with patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patients and patient family activists who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy  and graded exercise therapy for chronic fatigue syndrome, the patients have been joined by non-patient scientists and clinicians in their concerns.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing

Notes

[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP: Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment obtained with objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat, who provided me with an advance copy of the press release from the Science Media Centre.

Creating illusions of wondrous effects of yoga and meditation on health: A skeptic exposes tricks

The tour of the sausage factory is starting; here’s your brochure telling you what you’ll see.

 

A recent review has received a lot of attention and is being used to claim that mind-body interventions have distinct molecular signatures pointing to potentially dramatic health benefits for those who take up these practices.

What Is the Molecular Signature of Mind–Body Interventions? A Systematic Review of Gene Expression Changes Induced by Meditation and Related Practices.  Frontiers in Immunology. 2017;8.

Few of those tweeting about this review or its press coverage are likely to have read it, or to understand it if they did. Most of the new-agey coverage in social media does nothing more than echo or amplify the message of the review’s press release. Lazy journalists and bloggers can simply pass on direct quotes from the lead author, or even just the press release’s title, ‘Meditation and yoga can ‘reverse’ DNA reactions which cause stress, new study suggests’:

“These activities are leaving what we call a molecular signature in our cells, which reverses the effect that stress or anxiety would have on the body by changing how our genes are expressed.”

And

“Millions of people around the world already enjoy the health benefits of mind-body interventions like yoga or meditation, but what they perhaps don’t realise is that these benefits begin at a molecular level and can change the way our genetic code goes about its business.”

[The authors of this review actually identified some serious shortcomings to the studies they reviewed. I’ll be getting to some excellent points at the end of this post that run quite counter to the hype. But the lead author’s press release emphasized unwarranted positive conclusions about the health benefits of these practices. That is what is most popular in media coverage, especially from those who have stuff to sell.]

Interpretation of the press release and review authors’ claims requires going back to the original studies, which most enthusiasts are unlikely to do. If readers do go back, they will have trouble interpreting some of the deceptive claims that are made.

Yet, a lot is at stake. This review is being used to recommend mind-body interventions for people having or who are at risk of serious health problems. In particular, unfounded claims that yoga and mindfulness can increase the survival of cancer patients are sometimes hinted at, but occasionally made outright.

This blog post is written with the intent of protecting consumers from such false claims and providing tools so they can spot pseudoscience for themselves.

Discussion of the review in the media speaks broadly of alternative and complementary interventions. The coverage is aimed at inspiring confidence in this broad range of treatments and encourages people facing health crises to invest time and money in outright quackery. Seemingly benign recommendations for yoga, tai chi, and mindfulness (after all, what’s the harm?) often become the entry point to more dubious and expensive treatments that substitute for established treatments. Once they are drawn to centers for integrative health care for classes, cancer patients are likely to spend hundreds or even thousands of dollars on other products and services that are unlikely to benefit them. One study reported:

More than 72 oral or topical, nutritional, botanical, fungal and bacterial-based medicines were prescribed to the cohort during their first year of IO care…Costs ranged from $1594/year for early-stage breast cancer to $6200/year for stage 4 breast cancer patients. Of the total amount billed for IO care for 1 year for breast cancer patients, 21% was out-of-pocket.

Coming up, I will take a skeptical look at the six randomized trials that were highlighted by this review.  But in this post, I will provide you with some tools and insights so that you do not have to make such an effort in order to make an informed decision.

Like many of the other studies cited in the review, these randomized trials were quite small and underpowered. But I will focus on the six because they are as good as it gets. Randomized trials are considered a higher form of evidence than simple observational studies or case reports. [It is too bad the authors of the review don’t even highlight which studies are randomized trials. They are lumped with the others as “longitudinal studies.”]

As a group, the six studies do not actually add any credibility to the claims that mind-body interventions – specifically yoga, tai chi, and mindfulness training or retreats – improve health by altering DNA. We can be no more confident with what the trials provide than we would be had they never been done.

I found the task of probing and interpreting the studies quite labor-intensive and ultimately unrewarding.

I had to get past poor reporting of what was actually done in the trials, to which patients, and with what results. My task often involved seeing through cover-ups, with authors exercising considerable flexibility in reporting which measures they actually collected and which analyses they attempted before arriving at the best possible tale of the wondrous effects of these interventions.

Interpreting clinical trials should not be so hard, because they should be honestly and transparently reported, with a registered protocol that is adhered to. These reports of trials were sorely lacking on all counts. The full extent of the problems took some digging to uncover, but some things emerged before I even got to the methods and results.

The introductions of these studies consistently exaggerated the strength of existing evidence for the effects of these interventions on health, even while somehow coming to the conclusion that this particular study was urgently needed and it might even be the “first ever”. The introductions to the six papers typically cross-referenced each other, without giving any indication of how poor quality the evidence was from the other papers. What a mutual admiration society these authors are.

One giveaway is how the introductions  referred to the biggest, most badass, comprehensive and well-done review, that of Goyal and colleagues.

That review clearly states that the evidence for the effects of mindfulness is poor quality because of the lack of comparisons with credible active treatments. The typical randomized trial of mindfulness involves a comparison with no-treatment, a waiting list, or patients remaining in routine care where the target problem is likely to be ignored.  If we depend on the bulk of the existing literature, we cannot rule out the likelihood that any apparent benefits of mindfulness are due to having more positive expectations, attention, and support over simply getting nothing.  Only a handful  of hundreds of trials of mindfulness include appropriate, active treatment comparison/control groups. The results of those studies are not encouraging.

One of the first things I do in probing the introduction of a study claiming health benefits for mindfulness is see how they deal with the Goyal et al review. Did the study cite it, and if so, how accurately? How did the authors deal with its message, which undermines claims of the uniqueness or specificity of any benefits to practicing mindfulness?

For yoga, we cannot yet rule out that its benefits are no greater than those of regular exercising – in groups or alone – with relaxing routines. The literature concerning tai chi is even smaller and poorer quality, but there is the same need to show that practicing tai chi has any benefits over exercising in groups with comparable positive expectations and support.

Even more than mindfulness, yoga and tai chi attract a lot of pseudoscientific mumbo jumbo about integrating Eastern wisdom and Western science. We need to look past that and insist on evidence.

Like their introductions, the discussion sections of these articles are quite prone to exaggerating how strong and consistent the evidence is from existing studies. The discussion sections cherry pick positive findings in the existing literature, sometimes recklessly distorting them. The authors then discuss how their own positively spun findings fit with what is already known, while minimizing or outright neglecting discussion of any of their negative findings. I was not surprised to see one trial of mindfulness for cancer patients obtain no effects on depressive symptoms or perceived stress, but then go on to explain mindfulness might powerfully affect the expression of DNA.

If you want to dig into the details of these studies, the going can get rough and the yield for doing a lot of mental labor is low. For instance, these studies involved drawing blood and analyzing gene expression. Readers will inevitably encounter passages like:

In response to KKM treatment, 68 genes were found to be differentially expressed (19 up-regulated, 49 down-regulated) after adjusting for potentially confounded differences in sex, illness burden, and BMI. Up-regulated genes included immunoglobulin-related transcripts. Down-regulated transcripts included pro-inflammatory cytokines and activation-related immediate-early genes. Transcript origin analyses identified plasmacytoid dendritic cells and B lymphocytes as the primary cellular context of these transcriptional alterations (both p < .001). Promoter-based bioinformatic analysis implicated reduced NF-κB signaling and increased activity of IRF1 in structuring those effects (both p < .05).

Intimidated? Before you defer to the “experts” doing these studies, I will show you some things I noticed in the six studies and how you can debunk the relevance of these studies for promoting health and dealing with illness. Actually, I will show that even if these six studies had gotten the results the authors claimed (and they did not), at best the effects would be trivial and lost among the other things going on in patients’ lives.

Fortunately, there are lots of signs that you can dismiss such studies and go on to something more useful, if you know what to look for.

Some general rules:

  1. Don’t accept claims of efficacy/effectiveness based on underpowered randomized trials. Dismiss them. The rule of thumb is reliable to dismiss trials that have less than 35 patients in the smallest group. Over half the time, true moderate sized effects will be missed in such studies, even if they are actually there.

Due to publication bias, most of the positive effects that are published from such sized trials will be false positives and won’t hold up in well-designed, larger trials.

When significant positive effects from such trials are reported in published papers, they have to be large to have reached significance. If not outright false, these effect sizes won’t be matched in larger trials. So, significant, positive effect sizes from small trials are likely to be false positives and exaggerated and probably won’t replicate. For that reason, we can consider small studies to be pilot or feasibility studies, but not as providing estimates of how large an effect size we should expect from a larger study. Investigators do it all the time, but they should not: They do power calculations estimating how many patients they need for a larger trial from results of such small studies. No, no, no!

Having spent decades examining clinical trials, I am generally comfortable dismissing effect sizes that come from trials with fewer than 35 patients in the smaller group. I agree with the suggestion that if two larger trials are available in a given literature, go with those and ignore the smaller studies. If there are not at least two larger studies, keep the jury out on whether there is a significant effect.

Applying the Rule of 35, five of the six trials can be dismissed, and the sixth is ambiguous because of loss of patients to follow-up. If promoters of mind-body interventions want to convince us that they have beneficial effects on physical health by conducting trials like these, they have to do better. None of the individual trials should increase our confidence in their claims. Collectively, the trials collapse in a mess without providing a single credible estimate of effect size. This attests to the poor quality of evidence and disrespect for methodology that characterize this literature.
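The arithmetic behind the Rule of 35 is easy to check for yourself. The sketch below is a hypothetical simulation of my own in plain Python (not data from any of the trials discussed): it generates two-arm trials with 35 patients per group and a true moderate effect (d = 0.5). Only about half of the simulated trials reach significance, and the ones that do systematically overestimate the effect, which is exactly why significant results from small trials tend not to replicate.

```python
import math
import random

def t_statistic(x, y):
    """Two-sample pooled-variance t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / (pooled * math.sqrt(1 / nx + 1 / ny))

random.seed(42)
n, true_d = 35, 0.5   # 35 patients per arm, true moderate effect
crit = 2.00           # approximate two-sided .05 cutoff for df = 68
reps = 4000
sig_effects = []      # observed effect sizes in "significant" trials
for _ in range(reps):
    treatment = [random.gauss(true_d, 1) for _ in range(n)]
    control = [random.gauss(0.0, 1) for _ in range(n)]
    t = t_statistic(treatment, control)
    if abs(t) > crit:
        # convert t back to an observed standardized effect size
        sig_effects.append(t * math.sqrt(2 / n))

power = len(sig_effects) / reps
mean_sig_d = sum(sig_effects) / len(sig_effects)
print(f"power ~ {power:.2f}")            # roughly a coin flip
print(f"mean significant d ~ {mean_sig_d:.2f}")  # inflated above 0.5
```

Note how the average effect size among the trials that happened to reach significance is well above the true value of 0.5: that inflated number is what gets published, and what naive power calculations for the next study are then based on.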

  1. Don’t be taken in by titles to peer-reviewed articles that are themselves an announcement that these interventions work. Titles may not be telling the truth.

What I found extraordinary is that five of the six randomized trials had a title indicating that a positive effect was found. I suspect that most people encountering the title will not actually go on to read the study. So they will be left with the false impression that positive results were indeed obtained. It’s quite a clever trick to turn the title of an article, by which most people will remember it, into a false advertisement for what was actually found.

For a start, we can simply remind ourselves that with these underpowered studies, investigators should not even be making claims about efficacy/effectiveness. So, one trick of the developing skeptic is to check whether the claims being made in the title fit the size of the study. Beyond that, actually going to the results section can reveal other discrepancies between what was found and what is being claimed.

I think it’s a general rule of thumb that we should be careful of titles for reports of randomized that declare results. Even when what is claimed in the title fits with the actual results, it often creates the illusion of a greater consistency with what already exists in the literature. Furthermore, even when future studies inevitably fail to replicate what is claimed in the title, the false claim lives on, because failing to replicate key findings is almost never a condition for retracting a paper.

  3. Check the institutional affiliations of the authors. These six trials serve as a depressing reminder that we can’t rely on researchers’ institutional affiliations or federal grants to reassure us of the validity of their claims. These authors are not from Quack-Quack University, and they get funding for their research.

In all cases, the investigators had excellent university affiliations, mostly in California. Most studies were conducted with some form of funding, often federal grants. A quick check of Google would reveal that at least one of the authors on a study, usually more, had federal funding.

  4. Check the conflicts of interest, but don’t expect the declarations to be informative, and be skeptical of what you find. It is disappointing that a check of the conflict of interest statements for these articles would be unlikely to arouse suspicion that the claimed results might have been influenced by financial interests. One cannot readily see that the studies were generally done in settings promoting alternative, unproven treatments that would benefit from the publicity the studies generated. One cannot see that some of the authors have lucrative book contracts and speaking tours that require making claims for dramatic effects of mind-body treatments that could not possibly be supported by transparent reporting of the results of these studies. As we will see, one of the studies was actually conducted in collaboration with Deepak Chopra and with money from his institution. That would definitely raise flags in the skeptic community, but the dubious tie might be missed by patients and their families who are vulnerable to unwarranted claims and unrealistic expectations of what can be obtained outside of conventional medicine, like chemotherapy, surgery, and pharmaceuticals.

Based on what I found probing these six trials, I can suggest some further rules of thumb. (1) Don’t assume that all relevant conflicts of interest are disclosed in articles about the health effects of alternative treatments. Check the setting in which the study was conducted and whether an integrative [complementary and alternative, meaning mostly unproven] care setting was used for recruiting or running the trial. Not only would this represent potential bias on the part of the authors, it would represent selection bias in the recruitment of patients and in their responsiveness to placebo effects consistent with the marketing themes of these settings. (2) Google the authors and see if they have lucrative pop psychology book contracts, TED talks, or speaking gigs at positive psychology or complementary and alternative medicine gatherings. None of these lucrative activities is typically expected to be disclosed as a conflict of interest, but all require making strong claims that are not supported by available data. Such rewards are perverse incentives for authors to distort and exaggerate positive findings and to suppress negative findings in peer-reviewed reports of clinical trials. (3) Check and see if known quacks have prepared recruitment videos for the study, informing patients what will be found. (Seriously, I was tipped off to look, and I found exactly that.)

  5. Look for the usual suspects. A surprisingly small, tight, interconnected group is generating this research. You can look the authors up on Google or Google Scholar, or browse through my previous blog posts and see what I have said about them. As I will point out in my next blog post, one got withering criticism for her claim that drinking carbonated sodas, but not sweetened fruit drinks, shortened your telomeres, so that drinking soda was worse than smoking. My colleagues and I re-analyzed the data of another of the authors. We found, contrary to what he claimed, that pursuing meaning rather than pleasure in your life did not affect gene expression related to immune function. We also showed that substituting randomly generated data worked as well as what he got from blood samples in replicating his original results. I don’t think it is ad hominem to point out both authors’ history of making implausible claims. It speaks to source credibility.
  6. Check and see if there is a trial registration for a study, but don’t stop there. You can quickly check with PubMed whether a report of a randomized trial is registered. Trial registration is intended to ensure that investigators commit themselves in advance to a primary outcome, or maybe two; you can then check whether that is what they emphasized in their paper, and whether what is said in the report of the trial fits with what was promised in the protocol. Unfortunately, only one of these trials was registered. The trial registration was vague on what outcome variables would be assessed and did not mention the outcome emphasized in the published paper (!). The registration also said the sample would be larger than what was reported in the published study. When researchers have difficulty with recruitment, their study is often compromised in other ways. I’ll show how this study was compromised.

Well, it looks like applying these generally useful rules of thumb is not always so easy with these studies. I think the small sample sizes across all of the studies would be enough to decide that this research has yet to yield meaningful results and certainly does not support the claims that are being made.

But readers who are motivated to probe deeper will come up with strong signs of p-hacking and questionable research practices.

  7. Check the report of the randomized trial and see if you can find a declaration of one or two primary outcomes and a limited number of secondary outcomes. What you will find instead is that the studies always have more outcome variables than patients receiving the interventions. The opportunities for cherry-picking positive findings and discarding the rest are huge, especially because it is so hard to assess what data were collected but not reported.
  8. Check and see if you can find tables of unadjusted primary and secondary outcomes. Honest and transparent reporting involves giving readers a look at simple statistics so they can decide whether the results are meaningful. For instance, if effects on stress and depressive symptoms are claimed, are the results impressive and clinically relevant? In almost all cases, no peeking is allowed. Instead, the authors provide analyses and statistics with lots of adjustments made. They break lots of rules in doing so, especially with such small samples. These authors are virtually assured of getting results to crow about.
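To see why measuring more outcomes than patients is such a gift to cherry-pickers, here is a minimal sketch, again a hypothetical simulation of my own in plain Python rather than data from these trials: one null trial with 15 patients per arm and 100 uncorrelated outcome measures, where the intervention truly does nothing, still hands the authors a handful of “significant” findings to headline.

```python
import math
import random

def t_statistic(x, y):
    """Two-sample pooled-variance t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / (pooled * math.sqrt(1 / nx + 1 / ny))

random.seed(7)
n_per_arm, n_outcomes = 15, 100
crit = 2.05  # approximate two-sided .05 cutoff for df = 28
false_positives = 0
for _ in range(n_outcomes):
    # Both arms drawn from the same distribution: no true effect anywhere.
    treatment = [random.gauss(0, 1) for _ in range(n_per_arm)]
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    if abs(t_statistic(treatment, control)) > crit:
        false_positives += 1

# Around 5 of 100 null outcomes come up "significant" by chance alone;
# report only those, and a do-nothing intervention looks like a success.
print(f"{false_positives} 'significant' outcomes out of {n_outcomes}")
```

Without a table of all unadjusted outcomes, or a registered protocol naming the primary ones in advance, a reader has no way to tell the chance hits from real effects.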

Famously, Joe Simmons and Leif Nelson hilariously published claims that briefly listening to the Beatles’ “When I’m Sixty-Four” left students a year and a half younger than if they had been assigned to listen to “Kalimba.” Simmons and Nelson knew this was nonsense, but their intent was to show what researchers can do if they have free rein in how they analyze their data and what they report. They revealed the tricks they used, but those tricks were minor league and amateurish compared to what the authors of these trials consistently did in claiming that yoga, tai chi, and mindfulness modified the expression of DNA.

Stay tuned for my next blog post, where I go through the six studies. But consider this if you or a loved one have to make an immediate decision about whether to plunge into the world of woo-woo unproven medicine in hopes of altering DNA expression: I will show that the authors of these studies did not get the results they claimed. But who should care if they did? The effects were laughably trivial. As the authors of the review about which I have been complaining noted:

One other problem to consider are the various environmental and lifestyle factors that may change gene expression in similar ways to MBIs [Mind-Body Interventions]. For example, similar differences can be observed when analyzing gene expression from peripheral blood mononuclear cells (PBMCs) after exercise. Although at first there is an increase in the expression of pro-inflammatory genes due to regeneration of muscles after exercise, the long-term effects show a decrease in the expression of pro-inflammatory genes (55). In fact, 44% of interventions in this systematic review included a physical component, thus making it very difficult, if not impossible, to discern between the effects of MBIs from the effects of exercise. Similarly, food can contribute to inflammation. Diets rich in saturated fats are associated with pro-inflammatory gene expression profile, which is commonly observed in obese people (56). On the other hand, consuming some foods might reduce inflammatory gene expression, e.g., drinking 1 l of blueberry and grape juice daily for 4 weeks changes the expression of the genes related to apoptosis, immune response, cell adhesion, and lipid metabolism (57). Similarly, a diet rich in vegetables, fruits, fish, and unsaturated fats is associated with anti-inflammatory gene profile, while the opposite has been found for Western diet consisting of saturated fats, sugars, and refined food products (58). Similar changes have been observed in older adults after just one Mediterranean diet meal (59) or in healthy adults after consuming 250 ml of red wine (60) or 50 ml of olive oil (61). However, in spite of this literature, only two of the studies we reviewed tested if the MBIs had any influence on lifestyle (e.g., sleep, diet, and exercise) that may have explained gene expression changes.

How about taking tango lessons instead? You would at least learn dance steps, get exercise, and decrease any social isolation. And so what if yoga or meditation offered no more benefit than taking up these other activities?

Complex PTSD, STAIR, Social Ecology and lessons learned from 9/11- a conversation with Dr. Marylene Cloitre

Dr. Marylene Cloitre is the Associate Director of Research of the National Center for PTSD Dissemination and Training Division and a Research Professor of Psychiatry and Child and Adolescent Psychiatry at the New York University Langone Medical Center in New York City. She is a recipient of several honors related to her service in New York City following 9/11 and was an advisory committee member for the National September 11 Memorial Museum. She has specific expertise in complex PTSD and in the development and dissemination of STAIR (Skills Training in Affective and Interpersonal Regulation), a psychological therapy designed to help survivors of trauma.

Dr. Jain: What exactly is complex PTSD?

Dr. Cloitre:
Complex PTSD has a very long history, pushed primarily by clinicians who looked at their patients and thought there’s something more going on here than PTSD.
In DSM-IV, complex PTSD was recognized among the additional features, where there is a mix of problems related to emotion regulation, self-concept and interpersonal relationships. After that, there was really no funding for investigating this further, the research has been spotty, and it was sort of dying on the vine.

But with the development of ICD-11, there was an opportunity to refresh consideration of complex PTSD. I was part of a work group that started in 2012; we looked at the literature and thought there seemed to be enough data to support two different forms of PTSD: the classic fear circuitry disturbance, and then this more general kind of disturbance in the three core areas of emotion regulation, self-concept and interpersonal relationships.

We proposed that there should be two distinct disorders, PTSD and complex PTSD, and it looks like that has been accepted and will be part of the ICD-11 coming out in 2018.

Since the initial proposal, I’ve been working with many people, mostly Europeans, where the ICD is more prominent than in the United States, and there are now about nine published papers providing evidence that these are two distinct disorders.

Dr. Jain:
Can you summarize the ways in which they’re distinct? On a clinical level, what would you see in complex PTSD?

Dr. Cloitre: Mostly we’ve been looking at latent class analysis which is a newish kind of data analytic technique which looks at how people cluster together and you look at their symptom profile. There are a group of people who very distinctly have PTSD in terms of re-experiencing, avoidance and hyperarousal and then they’re fine on everything else. Then you have another group of people who have these problems as well as problems in these three other areas.And then there are another group of people who, despite exposure to trauma, do fairly well.

What we’ve been seeing are these three groups in clinical populations as well as in community populations and adults as well as in children.

Overall, these latent class analyses are really showing that people cluster together in very distinctly different ways. I think the important thing about this distinction is, what’s next? Perhaps there are different clinical interventions that we want to look at to maximize good outcome. Some people may do very well with exposure therapy. I would say the PTSD clustered folks will do very well and have optimal outcome because that’s all that bothers them. For the other folks, they have a lot of other problems that really contribute to their functional impairment.

For me as a clinician as well as a researcher, I’ve always been worried not so much about the diagnosis of the person in front of me but about how well they’re functioning in the world. What I have noticed is you can get rid of the PTSD symptoms for people with complex PTSD, but they’re still very impaired.
My motivation for thinking about a different diagnosis and different treatment is to identify these other problems and then to provide interventions that target them, with the goal of improving day-to-day life functioning. If you don’t have the ability to relate well to people because you mistrust them or are highly avoidant, or if you think poorly about yourself, these are huge issues, and we need to target them in treatment.

Dr. Jain
Have you noticed that different types of trauma contribute to PTSD v complex PTSD?

Dr. Cloitre: Yes, it does, and it kind of makes sense that people who have had sustained and repeated trauma (e.g., multiple and sustained trauma during childhood) are the ones who have complex PTSD.

Dr. Jain: Can you tell us a little bit about the fundamental philosophy that drove you to come up with STAIR, and what evidence is there for its effectiveness?

Dr. Cloitre: I came to develop STAIR as a result of paying attention to what my patients were telling me they wanted help with; that was the driving force. It wasn’t a theoretical model; it was that patients came and said, “I’m really having problems with my relationships and that’s what I need help with” or “I really have problems with my moods and I need help with that.”

So, I thought, why don’t we start there? That is why I developed STAIR, and developed it as a sequenced therapy: while respecting the importance of getting into the trauma and doing exposure-based work, I also wanted to engage the patient and respect their presenting needs. That is what it’s all about for me.
Over time I saw a secondary benefit: an improved sense of self and improved emotion regulation could actually impact the value of exposure therapy in a positive way.

In my mind, the real question is: What kind of treatments work best for whom? That is the question. There will be some people for whom going straight to exposure therapy is the most effective and efficient way to get them functioning and they’ll be happy with three or four sessions, just like some 9/11 survivors I saw. They only needed three or four sessions.

Other people might do better with combination therapies.

Dr. Jain: Regarding the studies that you’ve done with STAIR, can you summarize the populations you have used it for?

Dr. Cloitre: I began using STAIR plus exposure with the population I thought would most need it, which is people with histories of childhood abuse. In fact, our data show that the combination of skills training plus exposure was significantly better than skills alone or exposure alone. So that's very important. It also reduced dropout very significantly as compared to exposure, which is a continuing problem with exposure therapy, especially for this population.

Dr. Jain: Can you speak to social ecology/social bonds and PTSD: what can the research world tell us about the social dimensions of PTSD, and how can we apply this to returning military members and veterans?

Dr. Cloitre: I think that social support is critical to the recovery of people who have been exposed to trauma and who are vulnerable to symptoms. We have enough studies showing that it's the critical determinant of return to health.

I think we have done a very poor job of translating this observation into something meaningful for returning veterans. There is general recognition that families are part of the solution and communities are part of the solution, but it is vague; there isn't really a sense of what we are going to do about it.

I think these wars (Afghanistan and Iraq) are very different from Vietnam, where soldiers came back and were called baby killers and had tomatoes and garbage thrown at them. You can really understand why a vulnerable person would spiral downwards into pretty significant PTSD and substance abuse.

I think we need to be more thoughtful and engage veterans in discussions about what's most helpful in the reintegration process, because there are probably really explicit things, like being welcomed home, but also very subtle things that we haven't figured out about the experience.
I think on a community or family level there's a general awareness, but we haven't really gotten clear or effective thinking about what to do. I think that's our next step. The parade and the welcome home signs are not enough.

I'll give an example of what I witnessed after 9/11. The community around survivors feels awkward and doesn't know what to do, so they start moving away. Combine this with a survivor who is sad or irritable, and so not the easiest person to engage with. I say to patients sometimes: it's a really unfair and unfortunate circumstance that, in a way, not only are you suffering but you're also kind of responsible for making people around you comfortable with you.

I used to do STAIR because patients asked for it and also because I thought, "oh well, some people never had these social skills in the first place, which is why they are vulnerable to PTSD." But then I noticed that STAIR was useful for everybody with PTSD, because the traumatized patient has an unfair burden to actually reach out to people in the process of re-engagement: the community and the family are confused, and others, such as strangers or employers, are scared. So they have to kind of compensate for the discomfort of others, which is asking a lot.

I think in our therapies we can say: look, it's not fair, but people feel uncomfortable around the veteran. They don't know how to act, and in a way you not only have to educate yourself about your circumstance but, in the end, educate others.

Dr. Jain: Survivor perception of social support really matters. If you take a group of disaster survivors, we may feel we're doing this for them and we're doing that for them, but if the survivors, for whatever reason, don't perceive it as being helpful, it doesn't matter. And when I think about marginalized populations in our society, I don't think communicating to others how to help or support you is that simple.

Dr. Cloitre: It's very complicated because it is a dynamic. I think we need to talk to trauma survivors and understand what their needs are so that the community can respond effectively and be a match. Not everybody wants the same thing. That's the real challenge. It would also help if survivors could be a little bit more compassionate, not only towards themselves for what they have been through but towards others who are trying to communicate with them and failing.

Dr. Jain: That can be hard to do when you're hurting. The social ecology of PTSD is really important, but it's really complicated, and we are not there in terms of harnessing social ecology to improve lives.

Dr. Cloitre: No. I think we're just groping around in the dark, in a room that says the social ecology of PTSD is important. We don't know how to translate that observation into actionable plans, either in our individual therapies or in our family therapies, and then in our community actions or policies.
But in individual therapy, I do think it means, first, recognizing the importance of trying to enhance perceptions of support where they're real; second, recognizing the unfair burden survivors carry and trying to enhance their skills for communicating with people; and third, having compassion for the people out there who are trying to communicate but failing.
I have had a lot of patients who come into therapy and say,
"This is so ridiculous. They're saying stupid things to me."
And I say,
"Well, at least they're trying."
I think it’s important for the affected community to have the voice and take leadership, instead of people kind of smothering them with social support that they may or may not need.

Dr. Jain: I know you're a native New Yorker, and you provided a lot of service to New York City following 9/11. Can you speak about that work? In particular, I'm really interested in the body of research that emerged after 9/11, because I feel it has helped us understand so much about disaster-related PTSD.

Dr. Cloitre: What we found out was that most people are very resilient. We were able to get prevalence rates of PTSD following 9/11, and that in and of itself was very important. I think that's the strongest research that came out.

I think on a social level it broke open awareness, in this country and maybe globally, about the impact of trauma and about PTSD, because it came with very little shame or guilt.
Some people ask, what was so different about 9/11? Well, it happened to the most powerful country and the most powerful city, so if it could happen to them it could happen anywhere. That was the response. There was not the usual marginalization of "Well, this is a victim circumstance, it couldn't happen to me, and they must have done something to bring it on themselves."
There was a hugely different response, and that was key to the shift in recognition of the diagnosis of PTSD, which then led to more general research about it. I think that was huge.
Before 9/11, I would say I do research in PTSD and people would say, what is that? Now when I say I do research in PTSD, not a single person ever asks me what that is. I'm sure they don't really know what it is, but they never look confused. It's a term that is now part and parcel of American society.
9/11 revolutionized the awareness of PTSD and also the acceptability of adverse effects as a result of trauma. There was new knowledge gained and also a transformation in awareness that was national and probably global, because of the impact it had and the ripple effects on other countries.
I think those are the two main things.
I don't think it's really done very much for our thinking about treatment. We continue to use some of our central treatments, and we didn't get too far in really advancing or diversifying.
For me personally, I learned a lot about the diversity of trauma survivors. Very different people, very different reactions.
I think probably the other important academic or scholarly advance was the recognition of the blend of loss and trauma and how they come together. Our understanding of people's responses to death, under circumstances of unexpected and violent death, has also advanced. In fact, in ICD-11 there will be a traumatic grief diagnosis, which I think has moved forward because of 9/11. That's pretty big.

Danish RCT of cognitive behavior therapy for whatever ails your physician about you

I was asked by a Danish journalist to examine a randomized controlled trial (RCT) of cognitive behavior therapy (CBT) for functional somatic symptoms. I had not previously given the study a close look.

I was dismayed by how highly problematic the study was in so many ways.

I doubted that the results of the study showed any benefit to the patients or had any relevance to healthcare.

I then searched and found the website for the senior author's clinical offerings. I suspected that the study was a mere experimercial, a marketing effort for the services he offered.

Overall, I think what I found hiding in plain sight has broader relevance to scrutinizing other studies claiming to evaluate the efficacy of CBT for what are primarily physical illnesses, not psychiatric disorders. Look at the other RCTs. I am confident you will find similar problems. But then there is the bigger picture…

[A controversial assessment ahead? You can stop here and read the full text of the RCT and its trial registration before continuing with my analysis.]

Schröder A, Rehfeld E, Ørnbøl E, Sharpe M, Licht RW, Fink P. Cognitive–behavioural group treatment for a range of functional somatic syndromes: randomised trial. The British Journal of Psychiatry. 2012.

A summary overview of what I found:

 The RCT:

  • Was unblinded to patients, interventionists, and to the physicians continuing to provide routine care.
  • Had a grossly unmatched, inadequate control/comparison group, so that any benefit from nonspecific (placebo) factors counts toward the estimated efficacy of the intervention.
  • Relied on subjective self-report measures for primary outcomes.
  • With such a familiar trio of design flaws, even an inert homeopathic treatment would be found effective, if it were provided with the same positive expectations and support as the CBT in this RCT. [This may seem a flippant comment that reflects on my credibility, not the study. But please keep reading to my detailed analysis where I back it up.]
  • The study showed an inexplicably high rate of deterioration in both the treatment and control groups. Apparent improvement in the treatment group might only reflect less deterioration than in the control group.
  • The study is focused on unvalidated psychiatric diagnoses being applied to patients with multiple somatic complaints, some of whom may not yet have a medical diagnosis, but most clearly had confirmed physical illnesses.

But wait, there is more!

  • It’s not CBT that was evaluated, but a complex multicomponent intervention in which what was called CBT is embedded in a way that its contribution cannot be evaluated.

The “CBT” did not map well onto international understandings of the assumptions and delivery of CBT. The complex intervention included weeks of indoctrinating the patient with an understanding of their physical problems that incorporated simplistic pseudoscience before any CBT was delivered. The treatment focused on goals imposed by a psychiatrist that didn't necessarily fit with patients' sense of their most pressing problems and solutions.

And the kicker:

  • The authors switched primary outcomes, reconfiguring the scoring of their subjective self-report measures years into the trial, based on peeking at the results with the original scoring.

The investigators have a website marketing their services. Rather than a quality contribution to the literature, this study can be seen as an experimercial, doomed to bad science and questionable results before the first patient was enrolled. An undeclared conflict of interest in play? There is also another serious undeclared conflict of interest for one of the authors.

For the uninformed and gullible, the study handsomely succeeds as an advertisement for the investigators’ services to professionals and patients.

Personally, I would be indignant if a primary care physician tried to refer me or a friend or family member to this trial. In the absence of overwhelming evidence to the contrary, I assume that people around me who complain of physical symptoms have legitimate physical concerns. If they do not yet have a confirmed diagnosis, it serves little purpose to stop probing and refer them to psychiatrists. This trial operates with an anachronistic Victorian definition of a psychosomatic condition.

But why should we care about a patently badly conducted trial with switched outcomes? Is it only a matter of something being rotten in the state of Denmark? Aside from the general impact on the existing literature concerning CBT for somatic conditions, results of this trial were entered into a Cochrane review of nonpharmacological interventions for medically unexplained symptoms. I previously complained about one of the authors of this RCT also being listed as an author on another Cochrane review protocol. Prior to that, I complained to Cochrane about this author's larger research group influencing a decision to include switched outcomes in another Cochrane review. A lot of us rightfully depend heavily on the verdict of Cochrane reviews for deciding best evidence. That trust is being put into jeopardy.

Detailed analysis

1. This is an unblinded trial, a particularly weak methodology for examining whether a treatment works.

The letter that alerted physicians to the trial had essentially encouraged them to refer patients they were having difficulty managing.

‘Patients with a long-term illness course due to medically unexplained or functional somatic symptoms who may have received diagnoses like fibromyalgia, chronic fatigue syndrome, whiplash associated disorder, or somatoform disorder.’

Patients, and the physicians who referred them, subsequently got feedback about which group patients were assigned to: either routine care or what was labeled as CBT. This information could have had a strong influence on the outcomes that were reported, particularly for the patients left in routine care.

Patients' learning that they had not been assigned to the intervention group was undoubtedly disappointing and demoralizing. The information probably did nothing to improve the positive expectations and support available to patients in routine care, and could have had a nocebo effect. The feedback may have contributed to the otherwise inexplicably high rates of subjective deterioration [noted below] reported by patients left in the routine care condition. In contrast, learning of assignment to the intervention group undoubtedly boosted the morale of both patients and physicians, and also increased the gratitude of the patients. This would be reflected in responses to the subjective outcome measures.

The gold standard alternative to an unblinded trial is a double-blind, placebo-controlled trial in which neither providers, nor patients, nor even the assessors rating outcomes know to which group particular patients were assigned. Of course, this is difficult to achieve in a psychotherapy trial. Yet a fair alternative is a psychotherapy trial in which patients and those who refer them are blind to the nature of the different treatments, and in which an effort is made to communicate credible positive expectations about the comparison control group.

Conclusion: A lack of blinding seriously biases this study toward finding a positive effect for the intervention, regardless of whether the intervention has any active, effective component.

2. The claim that this is a randomized controlled trial depends on the adequacy of the control offered by the comparison group, enhanced routine care. Just what is being controlled for by the comparison? In evaluating a psychological treatment, it's important that the comparison/control group offer the same frequency and intensity of contact, positive expectations, attention, and support. This trial decidedly did not.

There were large differences between the intervention and control conditions in the amount of contact time. Patients assigned to the cognitive therapy condition received an additional nine group sessions of 3.5 hours each with a psychiatrist, plus the option of even more consultations. The more than 30 hours of contact time with a psychiatrist should be very attractive to patients who wanted it and could not otherwise obtain it. For some, it undoubtedly represented an opportunity to have someone listen to their complaints of pain and suffering in a way that had not previously happened. This is also more than the intensity of psychotherapy typically offered in clinical trials, which is closer to 10 to 15 50-minute sessions.

The intervention group thus received substantially more support and contact time, delivered with more positive expectations. This wealth of nonspecific factors favoring the intervention group compromises any effort to disentangle the specific effects of any active ingredient in the CBT intervention package. From what has been said so far, the trial's providing a fair and generalizable evaluation of the CBT intervention is nigh impossible.

Conclusion: This is a methodologically poor choice of control groups with the dice loaded to obtain a positive effect for CBT.

3. The primary outcomes, both as originally scored and after switching, are subjective self-report measures that are highly responsive to nonspecific treatments and to the alleviation of mild depressive symptoms and demoralization. They are not consistently related to objective changes in functioning. They are particularly problematic when used as outcome measures in the context of an unblinded clinical trial with an inadequate control group.

There have been consistent demonstrations that assigning patients to inert treatments and measuring the outcomes with subjective measures may register improvements that do not correspond to what would be found with objective measures.

For instance, a provocative New England Journal of Medicine study showed that sham acupuncture was as effective as an established medical treatment (an albuterol inhaler) for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment when judged with objective measures.

There have been a number of demonstrations that treatments such as the one offered in the present study to patient populations similar to those in the study produce changes in subjective self-report that are not reflected in objective measures.

Much of the improvement in primary outcomes occurred before the first assessment after baseline and not very much afterwards. The early response is consistent with a placebo response.

The study actually included one largely unnoticed objective measure: utilization of routine care. Presumably, if the CBT was as effective as claimed, it would have produced a significant reduction in healthcare utilization. After all, isn't the point of this trial to demonstrate that CBT can reduce healthcare utilization associated with (as yet) medically unexplained symptoms? Curiously, utilization of routine care did not differ between groups.

The combination of the choice of subjective outcomes, unblinded nature of the trial, and poorly chosen control group bring together features that are highly likely to produce the appearance of positive effects, without any substantial benefit to the functioning and well-being of the patients.

Conclusion: Evidence for the efficacy of a CBT package for somatic complaints that depends solely on subjective self-report measures is unreliable, and unlikely to generalize to more objective measures of meaningful impact on patients’ lives.

4. We need to take into account the inexplicably high rates of deterioration in both groups, but particularly in the control group receiving enhanced care.

There was unexplained deterioration in 50% of the control group and 25% of the intervention group. Rates of deterioration are given only a one-sentence mention in the article, but they deserve much more attention: they need to qualify and dampen any generalizable clinical interpretation of other claims about outcomes attributed to the CBT. We need to keep in mind that clinical trials cannot determine how effective treatments are, only how different a treatment is from a control condition. So an effect claimed for a treatment can largely or entirely come from deterioration in the control group, not from what the treatment offers. The claim of success for CBT probably depends largely on the deterioration in the control group.

One interpretation of this trial is that spending an extraordinary 30 hours with a psychiatrist leads to only half the deterioration experienced with nothing more than routine care. But this raises the question of why half the patients left in routine care were deteriorating. What possibly could be going on?

Conclusion: Unexplained deterioration in the control group may explain apparent effects of the treatment, but both groups are doing badly.

5. The diagnosis of “functional somatic symptoms” or, as the authors prefer, Severe Bodily Distress Syndromes, is considered by the authors to be a psychiatric diagnosis. It is not accepted as a valid diagnosis internationally. Its validation is limited to work done almost entirely within the author group, which is explicitly labeled as “preliminary.” This biased sample of patients is quite heterogeneous, beyond their physicians having difficulty managing them. They have a full range of subjective complaints and documented physical conditions. Many of these patients would not be considered as primarily having a psychiatric disorder internationally, and certainly not within the US, except where they had major depression or an anxiety disorder. Such psychiatric disorders were not exclusion criteria.

Once sent on the pathway to a psychiatric diagnosis by their physicians’ making a referral to the study, patients had to meet additional criteria:

To be eligible for participation, individuals had to have a chronic (i.e., of at least 2 years' duration) bodily distress syndrome of the severe multi-organ type, which requires functional somatic symptoms from at least three of four bodily systems, and moderate to severe impairment in daily living.

The condition identified in the title of the article is not validated as a psychiatric diagnosis. The two papers to which the authors refer are their own studies (1, 2), drawn from a single sample. The title of one of these papers makes a rather immodest claim:

Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. Journal of Psychosomatic Research. 2010 May 31;68(5):415-26.

In neither of the two papers nor in the present RCT is there sufficient effort to rule out a physical basis for the complaints qualifying these patients for a psychiatric diagnosis. There is also a lack of follow-up to see whether physical diagnoses were later applied.

Citation patterns of these papers strongly suggest the authors have not gotten much traction internationally. The criterion of symptoms from three of four bodily systems is arbitrary and unvalidated. Many patients with known physical conditions would meet these criteria without any psychiatric diagnosis being warranted.

The authors relate their essentially homegrown diagnosis to the functional somatic syndromes, diagnoses which are themselves subject to serious criticism. See, for instance, the work of Allen Frances, M.D., who chaired the American Psychiatric Association's Diagnostic and Statistical Manual (DSM-IV) Task Force. He became a harsh critic of its shortcomings and of the APA's failure to correct coverage of functional somatic syndromes in the next DSM.

Mislabeling Medical Illness As Mental Disorder

Unless DSM-5 changes these incredibly over inclusive criteria, it will greatly increase the rates of diagnosis of mental disorders in the medically ill – whether they have established diseases (like diabetes, coronary disease or cancer) or have unexplained medical conditions that so far have presented with somatic symptoms of unclear etiology.

And:

The diagnosis of mental disorder will be based solely on the clinician’s subjective and fallible judgment that the patient’s life has become ‘subsumed’ with health concerns and preoccupations, or that the response to distressing somatic symptoms is ‘excessive’ or ‘disproportionate,’ or that the coping strategies to deal with the symptom are ‘maladaptive’.

And:

These are inherently unreliable and untrustworthy judgments that will open the floodgates to the overdiagnosis of mental disorder and promote the missed diagnosis of medical disorder.

The DSM-5 Task Force refused to adopt the changes proposed by Dr. Frances.

Bad News: DSM 5 Refuses to Correct Somatic Symptom Disorder

This led Frances to apologize to patients:

My heart goes out to all those who will be mislabeled with this misbegotten diagnosis. And I regret and apologize for my failure to be more effective.

The chair of the DSM-5 Somatic Symptom Disorder work group has delivered a scathing critique of the very concept of medically unexplained symptoms.

Dimsdale JE. Medically unexplained symptoms: a treacherous foundation for somatoform disorders? Psychiatric Clinics of North America. 2011;34(3):511-3.

Dimsdale noted that applying this psychiatric diagnosis sidesteps the quality of medical examination that led up to it. Furthermore:

Many illnesses present initially with nonspecific signs such as fatigue, long before the disease progresses to the point where laboratory and physical findings can establish a diagnosis.

And such diagnoses may encompass far too varied a group of patients for any intervention to make sense:

One needs to acknowledge that diseases are very heterogeneous. That heterogeneity may account for the variance in response to intervention. Histologically, similar tumors have different surface receptors, which affect response to chemotherapy. Particularly in chronic disease presentations such as irritable bowel syndrome or chronic fatigue syndrome, the heterogeneity of the illness makes it perilous to diagnose all such patients as having MUS and an underlying somatoform disorder.

I tried making sense of a table of the additional diagnoses the patients in this study had been given. A considerable proportion of patients had physical conditions that would not be considered psychiatric problems in the United States. Many patients could be suffering multiple symptoms not only from these conditions but also from side effects of the medications being offered. It is very difficult to manage the multiple medications required by multiple comorbidities. Physicians in the community found that these patients taxed their competence and their ability to spend time with them.

table of functional somatic symptoms

Most patients had a diagnosis of “functional headaches.” It’s not clear what this designation means, but conceivably it could include migraine headaches, which are accompanied by multiple physical complaints. CBT is not an evidence-based treatment of choice for functional headaches, much less migraines.

Over a third of the patients had irritable bowel syndrome (IBS). A systematic review of the comorbidity of irritable bowel syndrome concluded that physical comorbidity is the norm in IBS:

The nongastrointestinal nonpsychiatric disorders with the best-documented association are fibromyalgia (median of 49% have IBS), chronic fatigue syndrome (51%), temporomandibular joint disorder (64%), and chronic pelvic pain (50%).

In the United States, many patients and specialists would consider treating irritable bowel syndrome as a psychiatric condition offensive and counterproductive. There is growing evidence that irritable bowel syndrome involves a disturbance in the gut microbiota. It involves a gut-brain interaction, but the primary direction of influence is from the disturbance in the gut to the brain. Anxiety and depression symptoms are secondary manifestations, a product of activity in the gut influencing the nervous system.

Most of the patients in the sample had a diagnosis of fibromyalgia and over half of all patients in this study had a diagnosis of chronic fatigue syndrome.

Other patients had diagnosable anxiety and depressive disorders, which, particularly at the lower end of severity, are responsive to nonspecific treatments.

Undoubtedly many of these patients, perhaps most, are demoralized by not being able to get a diagnosis for what they have good reason to believe is a medical condition, aside from the discomfort, pain, and interference with their lives that they are experiencing. They could be experiencing demoralization secondary to physical illness.

These patients presented with pain, fatigue, general malaise, and demoralization. I have trouble imagining how their specific most pressing concerns could be addressed in group settings. These patients pose particular problems for making substantive clinical interpretation of outcomes that are highly general and subjective.

Conclusion: Diagnosing patients with multiple physical symptoms as having a psychiatric condition is highly controversial. Results will not generalize to countries and settings where the practice is not accepted. Many of the patients involved in the study had recognizable physical conditions, and yet they were shunted to psychiatrists who focused only on their attitudes towards the symptoms. They were being denied the specialist care and treatments that might conceivably reduce the impact of their conditions on their lives.

6. The “CBT” offered in this study is as part of a complex, multicomponent treatment that does not resemble cognitive behavior therapy as it is practiced in the United States.

As seen in Figure 1 in the article, the multicomponent intervention is quite complex and consists of more than cognitive behavior therapy. Moreover, at least in the United States, CBT has distinctive elements of collaborative empiricism: patients and therapist work together selecting issues on which to focus and developing strategies, with the patients reporting back on efforts to implement them. From the details available in the article, the treatment sounded much more like exhortation or indoctrination, even arguing with the patients if necessary. An English version of the educational material used in the initial sessions, available on the web, confirms that a lot of condescending pseudoscience was presented to convince the patients that their problems were largely in their heads.

Without a clear application of learning theory, behavioral analysis, or cognitive science, the “CBT” treatment offered in this RCT has much more in common with the creative novation therapy offered by Hans Eysenck, which is now known to have been justified with fraudulent data. Indeed, comparing the educational materials for this study with those offered in Eysenck's work reveals striking similarities. Eysenck advanced the claim that his intervention could prevent cardiovascular disease and cancer and overcome iatrogenic effects. I know, this sounds really crazy, but see my careful documentation elsewhere.

Conclusion: The embedding of an unorthodox “CBT” in a multicomponent intervention in this study does not allow isolating any specific, active component of CBT that might be at work.

7. The investigators disclose having altered their scoring of their primary outcome years after the trial began, and probably after a lot of outcome data had been collected.

I found a casual disclosure in the methods section of this article unsettling, particularly in light of what the original trial registration had specified:

We found an unexpected moderate negative correlation of the physical and mental component summary measures, which are constructed as independent measures. According to the SF-36 manual, a low or zero correlation of the physical and mental components is a prerequisite of their use.23 Moreover, three SF-36 scales that contribute considerably to the PCS did not fulfil basic scaling assumptions.31 These findings, together with a recent report of problems with the PCS in patients with physical and mental comorbidity,32 made us concerned that the PCS would not reliably measure patients’ physical health in the study sample. We therefore decided before conducting the analysis not to use the PCS, but to use instead the aggregate score as outlined above as our primary outcome measure. This decision was made on 26 February 2009 and registered as a protocol change at clinicaltrials.gov on 11 March 2009. Only baseline data had been analysed when we made our decision and the follow-up data were still concealed.

Switching outcomes, particularly after some results are known, constitutes a serious violation of best research practices and invites suspicion that the investigators refined their hypotheses after peeking at the data. See How researchers dupe the public with a sneaky practice called “outcome switching.”

The authors had originally proposed a scoring consistent with a very large body of literature. Dropping the original scoring precludes any direct comparison with this body of research, including basic norms. They claim that they switched scoring because two key subscales were correlated in the opposite direction of what is reported in the larger literature. That is a troubling indication that something went terribly wrong in the authors’ recruitment of their sample. It should not be swept under the rug.

The authors claim that they switched outcomes based only on examining baseline data from their study. However, one of the authors, Michael Sharpe, is also an author on the controversial PACE trial. A parallel switch was made to the scoring of the subjective self-reports in that trial. When the data were eventually re-analyzed using the original scoring, any positive findings for the trial were substantially reduced and arguably disappeared.

Even if the authors of the present RCT did not peek at their outcome data before deciding to switch scoring of the primary outcome, they certainly had strong indications from other sources that the original scoring would produce weak or null findings. In 2009, one of the authors, Michael Sharpe, had access to results of a relevant trial. The FINE trial had null findings, which affected decisions to switch outcomes in the PACE trial. Is it just a coincidence that the scoring of the outcomes was then switched for the present RCT?

Conclusion: The outcome switching for the present trial represents bad research practice. For the trial to have any credibility, the investigators should make their data publicly available so that these data can be independently re-analyzed with the original scoring of the primary outcomes.

The senior author’s clinic

I invite readers to take a virtual tour of the website for the senior author’s clinical services. Much of it is available in English. Recently, I blogged about dubious claims of a health care system in Detroit achieving a goal of “zero suicide.” I suggested that the evidence for this claim was quite dubious, but that it served as a powerful advertisement for the health care system. I think the present report of an RCT can similarly be seen as an infomercial for training and clinical services available in Denmark.

Conflict of interest

No conflict of interest is declared for this RCT. Under somewhat similar circumstances, I formally complained about undeclared conflicts of interest in a series of papers published in PLOS One. A correction has been announced, but not yet posted.

Aside from the senior author’s need to declare a conflict of interest, the same can be said for one of the authors, Michael Sharpe.

Apart from his professional and reputational interest (his whole career has been built on making strong claims about such interventions), Sharpe works for insurance companies and publishes on the subject. He declared a conflict of interest for the PACE trial:

MS has done voluntary and paid consultancy work for government and for legal and insurance companies, and has received royalties from Oxford University Press.

Here’s Sharpe’s report written for the social benefits reinsurance company UnumProvident.

If the results of this trial are accepted at face value, they will lend credibility to claims that effective interventions are available to reduce social disability. It does not matter whether the intervention is actually effective. Rather, persons receiving social disability payments can be disqualified because they are not enrolled in such treatment.

Effects on the credibility of the Cochrane collaboration report

The switched outcomes of the trial were entered into a Cochrane systematic review, to which primary care health professionals look for guidance in dealing with a complex clinical situation. The review gives no indication of the host of problems that I have exposed here. Furthermore, I have glanced at some of the other trials included, and I see similar difficulties.

I have been unable to convince Cochrane to clean up the conflicts of interest attached to switched outcomes being entered into reviews. Perhaps some of my readers will want to approach Cochrane to revisit this issue.

I think this post raises larger issues about whether Cochrane has any business conducting and disseminating reviews concerning such a bogus psychiatric diagnosis as medically unexplained symptoms. These reviews do patients no good, and may sidetrack them from getting the medical care they deserve. The reviews do serve special interests, including disability insurance companies.

Special thanks to John Peters and to Skeptical Cat for their assistance with my writing this blog. However, I have sole responsibility for any excesses or distortions.