When psychotherapy trials have multiple flaws…

Multiple flaws pose more threats to the validity of psychotherapy studies than would be inferred when the individual flaws are considered independently.


We can learn to spot features of psychotherapy trials that are likely to lead to exaggerated claims of efficacy for treatments, or to claims that will not generalize beyond the sample being studied in a particular clinical trial. We can look to the adequacy of sample size, and spot what the Cochrane Collaboration has defined as risks of bias in its handy assessment tool.

We can look at the case-mix in the particular sites where patients were recruited. We can examine the adequacy of the diagnostic criteria used for entering patients into a trial. We can examine how well a trial was blinded: who assigned patients to particular conditions, and whether the patients, the treatment providers, and the evaluators knew to which condition particular patients were assigned.

And so on. But what about combinations of these factors?

We typically do not pay enough attention to multiple flaws in the same trial. I include myself among the guilty. We may suspect that flaws are seldom simply additive in their effect, but we don’t consider whether there may even be synergism in their negative effects on the validity of a trial. As we will see in this analysis of a clinical trial, multiple flaws can pose more threats to the validity of a trial than we might infer when the individual flaws are considered independently.

The particular paper we are probing is described in its discussion section as the “largest RCT to date testing the efficacy of group CBT for patients with CFS.” It also takes on added importance because two of the authors, Gijs Bleijenberg and Hans Knoop, are considered leading experts in the Netherlands. The treatment protocol was developed over time by the Dutch Expert Centre for Chronic Fatigue (NKCV, http://www.nkcv.nl; Knoop and Bleijenberg, 2010). Moreover, these senior authors dismiss any criticism and even ridicule critics. This study is cited as support for their overall assessment of their own work.  Gijs Bleijenberg claims:

Cognitive behavioural therapy is still an effective treatment, even the preferential treatment for chronic fatigue syndrome.

But

Not everybody endorses these conclusions, however their objections are mostly baseless.

Spoiler alert

This is a long-read blog post. I will offer a summary for those who don’t want to read through it but still want the gist of what I will be saying. However, as always, I encourage readers to be skeptical of what I say and to look to my evidence and arguments and decide for themselves.

Authors of this trial stacked the deck to demonstrate that their treatment is effective. They are striving to support the extraordinary claim that group cognitive behavior therapy fosters not only better adaptation, but actually recovery from what is internationally considered a physical condition.

There are some obvious features of the study that contribute to the likelihood of a positive effect, but these features need to be considered collectively, in combination, to appreciate the strength of this effort to guarantee positive results.

This study represents the perfect storm of design features that operate synergistically:


 Referral bias – Trial conducted in a single specialized treatment setting known for advocating psychological factors maintaining physical illness.

Strong self-selection bias of a minority of patients enrolling in the trial seeking a treatment they otherwise cannot get.

Broad, overinclusive diagnostic criteria for entry into the trial.

An active treatment condition carrying a strong message about how patients should respond to outcome assessment, namely with reports of improvement.

An unblinded trial with a waitlist control lacking the nonspecific elements (placebo) that confound the active treatment.

Subjective self-report outcomes.

Specifying a clinically significant improvement that required only that a primary outcome fall below the threshold needed for entry into the trial.

Deliberate exclusion of relevant objective outcomes.

Avoidance of any recording of negative effects.

Despite the prestige attached to this trial in Europe, the US Agency for Healthcare Research and Quality (AHRQ) excludes this trial from providing evidence for its database of treatments for chronic fatigue syndrome/myalgic encephalomyelitis. We will see why in this post.

The take-away message: Although not many psychotherapy trials incorporate all of these factors, most trials have some. We should be more sensitive to when multiple factors occur in the same trial, like bias in the site for patient recruitment, lack of blinding, lack of balance between active treatment and control condition in terms of nonspecific factors, and subjective self-report measures.

The article reporting the trial is

Wiborg JF, van Bussel J, van Dijk A, Bleijenberg G, Knoop H. Randomised controlled trial of cognitive behaviour therapy delivered in groups of patients with chronic fatigue syndrome. Psychotherapy and Psychosomatics. 2015;84(6):368-76.

Unfortunately, the article is currently behind a paywall. Perhaps readers could contact the corresponding author Hans.knoop@radboudumc.nl and request a PDF.

The abstract

Background: Meta-analyses have been inconclusive about the efficacy of cognitive behaviour therapies (CBTs) delivered in groups of patients with chronic fatigue syndrome (CFS) due to a lack of adequate studies. Methods: We conducted a pragmatic randomised controlled trial with 204 adult CFS patients from our routine clinical practice who were willing to receive group therapy. Patients were equally allocated to therapy groups of 8 patients and 2 therapists, 4 patients and 1 therapist or a waiting list control condition. Primary analysis was based on the intention-to-treat principle and compared the intervention group (n = 136) with the waiting list condition (n = 68). The study was open label. Results: Thirty-four (17%) patients were lost to follow-up during the course of the trial. Missing data were imputed using mean proportions of improvement based on the outcome scores of similar patients with a second assessment. Large and significant improvement in favour of the intervention group was found on fatigue severity (effect size = 1.1) and overall impairment (effect size = 0.9) at the second assessment. Physical functioning and psychological distress improved moderately (effect size = 0.5). Treatment effects remained significant in sensitivity and per-protocol analyses. Subgroup analysis revealed that the effects of the intervention also remained significant when both group sizes (i.e. 4 and 8 patients) were compared separately with the waiting list condition. Conclusions: CBT can be effectively delivered in groups of CFS patients. Group size does not seem to affect the general efficacy of the intervention which is of importance for settings in which large treatment groups are not feasible due to limited referral

The trial registration

http://www.isrctn.com/ISRCTN15823716

Who was enrolled into the trial?

Who gets into a psychotherapy trial is a function of the particular treatment setting of the study, the diagnostic criteria for entry, and patient preferences for getting their care through a trial, rather than what is being routinely provided in that setting.

We need to pay particular attention when patients enter psychotherapy trials hoping they will receive a treatment they prefer and not be assigned to the other condition. Patients may be in a clinical trial for the betterment of science, but in some settings they are willing to enroll because of the probability of getting treatment they otherwise could not get. This in turn affects their evaluation both of the condition in which they get the preferred treatment and of the condition in which they are denied it. Simply put, they register being pleased if they got what they wanted, or displeased if they did not.

The setting is relevant to evaluating who was enrolled in a trial.

The authors’ own outpatient clinic at the Radboud University Medical Center was the site of the study. The group has an international reputation for promoting the biopsychosocial model, in which psychological factors are assumed to be the decisive factor in maintaining somatic complaints.

All patients were referred to our outpatient clinic for the management of chronic fatigue.

There is thus a clear referral bias or case-mix bias, but we are not provided a ready basis for quantifying it or even estimating its effects.

The diagnostic criteria.

The article states:

In accordance with the US Center for Disease Control [9], CFS was defined as severe and unexplained fatigue which lasts for at least 6 months and which is accompanied by substantial impairment in functioning and 4 or more additional complaints such as pain or concentration problems.

Actually, the US Centers for Disease Control and Prevention would now reject this trial, because these entry criteria are considered obsolete, overinclusive, and not sufficiently exclusive of other conditions that might be associated with chronic fatigue.*

There is a real paradigm shift happening in America. Both the 2015 IOM Report and the Centers for Disease Control and Prevention (CDC) website emphasize Post Exertional Malaise and getting more ill after any effort with M.E. CBT is no longer recommended by the CDC as treatment.

The only mandatory symptom for inclusion in this study is fatigue lasting 6 months. Properly speaking, this trial targets chronic fatigue, full stop, and not the condition chronic fatigue syndrome.

Current US CDC recommendations (see Box 7-1 from the IOM document, above) require postexertional malaise for a diagnosis of myalgic encephalomyelitis (ME). See below.

Patients meeting the current American criteria for ME would be eligible for enrollment in this trial, but it is unclear what proportion of the patients enrolled actually met those criteria. Because of the over-inclusiveness of the entry diagnostic criteria, it is doubtful whether the results would generalize to an American sample. A look at patient flow into the study will be informative.

Patient flow

Let’s look at what is said in the text, but also in the chart depicting patient flow into the trial for any self-selection that might be revealed.

In total, 485 adult patients were diagnosed with CFS during the inclusion period at our clinic (fig. 1). One hundred and fifty-seven patients were excluded from the trial because they declined treatment at our clinic, were already asked to participate in research incompatible with inclusion (e.g. research focusing on individual CBT for CFS) or had a clinical reason for exclusion (i.e. they received specifically tailored interventions because they were already unsuccessfully treated with individual CBT for CFS outside our clinic or were between 18 and 21 years of age and the family had to be involved in the therapy). Of the 328 patients who were asked to engage in group therapy, 99 (30%) patients indicated that they were unwilling to receive group therapy. In 25 patients, the reason for refusal was not recorded. Two hundred and four patients were randomly allocated to one of the three trial conditions. Baseline characteristics of the study sample are presented in table 1. In total, 34 (17%) patients were lost to follow-up. Of the remaining 170 patients, 1 patient had incomplete primary outcome data and 6 patients had incomplete secondary outcome data.


We see that the investigators invited two thirds of patients attending the clinic to enroll in the trial. Of these, about 38% (124 of 328) refused. We don’t know the reason for some of the refusals, but almost a third of the patients approached declined because they did not want group therapy. The authors were left able to randomize 42% of patients coming to the clinic, or less than two thirds of the patients they actually asked. Of these randomized patients, 83% (170 of 204) remained available for follow-up.

These patients, who received the treatment to which they were randomized and who were available for follow-up, are a self-selected minority of the patients coming to the clinic. This self-selection process likely reduced the proportion of patients with myalgic encephalomyelitis. It is estimated that 25% of patients meeting the American criteria are housebound and 75% are unable to work. It is reasonable to infer that patients meeting the full criteria would opt out of a treatment that requires regular attendance at group sessions.
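The funnel can be tallied with quick arithmetic from the numbers quoted in the patient-flow passage (a sketch in Python; the variable names are mine):

```python
# Patient-flow figures as quoted in the trial report.
diagnosed = 485          # diagnosed with CFS at the clinic
excluded = 157           # excluded before being approached
asked = diagnosed - excluded              # invited to the trial
randomized = 204
lost_to_followup = 34

invited_pct = 100 * asked / diagnosed                 # share of clinic invited
declined_pct = 100 * (asked - randomized) / asked     # share of invitees not randomized
randomized_pct = 100 * randomized / diagnosed         # share of clinic randomized
followed_pct = 100 * (randomized - lost_to_followup) / randomized

print(f"invited {invited_pct:.0f}%, declined {declined_pct:.0f}%, "
      f"randomized {randomized_pct:.0f}%, followed up {followed_pct:.0f}%")
```

By these figures, only about a third (170 of 485) of the patients diagnosed at the clinic were both randomized and retained to follow-up, underlining how selected the analyzed sample is.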

The trial is thus biased toward ambulatory patients with fatigue, not ME. Their fatigue is likely due to some combination of factors such as multiple co-morbidities, as-yet-undiagnosed medical conditions, drug interactions, and the common mild and subsyndromal anxiety and depressive symptoms that characterize primary care populations.

The treatment being evaluated

Group cognitive behavior therapy for chronic fatigue syndrome, either delivered in a small (4 patients and 1 therapist) or larger (8 patients and 2 therapists) group format.

The intervention consisted of 14 group sessions of 2 h within a period of 6 months followed by a second assessment. Before the intervention started, patients were introduced to their group therapist in an individual session. The intervention was based on previous work of our research group [4,13] and included personal goal setting, fixing sleep-wake cycles, reducing the focus on bodily symptoms, a systematic challenge of fatigue-related beliefs, regulation and gradual increase in activities, and accomplishment of personal goals. A formal exercise programme was not part of the intervention.

Patients received a workbook with the content of the therapy. During sessions, patients were explicitly invited to give feedback about fatigue-related cognitions and behaviours to fellow patients. This aspect was introduced to facilitate a pro-active attitude and to avoid misperceptions of the sessions as support group meetings which have been shown to be insufficient for the treatment of CFS.

And note:

In contrast to our previous work [4], we communicated recovery in terms of fatigue and disabilities as general goal of the intervention.

Some impressions of the intensity of this treatment: it is rather intensive, with patients having considerable opportunities for interaction with providers. This factor alone distinguishes being assigned to the intervention group from being left in the wait-list control group, and could prove powerful. It will be difficult to distinguish intensity of contact from any content or active ingredients of the therapy.

I’ll leave for another time a fuller discussion of the extent to which what was labeled as cognitive behavior therapy in this study is consistent with cognitive therapy as practiced by Aaron Beck and other leaders of the field. However, a few comments are warranted. What is offered in this trial does not sound like cognitive therapy as Americans practice it. It seems to emphasize challenging beliefs and pushing patients to get more active, along with psychoeducational activities. I don’t see indications of the supportive, collaborative relationship in which patients are encouraged to work on what they want to work on, engage in outside activities (homework assignments), and get feedback.

What is missing in this treatment is what Beck calls collaborative empiricism, “a systemic process of therapist and patient working together to establish common goals in treatment, [which] has been found to be one of the primary change agents in cognitive-behavioral therapy (CBT).”

Importantly, in Beck’s approach, the therapist does not assume cognitive distortions on the part of the patient. Rather, in collaboration with the patient, the therapist introduces alternatives to the interpretations that the patient has been making and encourages the patient to consider the difference. In contrast, rather than eliciting goal statements from patients, the therapists in this study impose the goal of increased activity. Therapists in this study also seem ready to impose their view that the patients’ fatigue-related beliefs are maladaptive.

The treatment offered in this trial is complex, with multiple components making multiple assumptions that seem quite different from what is called cognitive therapy or cognitive behavioral therapy in the US.

The authors’ communication of recovery from fatigue and disability seems a radical departure not only from cognitive behavior therapy for anxiety and depression and pain, but for cognitive behavior therapy offered for adaptation to acute and chronic physical illnesses. We will return to this “communication” later.

The control group

Patients not randomized to group CBT were placed on a waiting list.

Think about it! What do patients think about having gotten involved in all the inconvenience and burden of a clinical trial in the hope of getting treatment, only to be assigned to the control group and left waiting? Not only are they going to be disappointed and register that disappointment in their subjective evaluations at outcome assessment; they may also worry that endorsing positive outcomes too strongly will jeopardize their right to the treatment they are waiting for. There is the potential for a nocebo effect, compounding the placebo effect of assignment to the active CBT treatment groups.

What are informative comparisons between active treatments and control conditions?

We need to ask more often what inclusion of a control group accomplishes for the evaluation of a psychotherapy. In doing so, we need to keep in mind that psychotherapies do not have effect sizes, only comparisons of psychotherapies and control condition have effect sizes.
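The distinction can be made concrete with a toy calculation (the numbers are hypothetical, mine, for illustration only): a within-group pre-post change can look large even when the between-group comparison, the only one that carries an effect size for the therapy, is much smaller.

```python
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized difference between two means."""
    return (mean_a - mean_b) / pooled_sd

sd = 10.0
treated_pre, treated_post = 45.0, 33.0   # improves 12 points
control_pre, control_post = 45.0, 40.0   # improves 5 points with no therapy

within_group_d = cohens_d(treated_pre, treated_post, sd)    # flatters the therapy
between_group_d = cohens_d(control_post, treated_post, sd)  # what a trial estimates
print(within_group_d, between_group_d)
```

Here the pre-post "effect" is nearly double the between-group one, because the control arm improved too; only the latter says anything about the therapy itself.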

A pre-post evaluation of psychotherapy from baseline to follow-up includes the effects of any active ingredient in the psychotherapy, a host of nonspecific (placebo) factors, and any changes that would have occurred in the absence of the intervention. These include regression to the mean: patients are more likely to enter a clinical trial now, rather than earlier or later, if there has been an exacerbation of their symptoms.
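Regression to the mean alone can produce apparent improvement in such a design. A small simulation (my own toy model, not the trial's data) enrolls only patients scoring above a cutoff at baseline and shows their mean score dropping at retest with no intervention at all:

```python
import random

random.seed(0)
N = 100_000
# Each simulated patient has a stable true fatigue level plus day-to-day noise.
true_levels = [random.gauss(30, 5) for _ in range(N)]
baseline = [t + random.gauss(0, 5) for t in true_levels]
followup = [t + random.gauss(0, 5) for t in true_levels]  # no treatment given

# Enroll only patients scoring at or above a cutoff at baseline,
# mimicking entry during a symptom exacerbation.
enrolled = [i for i in range(N) if baseline[i] >= 35]
mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup[i] for i in enrolled) / len(enrolled)
print(f"baseline {mean_baseline:.1f} -> follow-up {mean_followup:.1f}")
```

The enrolled group's scores fall several points at retest purely because the noisy high scores that got them into the study tend not to recur.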

So, a proper comparison/control condition includes everything that the patients randomized to the intervention group get except for the active treatment. Ideally, the intervention and the comparison/control group are equivalent on all these factors, except the active ingredient of the intervention.

That is clearly not what is happening in this trial. Patients randomized to the intervention group get the intervention, the added intensity and frequency of contact with professionals that the intervention provides, and all the support that goes with it; and the positive expectations that come with getting a therapy that they wanted.

Attempts to evaluate group CBT versus the wait-list control group thus confound the active ingredients of the CBT with all these nonspecific effects. The deck is clearly stacked in favor of CBT.

This may be a randomized trial, but properly speaking, this is not a randomized controlled trial, because the comparison group does not control for nonspecific factors, which are imbalanced.

The unblinded nature of the trial

In RCTs of psychotropic drugs, the ideal is to compare the drug to an inert pill placebo, with providers, patients, and evaluators blinded as to whether patients received the psychotropic drug or the comparison pill.

While it is difficult to achieve a comparable level of blinding in a psychotherapy trial, more of an effort to achieve blinding is desirable. For instance, in this trial the authors took pains to distinguish the CBT from what would have happened in a support group. A much more adequate comparison would therefore be CBT versus either a professionally led or peer-led support group with equivalent amounts of contact time. Further blinding would be possible if patients were told only that two forms of group therapy were being compared. If that were the information available to patients contemplating consenting to the trial, it would not have been so obvious from the outset which condition was preferable.

Subjective self-report outcomes.

The primary outcomes for the trial were the fatigue subscale of the Checklist Individual Strength, the physical functioning subscale of the Short Form Health Survey (SF-36), and overall impairment as measured by the Sickness Impact Profile (SIP).

Realistically, self-report outcomes are often all that is available in many psychotherapy trials. Commonly these are self-report assessments of anxiety and depressive symptoms, although these may be supplemented by interviewer-based assessments. We don’t have objective biomarkers with which to evaluate psychotherapy.

These three self-report measures are relatively nonspecific, particularly in a population that is not characterized by ME. Self-reported fatigue in a primary care population lacks discriminative validity with respect to pain, anxiety and depressive symptoms, and general demoralization.  The measures are susceptible to receipt of support and re-moralization, as well as gratitude for obtaining a treatment that was sought.

Self-report entry criteria include a score of 35 or higher on the fatigue severity subscale. Yet a score of less than 35 on this same scale at follow-up is part of what is defined as a clinically significant improvement, within a composite score from combined self-report measures.
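To see how permissive that fatigue criterion is, consider a quick sketch (the threshold of 35 is from the text; the helper function and scores are mine, for illustration):

```python
FATIGUE_ENTRY_THRESHOLD = 35  # minimum fatigue severity score for trial entry

def fatigue_criterion_met(baseline_score, followup_score):
    """The fatigue component of 'clinically significant improvement' only
    requires dropping below the entry threshold at follow-up."""
    return (baseline_score >= FATIGUE_ENTRY_THRESHOLD
            and followup_score < FATIGUE_ENTRY_THRESHOLD)

# A patient entering at the minimum score qualifies after a one-point change.
print(fatigue_criterion_met(35, 34))
```

A one-point shift on a self-report scale can thus satisfy the fatigue component of the improvement definition.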

We know from medical trials that differences can be observed on subjective self-report measures that will not be found with objective measures. Thus, mildly asthmatic patients fail to distinguish in their subjective self-reports between the effective inhalant albuterol, an inert inhalant, and sham acupuncture, though they rate all three as better than getting no intervention. However, on an objective measure, maximum forced expiratory volume in 1 second (FEV1) as assessed with spirometry, albuterol shows a strong advantage over the other three conditions.

The suppression of objective outcome measures

We cannot let the authors of this trial off the hook for their dependence on subjective self-report outcomes. They are instructing patients that recovery is the goal, which implies that it is an attainable goal. We can reasonably be skeptical about claims of recovery based on changes in self-report measures. Were the patients actually able to exercise? What was their exercise capacity, as objectively measured? Did they return to work?

These authors have included such objective measurements in past studies, but not as primary outcomes, and in some cases they did not even report them in the main paper presenting the trial.

Wiborg JF, Knoop H, Stulemeijer M, Prins JB, Bleijenberg G. How does cognitive behaviour therapy reduce fatigue in patients with chronic fatigue syndrome? The role of physical activity. Psychol Med. 2010 Jan 5:1

The senior authors’ review fails to mention their three studies using actigraphy that did not find effects for CBT. I am unaware of any studies that did find enduring effects.

Perhaps this is what they mean when they say the protocol has been developed over time – they removed what they found to be threats to the findings that they wanted to claim.

Dismissing of any need to consider negative effects of treatment

Most psychotherapy trials fail to assess any adverse effects of treatment, but this is usually done discreetly, without mention. In contrast, this article states:

Potential harms of the intervention were not assessed. Previous research has shown that cognitive behavioural interventions for CFS are safe and unlikely to produce detrimental effects.

Patients who meet stringent criteria for ME would be put at risk by pressure to exert themselves. By definition, they are vulnerable to postexertional malaise (PEM). Any trial of this nature needs to assess that risk. Maybe no adverse effects would be found. If that were so, it would strongly suggest the absence of patients with the appropriate diagnosis.

Timing of assessment of outcomes varied between the intervention and control groups.

I at first did not believe what I was reading when I encountered this statement in the results section.

The mean time between baseline and second assessment was 6.2 months (SD = 0.9) in the control condition and 12.0 months (SD = 2.4) in the intervention group. This difference in assessment duration was significant (p < 0.001) and was mainly due to the fact that the start of the therapy groups had to be frequently postponed because of an irregular patient flow and limited treatment capacities for group therapy at our clinic. In accordance with the treatment manual, the second assessment was postponed until the fourteenth group session was accomplished. The mean time between the last group session and the second assessment was 3.3 weeks (SD = 3.5).

So, outcomes were assessed for the intervention group shortly after completion of therapy, when nonspecific (placebo) effects would be stronger, but a mean of six months later than for patients assigned to the control condition.

Post-hoc statistical controls are not sufficient to rescue the study from this important group difference, and it compounds other problems in the study.

Take away lessons

Pay more attention to how the limitations of any clinical trial may compound each other, leading the trial to provide exaggerated estimates of the effects of treatment or of the generalizability of the results to other settings.

Be careful of loose diagnostic criteria, because a trial may not generalize to the same criteria being applied in settings that differ either in patient population or in the availability of different treatments. This is particularly important when a treatment setting has a bias in referrals and only a minority of patients invited to participate in the trial actually agree and are enrolled.

Ask questions about just what information is obtained by comparing a study’s active treatment group to its control/comparison condition. For a start, just what is being controlled, and how might that affect the estimates of the effectiveness of the active treatment?

Pay particular attention to the potent combination of an unblinded trial, a weak comparison/control, and an active treatment that is not otherwise available to patients.

Note

*The means of determining whether the six months of fatigue might be accounted for by other medical factors was specific to the setting. Note that a review of medical records was deemed sufficient for an unknown proportion of patients, with no further examination or medical tests.

The Department of Internal Medicine at the Radboud University Medical Center assessed the medical examination status of all patients and decided whether patients had been sufficiently examined by a medical doctor to rule out relevant medical explanations for the complaints. If patients had not been sufficiently examined, they were seen for standard medical tests at the Department of Internal Medicine prior to referral to our outpatient clinic. In accordance with recommendations by the Centers for Disease Control, sufficient medical examination included evaluation of somatic parameters that may provide evidence for a plausible somatic explanation for prolonged fatigue [for a list, see [9]. When abnormalities were detected in these tests, additional tests were made based on the judgement of the clinician of the Department of Internal Medicine who ultimately decided about the appropriateness of referral to our clinic. Trained therapists at our clinic ruled out psychiatric comorbidity as potential explanation for the complaints in unstructured clinical interviews.


School-Based Mindfulness Based Stress-Reduction Program (MBSR) fails to deliver positive results

No positive effects found for Jon Kabat-Zinn’s Mindfulness Based Stress-Reduction Program with middle and high school students. Evidence of deterioration was found in some subgroup analyses.


We should be cautious about interpreting negative effects that are confined to subgroup analyses. They may well be due to chance. But we should be concerned about the lack of positive findings across measures in the primary analyses. MBSR (a mindfulness training product trademarked and controlled by Jon Kabat-Zinn) and other mindfulness programs have been heavily promoted as having wondrous benefits and have been mandated in many school settings.

The study [with link to the PDF]

Johnson C, Burke C, Brinkman S, Wade T. Effectiveness of a school-based mindfulness program for transdiagnostic prevention in young adolescents. Behaviour Research and Therapy. 2016 Jun 30;81:1-11.

Abstract

Anxiety, depression and eating disorders show peak emergence during adolescence and share common risk factors. School-based prevention programs provide a unique opportunity to access a broad spectrum of the population during a key developmental window, but to date, no program targets all three conditions concurrently. Mindfulness has shown promising early results across each of these psychopathologies in a small number of controlled trials in schools, and therefore this study investigated its use in a randomised controlled design targeting anxiety, depression and eating disorder risk factors together for the first time. Students (M age 13.63; SD = .43) from a broad band of socioeconomic demographics received the eight-lesson, once weekly .b (“Dot be”) mindfulness in schools curriculum (N = 132) or normal lessons (N = 176). Anxiety, depression, weight/shape concerns and wellbeing were the primary outcome factors. Although acceptability measures were high, no significant improvements were found on any outcome at post-intervention or 3-month follow-up. Adjusted mean differences between groups at post-intervention were .03 (95% CI: -.06 to .11) for depression, .01 (-.07 to .09) for anxiety, .02 (-.05 to .08) for weight/shape concerns, and .06 (-.08 to .21) for wellbeing. Anxiety was higher in the mindfulness than the control group at follow-up for males, and those of both genders with low baseline levels of weight/shape concerns or depression. Factors that may be important to address for effective dissemination of mindfulness-based interventions in schools are discussed. Further research is required to identify active ingredients and optimal dose in mindfulness-based interventions in school settings.

The discussion noted:

The design of this study addresses several shortcomings identified in the literature (Britton et al., 2014; Burke, 2010; Felver et al., 2015; Meiklejohn et al., 2012; Tan, 2015; Waters et al., 2014). First, it was a multi-site, randomised controlled design with a moderately large sample size based on a priori power calculations. Second, it included follow-up (three months). Third, it sought to replicate an existing mindfulness-based intervention for youth. Fourth, socioeconomic status was not only reported but a broad range of socioeconomic bands included, although it was unfortunate that poor opt-in consent rates resulted in high data wastage in the lower range schools. Use of the same instructor for all classes in the intervention arm represents a strength (consistency) and a limitation (generalisability of findings).

Coverage in Scientific American

Mindfulness Training for Teens Fails Important Test

A large trial in schools showed no evidence of benefits, and hints it could even cause problems

The fact that this carefully-controlled investigation showed no benefits of mindfulness for any measure, and furthermore indicated an adverse effect for some participants, indicates that mindfulness training is not a universal solution for addressing anxiety or depression in teens, nor does it qualify as a replacement for more traditional psychotherapy or psychopharmacology, at least not as implemented in this school-based paradigm.

Preorders are being accepted for e-books providing skeptical looks at mindfulness and positive psychology, and arming citizen scientists with critical thinking skills. Right now there is a special offer for free access to a Mindfulness Master Class. But hurry, it won’t last.

I will also be offering scientific writing courses on the web as I have been doing face-to-face for almost a decade. I want to give researchers the tools to get into the journals where their work will get the attention it deserves.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.

“It’s certainly not bareknuckle:” Comments to a journalist about a critique of mindfulness research

We can’t assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results.

I recently had a Skype interview with science journalist Peter Hess concerning an article in Psychological Science.

Peter was exceptionally prepared, had a definite point of view, but was open to what I said. In the end he seemed to be persuaded by me on a number of points. The resulting article in Inverse faithfully conveyed my perspective and juxtaposed quotes from me with those from an author of the Psych Science piece in a kind of debate.

My point of view

When evaluating an article about mindfulness in a peer-reviewed journal, we need to take into account that authors may not necessarily be striving to do the best science, but to maximally benefit their particular brand of mindfulness, their products, or the settings in which they operate. Many studies of mindfulness are little more than infomercials, weak research intended only to get mindfulness promoters’ advertisements of themselves into print or to allow the labeling of claims as “peer-reviewed”. Caveat lector.

We cannot assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results. Rather, they may simply be trying to get the strongest possible claims through peer review, ignoring best research practices and best publication practices.

Psychologists Express Growing Concern With Mindfulness Meditation

“It’s not bare-knuckle, that’s for sure.”

There was much from the author of the Psych Science article with which I would agree:

“In my opinion, there are far too many organizations, companies, and therapists moving forward with the implementation of ‘mindfulness-based’ treatments, apps, et cetera before the research can actually tell us whether it actually works, and what the risk-reward ratio is,” corresponding author and University of Melbourne research fellow Nicholas Van Dam, Ph.D. tells Inverse.

Bravo! And

“People are spending a lot of money and time learning to meditate, listening to guest speakers about corporate integration of mindfulness, and watching TED talks about how mindfulness is going to supercharge their brain and help them live longer. Best case scenario, some of the advertising is true. Worst case scenario: very little to none of the advertising is true and people may actually get hurt (e.g., experience serious adverse effects).”

But there were some statements that renewed the discomfort and disappointment I experienced when I read the original article in Psychological Science:

 “I think the biggest concern among my co-authors and I is that people will give up on mindfulness and/or meditation because they try it and it doesn’t work as promised,” says Van Dam.

“There may really be something to mindfulness, but it will be hard for us to find out if everyone gives up before we’ve even started to explore its best potential uses.”

So, how long before we “give up” on thousands of studies pouring out of an industry? In the meantime, should consumers act on what seem to be extravagant claims?

The Inverse article segued into some quotes from me after delivering another statement from the author with which I could agree:

The authors of the study make their attitudes clear when it comes to the current state of the mindfulness industry: “Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed,” they write. And while this comes off as unequivocal, some think they don’t go far enough in calling out specific instances of quackery.

“It’s not bare-knuckle, that’s for sure. I’m sure it got watered down in the review process,” James Coyne, Ph.D., an outspoken psychologist who’s extensively criticized the mindfulness industry, tells Inverse.

Coyne agrees with the conceptual issues outlined in the paper, specifically the fact that many mindfulness therapies are based on science that doesn’t really prove their efficacy, as well as the fact that researchers with copyrights on mindfulness therapies have financial conflicts of interest that could influence their research. But he thinks the authors are too concerned with tone policing.

“I do appreciate that they acknowledged other views, but they kept out anybody who would have challenged their perspective,” he says.

Regarding Coyne’s criticism about calling out individuals, Van Dam says the authors avoided doing that so as not to alienate people and stifle dialogue.

“I honestly don’t think that my providing a list of ‘quacks’ would stop people from listening to them,” says Van Dam. “Moreover, I suspect my doing so would damage the possibility of having a real conversation with them and the people that have been charmed by them.” If you need any evidence of this, look at David “Avocado” Wolfe, whose notoriety as a quack seems to make him even more popular as a victim of “the establishment.” So yes, this paper may not go so far as some would like, but it is a first step toward drawing attention to the often flawed science underlying mindfulness therapies.

To whom is the dialogue directed about unwarranted claims from the mindfulness industry?

As one of the authors of an article claiming to be an authoritative review from a group of psychologists with diverse expertise, Van Dam says he is speaking to consumers. Why won’t he and his co-authors provide citations and name names so that readers can evaluate for themselves what they are being told? Is the risk of reputational damage and embarrassment to the psychologists so great as to cause Van Dam to protect them rather than protecting consumers from the exaggerated and even fraudulent claims of psychologists hawking their products branded as ‘peer-reviewed psychological and brain science’?

I use the term ‘quack’ sparingly outside of discussing unproven and unlikely-to-be-proven products supposed to promote physical health and well-being or to prevent or cure disease and distress.

I think Harvard psychologist Ellen Langer deserves the term “quack” for her selling of expensive trips to spas in Mexico to women with advanced cancer so that they can change their mind set to reverse the course of their disease. Strong evidence, please! Given that this self-proclaimed mother of mindfulness gets her claims promoted through the Association for Psychological Science website, I think it particularly appropriate for Van Dam and his coauthors to name her in their publication in an APS journal. Were they censored or only censoring themselves?

Let’s put aside psychologists who can be readily named as quacks. How about Van Dam and co-authors naming names of psychologists claiming to alter the brains and immune systems of cancer patients with mindfulness practices so that they improve their physical health and fight cancer, not just cope better with a life-altering disease?

I simply don’t buy Van Dam’s suggestion that to name names promotes quackery any more than I believe exposing anti-vaxxers promotes the anti-vaccine cause.

Is Van Dam only engaged in a polite discussion with fellow psychologists that needs to be strictly tone-policed to avoid offense or is he trying to reach, educate, and protect consumers as citizen scientists looking after their health and well-being? Maybe that is where we parted ways.

Reflections on my tour of the Soteria Project at St Hedwig Hospital, Berlin

A fabulous, enlightened experiment in Berlin with humane treatment of patients suffering severe mental disorder that we cannot reproduce in the United States.

I visited the Soteria project at St Hedwig Hospital, Berlin at the invitation of Professor Andreas Heinz, Director and Chair of the Department of Psychiatry and Psychotherapy at the Charité— Universitätsmedizin Berlin.

I was actually coming to St Hedwig Hospital, Berlin to give a talk on scientific writing, and was surprised by an offer of a tour of their Soteria Project.

I came away with great respect for a wonderful experiment in the treatment of psychosis that must be protected.

I was also saddened to realize that such treatment could not conceivably be offered in the United States, even for patients with families who could pay large expenses out of pocket.

In Germany, financial arrangements allow months for the stabilization of acutely psychotic patients. The question is how best to use these resources.

In contrast, newly admitted patients in the United States are generally allowed stays of only 48 to 72 hours at most to stabilize. Inpatient psychiatric beds are in short supply, and often unavailable even to those who can afford to pay out of pocket.

The largest inpatient psychiatric facility in the United States is the Los Angeles County jail, where patients are thrown in with criminal populations or forced into anti-suicide smocks and isolated. Access to mental care in the jail is highly restricted.

In the United States, the challenge is to get minimal resources to a vulnerable, severely disturbed population. Efforts to do so must compete with the diversion of mental health funds to populations much less in need but amenable to outpatient psychotherapy.

It takes a mass killing to activate calls for better psychiatric care for the severely disturbed, on the false promise that better and more accessible care will measurably reduce mass killings. Of course, this is all a distraction from the need to restrict the firearms used in mass killings.

Professor Heinz and I became friends when I critiqued his study of open versus locked inpatient psychiatric wards, Why Lancet Psychiatry study didn’t show locked inpatient wards ineffective in reducing suicide. We can still agree to disagree about the interpretation of complex observational/administrative data, but we came to appreciate differences in our sociocultural perspectives.

In my blog I was actually taking aim at Mental Elf’s pandering to the anti-psychiatry crowd with the goofy claim of the lack of “any compelling evidence that locking people up actually increases safety.” Sometimes vulnerable psychotic and suicidal persons need to be protected from themselves.

Furthermore, experimentation with unlocked wards frequently comes to an end with the suicide of a single absconding patient.

In Germany, better staffing and time to develop better relationships with patients allow much more respect for patient autonomy and self-responsibility. But open wards are always vulnerable to these adverse events.

The original Soteria, Palo Alto Project

I came to St Hedwig with negative feelings about the original Soteria Project. I was Director of Research at MRI Palo Alto in the 1980s when the project was housed there. I came away thinking its strong anti-psychiatry attitude was disastrous and led to much harm when it got disseminated.

Loren Mosher and Alma Menn were determined to demonstrate that antipsychotic medication was unnecessary in treating psychotic patients.

Frankly, Mosher and Menn were so committed to their ideological position that they distorted the presentation of their data. They misrepresented comparisons between disparate community mental health and Soteria samples as randomized trials. They relied on huge selection bias and unreliable diagnoses that lumped acutely manic patients and personality disorders in with patients with schizophrenia. They tortured their data with a variety of p-hacking techniques and still didn’t come up with much.

After Soteria Palo Alto closed, an effort to get an NIMH grant for follow-up failed because the initial presentations of patients were so badly recorded that no retrospective diagnosis was possible.

Subsequent Soteria projects around the world have had a full range of attitudes towards the role of medication in the treatment of vulnerable and highly disorganized patients.

St Hedwig has an enlightened, evidence-informed approach that of course includes judicious use of antipsychotics. Antipsychotic medication is provided to acutely psychotic patients, but at an appropriate dosage. Patient response is closely monitored and tapering is attempted when there is improvement. Importantly, decisions about medication prioritize patient well-being, not staff convenience.

The best evidence is that patients who experience episodes of unmedicated psychosis are increasingly doomed to poor recovery of social and personal functioning. On the other hand, particularly with treatment of ambiguous acute first episodes, there has to be a lot of monitoring and reconsideration of medication. In understaffed and underresourced American psychiatric settings, there is little monitoring of antipsychotic medications and little effort at tapering. Furthermore, dosages are often excessively high because overmedicated patients are easier for overwhelmed staff to manage.

Unfortunately, the quality of care offered in Berlin is unimaginable in the US even for those who can afford to pay out of pocket.

With Professor Heinz’s permission, here is a refined Google translation of the Project website.

See also an excellent discussion of the thinking that went into the architecture of Soteria, aimed at maximizing its potential as a therapeutic environment.


Special thanks also to psychiatrists Dr. med. Felix Bermpohl and Dr. med. Martin Voss, Oberarzt (senior physician).

SOTERIA

Soteria’s program at the Charité’s Psychiatric University Clinic in the St. Hedwig Hospital is aimed at young people who are in an acute psychotic crisis, who are afraid of the onset of a psychosis, or who still need a professional inpatient environment after a psychotic crisis.

There are 12 treatment places in the Soteria. Since the Soteria works within the scope of mandated catchment-area care, these places are intended exclusively for people from the districts of Wedding, Mitte, Tiergarten and Moabit.

[note from Prof Heinz: The difficult to translate passage refers to our hospital having a catchment area, from which we have to take every patient who wishes to be admitted and particularly every compulsory admission. We serve one of the poorest areas in Berlin, so we do not do “raisin picking” of easy to treat patients.]

“Soteria” (ancient Greek: healing, well-being, preservation, salvation) denotes a special treatment approach for people in psychotic crises with the so-called “milieutherapy”.

The residential environment, the co-patients, the attitude of the therapists, and the orientation towards normality and “real life” outside the clinic constitute the therapeutic milieu. Patients and staff meet in therapeutic communities on an equal footing and shape the day together, with the involvement of the social environment.

The psychosis treatment takes place in the form of active “being-yourself”, if necessary also in continuous 1:1 care in the so-called “soft room”. The healing therapeutic milieu provides protection, calming, and relief of tension, so that psychopharmaceuticals can be used very cautiously. This medication-saving effect of Soteria treatment, among other positive effects, is scientifically well documented. (1)

1) Calton, T. et al. (2008): A Systematic Review of the Soteria Paradigm for the Treatment of People Diagnosed With Schizophrenia. Schizophrenia Bulletin 34,1:181-192;

2) L. Ciompi, H. Hoffmann, M. Broccard (Eds.), Wie wirkt Soteria? [How does Soteria work?] Online edition (2011), Heidelberg: Carl-Auer-System-Verlag.


The reports on the original Soteria, Palo Alto project

Mosher LR, Menn AZ, Matthew SM. Soteria: evaluation of a home-based treatment for schizophrenia. Am J Orthopsychiatry. 1975;45:455–467.

Mosher LR. Implications of family studies for the treatment of schizophrenia. Ir Med J. 1976;69:456–463.

Mosher LR, Menn AZ. Soteria: an alternative to hospitalisation for schizophrenia. Curr Psychiatr Ther. 1975;15:287–296.

Mosher LR, Menn AZ. Soteria House: one year outcome data. Psychopharmacol Bull. 1977;13:46–48.

Mosher LR, Menn AZ. Community residential treatment for schizophrenia: two-year follow-up. Hosp Community Psychiatry. 1978;29:715–723.

Mosher LR, Menn AZ. Soteria: an alternative to hospitalisation for schizophrenics. Curr Psychiatr Ther. 1982;21:189–203.

Matthews SM, Roper MT, Mosher LR, Menn AZ. A non-neuroleptic treatment for schizophrenia: analysis of the two-year post-discharge risk of relapse. Schizophr Bull. 1979;5:322–333.

Mosher LR, Vallone R, Menn AZ. The treatment of acute psychosis without neuroleptics: six-week psychopathology outcome data from the Soteria project. Int J Soc Psychiatry. 1995;41:157–173.

Mosher LR. Soteria and other alternatives to acute psychiatric hospitalisation. J Nerv Ment Dis. 1999;187:142–149.

About Professor Heinz

Andreas Heinz is Director and Chair of the Department of Psychiatry and Psychotherapy at the Charité— Universitätsmedizin Berlin.

He is the author of the just released A New Understanding of Mental Disorders: Computational Models for Dimensional Psychiatry, MIT Press, 2017.

Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

This blog post provides an alternative press briefing to compare and contrast with what was provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

The press release attached at the bottom of this post announces the publication of the results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment with a quack explanation delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility for their treatment.

Note to journalists and the media: for further information email jcoynester@Gmail.com

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements from osteopathy, life coaching and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American who has decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves loss of the opportunity to obtain less risky treatment, or simply to forgo the inconvenience and burden of a treatment for which there is no scientific basis to expect it would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the drug company’s influence, and the fact that it would profit from positive results, was not indicated in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment in order to rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability that the treatment would be received for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment for free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement, but those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process intervention. This stacks the deck in any evaluation of the Lightning Process and inflates differences from the patients who didn’t get into this group.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and involves coercion to fake that they are getting well [7]. Such coercion can interfere with patients getting appropriate help when they need it, with their establishing appropriate expectations with parental and school authorities, and even with their responding honestly to outcome assessments.

It’s not just patient and family-member activists who object to the trial. As professionals have become more informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly with the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, patients have been joined by non-patient scientists and clinicians in their concerns.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing

Notes

[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP. Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment obtained with objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat, who provided me with an advance copy of the press release from the Science Media Centre.

Creating illusions of wondrous effects of yoga and meditation on health: A skeptic exposes tricks

The tour of the sausage factory is starting; here’s your brochure telling you what you’ll see.

 

A recent review has received a lot of attention, with it being used to claim that mind-body interventions have distinct molecular signatures that point to potentially dramatic health benefits for those who take up these practices.

What Is the Molecular Signature of Mind–Body Interventions? A Systematic Review of Gene Expression Changes Induced by Meditation and Related Practices.  Frontiers in Immunology. 2017;8.

Few who are tweeting about this review or its press coverage are likely to have read it, or to understand it if they read it. Most of the new-agey coverage in social media does nothing more than echo or amplify the message of the review’s press release. Lazy journalists and bloggers can simply pass on direct quotes from the lead author or even just the press release’s title, ‘Meditation and yoga can ‘reverse’ DNA reactions which cause stress, new study suggests’:

“These activities are leaving what we call a molecular signature in our cells, which reverses the effect that stress or anxiety would have on the body by changing how our genes are expressed.”

And

“Millions of people around the world already enjoy the health benefits of mind-body interventions like yoga or meditation, but what they perhaps don’t realise is that these benefits begin at a molecular level and can change the way our genetic code goes about its business.”

[The authors of this review actually identified some serious shortcomings to the studies they reviewed. I’ll be getting to some excellent points at the end of this post that run quite counter to the hype. But the lead author’s press release emphasized unwarranted positive conclusions about the health benefits of these practices. That is what is most popular in media coverage, especially from those who have stuff to sell.]

Interpretation of the press release and review authors’ claims requires going back to the original studies, which most enthusiasts are unlikely to do. If readers do go back, they will have trouble interpreting some of the deceptive claims that are made.

Yet, a lot is at stake. This review is being used to recommend mind-body interventions for people having or who are at risk of serious health problems. In particular, unfounded claims that yoga and mindfulness can increase the survival of cancer patients are sometimes hinted at, but occasionally made outright.

This blog post is written with the intent of protecting consumers from such false claims and providing tools so they can spot pseudoscience for themselves.

Discussion of the review in the media speaks broadly of alternative and complementary interventions. The coverage is aimed at inspiring confidence in this broad range of treatments and encouraging people who are facing health crises to invest time and money in outright quackery. Seemingly benign recommendations for yoga, tai chi, and mindfulness (after all, what’s the harm?) often become the entry point to more dubious and expensive treatments that substitute for established treatments. Once they are drawn to centers for integrative health care for classes, cancer patients are likely to spend hundreds or even thousands of dollars on other products and services that are unlikely to benefit them. One study reported:

More than 72 oral or topical, nutritional, botanical, fungal and bacterial-based medicines were prescribed to the cohort during their first year of IO care…Costs ranged from $1594/year for early-stage breast cancer to $6200/year for stage 4 breast cancer patients. Of the total amount billed for IO care for 1 year for breast cancer patients, 21% was out-of-pocket.

Coming up, I will take a skeptical look at the six randomized trials that were highlighted by this review.  But in this post, I will provide you with some tools and insights so that you do not have to make such an effort in order to make an informed decision.

Like many of the other studies cited in the review, these randomized trials were quite small and underpowered. But I will focus on the six because they are as good as it gets. Randomized trials are considered a higher form of evidence than simple observational studies or case reports. [It is too bad the authors of the review don’t even highlight which studies are randomized trials. They are lumped with others as “longitudinal studies.”]

As a group, the six studies do not actually add any credibility to the claims that mind-body interventions – specifically yoga, tai chi, and mindfulness training or retreats improve health by altering DNA.  We can be no more confident with what the trials provide than we would be without them ever having been done.

I found the task of probing and interpreting the studies quite labor-intensive and ultimately unrewarding.

I had to get past poor reporting of what was actually done in the trials, to which patients, and with what results. My task often involved seeing through cover-ups, with authors exercising considerable flexibility in reporting which measures they actually collected and which analyses they attempted, before arriving at the best possible tale of the wondrous effects of these interventions.

Interpreting clinical trials should not be so hard, because they should be honestly and transparently reported, with a registered protocol that is adhered to. These reports were sorely lacking on all counts. The full extent of the problems took some digging to uncover, but some things emerged before I even got to the methods and results.

The introductions of these studies consistently exaggerated the strength of existing evidence for the effects of these interventions on health, even while somehow coming to the conclusion that this particular study was urgently needed and might even be the “first ever.” The introductions of the six papers typically cross-referenced each other, without giving any indication of how poor the quality of the evidence from the other papers was. What a mutual admiration society these authors are.

One giveaway is how the introductions referred to the biggest, most badass, comprehensive and well-done review: that of Goyal and colleagues.

That review clearly states that the evidence for the effects of mindfulness is of poor quality because of the lack of comparisons with credible active treatments. The typical randomized trial of mindfulness involves a comparison with no treatment, a waiting list, or patients remaining in routine care where the target problem is likely to be ignored. If we depend on the bulk of the existing literature, we cannot rule out the likelihood that any apparent benefits of mindfulness are due to patients receiving more positive expectations, attention, and support, rather than simply getting nothing. Only a handful of the hundreds of trials of mindfulness include appropriate, active treatment comparison/control groups. The results of those studies are not encouraging.

One of the first things I do in probing the introduction of a study claiming health benefits for mindfulness is see how they deal with the Goyal et al review. Did the study cite it, and if so, how accurately? How did the authors deal with its message, which undermines claims of the uniqueness or specificity of any benefits to practicing mindfulness?

For yoga, we cannot yet rule out that it offers nothing beyond regular exercise – in groups or alone – combined with relaxing routines. The literature concerning tai chi is even smaller and of poorer quality, but there is the same need to show that practicing tai chi has any benefits over exercising in groups with comparable positive expectations and support.

Even more than mindfulness, yoga and tai chi attract a lot of pseudoscientific mumbo jumbo about integrating Eastern wisdom and Western science. We need to look past that and insist on evidence.

Like their introductions, the discussion sections of these articles are quite prone to exaggerating how strong and consistent the evidence from existing studies is. The discussion sections cherry-pick positive findings in the existing literature, sometimes recklessly distorting them. The authors then discuss how their own positively spun findings fit with what is already known, while minimizing or outright neglecting discussion of any of their negative findings. I was not surprised to see one trial of mindfulness for cancer patients obtain no effects on depressive symptoms or perceived stress, but then go on to explain how mindfulness might powerfully affect the expression of DNA.

If you want to dig into the details of these studies, the going can get rough and the yield for doing a lot of mental labor is low. For instance, these studies involved drawing blood and analyzing gene expression. Readers will inevitably encounter passages like:

In response to KKM treatment, 68 genes were found to be differentially expressed (19 up-regulated, 49 down-regulated) after adjusting for potentially confounded differences in sex, illness burden, and BMI. Up-regulated genes included immunoglobulin-related transcripts. Down-regulated transcripts included pro-inflammatory cytokines and activation-related immediate-early genes. Transcript origin analyses identified plasmacytoid dendritic cells and B lymphocytes as the primary cellular context of these transcriptional alterations (both p < .001). Promoter-based bioinformatic analysis implicated reduced NF-κB signaling and increased activity of IRF1 in structuring those effects (both p < .05).

Intimidated? Before you defer to the “experts” doing these studies, I will show you some things I noticed in the six studies and how you can debunk the relevance of these studies for promoting health and dealing with illness. Actually, I will show that even if these six studies had gotten the results the authors claimed – and they did not – at best the effects would be trivial and lost among the other things going on in patients’ lives.

Fortunately, there are lots of signs that you can dismiss such studies and go on to something more useful, if you know what to look for.

Some general rules:

  1. Don’t accept claims of efficacy/effectiveness based on underpowered randomized trials. Dismiss them. The rule of thumb is reliable to dismiss trials that have less than 35 patients in the smallest group. Over half the time, true moderate sized effects will be missed in such studies, even if they are actually there.

Due to publication bias, most of the positive effects that are published from trials of this size will be false positives and won’t hold up in well-designed, larger trials.

When significant positive effects from such trials are reported in published papers, they have to be large to have reached significance. If not outright false, these effect sizes won’t be matched in larger trials. So, significant, positive effect sizes from small trials are likely to be false positives and exaggerated and probably won’t replicate. For that reason, we can consider small studies to be pilot or feasibility studies, but not as providing estimates of how large an effect size we should expect from a larger study. Investigators do it all the time, but they should not: They do power calculations estimating how many patients they need for a larger trial from results of such small studies. No, no, no!

Having spent decades examining clinical trials, I am generally comfortable dismissing effect sizes that come from trials with fewer than 35 patients in the smaller group. I agree with the suggestion that if two larger trials are available in a given literature, go with those and ignore the smaller studies. If there are not at least two larger studies, keep the jury out on whether there is a significant effect.

Applying the Rule of 35, 5 of the 6 trials can be dismissed and the sixth is ambiguous because of loss of patients to follow up.  If promoters of mind-body interventions want to convince us that they have beneficial effects on physical health by conducting trials like these, they have to do better. None of the individual trials should increase our confidence in their claims. Collectively, the trials collapse in a mess without providing a single credible estimate of effect size. This attests to the poor quality of evidence and disrespect for methodology that characterizes this literature.
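The arithmetic behind the Rule of 35 is easy to verify. Here is a minimal sketch (my own illustration, not taken from any of the reviewed papers) that uses a normal approximation to the two-sample t-test to show both halves of the argument: small trials usually miss true moderate effects, and any significant effect they do report must be suspiciously large.

```python
import math

def power(n_per_group, d=0.5, alpha_z=1.959964):
    """Approximate power of a two-sample comparison to detect a true
    standardized effect d with n_per_group patients per arm, via the
    normal approximation Phi(d * sqrt(n/2) - z_crit)."""
    z = d * math.sqrt(n_per_group / 2) - alpha_z
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def min_detectable_d(n_per_group, alpha_z=1.959964):
    """Smallest observed effect size that reaches p < .05 in a trial
    of this size: any 'significant' small-trial result must be at
    least this large, which is why such effects look inflated."""
    return alpha_z * math.sqrt(2 / n_per_group)

# With 20 patients per arm, a true moderate effect (d = 0.5)
# is detected only about a third of the time...
print(round(power(20), 2))             # 0.35
# ...and even at the Rule-of-35 threshold, power is barely a coin flip.
print(round(power(35), 2))             # 0.55
# Any significant result from n = 20 per arm implies an observed
# d of at least ~0.62 -- larger than the true effect assumed above.
print(round(min_detectable_d(20), 2))  # 0.62
```

The same arithmetic shows why basing a power calculation on a small trial’s effect size is circular: the inflated estimate feeds straight back into another underpowered design.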

  1. Don’t be taken in by titles to peer-reviewed articles that are themselves an announcement that these interventions work. Titles may not be telling the truth.

What I found extraordinary is that five of the six randomized trials had a title indicating that a positive effect was found. I suspect that most people encountering the title will not actually go on to read the study. So they will be left with the false impression that positive results were indeed obtained. It’s quite a clever trick to turn the title of an article, by which most people will remember it, into a false advertisement for what was actually found.

For a start, we can simply remind ourselves that with these underpowered studies, investigators should not even be making claims about efficacy/effectiveness. So, one trick of the developing skeptic is to check whether the claims being made in the title fit the size of the study. Actually going to the results section, one can find further evidence of discrepancies between what was found and what is being claimed.

I think it’s a general rule of thumb that we should be wary of titles for reports of randomized trials that declare results. Even when what is claimed in the title fits the actual results, it often creates the illusion of a greater consistency with what already exists in the literature. Furthermore, even when future studies inevitably fail to replicate what is claimed in the title, the false claim lives on, because failing to replicate key findings is almost never grounds for retracting a paper.

  3. Check the institutional affiliations of the authors. These six trials serve as a depressing reminder that we can’t rely on researchers’ institutional affiliations or federal grants to reassure us of the validity of their claims. These authors are not from Quack-Quack University, and they get funding for their research.

In all cases, the investigators had excellent university affiliations, mostly in California. Most studies were conducted with some form of funding, often federal grants. A quick Google search would reveal that at least one of the authors on each study, usually more, had federal funding.

  4. Check the conflicts of interest, but don’t expect the declarations to be informative, and be skeptical of what you find. It is disappointing that a check of the conflict of interest statements for these articles would be unlikely to arouse the suspicion that the results being claimed might have been influenced by financial interests. One cannot readily see that the studies were generally done in settings promoting alternative, unproven treatments that would benefit from the publicity generated by the studies. One cannot see that some of the authors have lucrative book contracts and speaking tours that depend on making claims for dramatic effects of mind-body treatments that could not possibly be supported by transparent reporting of the results of these studies. As we will see, one of the studies was actually conducted in collaboration with Deepak Chopra and with money from his institution. That would definitely raise flags in the skeptic community. But the dubious tie might be missed by patients and their families who are vulnerable to unwarranted claims and unrealistic expectations of what can be obtained outside of conventional medicine, like chemotherapy, surgery, and pharmaceuticals.

Based on what I found probing these six trials, I can suggest some further rules of thumb. (1) Don’t assume that articles about the health effects of alternative treatments disclose all relevant conflicts of interest. Check the setting in which the study was conducted and whether an integrative [complementary and alternative, meaning mostly unproven] care setting was used for recruiting or running the trial. Not only would this represent a potential bias on the part of the authors; it would also represent selection bias in the recruitment of patients and in their responsiveness to placebo effects consistent with the marketing themes of these settings. (2) Google the authors and see if they have lucrative pop psychology book contracts, TED talks, or speaking gigs at positive psychology or complementary and alternative medicine gatherings. None of these lucrative activities are typically expected to be disclosed as conflicts of interest, but all require making strong claims that are not supported by available data. Such rewards are perverse incentives for authors to distort and exaggerate positive findings and to suppress negative findings in peer-reviewed reports of clinical trials. (3) Check whether known quacks have prepared recruitment videos for the study, informing patients what will be found. (Seriously, I was tipped off to look, and I found exactly that.)

  5. Look for the usual suspects. A surprisingly small, tight, interconnected group is generating this research. You could look the authors up on Google or Google Scholar, or browse through my previous blog posts and see what I have said about them. As I will point out in my next blog post, one got withering criticism for her claim that drinking carbonated sodas, but not sweetened fruit drinks, shortened your telomeres, so that drinking soda was worse than smoking. My colleagues and I re-analyzed the data of another of the authors. We found that, contrary to what he claimed, the data did not show that pursuing meaning, rather than pleasure, in your life affected gene expression related to immune function. We also showed that substituting randomly generated data worked as well as what he got from blood samples in replicating his original results. I don’t think it is ad hominem to point out a history of implausible claims for both of these authors. It speaks to source credibility.
  6. Check whether there is a trial registration for a study, but don’t stop there. You can quickly check with PubMed whether a report of a randomized trial is registered. Trial registration is intended to ensure that investigators commit themselves in advance to one or perhaps two primary outcomes; you can then check whether that is what they emphasized in their paper, and whether what is said in the report of the trial fits with what was promised in the protocol. Unfortunately, I could find only one of these trials that was registered. The trial registration was vague on what outcome variables would be assessed and did not mention the outcome emphasized in the published paper (!). The registration also said the sample would be larger than what was reported in the published study. When researchers have difficulty with recruitment, their study is often compromised in other ways. I’ll show how this study was compromised.

Well, it looks like applying these generally useful rules of thumb is not always so easy with these studies. I think the small sample size across all of the studies would be enough to decide this research has yet to yield meaningful results and certainly does not support the claims that are being made.

But readers who are motivated to put in the time to probe deeper will find strong signs of p-hacking and questionable research practices.

  7. Check the report of the randomized trial and see if you can find any declaration of one or two primary outcomes and a limited number of secondary outcomes. What you will find instead is that the studies always have more outcome variables than patients receiving these interventions. The opportunities for cherry-picking positive findings and discarding the rest are huge, especially because it is so hard to assess what data were collected but not reported.
  8. Check whether you can find tables of unadjusted primary and secondary outcomes. Honest and transparent reporting involves giving readers a look at simple statistics so they can decide whether results are meaningful. For instance, if effects on stress and depressive symptoms are claimed, are the results impressive and clinically relevant? In almost all cases, no peeking is allowed. Instead, the authors provide analyses and statistics with lots of adjustments made. They break lots of rules in doing so, especially with such small samples. These authors are virtually assured of getting results to crow about.
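To see why an excess of outcome variables matters, here is a toy calculation (my own illustration, not from the review): with no true effects at all, the chance that at least one of k independent outcome measures crosses p < .05 by luck alone grows quickly with k.

```python
import random

def chance_of_false_positive(k, alpha=0.05):
    """Probability that at least one of k independent null
    outcomes reaches p < alpha purely by chance."""
    return 1 - (1 - alpha) ** k

# One pre-registered primary outcome behaves as advertised...
print(round(chance_of_false_positive(1), 3))   # 0.05
# ...but with 20 outcome measures and freedom to cherry-pick,
# a paper can report a "significant" effect about 64% of the
# time even when the intervention does nothing at all.
print(round(chance_of_false_positive(20), 3))  # 0.642

# A quick Monte Carlo check: simulate 10,000 null trials, each
# measuring 20 outcomes, and count how often the best-looking
# outcome would be reported as significant.
random.seed(1)
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(10_000)
)
print(hits / 10_000)  # close to 0.642
```

This is why an unadjusted table of all outcomes matters: without it, the reader cannot tell whether a reported effect is the one outcome that floated above the noise.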

Famously, Joe Simmons and Leif Nelson hilariously published claims that briefly listening to the Beatles’ “When I’m Sixty-Four” left students a year and a half younger than if they had been assigned to listen to “Kalimba.” Simmons and Nelson knew this was nonsense, but their intent was to show what researchers can do if they have free rein over how they analyze their data and what they report. They revealed the tricks they used, but those tricks were minor league and amateurish compared to what the authors of these trials consistently did in claiming that yoga, tai chi, and mindfulness modified the expression of DNA.

Stay tuned for my next blog post, where I go through the six studies. But consider this if you or a loved one have to make an immediate decision about whether to plunge into the world of woo-woo unproven medicine in hopes of altering DNA expression: I will show that the authors of these studies did not get the results they claimed. But who should care if they did? The effects were laughably trivial. As the authors of the review about which I have been complaining noted:

One other problem to consider are the various environmental and lifestyle factors that may change gene expression in similar ways to MBIs [Mind-Body Interventions]. For example, similar differences can be observed when analyzing gene expression from peripheral blood mononuclear cells (PBMCs) after exercise. Although at first there is an increase in the expression of pro-inflammatory genes due to regeneration of muscles after exercise, the long-term effects show a decrease in the expression of pro-inflammatory genes (55). In fact, 44% of interventions in this systematic review included a physical component, thus making it very difficult, if not impossible, to discern between the effects of MBIs from the effects of exercise. Similarly, food can contribute to inflammation. Diets rich in saturated fats are associated with pro-inflammatory gene expression profile, which is commonly observed in obese people (56). On the other hand, consuming some foods might reduce inflammatory gene expression, e.g., drinking 1 l of blueberry and grape juice daily for 4 weeks changes the expression of the genes related to apoptosis, immune response, cell adhesion, and lipid metabolism (57). Similarly, a diet rich in vegetables, fruits, fish, and unsaturated fats is associated with anti-inflammatory gene profile, while the opposite has been found for Western diet consisting of saturated fats, sugars, and refined food products (58). Similar changes have been observed in older adults after just one Mediterranean diet meal (59) or in healthy adults after consuming 250 ml of red wine (60) or 50 ml of olive oil (61). However, in spite of this literature, only two of the studies we reviewed tested if the MBIs had any influence on lifestyle (e.g., sleep, diet, and exercise) that may have explained gene expression changes.

How about taking tango lessons instead? You would at least learn dance steps, get exercise, and decrease any social isolation. And so what if the mind-body practices offered a bit more benefit than taking up these other activities?


Complex PTSD, STAIR, Social Ecology and lessons learned from 9/11 – a conversation with Dr. Marylene Cloitre

Dr. Marylene Cloitre is the Associate Director of Research of the National Center for PTSD Dissemination and Training Division and a Research Professor of Psychiatry and Child and Adolescent Psychiatry at the New York University Langone Medical Center in New York City. She is a recipient of several honors related to her service in New York City following 9/11 and was an advisory committee member for the National September 11 Memorial Museum. She has specific expertise in complex PTSD and in the development and dissemination of STAIR (Skills Training in Affective and Interpersonal Regulation), a psychological therapy designed to help survivors of trauma.

Dr. Jain: What exactly is complex PTSD?

Dr. Cloitre:
Complex PTSD has a very long history, really pushed primarily by clinicians who looked at their patients and thought there’s something more going on here than PTSD.
In DSM-IV, complex PTSD was recognized among the additional features, where there is a mix of problems related to emotion regulation, self-concept, and interpersonal relationships. After that, there was really no funding for investigating this further; the research has been spotty, and it was sort of dying on the vine.

But with the development of ICD-11, there was an opportunity to refresh consideration of complex PTSD. I was part of a work group that started in 2012; we looked at the literature and thought there seemed to be enough data to support two different forms of PTSD: the classic fear-circuitry disturbance, and then this more general kind of disturbance in the three core areas of emotion regulation, self-concept, and interpersonal relationships.

We proposed that there should be two distinct disorders, PTSD and complex PTSD, and it looks like this has been accepted and will be part of the ICD-11 coming out in 2018.

Since the initial proposal, I’ve been working with many people, mostly Europeans, for whom the ICD is more prominent than it is in the United States, and there are now about nine published papers providing supporting evidence that these are two distinct disorders.

Dr. Jain:
Can you summarize in which ways they’re distinct? So on a clinical level what would you see in complex PTSD?

Dr. Cloitre: Mostly we’ve been looking at latent class analysis, which is a newish kind of data-analytic technique that looks at how people cluster together based on their symptom profiles. There is a group of people who very distinctly have PTSD in terms of re-experiencing, avoidance, and hyperarousal, and then they’re fine on everything else. Then you have another group of people who have these problems as well as problems in the three other areas. And then there is another group of people who, despite exposure to trauma, do fairly well.

What we’ve been seeing are these three groups in clinical populations as well as in community populations and adults as well as in children.

Overall, these latent class analyses are really showing that people cluster together in very distinctly different ways. I think the important thing about this distinction is, what’s next? Perhaps there are different clinical interventions that we want to look at to maximize good outcome. Some people may do very well with exposure therapy. I would say the PTSD clustered folks will do very well and have optimal outcome because that’s all that bothers them. For the other folks, they have a lot of other problems that really contribute to their functional impairment.

For me as a clinician as well as a researcher, I’ve always been worried not so much about the diagnosis of the person in front of me but about how well they’re functioning in the world. What I have noticed is you can get rid of the PTSD symptoms, for people with complex PTSD, but they’re still very impaired.
My motivation for thinking about a different diagnosis and different treatment is to identify these other problems and then to provide interventions that target them, with the goal of improving patients’ day-to-day functioning. If you don’t have the ability to relate well to people because you mistrust them or are highly avoidant, or if you think poorly about yourself, these are huge issues, and we need to target them in treatment.

Dr. Jain: Have you noticed that different types of trauma contribute to PTSD versus complex PTSD?

Dr. Cloitre: Yes, and it kind of makes sense that people who have had sustained and repeated trauma (e.g., multiple and sustained trauma during childhood) are the ones who have complex PTSD.

Dr. Jain: Can you tell us a little bit about the fundamental philosophy that drove you to come up with STAIR, and what evidence there is for its effectiveness?

Dr. Cloitre: I came to develop STAIR as a result of paying attention to what my patients were telling me they wanted help with; that was the driving force. It wasn’t a theoretical model. Patients came and said, “I’m really having problems with my relationships and that’s what I need help with,” or “I really have problems with my moods and I need help with that.”

So, I thought, why don’t we start there? That is why I developed STAIR, and developed it as a sequenced therapy: while respecting the importance of getting into the trauma and doing exposure-based work, I also wanted to engage the patient and respect their presenting needs. That’s what it’s all about for me.
Over time I saw a secondary benefit: an improved sense of self and improved emotion regulation could actually impact the value of exposure therapy in a positive way.

In my mind, the real question is: What kind of treatments work best for whom? That is the question. There will be some people for whom going straight to exposure therapy is the most effective and efficient way to get them functioning and they’ll be happy with three or four sessions, just like some 9/11 survivors I saw. They only needed three or four sessions.

Other people might do better with combination therapies.

Dr. Jain: Regarding the studies that you’ve done with STAIR, can you summarize the populations you have used it for?

Dr. Cloitre: I began using STAIR plus exposure with the population I thought would most need it, which is people with histories of childhood abuse. In fact, our data show that the combination of skills training plus exposure was significantly better than skills alone or exposure alone. So that’s very important. It also reduced dropout very significantly as compared to exposure alone, which is a continuing problem with exposure therapy, especially for this population.

Dr. Jain: Can you speak to social ecology, social bonds, and PTSD – what can the research world tell us about the social dimensions of PTSD, and how can we apply this to returning military members and veterans?

Dr. Cloitre: I think that social support is critical to the recovery of people who have been exposed to trauma and who are vulnerable to symptoms. We have enough studies showing that it’s the critical determinant of return to health.

I think we have done a very poor job of translating this observation into something meaningful for returning veterans. There is general recognition that families are part of the solution and communities are part of the solution, but it is vague – there isn’t really a sense of what we are going to do about it.

I think these wars (Afghanistan and Iraq) are very different from Vietnam, where soldiers came back and were called baby killers and had tomatoes and garbage thrown at them. You can really understand why a vulnerable person would spiral downwards into pretty significant PTSD and substance abuse.

I think we need to be more thoughtful and engage veterans in discussions about what’s most helpful in the reintegration process, because there are probably really explicit things, like being welcomed home, but also very subtle things that we haven’t figured out about the experience.
I think on a community or family level, there’s a general awareness, but we haven’t really gotten clear or effective about what to do. I think that’s our next step. The parade and the welcome home signs are not enough.

I’ll give an example of what I witnessed after 9/11. The community around survivors feels awkward and doesn’t know what to do, so they start moving away. Combine this with a survivor who is sad or irritable, and so not the most attractive person to engage with. I say to patients sometimes: it’s a really unfair and unfortunate circumstance that, in a way, not only are you suffering, but you’re also kind of responsible for making the people around you comfortable with you.

I used to do STAIR because patients asked for it, and also because I thought, “oh well, some people never had these social skills in the first place, which is why they are vulnerable to PTSD.” But then I noticed that STAIR was useful for everybody with PTSD, because the traumatized patient has an unfair burden to actually reach out to people in the process of re-engagement, since the community and the family are confused. Others – strangers, or say employers – are scared. So they have to kind of compensate for the discomfort of others, which is asking a lot.

I think in our therapies we can say: look, it’s not fair, but people feel uncomfortable around the veteran. They don’t know how to act, and in a way you not only have to educate yourself about your circumstance but, in the end, educate others.

Dr. Jain: Survivor perception of social support really matters. If you take a group of disaster survivors, we may feel we’re doing this for them and we’re doing that for them, but if the survivors, for whatever reason, don’t perceive it as being helpful, it doesn’t matter. And when I think about marginalized populations in our society, I don’t think communicating to others how to help or support you is that simple.

Dr. Cloitre It’s very complicated because it is a dynamic. I think we need to talk to trauma survivors and understand what their needs are so that the community can respond effectively and be a match. Not everybody wants the same thing. That’s the real challenge. It would also help if survivors could be a little bit more compassionate, not only towards themselves for what they have been through, but towards others who are trying to communicate with them and failing.

Dr. Jain That can be hard to do when you’re hurting. The social ecology of PTSD is really important but it’s really complicated and we are not there, in terms of harnessing social ecology to improve lives.

Dr. Cloitre No. I think we’re just groping around in the dark, in a room that says the social ecology of PTSD is important. We don’t know how to translate that observation into actionable plans either in our individual therapies or in our family therapies and then in our community actions or policies.
But I do think that, in individual therapy, there are three things we can do: first, recognize the importance of trying to enhance the perception of support where it is real. Secondly, recognize the unfair burden patients carry and try to enhance their skills for communicating with people. Thirdly, encourage compassion for people out there who are trying to communicate but failing.
I have had a lot of patients who come into therapy and say,
“This is so ridiculous. They’re saying stupid things to me.”
And I say,
“Well, at least they’re trying.”
I think it’s important for the affected community to have a voice and take leadership, instead of people kind of smothering them with social support that they may or may not need.

Dr. Jain I know you’re a native New Yorker and you provided a lot of service to New York City following 9/11. Can you speak about that work? In particular, I’m really interested in the body of research that emerged after 9/11, because I feel it has helped us understand so much about disaster-related PTSD.

Dr. Cloitre What we found out was that most people are very resilient. We were able to get prevalence rates of PTSD following 9/11, and that in and of itself was very important. I think that’s the strongest research that came out.

I think on a social level it broke open awareness, in this country and maybe globally, about the impact of trauma and about PTSD because it came with very little shame or guilt.
Some people ask, what was so different about 9/11? Well, it happened to the most powerful country, in its most powerful city; if it could happen to them, it could happen anywhere. That was the response. There was not the usual marginalization of “Well, this is a victim circumstance, it couldn’t happen to me, and they must have done something to bring it on themselves.”
There was a hugely different response, and that was key to the shift in recognition of the diagnosis of PTSD, which then led to more general research about it. I think that was huge.
Before 9/11, I would say I do research in PTSD and people would ask, what is that? Now when I say I do research in PTSD, not a single person ever asks me what that is. I’m sure they don’t really know what it is, but they never look confused. It’s a term that is now part and parcel of American society.
9/11 revolutionized the awareness of PTSD and also the acceptability of adverse effects resulting from trauma. There was new knowledge gained and also a transformation in awareness that was national and probably global, because of the impact it had and the ripple effects on other countries.
I think those are the two main things.
I don’t think it’s really done very much for our thinking about treatment. We continue to deliver some of our central treatments, and we didn’t get very far in really advancing or diversifying them.
For me personally, I learned a lot about the diversity of kinds of trauma survivors. Very different people, very different reactions.
I think probably the other important academic or scholarly advance was the recognition of this blend of loss and trauma and how they come together. Our understanding of people’s responses to death, under circumstances of unexpected and violent death, has also advanced. In fact, in ICD-11 there will be a traumatic grief diagnosis, which I think has moved forward because of 9/11. That’s pretty big.