Misleading systematic review of mindfulness studies used to promote Benson-Henry Institute for Mind-Body Medicine services

A seriously flawed overview (a “systematic review” of systematic reviews and meta-analyses) of the effects of mindfulness on health and well-being alerts readers that they need to be skeptical of what they are told about the benefits of mindfulness.

Especially when the information comes from those who benefit enormously from promoting the practice.

The glowing evaluation of the benefits of mindfulness presented in a PLOS One review is contradicted by a more comprehensive and systematic review which was cited but summarily dismissed. As we will see, the PLOS One article sidesteps substantial confirmation bias and untrustworthiness in the mindfulness literature.

The review was prepared by authors associated with the Benson-Henry Institute for Mind-Body Medicine, which is tied to Massachusetts General Hospital and Harvard Medical School. The institute directly markets mindfulness treatment to patients and training to professionals and organizations. Its website provides links to research articles such as this one, which are used to market a wide range of programs.


Recently PLOS One published corrections to five articles from this group concerning previous statements about the authors having no conflicts of interest to declare. The corrections acknowledged extensive conflicts of interest.

The Competing Interests statement is incorrect. The correct Competing Interests statement is: The following authors hold or have held positions at the Benson-Henry Institute for Mind-Body Medicine at Massachusetts General Hospital, which is paid by patients and their insurers for running the SMART-3RP and related relaxation/mindfulness clinical programs, markets related products such as books, DVDs, CDs and the like, and holds a patent pending (PCT/US2012/049539 filed August 3, 2012) entitled “Quantitative Genomics of the Relaxation Response.”

While the review we will be discussing was not corrected, it should have been.

The same conflicts of interest should have been disclosed to readers evaluating the trustworthiness of what is being presented to them.

Probing this review will demonstrate just how hard it is to uncover the bias and distortions routinely provided by promoters of mindfulness seeking to demonstrate an evidence base for what they offer.

The article is

Gotink, R.A., Chu, P., Busschbach, J.J., Benson, H., Fricchione, G.L. and Hunink, M.M., 2015. Standardised mindfulness-based interventions in healthcare: an overview of systematic reviews and meta-analyses of RCTs. PLOS One, 10(4), p.e0124344.

The abstract offers the conclusion:

The evidence supports the use of MBSR and MBCT to alleviate symptoms, both mental and physical, in the adjunct treatment of cancer, cardiovascular disease, chronic pain, depression, anxiety disorders and in prevention in healthy adults and children.

This evaluation is more emphatically stated near the end of the article:

This review provides an overview of more trials than ever before and the intervention effect has thus been evaluated across a broad spectrum of target conditions, most of which are common chronic conditions. Study settings in many countries across the globe contributed to the analysis, further serving to increase the generalizability of the evidence. Beneficial effects were mostly seen in mental health outcomes: depression, anxiety, stress and quality of life improved significantly after training in MBSR or MBCT. These effects were seen both in patients with medical conditions and those with psychological disorders, compared with many types of control interventions (WL, TAU or AT). Further evidence for effectiveness was provided by the observed dose-response relationship: an increase in total minutes of practice and class attendance led to a larger reduction of stress and mood complaints in four reviews [18,20,37,54].

Are you impressed? “More than ever before”? “Generalizability of the evidence”? Really?

And in wrap-up summary comments:

Although there is continued scepticism in the medical world towards MBSR and MBCT, the evidence indicates that MBSR and MBCT are associated with improvements in depressive symptoms, anxiety, stress, quality of life, and selected physical outcomes in the adjunct treatment of cancer, cardiovascular disease, chronic pain, chronic somatic diseases, depression, anxiety disorders, other mental disorders and in prevention in healthy adults and children.

Compare and contrast these conclusions with a more balanced and comprehensive review.

The US Agency for Healthcare Research and Quality (AHRQ) commissioned a report from the Johns Hopkins University Evidence-based Practice Center.

The 439-page report is publicly available:

Goyal M, Singh S, Sibinga EMS, Gould NF, Rowland-Seymour A, Sharma R, Berger Z, Sleicher D, Maron DD, Shihab HM, Ranasinghe PD, Linn S, Saha S, Bass EB, Haythornthwaite JA. Meditation Programs for Psychological Stress and Well-Being. Comparative Effectiveness Review No. 124. (Prepared by Johns Hopkins University Evidence-based Practice Center under Contract No. 290-2007-10061–I.) AHRQ Publication No. 13(14)-EHC116-EF. Rockville, MD: Agency for Healthcare Research and Quality; January 2014.

A companion, less detailed article was also published in JAMA Internal Medicine:

Goyal, M., Singh, S., Sibinga, E.M., Gould, N.F., Rowland-Seymour, A., Sharma, R., Berger, Z., Sleicher, D., Maron, D.D., Shihab, H.M. and Ranasinghe, P.D., 2014. Meditation programs for psychological stress and well-being: a systematic review and meta-analysis. JAMA Internal Medicine, 174(3), pp.357-368.

Consider how the conclusions of this article were characterized in the Benson-Henry PLOS One article. The article is briefly mentioned without detailing its methods and conclusions.

Recently, Goyal et al. published a review of mindfulness interventions compared to active control and found significant improvements in depression and anxiety[7].

And

A recent review compared meditation to only active control groups, and although lower, also found a beneficial effect on depression, anxiety, stress and quality of life. This review was excluded in our study for its heterogeneity of interventions [7].

What the Goyal et al. JAMA Internal Medicine article actually said:

After reviewing 18 753 citations, we included 47 trials with 3515 participants. Mindfulness meditation programs had moderate evidence of improved anxiety (effect size, 0.38 [95% CI, 0.12-0.64] at 8 weeks and 0.22 [0.02-0.43] at 3-6 months), depression (0.30 [0.00-0.59] at 8 weeks and 0.23 [0.05-0.42] at 3-6 months), and pain (0.33 [0.03- 0.62]) and low evidence of improved stress/distress and mental health–related quality of life. We found low evidence of no effect or insufficient evidence of any effect of meditation programs on positive mood, attention, substance use, eating habits, sleep, and weight. We found no evidence that meditation programs were better than any active treatment (ie, drugs, exercise, and other behavioral therapies).
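For readers unfamiliar with these numbers: a standardized effect size such as Cohen’s d expresses the difference between group means in pooled standard deviation units, and the bracketed range is its 95% confidence interval. A minimal sketch with made-up values (illustrative only, not data from Goyal et al.) shows how such an estimate is computed:

```python
import math

def cohens_d_with_ci(mean_c, mean_t, sd_c, sd_t, n_c, n_t, z=1.96):
    """Cohen's d with an approximate 95% CI (large-sample standard error)."""
    pooled_sd = math.sqrt(((n_c - 1) * sd_c**2 + (n_t - 1) * sd_t**2)
                          / (n_c + n_t - 2))
    d = (mean_c - mean_t) / pooled_sd  # lower symptom score in treatment -> positive d
    se = math.sqrt((n_c + n_t) / (n_c * n_t) + d**2 / (2 * (n_c + n_t)))
    return d, (d - z * se, d + z * se)

# Hypothetical anxiety-scale scores: the treated group improves more.
d, (lo, hi) = cohens_d_with_ci(mean_c=16.5, mean_t=14.0,
                               sd_c=6.6, sd_t=6.5, n_c=150, n_t=150)
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # d = 0.38, CI (0.15, 0.61)
```

An effect of this size means the average treated patient scores about a third of a standard deviation better than the average control patient, a modest benefit.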

The review also notes that evidence of the effectiveness of mindfulness interventions is largely limited to trials in which they are compared to no treatment, a wait list, or a usually ill-defined treatment as usual (TAU).

In our comparative effectiveness analyses (Figure 1B), we found low evidence of no effect or insufficient evidence that any of the meditation programs were more effective than exercise, progressive muscle relaxation, cognitive-behavioral group therapy, or other specific comparators in changing any outcomes of interest. Few trials reported on potential harms of meditation programs. Of the 9 trials reporting this information, none reported any harms of the intervention.

This solid JAMA Internal Medicine review explains why its conclusions may differ from past reviews:

Reviews to date report a small to moderate effect of mindfulness and mantra meditation techniques in reducing emotional symptoms (eg, anxiety, depression, and stress) and improving physical symptoms (eg, pain).7– 26 These reviews have largely included uncontrolled and controlled studies, and many of the controlled studies did not adequately control for placebo effects (eg, waiting list– or usual care–controlled studies). Observational studies have a high risk of bias owing to problems such as self-selection of interventions (people who believe in the benefits of meditation or who have prior experience with meditation are more likely to enroll in a meditation program and report that they benefited from one) and use of outcome measures that can be easily biased by participants’ beliefs in the benefits of meditation. Clinicians need to know whether meditation training has beneficial effects beyond self-selection biases and the nonspecific effects of time, attention, and expectations for improvement.27,28

Basically, this article insists that mindfulness be evaluated in a head-to-head comparison to an active treatment. Failure to provide such a comparison means not being able to rule out that apparent effects of mindfulness are nonspecific, i.e., not due to any active ingredient of the practice.

An accompanying editorial commentary raised troubling issues about the state of the mindfulness literature. It noted that limiting inclusion to RCTs with an active control condition and a patient population experiencing mental or physical health problems left only 3% (47/18,753) of the citations that had been retrieved. Furthermore:

The modest benefit found in the study by Goyal et al begs the question of why, in the absence of strong scientifically vetted evidence, meditation in particular and complementary measures in general have become so popular, especially among the influential and well educated…What role is being played by commercial interests? Are they taking advantage of the public’s anxieties to promote use of complementary measures that lack a base of scientific evidence? Do we need to require scientific evidence of efficacy and safety for these measures?

How did the Benson-Henry review arrive at a more favorable assessment?

The issue that dominated the solid Goyal et al. systematic review and meta-analysis is not prominent in the Benson-Henry review. The latter article hardly mentions the importance of whether mindfulness is compared to an active treatment, nor what difference in effect size can be expected when the comparison is an active treatment.

The Benson-Henry review stated that it excluded systematic reviews and meta-analyses if they did not focus on MBCT or MBSR. One has to search the supplementary materials to find that Goyal et al. was excluded because it did not calculate separate effect sizes for mindfulness-based stress reduction (MBSR).

However, the Benson-Henry review included narrative systematic reviews that did not calculate effect sizes at all. Furthermore, the excluded Goyal et al. JAMA Internal Medicine article summarized MBSR separately from other forms of meditation, and the more comprehensive AHRQ report provided detailed forest plots of effect sizes for MBSR with specific outcomes and patient populations.

Hmm, keeping out evidence that does not fit with the sell-job story?

We need to keep in mind the poor manner in which MBSR was specified, particularly in the early studies that dominate the reviews covered by the Benson-Henry article. Many of the treatments were not standardized and certainly not manualized. They sometimes, but not always, incorporated psychoeducation, other cognitive-behavioral techniques, and varying types of yoga.

The Benson-Henry authors claimed to have performed quality assessments of the included reviews using a checklist based on the validated PRISMA guidelines. However, PRISMA evaluates the quality of reporting in reviews, not the quality of how a review was done. The checklist used by the Benson-Henry authors was highly selective in which PRISMA items it included, unvalidated, and simply eccentric. For instance, one item evaluated a review favorably if it interpreted studies “independent of funding source.”

A lack of independence of a study from its funding source is generally considered a high risk of bias. There is ample documentation of industry-funded studies and reviews exaggerating the efficacy of interventions supported by industry.

Our group received the Bill Silverman Prize from the Cochrane Collaboration for identifying funding source as an overlooked source of bias in many meta-analyses and, in particular, in Cochrane reviews. The Benson-Henry checklist scores ignoring funding source as a virtue, not a vice! These authors are letting trials and reviews from promoters of mindfulness off the hook for potential conflicts of interest, including their own studies and this review.

Examination of the final sample of reviews included in the Benson-Henry analysis reveals that some are narrative reviews and could not contribute effect sizes. Some are older reviews that depend on a less developed literature. While optimistic about the promise of mindfulness, the authors of these reviews frequently complained about the limits on the quantity and quality of available studies, calling for larger and better quality studies. When integrated and summarized by the Benson-Henry authors, these reviews were given a more positive glow than the original authors conveyed.

Despite claims of being an “overview of more trials than ever before”, the Benson-Henry review excluded all but 23 reviews. Some of those included do not appear to be recent or rigorous, particularly when contrasted with the quality and rigor of the excluded Goyal et al.:

Shennan C, Payne S, Fenlon D (2011) What is the evidence for the use of mindfulness-based interventions in cancer care? A review. Psycho-Oncology 20: 681–697.

Veehof MM, Oskam MJ, Schreurs KMG, Bohlmeijer ET (2011) Acceptance-based interventions for the treatment of chronic pain: A systematic review and meta-analysis. Pain 152: 533–542

Coelho HF, Canter PH, Ernst E (2007) Mindfulness-Based Cognitive Therapy: Evaluating Current Evidence and Informing Future Research. J Consult Clin Psychol 75: 1000–1005.

Ledesma D, Kumano H (2009) Mindfulness-based stress reduction and cancer: A meta-analysis. Psycho-Oncology 18: 571–579.

Ott MJ, Norris RL, Bauer-Wu SM (2006) Mindfulness meditation for oncology patients: A discussion and critical review. Integr Cancer Ther 5: 98–108.

Burke CA (2009) Mindfulness-Based Approaches with Children and Adolescents: A Preliminary Review of Current Research in an Emergent Field. J Child Fam Stud.

Do we get the most authoritative reviews of mindfulness from Holist Nurs Pract, Integr Cancer Ther, and Psycho-Oncology?

To cite just one example of the weakness of evidence being presented as strong, take the bold Benson-Henry conclusion:

Further evidence for effectiveness was provided by the observed dose-response relationship: an increase in total minutes of practice and class attendance led to a larger reduction of stress and mood complaints in four reviews [18,20,37,54].

“Observed dose-response relationship”? This claim is based on Ott et al. [18], Smith et al. [20], Burke [37], and Proulx [54] (check them against the citations listed just above), which makes the evidence neither recent nor systematic. I am confident that other examples will not hold up if scrutinized.

Further contradiction of the too perfect picture of mindfulness therapy conveyed by the Benson-Henry review.

A more recent PLOS One review of mindfulness studies exposed the confirmation bias in the published mindfulness literature. It suggested a too perfect picture has been created of uniformly positive studies.

Coronado-Montoya, S., Levis, A.W., Kwakkenbos, L., Steele, R.J., Turner, E.H. and Thombs, B.D., 2016. Reporting of positive results in randomized controlled trials of mindfulness-based mental health interventions. PLOS One, 11(4), p.e0153220.

A systematic search yielded 124 RCTs of mindfulness-based treatments:

108 (87%) of 124 published trials reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective, 1.6 times greater than the expected number of positive trials based on effect size d = 0.55 (expected number positive trials = 65.7). Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion.

Furthermore:

None of the 21 registrations, however, adequately specified a single primary outcome (or multiple primary outcomes with an appropriate plan for statistical adjustment) and specified the outcome measure, the time of assessment, and the metric (e.g., continuous, dichotomous). When we removed the metric requirement, only 2 (10%) registrations were classified as adequate.

And finally:

There were only 3 trials that were presented unequivocally as negative trials without alternative interpretations or caveats to mitigate the negative results and suggest that the treatment might still be an effective treatment.

What we have is a picture of trials of mindfulness-based treatment having an excess of positive studies, given the study sample sizes. Selective reporting of positive outcomes likely contributed to this excess of positive findings in the published literature. Most of the trials were not preregistered, and so it is unclear whether the positive outcomes that were reported had been hypothesized to be the primary outcomes of interest. Most of the trials that were preregistered remained unpublished 30 months after the trials were completed.
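The logic behind the “expected number of positive trials” figure is worth spelling out: estimate each trial’s statistical power to detect a plausible effect (the review used d = 0.55), sum those powers across trials, and compare the total with the number of trials actually reported as positive. A rough sketch of that logic, using hypothetical group sizes rather than the 124 actual trials:

```python
# A sketch of the excess-significance logic, assuming a true effect of d = 0.55
# (the value used in the review). The arm sizes below are hypothetical.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
arm_sizes = [20, 35, 50, 28, 60]  # participants per arm in five imagined trials

# Expected number of positive trials = sum of each trial's power at alpha = .05.
expected = sum(power_calc.power(effect_size=0.55, nobs1=n, alpha=0.05, ratio=1.0)
               for n in arm_sizes)
print(f"Expected positive trials: {expected:.1f} of {len(arm_sizes)}")
```

When the observed count of positive trials far exceeds this expectation, as 108 exceeds 65.7 in the mindfulness literature, selective reporting or publication bias is the likely explanation.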

The Goyal et al. study originally planned to conduct quantitative analyses of publication biases, but abandoned the effort when they couldn’t find sufficient numbers of the 47 studies that reported most of the outcomes they evaluated.

Conclusion

The Benson-Henry review produces a glowing picture of the quality of RCTs evaluating MBSR and the consistency of positive findings across diverse outcomes and populations. This is consistent with the message the authors want to promote in marketing their products to patients, clinicians, and institutions. In this blog post I’ve uncovered substantial problems internal to the Benson-Henry review in the studies that were included and the manner in which they were evaluated. But we also have external evidence: two reviews without obvious conflicts of interest came to markedly different appraisals of a literature that lacks appropriate control groups and seems to report findings with a distinct confirmation bias.

I could have gone further, but what I found about the Benson-Henry review seems sufficient for a serious challenge to the validity of its conclusions. Investigation of the claims made about dose-response relationships between amount of mindfulness practice and outcomes should encourage probing of other specific claims.

The larger issue is that we should not rely on promoters of MBSR products to provide unbiased estimates of their efficacy. This issue recalls very similar problems in the evaluation of Triple P Parenting Programs. Evaluations in which promoters were involved produced markedly more positive results than independent evaluations. Exposure by my colleagues and me led to over 50 corrections and corrigenda to articles that previously declared no conflicts of interest. But the process did not occur without fierce resistance from those whose livelihood was being challenged.

A correction to the Benson-Henry PLOS One review is in order to disclose the obvious conflicts of interest of the authors. But the problem is not limited to reviews or original studies from the Benson-Henry Institute for Mind-Body Medicine. It’s time that authors be required to answer more explicit questions about conflicts of interest. Ruling out a conflict should be based on authors explicitly endorsing that they have none, rather than on their not disclosing a conflict and then being able to claim the omission was an oversight.

Postscript: Who was watching at PLOS One to keep out infomercials from promoters associated with Massachusetts General Hospital and Harvard Medical School? To avoid the appearance of a conflict of interest, should the Academic Editor have recused himself from serving?

This is another flawed paper for which I’d love to see the reviews.

I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my new website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.

 

But it’s not PTSD! Bad research distorts our understanding of a serious disorder.

The diagnosis of posttraumatic stress disorder (PTSD) was originally limited to persons who had faced a horrific situation outside the range of normal human experience. Their ability to function also had to be impaired by a distinct cluster of psychological symptoms that they had developed in response to that situation. Eligible experiences included natural and man-made disasters and acts of violence, including military combat, atrocities against civilians, and rape.

Acute and chronic physical illnesses and health events were at first excluded from being considered as causes of PTSD. Such events might involve realistic threats of death, but were not presumed to precipitate a serious and persistent mental disorder.

There was also a stubborn conceptual and assessment issue behind excluding illness and health events: a diagnosis of PTSD required that the events associated with PTSD be securely in the past. For instance, the World War II veterans whom my colleagues and I studied had survived the Bataan death march, with experiences that we could hardly imagine. When we assessed them, the veterans knew well that they no longer had to fear their captors, starvation, poisoning from rotten food, or being forgotten in the jungle. They knew that they could get up in the middle of the night to go to the bathroom without a guard beating them or being bitten by venomous creatures. Yet, over a half century later, they suffered hypervigilance, flashbacks, nightmares, and avoidance of things that reminded them of their experiences.

Medical events such as a diagnosis of cancer or a stroke can convince people that their life will be shorter than they had thought just before the event. But lives being changed by a medical event is qualitatively different from veterans’ sense of being changed forever by the degradation and dehumanization they faced at the hands of their captors, or, for a rape victim, at the hands of a rapist. Persons who have just suffered a life-threatening medical event are understandably upset immediately afterwards. Vigilance and hypersensitivity to bodily sensations may be quite appropriate and adaptive responses. Some of these bodily sensations are novel and threatening, having had their onset with the medical event. Or they could be signs that patients need to promptly seek medical attention. Ignoring them could be difficult or maladaptive.

Over time, most persons having had a medical event will adjust. Their initial emotional reaction subsides, even if bouts of anxiety, as well as realistic concerns and self-monitoring continue. Distinguishing normal from abnormal may make sense in the abstract, but there are sometimes challenges deciding between normal reactions and abnormal reactions warranting mental health interventions. Professionals do not wish to leave persistent and debilitating psychological reactions unaddressed, especially if they would respond to treatment. Yet, there is also a wish to avoid turning normal reactions into a mental disorder or for mental health professionals to interfere with the normal reactions in ways that might be counterproductive and even harmful.

The 1994 revision of the Diagnostic and Statistical Manual and subsequent versions removed the exclusion of acute and chronic illness and health events as possible causes of PTSD. The DSM did not specify that these events were necessarily traumatic, but raised the possibility that research could show that these events could indeed precipitate PTSD and that mental health treatment could reduce symptoms and improve functioning.

Cancer was expected to become the paradigmatic example of a disease which could prove traumatic and precipitate PTSD. Yet, as I showed in a previous blog post, quality research using diagnostic interviews does not reveal particularly high levels of PTSD among persons who have been diagnosed with cancer. Rates of PTSD often differ little from age-matched persons drawn from the general population who have not been diagnosed with cancer.

A diagnosis of PTSD is unique among mental disorders because it requires not only a list of symptoms, but also (a) exposure to an extraordinary event; (b) a reaction of intense fear, helplessness, or horror; (c) a minimal number of symptoms in each of a number of clusters: recurrence, avoidance, and arousal; (d) that these symptoms were not occurring before the event; and (e) that the person is impaired in functioning specifically by these symptoms and not by other aspects of the event, like physical injuries. Many of the symptoms that could contribute to a diagnosis of PTSD are quite nonspecific expressions of distress, and could easily be part of another disorder like major depression, or even a symptom of a physical condition or a side effect of medical treatment. Confirming the diagnosis requires careful interviewing, establishing that all criteria (a) through (e) are present, and ruling out alternative explanations of symptoms. Yet the relaxing of the rules for diagnosis of PTSD yielded lots of poorly conceived studies that did not involve careful assessment of these criteria, but relied only on patients endorsing symptoms and complaints on checklists.

All someone now needed to become a PTSD researcher was simply to have access to a clinical population and a checklist. That does not take many resources, but also does not involve researchers interacting much with patients in ways that might dispel the notion they were suffering from PTSD. What would otherwise be an uninteresting study of short-term distress among cancer patients can be dressed up and dramatized with claims that it is trauma that is being studied. Journals routinely publish insufficiently documented claims that cancer is a trauma and that cancer patients are suffering PTSD or “PTSD-like symptoms.”

The rates of PTSD found in checklist studies of cancer patients exaggerate what is found in better designed interview studies, but there are so many more poorly designed checklist studies than good ones. An investigator mindlessly combining studies without regard to their quality might conclude that PTSD is rampant. The myth of cancer as traumatic continues to hold sway in psycho-oncology (the study of psychosocial aspects of cancer), where much of the research basically involves misinterpretation of the distress that follows a diagnosis of cancer, distress that is often normal and transient and, if not, reflects pre-existing problems.

A team of Columbia University investigators seems to be going down the same path that psycho-oncology researchers did, but for cardiovascular events such as acute coronary syndrome (myocardial infarction or angina), transient ischemic attack (TIA), and stroke. These investigators recently published an article in PLOS One that they have followed up with a press release and television and radio interviews declaring cardiac events to be commonly associated with PTSD. They call for resources for routine screening of cardiac patients and for their research program.

The spot on national TV, which you can reach here, is carefully crafted. One investigator explains that their meta-analysis concluded that PTSD is common among survivors of stroke (1 in 4). Another has prepped the interviewer to ask him about Sam, his grandfather from Alabama, whom the investigator describes as “a mountain of a man with a huge heart.” This introduces the recommended human element into the promotion without the relevance to PTSD ever really being established. The interview is an example of good marketing of what I will demonstrate in this blog post to be bad science.

Should we revise our understanding of PTSD and expect it to be a common reaction to a stroke? Should we vigilantly monitor survivors of stroke and promptly intervene with the most validated intervention for PTSD, exposure therapy, which would involve them re-experiencing having had a stroke within the safety of a therapy session? These investigators do not make a strong case. There is no evidence that patients would benefit from what could be an uncomfortable therapy experience, and I think many therapists would be uncomfortable with the idea. Maybe provide overly anxious patients with psychological support and education after such an event, but not exposure therapy.

Taking the investigators’ claims seriously could undermine our understanding of a serious and debilitating mental disorder and lead to offering inappropriate treatment to survivors of stroke who do not need it and probably would not benefit from it. What these investigators are attempting to do is more understandable than other researchers’ calls for marshaling resources to detect and treat PTSD among mothers who have just given birth. But it is just as wrongheaded to allow such “bracket creep” in the diagnosis of PTSD, and just as trivializing of the experience of persons truly traumatized by combat, acts of terrorism, and rape, as when giving birth is declared an experience with a high risk of PTSD.

The Columbia University investigators’ press release states:

Our current results show that PTSD in stroke and TIA survivors may increase their risk for recurrent stroke and other cardiovascular events…Given that each event is life-threatening and that strokes/TIAs add hundreds of millions of dollars to annual health expenditures, these findings are important to both the long-term survival and health costs of these patient populations.

This is a misleading statement because this study did not even investigate whether what they defined as PTSD in stroke and TIA survivors increases the risk for other cardiac events. These investigators are hyping the importance of their work with no evidence to support their claims, only what was known before their study: that strokes and transient ischemic attacks cost money to treat.

The study found that 23 percent, or roughly one in four, of the patients developed PTSD symptoms within the first year after their stroke or TIA, with 11 percent, or roughly one in nine, experiencing chronic PTSD more than a year later.

And

Surviving a life-threatening health scare can have a debilitating psychological impact, and health care providers should make it a priority to screen for symptoms of depression, anxiety, and PTSD among these patient populations.

The investigators’ article that was the focus of their publicity blitz is available open access here.

What the investigators did

  • Searched the literature and identified 9 studies for inclusion in a meta-analysis estimating the prevalence of PTSD in survivors of stroke and TIA.

What they claimed they found

  • That 1 in 4 (23%) survivors of stroke or TIA suffer from PTSD within the first year after the event, and 11% experience chronic PTSD more than a year later.

Where they went wrong

Table 1, reproduced from the paper, can be seen below. You can also get a larger, clearer view of the table here.

[Table 1 from the paper: study characteristics and PTSD prevalence estimates]

As can be seen, studies vary greatly in their estimates of “PTSD,” from low estimates of 3, 4, and 10% to high estimates of 30 to 37%. Interview studies find only 3 to 10%, with the lowest estimate coming from careful semi-structured diagnostic interviewing. The high estimates of over 30% are found in lower quality studies depending on checklists completed by patients. Basically, the better the quality of the study, the lower the estimate, and the gap between poor quality and high quality studies is so great as to discourage integrating these figures by meta-analysis into a single summary estimate. The investigators’ figure of 23% is found in no study, but falls in the considerable gap between the poor and the better quality studies.
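To see how a summary estimate can fall in a gap where no single study sits, consider a minimal fixed-effect pooling sketch on the logit scale. The prevalences and sample sizes below are hypothetical stand-ins for the two clusters in Table 1 (three low interview estimates, three high checklist estimates), not the actual data:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical (prevalence, sample size) pairs: three interview studies with
# low estimates, three checklist studies with high estimates.
studies = [(0.03, 100), (0.04, 120), (0.10, 90),
           (0.30, 80), (0.33, 110), (0.37, 95)]

# Fixed-effect inverse-variance pooling on the logit scale (delta method):
# each study's weight is 1 / var(logit p) = n * p * (1 - p).
weights = [n * p * (1 - p) for p, n in studies]
pooled_logit = (sum(w * logit(p) for w, (p, _) in zip(weights, studies))
                / sum(weights))
print(f"Pooled prevalence: {inv_logit(pooled_logit):.0%}")  # ~25%, in the gap
```

The pooled figure lands between the two clusters and describes no actual study population, which is exactly the problem with the 23% headline number.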

One estimate of the prevalence of PTSD depended on responses to the Impact of Event Scale (IES) checklist, which is particularly inappropriate. The IES does not measure PTSD, nor was it intended to. The IES does not assess whether a patient reacted with intense fear, helplessness, or horror. It fails to assess symptoms needed for a diagnosis, excluding hyperarousal symptoms altogether and some avoidance and intrusion symptoms. Check out the items here. They are vague and nonspecific and might easily refer to normal responses. In some circumstances, like after a medical event, the items could tap adaptive processes of alternately thinking about how life is now going to be different and deliberately taking a break from focusing on it. The investigators had no business combining estimates of “PTSD” from the IES with estimates from diagnostic interviews.

Another checklist generating estimates over 30%, the Posttraumatic Stress Diagnostic Scale (PDS), more directly follows the symptoms associated with a diagnosis of PTSD. But it is as highly correlated with the Beck Depression Inventory (.7) as their respective reliabilities allow. Essentially, the PDS questionnaire is statistically interchangeable with a self-report measure of depressive symptoms, and the latter would have produced the same results, which overestimate the number of clinical diagnoses of both depression and PTSD.
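The point about reliabilities follows from the classical attenuation formula: the correlation observable between two imperfectly reliable scales is capped at the square root of the product of their reliabilities. A quick sketch with assumed (not reported) reliability values:

```python
import math

r_observed = 0.70              # the reported correlation with the BDI
rel_pds, rel_bdi = 0.85, 0.88  # assumed internal-consistency reliabilities

ceiling = math.sqrt(rel_pds * rel_bdi)  # maximum observable correlation
r_true = r_observed / ceiling           # disattenuated (true-score) correlation
print(f"Ceiling on observable r: {ceiling:.2f}")  # ~0.86
print(f"Disattenuated r:         {r_true:.2f}")   # ~0.81, close to the ceiling
```

With the true-score correlation this close to its ceiling, the two scales are measuring nearly the same construct.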

As I noted in a previous blog post:

Questionnaires do not provide an opportunity for investigators to explain to patients what is meant by specific questions or to probe patients’ responses in an interview. Most importantly, there is no opportunity to determine the nature of the symptoms or to rule out other disturbance, such as major depression, crucial to the diagnosis of PTSD. Not surprisingly, questionnaires yield a high proportion of false positives when compared to a diagnostic interview.

Regardless, the results obtained with questionnaires suggest that rates of heightened distress among the stroke and TIA patients are no greater than what is typically found in a random sample of primary care waiting room patients, about a third. Most of this distress, in patients from either type of setting, represents neither major depression nor PTSD.

As for making screening for depression and other mental disorders a priority, the real question is whether it would serve to improve patient outcomes. We examined the question in systematic reviews, one in JAMA, the Journal of the American Medical Association, and a follow-up in PLOS One. We found a lack of evidence that introducing depression screening would benefit patients in cardiovascular care. Recently, the Canadian Task Force on Preventive Health Care retracted its previous recommendations for routine screening with language remarkably similar to another of our systematic reviews of screening for depression, this time in general medical care. The quality of routine care for depression in general medical settings is not good. Improving the quality and intensity of mental health treatment for patients already known to have disorders should be a higher priority than putting more patients into that low quality care.

Despite what these Columbia University investigators claim, the best evidence is that there is not much PTSD among stroke and TIA patients, and that screening would be inefficient, unlikely to benefit patients much, and certainly not likely to reduce mortality or the risk of another event. Much of what these investigators consider PTSD is more accurately described as general psychological distress, not a diagnosable mental disorder. A brief consultation with established PTSD researchers could have explained that to them.


What I learned as an Academic Editor for PLOS ONE

Open access week is just around the corner, and I thought I’d take the opportunity to share my experience as an Academic Editor for PLOS ONE.

I was invited to join the team following a conversation at Science Online 2010 with (I think) Steve Koch, who recommended me to PLOS ONE, and before I knew it I was receiving lots of emails asking me to handle manuscripts.

The nice thing about PLOS ONE is that I get to choose which articles I handle, and I am very picky. I think that my role is not just to ‘handle’ the manuscript but also to make sure that the review process is fair. To do this, I need to understand the manuscript myself. I read every article that I take on and write a ‘mini-review’ of it for myself. When I get the external peer reviews, I go through every comment they make against the submitted version, compare the different reviews, and revisit my first impression of the manuscript. I have learned a lot from the reviewers: they see things I have missed, and they miss things I have detected. It has been a great insight into the peer review process. And I love not having to pull my crystal ball out to determine whether the article is ‘important’, but just having to decide whether it is scientifically solid.

[Image: Read/Review, by Wiertz Sébastien on Flickr, licensed under CC-BY]

If the science is fundamentally good, the article is sent back to the authors for either minor or major changes, and then it falls back into my inbox. I have found it really interesting to see how authors deal with the reviewers’ comments. The re-submission is also a lot of work. I need to compare the original and new versions, make sure that the authors have done what they say they have done, and make sure that all the reviewers’ comments have been addressed. And then I decide whether or not to send it back for re-review. One thing that I found interesting in this second phase is when authors respond to the reviewers’ comments in the letter but do not incorporate that into the article. It is almost as if the responses are for my benefit and the reviewers’ only. So back it goes, with a request to incorporate that rationale into the actual manuscript. Oh well. That means another round. Luckily this does not happen that often.

And then it is time to ‘accept’ the paper – and so back to the manuscript, where I go through commas, colons, paragraphs, spelling mistakes, in-text citations, reference lists, formatting, image quality, figure legends, etc. This I normally send to the authors together with their acceptance letter, but without asking for the article to be re-submitted.

The main challenge I find with the process is time management.

When I get the request to handle an article, I accept or not based on how much time I have to process the article. That is all good. Except that I cannot predict when the reviews, resubmissions, etc. will eventually happen – and many times these articles ‘ready for decision’ show up in my inbox at a time when I cannot give them the full attention they deserve. Let alone being able to predict when the revised version will be submitted! I find it impossible to plan ahead for this, especially since I have very little control over a lot of my time commitments (like the days I need to lecture, submit exam questions, or mark exams). So if an article arrives while I am somewhere at a conference with limited internet connection… How can I plan for this?

Finding reviewers is another challenge. Sometimes they are hard to find. Nothing is as discouraging as finding the “reviewer declined…” emails in my inbox, indicating that it is back to the system to redo something I thought was done and dusted. The other day someone asked: what is a reasonable amount of reviewing to do in a year? My answer was that one should probably, at minimum, return the number of reviews provided for one’s own articles. Say I publish 3 articles a year, each with 3 reviews; then I should not start complaining about reviewing until I have reviewed at least 9 articles. (Of course, one can factor in rejection rate, number of authors, etc.) But a tit-for-tat trade-off seems like a fair expectation. So then why is it so hard to find reviewers? Come on people – if it were your paper getting delayed, you’d be sending letters to the journal asking how come the article shows as still sitting with the Editor!

And that is the other thing I learned. Editors don’t just sit on papers because they are lazy. There are many reasons why handling an article may take more or less time. In some cases, after receiving the reviews I feel that something has been raised that needs a specialist to look at a specific aspect of the paper. Sometimes I need a second opinion because there is too little agreement between reviewers. Sometimes the reviewers don’t submit in the agreed time. There are many reasons why an article can be delayed, and so what I learned is to be patient with the editors when I send my papers for publication.

But despite the headaches, the stress and the struggle of being an Academic Editor, it is also an extremely rewarding experience. I keep learning more about science because I see a range of articles before they take their final shape, because I get to look into the discussion of what is good and what is weak. And I get to be part of what makes science great: trying to put out the best we can produce.

It is unfortunate that this process is locked up. I think that there is a lot to learn from it. I think that students and early career scientists would really benefit from seeing the process in articles that are not their own: how variable the quality of the reviews is, and what dealing well with reviewers’ comments and suggestions looks like. And the public too would benefit from seeing what this peer review is all about – what the strengths and weaknesses of the process are and what having been peer reviewed really means.

So, back to Open Access week. Access to the final product is really good. Access to the process of peer review can make understanding the literature even better, because it exposes a part of the process of science that is also worth sharing.