Consistently poor coverage of mental health issues in The Guardian

Issuing a readers’ advisory: The Guardian provides misleading, badly skewed coverage of mental health issues vitally important to mental health service users.

Stories in The Guardian can confuse and disempower mental health service users seeking information for difficult decisions about choosing and sticking to treatments. Articles labeled Psychology and Health and sometimes Science don’t adhere to the quality that earned The Guardian a Pulitzer Prize.

In this issue of Mind the Brain, I show why there should be a formal readers’ advisory for mental health information appearing in The Guardian. The excellent watchdog of faulty health coverage in the media, NHS Choices: Behind the Headlines, should routinely monitor stories appearing in The Guardian and provide more balanced analyses.

You can compare my assessments with your own evaluations, using the links I provide to the stories in The Guardian.

Some recent examples:

At last, a promising alternative to antipsychotics for schizophrenia

Imagine that, after feeling unwell for a while, you visit your GP. “Ah,” says the doctor decisively, “what you need is medication X. It’s often pretty effective, though there can be side-effects. You may gain weight. Or feel drowsy. And you may develop tremors reminiscent of Parkinson’s disease.” Warily, you glance at the prescription on the doctor’s desk, but she hasn’t finished. “Some patients find that sex becomes a problem. Diabetes and heart problems are a risk. And in the long term the drug may actually shrink your brain.”

It is insulting to those who suffer from schizophrenia to have their life-altering experience trivialized and domesticated as simply “feeling unwell for a while.”

The article provides a fright-mongering depiction of the difficult choice that patients with schizophrenia face. Let’s take a critical look at the authors’ claim about drugs shrinking the brain. The sole citation is a PLOS One article. The authors of that article provided a carefully worded press release:

A study published today has confirmed a link between antipsychotic medication and a slight, but measureable, decrease in brain volume in patients with schizophrenia. For the first time, researchers have been able to examine whether this decrease is harmful for patients’ cognitive function and symptoms, and noted that over a nine year follow-up, this decrease did not appear to have any effect.

The UK senior author of the study further clarified that the decrease in brain volume “doesn’t appear” to be harmful to patients.

The study is not a randomized trial in which the amount of antipsychotic medication that patients received was manipulated. It is a small observational study comparing 33 patients with schizophrenia to 71 controls. Causal interpretation depends on statistical manipulation of correlational data. Yet a group of only 33 (!) patients with schizophrenia does not allow reliable multivariate analysis to explore alternative interpretations of the data. One plausible interpretation is that the amount of medication particular patients received was tied to the severity of the course of their schizophrenia. This would be a classic example of confounding by indication. The authors acknowledge this possibility:

It is conceivable that patients with the most severe illness lose more brain volume over time, reflecting intrinsic aspects of the pathology of schizophrenia, and the fact that severely ill patients receive higher doses of medication.

They further note:

Whilst it is extremely important to determine the causes of loss of brain volume in schizophrenia, an equally important question concerns its clinical significance. Loss of brain volume occurs throughout the majority of adult life in the healthy population, and whilst it might seem trivial that this would be disadvantageous, in some periods of development loss of brain tissue appears to be potentially beneficial [43]*.

Yes, antipsychotic medication poses serious side effects, doesn’t cure schizophrenia, and there are problems with adherence. But The Guardian article fails to note that the longer an episode of schizophrenia goes untreated, the lower the likelihood that a patient will ever resume a semblance of a normal life. And schizophrenia is associated with a 10% rate of suicide. What alternative does The Guardian article suggest?

A team led by Professor Anthony Morrison at the University of Manchester randomly assigned a group of patients, all of whom had opted not to take antipsychotics, to treatment as usual (involving a range of non-pharmaceutical care) or to treatment as usual plus a course of cognitive therapy (CT). Drop-out rates for the cognitive therapy were low, while its efficacy in reducing the symptoms of psychosis was comparable to what medication can achieve.

You can compare this summary to my critiques [1,2].

  • “Drop-out rates…were low”? The study retained fewer participants receiving cognitive therapy at the end of the study than there were authors.
  • The comparison treatment was ill-defined, but for some patients meant no treatment because they were kicked out of routine care for refusing medication.
  • A substantial proportion of patients assigned to cognitive therapy began taking antipsychotic medication by the end of the study.
  • There was no evidence that the response to cognitive therapy was comparable to that achieved with antipsychotic medication alone in clinical trials.

The authors of the study backed down from this last claim in an exchange of letters [1 and 2] in The Lancet with myself and others. BBC News dropped that claim after initially making it in coverage of the study: “as effective” became “moderately effective.”
Don’t settle for my interpretation of the literature concerning cognitive therapy for psychosis (CBTp); go to a summary of the available evidence in a blog post by Clive Adams, Chair of Mental Health Services Research and Co-ordinating Editor of the Cochrane Schizophrenia Group at the University of Nottingham.

Adams wraps up with:

Where does this leave CBTp?

In the opinion of this writer, having read and thought about the reviews (and others in some detail) it is time to move on.

It is great that there are data for questions around this potentially potent intervention for people with schizophrenia (for many treatments there are no data at all). I just cannot see that this approach (CBTp), on average, is reaping enough benefits for people.

Adams cites:

Jones C, Hacker D, Cormac I, Meaden A, Irving CB. Cognitive behavioural therapy versus other psychosocial treatments for schizophrenia. Cochrane Database of Systematic Reviews 2012, Issue 4. Art. No.: CD008712.

which concludes:

Trial-based evidence suggests no clear and convincing advantage for cognitive behavioural therapy over other – and sometime much less sophisticated – therapies for people with schizophrenia.

Mark Taylor chaired the Scottish Intercollegiate Guidelines Network (SIGN) committee that produced the Scottish Guidelines for the Management of Schizophrenia. SIGN is the equivalent of the National Institute for Health and Care Excellence (NICE). In an editorial in the British Journal of Psychiatry he commented on the NICE guidelines’ favoring of cognitive behavioral therapy:

NICE has also taken the bold step of recommending CBT and family therapy alone for people with first-episode psychosis who wish it. The guideline acknowledges that psychosocial interventions are more effective in conjunction with antipsychotic medication, but still suggests this intervention alone for one month or less. This is controversial in view of the lack of robust supportive evidence and could potentially worsen outcomes. A related point is that in the guideline NICE seem oblivious to the fact that many patients with acute schizophrenia have impaired insight into their illness and health needs,5 and thus may not have capacity to consent to their treatment.

And finally, there is Keith Laws’ carefully documented Science & Politics of CBT for Psychosis.

A Guardian story on mindfulness: New study shows mindfulness therapy can be as effective as antidepressants

Glass half-full readers, of course, will see that the trial results demonstrate that we actually have two similarly effective treatment options for recurrent depression: one involves eight weeks of a psychological therapy, the other relies on taking medication for two years. The challenge now is to make both equally available in treatment services.

I provided a detailed critique of this study. You would never guess from The Guardian article that the mindfulness therapy used in this study was not designed to treat depression, only to prevent relapse in patients who had recovered in treatment by other means. And there was no assessment of whether patients assigned maintenance antidepressants were actually adhering to them or receiving adequate, guideline-congruent care. You can see my comments on this study at PubMed Commons and leave your own as well.

The lead author of the study, who is a colleague of the author of The Guardian article, went to the trouble of modifying the study registration to clarify that the trial was not designed to compare mindfulness therapy with antidepressants for depression.

Feeling paranoid? Your worries are justified but can be helped

In this article The Guardian authors present as mainstream their unconventional views of what “feeling paranoid” represents. One of the authors promotes his own treatment for which he conducts workshops tied to his self-help books about worrying.

The fog machine gets going when the authors merge colloquial use of “paranoid” with the psychotic symptom. Many people, especially the young, use “paranoid” in everyday speech in a way far removed from how professionals discuss the psychotic symptom. Most endorsements of “feeling paranoid” on a checklist would not represent a psychiatric symptom. Even when present, the psychiatric symptom of paranoia is neither necessary nor sufficient for a diagnosis of schizophrenia.

When occurring in the context of a diagnosis of schizophrenia, however, paranoid delusions can be strongly held convictions accompanied by lack of insight and thought disorder. I know of no evidence that everyday suspiciousness turns into psychotic persecutory delusions in persons who are not otherwise at risk for psychosis.

Think of someone insisting on shifting a conversation about skin cancer to talking about moles, or dropping lung cancer and chronic obstructive pulmonary disease for a more inclusive but nonspecific “cough.” These are silly moves in a language game, moves that prevent evaluation of health problems in terms of available evidence, which is of necessity tied to more precise language.

The Guardian authors propose:

As we’ve noted previously on Guardian Science, anti-psychotics don’t work for everyone. And their side effects can be so unpleasant that many people refuse to take them. Moreover, there’s compelling evidence to suggest that the concept of “schizophrenia” doesn’t stand up scientifically, operating instead as a catch-all for a variety of distinct and frequently unrelated experiences.

What compelling evidence? Says who? I doubt that the one of these authors who is in Psychology at Oxford would make such a statement in a formal presentation to his colleagues. But apparently it suffices for a lay audience, including mental health service users seeking information about their condition and available treatments.

In general, readers should beware of authors making such sweeping statements in the media without identifying specific sources, the degree of scientific consensus, or the grade of evidence. The Guardian authors require readers to turn off their critical skills and trust them.

This is why scientists have increasingly focused on understanding and treating those experiences in their own right, rather than assuming they’re simply symptoms of some single (albeit nebulous) underlying illness. So what have we discovered by applying this approach to paranoia?

Which “scientists”? Where? Readers are again left trusting the expertise of The Guardian authors.

The authors are getting set to promote the treatment developed by one of them for “worry” in patients with paranoid delusions, which is marketed in his workshops, using his self-help book. I previously reviewed this study in detail.

I concluded:

  • The treatment was a low-intensity variation of a self-help exercise using excerpts from The Guardian authors’ book.
  • The treatment of the control group was ill-defined routine care. Relying on this control group as the only comparison precluded evaluating whether the intervention was any better than a similar, non-branded amount of attention and support.
  • The primary outcome was hopelessly confounded with nonspecific worrying or anxiety and inadequate to assess clinically significant changes in the psychotic symptom of paranoid delusions.

I could go on with examples from other articles in The Guardian. But I think these suffice to establish that mental health service users seeking reliable information can find themselves misled by stories in The Guardian. Readers who don’t have the time or feel up to the task of checking out what they read against what is available in the literature would do well to simply ignore what is said in The Guardian about serious mental disorder and its treatment.

Readers’ advisory

Despite The Guardian having won a Pulitzer Prize, readers may find stories about mental health that are seriously misleading and of little use in making choices about mental health problems and treatments. Information about these issues is not responsibly vetted or fact-checked.

Whatever happened to responsible journalism at The Guardian?

In April 2015, The Guardian announced a Live Question and Answer Session:

How can academics help science reporters get their facts straight?

Academics have never been under more pressure to engage with the public and show the impact of their work. But there’s a problem. The media, one of the key channels for communicating with people outside academia, has a reputation for skewing or clumsily confusing scientific reports.

The session was in response to larger concerns about the accuracy of health and science journalism. With serious cutbacks in funding and layoffs of experienced professional journalists, the media increasingly rely upon copy/pasting exaggerated and inaccurate press releases generated by self-promoting researchers in universities. What has been lost is the important filter function by which journalists offer independent evaluation of what they are fed by researchers’ public relations machines.

Many readers of The Guardian probably did not notice a profound shift from reliance on professional journalists to blogging provided free by academics. Accessing a link to The Guardian provided by a Google search or Twitter, readers are given no indication that they will be reading a blog.

A blog post last year by Alastair Taylor identified the dilemma –

Media outlets, such as the Guardian Science Blogs, can present the science direct (and without paying for it) from the experts themselves. Blogging also opens up the potential for the democratisation of science through online debates, and challenges established hierarchies through open access and public peer review. At the same time, can scientists themselves offer the needed reflection on their research that an investigative journalist might do?

In the case of these authors appearing in The Guardian, apparently not.

The new system has obvious strengths. I look forward to reading regular blog posts by academic sources who have proved trustworthy, such as Suzi Gage, Chris Chambers, or many others. They have earned my trust sufficiently for me to recommend them. But unfortunately, appearing in The Guardian no longer necessarily indicates that stories are scientifically accurate and helpful to consumers. We must suspend our trust in The Guardian and be skeptical when encountering stories there about mental health.

I sincerely hope that this situation changes.

NOTE

*The authors of the PLOS One article cite a Nature article for this point, which states:

More intelligent children demonstrate a particularly plastic cortex, with an initial accelerated and prolonged phase of cortical increase, which yields to equally vigorous cortical thinning by early adolescence. This study indicates that the neuroanatomical expression of intelligence in children is dynamic [bolding added].

 

BMC Medicine gets caught up in Triple P Parenting promoters’ war on critics and null findings

Undeclared conflicts of interest constitute scientific misconduct.

Why we should be as concerned about conflicts of interest in evaluations of nonpharmacological treatments, like psychotherapy, as we are about those in drug trials.

Whack! Triple P promoters (3P) Cassandra L Tellegen and Kate Sofronoff struck again against critics and null findings, this time in BMC Medicine. As usual, there was an undisclosed financial conflict of interest.

Until recently, promoters of the multimillion-dollar enterprise controlled perception of their brand of treatment. They authored most reports of implementations and also systematic reviews and meta-analyses. They did not report financial conflicts of interest and denied any conflict when explicitly queried.

The promoters were able to insist on the official website:

No other parenting program in the world has an evidence base as extensive as that of Triple P. It is number one on the United Nations’ ranking of parenting programs, based on the extent of its evidence base.

At least two of the developers of 3P and others making money from it published a systematic review and meta-analysis they billed as comprehensive:

Sanders, M. R., Kirby, J. N., Tellegen, C. L., & Day, J. J. (2014). The Triple P-Positive Parenting Program: A systematic review and meta-analysis of a multi-level system of parenting support. Clinical Psychology Review, 34(4), 337-357.

Promoters of 3P are still making extravagant claims, but there has been a noticeable change in the view from elsewhere. An independently conducted meta-analysis in BMC Medicine demonstrated that previous evaluations depended heavily on flawed, mostly small studies that very often had undeclared conflicts of interest. I echoed and amplified the critique of the 3P parenting literature, first in blog posts [1, 2] and then in an invited commentary in BMC Medicine.

The sordid history of the promoters’ “comprehensive” meta-analysis was revealed and its overwhelming flaws were scrutinized.

Over 30 errata, addenda, and corrigenda have been attached to previously published 3P articles, and more keep accumulating. Just try Google Scholar with “triple P parenting” and “erratum” or “addendum” or “corrigendum.” We will be seeing more errata as more editors are contacted.


There were reports in social media of how studies with null findings had previously been sandbagged in anonymous peer review, and of how authors were pressured by peer reviewers to spin results. Evidence surfaced of 3P founder Matt Sanders attempting to influence the reporting of a supposedly independently conducted evaluation. It is unclear how frequently this occurs, but it represents a weakening of the important distinction between independent evaluations and those with conflicts of interest.

The Belgian government announced defunding of 3P programs. Doubts about whether 3P was the treatment of choice were raised in 3P’s home country. 3P is a big-ticket item in Australia, with New South Wales alone spending $6.6 million on it.

A detailed critique called into question the positive results claimed for one of the largest and most influential population-based 3P interventions, and the non-disclosed conflicts of interest of the authors and of the editorial board of the journal in which it appeared – Prevention Science – were exposed.

Are we witnessing the decline effect in the evaluation of 3P? Applied to intervention studies, the term refers to the recurring pattern in which glowing initial reports of efficacy and effectiveness from an intervention’s promoters give way to weaker results from larger, more sophisticated studies conducted independently of those promoters.

But the 3P promoters viciously and unethically fought back. Paid spokespersons took to the media to denounce independently conducted negative evaluations. Critics were threatened in their workplaces, and letters of complaint were written to their universities. Programs were threatened with withdrawal of 3P resources if the critics weren’t silenced. Publications with undisclosed conflicts of interest authored by paid promoters of 3P continue to appear, despite the errata and addenda apologizing for what had occurred in the past.

In this issue of Mind the Brain, I review the commentary in BMC Medicine. I raise the larger issue of whether the 3P promoters’ recurring undeclared conflicts of interest represent actionable scientific misconduct. And I deliver a call to action.

My goal is to get BMC Medicine to change its policies concerning disclosure of conflict of interest and its sanctions for nondisclosure. I am not accusing the editorial board of BMC Medicine of wrongdoing.

The journal was the first to publish serious doubts about the effectiveness of 3P. Scottish GP Phil Wilson and colleagues went there after his meta-analysis was trashed in anonymous peer review at Elsevier’s Clinical Psychology Review (CPR). He faced retaliation in his workplace after he was contacted directly by the founder of 3P immediately after his submission to CPR. Matt Sanders sent him papers published after the end date Wilson had set for the papers included in his meta-analysis. Bravo to BMC Medicine for nevertheless getting Wilson’s review into print. But the BMC Medicine editors have been repeatedly duped by 3P promoters, and they now have the opportunity to serve as a model for academic publishing by mounting an effective response.

Stepping Stones Triple P: the importance of putting the findings into context

The BMC Medicine commentary by Tellegen and Sofronoff is available here. The commentary first appeared without a response from the authors who were being criticized, but that has now been rectified.

Tellegen and Sofronoff chastised the authors of a recent randomized trial, also published in BMC Medicine, that evaluated the intervention with parents of children with borderline to mild intellectual disability (BMID):

Firstly, the authors present a rationale for conducting the study that does not accurately represent the current state of evidence for SSTP. Secondly, the authors present an impoverished interpretation of the findings within the paper.

The “current state of evidence for SSTP” about which Tellegen and Sofronoff complain refers to a systematic review and meta-analysis authored by Tellegen and Matt Sanders. I previously told how:

  • An earlier version of this review was circulated on the Internet labeled as under review at Monographs of the Society for Research in Child Development. It’s inappropriate to distribute manuscripts indicating that they are “under review” at particular journals; APA guidelines explicitly forbid it. This may have led to the manuscript’s rejection.
  • The article nonetheless soon appeared in Clinical Psychology Review in a version that differed little from the manuscript previously available on the Internet, suggesting weak peer review.
  • The article displays numerous instances of meta-analysis malpractice. It is so bad and violates so many standards that I recommend its use in seminars as an example of bad practices.
  • The article had no declared conflicts of interest.

Tellegen and Sofronoff’s charge of “impoverished interpretation of the findings within the paper” refers to the investigators failing to cite four quite low-quality studies that were not randomized trials but were treated as equivalent to RCTs in Tellegen and Sanders’ own meta-analyses.

In their response to the commentary, three of the authors of the original trial – Sijmen A Reijneveld, Marijke Kleefman, and Daniëlle EMC Jansen – calmly and effectively dismissed these criticisms. They responded a lot more politely than I would have.

The declarations of conflict of interest of 3P promoters in BMC Medicine: Is you is or ain’t you is making money?

An earlier commentary in BMC Medicine whose authors included 3P developer Matt Sanders and Kate Sofronoff – an author of the commentary under discussion – stated in the text:

Triple P is not owned by its authors, but by The University of Queensland. Royalty payments from dissemination activities, principally the sale of books, are paid by the publisher (Triple P International) to the University of Queensland’s technology transfer company (UniQuest), and distributed to the university’s Faculty of Social and Behavioural Sciences, School of Psychology, Parenting and Family Support Centre and contributory authors in accordance with the university’s intellectual property policy. None of the program authors own shares in Triple P International, the company licensed by the University of Queensland to disseminate the program worldwide.

What is one to make of this? It seems to answer “no” to the usual question of whether authors own stock or shares in a company. It doesn’t say directly what happens to the royalties from the sale of books. Keep in mind that the multimillion-dollar enterprise of 3P involves selling lots of books, training materials, workshops, and government contracts. But a reader would have to go to the University of Queensland’s intellectual property policy to make sense of this disclaimer.

The formal COI statement in the article does not clarify much, but it should arouse curiosity and skepticism –

…Royalties stemming from this dissemination work are paid to UniQuest, which distributes payments to the University of Queensland Faculty of Social and Behavioural Sciences, School of Psychology, Parenting and Family Support Centre, and contributory authors in accordance with the University’s intellectual property policy.

No author has any share or ownership in Triple P International. MS is the founder and lead author of the Triple P-Positive Parenting Program, and is a consultant to Triple P International. JP has no competing interests. JK is a co-author of Grandparent Triple P. KT is a co-author of many of the Triple P interventions and resources for families of children up to 12 years of age. AM is a co-author of several Triple P interventions for young children including Fuss-Free Mealtime Triple P. TM is a co-author of Stepping Stones Triple P for families of children with disabilities. AR is a co-author of Teen Triple P for parents of adolescents, and is Head of Training at Triple P International. KS has no competing interests.

The authors seem to be acknowledging receiving money as “contributory authors,” but there is still a lot of beating around the bush. Again, one needs to know more about the university’s intellectual property policy. Okay, take the trouble to go to the website for the University of Queensland to determine just how lucrative the arrangements are. You will surely say “Wow!” if you keep in mind the multimillion-dollar nature of the 3P enterprise.


The present commentary in BMC Medicine seems to improve transparency –

The Triple P – Positive Parenting Program is owned by The University of Queensland (UQ). The University through its main technology transfer company, UniQuest Pty Ltd, has licensed Triple P International Pty Ltd to publish and disseminate the program worldwide. Royalties stemming from published Triple P resources are distributed to the Faculty of Health and Behavioural Sciences at UQ, Parenting and Family Support Centre, School of Psychology at UQ, and contributory authors. No author has any share or ownership in Triple P International Pty Ltd. Cassandra Tellegen and Kate Sofronoff are employees of the UQ and members of the Triple P Research Network

But the disclosure remains evasive and misleading. One has to look elsewhere to find out that there is only a single share of Triple P International Pty Ltd, owned by Mr Des McWilliam. He was awarded an honorary doctorate by the University of Queensland in 2009. The citation acknowledged that

Mr McWilliam’s relationship with Triple P had provided grant leveraging, both nationally and internationally, for ongoing research by the PFSC and had supported ongoing international trials of the program.

Interesting, but there is still an undeclared COI that is required for adherence to the International Committee of Medical Journal Editors (ICMJE) recommendations, to which BMC Medicine subscribes. Just as Matt Sanders is married to Patricia Sanders, Cassandra L Tellegen is married to James Kirby, a psychologist who has written at least 12 articles with Sanders on 3P and a 3P workbook for grandparents. Aha, both Sanders and Tellegen are married to persons financially benefiting from 3P programs. All in the family. And spousal relationships are reportable conflicts of interest.

I don’t know about you, but I’m getting damn sick and tired of all the shuck ’n’ jiving from Triple P Parenting when they’re required to disclose conflicts of interest.

Why get upset about conflicts of interest in evaluations of nonpharmacological trials and reviews?

My colleagues and I played a role in improving the tracking of conflicts of interest from industry-supported clinical trials through to their inclusion in meta-analyses. Our criticism prompted the Cochrane Collaboration to close a loophole: investigator conflict of interest had not been identified as a formal risk of bias. Prior to the change, results of an industry-sponsored pharmacological trial could be entered into a meta-analysis where their origins were no longer apparent. The Collaboration awarded us the Bill Silverman Award for pointing out the problem.

It’s no longer controversial that, in evaluations of pharmacological interventions, financial conflicts of interest are associated with inflated claims of efficacy. But the issue is ignored in evaluating nonpharmacological interventions, like psychotherapies or social programs such as 3P.

Undeclared conflicts of interest in nonpharmacological trials threaten the trustworthiness of the psychological literature.

Readers are almost never informed about conflicts of interest in trials evaluating psychotherapies or in the integration of those trials into meta-analyses. Yet “investigator allegiance,” a.k.a. undeclared conflict of interest, is one of the most robust predictors of effect size. Indeed, knowing the allegiance of an investigator more reliably predicts the direction of results than the particular psychotherapy being evaluated.

As reviewed in my numerous blog posts [1, 2, 3], there is no doubt that evaluations of 3P are inflated by a strong confirmation bias associated with undeclared conflicts of interest.

But the problem is bigger than that when it comes to 3P. Millions of dollars are being invested on claims that improvements in parenting skills resulting from parents’ participation in 3P are a solution for pressing larger social problems. The money being wasted on 3P is diverted from other solutions. And participation of parents in 3P programs is often not voluntary: they enroll in 3P to avoid other adverse outcomes, like removal of their children from the home. That’s not a fair choice, when 3P may not provide them any benefit and certainly not what it is advertised as providing.

We should learn from the results of President George W. Bush committing hundreds of millions of dollars to promote stable and healthy marriages. The evidence for the programs selected for implementation was almost entirely from small-scale, methodologically flawed studies conducted by their developers, who typically did not publish with declared conflicts of interest. Later evaluations showed the programs to be grossly ineffective. An independent evaluation showed that positive findings for the particular programs did not occur more often than would be expected by chance. What a waste, but I doubt President Bush cared. As part of a larger package, he was able to slash welfare payments to the poor and shorten the allowable time for unemployment payments.

Politicians will accept ineffective social programs if they are in the service of being able to claim that they are not just doing nothing but are offering solutions. And ineffective social programs are particularly attractive when they cost less than a serious effort to address the social problems.


What I’m asking of BMC Medicine: A model response

  • Consistent with Committee on Publication Ethics (COPE) recommendations, persons with conflicts of interest should not be invited to write commentaries. I’m not sure that wanting to respond to null findings for their prized product is a justifiable override of this restriction. But if a commentary is deemed justified, there must be no ambiguity about the declaration of conflicts of interest by the authors.
  • If journals have a policy of commentaries not undergoing peer review, it should be indicated at each and every commentary that this is the case. That would be consistent with COPE recommendations concerning non-peer-reviewed papers in journals identifying themselves as peer-reviewed.
  • Consistent with the opinion of many universities, failure to declare conflicts of interest constitutes scientific misconduct.
  • Scientific misconduct is grounds for retraction. Saying “Sorry, we forgot” in an erratum is an inadequate response. We need some sort of expanded Pottery Barn rule by which journals don’t just allow authors to publish an apology when the journal discovers an undeclared conflict of interest.
  • Articles for which authors declare conflicts of interest should be subject to particular editorial scrutiny, given the common association of conflicts of interest and spinning of results and other confirmatory bias.
  • Obviously, 3P promoters have had problems figuring out what conflicts of interest they have to declare. How about requiring all articles to include a statement I first saw in a BMJ article, something like:

I have read all ICMJE standards and on that basis declare the following:

If authors are going to lie, let’s make it obvious and more actionable.

Please listen up, PLOS One

I am grateful to PLOS One for carefully investigating my charges that the authors of an article had substantial undeclared conflicts of interest.

The situation was outrageous. Aside from the conflicts of interest, the article was – as I documented in my blog post – neurobalm. The appearance of positive results was obtained by selective reporting of data from analyses redone after previous analyses did not produce positive results. A misleading video was released on the Internet, accompanied by soft music and claims to demonstrate scientific evidence in PLOS One that a particular psychotherapy “soothed the threatened brain.” Yup, that was also in the title of the PLOS One article. The highly spun article was part of the marketing of workshops to psychotherapists who likely had little or no research training.

I volunteer as an Academic Editor for PLOS One and I resent the journal being caught up in misleading clinicians – and the patients they treat.

Upon investigation, the journal added an elaborate conflict of interest statement to the article. I’m impressed with the diligence with which the investigation was conducted.

Yet the absence of a previous statement meant that the authors had denied any conflicts of interest in response to a standard query from the journal during the submission process. I think their failure to make an appropriate disclosure is scientific misconduct. Retraction should be considered.

Given the strong association between conflicts of interest or investigator allegiance and the outcomes of psychosocial research, revelation of the undisclosed conflict of interest should at least have precipitated a careful re-review with heightened suspicion of spin and bias. And not by an editor who had not been informed of the conflict of interest and had missed the flaws the first time the article was reviewed. Editors are human; they get defensive when embarrassed.

Disclaimer: The opinions I express here are my own, and not necessarily those of PLOS One or other members of the editorial board. Thankfully, at Mind the Brain, bloggers are free to speak out for themselves without censorship or even approval from the sponsoring journal. Remember what happened at Psychology Today and how I came to blog here.

 

 

Positive psychology interventions for depressive symptoms

I recently talked with a junior psychiatrist about whether she should undertake a randomized trial of positive psychology interventions with depressed primary care patients. I had concerns about whether positive psychology interventions would be acceptable to clinically depressed primary care patients, or off-putting and even detrimental.

Going back to my first publication almost 40 years ago, I’ve been interested in the inept strategies that other people adopt to try to cheer up depressed persons. The risk of positive psychology interventions is that depressed primary care patients would perceive the exercises as more ineffectual pressure on them to think good thoughts, be optimistic, and snap out of their depression. If depressed persons try these exercises without feeling better, they accumulate more failure experiences and further evidence that they are defective, particularly in the context of glowing claims in the popular media about the power of simple positive psychology interventions to transform lives. Some depressed people develop an acute sensitivity to superficial efforts to make them feel better. Their depression is compounded by their sense of coercion and the invalidation of what they are so painfully feeling. This is captured in the hilarious Ren & Stimpy classic:

Happy Helmet “Happy Happy Joy Joy” song video

Something borrowed, something blue

By positive psychology interventions, my colleague and I didn’t have in mind techniques that positive psychology borrowed from cognitive therapy for depression. Ambitious positive psychology school-based interventions like the UK Resilience Program incorporate these techniques. They have been validated for use with depressed patients as part of Beck’s cognitive therapy, but are largely ineffective when used with nonclinical populations that are not sufficiently depressed to register an improvement. Rather, we had in mind interventions and exercises that are distinctly positive psychology.

Dr. Joan Cook, Dr. Beck, and Jim Coyne

I surveyed the positive psychology literature to get some preliminary impressions, forcing myself to read the Journal of Positive Psychology and even the Journal of Happiness Studies. I sometimes had to take breaks and go see dark movies as an antidote, such as A Most Wanted Man and The Drop, both of which I heartily recommend. I will soon blog about the appropriateness of positive psychology exercises for depressed patients. But this post concerns a particular meta-analysis that I stumbled upon. It is open access and downloadable anywhere in the world. You can obtain the article and form your own opinions before considering mine, or double-check mine:

Bolier, L., Haverman, M., Westerhof, G. J., Riper, H., Smit, F., & Bohlmeijer, E. (2013). Positive psychology interventions: a meta-analysis of randomized controlled studies. BMC Public Health, 13(1), 119.

I had thought this meta-analysis just might be the comprehensive, systematic assessment of the literature for which I searched. I was encouraged that it excluded positive psychology interventions borrowed from cognitive therapy. Instead, the authors sought studies that evaluated

the efficacy of positive psychology interventions such as counting your blessings [29,30], practicing kindness [31], setting personal goals [32,33], expressing gratitude [30,34] and using personal strengths [30] to enhance well-being, and, in some cases, to alleviate depressive symptoms [30].

But my enthusiasm was dampened by the wishy-washy conclusion prominently offered in the abstract:

The results of this meta-analysis show that positive psychology interventions can be effective in the enhancement of subjective well-being and psychological well-being, as well as in helping to reduce depressive symptoms. Additional high-quality peer-reviewed studies in diverse (clinical) populations are needed to strengthen the evidence-base for positive psychology interventions.

Can be? With apologies to Louis Jordan, is they or ain’t they effective? And just why is additional high-quality research needed to strengthen conclusions? Because there are only a few studies or because there are many studies, but mostly of poor quality?

I’m so disappointed when authors devote the time and effort that meta-analysis requires and then beat around the bush with such wimpy, noncommittal conclusions.

A first read alerted me to some bad decisions that the authors had made from the outset. Further reads showed me how the effects of these decisions were compounded by the poor quality of the literature of which they had to make sense.

I understand the dilemma the authors faced. The positive psychology intervention literature has developed in collective defiance of established standards for evaluating interventions intended to benefit people, especially interventions to be sold to people who trust that they are beneficial. To have something substantive to say about positive psychology interventions, the authors of this meta-analysis had to lower their standards for selecting and interpreting studies. But they could have done a better job of integrating acknowledgment of problems in the quality of this literature into their evaluation of it. Any evaluation should come with a prominent warning label about the poor quality of studies and the evidence of publication bias.

The meta-analysis

Meta-analyses involve (1) systematic searches of the literature; (2) selection of studies meeting particular criteria; and (3) calculation of standardized effect sizes to allow integration of results of studies with different measures of the same construct. Conclusions are qualified by (4) quality ratings of the individual studies and by (5) calculation of the overall statistical heterogeneity of the study results.

The authors searched

PsychInfo, PubMed and the Cochrane Central Register of Controlled Trials, covering the period from 1998 (the start of the positive psychology movement) to November 2012. The search strategy was based on two key components: there should be a) a specific positive psychology intervention, and b) an outcome evaluation.

They also found additional studies by crosschecking references of previous evaluations of positive psychology interventions.

To be selected, a study had to

  • Be developed within the theoretical tradition of positive psychology.
  • Be a randomized controlled study.
  • Measure outcomes of subjective well-being (such as positive affect), personal well-being (such as hope), or depressive symptoms (such as the Beck Depression Inventory).
  • Have results reported in a peer-reviewed journal.
  • Provide sufficient statistics to allow calculation of standardized effect sizes.

I’m going to focus on evaluation of interventions in terms of their ability to reduce depressive symptoms. But I think my conclusions hold for the other outcomes.

The authors indicated that their way of assessing the quality of studies (0 to 6) was based on a count derived from an adaptation of the risk of bias items of the Cochrane Collaboration. I’ll discuss their departures from the Cochrane criteria later, but these authors’ six criteria were:

  • Adequacy of concealment of randomization.
  • Blinding of subjects to which condition they had been assigned.
  • Baseline comparability of groups at the beginning of the study.
  • Whether there was an adequate power analysis or at least 50 participants in the analysis.
  • Completeness of follow up data: clear attrition analysis and loss to follow up < 50%.
  • Handling of missing data: the use of intention-to-treat analysis, as opposed to analysis of only completers.

The authors used two indicators to assess heterogeneity:

  • The Q-statistic. When significant, it calls for rejection of the null hypothesis of homogeneity and indicates that the true effect size probably varies from study to study.
  • The I²-statistic, a percentage indicating how much of the study-to-study dispersion of effect sizes is due to real differences, beyond sampling error.

[I know, this is getting technical, but I will try to explain as we go. Basically, the authors estimated the extent to which the effect size they obtained could generalize back to the individual studies. When individual studies vary greatly, an overall effect size for a set of studies can be very different from that of any individual intervention. So without figuring out the nature of this heterogeneity and resolving it, the effect sizes do not adequately represent individual studies or interventions.]
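To make the two statistics concrete, here is a minimal sketch of how Q and I² fall out of a fixed-effect pooling of effect sizes. All the effect sizes and variances below are invented for illustration; they are not taken from the meta-analysis under discussion.

```python
import numpy as np

# Hypothetical effect sizes (Cohen's d) and sampling variances for
# five studies; the numbers are invented for illustration only.
d = np.array([0.15, 0.40, 0.05, 1.20, 0.25])
v = np.array([0.02, 0.05, 0.01, 0.30, 0.04])

w = 1.0 / v                               # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)      # fixed-effect pooled estimate

# Cochran's Q: weighted squared deviations from the pooled effect.
Q = np.sum(w * (d - d_pooled) ** 2)
df = len(d) - 1

# I-squared: percentage of dispersion beyond sampling error, floored at 0.
I2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled d = {d_pooled:.2f}, Q = {Q:.1f} (df = {df}), I2 = {I2:.0f}%")
```

The outlier study (d = 1.20) inflates both statistics; remove it from the array and Q and I² drop sharply, which is exactly the sensitivity-analysis logic described next.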

One way of reducing heterogeneity is to identify outlier studies that have much larger or smaller effect sizes than the rest. These studies can simply be removed from consideration, or sensitivity analyses can be conducted in which results with and without the outlier studies are compared.

The authors expected big differences across the studies and so adopted as their criterion for retaining a study a Cohen’s d (the standardized difference between intervention and control group) of up to 2.5. That is huge. The average psychological intervention for depression differs from a waitlist or no-treatment group by d = .62, but from another active treatment by only d = .20. How could these authors think that even an effect size of 1.0 with largely nonclinical populations could be expected for positive psychology interventions? They were at risk of letting in a lot of exaggerated and nonreplicable results. But stay tuned.
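For readers who want the definition pinned down: Cohen’s d is just the difference between group means divided by the pooled standard deviation. A minimal sketch, with invented group summaries (not from any study discussed here):

```python
import math

# Cohen's d: difference between group means divided by the pooled SD.
def cohens_d(m1, s1, n1, m2, s2, n2):
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

# E.g., intervention mean depression score 12 (SD 5) vs control 15 (SD 5.5),
# 40 participants per group; lower scores are better, so d is negative.
print(round(cohens_d(12.0, 5.0, 40, 15.0, 5.5, 40), 2))  # about -0.57
```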

The authors also examined the likelihood that there was publication bias in the studies that they were able to find, using funnel plots, Orwin’s fail-safe number, and the trim-and-fill method. I will focus on the funnel plot because it is graphic, but the other approaches yield similar results. The authors of this meta-analysis state:

A funnel plot is a graph of effect size against study size. When publication bias is absent, the observed studies are expected to be distributed symmetrically around the pooled effect size.

Hypothetical funnel plot indicating bias
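If you want to see for yourself what the asymmetry looks like, here is a minimal simulation of the phenomenon the authors describe: when small studies with unimpressive results go unpublished, one side of the lower funnel is hollowed out. Every number here is invented.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 200 hypothetical studies with a true effect of d = 0.20
# and per-group sample sizes from 10 to 200.
n = rng.integers(10, 200, size=200)
se = np.sqrt(2.0 / n)            # rough standard error of d
d = rng.normal(0.20, se)

# Mimic publication bias: only "significant" results get published.
published = d / se > 1.96

plt.scatter(d[published], se[published], s=12)
plt.gca().invert_yaxis()         # large studies (small SE) at the top
plt.axvline(0.20, linestyle="--")
plt.xlabel("Effect size (d)")
plt.ylabel("Standard error")
plt.title("Asymmetric funnel: small null studies are missing")
plt.show()
```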

 

Results

At the end of the next two sections, I will conclude that the authors were overly generous in their evaluation of positive psychology interventions. The quality of the available studies precludes deciding whether positive psychology interventions are effective. But don’t accept this conclusion without my documenting my reasons for it. Please read on.


The systematic search identified 40 articles presenting results of 39 studies. The overall quality ratings of the studies were quite low [see Table 1 in the article], with a mean score of 2.5 (SD = 1.25). Twenty studies were rated as low quality (<3), 18 as medium quality (3-4), and one received a rating of 5. The studies with the lowest quality had the largest effect sizes (Table 4).

Fourteen effect sizes were available for depressive symptoms. The authors report an overall small effect size of positive psychology interventions on depressive symptoms of .23. Standards for evaluating effect sizes are arbitrary, but this one would generally be considered small.

There were multiple indications of publication bias, including the funnel plot of these effect sizes, and it was estimated that five negative findings were missing. According to the authors:

Funnel plots were asymmetrically distributed in such a way that the smaller studies often showed the more positive results (in other words, there is a certain lack of small insignificant studies).

When the effect sizes for the missing studies were imputed (estimated), the adjusted overall effect size for depressive symptoms was reduced to a nonsignificant .19.

To provide some perspective, let’s examine the statistics for an effect size of approximately .20. There is a 56% probability (as opposed to a 50/50 probability) that a person assigned to a positive psychology intervention would be better off than a person assigned to the control group.

Created by Kristoffer Magnusson. http://rpsychologist.com/d3/cohend/
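The 56% figure is easy to check: it is the so-called common language effect size, a direct conversion of d assuming normally distributed outcomes. A minimal sketch:

```python
from scipy.stats import norm

# Common language effect size: the probability that a randomly chosen
# person from the intervention group scores better than a randomly
# chosen control; for normal outcomes this is Phi(d / sqrt(2)).
d = 0.20
print(f"{norm.cdf(d / 2 ** 0.5):.1%}")  # about 55.6%, i.e., roughly 56%
```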

But let’s take a closer look at a forest plot of the studies with depressive symptoms as an outcome.

As can be seen in the figure below, each study has a horizontal line in the forest plot, and most have a square box in the middle. The line represents the 95% confidence interval for the standardized mean difference between the positive psychology intervention and its control group, and the darkened square is the mean difference.

Forest plot of effect sizes for depressive symptoms

Note that two studies, Fava (2005) and Seligman, study 2 (2006), have long lines with an arrow at the right, but no darkened squares. The arrow indicates that the line for each extends beyond what is shown in the graph. The long line for each indicates wide confidence intervals and imprecision in the estimated effect. Implications? Both studies are extreme outliers with large but imprecise estimates of effect sizes. We will soon see why.

There are also vertical lines in the graph. One is marked 0,00 and indicates no difference between the intervention and control group. If the line for an individual study crosses it, the difference between the intervention and control group was not significant.
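The same logic can be written out in a few lines: compute the 95% confidence interval for a study’s d and check whether it spans zero. The effect size and standard error below are invented for illustration.

```python
# One forest-plot row in miniature: a 95% confidence interval for d
# that crosses zero means the study's difference was not significant.
d, se = 0.23, 0.15
lo, hi = d - 1.96 * se, d + 1.96 * se
verdict = "not significant" if lo < 0 < hi else "significant"
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] -> {verdict}")
```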

Among the things to notice are:

  • Ten of the 14 effect sizes available for depressive symptoms cross the 0,00 line, indicating that the individual effect sizes were not significant.
  • The four lines that don’t cross this line, and therefore reflect significant effects, were Fava (2005), Hurley, Mongrain, and Seligman (2006, study 2).

Checking Table 2 for the characteristics of the studies, we find that Fava compared 10 people receiving the positive psychology intervention to a control group of 10. Seligman had 11 people in the intervention group and 9 in the control group. Hurley is listed as comparing 94 people receiving the intervention to 99 controls. But I checked the actual study, and these numbers represent a substantial loss of participants from the 151 intervention and 164 control participants who started the study. Hurley lost 39% of participants by the Time 2 assessment and analyzed only completers, without intent-to-treat analyses or imputation (which would have been inappropriate anyway because of the high proportion of missing data).

I cannot make sense of Mongrain’s studies being counted as positive. A check with Table 1 indicates that 4 studies with Mongrain as an author were somehow combined. Yet, when I looked them up, one study reported no significant differences between intervention and control conditions for depression, with the authors explicitly indicating that they failed to replicate Seligman et al. (2006). A second study reports

In terms of depressive symptoms, no significant effects were found for time or time x condition. Thus, participant reports of depressive symptoms did not change significantly over time, or over time as a function of the condition that they were assigned to.

A third study reported significant effects for completers, but nonsignificant effects in multilevel modeling analyses that attempted to compensate for attrition. The fourth study again failed to find that the decline in depressive symptoms over time was a function of the group to which participants were assigned, in multilevel analyses attempting to compensate for attrition.

So, Mongrain’s studies should not be counted as having a positive effect size for depressive symptoms, unless perhaps we accept a biased completer analysis over multilevel modeling. We are left with Fava’s and Seligman’s quite small studies and Hurley’s study relying on completer analyses without adjustment for substantial attrition.

By the authors’ ratings, the quality of these studies was poor. Fava and Seligman both scored 1 out of 6 in the quality assessments. Hurley scored 2. Mongrain scored 4, and the other negative studies had a mean score of 2.6. So, any claim from individual studies that positive psychology interventions have an effect on depressive symptoms depends on two grossly underpowered studies and another study with analysis of only completers in the face of substantial attrition. And the positive studies tend to be of lower quality.

But the literature concerning positive psychology interventions is worse than it first looks.

The authors’ quality ratings are too liberal.

  • Item 3, baseline comparability of groups at the beginning of the study, is essential if effect sizes are to be meaningful. But it becomes meaningless if such grossly underpowered studies are included. For instance, it would take a large difference in the baseline characteristics of Fava’s 8 intervention versus 8 control participants to be significant. That there were no significant differences in baseline characteristics provides very weak assurance that individual or combined baseline characteristics did not account for any differences that were observed.
  • Item 4, whether there was an adequate power analysis or at least 50 participants in the analysis, can be met in either of two ways. But we don’t have evidence that the power analyses were conducted prior to the conduct of the trial, and having at least 50 participants does not reduce bias if there is substantial attrition.
  • Item 5, completeness of follow-up data (clear attrition analysis and loss to follow-up < 50%), allows studies with substantial loss to follow-up to score positive. Hurley’s loss of over a third of the participants who were randomized rules out generalization of results back to the original sample, much less an effect size that can be integrated with those of other studies that did not lose so many participants.

The authors of this meta-analysis chose to “adapt,” rather than simply accept, the validated Cochrane Collaboration risk of bias assessment. One Cochrane criterion is whether the randomization procedure is described in sufficient detail to decide whether the intervention and control group would be comparable except for group assignment. These studies typically did not provide sufficient details of any care having been taken to ensure this, or any details whatsoever except that the study was randomized.

Another criterion is whether there is evidence of selective outcome reporting. I would not score any of these studies as demonstrating that all outcomes were reported. The issue is that authors can assess participants with a battery of psychological measures and then pick the measures that differed significantly between groups to be highlighted.

The Cochrane Collaboration includes a final criterion, “other sources of bias.” In doing meta-analyses of psychological intervention studies, considering investigator allegiance is crucial, because the intervention for which the investigator is rooting almost always does better. My group’s agitation about financial conflicts of interest won us the Bill Silverman Award from the Cochrane Collaboration. The Collaboration is now revising its other-sources-of-bias criterion so that conflicts of interest are taken into account. Some authors of articles about positive psychology interventions profit immensely from marketing positive psychology merchandise. I am not aware of any of the studies included in the meta-analysis having disclosures of conflicts of interest.

If you think I am being particularly harsh in my evaluation of positive psychology interventions, you need only consult my numerous other blog posts about meta-analyses and see the consistency with which I apply standards. And I have not even gotten to my pet peeves in evaluating intervention research – overly small cell sizes and “control groups” that are not clear on what is being controlled.

The number of participants in some of these studies is so small that the intended effects of randomization cannot be assured, and any positive findings are likely to be false positives. If the number of participants in either the intervention or control group is less than 35, there is less than a 50% probability of detecting a moderate-sized positive effect, even if it is actually there. Put differently, there is a heightened probability that any significant finding will be a false positive. Inclusion of studies with so few participants undermines the validity of the other quality ratings. We cannot tell why Fava or Seligman did not have one more or one fewer participant. These are grossly underpowered studies, and adding or dropping a single participant in either group could substantially change results.
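The power claim is easy to verify with a standard calculation. A minimal sketch using statsmodels, assuming a two-sided two-sample t-test, alpha = .05, and a “moderate” effect of d = 0.5:

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t-test (alpha = .05, two-sided) to detect a
# moderate effect (d = 0.5) with 35 participants per group; statsmodels
# puts this at roughly 0.54, and it drops below 0.5 with smaller groups.
power = TTestIndPower().power(effect_size=0.5, nobs1=35, alpha=0.05)
print(f"power with 35 per group: {power:.2f}")
```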

Then there is the question of control groups. While some studies simply indicate waitlist, others had an undefined treatment as usual, or no treatment, and a number of others indicate “placebo,” apparently following Seligman et al.’s (2005):

Placebo control exercise: Early memories. Participants were asked to write about their early memories every night for one week.

As Mongrain correctly noted, this is not a “placebo.” Seligman et al. and the studies modeled after theirs failed to include any elements of positive expectation, support, or attention that are typically provided in conditions labeled “placebo.” Mongrain and her colleagues attempted to provide such elements in their control condition, and perhaps this contributed to their negative findings.

A revised conclusion for this meta-analysis

Instead of the wimpy conclusion the authors presented in their abstract, I would suggest an acknowledgment that:

The existing literature does not provide robust support for the efficacy of positive psychology interventions for depressive symptoms. The absence of evidence is not necessarily evidence of an absence of an effect. However, more definitive conclusions await better quality studies with adequate sample sizes and suitable control of possible risk of bias. Widespread dissemination of positive psychology interventions, particularly with glowing endorsements and strong claims of changing lives, is premature in the absence of evidence they are effective.

Can the positive psychology intervention literature be saved from itself?

Studies of positive psychology interventions are conducted, published, and evaluated in a gated community where vigorous peer review is neither sought nor, apparently, effective in identifying and correcting major flaws in manuscripts before they are published. Many within the positive psychology movement find this supportive environment an asset, but it has failed to produce a quality literature demonstrating that positive psychology interventions can indeed contribute to human well-being. Positive psychology intervention research has been insulated from widely accepted standards for doing intervention research. There is little evidence that any of the manuscripts reporting the studies were submitted with completed CONSORT checklists, which are now required by most journals. There is little evidence of awareness of Cochrane risk of bias assessment or of steps being taken to reduce bias.

In what other area of intervention research are claims for effectiveness so dependent on such small studies of such low methodological quality published in journals in which there is only limited independent peer review and such strong confirmatory bias?

As seen on its Friends of Positive Psychology listserv, the positive psychology community is averse to criticism, even constructive criticism from within its ranks. There is dictatorial one-person rule on the listserv. Dissenters routinely vanish without any due process or notice to the rest of the listserv community, much like disappearances under a Latin American dictatorship.

There are many in the positive psychology movement who feel that the purpose of positive psychology research is to uphold the tenets of the movement and to show, not test, the effectiveness of its interventions for changing lives. Investigators who want to evaluate positive psychology interventions need to venture beyond the safety and support of the Journal of Positive Psychology and the Journal of Happiness Studies to seek independent peer review, informed by widely accepted standards for evaluating psychological interventions.