How Understanding Psychosis could have been more credible and trustworthy


As promised, this issue of Mind the Brain explains how the British Psychological Society Division of Clinical Psychology's Understanding Psychosis could have been much more credible and trustworthy.

I point to well-founded skepticism about like-minded, self-selected groups from a single profession, lacking any cultural diversity, trying to tell clinicians and policymakers how health services ought to be reorganized. The folly of the Division of Clinical Psychology's way of doing things is compounded by the exclusion of key consumer stakeholders. I will describe standards and procedures that were blatantly ignored in the writing and dissemination of their recommendations.

The Division of Clinical Psychology is preparing a companion document about depression. I hope there is still time for them to adopt international standards. But they would have to open themselves to diversity and desegregate, allowing ethnic minorities, especially Black African and Black British people, a seat at the table. I will explain why the systematic exclusion of this group from the deliberations is particularly egregious, given the gross inequalities in the services they receive, often from almost uniformly white clinical psychologists.

I take seriously the authors' claim that they wanted to provide an authoritative source of information for mental health service users, their family members, and the professionals, policymakers, and members of the community attempting to decide on the best policies for dealing with persons described as suffering from psychosis or schizophrenia. But I don't accept the document simply because the authors claim they are experts or that they are creating a paradigm shift.

Skepticism is warranted when professional groups crow too loudly about their expertise and about creating a paradigm shift. Rhetorically, professionals fare better when they show what they have to offer and leave it to others to decide whether they should be labeled “experts” or credited with a paradigm shift.


I recall Dan Haller, former editor of the Journal of Clinical Oncology, poking fun at authors who submitted manuscripts that they claimed represented paradigm shifts. Maybe, in hindsight, Galileo and Einstein deserve that label, but no paper Haller had ever reviewed about chemotherapy, radiation treatment, or immunotherapy earned it. He felt that authors claiming a paradigm shift simply embarrassed themselves.

When I subjected the 180-page document to my usual skeptical, critical scrutiny, its credibility and trustworthiness simply didn't hold up. It seemed to be a collection of carefully selected and edited quotes with minimal and unsystematic reference to the literature. It seemed to crassly sacrifice the well-being of persons with psychosis and schizophrenia – white, Black African and Black British, and other groups – to the professional self-interests of a small group of psychologists.

To an American like me, Understanding Psychosis seems like a bit of old-fashioned British colonial administration. Clad in pith helmets, the British clinical psychologists went out and recruited a few supporters who shared their views and silenced the rest of the service users and their families who would be so affected by their proposals – pretending they don't even exist. And as I noted in my last blog post, there are grounds to doubt that a good proportion of the supporters whom they quote are even service users.

When I raised the issues of consensus and process in Understanding Psychosis on Twitter in November 2014, I got an immediate response from the official Division of Clinical Psychology Twitter account, to which I replied.

My “debunking” – if that's what @DCP wants to call it – involves systematically gathering relevant evidence and evaluating it by transparent standards that others have agreed upon. When I formally do this in peer-reviewed articles, I typically involve other people as a check on my biases, as well as procedures by which readers can decide for themselves on the validity of my conclusions. When I am flying solo, that needs to be taken into account, and readers should start with greater skepticism. I am more dependent on sufficiently documenting my evidence in order to persuade them.

Some questions are clearly defined enough to proceed directly to a systematic search for relevant evidence: “Does screening for psychological distress improve patient outcomes?”

But many questions, like “Should we abandon psychiatric diagnosis?” or “How do we best organize services to ensure better patient outcomes?”, involve potentially controversial decisions about how to sharpen the questions in order to gather relevant evidence. It's best to get a diversity of opinions from both professionals and service consumers to define the range of possibilities. There need to be checks on biases, with the hope that these can be overcome by a consensus process among people starting with clear differences of opinion. That is not just an ideal; it's a necessity if a professional group is going to claim authority for its recommendations. I am typically not operating in that context, and so the credibility of what my co-authors and I come up with rests on the strength of the evidence; we leave to others the decisions about how or whether recommendations will be implemented.

There are some widely accepted standards for bringing relevant stakeholders together, reviewing available evidence, and formulating recommendations. There is a lot of evidence about the consequences when these procedures are – and are not – followed.

But before getting into them, let me describe how I came to appreciate both the necessity of standards for professional organizations formally making policy recommendations and the existence of rules by which such organizations should proceed and be evaluated.

Our 2008 JAMA systematic review and meta-analysis of screening for depression in cardiac patients and reactions from the American Psychiatric Association.

Our paper was

Thombs, B. D., de Jonge, P., Coyne, J. C., Whooley, M. A., Frasure-Smith, N., Mitchell, A. J., … & Ziegelstein, R. C. (2008). Depression screening and patient outcomes in cardiovascular care: a systematic review. JAMA, 300(18), 2161-2171.

Our international group of authors had published key papers and book chapters on the topic of screening for depression, as well as on the role of depression in cardiovascular disease (CVD). We neither proclaimed ourselves “experts” nor had the endorsement of a professional organization backing up our conclusions. But we identified and followed well-defined standards for turning clinical and policy issues into topics for systematic review and meta-analysis. And we were quite transparent about what we did and how it conformed to international standards.

Our conclusion was

The high prevalence of depression in patients with CVD, the adverse health care outcomes associated with depression, and the availability of easy-to-use case-finding instruments make it tempting to endorse widespread depression screening in cardiovascular care. However, the adaptation of depression screening in cardiovascular care settings would likely be unduly resource intensive and would not be likely to benefit patients in the absence of significant changes in current models of care.

The JAMA editors liked the paper enough to invite some of the authors to participate in a live webinar, with participants able to phone in and email questions. The editors of BMJ nominated the paper as one of the eight top papers of the year to be considered in a competition for the top paper.

I was caught off guard when, just a few weeks later, a paper appeared on the Internet labeled as an American Heart Association Science Advisory, with a list of impressive committees signing on to its conclusions and the American Psychiatric Association prominently listed as endorsing the advisory.

The recommendations directly contradicted ours:

Although there is currently no direct evidence that screening for depression leads to improved outcomes in cardiovascular populations, depression has been linked to increased morbidity and mortality, poorer risk factor modification, lower rates of cardiac rehabilitation, and reduced quality of life.  Therefore, it is important to assess depression in cardiac patients with the goal of targeting those most in need of treatment and support services.

And

In summary, the high prevalence of depression in patients with CHD supports a strategy of increased awareness and screening for depression in patients with CHD.

Politics versus rules of making evidence-based decisions

Our conclusions were based on best evidence and transparent rules for evaluating that evidence. The AHA Science Advisory was based on a consensus of professionals – psychologists and psychiatrists – who had vested interests in promoting screening because it would increase their professional opportunities in cardiology settings.

Although publicity for our article had some momentum, the promoters of the AHA Science Advisory jumped into the media with a lot of political power to counter our conclusions, usually failing to acknowledge who we were and where we had published. The American Psychiatric Association actually assigned a pediatric psychiatrist to serve as a media contact for their point of view.

I had naïvely thought that best evidence would trump the consensus of professionals with obvious self-interests at stake. The weight of evidence was clearly on our side. But one of our cardiologist co-authors, Roy Ziegelstein, was not at all surprised by the carefully orchestrated reaction.

Roy negotiated us an opportunity with the American Heart Journal and the Journal of the American College of Cardiology to explain the differences between our conclusions and those of the AHA Science Advisory. Although we were up against strong vested interests, cardiologists themselves were not necessarily in agreement with the Science Advisory. In fact, the American Heart Association continually updates its evaluations of whether factors correlated with cardiovascular outcomes qualify as causal factors. To this day, it still has not accepted depression as a causal factor, only as a risk marker. The implication is that changing depression may not necessarily affect cardiac outcomes.

In our commentary in the American Heart Journal, we noted the discrepancy between the results of our systematic review and meta-analysis and the conclusions of the AHA Science Advisory. We also noted that we were not alone in expressing concern that guidelines issued by the American Heart Association were increasingly based on simple professional consensus rather than a systematic review of the evidence. Consequently, many of them did not represent best evidence.

“In guidelines we cannot trust”

Our skirmishing with the AHA Science Advisory and the American Psychiatric Association occurred at a time when recognition was already growing that the recommendations of professional organizations were untrustworthy. Numerous instances were documented in which recommendations were not evidence-based but served the self-interests of the professional groups that created them, often at the expense of patient outcomes. Many of the recommendations were for billable procedures that were unnecessary and even harmful to patients.

The title of a later article captured the rampant skepticism of the time:

Shaneyfelt T. In guidelines we cannot trust. Arch Intern Med 2012;172:1633-1634.

There were lots of proposals for reform, like a series that included

Fretheim A, Schunemann HJ, Oxman AD. Improving the use of research evidence in guideline development: 5. Group processes. Health Res Policy Syst 2006;4:17.

But discontent got all the way to the U.S. Congress, which authorized that the Institute of Medicine (IOM) be given the resources to organize a panel with wide representation to come up with, as the final 250-page document was titled, Clinical Guidelines We Can Trust. You can download a free PDF here.

The rationale for the specific procedures is spelled out in the document, but in terms of the final product:

To be trustworthy, guidelines should

  • Be based on a systematic review of the existing evidence.
  • Be developed by a knowledgeable, multidisciplinary panel of experts and representatives from key affected groups.
  • Consider important patient subgroups and patient preferences, as appropriate.
  • Be based on an explicit and transparent process that minimizes distortions, biases, and conflicts of interest.
  • Provide a clear explanation of the logical relationships between alternative care options and health outcomes, and provide ratings of both the quality of evidence and the strength of the recommendations.
  • Be reconsidered and revised as appropriate when important new evidence warrants modifications of recommendations.

The standards seem eminently reasonable, and the deliberations by which they were reached are carefully documented. Yet Understanding Psychosis fails miserably as a set of credible policy recommendations because it meets none of them. The process of writing the document was deeply flawed:

  • The British Psychological Society Division of Clinical Psychology professionals did not engage other professionals with complementary viewpoints and expertise.
  • Key stakeholders were simply excluded – primary care physicians, social workers, psychiatrists, police and corrections personnel who must make decisions about how to deal with disturbed behavior, and – most importantly – the family members of persons with severe disturbance.
  • There was no clear explicit process to minimize bias and distortion and no transparency as to how the group arrived at particular conclusions.
  • There was no check on the psychologists simply slanting the document to conform to their own narrow professional self-interests.
  • Recommendations were presented without clear grading of the quality of available evidence or strength of recommendations.
  • While there was a carefully orchestrated “show and tell” rollout, it did not involve any opportunities for feedback and modification of recommendations.

In one of a number of passages plagiarized from an earlier paper, Peter Kinderman recently told clinicians from other disciplines to adopt the recommendations of Understanding Psychosis:

To return, then, to the issue of communication between professionals; for clinicians, working in multidisciplinary teams, the most useful approach would be to develop individual formulations; consisting of a summary of an individual’s problems and circumstances, hypothesis about their origins and possible therapeutic solutions. As with direct clinical work, such an approach would yield all the benefits of the traditional ‘diagnosis, treatment’ approach without its many inadequacies and dangers. This would require all clinicians— doctors, nurses and other professionals—to adopt new ways of thinking.

Why should these professionals do the bidding of a small group of self-serving psychologists? They were not involved in the process of constructing these recommendations, and the psychologists failed to provide appropriate evidence. There is no evidence that adopting the recommendations would improve patient outcomes.

A special plea for marginalized and silenced Black African clients who were getting poor care

Enter “African” as a search term in the 180-page Understanding Psychosis and you come up with only a brief mention on page 46, one that fails to acknowledge the disproportionately poor outcomes that Black African clients experience in outpatient care. Even if there are few, if any, Black members of the BPS Division of Clinical Psychology, and even if no Black people were involved in the writing of Understanding Psychosis, surely there was some awareness of the gross disparities in the outcomes achieved in outpatient care for psychosis and schizophrenia. A recent paper added further evidence to what was already known:

  • Early Intervention Services (EIS) have little effect on the much higher admission and detention rates of Black African clients.
  • There are low rates of GP involvement and high rates of police detention.
  • Poor outcomes were most marked in Black African women (7–8x greater odds than for White British women).
  • A post-hoc analysis showed that pathways to care and help-seeking behavior partially explained these differences.

Overall

In an increasingly outcome-driven and evidence-based era, EIS need to demonstrate a significant positive impact on detecting and treating psychosis early, across all groups. Our findings, when compared with UK studies from the pre-EIS era [5], suggest no improvement in the inequality between Black African patients with FEP and White British patients in terms of experiences of admission and detention. The high rates of detention and hospital admission overall are likely to have substantial implications for continuing engagement. The rate of detention is particularly elevated in Black African patients at 60% (Table 2). A disconcerting finding is of even higher rates in certain groups than prior to introduction of EIS, especially in women. While there is overall evidence that the EIS model is a cost-effective [31] means of engaging hard-to-reach young people, it would seem not all groups are being reached in ways that minimise stigma and trauma. Of note, a recent systematic review of initiatives to shorten DUP [32] concluded that establishing dedicated services for people with FEP does not in itself reduce DUP. This is despite evidence that longer DUP is associated with poorer outcomes [33],[34].

“In an increasingly outcome-driven and evidence-based era,” the British Psychological Society Division of Clinical Psychology had better involve a broader and more ethnically diverse range of opinions, and give more careful consideration to the available evidence, if they are going to be taken seriously.

Counterpoint from Richard Pemberton, UK Chair of the British Psychological Society Division of Clinical Psychology:

Your approach to debate and tendency to personalise professional differences however means that many senior people don’t take you seriously and/or aren’t willing to get in the same room as you. Describing the very senior and prestigious group of researchers who were co-authors of our recent psychosis publication as either ‘stoned or drunk’ is a case in point? Doing this in private would be testing but putting this out into the public domain certainly breaches UK professional ethical codes. I am sure that you sincerely believe that the report is deeply flawed and highly problematic but I doubt that you actually believe that we are all sitting around under the influence of drugs and alcohol producing 180 page publications.

5 thoughts on “How Understanding Psychosis could have been more credible and trustworthy”

  1. Jim
    Quoting you, as derived from Clinical Guidelines We Can Trust:
    “To be trustworthy, guidelines should
    • 1) Be based on a systematic review of the existing evidence.
    • 2) Be developed by a knowledgeable, multidisciplinary panel of experts and representatives from key affected groups.
    • 3) Consider important patient subgroups and patient preferences, as appropriate.
    • 4) Be based on an explicit and transparent process that minimizes distortions, biases, and conflicts of interest.
    • 5) Provide a clear explanation of the logical relationships between alternative care options and health outcomes, and provide ratings of both the quality of evidence and the strength of the recommendations.
    • 6) Be reconsidered and revised as appropriate when important new evidence warrants modifications of recommendations.”

    • Concerns 2 & 3 are so difficult to accomplish that holding to them would be insurmountable for attempts at clinical guidance. Contrary references?
    • Concerns 1 & 6 mention the importance of evidence, but just what is sufficient to count as evidence?
    I suggest that evidence often progresses through stages.
    • 1) Unsystematic collection of co-morbidities and therapeutic anecdotes. Insufficient for scientifically guided policy change, but can be politically successful. Can provide useful initial observations.

    • 2) Systematic naturalistic observations – e.g., sequential case series indicating that depression in cardiac patients is common or that certain treatments work. Since “depression”, “cardiac”, “common”, and “outcome” are ambiguous terms, series are improved by constructive replication and usable definitions regarding diagnosis, severity, chronicity, event dating, correlation, inter-observer reliability, etc.

    • 3) Naturalistic description is almost always limited to hypothesis generation, despite enormous statistical sophistication – e.g., epidemiology, brain imaging.

    • 4) Causal issues may, at times, allow experimental clarification and hypothesis testing. Experimentation encompasses an enormous diversity of procedures; in clinical areas it usually progresses from pilot trials to definitive trials.

    Your critique of Understanding Psychosis was that they never got beyond level 1. That is really sufficient. The other concerns may generate insuperable obstacles, while their necessity is debatable.
    Cordially,
    Don Klein
    Cordially,
    Don Klein


    1. Dear Dr. Klein, I always welcome comments from you, and even when we sometimes disagree, you provide food for thought for me and for the readers of my blog. And readers should note that we strongly agree that Understanding Psychosis is not evidence-based and did not involve any systematic attention to the available literature in coming to conclusions that the authors had already publicly voiced years ago.

      The guidelines that I cite were worked out in meetings of a panel deliberately chosen for differences of opinion and were based on a weighing of evidence. Both the process and the evidence are documented in materials that are freely available.

      The Institute of Medicine (IOM) was ordered by Congress to deliberate on this issue because of widespread concerns that guidelines developed by professional organizations had come out of panels deliberately constructed of already like-minded people interested in promoting the services of professional organizations, and without regard to other conflicts of interest.

      Much remains to be done, but I think the value of this effort and the standards can be seen in the growing number of Choosing Wisely committees that are identifying large numbers of unnecessary tests and ineffective treatments, or treatments for which the benefits do not outweigh the costs. These committees include a broad range of health professionals, policymakers, and, importantly, consumers. For instance,

      Choosing Wisely® is an initiative of the ABIM Foundation to help providers and patients engage in conversations to reduce overuse of tests and procedures, and support patients in their efforts to make smart and effective care choices.

      A 2014 survey funded by the Robert Wood Johnson Foundation found that three-quarters of physicians say the frequency of unnecessary medical tests and procedures is a very or somewhat serious problem. Originally conceived and piloted by the National Physicians Alliance through a Putting the Charter into Practice grant, the Choosing Wisely campaign calls upon leading medical specialty societies and other non-physician organizations to identify tests or procedures commonly used in their field whose necessity should be questioned and discussed. The resulting lists of “Things Providers and Patients Should Question” are intended to spark discussion about the need—or lack thereof—for many frequently ordered tests or treatments.

      In conjunction with this group, Consumer Reports has now created patient-friendly materials regarding overuse of tests and unnecessary treatments.

      A similar group in Canada has provided an updated spreadsheet of 102 recommendations and free apps for health care providers and patients to stimulate discussion about unnecessary testing and ineffective procedures.

      Another similar program is now being launched in Australia.


  2. Great post, important stuff!

    This struck my ear, though (and maybe this is just nit-picking):
    “The intervention and control group initially differed for two of the four outcome variables before they even received the intervention.”

    Do I understand correctly that you’re referring to this table?

    Just wondering if you consider it justified to conclude that there's a difference with p=0.05 but not with p=0.059?

    Best regards,

    Matti


    1. For me the interesting point is that the authors are so committed to significance chasing that they bother to report p=.059. Of course, there is nothing meaningful about .059 vs. .050, but in the authors' minds it was worth noting in order to report “missed, but almost got the cigar.”

