Using F1000 “peer review” to promote politics over evidence about delivering psychosocial care to cancer patients



A newly posted article on the F1000 website raises questions about what the website claims is a “peer-reviewed” open research platform.

Infomercial? The F1000 platform allowed authors and the reviewers whom they nominated to collaborate in crafting more of their special interest advocacy that they have widely disseminated elsewhere. Nothing original in this article and certainly not best evidence!

I challenge the authors and their chosen reviewers to identify anything said in the F1000 article that they have not said numerous times before, either alone or in papers co-authored by some combination of the authors and the reviewers they picked for this paper.

F1000 makes the attractive but misleading claim that the versions of articles posted on its website reflect responses to reviewers.

Readers should be wary of uncritically accepting articles on the F1000 website as having been peer-reviewed in any conventional sense of the term.

Will other special interests groups exploit this opportunity to brand their claims as “peer-reviewed” without the risk of having to tone down their claims in peer review? Is this already happening?

In the case of this article, reviewers were all chosen by the authors and have a history of co-authoring papers with the authors of the target paper in active advocacy of a shared political perspective, one that is contrary to available evidence.

Cynically, future authors might be motivated to divide their team, with some remaining authors and others dropping off to be nominated as reviewers. The reviewers could then suggest content that had already been agreed upon but was deliberately left out so that it could be added during the review process.

F1000

F1000Research bills itself as

An Open Research publishing platform for life scientists, offering immediate publication of articles and other research outputs without editorial bias. All articles benefit from transparent refereeing and the inclusion of all source data.

Material posted on this website is labeled as having received rapid peer-review:

Articles are published rapidly as soon as they are accepted, after passing an in-house quality check. Peer review by invited experts, suggested by the authors, takes place openly after publication.

My recent Google Scholar alert called attention to an article posted on F1000:

Advancing psychosocial care in cancer patients [version 1; referees: 3 approved]

 Who were the reviewers?

[Image: open peer review of "Advancing psychosocial care"]

Google the names of the authors and reviewers. You will discover a pattern of co-authorship; leadership positions in the International Psycho-Oncology Society, a group promoting the mandating of specialty mental health services for cancer patients; and lots of jointly and separately authored articles making a pitch for increased involvement of mental health professionals in routine cancer care. This article adds almost nothing to what is already available elsewhere in highly redundant publications.

Given a choice of reviewers, these authors would be unlikely to nominate me. Nonetheless, here is my review of the article.

As I might do in a review of a manuscript, I’m not providing citations for these comments, but support can readily be found by searching the blog posts at my website @CoyneoftheRealm.com and by a Google Scholar search of my publications. I welcome queries from anybody seeking documentation of the points below.

 Fighting Spirit

The notion that having a fighting spirit improves cancer patients’ survival is popular in the lay press and in promotions of the power of the mind over cancer, but it has been thoroughly discredited.

Early on, the article identifies fighting spirit as an adaptive coping style. In actuality, fighting spirit was initially thought to predict mortality in a small methodologically flawed study. But that is no longer claimed.

Even one of the authors of the original study, Maggie Watson, expressed relief when her own larger, better-designed study failed to confirm the impression that a fighting spirit extended life after a diagnosis of cancer. Why? Dr. Watson was concerned that the concept was being abused to blame dying cancer patients for a personal deficiency of not having enough fighting spirit.

Fighting spirit is rather useless as a measure of psychological adaptation. It confounds the severity of cancer and related dysfunction with efforts to cope with cancer.

Distress as the sixth vital sign for cancer patients

Beware of a marketing slogan posing as an empirical statement. Its emptiness is similar to that of “Pepsi is the one.” Can you imagine anyone conducting a serious study in which they conclude “Pepsi is not the one”?

Once again in this article, a vacuous marketing slogan is presented in impressive but pseudo-medical terms. Distress cannot be a vital sign in the conventional sense. The vital signs are objective measurements that do not depend on patient self-report: body temperature, pulse rate, and respiration rate (blood pressure is not considered a vital sign, but is often measured along with the vital signs).

Pain was declared a fifth vital sign, with physicians mandated by guidelines to provide routine self-report screening of patients, regardless of their reason for visit. Pain being the fifth vital sign seems to have been the inspiration for declaring distress the sixth vital sign for cancer patients. However, policymakers declaring pain the fifth vital sign did not result in improved patient levels of pain. Subsequently making intervention mandatory for any report of pain led to a rise in unnecessary back and knee surgery, with a substantial rise in associated morbidity and loss of function. The next shift, to prescription of opioids that were claimed not to be addictive, was the beginning of the current epidemic of addiction to prescription opioids. Making pain the fifth vital sign killed a lot of patients and turned others into addicts craving drugs on the street because they had lost their prescriptions for the opioids that addicted them.


 Cancer as a mental health issue

There is a lack of evidence that cancer carries a greater risk of psychiatric disorder than other chronic and catastrophic illnesses. However, the myth that there is something unique or unusual about cancer’s threat to mental health is commonly invoked by mental health professional advocacy groups to justify increased resources for their specialized services.

The article provides an inflated estimate of psychiatric morbidity by counting adjustment disorders as psychiatric disorders. Essentially, a cancer patient who seeks mental health interventions for distress qualifies by virtue of help seeking being defined as impairment.

The conceptual and empirical muddle of “distress” in cancer patients

The article repeats the standard sloganeering definition of distress that the authors and reviewers have circulated elsewhere.

It has been very broadly defined as “a multifactorial, unpleasant, emotional experience of a psychological (cognitive, behavioural, emotional), social and/or spiritual nature that may interfere with the ability to cope effectively with cancer, its physical symptoms and its treatment and that extends along a continuum, ranging from common normal feelings of vulnerability, sadness and fears to problems that can become disabling, such as depression, anxiety, panic, social isolation and existential and spiritual crisis” [5]

[You might try googling this. I’m sure you’ll discover an amazing number of repetitions in similar articles advocating increasing psychosocial services for cancer patients organized around this broad definition.]

Distress is so broadly defined and all-encompassing that there can be no meaningful independent validation of distress measures except by other measures of distress, not by conventional measures of adaptation or mental health. I have discussed that in a recent blog post.

If we restrict “distress” to the more conventional meaning of stress or negative affect, we find that any elevation in distress associated with the diagnosis of cancer (seen in usually 35% or so of patients) tends to follow a natural trajectory of decline without formal intervention. Elevations in distress for most cancer patients are resolved within 3 to 6 months without intervention. The residual 9 to 11% of cancer patients with elevated distress is likely attributable to pre-existing psychiatric disorder.

Routine screening for distress

The slogan “distress is the sixth vital sign” is used to justify mandatory routine screening of cancer patients for distress. In the United States, surgeons cannot close the electronic medical record for a patient and go on to the next patient without recording whether they have screened the patient for distress and, if the patient reports distress, what intervention has been provided. Simply asking patients informally if they are distressed and responding to a “yes” by providing an antidepressant without further follow-up allows surgeons to close the medical record.

As I have done before, I challenge advocates of routine screening of cancer patients for distress to produce evidence that simply introducing routine screening without additional resources leads to better patient outcomes.

Routine screening for distress as uncovering unmet needs among cancer patients

Studies in the Netherlands suggest that there is not a significant increase in the need for services from mental health or allied health professionals associated with a diagnosis of cancer. There is some disruption of such services that patients were receiving before diagnosis. It doesn’t take screening and discussion to suggest to those patients that they at some point resume those services if they wish. There is also some increased need for physical therapy and nutritional counseling.

If patients are simply asked whether they want a discussion of the services that are available (in Dutch: “Zou u met een deskundige willen praten over uw problemen?” – “Would you like to talk with a professional about your problems?”), many patients will decline.

Much of the demand for supportive services like counseling and support groups, especially among breast cancer patients, is not from the most distressed patients. One of the problems with clinical trials of psychosocial interventions is that most of the patients who seek enrollment are not distressed, unless they are prescreened. This poses a dilemma: if we require elevated distress on a screening instrument, we end up rationing services and excluding many of the patients who would otherwise be receiving them.

I welcome clarification from F1000 as to just what it offers over other preprint repositories. When one downloads a preprint from some other repositories, it clearly displays “not yet peer-reviewed.” F1000 carries the advantage of the label “peer-reviewed,” but that label does not seem to be hard earned.

Notes

Slides are from two recent talks at the Dutch International Congress on Insurance Medicine, Thursday, November 9, 2017, Almere, Netherlands:

Will primary care be automated screening and procedures or talking to patients and problem-solving? Invited presentation

and

Why you should not routinely screen your patients for depression and what you should do instead. Plenary Presentation


Should have seen it coming: Once high-flying Psychological Science article lies in pieces on the ground

Life is too short for wasting time probing every instance of professional organizations promoting bad science when they have an established record of doing just that.

There were lots of indicators that that’s what we were dealing with in the Association for Psychological Science’s (APS) recent campaign for the now discredited and retracted ‘sadness prevents us from seeing blue’ article.

A quick assessment of the press release should have led us to dismiss the claims being presented and convinced us to move on.

Readers can skip my introductory material by jumping down this blog post to [*] to see my analysis of the APS press release.

Readers can also still access the original press release, which has now disappeared from the web, here. Some may want to read the press release and form their own opinions before proceeding into this blog post.

What, I’ve stopped talking about the PACE trial? Yup, at least at Mind the Brain, for now. But you can go here for the latest in my continued discussion of the PACE trial of CBT for chronic fatigue syndrome, in which I moved from critical observer to activist a while ago.

Before we were so rudely interrupted  by the bad science and bad media coverage of the PACE trial, I was focusing on how readers can learn to make quick assessments of hyped media coverage of dubious scientific studies.

In “Sex and the single amygdala”  I asked:

Can skeptics who are not specialists, but who are science-minded and have some basic skills, learn to quickly screen and detect questionable science in the journals and its media coverage?

The counterargument, of course, is Chris Mooney telling us “You Have No Business Challenging Scientific Experts”. He cites

“Jenny McCarthy, who once remarked that she began her autism research at the “University of Google.”

But while we are on the topic of autism, how about the counterexample of The Lancet’s coverage of the link between vaccines and autism? This nonsense continues to take its toll on American children whose parents – often higher income and more educated than the rest – refuse to vaccinate them on the basis of a story that started in The Lancet. Editor Richard Horton had to concede:

[Image: Richard Horton conceding The Lancet’s failure over the autism paper]

If we accept Chris Mooney‘s position, we are left at the mercy of press releases cranked out by professional organizations like the Association for Psychological Science (APS) that repeatedly demand that we revise our thinking about human nature and behavior, as well as change our behavior if we want to extend our lives and live happier, all on the basis of a single “breakthrough” study. Rarely do APS press releases have any follow-up as to the fate of a study they promoted. One has to hope that PubPeer or PubMed Commons picks up on the article touted in the press release so we can see what a jury of post-publication peers decides.

As we have seen in my past Mind the Brain posts, there are constant demands on our attention from press releases generated from professional organizations, university press officers, and even NIH alerting us to supposed breakthroughs in psychological and brain science. Few such breakthroughs hold up over time.

Are there no alternatives?

Are there no alternatives to our simply deferring to the expertise being offered or taking the time to investigate for ourselves claims that are likely to prove exaggerated or simply false?

We should approach press releases from the APS – or from its rival, the American Psychological Association (APA) – using prior probabilities to set our expectations. The Open Science Collaboration: Psychology (OSC) article in Science presented the results of a systematic attempt to replicate 100 findings from prestigious psychological journals, including APS’s Psychological Science and APA’s Journal of Personality and Social Psychology. Fewer than half of the findings were replicated. Findings from the APS and APA journals fared worse than the others.

So, our prior probabilities are that declarations of newsworthy, breakthrough findings trumpeted in press releases from psychological organizations are likely to be false or exaggerated – unless we assume that the publicity machines prefer the trustworthy over the exciting and newsworthy in the articles they select to promote.

I will guide readers through a quick assessment of the APS press release, which I started in this post before getting swept up into the PACE controversy. However, in the intervening time there have been some extraordinary developments, which I will then briefly discuss. We can use these developments to validate my evaluation – and yours – of the press release that was available earlier. Surprisingly, there is little overlap between the issues I note in the press release and what concerned post-publication commentators.

[*] A running commentary based on screening the press release

What once was a link to the “feeling blue and seeing blue” article now takes one only to:

[Image: retraction press release]

Fortunately, the original press release can still be reached here. The original article is preserved here.

My skepticism was already high after I read the opening two paragraphs of the press release

The world might seem a little grayer than usual when we’re down in the dumps and we often talk about “feeling blue” — new research suggests that the associations we make between emotion and color go beyond mere metaphor. The results of two studies indicate that feeling sadness may actually change how we perceive color. Specifically, researchers found that participants who were induced to feel sad were less accurate in identifying colors on the blue-yellow axis than those who were led to feel amused or emotionally neutral.

“Our results show that mood and emotion can affect how we see the world around us,” says psychology researcher Christopher Thorstenson of the University of Rochester, first author on the research. “Our work advances the study of perception by showing that sadness specifically impairs basic visual processes that are involved in perceiving color.”

What Anglocentric nonsense. First, blue as a metaphor for sadness does not occur in most languages other than English and Serbian. In German, calling someone blue suggests the person is drunk. In Russian, you are suggesting that the person is gay. In Arabic, if you say you are having a blue day, it is a bad one. But if you say in Portuguese that “everything is blue,” it suggests everything is fine.

In Indian culture, blue is more associated with happiness than sadness, probably traceable to the blue-blooded Krishna being associated with divine and human love in Hinduism. In Catholicism, the Virgin Mary is often wearing blue and so the color has come to be associated with calmness and truth.

We are off to a bad start. Going to the authors’ description of their first of two studies, we learn:

In one study, the researchers had 127 undergraduate participants watch an emotional film clip and then complete a visual judgment task. The participants were randomly assigned to watch an animated film clip intended to induce sadness or a standup comedy clip intended to induce amusement. The emotional effects of the two clips had been validated in previous studies and the researchers confirmed that they produced the intended emotions for participants in this study.

Oh no! This is not a study of clinical depression, but another study of normal college students “made sad” with a mood induction.

So-called mood induction tasks don’t necessarily change actual mood state, but they do convey to research participants what is expected of them and how they are supposed to act. In one of the earliest studies I ever did, we described a mood induction procedure to subjects without actually having them experience it. We then asked them to respond as if they had received it. Their responses were indistinguishable from those of participants who actually underwent the induction. We concluded that we could not rule out that what were considered effects of a mood induction task were simply demand characteristics – what research participants perceive as instructions as to how they should behave.

It was fashionable way back then for psychology researchers who were isolated in departments without access to clinically depressed patients to claim that they were nonetheless conducting analog studies of depression. Subjecting students to an unsolvable anagram task or uncontrollable loud noises was seen as inducing learned helplessness in them, thereby allowing investigators an analog study of depression. We demonstrated a problem with that idea. If students believed that the next task they were administered was part of the same experiment, they performed poorly, as if they were in a state of learned helplessness or depression. However, if they believed that the second task was unrelated to the first, they showed no such deficits. Their negative state of helplessness or depression was confined to their performance in what they thought was the same setting in which the induction had occurred. Shortly after our experiments, Marty Seligman wisely stopped doing studies “inducing” learned helplessness in humans, but he continued to make the same claims about the studies he had done.

Analog studies of depression disappeared for a while, but I guess they have come back into fashion.

But the sad/blue experiment could also be seen as a priming  experiment. The research participants were primed by the film clip and their response to a color naming task was then examined.

It is fascinating that neither the press release nor the article itself ever mentioned the word priming. It was only a few years ago that APS press releases were crowing about priming studies. For instance, a 2011 press release entitled “Life is one big priming experiment…” declared:

One of the most robust ideas to come out of cognitive psychology in recent years is priming. Scientists have shown again and again that they can very subtly cue people’s unconscious minds to think and act certain ways. These cues might be concepts—like cold or fast or elderly—or they might be goals like professional success; either way, these signals shape our behavior, often without any awareness that we are being manipulated.

Whoever wrote that press release should be embarrassed today. In the interim, priming effects have not proven robust. Priming studies that cannot be replicated have figured heavily in the assessment that the psychological literature is untrustworthy. Priming studies also figure heavily in the 56 retracted studies of fraudster psychologist Diederik Stapel. He claims that he turned to inventing data when his experiments failed to demonstrate priming effects that he knew were there. Yet, once he resorted to publishing studies with fabricated data, others claimed to replicate his work.

I made up research, and wrote papers about it. My peers and the journal editors cast a critical eye over it, and it was published. I would often discover, a few months or years later, that another team of researchers, in another city or another country, had done more or less the same experiment, and found the same effects.  My fantasy research had been replicated. What seemed logical was true, once I’d faked it.

So, we have an APS press release reporting a study that assumes that the association between sadness and the color blue is so hardwired and culturally universal that it is reflected in basic visual processes. Yet the study does not involve clinical depression, only an analog mood induction, and a closer look reveals that once again APS is pushing a priming study. I think it’s time to move on. But let’s read on:

The results cannot be explained by differences in participants’ level of effort, attention, or engagement with the task, as color perception was only impaired on the blue-yellow axis.

“We were surprised by how specific the effect was, that color was only impaired along the blue-yellow axis,” says Thorstenson. “We did not predict this specific finding, although it might give us a clue to the reason for the effect in neurotransmitter functioning.”

The researchers note that previous work has specifically linked color perception on the blue-yellow axis with the neurotransmitter dopamine.

The press release tells us that the finding is very specific, occurring only on the blue-yellow axis and not the red-green axis, and that differences were not found in level of effort, attention, or engagement with the task. The researchers did not expect such a specific finding; they were surprised.

The press release wants to convince us of an exciting story of novelty and breakthrough. A skeptic sees it differently: this is an isolated finding, unanticipated by the researchers, getting all dressed up. See, we should have moved on.

The evidence with which the press release wants to convince us is exciting because it is specific and novel. The researchers are celebrating the specificity of their finding, but the blue-yellow axis finding may be the only statistically significant one precisely because it is due to chance or an artifact.

And bringing up unmeasured “neurotransmitter functioning” is pretentious and unwise. I challenge the researchers to show that the effects of watching a brief movie clip register as measurable changes in neurotransmitters. I’m skeptical even about whether depressed persons drawn from community or outpatient samples reliably differ from non-depressed persons in measures of the neurotransmitter dopamine.

“This is new work and we need to take time to determine the robustness and generalizability of this phenomenon before making links to application,” he concludes.

Claims in APS press releases are not known for their “robustness and generalizability.” I don’t think this particular claim should prompt an effort at independent replication when scientists have so many more useful things to keep them busy.

Maybe, these investigators should have checked robustness and generalizability before rushing into print. Maybe APS should stop pestering us with findings that surprise researchers and that have not yet been replicated.

A flying machine in pieces on the ground

Sadness impairs color perception was sent soaring high, lifted by an APS press release now removed from the web but still available here. The press release was initially echoed uncritically, usually cut-and-pasted or outright churnalled, in over two dozen media mentions.

But, alas, Sadness impairs color perception is now a flying machine in pieces on the ground 

Attention to the article’s problems seems to have started with some chatter among skeptically minded individuals on Twitter, which led to comments at PubPeer, where the article was torn to pieces. What unfolded was a wonderful demonstration of crowdsourced post-publication peer review in action. Lesson: PubPeer rocks and can overcome the failures of pre-publication peer review to keep bad stuff out of the literature.

You can follow the thread of comments at PubPeer.

  • An anonymous skeptic started off by pointing out an apparent lack of a significant statistical effect where one was claimed.
  • There was an immediate call for a retraction, but it seemed premature.
  • Soon re-analyses of the data from the paper were being reported, confirming the lack of a significant statistical effect when analyses were done appropriately and reported transparently.
  • The data set for the article was mysteriously changed after it had been uploaded.
  • Doubts were expressed about the integrity of the data – had they been tinkered with?
  • The data disappeared.
  • There was an announcement of a retraction.

The retraction notice  indicated that the researchers were still convinced of the validity of their hypothesis, despite deciding to retract their paper.

We remain confident in the proposition that sadness impairs color perception, but would like to acquire clearer evidence before making this conclusion in a journal the caliber of Psychological Science.

The retraction note also carries a curious Editor’s note:

Although I believe it is already clear, I would like to add an explicit statement that this retraction is entirely due to honest mistakes on the part of the authors.

Since then, doubts have been expressed about whether retraction was a sufficient response or whether something more is needed. Some of the participants in the PubPeer discussion drafted a letter to the editor incorporating their reanalyses and prepared to submit it to Psychological Science. Unfortunately, having succeeded in getting the bad science retracted, these authors reduced the likelihood of their reanalysis being accepted by Psychological Science. As of this date, their fascinating account remains unpublished but available on the web.

Postscript

Next time you see an APS or APA press release, what will be your starting probabilities about the trustworthiness of the article being promoted? Do you agree with Chris Mooney that you should simply defer to the expertise of the professional organization?

Why would professional organizations risk embarrassment with these kinds of press releases? Apparently the risk is worth it. Such press releases can echo through conventional and social media and attract early attention to an article. The game is increasing the journal impact factor (JIF).

Although it is unclear precisely how journal impact factors are calculated, the number reflects the average number of citations an article obtains within two years of publication. However, if press releases promote “early releases” of articles, the journal can acquire citations before the clock starts ticking for the two years. APS and APA are in intense competition for the prestige of their journals and for membership. It matters greatly to them which organization can claim the most prestigious journals, as demonstrated by their JIFs.
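For orientation, the conventional two-year impact factor is usually described roughly as follows (my paraphrase of the standard definition, not anything specific to the APS or APA journals):

$$\mathrm{JIF}_{Y} \approx \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}$$

On that rough formula, an article that is promoted and circulated online ahead of its official publication year can start accumulating citations that later fall inside the two-year window – exactly the early attention a press release is designed to generate.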

So, press releases are important for garnering early attention. Apparently breakthroughs, innovations, and “first ever” matter more than trustworthiness. The professional organizations hope we won’t remember the fate of past claims.

 

Advocating CBT for Psychosis: “Ultimately it is all political.”

Political… Or just cynical?

Frida Kahlo, “Without Hope”

Professor Paul Salkovskis and his colleagues organized a lively, thought-provoking conference at the University of Bath, “Understanding Psychosis and Schizophrenia: How well do we understand and what should we do to improve how we help?”

Presenters and members of the roundtable discussion panel included a number of authors of the British Psychological Society’s Understanding Psychosis and Schizophrenia. But they noticeably avoided engaging anyone outside their tight-knit group, especially speakers disagreeing with their manifesto. The Understanding Psychosis and Schizophrenia authors appeared glum and dyspeptic throughout the lively discussions. The conference nonetheless went on around them. Highlights included presentations by Professors Robin Murray and Clive Adams.

In his “Genes, Social Adversity and Cannabis: how do they interact?” Professor Robin Murray gently chided the authors of the British Psychological Society’s Understanding Psychosis and Schizophrenia for their insensitivity to the suffering, debilitation, and sometimes terror posed by schizophrenia. For me, his talk clarified confusion caused by the authors of Understanding Psychosis repeatedly claiming Professor Robin Murray had endorsed their document. He did not. He is an exceptionally kind and well-mannered person and I think his polite comments at the earlier launch meeting for Understanding Psychosis were misinterpreted. His presentation at the Bath conference left no doubt where he stood.

A diagnosis of schizophrenia encompasses a wide range of conditions that will undoubtedly be sorted into tighter, more useful categories as we use existing categories to organize the evidence we accumulate. As Joe McCleary summarized in comments on my FB wall, if we use existing – admittedly imperfect and provisional – categories, we can learn about

the nature of the individuals symptoms and experience, the likelihood and time course of improvement, recovery, and/or relapse, persistence of difficulties in particular domains (intellectual, social, emotional, adaptive functioning), which interventions might be most useful to try, what co-occurring disorders and risks are high and low (e.g., suicide, aggression, dissociation), likely levels of dependence vs independence, impacts on family, reliance on family, impacts on society, reliance on society, risk for harm (e.g., being taken advantage of or abused), etc., etc., etc.

These correlates of a diagnosis of schizophrenia check out well when we go to the available literature.

Professor Peter Kinderman, who is President-Elect of the British Psychological Society as well as an author of Understanding Psychosis, was a member of the afternoon roundtable panel at Bath. But he mostly sat in silence. He rejects the idea that the diagnosis has led to any progress:

Diagnostic systems in psychiatry have always been criticized for their poor reliability, validity, utility, epistemology and humanity.

And

The poor validity of psychiatric diagnoses—their inability to map onto any entity discernable in the real world—is demonstrated by their failure to predict course or indicate which treatment options are beneficial, and by the fact that they do not map neatly onto biological findings, which are often nonspecific and cross diagnostic boundaries.

Kinderman repeats these points in every forum he’s given, to the point of lapsing into self-plagiarism. Compare Imagine there’s no diagnosis, it’s easy if you try to Drop the language of disorder.

What does Kinderman offer in place of diagnosis? That we respond to patients in terms of their nonspecific distress, which is a “normal, not abnormal, part of human life.” This insight, according to Kinderman, places us on the “cusp of a major paradigm shift in our thinking about psychiatric disorders.”

Kinderman leaves us with sweeping declarations and no evidence to support them. He gets quite fussy when challenged. During the Roundtable Discussion, he went off on one of his usual rants, peppered by a torrent of clichés, allusions to unnamed professionals describing schizophrenia as a genetic disease, and argument by anecdote.

But what if we took seriously his suggestion that we drop diagnosis and substitute generic distress? He concedes that many patients are helped by antipsychotic medication. But identifying the best candidates for this treatment depends on the diagnostic label schizophrenia. And just as importantly, sparing patients who are likely to be poor candidates, and for whom it will be ineffective, also depends on using the criteria associated with the label schizophrenia to rule the treatment out as inappropriate. Unless Kinderman can come up with something else, it would seem that if we abandon such labels we risk both undermedication of those who desperately need it and overmedication of those who get more harm than benefit.

And turning to Professor Clive Adams’ presentation, which organized the available literature around the diagnostic label of schizophrenia, we can see from Cochrane reviews that treatment with cognitive behavior therapy in the absence of medication is likely to be ineffective and is not at all based on available evidence.

Clive Adams delivered a take-no-prisoners “CBT-P and medication in the treatment of psychosis: summarising best evidence.” Adams’ presentation is captured in a blog post, but its message can be succinctly stated:

I just cannot see that this approach (CBTp), on average, is reaping enough benefits for people.

None of the authors of Understanding Psychosis responded to Adams’ strictly data oriented presentation. They simply mumbled among themselves.

Maybe we should simply accept that when the authors of Understanding Psychosis call for extensive discussion and dialogue, it is not what would usually be meant by those terms. They don’t want their monologue interrupted by anything but applause.

What the authors of Understanding Psychosis fail to get is that with Twitter and blogs, you cannot refuse to engage in a dialogue once you put outrageous claims out there. You only risk having your social media identity defined by what others say.

Let’s examine what Peter Kinderman says in another monologic blog post, strikingly free of any reference to evidence, Three phrases. The post discusses three phrases that stood out for him at an international meeting concerning cognitive behavior therapy held in Philadelphia in May 2015.

It’s probably better to read the outcomes of our discussions in peer-reviewed scientific papers and in the policy documents of our various nations. For me, however, three phrases stood out as we discussed our shared interests.

I can’t wait! But until then we have his blog.

The first phrase, “Trauma-informed practice,” is described:

In all kinds of ways, we’re learning how psychotic experiences can relate to trauma – in childhood and as adults. And we’re learning how the ways in which we purport to care for people – with the labels that we attach to their problems, with the explanations (and non-explanations) that we propose, and especially with the treatments that we use (and occasionally impose, even forcefully) – can potentiate experiences of trauma. So I welcome the fact that there appears to be increasing discussion of how we might base our therapies, and indeed our whole service design philosophy, on an appreciation of the role of trauma, for many people, in the development of their difficulties.

Presumably the forthcoming “peer-reviewed scientific papers” will allow us to evaluate the evidence for the efficacy of “trauma-informed” treatment of schizophrenia. I can’t find it. I don’t see where any of the randomized trials of CBT for psychosis that have been conducted are organized around this concept. Does Kinderman have any sense of the history or usage of “trauma-informed” in the United States and elsewhere?

“Trauma-informed practice” typically refers to an approach that is more hermeneutic than scientific. The assumption is made that psychological trauma causes both mental disorder and physical illnesses.

Understanding Psychosis takes for granted that traumatic experiences are at the root of most psychotic disturbance. When its authors invoke evidence at all, it is the work of one of their own, Richard Bentall. The literature concerning the role of childhood adversity in psychotic disturbance is methodologically flawed, but even if we accepted it at face value, the effect sizes it generates would not justify the assumption that trauma is behind all psychotic experiences.

In the United States, evidence-based, research-oriented clinicians are skeptical of the slippery slope whereby calls for “trauma-informed practice” too often lead down to nonsense about trauma being embodied in organs and peripheral tissue, not just the nervous system. Untrained and incompetent therapists insist that conditions like diabetes and asthma are linked to trauma, and that if patients cannot report relevant traumatic experiences, there should be an effort to recover their repressed memories. Serious damage was done to a lot of patients and their families before the fad of recovering memories of sexual abuse and participation in devil-worshipping cults was put down with legal action.

Kinderman’s second phrase is “CBT-informed practice”

It’s hardly a surprise that the acronym ‘CBT’ means slightly different things to different people.

There’s a valuable debate about ‘fidelity’ (whether a therapist is or is not adherent to the accepted elements of CBT). But there’s also an appreciation that, in the field of psychosocial interventions in mental health care, common therapeutic factors, the fundamental role of a good ‘therapeutic alliance’ (a relationship based on respect) and the heterogeneity of individual experiences means that we are now much more likely to talk about “CBT-informed practice”. Again, for me, this is welcome. I believe that it not only allows for valuable innovation and development of psychosocial interventions, but also permits an appreciation of the uniqueness of each person’s experience.

The retreat from any claim to being evidence-based continues. If a therapy carries the branding of evidence-based, it is assumed that it is delivered with some fidelity to what has been tested in clinical trials. The branding “evidence-based” cannot be retained unless the innovations and further development are themselves subjected to clinical trials. “Evidence-based” is not a branding that can be casually transferred to new products without testing.

Kinderman’s final phrase is “ultimately, it’s all political.”

The attendees of these meetings are all applied scientists (although some have some influential roles in shaping healthcare policies). But it was interesting that many of our discussions referred back to the social circumstances of those people accessing our services, and on the political decisions taken about how those services are commissioned, planned and delivered. We discussed, for instance, the role of social determinants of health generally and mental health in particular. We discussed how different psychological and social problems seem to have similar social determinants (and the implications of this). We talked about how trauma, discrimination, racism, the struggles of undocumented migrants and the pressures on unemployed people can affect their mental health. We discussed how people access high-quality healthcare in different states and nations, and we discussed how political decisions – such as those related to involuntary detention and compulsory treatment, the funding of healthcare and provision of different forms of care – impact on our clients. We also discussed how, as a group of professionals, we are increasingly being asked to contribute to these debates.

So for me, it was a very positive and encouraging trip. I am – I remain – confident that conventional CBT, a form of one-to-one therapy that of course has its limitations, can be very positive for people experiencing psychosis. But, given the views I hold about the fundamental nature of mental health and wellbeing, the phrases that echo most encouragingly from last week’s meeting are “trauma-informed practice”, “CBT-informed practice” and “ultimately, it’s all political.”

I think I finally get it. Kinderman is saying that his followers should hold on to claims of being evidence-based, even in the face of clinical trials and meta-analyses providing evidence to the contrary. And they should incorporate elements of “trauma-based practice.” This is not taking seriously the principles of evidence-based evaluation of best practices, but that is not what Understanding Psychosis is about.

Advocating CBT is political, not evidence-based, but we need the latter label for credibility and controlling credentialing.

This is cynical, not political.

How Understanding Psychosis could have been more credible and trustworthy


As promised, this issue of Mind the Brain explains how the British Psychological Society Division of Clinical Psychology’s Understanding Psychosis could have been much more credible and trustworthy.

I point to well-founded skepticism about like-minded, self-selected groups representing single professions and lacking any cultural diversity trying to tell clinicians and policymakers how health services ought to be re-organized. The folly of the Division of Clinical Psychology’s way of doing things is compounded by its exclusion of key consumer stakeholders. I will provide some standards and procedures that were blatantly ignored in the writing and dissemination of their recommendations.

The Division of Clinical Psychology is preparing a companion document about depression. I hope there is time for them to adopt international standards. But they would have to open themselves to diversity and desegregate, allowing ethnic minorities, especially African and British Blacks, a seat at the table. I will explain why the systematic exclusion of this group from the deliberations is particularly egregious, given the gross inequalities in the services they receive, often from almost uniformly white clinical psychologists.

I take seriously the authors’ claim that they wanted to provide an authoritative source of information for mental health service users, their family members, and the other professionals, policymakers, and members of the community attempting to decide on the best policies for dealing with persons described as suffering from psychosis or schizophrenia. But I don’t accept the document simply because the authors claim they are experts or that they are creating a paradigm shift.

Skepticism should be raised when professional groups crow too loudly about their expertise and about creating a paradigm shift. Rhetorically, professionals fare better when they show what they have to offer and leave it for others to decide whether they should be labeled “experts” or whether they are causing a paradigm shift.


I recall Dan Haller, former editor of the Journal of Clinical Oncology, poking fun at authors submitting manuscripts that they claimed represented paradigm shifts. Maybe in hindsight Galileo and Einstein deserve that label, but no paper he had ever reviewed about chemotherapy, radiation treatment, or immunotherapy earned it. Haller felt that authors claiming to make a paradigm shift simply embarrassed themselves.

When I subjected the 180-page document to my usual skeptical, critical scrutiny, its credibility and trustworthiness simply didn’t hold up. It seemed to be a collection of carefully selected and edited quotes with minimal and unsystematic reference to the literature. It seemed to crassly sacrifice the well-being of persons with psychosis and schizophrenia – white, African and British Black, and other groups – to the professional self-interests of a small group of psychologists.

To an American like me, Understanding Psychosis seems like a bit of old-fashioned British colonial administration. Clad in pith helmets, the British clinical psychologists went out and recruited a few supporters who shared their views and silenced the rest of the service users and their families – pretending they don’t even exist – who would be so affected by their proposals. And as I noted in my last blog post, there are grounds to doubt that a good proportion of the supporters whom they quote are even service users.

When I raised the issues of consensus and process in Understanding Psychosis on Twitter in November 2014, I got an immediate response from the official Division of Clinical Psychology Twitter account

[Tweet screenshot]

To which I replied

[Tweet screenshot]

My “debunking” – if that’s what @DCP wants to call it – involves systematically gathering relevant evidence and evaluating it by transparent standards that others have agreed upon. When I formally do this in peer-reviewed articles, I typically involve other people as a check on my biases, as well as procedures by which readers can decide for themselves on the validity of my conclusions. When I am flying solo, that needs to be taken into account, and readers should start with greater skepticism. I am more dependent on sufficiently documenting my evidence in order to persuade them.

Some questions are clearly enough defined to proceed with a systematic search for relevant evidence: “Does screening for psychological distress improve patient outcomes?”

But many questions, like “do we abandon psychiatric diagnosis?” or “how do we best organize services to ensure better patient outcomes?”, involve potentially controversial decisions about how to sharpen the questions in order to gather relevant evidence. It’s best to get a diversity of opinions from both professionals and service consumers to define the range of possibilities. There need to be some checks on biases, with the hope that these can be overcome by some consensus process among people starting with clear differences of opinion. That is not just an ideal; it’s a necessity if a professional group is going to claim authority for its recommendations. I am typically not operating in that context, and so the credibility of what I and my co-authors come up with rests on the strength of the evidence, and we leave to others decisions about how or whether recommendations will be implemented.

There are some widely accepted standards for bringing relevant stakeholders together, reviewing available evidence, and formulating recommendations. There is lots of evidence about the consequences when these procedures are followed.

But before getting into them, let me describe how I came to be appreciative of both the necessity for the standards for professional organizations formally making policy recommendations and the existence of rules by which they should proceed and be evaluated.

Our 2008 JAMA systematic review and meta-analysis of screening for depression in cardiac patients and reactions from the American Psychiatric Association.

Our paper was

Thombs, B. D., de Jonge, P., Coyne, J. C., Whooley, M. A., Frasure-Smith, N., Mitchell, A. J., … & Ziegelstein, R. C. (2008). Depression screening and patient outcomes in cardiovascular care: a systematic review. JAMA, 300(18), 2161-2171.

Our international group of authors had published key papers and chapters in a book on the topic of screening for depression, as well as on the role of depression in cardiovascular disease (CVD). We neither proclaimed ourselves “experts” nor had the endorsement of a professional organization backing up our conclusions. But we identified and followed well-defined standards for turning clinical and policy issues into topics for systematic review and meta-analysis. And we were quite transparent in what we did and how it conformed to international standards.

Our conclusion was

The high prevalence of depression in patients with CVD, the adverse health care outcomes associated with depression, and the availability of easy-to-use case-finding instruments make it tempting to endorse widespread depression screening in cardiovascular care. However, the adaptation of depression screening in cardiovascular care settings would likely be unduly resource intensive and would not be likely to benefit patients in the absence of significant changes in current models of care.

The JAMA editors liked the paper enough to invite some of the authors to participate in a live webinar, with participants able to telephone and email questions. The editors of BMJ nominated the paper as one of the eight top papers of the year to be considered in a competition for the top paper.

I was caught off guard when, just a few weeks later, a paper appeared on the Internet labeled as an American Heart Association Science Advisory, with a list of impressive committees signing on to its conclusions and the American Psychiatric Association prominently listed as endorsing the advisory.

The recommendations directly contradicted ours:

Although there is currently no direct evidence that screening for depression leads to improved outcomes in cardiovascular populations, depression has been linked to increased morbidity and mortality, poorer risk factor modification, lower rates of cardiac rehabilitation, and reduced quality of life.  Therefore, it is important to assess depression in cardiac patients with the goal of targeting those most in need of treatment and support services.

And

In summary, the high prevalence of depression in patients with CHD supports a strategy of increased awareness and screening for depression in patients with CHD.

Politics versus rules of making evidence-based decisions

Our conclusions were based on best evidence and transparent rules for evaluating that evidence. The AHA Science Advisory was based on a consensus of professionals – psychologists and psychiatrists – who had vested interests in promoting screening because it would increase their professional opportunities in cardiology settings.

Although publicity for our article had some momentum, the promoters of the AHA Science Advisory jumped into the media with a lot of political power to counter our conclusions, while usually failing to acknowledge who we were and where we had published. The American Psychiatric Association actually assigned a pediatric psychiatrist to become a media contact for their point of view.

I had naïvely thought that best evidence would trump the consensus of professionals with obvious self-interests at stake. The weight of evidence was clearly on our side. But one of our cardiologist co-authors, Roy Ziegelstein, was not at all surprised by the carefully orchestrated reaction.

Roy negotiated an opportunity for us with the American Heart Journal and the Journal of the American College of Cardiology to explain the differences between our conclusions and the AHA Science Advisory. Although we were up against strong vested interests, cardiologists themselves were not necessarily in agreement with the Science Advisory. Actually, the American Heart Association continually updates its evaluations of whether factors correlated with cardiovascular outcomes are causal factors. To this day, it still has not accepted depression as a causal factor, only as a risk marker. The implication is that making changes in depression may not necessarily affect cardiac outcomes.

In our commentary in the American Heart Journal, we noted the discrepancy between the results of our meta-analysis and systematic review and the conclusions of the AHA Science Advisory. We also noted that we were not alone in expressing concern about guidelines issued by the American Heart Association increasingly being based on simple professional consensus rather than a systematic review of the evidence. Consequently, many of them were not “best evidence.”

“In guidelines we cannot trust”

Our skirmishing with the AHA Science Advisory and the American Psychiatric Association occurred at a time when recognition was already growing that the recommendations of professional organizations were untrustworthy. There was documentation of numerous instances in which they were not evidence-based but served professional self-interests, often at the expense of patient outcomes. Many of the recommendations were for billable procedures, performed by the professional groups who created the guidelines, that were unnecessary and even harmful to patients.

The title of a later article captured the rampant skepticism of the time:

Shaneyfelt T. In guidelines we cannot trust. Arch Intern Med 2012;172:1633-1634.

There were lots of proposals for reform, like a series that included

Fretheim A, Schunemann HJ, Oxman AD. Improving the use of research evidence in guideline development: 5. Group processes. Health Res Policy Syst 2006;4:17.

But discontent got all the way to the U.S. Congress, which authorized that the Institute of Medicine (IOM) be given the resources to organize a panel with wide representation to come up with, as the final 250-page document was titled, Clinical Guidelines We Can Trust. You can download a free PDF here.

The rationale for the specific procedures is spelled out in the report, but in terms of the final product:

To be trustworthy,  guidelines should

  • Be based on a systematic review of the existing evidence.
  • Be developed by a knowledgeable, multidisciplinary panel of experts and representatives from key affected groups.
  • Consider important patient subgroups and patient preferences, as appropriate.
  • Be based on an explicit and transparent process that minimizes distortions, biases, and conflicts of interest.
  • Provide a clear explanation of the logical relationships between alternative care options and health outcomes, and provide ratings of both the quality of evidence and the strength of the recommendations.
  • Be reconsidered and revised as appropriate when important new evidence warrants modifications of recommendations.

The standards seem eminently reasonable, and the deliberations by which they were reached are carefully documented. Yet Understanding Psychosis fails miserably as a set of credible policy recommendations by not meeting any of them. That is because the process of writing the document was so flawed:

  • The British Psychological Society Division of Clinical Psychology professionals did not engage other professionals with complementary viewpoints and expertise.
  • Key stakeholders were simply excluded – primary care physicians, social workers, psychiatrists, police and corrections personnel who must make decisions about how to deal with disturbed behavior, and – most importantly – the family members of persons with severe disturbance.
  • There was no clear explicit process to minimize bias and distortion and no transparency as to how the group arrived at particular conclusions.
  • There was no check on the psychologists simply slanting the document to conform to their own narrow professional self-interests.
  • Recommendations were presented without clear grading of the quality of available evidence or strength of recommendations.
  • While there was a carefully orchestrated “show and tell” rollout, it did not involve any opportunities for feedback and modification of recommendations.

In one of a number of passages plagiarized from an earlier paper, Peter Kinderman recently told clinicians from other disciplines to adopt the recommendations of Understanding Psychosis:

To return, then, to the issue of communication between professionals; for clinicians, working in multidisciplinary teams, the most useful approach would be to develop individual formulations; consisting of a summary of an individual’s problems and circumstances, hypothesis about their origins and possible therapeutic solutions. As with direct clinical work, such an approach would yield all the benefits of the traditional ‘diagnosis, treatment’ approach without its many inadequacies and dangers. This would require all clinicians— doctors, nurses and other professionals—to adopt new ways of thinking.

Why should these professionals do the bidding of a small group of self-serving psychologists? They were not involved in the process of constructing these recommendations and the psychologists failed to provide appropriate evidence. There is no evidence that this would improve patient outcomes.

A special pleading for marginalized and silenced Black African clients who were getting poor care.

Enter “African” as a search term in the 180-page Understanding Psychosis and you come up with only a brief mention on page 46 that fails to acknowledge the poor outcomes that Black African clients are disproportionately experiencing in outpatient care. Even if there are few, if any, Black members of the BPS Division of Clinical Psychology, and even if no Black clinicians were involved in the writing of Understanding Psychosis, surely there was some awareness of the gross disparities in the outcomes achieved in outpatient care for psychosis and schizophrenia. A recent paper added further evidence to what was already known:

  • Early Intervention Services (EIS) have little effect on the much higher admission and retention rates of Black African clients.
  • There are low rates of GP involvement and high rates of police detention.
  • Poor outcomes were most marked in Black African women (7-8x  greater odds than White British women).
  • A post-hoc analysis showed that pathways to care and help-seeking behavior partially explained these differences.

Overall

In an increasingly outcome-driven and evidence-based era, EIS need to demonstrate a significant positive impact on detecting and treating psychosis early, across all groups. Our findings, when compared with UK studies from the pre-EIS era [5], suggest no improvement in the inequality between Black African patients with FEP and White British patients in terms of experiences of admission and detention. The high rates of detention and hospital admission overall are likely to have substantial implications for continuing engagement. The rate of detention is particularly elevated in Black African patients at 60% (Table 2). A disconcerting finding is of even higher rates in certain groups than prior to introduction of EIS, especially in women. While there is overall evidence that the EIS model is a cost-effective [31] means of engaging hard-to-reach young people, it would seem not all groups are being reached in ways that minimise stigma and trauma. Of note, a recent systematic review of initiatives to shorten DUP [32] concluded that establishing dedicated services for people with FEP does not in itself reduce DUP. This is despite evidence that longer DUP is associated with poorer outcomes [33],[34].

“In an increasingly outcome-driven and evidence-based era,” the British Psychological Society Division of Clinical Psychology had better involve a broader and more ethnically diverse range of opinions and give more careful consideration to the available evidence if it is to be taken seriously.

Counterpoint from Richard Pemberton, UK Chair of the British Psychological Society Division of Clinical Psychology:

Your approach to debate and tendency to personalise professional differences however means that many senior people don’t take you seriously and/or aren’t willing to get in the same room as you. Describing the very senior and prestigious group of researchers who were co-authors of our recent psychosis publication as either ‘stoned or drunk’ is a case in point? Doing this in private would be testing but putting this out into the public domain certainly breaches UK professional ethical codes. I am sure that you sincerely believe that the report is deeply flawed and highly problematic but I doubt that you actually believe that we are all sitting around under the influence of drugs and alcohol producing 180 page publications.

“Understanding Psychosis and Schizophrenia” and mental health service users

Does Understanding Psychosis and Schizophrenia exploit, disrespect, and marginalize service users?

Genre confusion.

The 180-page Understanding Psychosis and Schizophrenia produced by the British Psychological Society Division of Clinical Psychology is a puzzling document. We need to know its genre to decide what standards we apply in evaluating it. The authors tell us:

The report is intended as a resource for people who work in mental health services, people who use them and their friends and relatives, to help ensure that their conversations are as well informed and as useful as possible. It also contains vital information for those responsible for commissioning and designing both services and professional training, as well as for journalists and policy-makers. We hope that it will help to change the way that we as a society think about not only psychosis but also the other kinds of distress that are sometimes called mental illness.

“Well-informed” by what or whom? How is the information “vital”? Does “vital” assume “trustworthy” and “credible”?

As I will cover in a later blog post, the document strikingly lacks the transparency it would need to be taken seriously. Understanding Psychosis conforms to none of the well-defined processes and standards – checks and balances – expected of professional organizations producing a report aimed at policy-makers and the general public.

For now, note that these psychologists did not engage other professionals with complementary viewpoints and expertise, and the writing was closed to anyone not already expressing particular strongly held opinions. When critics nonetheless provided a detailed analysis of some crucial points at the popular blog Mental Elf, the authors of Understanding Psychosis retweeted and favorited a denunciation of them as a “circle jerk,” i.e., mutually masturbating.

[Screenshot of the “circle jerk” tweet – please click to enlarge]

How vulgar.

Key stakeholders were simply excluded – primary care physicians, social workers, psychiatrists, police and corrections personnel who must make decisions about how to deal with disturbed behavior, and –most importantly- the family members of persons with severe disturbance. There was no check on the psychologists simply slanting the document to conform to their own narrow professional self-interests, which we are asked to accept as “expertise.”

Is Understanding Psychosis evidence-based?

Understanding Psychosis occasionally cites some empirical findings, but it can’t be seen as evidence-based. That would require transparent, systematic strategies for gathering, interpreting, and integrating evidence that are simply not there.

Indeed, I think it is an excellent document for PhD students and trainees to practice debunking the creation of false authority by selective citation and miscitation and ignoring of contradictory studies. I suggest that they arm themselves with Google Scholar and tools provided in

Greenberg, S. A. (2009). How citation distortions create unfounded authority: analysis of a citation network. BMJ, 339.

Then start checking the citations provided for seemingly evidence-based statements in Understanding Psychosis. Ask questions like “What relevant studies are not cited? What studies are misinterpreted or simply cited for findings they did not contain?” Go to Google Scholar or Web of Science and find out.

For instance, take the opinion:

In view of the problems with diagnoses, many researchers and clinicians are moving away from using them, and recent high-profile reports have recommended this.55,56

Check the references and see that the authors of Understanding Psychosis are the “many researchers and clinicians.” They are praising their own opinion pieces as “high-profile.”

55. British Psychological Society (2013). Division of Clinical Psychology position statement on the classification of behaviour and experience in relation to functional psychiatric diagnoses: time for a paradigm shift. Leicester: British Psychological Society.

56. Division of Clinical Psychology (2011). Good practice guidelines on the use of psychological formulation. Leicester: British Psychological Society.

The authors of Understanding Psychosis would have embarrassed themselves if they stated outright “It is our opinion that…and we consider our opinion high-profile and you should be duly impressed.” They depend on readers not checking references.

Argument from cherry-picked quotes.

Understanding Psychosis is a collection of quotes. We might be inclined to interpret this as a strength, a sign of collaborative  participatory research.

Or maybe this represents qualitative research allowing people to speak for themselves, rather than requiring that their experiences be processed through others’ filters and concepts. But bona fide, credible qualitative research requires that the biases of the investigators not intrude upon what they report. Some controls must be visibly in place to prevent the investigators from doing so.

Quotes are carefully selected to support the psychologists’ opinions expressed before the document was prepared – like 15 years ago in their Recent Advances in Understanding Mental Illness and Psychotic Experiences.

Many quotes are not from people suffering from schizophrenia. In most instances, we are not given sufficient information to determine this.  The authors systematically withhold information that would allow readers to determine who is and who is not a service user.

In this issue of Mind the Brain, I examine implications of this heavy dependence on these particular quotes. I will question whether Understanding Psychosis involves using and even exploiting service users, pitting more highly functioning ones against those who are functioning less well and their families who have to deal with them when they cannot take care of themselves.

Where do the quotes in Understanding Psychosis come from?

Some quotes were simply pasted in from the 2000 Recent Advances in Understanding Mental Illness and Psychotic Experiences.

Presumably these people have had relevant experiences in the interim that bear on what it is like to live with schizophrenia and other psychoses – if that was actually their circumstance. Unfortunately, no follow-up is provided. The authors did not respond to repeated inquiries asking whether they even obtained permission to use these quotes.

The quotes have also been trimmed of most of the contextual details available in the original sources. Going to those sources, we find that the studies deliberately sampled people who were not service users.

Yup, people stripped of their identities are paraded out without the benefit of information that would render their experiences meaningful. Readers can’t independently assess the uses to which the psychologist authors of Understanding Psychosis put these quotes.

What is not at issue is whether people with unusual experiences can get our attention when they talk about them. What is at issue is that a group of professionals take these quotes out of context and insist that they be accepted as the primary basis for – as their title states – our understanding of psychosis and schizophrenia.

Some of the quotes come from sources like

Jackson, L., Hayward, M. & Cooke, A. (2011). Developing positive relationships with voices: A preliminary grounded theory. International Journal of Social Psychiatry, 57(5), 487–495.

Freeman, D., Garety, P.A., Bebbington, P.E., Smith, B., Rollinson, R., Fowler, D. et al. (2005). Psychological investigation of the structure of paranoia in a non-clinical population. British Journal of Psychiatry, 186, 427–435.

Heriot-Maitland, C., Knight, M. & Peters, E. (2012). A qualitative comparison of psychotic-like phenomena in clinical and non-clinical populations. British Journal of Clinical Psychology, 51(1), 37–53.

Jackson et al. report:

Five men and seven women were recruited through local NHS services, community advertisement and the local branch of the Hearing Voices Network.

Freeman et al. report:

An anonymous internet survey [was]… e-mailed the address of a website where they could take part in a survey of ‘everyday worries about others’.

Heriot-Maitland et al. report interviewing 12 participants, who reported “psychotic-like ‘out-of-the-ordinary’ experience (OOE)” in the past five years.

The quotes come from persons who were lucid enough to be recruited for small studies – highly selected, articulate persons. They certainly don’t display the disordered thought, disturbed behavior, and simple incoherence of many people with acute and chronic schizophrenia.

I agree with the Understanding Psychosis authors that few people who have ‘psychotic-like’ experiences meet criteria for a diagnosis of schizophrenia. But should we accept a carefully cherry-picked and edited group of quotes as the basis for revising our understanding of people who do meet criteria?

A number of quotes sound like people who are high functioning and showing an unusual degree of  fantasy-proneness:

P 29 I work four days a week in a professional job; I own my own house and live happily with my partner and pets. Occasionally I hear voices – for example when I have been particularly stressed or tired, or I have seen visions after a bereavement. Knowing that many people hear voices and live well, and that some cultures see these experiences as a gift, helps me to never catastrophise or to worry that it may be the start of a breakdown. Although I am lucky that the experiences have never been as upsetting as some people’s, if someone had told me it was madness I could have got into a vicious cycle and struggled to get out.

Some of the quotes seem to represent clinically significant distress, but probably not psychosis or schizophrenia.

p 53 One thing that you might hear a lot about is that anxiety is a trigger of suspicious thoughts. I have never been that good at recognising my own anxiety. Quite a high level of anxiety is pretty normal for me. So normal that I wouldn’t normally do anything about it, but I now recognise that it sets the background for the expected potential threats in any situation, and so the suspicious thoughts and ideas of reference can pop right in there. I find people as having the most potential as a source of threat and because of that I am prone to suspicious thoughts about others. So now what I do is try to address the level of anxiety I feel in these situations. Adam

We’re not provided any information suggesting this suspiciousness is the psychotic symptom, paranoia.

P 43 After being almost killed by my ex-boyfriend when I was 16 I have had OCD. I have also developed paranoia about someone trying to kill me. If I have conflict with someone over anything I worry they are going to kill me or have someone come and kill me. I wake up worried someone is in my bedroom. I think about trying to be ready to protect myself if someone comes at me. I don’t think I would have this if I had not been traumatised half my life ago.  Josephine

Yale Professor Joan Cook, other colleagues, and I recently published a mixed-methods study of a national sample of psychotherapists providing residential treatment to veterans for posttraumatic stress disorder. A number reported difficulties deciding whether the “voices” that some veterans described represented schizophrenia or vivid re-experiencing symptoms consistent with posttraumatic stress disorder, for which exposure therapy is indicated.

The authors of Understanding Psychosis express a clear disdain for making diagnostic distinctions. But it is important for clinicians to decide about the nature of clients’ distress in order to decide how to treat it. They best do so by formulating a hypothesis based on evidence tied to diagnoses, and then sympathetically probing. Gradual exposure to past trauma would likely tame the distress of someone meeting criteria for PTSD. But it could prove absolutely terrifying and decompensating for someone for whom additional information suggested a diagnosis of psychosis. So clinicians must have some evidence-based ideas with which to probe and make decisions, or else proceed blindly.

Some quotes probably refer to brief psychotic reactions. Responding to Understanding Psychosis, Allen Frances noted:

Brief psychosis is considered a mental disorder, but it is just a transient one with excellent prognosis and no reason to expect long-term impairment. The symptoms emerge suddenly in response to stress and usually disappear just as suddenly (especially if the stress is removed), often never to reappear. This is common in many cultures, and I have seen it fairly often in college students away from home for the first time, in travelers in strange lands, and in people who have had something terrible happen to them. Antipsychotic medicine is needed only briefly, if at all.

Quotes were selected to fit the authors’ conviction that what other professionals call psychosis or schizophrenia is an understandable reaction to life events. But if we go to the larger literature, the associations between adverse experiences and psychosis – even in a meta-analysis by one of the authors of Understanding Psychosis – are not large enough to suggest such strong causality. Adverse experiences are linked to lots of negative outcomes, but they generally do not lead to psychosis or schizophrenia, even if there is a significant, but not overwhelming, correlation.

Understanding Psychosis is not a transparent, systematic review of available evidence. Authors are mustering quotes to fit their preconceived notions. And leaving out quotes and details that don’t fit.

American psychiatrist Bernard “Barney” Carroll slammed the arrogant response of American Psychiatric Association President Jeffrey Lieberman to media coverage of Understanding Psychosis. Barney called it “over-the-top” and a “disservice to psychiatry.” Yet this was not before he nailed the report for “domesticating psychosis”:

Hallucinations become the experience of hearing voices; delusions become the experience of unusual beliefs; paranoid thinking becomes the experience of anxiety – never mind that the great majority of patients with clinical anxiety disorders are not at all paranoid in the way that psychotic patients are. They also make much of the fact that milder forms of these “experiences” are common in the general population – as are milder forms of many clearly medical symptoms. In short, they fail to acknowledge the state transition that demarcates mild or prodromal symptoms from outright psychotic illness.

… The BPS document fails adequately to convey the range of symptoms and associated behaviors in psychosis/schizophrenia. Even when these are mentioned, they are not addressed in a way that matches their clinical salience. Thus, decompensating psychotic crises are discussed unhelpfully in the framework of poor sleep habits. Acute inpatient psychiatric units are discussed in a patronizing way and are faulted as being unhelpful for some patients – never mind their rescue function. Catatonia as a common feature is not acknowledged. Psychotic terror and panic are not acknowledged. Formal thought disorder with truly crazy speech is not acknowledged.

A disclosure of my past.

I’m struck by the huge gap between the clear, articulate statements in the quotes provided in Understanding Psychosis and the incoherent mumbling and sometimes raging of people who are acutely psychotic.  I wonder how many of the authors have ever tried to conduct an interview with someone in that state.

My clinical training involved six years of live supervision at the Mental Research Institute (MRI), provided by professionals widely recognized for their innovative work in analyzing the communication of persons considered to have schizophrenia – Paul Watzlawick, John Weakland, and Richard Fisch – although they would have objected to that diagnostic label.

At the time, I probably was more anti-diagnosis than many of the authors of Understanding Psychosis are today. But then, as Director of Research at the Mental Research Institute, I witnessed the disaster of its Soteria Project. I’ll leave that for another time, but Wikipedia states:

The Soteria project was admired by many professionals around the world who aspired to create mental health services based on a social, as opposed to a medical, model. It was also heavily criticized as irresponsible or ineffective. The US Soteria Project closed as a clinical program in 1983 due to lack of financial support, although it became the subject of research evaluation with competing claims and analysis. Second generation US successors to the original Soteria house called Crossing Place is still active, although more focused on medication management.

While Paul, John, and Dick were widely recognized for their work analyzing communication with severely disturbed persons, they operated with a sense that at some point the disturbance of thought and behavior could become too much to carry on a discussion. And, talking to highly disturbed persons, they knew not to take what was being said literally.

Who was selected for inclusion in Understanding Psychosis and who was excluded and left silent?

Many patients with acute and chronic psychosis are essentially nonverbal and cannot communicate their distress. Sure, they can’t provide coherent quotes for the psychologists who assembled Understanding Psychosis, but it is irresponsible for those psychologists to pretend these people don’t exist or that the quotes they assembled represent their best interest.

Many patients who meet criteria for schizophrenia will at times be unable to take care of themselves or to make basic decisions. The burden of caring and decision-making will fall on family members, if they are available. The alternative for persons with schizophrenia is to become homeless or to end up in jails or prisons because more appropriate beds and hospitals are not available. Nowhere in Understanding Psychosis are we reminded that persons with schizophrenia sometimes need sanctuary in hospitals.

Nowhere are we reminded that 10% of persons with schizophrenia will die by suicide. There is recent evidence that psychotic people may account for nearly 1/3 of suicide attempts with intent to die.

If I were a family member of someone with schizophrenia, I would be damn angry at the gap between the quotes in Understanding Psychosis and what I knew about the person for whom I had to provide care. I’d also be angry that no one in my situation had been invited to participate as a stakeholder.

Psychologists in search of opportunities to work with YAVIS clients

The carefully selected quotes suggest people who would be more satisfying to work with than many persons with psychosis and schizophrenia. Reading them, I was immediately reminded of William Schofield’s 50-year-old book Psychotherapy: The Purchase of Friendship, in which he lamented the strong tendency of mental health professionals to want to work with YAVIS clients: those who are young, attractive, verbal, intelligent, and successful. One of the authors of Understanding Psychosis also co-authored the widely misrepresented Lancet study of cognitive behavioral therapy for psychosis and could tell us how difficult and ineffective it was to do therapy in that study with the older patients who had had more psychotic episodes.

Despite the authors of the Lancet study having distanced themselves from earlier claims that cognitive therapy had effects equivalent to antipsychotic medication, the authors of Understanding Psychosis persist in making the claim to service users:

It would also appear that CBT can bring comparable benefits even when people choose not to take medication.

As we would expect from recommendations produced by a tightly knit group representing a single profession, Understanding Psychosis is a bid for more resources for its authors to work with the clients with whom they want to work. But as with any policy recommendations, we need to examine the evidence and look at where those resources would come from.

[Graph: the shift in resources from inpatient beds to outpatient treatment settings – please click to enlarge]

I’ll leave that discussion to another blog post, but take a look at the graph. It represents the dramatic shift in resources from inpatient beds to outpatient treatment settings. The profoundly disturbed persons who need those beds would undoubtedly be less suitable for the conversations that the Understanding Psychosis psychologists want to be having. The long-term reduction in inpatient services represents not so much deinstitutionalization as transinstitutionalization. A lack of those beds means that persons in need of them are being relegated to jails and prisons. In the United States, the Los Angeles jails represent the largest mental health treatment facility, and the conditions there for the severely disturbed are abominable. Similar situations hold in the UK.

An inpatient psychiatrist recently wrote in the New York Times:

We also need to rethink how we care for another group of vulnerable patients who have been just as disastrously disserved by policies meant to empower and protect them: the severely mentally disabled.

He went on:

We have worked to minimize the use of restraint and seclusion on my unit, but have seen the frequency of both skyrocket. Nearly every week staff members are struck or scratched by largely nonverbal patients who have no other way to communicate their distress. Attempting to soothe these patients monopolizes the efforts of a staff whose mission is to treat acute psychiatric emergencies, not chronic neurological conditions. Everyone loses.

Somebody in the UK should be speaking up for the inarticulate, vulnerable persons with schizophrenia needing inpatient beds who are silenced and marginalized by the authors of Understanding Psychosis. Where the hell is Simon Wessely when they need him?

Promoting an unrealistic view of schizophrenia?

If the authors of Understanding Psychosis were truly interested in providing authoritative information for persons with schizophrenia or psychosis, their family members, and professionals who come into contact with them, they would’ve provided the latest evidence about long-term course and outcome.

For instance, a key English study provides a 10-year follow-up of individuals with a first episode of psychosis initially identified in either southeast London or Nottingham.

Morgan, C., Lappin, J., Heslin, M., Donoghue, K., Lomas, B., Reininghaus, U., … & Dazzan, P. (2014). Reappraising the long-term course and outcome of psychotic disorders: The AESOP-10 study. Psychological Medicine, 44(13), 2713–2726.

At follow-up, of 532 incident cases identified, at baseline 37 (7%) had died, 29 (6%) had emigrated and eight (2%) were excluded. Of the remaining 458, 412 (90%) were traced and some information on follow-up was collated for 387 (85%). Most cases (265, 77%) experienced at least one period of sustained remission; at follow-up, 141 (46%) had been symptom free for at least 2 years. A majority (208, 72%) of cases had been employed for less than 25% of the follow-up period. The median number of hospital admissions, including at first presentation, was 2 [interquartile range (IQR) 1–4]; a majority (299, 88%) were admitted at least once and a minority (21, 6%) had 10 or more admissions. Overall, outcomes were worse for those with a non-affective diagnosis, for men and for those from South East London. Conclusions: Sustained periods of symptom remission are usual following first presentation to mental health services for psychosis, including for those with a non-affective disorder; almost half recover.

Put differently, overall

12% (9% for non-affective) of our sample recovered within 6 months of contact with services and did not have a further episode, 20% (14% for non-affective) never had an episode lasting more than 6 months, and around 50% (40% for non-affective) had not experienced symptoms in the 2 years prior to follow-up.

And then there is the most recent comprehensive systematic review and meta-analysis.

Jääskeläinen, E., Juola, P., Hirvonen, N., McGrath, J. J., Saha, S., Isohanni, M., … & Miettunen, J. (2012). A systematic review and meta-analysis of recovery in schizophrenia. Schizophrenia Bulletin, sbs130.

We identified 50 studies with data suitable for inclusion. The median proportion (25%–75% quantiles) who met our recovery criteria was 13.5% (8.1%–20.0%). Studies from sites in countries with poorer economic status had higher recovery proportions. However, there were no statistically significant differences when the estimates were stratified according to sex, midpoint of intake period, strictness of the diagnostic criteria, duration of follow-up, or other design features. Conclusions: Based on the best available data, approximately, 1 in 7 individuals with schizophrenia met our criteria for recovery. Despite major changes in treatment options in recent decades, the proportion of recovered cases has not increased.

One in seven people with schizophrenia meet criteria for recovery, and the proportion has not increased in recent decades. Compare that with the unrealistically cheery assessment offered in Understanding Psychosis:

Even if people continue to hear voices or hold unusual beliefs, they may nevertheless lead very happy and successful lives. Sometimes a tendency to ‘psychosis’ can be associated with particular talents or abilities.

And

p 30 People who continue to have severe and distressing experiences may lead happy and successful lives in all other respects, such as work and relationships.

Sure, the authors of Understanding Psychosis keep reminding us of the cliché that everybody is different. But they are asking us to make clinical and policy decisions that are life altering for some people and could be life-ending for others. We can’t afford to ignore a larger body of relevant data.

In light of the data from long-term follow-up studies, Understanding Psychosis should be seen as a cruel hoax perpetrated against more typical, severely disturbed mental health service users, their families, and policymakers.