Troubles in the Branding of Psychotherapies as “Evidence Supported”

Is advertising a psychotherapy as “evidence supported” any less vacuous than “Pepsi’s the one”? A lot of us would hope so, having campaigned for rigorous scientific evaluation of psychotherapies in randomized controlled trials (RCTs), just as is routinely done with drugs and medical devices in Evidence-based Medicine (EBM). We have also insisted on valid procedures for generating, integrating, and evaluating evidence and have exposed efforts that fall short. We have been fully expecting that some therapies would emerge as strongly supported by evidence, while others would be found less so, and some even harmful.

Some of us now despair about the value of this labeling or worry that the process of identifying therapies as evidence supported has been subverted into something very different from what we envisioned. Disappointments and embarrassments in the branding of psychotherapies as evidence supported are mounting. I will discuss a pair of what could be construed as embarrassments in this blog post.

Websites such as those of American Psychological Association Division 12 (Clinical Psychology) and SAMHSA’s National Registry of Evidence-based Programs and Practices offer labeling of specific psychotherapies as evidence supported. These websites are careful to indicate that a listing does not constitute an endorsement. For instance, the APA Division 12 website declares:

This website is for informational and educational purposes. It does not represent the official policy of Division 12 or the American Psychological Association, nor does it render individual professional advice or endorse any particular treatment.

Readers can be forgiven for thinking otherwise, particularly when such websites provide links to commercial sites that unabashedly promote the therapies with products such as books, training videos, and workshops. There is lots of money to be made, and the appearance of an endorsement is coveted. Proponents of particular therapies are quick to send studies claiming positive findings to the committees that decide on listings, with the intent of getting their therapies acknowledged on these websites.

But now may be the time to begin some overdue reflection on how the label of evidence supported practice gets applied and whether there is something fundamentally wrong with the criteria.

Now you see it, now you don’t: “Strong evidence” for the efficacy of acceptance and commitment therapy for psychosis

On September 3, 2012 the APA Division 12 website announced a rating of “strong evidence” for the efficacy of acceptance and commitment therapy (ACT) for psychosis. I was quite skeptical. I posted links on Facebook and Twitter to a series of blog posts (1, 2, 3) in which I had previously debunked the study claiming to demonstrate that a few sessions of ACT significantly reduced rehospitalization of psychotic patients.

David Klonsky, a friend on Facebook who maintains the Division 12 treatment website, quickly contacted me and indicated that he would reevaluate the listing after reading my blog posts and that he had already contacted the section editor to get her evaluation. Within a day, the labeling was changed to “designation under re-review as of 9/3/12” and it is now (10/16/12) “modest research support.”

David Klonsky is a serious, thoughtful guy with an unenviable job: keeping the Division 12 list of evidence supported treatments updated. The designation is no less important than it once was, but it is increasingly difficult to engage burned-out committee members to evaluate the flood of new studies that proponents of particular therapies relentlessly send in. As we will see with this incident, the reports of studies under consideration are not necessarily reliable indicators of the efficacy of particular treatments, even when they come from prestigious, high-impact journals.

The initial designation of ACT as having “strong evidence” for psychosis was mainly based on a single, well-promoted study, claims about which made it all the way to Time magazine when it was first published.

Bach, P., & Hayes, S.C. (2002). The use of acceptance and commitment therapy to prevent the rehospitalization of psychotic patients: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 70, 1129-1139.

Of course, the designation of strong evidence requires support from two randomized trials, but the second trial was a modest attempt at replication of this study and was explicitly labeled as a pilot study.

The Bach and Hayes article has been cited 175 times as of 10/21/12 according to ISI Web of Science, mainly for claims that appear in its abstract: patients receiving up to four sessions of an ACT intervention had “a rate of rehospitalization half that of TAU [treatment as usual] participants over a four-month follow-up [italics added].” This would truly be a powerful intervention, if the claims were true. And my check of the literature suggests that these claims are almost universally accepted. I have never seen any skepticism expressed in peer-reviewed journals about the extraordinary claim of cutting rehospitalization in half.

Before reading further, you might want to examine the abstract and, even better, read the article for yourself and decide whether you are persuaded. You can even go to my first blog post on this study, where I identify some of the things to look for in evaluating the claims. If these are your intentions, you might want to stop reading here and resume after considering these materials.

Warning! Here comes the spoiler.

  • It is not clear that rehospitalization was originally set as the primary outcome, so there is a possible issue of a shifting primary outcome, a common tactic in repackaging a null trial as positive. Many biomedical journals require that investigators publish their protocols, with a designated primary outcome, before they enter the first patient into a trial. That is a strictly enforced requirement for later publication of the results of the trial, but it is not yet usually done for RCTs testing psychotherapies. The article is based on a dissertation. I retrieved a copy and found that its title seemed to suggest that symptoms, not rehospitalization, were the primary outcome: Acceptance and Commitment Therapy in the Treatment of Symptoms of Psychosis.
  • Although 40 patients were assigned to each group, analyses involved only 35 per group. The investigators simply dropped from the analyses patients with negative outcomes that are arguably at least equivalent to rehospitalization in their seriousness: committing suicide or going to jail. Think about it: what should we make of a therapy that prevented rehospitalization but led to the jailing and suicide of mental patients? This is not only a departure from intention-to-treat analyses; the loss of patients is nonrandom and potentially quite relevant to the evaluation of the trial. Exclusion of these patients has a substantial impact on the interpretation of results: the 5 patients missing from the ACT group represented 71% of the reported rehospitalizations in that group, and the 5 patients missing from the TAU group represented 36% of the reported rehospitalizations in theirs (see the sketch after this list).
  • Rehospitalization is not a typical primary outcome for a psychotherapy study. But if we suspend judgment for a moment as to whether it was the primary outcome for this study, ignore the lack of intent-to-treat analyses, and accept 35 patients per group, there is still not a simple, significant difference between groups for rehospitalization. The claim of “half” is based on voodoo statistics.
  • The trial did assess the frequency of psychotic symptoms, an outcome closer to what one would rely on to compare this trial with the results of other interventions. Yet oddly, patients receiving the ACT intervention actually reported symptoms twice as frequently as patients in TAU. The study also assessed how distressing hallucinations or delusions were to patients, which would be considered a patient-oriented outcome, but there were no differences on this variable. One would think that these outcomes would be very important to clinical and policy decision-making, and these results are not encouraging.
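To make the arithmetic in the second bullet concrete, here is a minimal sketch in Python. The counts are back-calculated from the percentages quoted above (5/7 ≈ 71%, 5/14 ≈ 36%), so treat them as illustrative rather than as figures checked against the article’s tables.

```python
# Rehospitalization rates under completers-only vs. more inclusive analyses.
# Counts are back-calculated from the percentages quoted in this post (5/7, 5/14),
# not taken directly from Bach & Hayes (2002) -- illustrative only.

def rate(events, n):
    return events / n

act_reported, tau_reported = 7, 14   # rehospitalizations among the 35 analyzed per group
act_excluded = tau_excluded = 5      # patients dropped from each group of 40 (suicide, jail)

# Completers-only analysis, roughly as reported: 0.20 vs 0.40, i.e., "half"
print(rate(act_reported, 35), rate(tau_reported, 35))

# Counting the excluded patients' equally serious outcomes against all 40 per group:
# 0.30 vs 0.475 -- the dramatic "half" shrinks considerably
print(rate(act_reported + act_excluded, 40), rate(tau_reported + tau_excluded, 40))
```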

The following study, which has been cited 64 times according to ISI Web of Science, rounded out the pair needed for a designation of strong support:

Gaudiano, B.A., & Herbert, J.D. (2006). Acute treatment of inpatients with psychotic symptoms using acceptance and commitment therapy: Pilot results. Behaviour Research and Therapy, 44, 415-437.

Appropriately framed as a pilot study, this trial started with 40 patients and delivered only three sessions of ACT. The comparison condition was enhanced treatment as usual, consisting of psychopharmacology, case management, and psychotherapy, as well as milieu therapy. Follow-up data were available for all but 2 patients. But this study is hardly the basis for rounding out a judgment of ACT as efficacious for psychosis.

  • There were assessments with multiple conventional psychotic symptom and functioning measures, as well as ACT-specific measures. The only conventional measure to achieve significance was distress related to hallucinations, and there were no differences in ACT-specific measures. There were no significant differences in rehospitalization.
  • The abstract puts a positive spin on these findings: “At discharge from the hospital, results suggest short-term advantages in affective symptoms, overall improvement, social impairment, and distress associated with hallucinations. In addition, more participants in the ACT condition reached clinically significant symptom improvement at discharge. Although four-month rehospitalization rates were lower in the ACT group, these differences did not reach statistical significance.”

The provisional designation of ACT as having strong evidence of efficacy for psychosis could have had important consequences. Clinicians and policymakers could decide that merely providing three sessions of ACT is a sufficient, empirically validated approach to keep chronic mental patients from returning to the hospital, and maybe even make discharge decisions based on whether patients had received ACT. But the evidence just isn’t there that ACT prevents rehospitalization, and when the claim is evaluated against what is known about the efficacy of psychotherapy for psychosis, it appears to be an unreasonable claim bordering on the absurd.

The redesignation of ACT as having modest support was based on additional consideration of a follow-up study of the Bach and Hayes trial, plus an additional feasibility study that involved 27 patients randomized either to treatment as usual or to 10 sessions of ACT plus treatment as usual. Its stated goal was to investigate the feasibility of using ACT to facilitate emotional recovery following psychosis, but, as a feasibility study, it included a full range of outcomes with the intention of deciding which would be important for assessing the impact of ACT in this population. The scales included the two subscales of the Hospital Anxiety and Depression Scale (HADS), the Positive and Negative Syndrome Scale, an ACT-specific scale, and a measure of the therapeutic alliance. Three of the patients assigned to just treatment as usual dropped out, and so intent-to-treat analyses were not conducted. With such a small sample, it is not surprising that there were no differences on most measures. The investigators noted that the patients receiving ACT had fewer crisis contacts over the duration of the trial, but it is not clear whether this is simply due to the treatment-as-usual group not having regular treatment and therefore having to resort to crisis contacts.

The abstract of the study states that “ACT appears to offer promise in reducing negative symptoms, depression and crisis contacts in psychosis”, which is probably a bit premature. Note also that across these three trials, there is a shift in the outcome to which the investigators point as evidence for the efficacy of ACT for psychosis. The assumption seems to be that any positive result can be claimed to represent a replication, even if different variables were cited for this purpose in the other studies.

Overall, this trial would also be rated as having a high risk of bias because of the lack of intent-to-treat analyses and the failure to specify a primary outcome among the battery that was administered. More importantly, it would simply be excluded from meta-analyses with which I have been associated because it had too few patients. A high risk of bias plus too few patients discourages any confidence in these results.
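To see why so small a trial is uninformative regardless of what it reports, here is a rough power calculation. The assumed effect size of d = 0.5, a conventionally “moderate” effect, is my own illustration, not a figure from the study:

```python
# Approximate power of a two-arm trial via the normal approximation
# (two-sided alpha = .05). The effect size d = 0.5 is an assumed value.
import math
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the two-sample z-test
    return (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)

# Roughly 13-14 patients per group, as in the 27-patient feasibility trial
print(round(two_sample_power(0.5, 13), 2))  # ~0.25: three-in-four odds of missing a real moderate effect
```

With power around 25%, null results are expected even if the therapy works, and any “significant” finding that does surface is disproportionately likely to be a fluke.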

Is treating PTSD with acupoint stimulation supported by evidence?

Whether ACT is more efficacious than other therapies, as its proponents sometimes claim, or whether it is efficacious for psychosis, is debatable, but probably no one would consider ACT anything other than a bona fide therapy. The same does not hold for Emotional Freedom Techniques (EFT) and its key component, acupoint stimulation. I’m sure there was much consternation at APA and Division 12 when stories circulated on the Internet that APA had declared EFT to be evidence supported.

Wikipedia offers the following definition of EFT:

Emotional Freedom Techniques (EFT) is a form of counseling intervention that draws on various theories of alternative medicine including acupuncture, neuro-linguistic programming, energy medicine, and Thought Field Therapy. During an EFT session, the client will focus on a specific issue while tapping on so-called “end points of the body’s energy meridians.”

Writing in The Skeptical Inquirer, Brandon Gaudiano and James Herbert argued that there is no plausible mechanism to explain how the specifics of EFT could add to its effectiveness, and its claims have been described as unfalsifiable and therefore pseudoscientific. EFT is widely dismissed by skeptics, along with its predecessor, Thought Field Therapy, and has been described in the mainstream press as “probably nonsense.” Evidence has not been found for the existence of acupuncture points, meridians, or other concepts involved in traditional Chinese medicine.

The scathing Gaudiano and Herbert critique is worth a read and calls attention to claims of EFT by proxy: patients improve when therapists tap themselves rather than the patients! My imagination runs wild: how about televised sessions in which therapists tap themselves and liberate thousands of patients around the world from their PTSD?

According to David Feinstein, a proponent of EFT, when Corsini (2001) included a chapter on Thought Field Therapy in an anthology of innovative psychotherapies, he acknowledged that it was “either one of the greatest advances in psychotherapy or it is a hoax.”

Claims have been made for acupoint stimulation that even proponents of EFT consider “provocative,” “extraordinary,” and “too good to be true.” An article published in the Journal of Clinical Psychology (not an APA journal) reported that 105 people were treated in Kosovo for severe emotional reactions to past torture, rape, and witnessing loved ones being burned or raped. Strong improvement was observed in 103 of these patients, despite an average of only three sessions. For comparison, exposure therapy involves at least 15 sessions in the literature and claims nowhere near this efficacy. However, even more extraordinary results were claimed for the combined sample of 337 patients treated in visits to Kosovo, Rwanda, the Congo, and South Africa. The 337 individuals expressed 1016 traumatic memories, of which 1013 were successfully resolved, resulting in substantial improvement in 334 patients. Unfortunately, the details of this study remain unpublished, but claims of these results appear in a forthcoming article in the APA journal Review of General Psychology.

Reports circulating on the Internet that APA had declared EFT to be an evidence supported approach stemmed from a press release by the EFT Universe that cited a statement from the same Review of General Psychology article:

A literature search identified 50 peer-reviewed papers that report or investigate clinical outcomes following the tapping of acupuncture points to address psychological issues. The 17 randomized controlled trials in this sample were critically evaluated for design quality, leading to the conclusion that they consistently demonstrated strong effect sizes and other positive statistical results that far exceed chance after relatively few treatment sessions. Criteria for evidence-based treatments proposed by Division 12 of the American Psychological Association were also applied and found to be met for a number of conditions, including PTSD (Feinstein, 2012).

Feinstein had been developing his claims about energy therapies such as EFT meeting the Division 12 criteria for a while. In a 2008 article in the APA journal Psychotherapy: Theory, Research, Practice, Training, he declared:

although the evidence is still preliminary, energy psychology has reached the minimum threshold for being designated as an evidence-based treatment, with one form having met the APA Division 12 criteria as a “probably efficacious” treatment for specific phobias; another for maintaining weight loss.

In this 2008 article, Feinstein also cited a review in the online book review journal of APA in which Ilene Serlin, Past President of APA’s Division of Humanistic Psychology, praised Feinstein’s book for its “valuable expansion of the traditional biopsychosocial model of psychology to include the dimension of energy” and energy psychology as representing “a new discipline that has been receiving attention due to its speed and effectiveness with difficult cases.”

The reports that EFT had been designated as an evidence supported treatment made the rounds for a few months, sometimes with the clarification that EFT met the criteria but had not yet been labeled as evidence supported by Division 12. In some communities, stories about EFT, or tapping therapy as it was called, made the local TV news. KABC News Los Angeles titled a story “‘Tapping’ therapy can relieve anxiety, stress, researchers say” and got an APA spokesperson to provide a muted comment:

 “Has this tapping therapy been proven effective? We don’t think so at this point,” said Rhea Farberman, Executive Director for Public and Member Communications at the APA.

The comment went on to say that APA viewed stress and anxiety as serious but treatable issues for some persons, and that it recommended cognitive behavior therapy, not tapping therapy.

What do these incidents say about the branding of psychotherapies as evidence supported?

I will explore this issue in greater depth in a future blog post, but for now we are left with some questions.

The first incident involved the designation of a psychotherapy as having strong evidence of efficacy for psychosis, a designation that was quickly changed, first to under review and then to modest support. The precipitant for this downgrading seems to have been blog posts that revealed the abstract of the key study to be misleading. Designation of a therapy as having strong evidence for its efficacy requires two positive randomized controlled trials. The second trial was described as a pilot study explicitly aimed at replicating the first one. Like the first, its abstract declared positive findings. However, this study failed to replicate the first study’s claimed reduction in hospitalization, and a cursory examination of the results section revealed that this study, like the one it attempted to replicate, was basically a null trial.

  • Do the current criteria employed by Division 12 (only two positive trials, with no attention to size or quality) set too low a bar for a therapy receiving the seemingly important branding of having strong evidence?
  • The revised status of ACT for psychosis is that it has modest support. But how do two null trials published with confirmatory bias constitute modest support?
  • Are there pitfalls in uncritically accepting claims in the abstracts of articles appearing in prestigious journals like JCCP?
  • More generally, to what extent do the shortcomings of articles appearing in prestigious journals like JCCP warrant skepticism, not only by reviewers for Division 12, but consumers more generally?
  • Should we expect prestigious journals like JCCP to encourage and make a place for post-publication peer review of the articles that have appeared there?
  • Should revised criteria for evidence supported therapies not just count whether there are one or two positive trials, but incorporate formal ratings of trials for overall quality and risk of bias?

The second incident involves rumors of APA having designated as evidence supported a bizarre therapy with extravagant claims of efficacy. The rumor was based on a forthcoming review in an APA journal that indicated that EFT had a sufficient number of positive randomized trials to meet APA Division 12 criteria for evidence supported status. It was left to a media person from APA to clarify that APA did not endorse this therapy, but it was unclear on what basis this declaration was made.

  • If ACT for psychosis has modest support, where does EFT stand when evaluated by the same criteria?
  • Can sources other than APA Division 12 apply the criteria to psychotherapies and declare the therapies as warranting evidence-based status? If not, why not?
  • Do consumers, as well as proponents of innovative and even strange therapies, deserve formal evaluation of therapies against APA Division 12 criteria, with therapies designated not only as having “strong evidence” when they meet the criteria, but alternatively as having failed to accumulate evidence of efficacy, or even as having demonstrated possible harm?
  • If APA Division 12 takes on the task of publicizing the evidence based status of psychotherapies, does it thereby assume a responsibility to alert policymakers and consumers to therapies that fail to meet these criteria?
  • If application of the existing Division 12 criteria warrants EFT as having strong evidence of efficacy, what does that say about the adequacy of these criteria?

To be continued……

What I learned as an Academic Editor for PLOS ONE

Open access week is just around the corner, and I thought I’d take the opportunity to share my experience as an Academic Editor for PLOS ONE.

I was invited to join the team following a conversation at Science Online 2010 with, I think, Steve Koch, who recommended me to PLOS ONE, and before I knew it I was receiving lots of emails asking me to handle manuscripts.

The nice thing about PLOS ONE is that I get to choose which articles I handle, and I am very picky. I think that my role is not just to ‘handle’ the manuscript but also to make sure that the review process is fair. To do this, I need to understand the manuscript myself. I read every article that I take on and write a ‘mini-review’ of it for myself. When I get the external peer reviews, I go through every comment they make against the submitted version, compare the different reviews, and revisit my first impression of the manuscript. I have learned a lot from the reviewers: they see things I have missed, and they miss things I have detected. It has been a great insight into the peer review process. And I love not having to pull my crystal ball out to determine whether the article is ‘important’ but just having to decide whether it is scientifically solid.

Read/Review
Image by Wiertz Sébastien on Flickr, licensed under CC-BY

If the science is fundamentally good, the article is sent back to the authors for either minor or major changes, and then it falls back into my inbox. I have found it really interesting to see how authors deal with the reviewers’ comments. The re-submission is also a lot of work. I need to compare the original and new versions, make sure that the authors have done what they say they have done, and make sure that all the reviewers’ comments have been addressed. And then I decide whether to send it back for re-review or not. One thing that I found interesting in this second phase is when authors respond to the reviewers’ comments in the letter but do not incorporate that into the article. It is almost as if the responses are for my and the reviewers’ benefit only. So back it goes, asking them to incorporate that rationale into the actual manuscript. Oh well. That means another round. Luckily this does not happen that often.

And then it is time to ‘accept’ the paper – and so back to the manuscript, where I go through commas, colons, paragraphs, spelling mistakes, in-text citations, reference lists, formatting, image quality, figure legends, etc. This I normally send to the authors together with their acceptance letter, but without asking for the article to be re-submitted.

The main challenge I find with the process is time management.

When I get the request to handle an article, I accept or not based on how much time I have to process the article. That is all good, except that I cannot predict when the reviews, resubmissions, etc. will eventually happen – and many times these articles ‘ready for decision’ show up in my inbox at a time when I cannot give them the full attention they deserve. Let alone being able to predict when the revised version will be submitted! I find it impossible to plan ahead for this, especially since I have very little control over a lot of my time commitments (like the days I need to lecture, submit exam questions, or mark exams). So if an article arrives while I am at a conference with limited internet connection… how can I plan for this?

Finding reviewers is another challenge. Sometimes they are hard to find. Nothing is as discouraging as finding the “reviewer declined…” emails in my inbox, indicating that it is back to the system to redo something that I thought was done and dusted. The other day someone asked what a reasonable amount of reviewing to do in a year is. My answer was that one should probably, at minimum, return the number of reviews provided for one’s own articles. Say I publish 3 articles a year, each with 3 reviews; then I should not start complaining about reviewing until I have reviewed at least 9 articles. (Of course, one can factor in rejection rate, number of authors, etc., as in the sketch below.) But a tit-for-tat trade-off seems like a fair expectation. So then why is it so hard to find reviewers? Come on people – if it was your paper getting delayed you’d be sending letters to the journal asking how come the article shows as still sitting with the Editor!
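Here is a rough sketch of that tit-for-tat arithmetic in Python; the rejection-rate and co-author adjustments are my own illustrative refinements, not any journal’s policy.

```python
# Reviews 'owed' per year to balance what one's own submissions consume.
# The acceptance_rate and n_authors adjustments are illustrative assumptions.

def fair_review_quota(papers_per_year, reviews_per_paper,
                      acceptance_rate=1.0, n_authors=1):
    # Each published paper may have been reviewed at roughly 1/acceptance_rate
    # venues before landing; the burden can also be split among co-authors.
    reviews_consumed = papers_per_year * reviews_per_paper / acceptance_rate
    return reviews_consumed / n_authors

print(fair_review_quota(3, 3))                       # 9.0, the example in the text
print(fair_review_quota(3, 3, acceptance_rate=0.5))  # 18.0 if each paper was first rejected elsewhere
```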

And that is the other thing I learned. Editors don’t just sit on papers because they are lazy. There are many reasons why handling an article may take more or less time. In some cases, after receiving the reviews I feel that something has been raised that needs a specialist to look at a specific aspect of the paper. Sometimes I need a second opinion because there is too little agreement between reviewers. Sometimes the reviewers don’t submit in the agreed time. There are many reasons why an article can be delayed, and so what I learned is to be patient with the editors when I send my papers for publication.

But despite the headaches, the stress and the struggle of being an Academic Editor, it is also an extremely rewarding experience. I keep learning more about science because I see a range of articles before they take their final shape, because I get to look into the discussion of what is good and what is weak. And I get to be part of what makes science great: trying to put out the best we can produce.

It is unfortunate that this process is locked up. I think that there is a lot to learn from it. I think that students and early career scientists would really benefit from seeing the process for articles that are not their own: how variable the quality of the reviews is, and what dealing well with reviewers’ comments and suggestions looks like. And the public too would benefit from seeing what this peer review is all about – what the strengths and weaknesses of the process are and what having been peer reviewed really means.

So, back to Open Access week. Access to the final product is really good. Access to the process of peer review can make understanding the literature even better, because it exposes a part of the process of science that is also worth sharing.


A First Post and the First Image of Brain Tissue under the Microscope

It is not easy to write a first post. So, as a first post I thought I’d share another first.

As far as I know, the image below is the first published image of ‘brain’ tissue under the microscope. (I hope Mo Costandi corrects me if I am wrong.) It is really a picture of the nerve that connects the eye to the brain, but that is still nervous tissue. The image was published in 1675 by Mr Antonie van Leeuwenhoek. [1]

Wellcome Library, London under Creative Commons by-nc 2.0 UK: England & Wales

What I find fascinating about this image is how it reminds me of how it all started. Most of what I do studying neuroscience involves using microscopes, and this image reminds me how far along we’ve come. I can’t help but wonder what went through Leeuwenhoek’s mind when he saw this image – after all, he did not know what we know now. In fact, even the understanding of what light was at the time was not what we have now, so interpreting what that image of the optic nerve meant for visual neuroscience must have been quite an interesting challenge.

I can’t help but chuckle when I read in his manuscript this passage:

“I here thought to myself whether every one of these hollownesses might not have been a filament in the Nerve and besides, that twas needless, there should be a cavity in the Optic Nerve through which the Animal Spirits, representing the species or images in the Eye, might pass into the brain.”

I chuckle, because I am amused by his reference to ‘Animal Spirits’. But I can’t help but try to imagine what that first observation might have looked like to someone who saw it for the first time, without the experience that even the average biology student has today. I often wish I could get one of those old microscopes, repeat his experiments, and see what nervous tissue might have looked like at that time, to understand why people thought of the brain the way they did. Microscopy was such a new thing that even a century after that image was produced, a word of caution was expressed in Home’s Croonian lecture (1799) [2]:

“It is scarcely necessary to mention that parts of an animal body are not fitted by being examined by glasses of a great magnifying power, and, whenever they are shewn one hundred times larger than their natural size, no dependence can be placed upon their appearance.”

It would take some time for microscopes, and the methods used to process tissues, to get better, so that we could make more sense of what we were looking at under the lens. So it is not surprising that it was not until the end of the 19th century that the ‘cellular theory’ that was contemporary to Leeuwenhoek’s observation was accepted to be true for the brain as well.

After all, how much detail we know about anything in biology is only as good as the precision of the instruments we use to study it. I can’t help but wonder what Leeuwenhoek would think of the microscopic images of nerve tissue that we produce today. We have come a long way and gained a lot of precision. And after all, that is the way that science moves on.

A few years back I came across this snippet by George Brecht at the Walker Art Center in Minneapolis, Minnesota:

“Exercise
Determine the limits of an object or event
Determine the limits more precisely
Repeat,
Until further precision is impossible”

I couldn’t help thinking how well this artist described the process of science. We keep hitting the limits of the precision with which we can measure stuff, and have to wait until a new tool is developed to measure the same thing a little bit better. Sometimes we confirm what we thought previously; on occasion we find something unexpected and are forced to change our minds about what we hold true. It is the hope of hitting that unexpected that gets me out of bed every morning to go to the lab.

[1] Leewenhoeck, A. Microscopical Observations of Mr. Leewenhoeck, Concerning the Optic Nerve, Communicated to the Publisher in Dutch, and by Him Made English. Phil. Trans. 1675, 10, 378-380; doi:10.1098/rstl.1675.0032 (pdf)

[2] Home, E. The Croonian Lecture. Experiments and Observations upon the Structure of Nerves. Phil. Trans. R. Soc. Lond. 1799, 89, 1-12; doi:10.1098/rstl.1799.0002 (pdf)

Preventing Veteran Suicide

The alarm clock, flashing 03:00 in green neon, signals to me that I should be fast asleep. I close my eyes and take deep breaths, trying to lull myself back into a peaceful slumber–the day ahead holds a daunting schedule with no room for yawns or fatigue. Then it appears, pops right into the forefront of my mind, a solitary question that has needled its way through my dreams, forcing me to deal with its implications: “Did you miss something with Dave?” Instantly I recognize this thought as residue from yesterday at work, something that had been neglected amongst the whirlwind of patient visits, typing of progress notes, writing of prescriptions, and answering of emails, voice mails, and texts.

In the still darkness of my bedroom, I strain to recall the details of his clinic visit.  My patient Dave, a veteran, had been home from Iraq for two years, but the passage of time had not healed his psychological wounds. Through stifled tones he told me about his tormented nights and stunned days—horrifying memories, so difficult to erase, now dangerously directing his life. Tears had welled in his honey brown eyes that were dulled by the weight of war. Silence hung between us and I shifted uneasily in my chair, hesitating to reach for the Kleenex. Then, I asked the question I was duty bound to ask, “Have you had thoughts to kill yourself?” A pause, then, “No, Doc, No.”

I run through a mental checklist to make sure I had done all that his clinical condition required. I had: increased his sertraline dosage (a medication to treat symptoms of posttraumatic stress); prescribed a short course of sleep medication to help ease the agony of his insomnia; recommended he see his therapist weekly instead of every other week; called his therapist and shared my concerns; and asked him to return to my clinic in two weeks (the time it would take for the extra sertraline to kick in) instead of the usual month. As I walked him to the door, I made sure he had all the emergency numbers to call if things got worse.

It seemed, on paper at least, I had done all I was supposed to do — so then why am I tossing and turning at this ungodly hour?

My heart sinks as I realize what had been missing from my visit with Dave: the “click”. The click is a feeling that is beyond rational comprehension, more intuition than fact; it can come and go in the blink of an eye and is hard to quantify or measure. The presence of the click signals to me that my patient is telling me everything I need to know, that we are on the same page, and that we share the same hope for recovery. The click signals a mutual trust and respect and a healthy alliance in our relationship. Treating thousands of patients over a decade of clinical practice has taught me that the absence of the click invariably means trouble is brewing.

Now the green neon flashes 04:00. I sigh and get out of bed, knowing full well that sleep will evade me. I wait for dawn and a reasonable time when I can call Dave to make sure he is okay.

Available Research on Veterans and Suicide

Between 30,000 and 32,000 Americans die from suicide per year, and about 20% of them are veterans. It is important to distinguish between veterans, i.e., persons who have served in the military, naval, or air service and are now discharged, and active duty personnel, i.e., persons who are still serving in the military. The distinction is important because the statistics are different for Army and Marine active duty personnel: for this population, suicide rates nearly doubled between 2005 and 2009. A recent and thoughtful analysis of this tragic situation can be found here.

I am a psychiatrist working for Veterans Affairs (VA), so the subject of veteran suicide is never far from my mind. There are about 5 deaths from suicide per day among veterans who receive care in VA hospitals like the one where I work. More than 60% of these suicides occur among veterans who use VA services and are known to have a mental health condition. The statistics are sobering, and as troops return home from the conflicts in Iraq and Afghanistan, the topic of veteran suicide will, no doubt, continue to vex and distress all concerned parties.

In high income countries like the U.S., suicide usually occurs in the context of mental illness. Hence, the key to suicide prevention is providing high quality mental healthcare for the mental health disorder that ails the person seeking help. Whilst Posttraumatic Stress Disorder (PTSD) and Traumatic Brain Injury (the signature injury of the Iraq war) are always forefront in the minds of professionals treating veterans, our care would be reductionistic if we did not thoroughly evaluate for and treat other common mental health disorders such as clinical depression, alcohol or drug addiction, bipolar disorder, and schizophrenia. All of these mental health disorders are associated with an elevated suicide risk, so getting the diagnosis correct is crucial. The diagnosis dictates what the treatment should be, and the correct treatment increases the odds of recovery from the specific mental health disorder. This is still our best strategy for preventing suicide.

In reality there are, however, many obstacles to this strategy.  For mental health treatment to be effective it often requires the steady delivery of treatment for several weeks (i.e. a certain dose of psychological treatments and psychotropic medication is often needed for a sustained recovery).  Yet, studies have shown that, amongst recent returnees from the conflicts in Iraq and Afghanistan who have PTSD, for instance, such treatment courses are less likely to be completed.

Specifically with regard to psychotropic medication: guidelines addressing the treatment of veterans with PTSD strongly recommend a therapeutic trial (i.e., taking the medication for long enough, and at a high enough dosage, to see its full effect on symptoms) of medications called selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors (SNRIs). Yet a recent study we published shows that, when compared to veterans from previous eras, recent returnees from the conflicts in Iraq and Afghanistan were less likely to complete such a therapeutic trial. Moreover, if they were clinically depressed in addition to having PTSD, their odds of getting a therapeutic trial were diminished even further. This hints at as-yet-unexplained obstacles to engaging veterans from the recent conflicts in Iraq and Afghanistan in mental health treatment—a disconcerting thought for clinicians who work with this population.

Another obstacle is that, despite the best efforts of individual clinicians, conventional mental health services still fail to reach many individuals who are suicidal; i.e., many of those who are suicidal are not even engaged in the healthcare system to begin with. The health disparities literature is replete with evidence demonstrating how, in our society, those most in need of psychiatric and medical care are often the least likely to get it. Social determinants such as where one lives, economic security, housing quality, and employment opportunities all play a key role in the development of such inequities. These limitations have been compounded by a historic lack of coordinated suicide prevention strategies among well-meaning organizations and agencies. This raises the question: what is the VA system doing to prevent veteran suicide?

In 2004, the VA started to focus on deficits in its mental healthcare services and developed a VA Comprehensive Mental Health Strategic Plan to address identified problems, with focused suicide prevention efforts beginning in 2007. Examples of outreach efforts include a 24/7 suicide prevention hotline where veterans, or those concerned about a veteran, can call 1-800-273-8255 and then press 1 to be connected to a VA mental health professional trained to deal with the immediate crisis. A written chat service at http://veteranscrisisline.net/Default.aspx and a texting service at 838255, both of which connect those in crisis directly to mental health professionals, are also available.

In addition, screening and assessment processes have been set up throughout the system to assist in the identification of patients at risk for suicide. The VA electronic medical record has a suicide risk flagging system that has been developed to assure continuity of care and enhance awareness among caregivers. Each VA medical center has a suicide prevention coordinator, whose job it is to ensure the at-risk veteran is connected to the right services and receives adequate follow-up. Those identified as high risk receive an enhanced level of care, including missed-appointment follow-ups, safety planning, and weekly follow-up visits.

The relative recency of these efforts means their actual effectiveness in reducing suicide rates remains to be fully evaluated. Speaking from the viewpoint of a physician who has worked in a variety of hospital systems, from public to private, it is hard not to be impressed by the comprehensive nature of the VA’s current suicide prevention efforts. Furthermore, dedicated and well trained professionals continue to come up with thoughtful and innovative ways to tackle these tough problems head on.

Yet there has been only a slight decrease in suicide among VA-treated veterans in recent years, which raises the question: why does the goal of reducing suicide rates amongst veterans (and, indeed, the general public) remain so elusive?

Some of the reasons lie in the state of the science of suicide research. We lack knowledge of the fundamental biological markers that could help us predict who will commit suicide. Risk of suicide is shared by biological, but not adoptive, relatives, prompting the conclusion that the familiality of suicide is due to genes rather than family environment or culture. Yet, despite gargantuan efforts on the part of psychiatric researchers, there is currently no suicide gene or genetic test that would be useful in predicting the risk of suicidal behavior in any particular individual.

From an epidemiological standpoint, we know several clinical factors that are associated with increased odds that someone will commit suicide. These factors include: having a mental illness; endorsing suicidal ideation; a prior history of a suicide attempt; a recent interpersonal loss; recent discharge from a psychiatric hospital; and a family history of suicide. Yet, with an overall rate of 11.3 American suicide deaths per 100,000 people, it is very hard to predict who will actually commit suicide: many people have these clinical risk factors (i.e., the risks are common), but relatively few of them will actually commit suicide. In short, the predictive validity of these clinical risk factors is poor (the sketch below makes the base-rate problem concrete). Add to this the reality that human beings are infinitely complex (and not just a sum of their clinical risk factors), and identifying under what exact circumstances, and at what point in time, a high-risk individual may actually attempt suicide becomes seemingly impossible.
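To see why common risk factors predict so poorly at this base rate, here is a back-of-the-envelope calculation. The sensitivity and specificity of the hypothetical screen are assumed values, chosen generously for illustration:

```python
# Positive predictive value (PPV) of a hypothetical suicide-risk screen at the
# population base rate of 11.3 per 100,000. Sensitivity and specificity are
# assumed, deliberately optimistic values -- not estimates from any real study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 11.3 / 100_000  # annual suicide deaths per person in the population

ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=prevalence)
print(f"{ppv:.4%}")  # ~0.09%: fewer than 1 in 1,000 flagged individuals would die by suicide
```

Even a screen far better than any existing risk factor would flag overwhelmingly false positives, which is exactly why prediction at the level of the individual patient remains out of reach.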

To complicate matters further, a clinician cannot simply rely on a patient’s denial of suicidal ideation when assessing suicide risk. The reality is that a suicidal patient may not be inclined to admit their suicidality to a mental health professional, for fear that they will be forced into treatment or that their suicidal plans will be challenged. This complicates the dynamic between caregiver and patient and regrettably means that even the most well-meaning, vigilant clinicians may not be able to identify a suicidal patient.

A Time Honored Tradition

Faced with the complexities and uncertainties surrounding the issue of veteran suicide, I find myself relying heavily on a time honored tradition of medicine—the power of a strong therapeutic alliance with my patient: the importance of creating an environment where they feel they can say whatever is on their mind, and making it clear that, if things are not going well, I want to know about it. Creating such an environment is no easy feat, as 21st century medical practice offers no end of distraction to the practicing physician: back-to-back clinic schedules; an electronic medical record that dishes up a steady stream of alerts, notifications, and orders that require your continued visual attention; instant messages, emails, texts, and phone calls that ask you to make clinical decisions and judgments in real time; and, of course, mounds of paperwork. In such an environment, I find listening—really listening—to my patient has become one of the most powerful things I can offer. Listening creates silent spaces that can be filled with a patient’s expressions of their worst fears and deepest secrets, untainted by fabrication or distortion.

The Click

In my office, I watch the clock ticking on the wall. The second it hits 8 AM, I pick up the phone and call Dave’s cell phone number. The phone rings and rings; my heart sinks as I fear it is heading for voice mail, then skips a beat as it is picked up.

“Hello?”

“Hi Dave, it’s Dr. Jain from the Palo Alto VA.”

“Oh hi Doc.”

“I know yesterday was a bad day for you, so I thought I would just check in.”

Silence.

“I am thinking you should come back to clinic in a week instead of two; how does that sound?”

Silence.

“Yes, I think that would be a good idea.”

There it is—in his voice, what had been missing from clinic: an inflection in his tone, a subtle change, but somehow reassuring—the click. He is listening to me and, perhaps more importantly, he knows I am listening to him—we are a team working toward a mutually agreed upon goal. I know this does not guarantee that there will not be troubled times ahead but, for now, this is enough.


The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of Veterans Affairs or the United States Government.

In an effort to protect individual patient privacy the patient stories depicted here are composites of various real encounters brought together to illustrate the situation.