Why Addiction is NOT a Brain Disease

Addiction to substances (e.g., booze, drugs, cigarettes) and behaviors (e.g., eating, sex, gambling) is an enormous problem, seriously affecting something like 40% of individuals in the Western world. Attempts to define addiction in concrete scientific terms have been highly controversial and are becoming increasingly politicized. What IS addiction? We as scientists need to know what it is, if we are to have any hope of helping to alleviate it.

There are three main definitional categories for addiction: a disease, a matter of choice, and self-medication. There is some overlap among these meta-models, but each has unique implications for treatment, from the level of government policy to that of available options for individual sufferers.

The dominant party line in the U.S. and Canada is that addiction is a brain disease. For example, according to the National Institute on Drug Abuse (NIDA), “Addiction is defined as a chronic, relapsing brain disease that is characterized by compulsive drug seeking and use, despite harmful consequences.” In this post, I want to challenge that idea based on our knowledge of normal brain change and development.

Why many professionals define addiction as a disease.

The idea that addiction is a type of disease or disorder has a lot of adherents. This should not be surprising, as the loudest and strongest voices in the definitional wars come from the medical community. Doctors rely on categories to understand people’s problems, even problems of the mind. Every mental and emotional problem fits a medical label, from borderline personality disorder to autism to depression to addiction. These conditions are described as tightly as possible, and listed in the DSM (Diagnostic and Statistical Manual of Mental Disorders) and the ICD (International Classification of Diseases) for anyone to read.

I won’t try to summarize all the terms and concepts used to define addiction as a disease, but Steven Hyman, M.D., former director of NIMH and Provost of Harvard University, does a good job of it. His argument, which reflects the view of the medical community more generally (e.g., NIMH, NIDA, the American Medical Association), is that addiction is a condition that changes the way the brain works, just like diabetes changes the way the pancreas works. Nora Volkow, M.D., the director of NIDA, agrees. Going back to the NIDA site: “Brain-imaging studies from drug-addicted individuals show physical changes in areas of the brain that are critical for judgment, decision making, learning and memory, and behavior control.” Specifically, the dopamine system is altered so that only the substance of choice is capable of triggering dopamine release to the nucleus accumbens (NAC), also referred to as the ventral striatum, while other potential rewards do so less and less. The NAC is responsible for goal-directed behaviour and for the motivation to pursue goals.

Different theories propose different roles for dopamine in the NAC. For some, dopamine means pleasure. If only drugs or alcohol can give you pleasure, then of course you will continue to take them. For others, dopamine means attraction. Berridge’s theory (which has a great deal of empirical support) claims that cues related to the object of addiction become “sensitized,” so they greatly increase dopamine and therefore attraction — which turns to craving when the goal is not immediately available. But pretty much all the major theories agree that dopamine metabolism is altered by addiction, and that’s why it counts as a disease. The brain is part of the body, after all.

What’s wrong with this definition?

It’s accurate in some ways. It accounts for the neurobiology of addiction better than the “choice” model and other contenders. It explains the helplessness addicts feel: they are in the grip of a disease, and so they can’t get better by themselves. It also helps alleviate guilt, shame, and blame, and it gets people on track to seek treatment. Moreover, addiction is indeed like a disease, and a good metaphor and a good model may not be so different.

What it doesn’t explain is spontaneous recovery. True, you get spontaneous recovery with medical diseases…but not very often, especially with serious ones. Yet many if not most addicts get better by themselves, without medically prescribed treatment, without going to AA or NA, and often after leaving inadequate treatment programs and getting more creative with their personal issues. For example, alcoholics (a group that can be defined in various ways) recover “naturally” (independent of treatment) at a rate of 50-80%, depending on your choice of statistics (but see this link for a good example). For many of these individuals, recovery is best described as a developmental process — a change in their motivation to obtain the substance of choice, a change in their capacity to control their thoughts and feelings, and/or a change in contextual (e.g., social, economic) factors that get them to work hard at overcoming their addiction. In fact, most people beat addiction by working really hard at it. If only we could say the same about medical diseases!

The problem with the disease model from a brain’s-eye view.

According to a standard undergraduate text: “Although we tend to think of regions of the brain as having fixed functions, the brain is plastic: neural tissue has the capacity to adapt to the world by changing how its functions are organized…the connections among neurons in a given functional system are constantly changing in response to experience” (Kolb, B., & Whishaw, I.Q. [2011]. An introduction to brain and behaviour. New York: Worth). To get a bit more specific, every experience that has potent emotional content changes the NAC and its uptake of dopamine. Yet we wouldn’t want to call the excitement you get from the love of your life, or your fifth visit to Paris, a disease. The NAC is highly plastic. It has to be, so that we can pursue different rewards as we develop, right through childhood to the rest of the lifespan. In fact, each highly rewarding experience builds its own network of synapses in and around the NAC, and that network sends a signal to the midbrain: I’m anticipating x, so send up some dopamine, right now! That’s the case with romantic love, Paris, and heroin. During and after each of these experiences, that network of synapses gets strengthened: so the “specialization” of dopamine uptake is further increased. London just doesn’t do it for you anymore. It’s got to be Paris. Pot, wine, music…they don’t turn your crank so much; but cocaine sure does. Physical changes in the brain are its only way to learn, to remember, and to develop. But we wouldn’t want to call learning a disease.

So how well does the disease model fit the phenomenon of addiction? How do we know which urges, attractions, and desires are to be labeled “disease” and which are to be considered aspects of normal brain functioning? There would have to be a line in the sand somewhere. Not just the amount of dopamine released, not just the degree of specificity in what you find rewarding: these are continuous variables. They don’t lend themselves to two (qualitatively) different states: disease and non-disease.

In my view, addiction (whether to drugs, food, gambling, or whatever) doesn’t fit a specific physiological category. Rather, I see addiction as an extreme form of normality, if one can say such a thing. Perhaps more precisely: an extreme form of learning. No doubt addiction is a frightening, often horrible, state to endure, whether in oneself or in one’s loved ones. But that doesn’t make it a disease.

The Complexities of Diagnosing Posttraumatic Stress Disorder (PTSD)

When I was in medical school, senior physicians would frequently usher a group of us students into a patient’s room so we might hear the patient tell the story of their illness. It seemed that the more classic the story was for a particular illness, the more intense the ushering. We would huddle around the patient’s bed, all of us transfixed by the doctor interviewing the patient. I remember hanging on the patient’s every last word and, simultaneously, sifting through the textbook data stored in my brain in search of a diagnostic match. When done, the senior doctor would turn around and challenge us to diagnose what ailed the patient, and we would respond with a flurry of answers. I still remember the thrill of solving the puzzle, of making a “textbook diagnosis”.


These days, almost 20 years later, it seems I rarely meet a patient with a “textbook diagnosis”, and the patients I care for in real-life clinical practice are more complex than those described in the pages of thick medical texts. Perhaps nowhere does this complexity become more apparent than when I meet patients who have experienced a severe psychological trauma.

In my work as a psychiatrist, that go-to “textbook” is the DSM-IV, the Diagnostic and Statistical Manual of Mental Disorders, currently in its fourth edition. This is the standard diagnostic manual used by psychiatrists and psychologists all over the USA.

In this 943-page book, under Chapter 7, titled “Anxiety Disorders”, one can find several pages devoted to Posttraumatic Stress Disorder (PTSD). Page after page documents all one could possibly need to know about diagnosing PTSD: the core clinical features, associated features and disorders, specific cultural and age features, prevalence of PTSD, clinical course of PTSD, familial patterns, and differential diagnoses (i.e., other disorders that look like PTSD but are not).

Yet, as valuable as these pages are, the diagnosis of PTSD as defined there still leaves many dissatisfied.

In her 1992 landmark text, Trauma and Recovery, Judith Herman M.D., a Harvard psychiatrist, argued that “the diagnosis of posttraumatic stress disorder as it is presently defined does not fit accurately enough the complicated symptoms seen in survivors of prolonged repeated trauma”.  She proposed that the syndrome that follows upon exposure to prolonged repeated trauma needs its own name and offered the new term, “complex PTSD”.

I find myself thinking of Dr. Herman’s complex PTSD diagnosis often these days—I think complex PTSD better explains some of the symptoms I see in my patients who have experienced severe trauma. In such cases I find the DSM-IV wanting and instead find that the complex PTSD diagnosis holds more real-life value, or clinical utility.

The DSM-IV is currently undergoing a revision, with the latest version, the DSM-5, slated to come out in May of 2013. This has raised the possibility that complex PTSD will be included as a separate diagnostic entity in the DSM-5. But it is not so easy to get into the DSM: for a new disorder to be considered for entry, a strict set of criteria must be met. Is there a clear definition of the disorder? Are there reliable methods to diagnose the disorder? In the case of complex PTSD, is it truly distinct from PTSD or just a different, perhaps more severe, type of PTSD? What is the value of adding a new diagnosis—how will it change the way we care for those living with PTSD?

In fact, vigorous discussion over this very question was recently published in the Journal of Traumatic Stress, an academic journal published by the International Society for Traumatic Stress Studies. Leaders and experts in the field of traumatic stress articulately state their arguments for and against the inclusion of complex PTSD in the DSM-5.

One issue fundamental to my specialty that is no doubt fueling this controversy is the lack of objective biomarkers available to mental health professionals to diagnose mental disorders such as PTSD. A limitation of much of our diagnosis in psychiatry is that we base our diagnoses on the self-report of our patients and have limited blood tests or scans at our disposal to make an “objective” diagnosis.

On a positive note, we can be reassured that psychiatry is in the midst of a biological revolution, hurtling toward a time when it will be able to diagnose with blood tests and brain scans and offer tailored treatments to patients. Still, this does not release me from my duty to heal the pain of those suffering today, and though I work with a diagnostic system that is imperfect, I know that does not make such a system invalid when used properly.

The diagnostic status of complex PTSD is controversial and not likely to be resolved soon. In the meantime, I will have to get used to living in a world where patients with “textbook diagnoses” appear to be scarce, and, instead, venture into more ambiguous territory. Textbooks aside, I try instead to make sense of the mental dysfunction I am witnessing in the hope that it offers some meaning to the person seeking help from me and, through this validation, perhaps an improved sense of their overall well-being.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of Veterans Affairs or the United States Government.

I am holding my revised manuscript hostage until the editor forwards my complaint to a rogue reviewer.

This blog post started as a reply to an editor who had rendered a revise-and-resubmit decision on my invited article based on a biased review. I realized the dilemma I faced was a common one, but unlike many authors, I am sufficiently advanced in my career to take the risk of responding publicly, rather than simply cursing to myself and making the changes requested by the rogue reviewer. Many readers will resonate with the issues I identify, even if they do not yet feel safe enough to make such a fuss. Readers who are interested in the politics and professional intrigue of promoting screening cancer patients for distress might also like reading my specific responses to the reviewer. I end with an interesting analogy, which is probably the best part of the blog.

Dear Editor,

I appreciate the opportunity to receive reviews and revise and resubmit my manuscript. However, my manuscript is now being held hostage in a safe place. It will be released to you when you assure me that my complaint has been forwarded to a misbehaving reviewer with a request for a response.

Unscrupulous reviewers commonly abuse their anonymity and gatekeeping function by unfairly controlling what appears in peer reviewed journals. They do so to further their own selfish and professional guild interests.

They usually succeed in imposing their views, coercing authors to bring their manuscripts into compliance with their wishes or risk rejection. Effects on the literature include gratuitous citations of the reviewer’s work, authors distorting the reporting of findings in ways that flatter the reviewer’s work, and the suppression of negative findings.  More fundamentally, however, such unscrupulous tactics corrupt what science does best: confronting ideas with evidence and allowing the larger scientific community to decide how and in what form, if any, the idea survives the confrontation.

With their identities masked, unscrupulous reviewers bludgeon authors in unlit alleyways and slip away.  Victimized authors are reluctant to complain because they do not want to threaten future prospects with the journal and so simply give in.

This time, however, I am announcing the crime in the bright daylight of the Internet. I have not yet unmasked the reviewer with 100% certainty (although I can say he has a bit of an accent that is different than the other members of his department), but I can ask you to forward my communication to him and extend an offer to debate me at a symposium or other professional gathering.

My manuscript was invited as one of two sides of a conference debate concerning screening cancer patients for distress. Although held at 8:30 AM on the last day of the conference, the debate was packed, and as one of the organizers of the conference said afterwards, we woke the crowd up. The other speaker and I had substantial disagreements, but we found common ground in engaging each other in good humor. Some people that I talked with afterwards said that I had persuaded them, but more importantly, others said that I had forced them to think.

Discussions of whether we should screen cancer patients for distress have rapidly moved from considering the evidence to agitation for international recommendations and even mandating of screening. Pharma has poured millions of dollars into the development of quality indicators that can be used to monitor whether oncologists ask patients about psychological distress and, if the patients indicate that they are experiencing distress, what action was taken. Mandating screening is of great benefit to Pharma because these quality indicators can be met by oncologists casually offering antidepressants to distressed patients, without formal diagnosis or follow-up.

As I’ve shown in my research, breast cancer patients are already receiving antidepressant prescriptions at an extraordinary rate, often in the absence of ever having had a two-week mood disturbance in their lives. Receiving a prescription for an antidepressant has become an alternative to allowing patients unhurried time with cancer care professionals to discuss why they are distressed and their preferred way of addressing their distress.

This reviewer’s comments are just another effort at suppressing discussion of the lack of evidence that screening cancer patients for distress will improve their outcomes. You’re well aware of other such efforts. Numerous professional advocacy groups have gained privileged access to ostensibly peer-reviewed journals for articles promoting screening, with the argument that it would take too long to accumulate evidence of whether screening really benefits patients. The flabbiness of their arguments and the poor quality of some of these papers attest to their not having been adequately tempered by peer review.

A phony consensus group has been organized and claims to have done a systematic review of the evidence concerning screening. When I contacted the authors, they conceded that there was no formal process for organizing the group or arriving at consensus. Rather, it was a convenience group of persons already known to have strong positive opinions of screening, and there were strict controls on what would go into the paper. I’ve taken a close look at that paper and found serious flaws in the identification, classification, and integration of studies. The paper would be ripe for one of the withering point-by-point deconstructions that my colleagues and I are notorious for. Unfortunately, the paper is published in a journal that does not allow post-publication commentary and so, at least in the journal in which it was published, it will evade critique.

This reviewer abused both the role of gatekeeper for my manuscript and the anonymity of reviewers by demanding that I make changes that were based not on the weight of evidence, but on an insistence that I fall in line with the dictates of party lines and professional politics requiring the promotion of screening cancer patients for distress, despite the utter lack of evidence. The reviewer insists that I not call attention to the lack of evidence that screening benefits patients, and that I instead praise screening for its benefits to professionals.

Below I reproduce some of the reviewer’s comments, with my responses interspersed:

My review is in many ways unusual and for the sake of clarity and fairness requires a substantial preamble.
The manuscript represents a transcript of one speaker’s portion (Coyne) of a 2-sided debate and both contributions are meant to be published side by side…
I will try to provide this review in an unbiased fashion but that will be a mighty challenge because I was never a swing voter, I had a position prior to this debate and this position is leaning ‘pro’-screening, as long as some key foundational conditions are in place.

Okay, this reviewer declared loyalties ahead of time and provided a strong warning of bias. But forewarning is not an excuse for the reviewer having taken on my manuscript in the first place.

I see an urgent need to remove the untenable categorical opinion (i.e., claim that there is no supporting research on screening (see opening line in abstract and page 10)) when the other paper clearly shows the (imperfect) opposite based on his systematic review.

Why the urgency? I make the argument that before we implement routine screening of cancer patients for distress, we need evidence that it will lead to improved patient outcomes. In that sense, screening for distress is no different than any other change in clinical procedures that is potentially harmful or costly and disruptive of existing efforts to meet patient needs.

Evidence would consist of a demonstration in a randomized trial that screening for distress and feedback to clinicians and patients leads to better patient outcomes than simply giving patients opportunities to talk to clinicians without regard to their scores on screening instruments and giving them the same access to services that screened patients have. The other side conceded in the live debate that there was as yet no such evidence, although I do not get the sense that the reviewer attended the debate.

The author needs to tone down what comes across as an almost personal attack of psycho-oncology researchers from the Calgary group, and needs to remove polemic language around the ‘6th Vital Sign’ (“sloganeering..”); 6th Vital Sign is a concept created as a marketing strategy rather than a substantive issue.

I appreciate that the reviewer at least concedes that calling distress the “sixth vital sign” is a marketing strategy, but the phrase has increasingly made it into the titles of peer-reviewed articles and is offered as a rationale for recommending and even mandating screening in the absence of data. And let’s look at the vacuousness of this “marketing strategy.” It capitalizes on the well-established four vital signs: temperature, pulse or heart rate, blood pressure, and respiratory rate. These are all objective measures that do not depend on patient self-report. Pain has been proposed as the fifth vital sign, although it is controversial, in part because it is not objective. The “Calgary group,” as the reviewer refers to them, has championed making distress the sixth vital sign, but distress is neither objective nor vital, and its assessment depends on self-report. Temperature is measured with a thermometer, and distress is measured with a pencil-and-paper or touchscreen thermometer. But there the analogy ceases.

Coyne raises a number of tangential issues that don’t belong here; I think they merely distract:
[a] do we really need new pejorative lingo: “Anglo-American Linguistic Imperialism”  ?? I think not,  because the real point is that some terms translate better than others and ‘distress’ does not translate well.

How are these issues tangential? Proponents of screening call for international guidelines mandating routine screening, but distress is not a word that translates into many languages. I attended a symposium recently in which a French presenter described the bewilderment of cancer patients when they were asked to mark their level of distress on a picture of a thermometer. In many languages, it is not a matter of finding a direct translation of “distress”, because no direct translation exists and there is no unitary corresponding concept. And the linguistic problems are compounded when advocates stretch distress to include every psychological discomfort, spiritual issue, and side effect of cancer. One word cannot serve so many functions in other languages.

So, I think it is a big issue to impose this Anglo-American term on other cultures and to insist patients respond even when there is no coherent concept being assessed in their language. For what purpose, international solidarity? The reviewer defends the “sixth vital sign” from the “Calgary group,” which I don’t think we need, but he disallows me my “Anglo-American linguistic imperialism,” for which I provide an adequate rationale.

Another [partially] straw-man argument is that routine use of screening and follow through are expensive.  There are numerous settings where screening is done via touch-screen computer that autoscore and spit out summary sheets with ‘red-flagged’ results.  This is cheap and I don’t see how anyone could argue otherwise.

Screening is much more than getting patients to tap a touchscreen if the intention is to improve their well-being. Unfortunately, in some American settings that have implemented screening, patients tap a touchscreen thermometer to indicate their level of distress and results are whisked to an electronic medical record where the information is ignored. Is that what the reviewer wants?

Results of screening, particularly with a distress thermometer, are highly ambiguous and need to be followed up with an interview by a professional. I cited research in my manuscript showing that most distressed patients are not seeking a referral, variously because their needs are already addressed elsewhere, they don’t see the cancer care setting as an appropriate place to get services for their needs, they want services that are not available at the cancer center, or they are simply not convinced that they need services.

Many screening instruments have items referring to “being better off dead” or other indications of suicidal ideation or intention to self-harm. Although cancer patients endorsing such items have a small likelihood of attempting suicide, the issue needs to be addressed in an interview with a trained professional. In some cancer care settings, this could cost a patient $200, and most endorsements of such items turn out to be false positives. To not do follow-up assessments is unconscionable, unethical and could be the basis of a malpractice suit in many settings. To adopt a clinical policy of “don’t ask, don’t tell,” is equally unconscionable, unethical, and could conceivably be the basis of a malpractice suit.

Coyne reports (based on three studies) that almost half of the samples identified as depressed/anxious were already in psychological/psychiatric treatment when diagnosed with cancer.  While Coyne’s numbers were derived from good quality studies these numbers don’t jibe with population estimates.

If the reviewer doesn’t like my “good quality studies,” he should propose some others. As for the wild estimates of half or more of all the people in the community walking around with untreated mental illness, I don’t think we can take seriously the results of studies based on lay interviewers administering structured interview schedules to community-residing persons as estimates of unmet needs for psychiatric services.

Coyne posits that screening should improve patient outcomes and offers a detailed section showing that we don’t yet have convincing evidence that it does; this is where the debate between [the opposing side] and Coyne is particularly interesting and valuable.  However, for reasons that are not explicated, Coyne and a number of individuals with whom he shares the ‘anti’ position never allow the argument that systematic screening has two other valuable functions, namely to offer a degree of social justice inherent in equal access to care, and it helps psychological service providers to use clinical population-derived data for clarifying resource needs and tracking system efficiency. I do wonder whether or not these latter two issues are affected by context, namely that they might be more naturally attractive to Psycho-Oncology clinicians in countries with universal health care.

What is this focus, first on the “Calgary group” and now on “Coyne and a number of individuals”? Is the reviewer talking about rival gangs or ideas?

I fail to see how screening can “offer a degree of social justice inherent in equal access to care” if it does not improve patient outcomes. We know from lots of studies of screening for depression that persons with low income and other social disparities have a difficult time completing referrals, even when some of the obvious barriers like costs are removed. Studies find that persons with social disparities may need 25 efforts at contact by telephone, with up to eight completed, in order to get them to the first session of mental health treatment. Many of them will not return.

So, where is the social justice in referring low income and other disadvantaged patients to services they won’t get to, and especially when there is no assurance that the services are effective? The reviewer should visit an American community mental health setting where Medicaid patients are sent because psychiatrists prefer to treat patients who pay out of pocket. Or visit the bewildered primary care physicians who get sent cancer patients from Danish or Dutch cancer centers screening for distress.

Routine screening risks compounding  social disparities in receipt of services. Persons with higher income or other social resources are much more likely to complete the referrals that are offered. Even when services are free, people with social disparities are much less likely to show up than people who have the resources to get there.

I don’t know what the reviewer intends by saying screening should be implemented because it gives providers clinical population-derived data for clarifying resource needs. I see this argument as a transparent effort to exploit patients who are not getting any benefit from screening in order to bolster support for hiring professionals to be available to provide services. Think of it: would we provide mammograms to women simply to document a need for more oncologists, if the women do not get any benefit from the mammograms?

Imagine this scenario: attorneys push for screening the general population for unmet legal needs. With short checklists, pissed-off thermometers, and web-based surveys, they identify people having unresolved disputes with their relatives and neighbors that the attorneys could help them settle by suing each other. They thereby uncover what they consider unmet need for litigation. Now, some people may have misgivings about suing family and supposed friends. The attorneys could then argue that this is just due to a sense of stigma and launch anti-stigma campaigns to break down their resistance to accepting services.

The attorneys’ denial that their primary interest was to generate business for themselves would be more easily dismissed than that of mental health professionals calling for screening for cancer patients for distress. But the conflict of interest is just as great.

 

Troubles in the Branding of Psychotherapies as “Evidence Supported”

Is advertising a psychotherapy as “evidence supported” any less vacuous than “Pepsi’s the one”? A lot of us would hope so, having campaigned for rigorous scientific evaluation of psychotherapies in randomized controlled trials (RCTs), just as is routinely done with drugs and medical devices in Evidence-based Medicine (EBM). We have also insisted on valid procedures for generating, integrating, and evaluating evidence and have exposed efforts that fall short. We have been fully expecting that some therapies would emerge as strongly supported by evidence, while others would be found less so, and some even harmful.

Some of us now despair about the value of this labeling or worry that the process of identifying therapies as evidence supported has been subverted into something very different than we envisioned.  Disappointments and embarrassments in the branding of psychotherapies as evidence supported are mounting. A pair of what could be construed as embarrassments will be discussed in this blog.

Websites such as those of the American Psychological Association’s Division 12 (Clinical Psychology) and SAMHSA’s National Registry of Evidence-based Programs and Practices offer labeling of specific psychotherapies as evidence supported. These websites are careful to indicate that a listing does not constitute an endorsement. For instance, the APA Division 12 website declares:

This website is for informational and educational purposes. It does not represent the official policy of Division 12 or the American Psychological Association, nor does it render individual professional advice or endorse any particular treatment.

Readers can be forgiven for thinking otherwise, particularly when such websites provide links to commercial sites that unabashedly promote the therapies with commercial products such as books, training videos, and workshops. There is lots of money to be made, and the appearance of an endorsement is coveted. Proponents of particular therapies are quick to send studies claiming positive findings to the committees deciding on listings with the intent of getting them acknowledged on these websites.

But now may be the time to begin some overdue reflection on how the label of evidence supported practice gets applied and whether there is something fundamentally wrong with the criteria.

Now you see it, now you don’t: “Strong evidence” for the efficacy of acceptance and commitment therapy for psychosis

On September 3, 2012 the APA Division 12 website announced a rating of “strong evidence” for the efficacy of acceptance and commitment therapy for psychosis. I was quite skeptical. I posted links on Facebook and Twitter to a series of blog posts (1, 2, 3) in which I had previously debunked the study claiming to demonstrate that a few sessions of ACT significantly reduced rehospitalization of psychotic patients.

David Klonsky, a friend on FB who maintains the Division 12 treatment website, quickly contacted me and indicated that he would reevaluate the listing after reading my blog posts and that he had already contacted the section editor to get her evaluation. Within a day, the labeling was changed to “designation under re-review as of 9/3/12” and it is now (10/16/12) “modest research support.”

David Klonsky is a serious, thoughtful guy with an unenviable job: keeping the Division 12 list of evidence supported treatments updated. The designation is no less important than it once was, but it is increasingly difficult to engage burned-out committee members to evaluate the flood of new studies that proponents of particular therapies relentlessly send in. As we will see with this incident, the reports of studies that are considered are not necessarily reliable indicators of the efficacy of particular treatments, even when they come from prestigious, high-impact journals.

The initial designation of ACT as having “strong evidence” for psychosis was mainly based on a single, well promoted study, claims for which made it all the way to Time magazine when it was first published.

Bach, P., & Hayes, S.C. (2002). The use of acceptance and commitment therapy to prevent the rehospitalization of psychotic patients: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 70, 1129-1139.

Of course, the designation of strong evidence requires support from two randomized trials, but the second trial was a modest attempt at replication of this study and was explicitly labeled as a pilot study.

The Bach and Hayes article has been cited 175 times as of 10/21/12 according to ISI Web of Science, mainly for claims that appear in its abstract: patients receiving up to four sessions of an ACT intervention had “a rate of rehospitalization half that of TAU [treatment as usual] participants over a four-month follow-up [italics added].” This would truly be a powerful intervention, if these claims were true. And my check of the literature suggests that these claims are almost universally accepted. I’ve never seen any skepticism expressed in peer-reviewed journals about the extraordinary claim of cutting rehospitalization in half.

Before reading further, you might want to examine the abstract and, even better, read the article for yourself and decide whether you are persuaded. You can even go to my first blog post on this study, where I identify some of the things to look for in evaluating the claims. If these are your intentions, you might want to stop reading here and resume after considering these materials.

Warning! Here comes the spoiler.

  • It is not clear that rehospitalization was originally set as the primary outcome, and so there is a possible issue of a shifting primary outcome, a common tactic in repackaging a null trial as positive. Many biomedical journals require that investigators publish their protocols with a designated primary outcome before they enter the first patient into a trial. That is a strictly enforced requirement for later publication of the results of the trial. But that is not yet usually done for RCTs testing psychotherapies. The article is based on a dissertation. I retrieved a copy and found that its title seemed to suggest that symptoms, not rehospitalization, were the primary outcome: Acceptance and Commitment Therapy in the Treatment of Symptoms of Psychosis.
  • Although 40 patients were assigned to each group, analyses only involved 35 per group. The investigators simply dropped from the analyses patients with negative outcomes that are arguably at least equivalent to rehospitalization in their seriousness: committing suicide or going to jail. Think about it: what should we make of a therapy that prevented rehospitalization but led to jailing and suicides of mental patients? This is not only a departure from intention-to-treat analyses, but the loss of patients is nonrandom and potentially quite relevant to the evaluation of the trial. Exclusion of these patients has a substantial impact on the interpretation of results: the 5 patients missing from the ACT group represented 71% of the reported rehospitalizations in that group, and the 5 patients missing from the TAU group represented 36% of the reported rehospitalizations in that group (see the sketch after this list for the arithmetic).
  • Rehospitalization is not a typical primary outcome for a psychotherapy study. But if we suspend judgment for a moment as to whether it was the primary outcome for this study, ignore the lack of intent-to-treat analyses, and accept 35 patients per group, there is still not a simple, significant difference between groups for rehospitalization. The claim of “half” is based on voodoo statistics.
  • The trial did assess the frequency of psychotic symptoms, an outcome closer to what one would rely on to compare this trial with the results of other interventions. Yet oddly, patients receiving the ACT intervention actually reported more symptoms, twice the frequency reported by patients in TAU. The study also assessed how distressing hallucinations or delusions were to patients, what would be considered a patient-oriented outcome, but there were no differences on this variable. One would think that these outcomes would be very important to clinical and policy decision-making, and these results are not encouraging.
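
To make the numbers concrete, here is a minimal sketch of the arithmetic. The event counts are reconstructed from the percentages quoted above (5 ≈ 71% of 7; 5 ≈ 36% of 14), so treat them as my inference, not figures taken from the paper:

```python
# Back-of-the-envelope arithmetic, not the study's actual data: event counts
# are reconstructed from the percentages quoted above and are only my inference.

def rate(events, n):
    """Proportion of patients with an event."""
    return events / n

# Per-protocol analysis as reported: 35 patients analyzed per arm.
act_reported, tau_reported = 7, 14  # implied by 5/7 ~ 71% and 5/14 ~ 36%
print(f"Per-protocol: ACT {rate(act_reported, 35):.0%} vs TAU {rate(tau_reported, 35):.0%}")
# -> 20% vs 40%: the source of the "half" claim.

# Intention-to-treat-style analysis: restore the 5 dropped patients per arm
# (back to 40 per arm) and count their outcomes (suicide, jail) as events at
# least as serious as rehospitalization.
act_itt, tau_itt = act_reported + 5, tau_reported + 5
print(f"ITT-style:    ACT {rate(act_itt, 40):.0%} vs TAU {rate(tau_itt, 40):.0%}")
# -> 30% vs 48%: the headline gap narrows once no one is dropped.
```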

This study, which has been cited 64 times according to ISI Web of Science, rounded out the pair needed for a designation of strong support:

Gaudiano, B.A., & Herbert, J.D. (2006). Acute treatment of inpatients with psychotic symptoms using acceptance and commitment therapy: Pilot results. Behaviour Research and Therapy, 44, 415-437.

Appropriately framed as a pilot study, this study started with 40 patients and only delivered three sessions of ACT. The comparison condition was enhanced treatment as usual consisting of psychopharmacology, case management, and psychotherapy, as well as milieu therapy. Follow-up data were available for all but 2 patients. But this study is hardly the basis for rounding out a judgment of ACT as efficacious for psychosis.

  • There were assessments with multiple conventional psychotic symptom and functioning measures, as well as ACT-specific measures. The only conventional measure to achieve significance was distress related to hallucinations, and there were no differences in ACT-specific measures. There were no significant differences in rehospitalization.
  • The abstract puts a positive spin on these findings: “At discharge from the hospital, results suggest short-term advantages in affective symptoms, overall improvement, social impairment, and distress associated with hallucinations. In addition, more participants in the ACT condition reached clinically significant symptom improvement at discharge. Although four-month rehospitalization rates were lower in the ACT group, these differences did not reach statistical significance.”

The provisional designation of ACT as having strong evidence of efficacy for psychosis could have had important consequences. Clinicians and policymakers could decide that merely providing three sessions of ACT is a sufficient and empirically validated approach to keep chronic mental patients from returning to the hospital and maybe even make discharge decisions based on whether patients had received ACT. But the evidence just isn’t there that ACT prevents rehospitalization, and when the claim is evaluated against what is known about the efficacy of psychotherapy for psychotics, it appears to be an unreasonable claim bordering on the absurd.

The redesignation of ACT as having modest support was based on additional consideration of a follow-up study of the Bach and Hayes trial, plus an additional feasibility study that involved 27 patients randomized either to treatment as usual or to 10 sessions of ACT plus treatment as usual. Its stated goal was to investigate the feasibility of using ACT to facilitate emotional recovery following psychosis, but as a feasibility study, it included a full range of outcomes with the intention of deciding which would be important for assessing the impact of ACT in this population. The scales included the two subscales of the Hospital Anxiety and Depression Scale (HADS), the Positive and Negative Syndrome Scale, an ACT-specific scale, and a measure of the therapeutic alliance. Three of the patients assigned to treatment as usual alone dropped out, and so intent-to-treat analyses were not conducted. With such a small sample, it is not surprising that there were no differences on most measures. The investigators noted that the patients receiving ACT had fewer crisis contacts over the duration of the trial, but it is not clear whether this is simply due to the treatment-as-usual group not having regular treatment and therefore having to resort to crisis contacts.

The abstract of the study states “ACT appears to offer promise in reducing negative symptoms, depression and crisis contacts in psychosis”, which is probably a bit premature. Note also that across these three trials, there is a shift in the outcome to which the investigators point as evidence for the efficacy of ACT for psychosis. The assumption seems to be that any positive result can be claimed to represent a replication, even if different variables were cited for this purpose in the other studies.

Overall, this trial would also be rated as having a high risk of bias because of the lack of intent-to-treat analyses and the failure to specify a primary outcome among the battery that was administered. More importantly, it would simply be excluded from meta-analyses with which I have been associated because it has too few patients. A high risk of bias plus too few patients discourages any confidence in these results.

Is treating PTSD with acupoint stimulation supported by evidence?

Whether or not ACT is more efficacious than other therapies, as its proponents sometimes claim, or whether it is efficacious for psychosis, is debatable, but probably no one would consider ACT anything other than a bona fide therapy. The same does not hold for Emotional Freedom Techniques (EFT) and its key component, acupoint stimulation. I’m sure there was much consternation at APA and Division 12 when stories circulated on the Internet that APA had declared EFT to be evidence supported.

Wikipedia offers the following definition of EFT:

Emotional Freedom Techniques (EFT) is a form of counseling intervention that draws on various theories of alternative medicine including acupuncture, neuro-linguistic programming, energy medicine, and Thought Field Therapy. During an EFT session, the client will focus on a specific issue while tapping on so-called “end points of the body’s energy meridians.”

Writing in The Skeptical Inquirer, Brandon Gaudiano and James Herbert argued that there is no plausible mechanism to explain how the specifics of EFT could add to its effectiveness, and its claims have been described as unfalsifiable and therefore pseudoscientific. EFT is widely dismissed by skeptics, along with its predecessor, Thought Field Therapy, and has been described in the mainstream press as “probably nonsense.” Evidence has not been found for the existence of acupuncture points, meridians, or other concepts involved in traditional Chinese medicine.

The scathing Gaudiano and Herbert critique is worth a read and calls attention to claims of EFT by proxy: patients improve when therapists tap themselves rather than the patients! My imagination runs wild: how about televised sessions in which therapists tap themselves and liberate thousands of patients around the world from their PTSD?

According to David Feinstein, a proponent of EFT, when Corsini (2001) included a chapter on Thought Field Therapy in an anthology of innovative psychotherapies, he acknowledged that it was “either one of the greatest advances in psychotherapy or it is a hoax.”

Claims have been made for acupoint stimulation that even proponents of EFT consider “provocative,” “extraordinary,” and “too good to be true.” An article published in the Journal of Clinical Psychology (not an APA journal) reported that 105 people were treated in Kosovo for severe emotional reactions to past torture, rape, and witnessing loved ones being burned or raped. Strong improvement was observed in 103 of these patients, despite an average of only three sessions. For comparison, exposure therapy involves at least 15 sessions in the literature and claims nowhere near this efficacy. However, even more extraordinary results were claimed for the combined sample of 337 patients treated in visits to Kosovo, Rwanda, the Congo, and South Africa. The 337 individuals expressed 1016 traumatic memories, of which 1013 were successfully resolved, resulting in substantial improvement in 334 patients. Unfortunately, the details of this study remain unpublished, but claims of these results appear in a forthcoming article in the APA journal Review of General Psychology.

Reports circulating on the Internet that APA had declared EFT to be an evidence supported approach stemmed from a press release by the EFT Universe that cited a statement from the same Review of General Psychology article:

A literature search identified 50 peer-reviewed papers that report or investigate clinical outcomes following the tapping of acupuncture points to address psychological issues. The 17 randomized controlled trials in this sample were critically evaluated for design quality, leading to the conclusion that they consistently demonstrated strong effect sizes and other positive statistical results that far exceed chance after relatively few treatment sessions. Criteria for evidence-based treatments proposed by Division 12 of the American Psychological Association were also applied and found to be met for a number of conditions, including PTSD (Feinstein, 2012).

Feinstein had been developing his claims about energy therapies such as EFT meeting the Division 12 criteria for a while. In a 2008 article in the APA journal Psychotherapy: Theory, Research, Practice, Training, he declared:

although the evidence is still preliminary, energy psychology has reached the minimum threshold for being designated as an evidence-based treatment, with one form having met the APA Division 12 criteria as a “probably efficacious” treatment for specific phobias; another for maintaining weight loss.

In this 2008 article, Feinstein also cited a review in the online book review journal of APA in which Ilene Serlin, Past President of APA’s Division of Humanistic Psychology, praised Feinstein’s book for its “valuable expansion of the traditional biopsychosocial model of psychology to include the dimension of energy” and energy psychology as representing “a new discipline that has been receiving attention due to its speed and effectiveness with difficult cases.”

The reports that EFT had been designated as an evidence supported treatment made the rounds for a few months, sometimes with the clarification that EFT met the criteria but had not yet been labeled as evidence supported by Division 12. In some communities, stories about EFT or, as it was called, tapping therapy made the local TV news. KABC News Los Angeles titled a story “‘Tapping’ therapy can relieve anxiety, stress, researchers say” and got an APA spokesperson to provide a muted comment:

 “Has this tapping therapy been proven effective? We don’t think so at this point,” said Rhea Farberman, Executive Director for Public and Member Communications at the APA.

The comment went on to say that APA viewed stress and anxiety as serious but treatable issues for some persons, and that it recommended cognitive behavior therapy, but not tapping therapy.

What do these incidents say about branding of psychotherapies as evidence supported?

I will explore this issue in greater depth in a future blog post, but for now we are left with some questions.

The first incident involved the designation of a psychotherapy as having strong evidence of efficacy for psychosis, a designation that was quickly changed first to under review and then to modest support. The precipitant for this downgrading seems to be blog posts that revealed the abstract of the key study to be misleading. Designation of a therapy as having strong evidence for its efficacy requires two positive randomized controlled trials. The second trial was described as a pilot study explicitly aimed at replicating the first one. Like the first one, its abstract declared positive findings. However, this study failed to replicate the first study’s claimed reduction in hospitalization, and a cursory examination of the results section revealed that this study, like the study that it attempted to replicate, was basically a null trial.

  • Do the current criteria employed by Division 12 (only two positive trials, with no attention to size or quality) set too low a bar for a therapy receiving the seemingly important branding of having strong evidence?
  • The revised status of ACT for psychosis is that it has modest support. But how do two null trials published with confirmatory bias constitute modest support?
  • Are there pitfalls in uncritically accepting claims in the abstracts of articles appearing in prestigious journals like JCCP?
  • More generally, to what extent do the shortcomings of articles appearing in prestigious journals like JCCP warrant skepticism, not only from reviewers for Division 12 but from consumers as well?
  • Should we expect a prestigious journal like JCCP to encourage and make a place for post-publication peer review of the articles that have appeared there?
  • Should revised criteria for evidence supported therapies not just count whether there are two positive trials or only one, but incorporate formal ratings of trials for overall quality and risk of bias? (A toy sketch of what such a rule might change follows this list.)
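
To be clear about what is and is not my speculation: the sketch below is my own illustration of the difference between a bare trial count and a quality-aware rule. The thresholds (at least 50 randomized patients, low risk of bias) are arbitrary choices of mine, not anything Division 12 has proposed:

```python
# A toy sketch, not any official algorithm. It contrasts the bare
# two-positive-trials count with a rule that also screens on sample size and
# risk of bias; the thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Trial:
    positive: bool           # did the primary analysis claim a positive result?
    n: int                   # number of patients randomized
    high_risk_of_bias: bool  # e.g., no intent-to-treat analysis, shifting outcomes

def current_rule(trials):
    """Roughly the criterion criticized above: two positive RCTs, nothing more."""
    return sum(t.positive for t in trials) >= 2

def quality_aware_rule(trials, min_n=50):
    """Count only adequately sized, low-risk-of-bias positive trials."""
    credible = [t for t in trials
                if t.positive and t.n >= min_n and not t.high_risk_of_bias]
    return len(credible) >= 2

# The two ACT-for-psychosis trials as characterized in this post (my reading):
trials = [Trial(positive=True, n=80, high_risk_of_bias=True),  # Bach & Hayes (2002)
          Trial(positive=True, n=40, high_risk_of_bias=True)]  # Gaudiano & Herbert (2006)

print(current_rule(trials))        # True: "strong evidence" by the bare count
print(quality_aware_rule(trials))  # False: neither trial survives the screen
```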

The second incident involves rumors of APA having designated as evidence supported a bizarre therapy with extravagant claims of efficacy. The rumor was based on a forthcoming review in an APA journal indicating that EFT had a sufficient number of positive randomized trials to meet APA Division 12 criteria for being evidence supported. It was left to a media person from APA to clarify that APA did not endorse this therapy, but it was unclear on what basis this declaration was made.

  • If ACT for psychosis has modest support, where does EFT stand when evaluated by the same criteria?
  • Can sources other than APA Division 12 apply the criteria to psychotherapies and declare the therapies as warranting evidence-based status? If not, why not?
  • Do consumers, as well as proponents of innovative and even strange therapies, deserve evaluation with formal criteria by APA Division 12, with therapies designated not only as having “strong evidence” when they meet these criteria, but alternatively as having failed to accumulate evidence of efficacy, or even as having demonstrated possible harm?
  • If APA Division 12 takes on the task of publicizing the evidence based status of psychotherapies, does it thereby assume a responsibility to alert policy makers and consumers of therapies that fail to meet these criteria?
  • If application of the existing Division 12 criteria warrants EFT as having strong evidence of efficacy, what does that say about the adequacy of these criteria?

To be continued…

What I learned as an Academic Editor for PLOS ONE

Open access week is just around the corner, and I thought I’d take the opportunity to share my experience as an Academic Editor for PLOS ONE.

I was invited to join the team following a conversation at Science Online 2010 with (I think) Steve Koch, who recommended me to PLOS ONE, and before I knew it I was receiving lots of emails asking me to handle a manuscript.

The nice thing about PLOS ONE is that I get to choose which articles I handle, and I am very picky. I think that my role is not just to ‘handle’ the manuscript but also to make sure that the review process is fair. To do this, I need to understand the manuscript myself. I read every article that I take on and write a ‘mini-review’ of it for myself. When I get the external peer reviews, I go through every comment they make against the submitted version, compare the different reviews, and revisit my first impression of the manuscript. I have learned a lot from the reviewers: they see things I have missed, and they miss things I have detected. It has been a great insight into the peer review process. And I love not having to pull my crystal ball out to determine whether the article is ‘important’ but just having to decide whether it is scientifically solid.


If the science is fundamentally good, the article is sent back to the authors for either minor or major changes, and then it falls back into my inbox. I have found it really interesting to see how authors deal with the reviewers’ comments. The re-submission is also a lot of work. I need to compare the original and new versions, make sure that the authors have done what they say they have done, and make sure that all the reviewers’ comments have been addressed. And then I decide whether or not to send it back for re-review. One thing that I found interesting in this second phase is when authors respond to the reviewers’ comments in the letter but do not incorporate the response into the article. It is almost as if the responses are for my and the reviewers’ benefit only. So back it goes, with a request to incorporate that rationale into the actual manuscript. Oh well. That means another round. Luckily this does not happen that often.

And then it is time to ‘accept’ the paper – and so back to the manuscript where I go through commas, colons, paragraphs, spelling mistakes, in text citations, reference lists, formatting, image quality, figure legends, etc. This I normally send to the authors together with their acceptance letter but don’t ask for the article to be re-submitted.

The main challenge I find with the process is time management.

When I get the request to handle an article, I accept or not based on how much time I have to process the article. That is all good. Except that I cannot predict when the reviews, resubmissions, etc. will eventually happen, and many times these articles ‘ready for decision’ show up in my inbox at a time when I cannot give them the full attention they deserve. Let alone being able to predict when the revised version will be submitted! I find it impossible to plan ahead for this, especially since I have very little control over a lot of my time commitments (like the days I need to lecture, submit exam questions, mark exams). So if an article arrives while I am somewhere at a conference with limited internet connection… How can I plan for this?

Finding reviewers is another challenge. Sometimes they are hard to find. Nothing is as discouraging as finding the “reviewer declined…” emails in my inbox, indicating that it is back to the system to redo something that I thought was done and dusted. The other day someone asked: what is a reasonable amount of reviewing to do in a year? My answer was that one should probably, at minimum, return the number of reviews provided for one’s own articles. Say I publish 3 articles a year, each with 3 reviews; then I should not start complaining about reviewing until I have reviewed at least 9 articles. One can, of course, factor in rejection rate, number of authors, etc., but a tit-for-tat trade-off seems like a fair expectation. So then why is it so hard to find reviewers? Come on people – if it was your paper getting delayed you’d be sending letters to the journal asking how come the article shows as still sitting with the Editor!
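
The quota arithmetic is easy to sketch. The refinement factors below (co-authors, acceptance rate) are my own illustrative assumptions, not any journal’s formula:

```python
# A toy calculation of the tit-for-tat reviewing quota described above.

def review_quota(papers_per_year, reviews_per_paper=3):
    """Minimum reviews to return before complaining about reviewing load."""
    return papers_per_year * reviews_per_paper

print(review_quota(3))  # 3 papers x 3 reviews each -> owe at least 9 reviews

# A refinement (illustrative assumptions): split the debt among co-authors,
# and inflate it for rejections, since rejected submissions consume reviews too.
def refined_quota(papers_per_year, reviews_per_paper=3, coauthors=1, acceptance_rate=1.0):
    return papers_per_year * reviews_per_paper / (coauthors * acceptance_rate)

print(refined_quota(3, coauthors=3, acceptance_rate=0.5))  # -> 6.0 reviews
```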

And that is the other thing I learned: editors don’t just sit on papers because they are lazy. There are many reasons why handling an article may take more or less time. In some cases, after receiving the reviews, I feel that something has been raised that needs a specialist to look at a specific aspect of the paper. Sometimes I need a second opinion because there is too little agreement between reviewers. Sometimes the reviewers don’t submit in the agreed time. There are many reasons why an article can be delayed, and so what I learned is to be patient with the editors when I send my own papers for publication.

But despite the headaches, the stress and the struggle of being an Academic Editor, it is also an extremely rewarding experience. I keep learning more about science because I see a range of articles before they take their final shape, because I get to look into the discussion of what is good and what is weak. And I get to be part of what makes science great: trying to put out the best we can produce.

It is unfortunate that this process is locked up. I think that there is a lot to learn from it. I think that students and early career scientists would really benefit from seeing the process in articles that are not their own: how variable the quality of the reviews is, and what dealing well with reviewers’ comments and suggestions looks like. And the public, too, would benefit from seeing what this peer review is all about: what the strengths and weaknesses of the process are, and what having been peer reviewed really means.

So, back to Open Access week. Access to the final product is really good. Access to the process of peer review can make understanding the literature even better, because it exposes a part of the process of science that is also worth sharing.

 

A First Post and the First Image of Brain Tissue under the Microscope

It is not easy to write a first post. So, as a first post I thought I’d share another first.

As far as I know, the image below is the first published image of ‘brain’ tissue under the microscope. (I hope Mo Costandi corrects me if I am wrong.) It is really a picture of the nerve that connects the eye to the brain, but that is still nervous tissue. The image was published in 1675 by Mr Antonie van Leeuwenhoek. [1]

[Image: Wellcome Library, London, under Creative Commons BY-NC 2.0 UK: England & Wales]

What I find fascinating about this image is how it reminds me of how it all started. Most of what I do studying neuroscience involves using microscopes, and this image reminds me how far we’ve come. I can’t help but wonder what went through Leeuwenhoek’s mind when he saw this image; after all, he did not know what we know now. In fact, even the understanding of what light was differed then from what we know now, so interpreting what that image of the optic nerve meant for visual neuroscience must have been quite an interesting challenge.

I can’t help but chuckle when I read this passage in his manuscript:

“I here thought to myself whether every one of these hollownesses might not have been a filament in the Nerve and besides, that twas needless, there should be a cavity in the Optic Nerve through which the Animal Spirits, representing the species or images in the Eye, might pass into the brain.”

I chuckle because I am amused by his reference to ‘Animal Spirits’. But I can’t help trying to imagine what that first observation might have looked like to someone seeing it for the first time, without the experience that even the average biology student has today. I often wish I could get one of those old microscopes and repeat his experiments, to see what nervous tissue might have looked like at that time and to understand why people thought of the brain the way they did. Microscopy was such a new thing that even a century after that image was produced, a word of caution was expressed in Home’s Croonian lecture (1799) [2]:

“It is scarcely necessary to mention that parts of an animal body are not fitted by being examined by glasses of a great magnifying power, and, whenever they are shewn one hundred times larger than their natural size, no dependence can be placed upon their appearance.”

It would take some time for microscopes, and for the methods used to process tissues, to get better, so that we could make more sense of what we were looking at under the lens. It is therefore not surprising that it was not until the end of the 19th century that the ‘cellular theory’ contemporary to Leeuwenhoek’s observation was accepted as true for the brain as well.

After all, how much detail we know about anything in biology is only as good as the precision of the instruments we use to study it. I can’t help but wonder what Leeuwenhoek would think of the microscopic images of nerve tissue that we produce today. We have come a long way, and gained a lot of precision. That, after all, is the way that science moves on.

A few years back I came across this snippet by George Brecht at the Walker Art Center in Minneapolis, Minnesota:

“Exercise
Determine the limits of an object or event
Determine the limits more precisely
Repeat,
Until further precision is impossible”

I couldn’t help thinking how well this artist described the process of science. We keep hitting the limits of the precision with which we can measure things, and have to wait until a new tool is developed to measure the same thing a little bit better. Sometimes we confirm what we thought previously; on occasion we find something unexpected and are forced to change our minds about what we hold true. It is the hope of hitting that unexpected that gets me out of bed every morning to go to the lab.

[1] Leewenhoeck, A. Microscopical Observations of Mr. Leewenhoeck, Concerning the Optic Nerve, Communicated to the Publisher in Dutch, and by Him Made English. Phil. Trans. 1675, 10:378-380; doi:10.1098/rstl.1675.0032 (pdf)

[2] Home, E. The Croonian Lecture: Experiments and Observations upon the Structure of Nerves. Phil. Trans. R. Soc. Lond. 1799, 89:1-12; doi:10.1098/rstl.1799.0002 (pdf)

Preventing Veteran Suicide

The alarm clock, flashing 03:00 in green neon, signals to me that I should be fast asleep. I close my eyes and take deep breaths, trying to lull myself back into a peaceful slumber; the day ahead holds a daunting schedule with no room for yawns or fatigue. Then it appears, popping right into the forefront of my mind, a solitary question that has needled its way through my dreams, forcing me to deal with its implications: “Did you miss something with Dave?” Instantly I recognize this thought as residue from yesterday at work, something that had been neglected amongst the whirlwind of patient visits, typing of progress notes, writing of prescriptions, and answering of emails, voice mails, and texts.

In the still darkness of my bedroom, I strain to recall the details of his clinic visit.  My patient Dave, a veteran, had been home from Iraq for two years, but the passage of time had not healed his psychological wounds. Through stifled tones he told me about his tormented nights and stunned days—horrifying memories, so difficult to erase, now dangerously directing his life. Tears had welled in his honey brown eyes that were dulled by the weight of war. Silence hung between us and I shifted uneasily in my chair, hesitating to reach for the Kleenex. Then, I asked the question I was duty bound to ask, “Have you had thoughts to kill yourself?” A pause, then, “No, Doc, No.”

I run through a mental checklist to make sure I had done all that his clinical condition required. I had: increased his sertraline dosage (a medication to treat symptoms of posttraumatic stress); prescribed a short course of sleep medication to help ease the agony of his insomnia; recommended he see his therapist weekly instead of every other week; called his therapist to share my concerns; and asked him to return to my clinic in two weeks (the time it would take for the extra sertraline to kick in) instead of the usual month. As I walked him to the door, I had made sure he had all the emergency numbers to call if things got worse.

It seemed, on paper at least, that I had done all I was supposed to do. So why am I tossing and turning at this ungodly hour?

My heart sinks as I realize what had been missing from my visit with Dave: the “click”. The click is a feeling that is beyond rational comprehension, more intuition than fact; it can come and go in the blink of an eye and is hard to quantify or measure. The presence of the click signals to me that my patient is telling me everything I need to know, that we are on the same page, and that we share the same hope for their recovery. The click signals mutual trust and respect and a healthy alliance in our relationship. Treating thousands of patients over a decade of clinical practice has taught me that the absence of the click invariably means trouble is brewing.

Now the green neon flashes 04:00. I sigh and get out of bed, knowing full well that sleep will evade me. I wait for dawn and a reasonable hour when I can call Dave to make sure he is okay.

Available Research on Veterans and Suicide

Between 30,000 and 32,000 Americans die from suicide each year, and about 20% of them, roughly 6,000 to 6,400 people, are veterans. It is important to make the distinction between veterans (i.e., persons who have served in the military, naval, or air service and are now discharged) and active duty personnel (i.e., persons who are still serving in the military). The distinction is important because the statistics are different for active duty personnel: among Army and Marine active duty personnel, suicide rates nearly doubled between 2005 and 2009. A recent and thoughtful analysis of this tragic situation can be found here.

I am a psychiatrist working for Veterans Affairs (VA), so the subject of veteran suicide is never far from my mind. There are about five deaths from suicide per day among veterans who receive care in VA hospitals like the one where I work. More than 60% of these suicides occur among veterans who use VA services and are known to have a mental health condition. The statistics are sobering, and as troops return home from the conflicts in Iraq and Afghanistan, the topic of veteran suicide will, no doubt, continue to vex and distress all concerned parties.

In high-income countries like the U.S., suicide usually occurs in the context of mental illness. Hence, the key to suicide prevention is providing high-quality mental healthcare for the disorder that ails the person seeking help. Whilst Posttraumatic Stress Disorder (PTSD) and Traumatic Brain Injury (the signature injury of the Iraq war) are always at the forefront of the minds of professionals treating veterans, our care would be reductionistic if we did not thoroughly evaluate for and treat other common mental health disorders such as clinical depression, alcohol or drug addiction, bipolar disorder, and schizophrenia. All of these disorders are associated with an elevated suicide risk, so getting the diagnosis correct is crucial. The diagnosis dictates what the treatment should be, and the correct treatment increases the odds of recovery from the specific disorder. This is still our best strategy for preventing suicide.

In reality, however, there are many obstacles to this strategy. Effective mental health treatment often requires steady delivery over several weeks (i.e., a certain dose of psychological treatment and psychotropic medication is often needed for a sustained recovery). Yet studies have shown that, amongst recent returnees from the conflicts in Iraq and Afghanistan who have PTSD, for instance, such treatment courses are less likely to be completed.

Specifically with regard to psychotropic medication: guidelines addressing the treatment of veterans with PTSD strongly recommend a therapeutic trial (i.e., taking the medication for long enough, and at a high enough dosage, to see its full effect on symptoms) of medications called selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors (SNRIs). Yet a recent study we published shows that, when compared to veterans from previous eras, recent returnees from the conflicts in Iraq and Afghanistan were less likely to complete such a therapeutic trial. Moreover, if they were clinically depressed in addition to having PTSD, their odds of getting a therapeutic trial were diminished even further. This hints at as-yet-unexplained obstacles to engaging veterans from the recent conflicts in Iraq and Afghanistan in mental health treatment, a disconcerting thought for clinicians who work with this population.

Another obstacle is that, despite the best efforts of individual clinicians, conventional mental health services still fail to reach many individuals who are suicidal; that is, many of those who are suicidal are not engaged in the healthcare system to begin with. The health disparities literature is replete with evidence demonstrating how, in our society, those most in need of psychiatric and medical care are often the least likely to get it. Social determinants such as where one lives, economic security, housing quality, and employment opportunities all play a key role in the development of such inequities. These limitations have been compounded by a historic lack of coordinated suicide prevention strategies among well-meaning organizations and agencies. So this raises the question: what is the VA system doing to prevent veteran suicide?

In 2004, the VA started to focus on deficits in its mental healthcare services and developed a VA Comprehensive Mental Health Strategic Plan to address identified problems; focused suicide prevention efforts began in 2007. One example of its outreach efforts is a 24/7 suicide prevention hotline: veterans, or those concerned about a veteran, can call 1-800-273-8255 and press 1 to be connected to a VA mental health professional trained to deal with the immediate crisis. A written chat service at http://veteranscrisisline.net/Default.aspx and a texting service at 838255 are also available; both connect those in crisis directly to mental health professionals.

In addition, screening and assessment processes have been set up throughout the system to assist in the identification of patients at risk for suicide. The VA electronic medical record has a suicide risk flagging system, developed to assure continuity of care and enhance awareness among caregivers. Each VA medical center has a suicide prevention coordinator, whose job it is to ensure that the at-risk veteran is connected to the right services and receives adequate follow-up. Those identified as high risk receive an enhanced level of care, including missed-appointment follow-ups, safety planning, and weekly follow-up visits.

The relative recency of these efforts means their actual effectiveness in reducing suicide rates remains to be fully evaluated. Speaking as a physician who has worked in a variety of hospital systems, from public to private, I find it hard not to be impressed by the comprehensive nature of the VA’s current suicide prevention efforts. Furthermore, dedicated and well-trained professionals continue to come up with thoughtful and innovative ways to tackle these tough problems head on.

Yet there has been only a slight decrease in suicide among VA-treated veterans in recent years, and this raises the question: why does the goal of reducing suicide rates amongst veterans (and, indeed, the general public) remain so elusive?

Some of the reasons lie in the state of the science of suicide research. We lack knowledge of the fundamental biological markers that could help us predict who will commit suicide. Risk of suicide is shared by biological, but not adoptive, relatives, prompting the conclusion that the familiality of suicide is due to genes rather than family environment or culture. Yet, despite gargantuan efforts on the part of psychiatric researchers, there is currently no suicide gene or genetic test that would be useful in predicting the risk of suicidal behavior in any particular individual.

From an epidemiological standpoint, we know several clinical factors that are associated with increased odds that someone will commit suicide. These factors include: having a mental illness; endorsing suicidal ideation; a prior history of a suicide attempt; a recent interpersonal loss; recent discharge from a psychiatric hospital; and a family history of suicide. Yet, with an overall rate of 11.3 American suicide deaths per 100,000 people, it is very hard to predict who will actually commit suicide: many people have these clinical risk factors (i.e., the risks are common), but relatively few of them will actually commit suicide. In short, the predictive validity of these clinical risk factors is poor. Add to this the reality that human beings are infinitely complex (and not just a sum of their clinical risk factors), and identifying under what exact circumstances, and at what point in time, a high-risk individual may actually attempt suicide becomes seemingly impossible.
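To make concrete why prediction fails at such a low base rate, here is a minimal back-of-the-envelope sketch. The sensitivity and specificity figures are hypothetical assumptions chosen only to illustrate the arithmetic; the 11.3-per-100,000 base rate comes from the text above.

```python
# Minimal sketch of the base-rate problem in suicide risk prediction.
# Sensitivity and specificity below are hypothetical illustration values,
# NOT figures from any real screening instrument.

base_rate = 11.3 / 100_000   # annual suicide deaths per person (from the text)
sensitivity = 0.90           # assumed: 90% of future cases would be flagged
specificity = 0.90           # assumed: 90% of non-cases correctly cleared

population = 1_000_000
cases = population * base_rate                    # ~113 people
non_cases = population - cases

true_positives = sensitivity * cases              # ~102 correctly flagged
false_positives = (1 - specificity) * non_cases   # ~100,000 wrongly flagged

# Positive predictive value: of everyone flagged, how many are true cases?
ppv = true_positives / (true_positives + false_positives)
print(f"PPV: {ppv:.2%}")  # ~0.10%, i.e. ~999 of every 1,000 flags are false alarms
```

Even under these generous assumptions, nearly every person the hypothetical screen flags would never go on to die by suicide, which is one way of seeing why common clinical risk factors have such poor predictive validity.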

To complicate matters further, a clinician cannot simply rely on a patient’s denial of suicidal ideation when assessing suicide risk. The reality is that a suicidal patient may not be inclined to admit their suicidality to a mental health professional, for fear that they will be forced into treatment or that their suicidal plans will be challenged. This complicates the dynamic between caregiver and patient and, regrettably, means that even the most well-meaning, vigilant clinicians may not be able to identify a suicidal patient.

A Time-Honored Tradition

Faced with the complexities and uncertainties surrounding veteran suicide, I find myself relying heavily on a time-honored tradition of medicine: the power of a strong therapeutic alliance with my patient, the importance of creating an environment where they feel they can say whatever is on their mind, and making it clear that, if things are not going well, I want to know about it. Creating such an environment is no easy feat, as 21st-century medical practice offers no end of distractions to the practicing physician: back-to-back clinic schedules; an electronic medical record that dishes up a steady stream of alerts, notifications, and orders requiring your continued visual attention; instant messages, emails, texts, and phone calls that ask you to make clinical decisions and judgments in real time; and, of course, mounds of paperwork. In such an environment, I find listening, really listening, to my patient has become one of the most powerful things I can offer. Listening creates silent spaces that can be filled with a patient’s expressions of their worst fears and deepest secrets, untainted by fabrication or distortion.

The Click

In my office, I watch the clock ticking on the wall. The second it hits 8 AM, I pick up the phone and call Dave’s cell phone number. The phone rings and rings; my heart sinks as I fear it is heading for voicemail, then skips a beat as it is picked up.

“Hello?”

“Hi Dave, it’s Dr. Jain from the Palo Alto VA.”

“Oh hi, Doc.”

“I know yesterday was a bad day for you, so I thought I would just check in.”

Silence.

“I am thinking you should come back to clinic in a week instead of two; how does that sound?”

Silence.

“Yes, I think that would be a good idea.”

There it is, in his voice: what had been missing from clinic, an inflection in his tone, a subtle change but somehow reassuring. The click. He is listening to me and, perhaps more importantly, he knows I am listening to him: we are a team working toward a mutually agreed-upon goal. I know this does not guarantee that there will not be troubled times ahead, but, for now, this is enough.


The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of Veterans Affairs or the United States Government.

In an effort to protect individual patient privacy, the patient stories depicted here are composites of various real encounters, brought together to illustrate the situation.