The Antidepressant Wars, a Sequel: How the Media Distort Findings and Do Harm to Patients

Clinicians and patients are eager to know whether antidepressants are effective, and journalists recognize a hot topic when they see one. Science and health journalists play a crucial role in communicating scientific findings to professional and lay audiences, but their goals are not limited to conveying best-evidence assessments of often complex and contradictory findings; they also include attracting readership to themselves and their media outlets.

As in other journalism, there is often reason to suspect that exaggeration and distortion get introduced into science reporting in order to generate buzz. The goal of preserving necessary complexity is often no match for opportunities for publicity-grabbing hype. Recently, the producer of a TV news story about antidepressants told a well-known psychiatrist-researcher colleague of mine, “You really make sense, but what you say would so complicate what I have to say,” and left him out of the story.

Scientists frequently become frustrated when they see their messages twisted in media reports, but we need to realize that journalists don’t work for us; they work for the media and themselves. Some scientists know that and play to the media, spinning the interpretation of their findings in the scientific literature so that they are media-ready from the get-go. A recent study demonstrated that apparent media distortion of science often starts with distorted scientific articles, particularly those with hyped and exaggerated abstracts.

Psychiatrist Adrian Preda’s guest blog post for Mind the Brain hits on the all too familiar theme of supposedly dispassionate and objective science authors pitching exaggerated claims about antidepressants (back and forth) to responsive journalists in ways that leave clinicians and laypersons confused and ambivalent about important decisions concerning medical treatment. Clinicians are left contending not only with their own confusion, but that of patients who come back to question past decisions.

I have blogged a number of times elsewhere about a psychologist mentioned in Dr. Preda’s blog post, Irving Kirsch. I noted how even a seasoned and award-winning television media person was left bewildered interviewing him for CBS Sunday Morning News when he claimed that antidepressants did not significantly improve patients’ depression better than a placebo pill. In the lively exchange that followed, I further showed that if we were to adopt his arbitrary criterion for ‘significantly better,’ we would have to accept not only that antidepressants do not work, but that psychotherapy does not either, and that many medical treatments widely seen as effective actually are not. A recent detailed dissection of the statistical malpractice in Kirsch’s analyses probably will not dampen his repetition of his claims or their echoing in the media.

And now, in a case of what Yogi Berra would call “déjà vu all over again,” Adrian Preda reports on another episode of conflicting interpretations of the data concerning the efficacy of antidepressants being further confused and selectively reported in the media. To paraphrase his quote from psychiatrist Michael Thase, I’m sure there’s no last word here. –Jim Coyne


The Antidepressant Wars, a Sequel: How the Media Distort Findings and Do Harm to Patients

Guest blog post by Adrian Preda MD

 Central to the perspective I present in this blog post is my work supervising psychiatric residents and medical students at a university-based psychiatry clinic where our patient population includes a good number of adults suffering from mild to moderate depression.

In 2010, the publication in JAMA of a single study challenged and upended a major assumption that had guided clinical work like ours for over three decades (Barrett 2001; Qaseem 2008). This was the widely covered meta-analysis of antidepressant (AD) trials conducted by Fournier and colleagues (2010), which drew the far-reaching conclusion that ADs produce a significant response in very severely depressed patients but are not more effective than a placebo in less severe cases.

Fournier et al. was not the first study to take aim at the foundation of treatment guidelines for depression, which in essence recommend treating depression with antidepressants. In 2008, Kirsch et al.’s meta-analysis of clinical trial data submitted to the Food and Drug Administration ended with a rather strongly worded conclusion:

“Drug–placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients” (emphasis added). (Kirsch et al., 2008)

After reading their findings, a neutral conclusion for Kirsch et al. would have been that

  1. ADs are statistically better than placebo.
  2. Response correlates with patients’ severity of symptoms.

Not an earth-shattering conclusion by any means, as both results were already common knowledge for anyone who started prescribing ADs after 2002, when Khan et al. published their 45-study meta-analysis of AD trial data submitted to the FDA. Their conclusion?

“The magnitude of symptom reduction was significantly related to […] initial depression […] scores; the higher the […] initial […] score, the larger the change.” (Khan et al. 2002)

Therefore, one can look at the Kirsch et al. (2008) findings as a replication of earlier findings, a continuation of a line of knowledge that had already been established, which is how scientific knowledge usually expands. Given this, one would be hard pressed to understand how a study that essentially replicated prior positive findings became the poster child for the anti-antidepressant movement that followed. But that is exactly what happened.

Interestingly, Khan et al. (2002) was not cited by Kirsch et al. (2008), in itself a remarkable oversight considering the similarities between the two studies. I found it even more troubling that, instead of explaining their findings conservatively and offering as neutral and tentative an interpretation as possible, in keeping with the traditions of scientific communication, Kirsch et al. appeared to formulate their conclusion from a position of commitment to an anti-antidepressant view:

“The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo [even] among very severely depressed patients, rather than to increased responsiveness to medication.” (Kirsch et al., 2008)

And that strongly worded conclusion made the Kirsch study an almost overnight media hit. Front-page newspaper and radio coverage followed, and criticism was dismissed (Horder 2011).

To this date, the Kirsch study remains one of the most popular papers on the PLoS Medicine website, as reflected in the following metrics: 282,219 views, 631 citations, 300 academic bookmarks, and 404 social shares (data as of December 20th, 2012). A number of critical commentaries followed. Some directly criticized Kirsch et al. (2008) for methodology or overstated conclusions (Kelly, 2008; Khan and Khan, 2008; McAllister-Williams, 2008a, 2008b; Moller, 2008; Nutt and Malizia, 2008; Parker, 2009; Turner and Rosenthal, 2008). More interestingly, a few who decided to re-analyze Kirsch’s data found they could not replicate Kirsch et al.’s pessimistic view of ADs’ efficacy (Fountoulakis, 2011; Horder et al., 2011). For unclear reasons, these subsequent reports aimed at reestablishing the ADs’ respectability got much less media attention than Kirsch’s 2008 original.

“Déjà vu All Over Again”

In this context, when Fournier et al. came along in 2010, I had a sense of déjà vu, not so much in terms of the study’s conclusions as in terms of the emotional intensity and dramatic flavor with which it was greeted by the mass media. I first heard about it on NPR, which surprised me, as I usually get to the studies I am interested in before the media does.

Over the next couple of days headlines such as these appeared in print and online media around the world:

  • From The New York Times: “Popular Drugs May Help Only Severe Depression” (Carey B, 2010)
  • From the Los Angeles Times: “Antidepressant medications probably provide little or no benefit to people with mild or moderate depression” (Roan S, 2010)

Immediately following this media hoopla, I found that my students – a new generation who had not been part of the Kirsch antidepressant wars – began to routinely question the wisdom of continuing or starting antidepressant treatment for our patients suffering from mild or moderate depression.

And it did not take long for our patients themselves to express their doubts about the efficacy of antidepressants — even for severe depression.

I was troubled at the time by the unquestioning coverage of Fournier et al., which implied that this single study was “settled science” on the subject of antidepressants when it was not, and by the inattention given, in either the professional literature or the popular press, to the complexities and long history of the debate (as discussed above) and to the serious flaws in the study’s methodology, which I summarize below.

Two years later, I am equally concerned about the lack of media coverage given to a 2012 study by Gibbons and colleagues (2012), published in Archives of General Psychiatry, which, history aside, refutes Fournier’s claim that antidepressants are not more effective than placebo for mild to moderate depression. Like Fournier et al. (2010), Gibbons et al.’s (2012) findings are based on individual patient data and include longitudinal measurement, which makes their conclusions a strong counterpoint to those of Fournier et al. (2010).

Among the points I now make to my students when questions arise about antidepressant efficacy as a result of the meta-analysis conducted by Fournier et al. are the following:

  • The individual patient-level data approach used by Fournier et al. represented an improvement over standard meta-analyses; however, their results were based on only 6 studies that met their criteria out of more than 200 relevant studies
  • Reducing 2164 citations to 6 is hardly representative, especially when the 6 analyzed studies cover only two medications, paroxetine and imipramine, the latter not recommended for first-line treatment of depressive disorders
  • Furthermore, of the 6 studies, 5 specifically excluded patients with very mild depression, making the authors’ conclusions about the lack of separation of ADs from placebo for mild depression weak

Exclusion Criteria Raise Major Questions

The strength of a meta-analysis rests on applying a solid statistical approach to all studies meeting a set of relevant inclusion/exclusion criteria, and in this case it appeared that the authors excluded too many relevant studies. Specifically, 228 studies were excluded because they used a “placebo washout lead-in” (a requirement that all study participants start on placebo, with only those who do not respond to the placebo continuing in the study). The placebo washout/lead-in is a common historical design in antidepressant trials, intended to exclude patients who do not demonstrate symptom stability and thus are not likely to benefit from a truly effective AD. Fournier et al. (2010) acknowledge that “it is not clear that placebo washouts actually enhance the statistical power of antidepressant medication/placebo comparisons”; nevertheless, they proposed that, in order to evaluate the rates of “true placebo response,” one should exclude all studies using a placebo washout/lead-in design.

While it is true that a placebo washout might limit accurate estimates of placebo response and might not improve the probability of an AD being more effective than a placebo, this design would not affect the validity of an active AD–placebo separation, were one to be found. The exclusion of washout studies was especially problematic precisely because this is a common design for AD clinical trials, meaning that numerous relevant studies would be excluded. In other words, Fournier et al. imposed a seemingly arbitrary (i.e., not evidence-based) exclusion criterion that effectively filtered out the majority of the relevant studies. This is a very bright red flag and a potential source of bias, which greatly limits the validity of the authors’ conclusions. Assuming these easily excluded studies were otherwise methodologically sound, the number of study investigators contacted would have increased from 23 to 251, and likely significantly more than 6 studies would have contributed to the final analysis.

Considering the potentially grave implications of either mental health providers or patients accepting at face value the headlines generated by widespread publication of these results, the study’s methodological weaknesses, which were not treated in any depth by the comments accepted for publication by JAMA, warrant further critical review.

Overlooked and Highly Relevant Research

Likely because it received dramatically less coverage, far fewer of my students are aware of the 2012 study by Gibbons et al. (2012) who, after reviewing 43 fluoxetine and venlafaxine trials, concluded that, contrary to the Fournier et al. (2010) findings, these two antidepressants are in fact efficacious for major depressive disorder in all age groups, regardless of depression severity at baseline.

As noted, Gibbons et al. (2012), like Fournier et al. (2010), used patient-level data, making the point against Fournier et al. even more significant. In addition, if you compare Gibbons et al.’s (2012) final set of 43 studies, with a meta-analysis population of 4303 patients in the fluoxetine trials and 4882 patients in the venlafaxine trials (more than 9000 patients in total), to the Fournier et al. (2010) final set of 6 studies (3 paroxetine and 3 imipramine trials) with a total of 718 patients, Gibbons et al.’s (2012) significantly larger number of studies makes for a more believable conclusion.

Both studies are limited in that each focused on only two ADs: paroxetine and imipramine for Fournier et al. (2010) versus fluoxetine and venlafaxine for Gibbons et al. (2012). At the same time, Gibbons et al. (2012) used an all-inclusive set of studies, whereas, as noted above, Fournier et al. (2010) used a highly selective group of studies. There are also important differences in data-analytic methods that could explain the differences in results. For example, Gibbons et al. (2012) defined severity differently than Fournier et al. (2010).

To expert eyes, the main effects for the drug-versus-placebo differences can actually be seen as similar in the two data sets. And that is the very reason for engaging in this debate.

Which study is more convincing?

The Gibbons study reminds us that it is our duty as physicians and society at large to carefully screen and aggressively treat depression, including with medications if so recommended. The Fournier study makes us aware that there might be more to the story of AD response than a straightforward active ingredient effect.

We can all speculate about why the Gibbons study received so much less media coverage than did Fournier and colleagues.

 The Sequel

In the antidepressant wars, we have seen the pendulum’s full swing: from the early nineties, when Elizabeth Wurtzel’s “Prozac Nation” was thrilled to be “Listening to Prozac” with Peter Kramer, into the early millennium years, when Healy’s tongue-in-cheek advice was to “Let Them Eat Prozac.” By the time Carl Elliott’s “Prozac as a Way of Life” hit the stands in 2003, some thought we were at the end of an era. But ADs came back strong, only to engender renewed debate and, as argued above, uneven and thus inaccurate media coverage in the current decade.

Unintended Consequences of an Unevenly Covered Debate

As my esteemed colleague Michael Thase adeptly put it to me, “There is no ‘last word’ in the science of this debate.” He is undoubtedly correct. And, as a physician, I find relief in the fact that we continue to question ingrained assumptions and are reluctant to accept that there is such a thing as a last word or a simple explanation when it comes to complex issues. Depression, with its multidimensional tentacles equally anchored in nature and nurture, will never be a good subject for simple explanations.

But, again, as a physician I am very concerned about the major unintended consequences of uneven coverage of the competing major findings discussed above. Specifically, I fear that clinically depressed members of the public at large will refuse a likely efficacious treatment option. And while all may be well if a depressed patient makes the informed alternative choice of starting treatment with cognitive behavioral therapy (CBT), a validated form of therapy for depression that compares well with SSRIs for mild or moderate depression, all is certainly NOT well if the patient’s decision not to accept treatment with antidepressants is based primarily on media-delivered misinformation.

Given the stigma against acknowledging a mental illness or treating it with a psychotropic medication, the media saturation given to a single study only worsens an already difficult situation for many patients who fear the personal and social consequences of admitting their illness and seeking treatment.

 In closing: my hope is that members of the media who cover this debate will realize that “first do no harm” is not only the duty of physicians; it is also the responsibility of anyone trusted with giving health information to the public at large.

Acknowledgements: I would like to thank Lawrence Faziola and Steven Potkin for critically discussing Fournier et al., and Michael Thase for his critical read of the draft of this article.


Adrian Preda MD is a psychiatrist and Health Sciences Professor of Psychiatry and Human Behavior at the University of California, Irvine School of Medicine. He received his residency training at Yale and was a faculty member at Yale and UT Southwestern prior to joining UC Irvine.



Barrett JE, Williams JW Jr, Oxman TE; et al. (2001) Treatment of dysthymia and minor depression in primary care: a randomized trial in patients aged 18 to 59 years. J Fam Pract. 50(5):405-412.

Carey B (2010) Popular Drugs May Help Only Severe Depression. New York Times, January 5, 2010

Fournier JC, DeRubeis RJ, Hollon SD; et al. (2010) Antidepressant drug effects and depression severity: a patient-level meta-analysis. JAMA 303(1):47-53.

Fountoulakis KN, Möller HJ (2011) Efficacy of antidepressants: a re-analysis and re-interpretation of the Kirsch data. Int J Neuropsychopharmacol. 14(3):405-12. Epub 2010 Aug 27.

Gibbons RD, Hur K, Brown CH, Davis JM, Mann JJ (2012) Benefits from antidepressants: synthesis of 6-week patient-level outcomes from double-blind placebo-controlled randomized trials of fluoxetine and venlafaxine. Arch Gen Psychiatry 69(6):572-9.

Horder J, Matthews P, Waldmann R (2011) Placebo, prozac and PLoS: significant lessons for psychopharmacology. J Psychopharmacol 25(10):1277-88. Epub 2010 Jun 22.

Kelly BD (2008) Do new-generation antidepressants work? Ir Med J 101: 155.

Khan A, Leventhal RM, Khan SR, Brown WA (2002) Severity of depression and response to antidepressants and placebo: an analysis of the Food and Drug Administration database. J Clin Psychopharmacol 22: 40–45.

Khan A, Khan S (2008) Placebo response in depression: a perspective for clinical practice. Psychopharmacol Bull 41: 91–98.

Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT (2008) Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 5: e45.

McAllister-Williams RH (2008a) Do antidepressants work? A commentary on ‘Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration’ by Kirsch et al. Evid Based Ment Health 11: 66–68.

McAllister-Williams RH (2008b) Misinterpretation of randomized trial evidence: Do antidepressants work? Br J Hosp Med (Lond) 69: 246–247.

Moller HJ (2001) Methodological aspects in the assessment of severity of depression by the Hamilton Depression Scale. Eur Arch Psychiatry Clin Neurosci 251(suppl 2): II13–20.

Moller HJ (2008) Isn’t the efficacy of antidepressants clinically relevant? A critical comment on the results of the metaanalysis by Kirsch et al. 2008. Eur Arch Psychiatry Clin Neurosci 258: 451–455.

Nutt DJ, Malizia A (2008) Why does the world have such a ‘down’ on antidepressants? J Psychopharmacol 22: 223–226.

Qaseem A, Snow V, Denberg TD, Forciea MA; et al. (2008) Clinical Efficacy Assessment Subcommittee of American College of Physicians. Using second-generation antidepressants to treat depressive disorders: a clinical practice guideline from the American College of Physicians. Ann Intern Med. 149(10):725-33.

Parker G (2009) Antidepressants on trial: how valid is the evidence? Br J Psychiatry 19: 1–3.

Turner EH, Rosenthal R (2008) Efficacy of antidepressants. Br Med J 336: 516–517.

Roan S (2010) Study finds medication of little help to patients with mild, moderate depression. Los Angeles Times. January 06, 2010.




38 thoughts on “The Antidepressant Wars, a Sequel: How the Media Distort Findings and Do Harm to Patients”

  1. As a lay person it is sometimes hard to weed through the media’s sensationalized view of medical studies and whatnot. I have learned to listen to my doctor and I also look things up on my own, but I take any media announcements with a grain of salt. Not everyone will do this and this can cause a problem as you have found even among your students.
    Basically the media is out to make money and will latch on to anything that they think will catch on and get people to consume their product. The researchers, Kirsch et al., should know better, as it should not be their jobs to sensationalize their work like that. That they did not even mention an earlier study that produced similar results is shameful as well.
Of course we haven’t heard about the studies that contradicted this; they are not so sensational. A headline stating that it is just as we thought, that antidepressants help depression, is not very exciting, is it?


  2. Good points, Ericka. Man bites dog stories get a lot more attention than dog bites man stories, but fortunately most of us have enough familiarity with humans and dogs to not get confused by the preponderance of man bites dog stories.

    Kirsch has recently published another paper in PLOS One that suggests that antidepressant medications, psychotherapy, and acupuncture are no different in their effects on depression. This of course is absurd, and the authoritative Cochrane Collaboration concluded that there just isn’t enough high-quality acupuncture research targeting depression to make any conclusions at all. Kirsch is at least consistent.


  3. Of course, one can argue that a story with so many unexpected turns of events, such as the AD story, would actually make for a better story.

    Not only that. If good stories are what journalists are looking for, then they should know that a story line is much better when one looks at how it unfolds over time rather than just at some specific point in time.

    Interestingly, we also know that a discussion on any public health issue is much more informative when it takes into account longitudinal rather than cross-sectional data.

    But despite all the above, focusing on single events is precisely what the mass media likes to do.

    When it comes to public health issues, in addition to producing substandard stories, the danger is that by doing so typical science journalists automatically limit themselves to a biased set of data, in turn producing biased and likely misleading conclusions.


  4. Thank you for writing this. I am one of those “members of the public at large” who “made a decision not to accept treatment with antidepressants” (to quit treatment, in my case) in part because of the loud and frequent messages I’d gotten from the media about the myth of antidepressants after the Kirsch book was published. This message fit nicely into my worldview (that depression diagnoses and antidepressant use skyrocketed in recent decades because of aggressive marketing by Big Pharma) and reinforced the stigma I felt about taking an antidepressant.

    So I quit the antidepressant I had been taking for 5 years, sure that the benefits I had gained from it were all in my mind (because it’s no better than a placebo, right?) and sure that nothing would change and I’d be fine. Not surprisingly, this experiment didn’t turn out so well. In fact, it was almost fatal.

    Now, tho, I am much better informed about depression and its treatments. I hate that popular media typically addresses depression as if it’s either psychosocial or biological/medical in nature – and thus should be treated either with therapy or with drugs. And addresses antidepressants as if they are either The Best Thing Ever or Bullsh*# in a Pill. Obviously, complexity and thoughtful analysis don’t make for exciting headlines. But as you point out, and as I’ve experienced, the media’s sensationalism and oversimplification of health-related topics can be pretty damaging to a lot of people.


  5. “(…)but we need to realize that journalists don’t work for us, they work for the media and themselves. ”
    As a journalist I tend to believe we work for the public.


  6. “We can all speculate about why the Gibbons study received so much less media coverage than did Fournier and colleagues”.

    To be honest, I think the Gibbons study got more coverage than it deserved. To me it read as a piece of marketing by Pfizer. It is based on data that only he (a massive antidepressant supporter and opponent of the black box warnings) and Pfizer have access to, and we are supposed to believe that Pfizer had this data all along and just didn’t notice the benefit.

    Which study is more convincing?

    Surely without access to the same raw data Gibbons used to come to his conclusions, it all means close to nothing.


  7. @Ana
    Very sorry to hear about your experience.

    It unfortunately mirrors the confusion that I have seen in some of my patients following big media announcements to the effect that depression is “all in your head”.

    I believe that in terms of treatment recommendations it is first and foremost the responsibility of the physician to try to make sense of the confusing and often contradictory information that’s “out there”.

    At the same time, I believe that responsible journalism is about checking one’s sources and making an effort to present alternative viewpoints in as neutral a manner as possible. In my opinion, “spinning” and sensationalism are simply unethical when it comes to public health stories.


  8. As with science, it helps if the reader understands the process of journalism. In the United States, unless it is an editorial, a news account should not contain the writer’s opinion.

    Many journalists specialize in a subject, but articles are also assigned to journalists who may know nothing about a subject, which is why they interview sources who are experts instead of making it up themselves. For this reason, an article should use multiple “expert” sources (the rule of thumb is three) who express their viewpoints. If a source is less than truthful or disingenuous in their interview responses, it may make it into the news unless the journalist has another way of fact checking – reporters don’t have ESP. One way to do that is to use multiple points of view from neutral sources or from sources who may disagree with the authors of the study or field in question.

    As well, because scientists disagree quite often, a good journalist will provide context so that readers understand why scientists disagree. Interviewing only the authors of the study in question – or only scientists who support the authors’ viewpoint in a controversial area – is known among journalists by the rather derogatory term “press release” journalism.

    However, a reader’s disagreement with a source doesn’t automatically mean the article, journalist, or media outlet hyped or distorted anything – oftentimes it means the reader’s point of view differs from that of the source quoted.


  9. @Kate Benson

    Thanks for the clarifications. It sounds like a well thought out process. It also looks like following through will take some time.

    If that’s the case, I wonder how many media people choose to follow it all the way through when the reality is an environment that puts so much pressure on being first to hit the press.

    BTW, I did not mean to imply that all the responsibility for the process rests entirely with the media. Scientists can and should play a major part in the responsible dissemination of their research findings. However, there is an obvious conflict of interest when one reports on one’s own research.

    And that is precisely when input from a neutral referee, ideally the media person at the other end of the microphone, could make a tremendous difference.


  10. Papers like the one published by Gibbons et al (2012) present scientists working this area with a real ethical dilemma. On the one hand it confirms that current antidepressants are grossly inadequate, reporting as it does that over 40% of patients do not respond to these drugs. On the other, it is clear that some people are deriving at least some benefit even though the net advantage to these is (according to the Gibbons et al (2012) paper) a mere 2.5 HAM-D units.

    The de facto resolution to this dilemma has been to try and prevent the inadequacies of these drugs from becoming common knowledge. This was done in the first instance to preserve drug company profits but these drugs are now out of patent. Hence, this now seems to be being driven by a fear that the benefits of prescribing these drugs are so fragile that they could be destroyed, rather like a placebo effect, if these inadequacies become too well known.

    The choice is not however between the needs of those who do and do not respond to the currently available drugs. It is a decision about whether this situation can be accepted as the status quo for yet another generation or more.

    The reality is that Big Pharma no longer sees new drug development for the treatment of psychiatric disorders as a priority and this is likely to remain the case until they are persuaded to do otherwise.

    Hence, we are in consequence faced with a stark choice – whether to keep the inadequacies of current treatments under wraps in the hope of preserving marginal benefits for a minority of patients or to publicize these inadequacies as widely as possible in order to bring public and political pressure to bear on finding a proper solution to what is the mental health problem of our age.

    My own view is that bad science has put us into this situation (see Hendrie and Pickles, 2012) and that only the full public recognition of this and good science can get us out of it.


    1. I think that you have grabbed and identified the tail of the elephant, but are not noticing the trunk. Antidepressants have about the same efficacy as psychotherapy, and both show similar differences from pill placebo in the dozen or so trials that allow comparison. Certainly, patients differ in whether they respond better to one or the other, but it is difficult to predict which one ahead of time. So if there is a plateau in the development of antidepressants that needs recognition, so too is there a plateau in the development of psychotherapies. Moreover, in the case of psychotherapies there is little evidence that any one structured, supportive intervention with an adequate rationale is better than any other.


  11. I agree we need better anti-depressive treatments. Under anti-depressive treatments I certainly include antidepressants. To get there we need better research models at all levels – from molecules to social design.

    But the discussion here is about a slightly different topic. My goal was to discuss how the same evidence (assuming there is such a thing as an absolute truth about a treatment effect in this population) can be made to appear not as one thing but as a number of different things under the polarized lenses of different statistical models, and then further distorted by the even “deeper” polarized lenses of mass media outlets.


    1. I don’t disagree with you, Adrian, but I think you are downplaying the degree to which the media circus IN FAVOR of antidepressants has led to the very backlash you now decry. While antidepressants may be statistically better than placebo, it’s not by a huge amount, and placebos are pretty darned effective overall. It is true that psychotherapy may be similar in effectiveness, but it has a much lower risk profile (at least with a competent therapist), and so it really should always have been the first line of defense for mild to moderate depression, and should still be.

    The huge media circus celebrating the wonders of antidepressants led many people to believe their problems would be relieved, when in fact their moods can only be temporarily and partially lifted in most cases. There is also emerging evidence that long-term antidepressant use may cause long-term changes in the brain, and that these changes may actually make it more likely that an acute depressive episode turns into something chronic.

    With all of this information now known, it seems that psychiatry is overdue for a bit of an apology. Kirsch may have overstated the case against ADs, but the APA and the pharmaceutical industry have grossly overstated the case in their favor, and continue to do so. My thinking is that ADs should be but one tool in a big toolbox that includes exercise, diet, mindfulness, neurofeedback, peer support networks, psychoeducation, and therapy, with pills being an adjunct used when needed. ADs became the first-line treatment based on distortions of the research. It’s time for those in the know to publicly acknowledge those distortions. It will make work like Kirsch’s much less significant if the real truth is available in all its complexity.

      — Steve


      1. Steve, it is not evidence based to justify a particular position as a backlash. The evidence is that antidepressants have effects equivalent to psychotherapy vis-à-vis pill placebo. Neurofeedback is not evidence based. Really, Steve, now that you have made the suggestion, I think it is perhaps time for you to acknowledge your distortions.


      2. Much of medicine is not evidence-based, and much evidence-based medicine eventually turns out to be poor medicine. I would remind you that absence of evidence is not evidence of absence. Steve put forward (to Adrian) a very reasonable position, which perhaps (by way of your response) revealed your own biases?


  12. It’s been a long time since I spent any time thinking about these issues, so my ideas may seem hopelessly naive. All of these studies seem to use the Hamilton Depression Rating Scale. Is this a good instrument? If the scale is too crude, perhaps antidepressants have beneficial effects that are too subtle to be reflected in it.
    Part of the challenge in studying SSRIs is their long latency to take effect. Is the two-week latency period enough time for mild/moderate depression to spontaneously remit? Spontaneous remissions would be additive to (and indistinguishable from) any “true” placebo effect. In this scenario, I guess the bottom line would still be the same (SSRIs apparently don’t do much for mild to moderate depression).
    Has anyone looked at the effect of antidepressants on the individual components that go into the HAM-D scale (e.g. depressed mood, feelings of guilt, etc.)? It seems to me possible, at least in principle, that antidepressants could preferentially affect a subset of these symptoms separately from others.
    Part of my obsession with the scale is that the things it measures could be “downstream” effects that are not as tightly correlated with the “core” problem as it seems. For instance, insomnia often follows excessive use of soft drinks marketed as energy drinks, but that probably has nothing to do with depression.


  13. Thank you! This was really informative and beautifully reported. Balanced, detailed, logical. A pleasure to read! I feel like I’ve gained badly needed insight not only into the SSRI meta-analysis wars but also into how science reporting operates in the complex layers of public (and scientific!) opinion.


  14. @Mark Lewis:


    Following the feedback I got here, I think it would be worthwhile to impress on media decision makers (editors?) the need to make a more concerted effort to engage scientists/experts alongside science journalists when it comes to debating public health issues. (In my field, in addition to the hot topics of depression/antidepressants, we should also include preventive mental health, schizophrenia, dementia, substance abuse, prescription drugs – and, sadly, I can keep going…)


  15. Thirty years ago the same discussions were raging in academic circles, as psychiatry/mental health was still in the closet and the media did not pay much attention. At Washington University, Robins and Guze were helping patients by trying to bring some sense to all the contradictory schools of psychiatry. They applied the scientific method in rigorous ways, as it was the obvious method that medicine used to understand and help patients. They used the same five criteria to define disorders/syndromes as were used in the successful fields of medicine. Over and over they tried to help other psychiatrists understand the need for strong criteria to make specific diagnoses, so that research and clinical treatment could be used to help patients. They gave research-backed diagnoses, including how to define the “depressive disorders,” to help avoid all the confusion that today, 30 years later, still rips apart the psychiatric/mental health establishment, taking down as collateral damage the patients and the families it is trying to help. The Major Depression diagnosis used now, as in the DSM-IV-TR, includes so many different disorders/syndromes that it makes all these studies confusing and contradictory. The first thing they did was stop using “Major Depression,” to avoid confusion with those clinicians who considered depression a continuum of normal feeling. So they called it Primary Affective Disorder. Why primary? Because they recognized, from studies and also from common sense, that people who had other psychiatric disorders (like alcoholism), were taking medications like cortisol, were grieving the loss of a loved one, suffered from anxiety, etc., had a different natural history, or course, and responded differently to treatment.
    Today we still do not understand the science. We start with the wrong premises and lump all the “depressions” together (I always wonder how much influence pharma has achieved here, as it is better for them to have a broad pool of patients – more patients to treat). We use the adversarial system, with its black-or-white simple explanations, which polarizes us more and more to the detriment of the patients. When are we going to stop acting like special interest groups, pushing our agendas even when we obtain results so conflicting that they make us the laughing stock of medicine and science, and fuel the fire of those who want to deny the brain disorders that devastate our patients and their families? It is time to accept that the Aristotelian approach of observation and corroboration has triumphed over the Platonic approach that only some in psychiatry follow. These are not new controversies, but the same ones that appear every 2–3 years, only with different coloring, now calling these studies “Science.” Yes, I am frustrated: after all the hard work of these pioneers, 30 years later I still do not have much improvement in diagnosis or in treatment options to help my patients, who still have to fight shame, stigma, refusal of their insurance to pay for treatment, and the periodic media frenzy making them and their families feel like fools for taking medications that are helping them. Do not blame the media or the general public; blame the “professionals”!


  16. @ Felix R. Toro

    There is a difference between complexity and confusion. Psychiatry turns out to be a complex discipline, with roots in the biological, psychological and social realms.

    To make matters even more complicated, brain biology is a whole other ball game than cardiac biology. Cause and effect biological explanations are not an easy fit for a complex biological system such as the brain where the relationship between structure and function is multidimensional.

    Guze’s model is essentially a biologically reductionist model of mental illness, which makes perfect sense in the context of some of the questions we try to answer with regard to diagnoses and treatments; however, in our quest for understanding we need to remember that while simple is good, simplistic is bad.

    In this context it is expected that the process of scientific discovery in psychiatry will not be as straightforward as in other medical disciplines.

    This is not a matter of disappointment or failure. The hard work that has been completed to date is only a foundation, not the final answer to “all things psych.”


  17. @ Altostrata

    I found your comment illustrative of a certain attitude, in my view characteristic of antipsychiatry debates, which in essence claim that either mental illness does not exist as such – it was invented by profit- and power-motivated psychiatrists – or, if it exists, it should not be treated with medications, the whole set of medication recommendations for psychiatric illness being based on nothing but a nefarious relationship between psychiatry and the pharma industry, founded on mutual financial incentives.

    Antipsychiatry bias aside, your comment falls under the category of what have been called “troll comments”: comments that are in essence non-informative, biased, and that attempt to promote an anti-whatever-the-topic-might-be agenda by using a number of diversionary techniques with misleading face validity but a fallacious logical foundation.

    It is unfortunate to see that your comment exemplary follows the outline of a typical troll comment, including:

    1. Make a sweeping contrarian statement such as:

    [in regards to one of the studies we discuss]:

    the conclusion being it was concocted, the data twisted, to support a pre-existing opinion.

    2. Make sure your statement is not tentative in any way. Don’t make it look like a hypothesis you will work towards proving (or disproving). On the contrary, state it in the most definitive manner. There should be no ifs, ands or buts; what you state should appear as the final truth.

    3. As what you stated appears to be the final truth, assume that it is the final truth. Therefore, there is no need to bother presenting evidence for your case.

    4. Make some more sweeping statements, with the goal of further distracting the reader about the issue under debate:

    Research in psychiatry amounts to either pharma-sponsored infomercials or territorial defense by “experts” who established their credentials when they were pharma consultants.

    […]Journal selectivity is not what it once was.

    5. Don’t bother to qualify or prove any of those – or in fact any of your “final” and “definitive” conclusions – maybe because you can’t defend them in the first place, but most likely because you don’t want your argument to appear “weak.”

    6. Conclude with a blistering ad hominem attack to signal to your opponent that it might not be a good idea to engage in further debate:
    Dr. Preda should be much more skeptical of his sources, or he’ll be doing nothing but parroting a party line established in the pharma money glory days.

    7. Put your comment out there and distract the heck out of the readers – thus effectively weakening their ability to objectively assess the case under debate.

    Fait accompli!

    By the way, I have no trouble discussing evidence that contradicts what I state – I actually believe progress and good science require a healthy dose of skepticism and contrarianism. At the same time, I do take issue with inflammatory claims based on poorly constructed arguments.


  18. @Altostrata

    JAMA Psychiatry just published two letters to the editor about the Gibbons et al. study I discussed. They both make important points about weaknesses in Gibbons’s methodology. Gibbons replied.

    I advise anyone interested in this discussion to have a look for themselves at the JAMA Gibbons discussion.

    To make my position clear: I am equally invested or equally non-invested in Gibbons and Fournier. My intent is only to make the point that there is more than one side to a complex story.

    Further, the implicit conclusion is that even research articles that have been already subjected to rigorous peer review do NOT make for absolute truth – and non-partisan skepticism should be equally applied to all sides of the antidepressant discussion.


  19. @Altostrata

    “Supposedly” is problematic. What you are proposing is some sort of conspiracy theory in which Big Pharma and generations of researchers and clinicians had an underground, behind-closed-doors agreement to drug up millions of people.

    Understanding that in reality depression is a complex bio-psycho-social disorder explains why any simplistic approach to its diagnosis and treatment is doomed to fail.

    That is exactly why it is harder to make sense of what at times appear to be contradictory data about its response to treatments – and yes that includes medications and psychotherapy as well as placebo response.

    What I propose is that this level of data complexity needs to be properly understood before (too easily) casting blame and calling names, in addition to uncritically deciding on what works or does not work.


  20. Are you familiar with this meta-analysis that suggests that the relapse rate is much greater in AD treatment groups vs. placebo groups (and correlated with the potential of the AD used to perturb the monoamine homeostasis)?

    Is the study flawed, or can the results be explained in a way that leaves ADs in a good light?


    1. The study raises an interesting but intricate hypothesis. I really cannot evaluate the meta-analysis. It was not pre-registered, so I cannot tell whether the intricacy of the hypothesis was achieved post hoc, or HARKed – hypothesized after the results were known. Furthermore, basic statistics, like analyses of heterogeneity, that I would need to evaluate its appropriateness are not provided. I am a great fan of the AMSTAR criteria, and this meta-analysis would not be rated highly.


  21. Also, as a physician, Adrian, I am worried by your well-defined criticisms of Fournier’s paper but the lack of any (despite many having been made in the literature – not least regarding study selection) around Gibbons’.

    Do you have any financial disclosures to make?


  22. I participated as a consultant, inside a CRO, in a protocol testing fluoxetine, a standardized Hypericum (St John’s wort) preparation, and placebo in mildly to moderately depressed patients.
    The first issue was locating patients willing to enter the trial: psychiatrists had too few of them to screen against the inclusion criteria, and enrollment was sluggish. After the search was moved to primary care practitioners, cases began to be recruited, and the final result was that Hypericum gave better results, with statistical significance, than fluoxetine, but neither fluoxetine nor Hypericum differed significantly from placebo.

    A possible explanation may be that no good definition of depression exists and that symptoms of all mental disorders overlap; each case, with its own genetic background and pharmacogenomics, has its own ‘best choice’ drug. Perhaps we simply don’t have enough information about what happens in persons suffering mental troubles and looking for care.

