Danish RCT of cognitive behavior therapy for whatever ails your physician about you

I was asked by a Danish journalist to examine a randomized controlled trial (RCT) of cognitive behavior therapy (CBT) for functional somatic symptoms. I had not previously given the study a close look.

I was dismayed by how highly problematic the study was in so many ways.

I doubted that the results of the study showed any benefit to the patients or had any relevance to healthcare.

I then searched for and found the website for the senior author’s clinical offerings. I suspected that the study was a mere experimercial or marketing effort for the services he offered.

Overall, I think what I found hiding in plain sight has broader relevance to scrutinizing other studies claiming to evaluate the efficacy of CBT for what are primarily physical illnesses, not psychiatric disorders. Look at the other RCTs. I am confident you will find similar problems. But then there is the bigger picture…

[A controversial assessment ahead? You can stop here and read the full text of the RCT and its trial registration before continuing with my analysis.]

Schröder A, Rehfeld E, Ørnbøl E, Sharpe M, Licht RW, Fink P. Cognitive–behavioural group treatment for a range of functional somatic syndromes: randomised trial. The British Journal of Psychiatry. 2012;200:499–507.

A summary overview of what I found:

 The RCT:

  • Was unblinded to patients, interventionists, and to the physicians continuing to provide routine care.
  • Had a grossly unmatched, inadequate control/comparison group, so that any benefit from nonspecific (placebo) factors in the trial counted toward the estimated efficacy of the intervention.
  • Relied on subjective self-report measures for primary outcomes.
  • With such a familiar trio of design flaws, even an inert homeopathic treatment would be found effective, if it were provided with the same positive expectations and support as the CBT in this RCT. [This may seem a flippant comment that reflects on my credibility, not the study. But please keep reading to my detailed analysis where I back it up.]
  • The study showed an inexplicably high rate of deterioration in both the treatment and control groups. Apparent improvement in the treatment group might only reflect less deterioration than in the control group.
  • The study focused on an unvalidated psychiatric diagnosis applied to patients with multiple somatic complaints, some of whom may not yet have had a medical diagnosis, but most of whom clearly had confirmed physical illnesses.

But wait, there is more!

  • It’s not CBT that was evaluated, but a complex multicomponent intervention in which what was called CBT is embedded in a way that its contribution cannot be evaluated.

The “CBT” did not map well onto international understandings of the assumptions and delivery of CBT. The complex intervention included weeks of indoctrinating the patient with an understanding of their physical problems that incorporated simplistic pseudoscience before any CBT was delivered. The treatment focused on goals imposed by a psychiatrist that did not necessarily fit patients’ sense of their most pressing problems or of the appropriate solutions.

And the kicker.

  • The authors switched primary outcomes – reconfiguring the scoring of their subjective self-report measures years into the trial – based on a peek at the results obtained with the original scoring.

The investigators maintain a website marketing their services. Rather than a quality contribution to the literature, this study can be seen as an experimercial, doomed to bad science and questionable results from before the first patient was enrolled. Is an undeclared conflict of interest in play? There is also another serious undeclared conflict of interest for one of the authors.

For the uninformed and gullible, the study handsomely succeeds as an advertisement for the investigators’ services to professionals and patients.

Personally, I would be indignant if a primary care physician tried to refer me or a friend or family member to this trial. In the absence of overwhelming evidence to the contrary, I assume that people around me who complain of physical symptoms have legitimate physical concerns. If they do not yet have a confirmed diagnosis, it serves little purpose to stop the probing and refer them to psychiatrists. This trial operates with an anachronistic, Victorian definition of a psychosomatic condition.

But why should we care about a patently badly conducted trial with switched outcomes? Is it only a matter of something being rotten in the state of Denmark? Aside from the general impact on the existing literature concerning CBT for somatic conditions, results of this trial were entered into a Cochrane review of nonpharmacological interventions for medically unexplained symptoms. I previously complained about one of the authors of this RCT also being listed as an author on another Cochrane review protocol. Prior to that, I complained to Cochrane about this author’s larger research group influencing a decision to include switched outcomes in another Cochrane review. A lot of us rightfully depend heavily on the verdict of Cochrane reviews for deciding best evidence. That trust is being put in jeopardy.

Detailed analysis

1. This is an unblinded trial, a particularly weak methodology for examining whether a treatment works.

The letter that alerted physicians to the trial had essentially encouraged them to refer patients they were having difficulty managing.

‘Patients with a long-term illness course due to medically unexplained or functional somatic symptoms who may have received diagnoses like fibromyalgia, chronic fatigue syndrome, whiplash associated disorder, or somatoform disorder.’

Patients and the physicians who referred them subsequently received feedback about which group patients had been assigned to, either routine care or what was labeled as CBT. This information could have had a strong influence on the outcomes that were reported, particularly for the patients left in routine care.

Patients’ learning that they had not been assigned to the intervention group was undoubtedly disappointing and demoralizing. The information probably did nothing to improve the positive expectations and support available to patients in routine care. This could have had a nocebo effect. The feedback may have contributed to the otherwise inexplicably high rates of subjective deterioration [to be noted below] reported by patients left in the routine care condition. In contrast, the disclosure that patients had been assigned to the intervention group undoubtedly boosted the morale of both patients and physicians and also increased the gratitude of the patients. This would be reflected in their responses to the subjective outcome measures.

The gold standard alternative to an unblinded trial is a double-blind, placebo-controlled trial in which neither providers, nor patients, nor even the assessors rating outcomes know to which group particular patients were assigned. Of course, this is difficult to achieve in a psychotherapy trial. Yet a fair alternative is a psychotherapy trial in which patients and those who refer them are blind to the nature of the different treatments, and in which an effort is made to communicate credible positive expectations about the comparison control group.

Conclusion: A lack of blinding seriously biases this study toward finding a positive effect for the intervention, regardless of whether the intervention has any active, effective component.

2. A claim that this is a randomized controlled trial depends on the adequacy of the control offered by the comparison group, enhanced routine care. Just what is being controlled by the comparison? In evaluating a psychological treatment, it’s important that the comparison/control group offers the same frequency and intensity of contact, positive expectations, attention and support. This trial decidedly did not.

There were large differences between the intervention and control conditions in the amount of contact time. Patients assigned to the cognitive therapy condition received an additional nine group sessions with a psychiatrist, each of 3.5 hours’ duration, plus the option of even more consultations. The over 30 hours of contact time with a psychiatrist should be very attractive to patients who wanted such attention and could not otherwise obtain it. For some, it undoubtedly represented an opportunity to have someone listen to their complaints of pain and suffering in a way that had not previously happened. This is also more than the intensity of psychotherapy typically offered in clinical trials, which is closer to 10 to 15 fifty-minute sessions.

The intervention group thus received substantially more support and contact time, which was delivered with more positive expectations. This wealth of nonspecific factors favoring the intervention group compromises any effort to disentangle the specific effects of any active ingredient in the CBT intervention package. From what has been said so far, the trial’s providing a fair and generalizable evaluation of the CBT intervention is nigh impossible.

Conclusion: This is a methodologically poor choice of control groups with the dice loaded to obtain a positive effect for CBT.

3. The primary outcomes, both as originally scored and after switching, are subjective self-report measures that are highly responsive to nonspecific treatments and to the alleviation of mild depressive symptoms and demoralization. They are not consistently related to objective changes in functioning. They are particularly problematic when used as outcome measures in an unblinded clinical trial with an inadequate control group.

There have been consistent demonstrations that assigning patients to inert treatments and measuring the outcomes with subjective measures may register improvements that will not correspond to what would be found with objective measures.

For instance, a provocative New England Journal of Medicine study showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but there was a large superiority for the established medical treatment when judged with objective measures.

There have been a number of demonstrations that treatments such as the one offered in the present study to patient populations similar to those in the study produce changes in subjective self-report that are not reflected in objective measures.

Much of the improvement in the primary outcomes occurred before the first assessment after baseline, with not very much improvement afterwards. This early response is consistent with a placebo response.

The study actually included one largely unnoticed objective measure: utilization of routine care. Presumably, if the CBT were as effective as claimed, it would have produced a significant reduction in healthcare utilization. After all, isn’t the point of this trial to demonstrate that CBT can reduce healthcare utilization associated with (as yet) medically unexplained symptoms? Curiously, utilization of routine care did not differ between groups.

The combination of subjective outcomes, the unblinded nature of the trial, and a poorly chosen control group brings together features that are highly likely to produce the appearance of positive effects without any substantial benefit to the functioning and well-being of the patients.

Conclusion: Evidence for the efficacy of a CBT package for somatic complaints that depends solely on subjective self-report measures is unreliable, and unlikely to generalize to more objective measures of meaningful impact on patients’ lives.

4. We need to take into account the inexplicably high rates of deterioration in both groups, but particularly in the control group receiving enhanced care.

There was unexplained deterioration in 50% of the control group and 25% of the intervention group. Rates of deterioration are given only a one-sentence mention in the article, but they deserve much more attention. These rates need to qualify and dampen any generalizable clinical interpretation of other claims about outcomes attributed to the CBT. We need to keep in mind that clinical trials cannot determine how effective treatments are in absolute terms, only how different a treatment is from a control group. So, an effect claimed for a treatment over a control can largely or entirely come from deterioration in the control group, not from anything the treatment offers. The claim of success for CBT probably depends largely on the deterioration in the control group.
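
To make the arithmetic concrete, here is a minimal sketch in Python with purely hypothetical numbers (not the trial’s data), showing how a between-group “effect” can be produced entirely by deterioration in the control arm:

    # Toy illustration with made-up numbers, not the trial's data.
    # Trials estimate the difference between arms, not improvement in absolute terms.
    baseline = 50.0                     # same hypothetical baseline score in both arms

    treatment_followup = 50.0           # treatment arm: no change at all
    control_followup = 45.0             # control arm: deteriorates by 5 points

    treatment_change = treatment_followup - baseline    # 0.0
    control_change = control_followup - baseline        # -5.0

    # The reported "treatment effect" is the between-group difference in change.
    apparent_effect = treatment_change - control_change
    print(apparent_effect)  # 5.0 points favoring treatment, despite zero improvement in it

On such an estimate, a treatment that merely prevents some of the deterioration seen in the control group looks “effective” even if no patient improves.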

One interpretation of this trial is that spending an extraordinary 30 hours with a psychiatrist leads to only half the deterioration experienced with nothing more than routine care. But that raises the question of why such a large proportion of the patients left in routine care were deteriorating. What could possibly be going on?

Conclusion: Unexplained deterioration in the control group may explain apparent effects of the treatment, but both groups are doing badly.

5. The diagnosis of “functional somatic symptoms” or, as the authors prefer, “severe bodily distress syndrome”, is considered by the authors to be a psychiatric diagnosis. It is not accepted as a valid diagnosis internationally. Its validation is limited to work done almost entirely within the author group, which is explicitly labeled as “preliminary.” This biased sample of patients is quite heterogeneous, beyond their physicians having difficulty managing them. The patients have a full range of subjective complaints and documented physical conditions. Many of them would not be considered as primarily having a psychiatric disorder internationally, and certainly not within the US, except where they had major depression or an anxiety disorder. Such psychiatric disorders were not exclusion criteria.

Once sent on the pathway to a psychiatric diagnosis by their physicians’ making a referral to the study, patients had to meet additional criteria:

To be eligible for participation individuals had to have a chronic (i.e. of at least 2 years duration) bodily distress syndrome of the severe multi-organ type, which requires functional somatic symptoms from at least three of four bodily systems, and moderate to severe impairment in daily living.

The condition identified in the title of the article is not validated as a psychiatric diagnosis. The two papers to which the authors refer are their own studies (1, 2), based on a single sample. The title of one of these papers makes a rather immodest claim:

Fink P, Schröder A. One single diagnosis, bodily distress syndrome, succeeded to capture 10 diagnostic categories of functional somatic syndromes and somatoform disorders. Journal of Psychosomatic Research. 2010;68(5):415–26.

In neither of the two papers nor in the present RCT is there sufficient effort to rule out a physical basis for the complaints qualifying these patients for a psychiatric diagnosis. There is also a lack of follow-up to see whether physical diagnoses were later applied.

Citation patterns of these papers strongly suggest that the authors have not gotten much traction internationally. The criterion of symptoms from three out of four bodily systems is arbitrary and unvalidated. Many patients with known physical conditions would meet these criteria without any psychiatric diagnosis being warranted.

The authors relate their essentially homegrown diagnosis to the functional somatic syndromes, diagnoses which are themselves subject to serious criticism. See, for instance, the work of Allen Frances, M.D., who had been the chair of the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM-IV) Task Force. He became a harsh critic of the shortcomings of the next DSM and of the APA’s failure to correct its coverage of functional somatic syndromes.

Mislabeling Medical Illness As Mental Disorder

Unless DSM-5 changes these incredibly over inclusive criteria, it will greatly increase the rates of diagnosis of mental disorders in the medically ill – whether they have established diseases (like diabetes, coronary disease or cancer) or have unexplained medical conditions that so far have presented with somatic symptoms of unclear etiology.

And:

The diagnosis of mental disorder will be based solely on the clinician’s subjective and fallible judgment that the patient’s life has become ‘subsumed’ with health concerns and preoccupations, or that the response to distressing somatic symptoms is ‘excessive’ or ‘disproportionate,’ or that the coping strategies to deal with the symptom are ‘maladaptive’.

And:

“These are inherently unreliable and untrustworthy judgments that will open the floodgates to the overdiagnosis of mental disorder and promote the missed diagnosis of medical disorder.”

The DSM-5 Task Force refused to adopt the changes proposed by Dr. Frances.

Bad News: DSM 5 Refuses to Correct Somatic Symptom Disorder

Leading Frances to apologize to patients:

My heart goes out to all those who will be mislabeled with this misbegotten diagnosis. And I regret and apologize for my failure to be more effective.

The chair of the DSM-5 Somatic Symptom Disorder work group has delivered a scathing critique of the very concept of medically unexplained symptoms.

Dimsdale JE. Medically unexplained symptoms: a treacherous foundation for somatoform disorders? Psychiatric Clinics of North America. 2011;34(3):511–13.

Dimsdale noted that applying this psychiatric diagnosis sidesteps the quality of medical examination that led up to it. Furthermore:

Many illnesses present initially with nonspecific signs such as fatigue, long before the disease progresses to the point where laboratory and physical findings can establish a diagnosis.

And such diagnoses may encompass far too varied a group of patients for any intervention to make sense:

One needs to acknowledge that diseases are very heterogeneous. That heterogeneity may account for the variance in response to intervention. Histologically, similar tumors have different surface receptors, which affect response to chemotherapy. Particularly in chronic disease presentations such as irritable bowel syndrome or chronic fatigue syndrome, the heterogeneity of the illness makes it perilous to diagnose all such patients as having MUS and an underlying somatoform disorder.

I tried making sense of a table of the additional diagnoses that the patients in this study had been given. A considerable proportion of patients had physical conditions that would not be considered psychiatric problems in the United States. Many patients could be suffering from multiple symptoms arising not only from these conditions but also from the side effects of the medications they were prescribed. It is very difficult to manage the multiple medications required by multiple comorbidities. Community physicians found these patients taxing of their competence and of the time they were able to spend with them.

[Table of functional somatic symptoms]

Most patients had a diagnosis of “functional headaches.” It’s not clear what this designation means, but conceivably it could include migraine headaches, which are accompanied by multiple physical complaints. CBT is not an evidence-based treatment of choice for functional headaches, much less migraines.

Over a third of the patients had irritable bowel syndrome (IBS). A systematic review of the comorbidity of irritable bowel syndrome concluded that physical comorbidity is the norm in IBS:

The nongastrointestinal nonpsychiatric disorders with the best-documented association are fibromyalgia (median of 49% have IBS), chronic fatigue syndrome (51%), temporomandibular joint disorder (64%), and chronic pelvic pain (50%).

In the United States, many patients and specialists would find it offensive and counterproductive to treat irritable bowel syndrome as a psychiatric condition. There is growing evidence that irritable bowel syndrome is a disturbance of the gut microbiota. It involves a gut-brain interaction, but the primary direction of influence is from the disturbance in the gut to the brain. Anxiety and depression symptoms are secondary manifestations, a product of activity in the gut influencing the nervous system.

Most of the patients in the sample had a diagnosis of fibromyalgia and over half of all patients in this study had a diagnosis of chronic fatigue syndrome.

Other patients had diagnosable anxiety and depressive disorders, which, particularly at the lower end of severity, are responsive to nonspecific treatments.

Undoubtedly many of these patients, perhaps most of them, are demoralized by not being able to get a diagnosis for what they have good reason to believe is a medical condition, quite apart from the discomfort, pain, and interference with their lives that they are experiencing. They could be experiencing demoralization secondary to physical illness.

These patients presented with pain, fatigue, general malaise, and demoralization. I have trouble imagining how their specific most pressing concerns could be addressed in group settings. These patients pose particular problems for making substantive clinical interpretation of outcomes that are highly general and subjective.

Conclusion: Diagnosing patients with multiple physical symptoms as having a psychiatric condition is highly controversial. Results will not generalize to countries and settings where the practice is not accepted. Many of the patients involved in the study had recognizable physical conditions, and yet they were shunted to psychiatrists who focused only on their attitudes toward the symptoms. They were denied the specialist care and treatments that might conceivably reduce the impact of their conditions on their lives.

6. The “CBT” offered in this study is part of a complex, multicomponent treatment that does not resemble cognitive behavior therapy as it is practiced in the United States.

As seen in Figure 1 of the article, the multicomponent intervention is quite complex and consists of more than cognitive behavior therapy. Moreover, at least in the United States, CBT has distinctive elements of collaborative empiricism: patients and therapist work together, selecting issues on which to focus and developing strategies, with the patients reporting back on their efforts to implement them. From the details available in the article, the treatment sounded much more like exhortation or indoctrination, even arguing with the patients if necessary. An English version of the educational material used in the initial sessions, available on the web, confirms that a lot of condescending pseudoscience was presented to convince the patients that their problems were largely in their heads.

Without a clear application of learning theory, behavioral analysis, or cognitive science, the “CBT” treatment offered in this RCT has much more in common with the creative novation therapy offered by Hans Eysenck, which is now known to have been justified with fraudulent data. Indeed, comparing the educational materials for this study with what was offered in Eysenck’s work reveals striking similarities. Eysenck advanced the claim that his intervention could prevent cardiovascular disease and cancer and overcome iatrogenic effects. I know, this sounds really crazy, but see my careful documentation elsewhere.

Conclusion: The embedding of an unorthodox “CBT” in a multicomponent intervention in this study does not allow isolating any specific, active component of CBT that might be at work.

7. The investigators disclose having altered their scoring of their primary outcome years after the trial began, and probably after a lot of outcome data had been collected.

I found a casual disclosure in the method section of this article unsettling, particularly in light of what had been specified in the original trial registration:

We found an unexpected moderate negative correlation of the physical and mental component summary measures, which are constructed as independent measures. According to the SF-36 manual, a low or zero correlation of the physical and mental components is a prerequisite of their use.23 Moreover, three SF-36 scales that contribute considerably to the PCS did not fulfil basic scaling assumptions.31 These findings, together with a recent report of problems with the PCS in patients with physical and mental comorbidity,32 made us concerned that the PCS would not reliably measure patients’ physical health in the study sample. We therefore decided before conducting the analysis not to use the PCS, but to use instead the aggregate score as outlined above as our primary outcome measure. This decision was made on 26 February 2009 and registered as a protocol change at clinicaltrials.gov on 11 March 2009. Only baseline data had been analysed when we made our decision and the follow-up data were still concealed.

Switching outcomes, particularly after some results are known, constitutes a serious violation of best research practices and leads to suspicion of the investigators refining their hypotheses after they had peeked at the data. See How researchers dupe the public with a sneaky practice called “outcome switching”

The authors had originally proposed a scoring consistent with a very large body of literature. Dropping the original scoring precludes any direct comparison with this body of research, including basic norms. They claim that they switched scoring because two key component scores were correlated in the opposite direction from what is reported in the larger literature. This is a troubling indication that something went badly wrong in the authors’ recruitment of their sample. It should not be swept under the rug.

The authors claim that they switched outcomes based only on an examination of baseline data from their study. However, one of the authors, Michael Sharpe, is also an author on the controversial PACE trial. A parallel switch was made to the scoring of the subjective self-reports in that trial. When the data were eventually re-analyzed using the original scoring, any positive findings for the trial were substantially reduced and arguably disappeared.

Even if the authors of the present RCT did not peek at their outcome data before deciding to switch the scoring of the primary outcome, they certainly had strong indications from other sources that the original scoring would produce weak or null findings. In 2009, one of the authors, Michael Sharpe, had access to the results of a relevant trial, the FINE trial, whose null findings affected decisions to switch outcomes in the PACE trial. Is it just a coincidence that the scoring of the outcomes was then switched for the present RCT?

Conclusion: The outcome switching in the present trial represents bad research practice. For the trial to have any credibility, the investigators should make their data publicly available so that they can be independently re-analyzed with the original scoring of the primary outcomes.

The senior author’s clinic

I invite readers to take a virtual tour of the website for the senior author’s clinical services. Much of it is available in English. Recently, I blogged about dubious claims of a health care system in Detroit achieving a goal of “zero suicide.” I suggested that the evidence for this claim was quite dubious, but that the claim was a powerful advertisement for the health care system. I think the present report of an RCT can similarly be seen as an infomercial for training and clinical services available in Denmark.

Conflict of interest

 No conflict of interest is declared for this RCT. Under somewhat similar circumstances, I formally complained about undeclared conflicts of interest in a series of papers published in PLOS One. A correction has been announced, but not yet posted.

Aside from the senior author’s need to declare a conflict of interest, the same can be said for one of the authors, Michael Sharpe.

Apart from his professional and reputational interest (his whole career has been built on making strong claims about such interventions), Sharpe works for insurance companies and publishes on the subject. He declared a conflict of interest for the PACE trial.

MS has done voluntary and paid consultancy work for government and for legal and insurance companies, and has received royalties from Oxford University Press.

Here’s Sharpe’s report written for the social benefits reinsurance company UnumProvident.

If the results of this trial are accepted at face value, they will lend credibility to claims that effective interventions are available to reduce social disability. It does not matter whether the intervention is actually effective. Rather, persons receiving social disability payments can be disqualified because they are not enrolled in such treatment.

Effects on the credibility of Cochrane collaboration reviews

The switched outcomes of the trial were entered into a Cochrane systematic review, to which primary care health professionals look for guidance in dealing with a complex clinical situation. The review gives no indication of the host of problems that I exposed here. Furthermore, I have glanced at some of the other trials included and I see similar difficulties.

I have been unable to convince Cochrane to clean up the conflicts of interest attached to switched outcomes being entered into reviews. Perhaps some of my readers will want to approach Cochrane to revisit this issue.

I think this post raises larger issues about whether Cochrane has any business conducting and disseminating reviews of such a bogus psychiatric diagnosis as medically unexplained symptoms. These reviews do patients no good, and may sidetrack them from getting the medical care they deserve. They do, however, serve special interests, including disability insurance companies.

Special thanks to John Peters and to Skeptical Cat for their assistance with my writing this blog. However, I have sole responsibility for any excesses or distortions.

 

13 thoughts on “Danish RCT of cognitive behavior therapy for whatever ails your physician about you”

  1. Thanks for another interesting blog.

    This blog reminded me of another Sharpe RCT (Guided self-help for functional (psychogenic) symptoms: a randomized controlled) that was discussed on a patient forum here (with lots of similar points being made about that RCT… are they failing to learn from their mistakes, or successfully learning how to get ‘successful’ results?): http://forums.phoenixrising.me/index.php?threads/sharpe-non-cfs-guided-self-help-for-functional-psychogenic-symptoms-a-randomized-controlled.22910/

    Joining Sharpe in the worrying Unum piece linked to above is Mansel Aylward, former CMO at the DWP and their observer on the PACE trial’s steering committee. His use of the biopsychosocial model to cut spending on disability has long been a matter of concern for disability activists, and has now started to attract the attention of academics too, eg: http://csp.sagepub.com/content/early/2016/05/25/0261018316649120.abstract


    1. There are possibly dozens of “CBT for physical symptoms” studies like this. They share similar methodological flaws and the authors all appear determined to ignore criticism and continue to hold themselves to a low standard while injustice is inflicted on patients.

      I hope that one day an insider feeling remorse will tell us what keeps this dysfunctional system going.


  2. Thank you for exposing yet another appalling example of non science. It shows all the same problems as have been extensively described with the PACE trial which is now thoroughly discredited.

    I’m a bit puzzled, though, by your statement that both groups’ health declined. The graphs seem to show a steady state or slight improvement for both groups on all six measures, apart from a very slight drop in the control group on a couple of measures. On the first 3 graphs a score increase means improvement, and on the last 3 graphs a decrease means improvement, as I understand it. Can you explain?

    I’d also like to understand better the negative correlation they found at the start that ‘explains’ (!) their outcome switching. What could it mean?


    1. Thanks for your feedback. My comment about deterioration referred to patients’ subjective reports, which were mentioned in a short passage. Sure, they are only subjective reports, but so is the trial’s primary outcome, even after the switch.

      The problem with the correlation between the two components of what had been their primary outcome is fascinating. It is contradicted by a large literature, but it can neither be readily explained nor dismissed in this particular study without further data being available. I strongly suspect that it has something to do with the unusual way in which the sample was recruited. I recall a potentially relevant example: the negative correlation between standardized test scores and grades among high school students accepted to a college. The negative correlation there reflects some students having been accepted exclusively on the basis of their test scores or their grades, but not both. I’ll ask around among my methodologist friends and get back to you.
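
      A quick simulated illustration of that kind of selection effect, with toy data rather than anything from this trial:

        # Toy simulation: selecting on either of two independent measures induces
        # a negative correlation within the selected sample (not trial data).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        test_scores = rng.normal(size=n)   # e.g. standardized test scores
        grades = rng.normal(size=n)        # e.g. high school grades, independent of scores

        # Full population: correlation is approximately zero.
        print(np.corrcoef(test_scores, grades)[0, 1])

        # "Admission" if either measure is high enough.
        admitted = (test_scores > 1.0) | (grades > 1.0)

        # Within the admitted group, a clear negative correlation appears.
        print(np.corrcoef(test_scores[admitted], grades[admitted])[0, 1])

      If referral into the trial worked anything like that (patients coming in on the basis of marked physical complaints or marked mental distress, but not necessarily both), a negative correlation between the SF-36 physical and mental components within the sample would not be surprising.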


  3. Thank you for clarifying my point about patient deterioration.

    Whilst analysis of the paper by Schröder et al is to be welcomed, there is an attribution oversight within your text that needs your attention.

    You have quoted three paragraphs taken from a commentary on Allen Frances’s blog at:

    https://www.psychologytoday.com/blog/dsm5-in-distress/201212/mislabeling-medical-illness-mental-disorder

    Mislabeling Medical Illness As Mental Disorder, December 8, 2012

    These extracts are presented as though they are attributed to Allen Frances.

    In fact (as is quite evident at the beginning of Frances’ blog), Prof Frances is himself quoting from correspondence with Suzy Chapman of Dx Revision Watch.


  4. I am surprised by this blog – its timing, its false claims, and its author’s attitude towards me and my work.

    In the scientific world I live in, scientific studies are discussed within the scientific community in the scientific literature, not on random blogs. Why is this critique not put forward as a letter to the Editor? And why does it come now, years after the publication of our trial?

    I was not contacted by James Coyne with his critique of our trial, or with questions regarding our clinical work. Most points of critique in his blog are already discussed in the limitations section of the trial report (British Journal of Psychiatry 2012; 200:499-507), so they are not new in any way. Others are relevant issues regarding psychotherapy trials in general, such as lack of blinding. I will not discuss these issues here.

    The trial report’s quality has been evaluated by independent researchers in two meta-analyses that used data from the trial (Cochrane Database of Systematic Reviews 2014, Issue 11. Art. No.: CD011142, and Clinical Psychology Review 2017, 51: 142–152). Obviously, these researchers do not share James Coyne’s dismay.

    Some of this blog’s claims are clearly wrong, either deliberately or due to superficial evaluation of my work. As they may discourage patients from seeking evidence-based treatment for their functional somatic syndromes, I will comment on these claims once. However, I will not participate in further discussions on this blog.

    1. James Coyne claims that we propose bodily distress syndrome as a psychiatric disorder.

    This claim is misleading – I have never stated that bodily distress syndrome is a psychiatric disorder. Years ago, I wrote a letter to the editor regarding the problems with psychiatric diagnoses in these patients (Journal of Psychosomatic Research 2010; 68: 95–96). In the classification paper cited in the blog (Journal of Psychosomatic Research 2010; 68: 415-26), we propose – based on empirical data – a simpler classification that may help unifying research efforts in the discipline of functional somatic syndromes across medical specialties. This paper is highly cited and has received considerable attention from other scientists.

    2. James Coyne is concerned about our switch of the primary outcome. The switch of the primary outcome measure was based on findings reported in the Journal of Clinical Epidemiology (Journal of Clinical Epidemiology 2012; 65: 30-41) and is fully reported in the methods section of the trial report. The need to change the primary outcome is furthermore discussed in the limitations section. Moreover, we report results for the original primary outcome, the SF-36 PCS (British Journal of Psychiatry 2012; 200, pages 503-504):

    “The outcome measures by the more widely used SF-36 PCS were similar; these are provided here for comparison and were not part of the primary analysis. The adjusted difference in mean change from baseline to 16 months on the SF-36 PCS was 6.2 points (95% CI 2.5–9.9, P= 0.001). Participants allocated to STreSS improved by 5.6 points (95% CI 2.5–8.7, P<0.001), whereas participants allocated to usual care remained substantially unchanged (-0.6 points, 95% CI -2.7 to 1.4; P = 0.54).”

    The analysis based on the original primary outcome (SF-36 PCS) hence leads to the same conclusion as the “switched” outcome used in the primary trial analysis (the SF-36 aggregate score of the scales physical functioning, bodily pain and vitality).

    Why should we have done all the work with the publication of outcome measurement problems and reporting a switch of the primary outcome, if this was a dishonest change of the primary outcome measure?

    3. James Coyle claims that 50 % of patients in the control group deteriorated. This is misleading. The distribution of change scores in the enhanced usual care group resembles a normal distribution around zero (Figure 4), which is expected, given the flat line of the mean score in Figure 3A.

    4. James Coyle states that we have not followed patients regarding emerging physical disease. This is wrong. We have done one of the few long-term follow up studies in the field (General Hospital Psychiatry 2014; 36: 38–45), which is cited both by an expert clinical review in the Annals of Internal Medicine (Annals of Internal Medicine 2014, 161: 579-586), and by a recent systematic review (Journal of Psychosomatic Research 2016; 88: 60-67).

    5. James Coyle accuses us for conflicts of interests regarding our clinic. The Danish Health system is tax financed, and neither the authors of the trial report nor our employees have any financial interests in this work. We try to help patients, based on the best evidence available. I do not know who James Coyle is trying to help with his blog.


    1. Dear Dr. Schröder:

      Thank you for taking time out of your busy schedule to comment on my blog post. I hope this is just the beginning of a fruitful dialogue.

      I’m a bit disappointed that you did not respond to the most central issue I raised about your trial design. Namely, you incorporated the same design features into your trial that were shown to be unable to distinguish between an evidence-based treatment of asthma with albuterol and treatment with sham acupuncture. The NEJM trial to which I refer in the blog showed that an objective measurement, in contrast, readily distinguished between albuterol and both the inhaler filled with an inert substance and the sham acupuncture. It’s unfortunate that there are no objective measurements in your study to make any such comparison. However, you did measure the number of visits to primary care and mental health services and found no differences between the patients assigned to your intensive active treatment and those assigned only to remain in routine care. This is troubling from a health services research point of view. Your treatment is quite expensive and yet led to no reduction in less intensive and less expensive services.

      The choice of patients who remained in routine care as a comparison/control group for the active treatment is a poor one. There is a substantial imbalance in terms of the frequency and intensity of contact (especially with the over 30 hours with a psychiatrist provided to the intervention group), peer support, and encouragement of positive expectations. None of these features were present in your control comparison. Such an imbalance disallows claiming that the presumed active ingredients of your intervention were the source of any differences in outcomes.

      I’m sure you have ready responses to these issues and I’ll certainly give you a forum for providing them. But it would seem this is an inadequate way to show your treatment works, as you claim.

      I don’t know if I can satisfy the objections you raised to the blog, but I will attempt to do so.

      First, my carefully documented, long-read post was certainly not random. It had its origins in a recent dialogue with a Danish journalist who alerted me to your study. I don’t know why he contacted me. When I examined your study, I recognized a number of issues relevant to ongoing discussions of flawed research at this heavily visited blog site. This study is quite an interesting example from which readers can learn a lot.

      You ask why I did not submit my critique as a letter to the editor. Welcome to the era of post-publication peer review. Its rapid development was prompted by the recognized limitations of existing journal-controlled peer review, including letters to the editor. It’s not clear that a letter to the editor would be accepted. It’s not clear that you wouldn’t have veto power over whether it got published. You would have the final word and I would have no opportunity to reply to your response. In contrast, we have the opportunity for dialogue here. I hope we can take advantage of it.

      But I have specific additional reservations about attempting to engage you and your co-authors in any letters to the editor. One of your co-authors is an investigator in a highly similar study, which also switched outcomes after data were available. After a drawn-out legal battle in which your co-author’s group spent nearly 250,000 British pounds, independent investigators were able to re-analyze the data with the original scoring of the recovery criteria (based on the SF-36, which you also altered). There was a substantial drop in group differences; arguably the claim of effectiveness for the treatment was eliminated. However, the journal refused to consider any letter to the editor pointing out the discrepancy between the original and switched recovery criteria. The editor’s response to a submission:

      “Obviously the best way of addressing the truth or otherwise of the findings is to attempt to replicate them. I would therefore like to encourage you to initiate an attempted replication of the study. This would be the best way for you to contribute to the debate…Should you do this, then Psychological Medicine will be most interested in the findings either positive or negative.”

      I think you would agree that the existing system of editor-controlled review is not working in this instance. You can read an extended account of this particular situation with your co-author here.

      Note that your list of authors includes one who is among the authors of another group that has attempted to block criticism of their study, even resorting to lodging complaints with the universities of critics. You can see my documentation of their vilification of one academic here, and there is more. Personally, I unsuccessfully attempted to engage that author group in a live public debate. I then requested data for reanalysis that they had promised would be available as a condition of publishing in a particular journal. Not only did they refuse, but they denounced me as vexatious for making the request. Over a year later, I am still waiting for the data. Something is rotten, not only in the State of Denmark, but elsewhere.

      So I hope you understand my taking to the blog.

      Although I have read your article a number of times, I fail to see where you have responded to my criticisms. I would welcome your pointing out exactly where these criticisms have already been addressed.

      In your reply, you refer readers to an evaluation “by independent researchers” in a Cochrane systematic review. However, the review is hardly a glowing endorsement. It finds low to moderate study quality and modest effects of interventions. Almost all comparisons involved CBT against a poorly matched comparison/control group like yours, which stacked the deck. In the one exception that allowed comparison to an active treatment, there were no advantages for CBT.
      Furthermore, there are serious problems with the independence of Cochrane from one of your co-authors. See if you agree with my analysis in “Probing an untrustworthy Cochrane review of exercise for ‘chronic fatigue syndrome’”.
      I am also engaged in an ongoing debate with David Tovey, Cochrane Editor in Chief and Deputy Chief Director, concerning Cochrane’s acceptance of switched outcomes, such as occurred with your trial, without consideration of the risk of bias. You can find his blog post here and my response here. Incidentally, he seems to have embraced blog posts as an appropriate way of discussing these issues. I think the same issue of switched outcomes as a risk of bias applies to any meta-analysis involving your study. I think there are also issues about the idiosyncratic nature of your diagnostic criteria that would likely not be mentioned in a meta-analysis.

      You complain about me: “We try to help patients, based on the best evidence available. I do not know who James Coyle [sic] is trying to help with his blog.”

      If you review my other blog posts about many other topics, I believe I have a consistent record of demanding high quality evidence be used to evaluate what is best for patients. For the many reasons that I noted in the blog, I believe that your trial falls short.

      I agree that the issues raised in my blog post might serve to encourage patients to get second opinions before accepting treatment from your clinic, if not decline referral altogether. Declining a referral would definitely be the advice I would give to a family member, friend or colleague.

      I’m not convinced that your multimodal intervention has much similarity to CBT as it is practiced in the United States and elsewhere. I also strongly object to the strong initial indoctrination of patients with an outdated psychosomatic model. The only pathways you highlighted were causal ones from the mind to the rest of the body. I find your cartoon representations of this model simplistic and lacking evidence. They seem to display a condescending attitude towards patients and the legitimacy of their suffering and complaints.

      You criticize me for claiming that you are proposing bodily distress syndrome as a psychiatric disorder. How else could your diagnosis be interpreted? You recommend psychiatric treatment more intensive than is typically provided for major depression or generalized anxiety disorder in clinical trials. You propose that the treatment be what you consider “cognitive behavior therapy.”

      You suggest that I should be unconcerned with your outcome switching. I believe I am on firm ground in suggesting that outcome switching is a bad research practice, associated with claims for both pharmacological and psychotherapeutic interventions that proved false or exaggerated on re-analysis of the original outcomes or in independent efforts at replication.

      You provide no acknowledgment of these issues, nor any refutation of these general concerns about switched primary outcomes.

      You accuse me of making the misleading claim that 50% of the patients in the control group deteriorated. Compare what I said with what you said on page 505, namely: “Over half (56%) of patients in the usual care group reported their physical health to be worse than before randomisation, which was the case for only a quarter (25%) of the STreSS group.”

      I find this statement quite troubling and hardly an endorsement for the effectiveness of your approach.

      You are correct that I missed your follow-up study. For that I apologize. But now that I have had a chance to examine it, I do not find it very reassuring. These patients were all treated at your clinic, and all had letters sent to their primary care physicians indicating your diagnosis of them with what you call “bodily distress syndrome.” Neither established physical nor psychiatric disorder had been an initial exclusion criterion. From an international point of view, it is curious that a documented physical condition is not an exclusion for your diagnosis of a functional somatic symptom disorder. You show that it is a mixed group of patients, but most continue to be physically ill. The failure of the patients to receive new diagnoses may simply be due to your letters to their primary care physicians discouraging appropriate attention to their physical health concerns. You state: “Since we did not have access to primary care records, only diagnosis made in secondary and tertiary care were obtained.” That is a truly unfortunate limitation, but secondary to this being a follow-up on diagnoses made by your own group. If your diagnostic criteria are going to get any traction in the international scientific and medical community, you need to obtain blinded independent diagnoses in a sample that has not been contaminated by your feedback to primary care physicians. Ideally, this new sample would not involve primary care physicians or interviewers who have been tipped off about your hypotheses about bodily distress syndrome.

      Without such independently obtained validation, I think you will have a hard time convincing others of the usefulness of applying an additional diagnosis of a functional somatic symptom disorder to a heterogeneous population with multiple confirmed physical illnesses and mental disorders.

      You complain about my bringing up the issue of conflict of interest. First, you have a strong investment in a diagnosis and a treatment that have not yet obtained much international acceptance. Second, the credibility of your maintaining such a specialist clinic depends on your ability to attract referrals and to have evidence that what you provide is effective. This trial is your evidence.

      But there is a third issue: one of your authors acknowledged extensive conflicts of interest in a publication that was basically concurrent with your article. Why was that conflict of interest not disclosed in the article with you? Your co-author also has a substantial stake in consulting with insurance companies seeking to reduce disability payments by disqualifying patients on the basis of their not being in treatment. Your trial, although flawed, has the appearance of producing such evidence. Finally, in refusing to share data, your co-author’s investigative group has consistently argued that transparency would risk reputational damage. Although I reject this as a consideration, their argument seems to acknowledge that investigators have a lot at stake in evaluating treatments that they have personally developed.

      I look forward to your response to this blog post. Although I live in Philadelphia, I often visit Europe. Actually, I was in Copenhagen for a week working on a meta-analysis with some epidemiologists just before I was contacted by the journalist. I could arrange a side trip to Copenhagen on my next visit to Europe. I invite you to engage me in a discussion in a public forum. We could take a poll of the audience before and after the debate, English style. I previously extended such an offer to one of your co-authors, and Simon Wessely assured me it would be accepted. Unfortunately, the debate never happened.

      However, I think we have an opportunity to clarify each of our misunderstandings, and to inform and entertain a professional or lay audience.


    2. > I have never stated that bodily distress syndrome is a psychiatric disorder.

      That might be true, but it is obvious that you and your colleagues believe it to be a psychiatric disorder. Insulting the intelligence of readers (some of them patients with what you would view as functional disorder) will not earn you any respect.


  5. I would like to comment on just one of the points raised by Dr. Schröder above, because it has more general relevance to the understanding of medically unexplained symptoms and syndromes.

    Dr. Schröder says he has “never stated that bodily distress syndrome is a psychiatric disorder”. In a previous paper, he proposes “a simpler classification that may help unifying research efforts in the discipline of functional somatic syndromes across medical specialties”.

    However, as James Coyne notes, it is very clear to any reader that the authors believe these conditions are perpetuated by psychological factors – so much so that CBT is viewed as not just a helpful adjunct, but as a possible cure.

    Schröder and his colleagues are by no means the only ones guilty of these sorts of evasive manoeuvres. They are characteristic of the field of “medically unexplained syndromes” and other similar constructs. Sometimes researchers use biopsychosocial language to obscure the psychological nature of their underlying model – for example, they may claim that the illness under study is the result of “a complex interplay between physiological and psychological factors”, just as is the case for all medical conditions. These kinds of platitudes are too vague to be of any use, and although they appear at first glance to be “truisms”, in actual fact their veracity is not well established (recent studies of CBT interventions for terminal cancer patients suggest that changing the patient’s thoughts and feelings is unlikely to influence cancer progression).

    A related problem concerns the terminology used in this field. Much recent work avoids terms that imply psychological causation in favour of more non-specific terms (e.g. medically unexplained symptoms, bodily distress syndrome). However, one would be incorrect in thinking that this indicates researchers have moved away from largely psychological explanatory theories. In virtually every such study, researchers continue to assume that the primary perpetuating factors in these conditions are psychological. This assumption is rarely defended let alone challenged. Such empty terminology allows the researchers to sidestep any criticism of their underlying psychological model. But there is an even more worrying aspect: it absolves the researchers from any need to demonstrate actual psychopathology in these patients (since, in most patients, there is none whatsoever!). Such terms can therefore be used to widen the scope of these diagnoses, relegating even greater numbers of severely unwell patients to psychological interventions instead of actual medical treatment.


  6. Guidelines on how to manage patients with what are assumed to be psychosomatic symptoms stress that the patient must not be told that their symptoms are viewed as psychosomatic. Patients are being refused the right to informed consent.

    I think we all know why this is being done: most patients would disagree with this diagnosis, and clinics selling “treatment” for “psychosomatic symptoms” would lose income.


  7. Thank you so much for your blog ‘Danish RCT of cognitive behavior therapy for whatever ails your physician about you’. I am not able to express my gratitude in English, but it is fantastic that you took the time to look at the study.
    I myself am a Danish M.Sc. in Biochemistry and I work in a private Danish company, so I am not employed in the medical world. I am very interested in ME/CFS because I am a ‘recovered’ patient – antibiotics did the trick for me. I am fighting very hard to get the Danish doctors and authorities to understand that ME/CFS is a very serious physical illness, and patients need symptom treatment.
    Unfortunately the senior author and the clinic you are referring to are in charge in Denmark, so everybody believes in ‘bodily distress syndrome’ and ‘funktionelle lidelser’. The only treatments Danish ME/CFS patients are offered are GET and CBT.
    In March 2014 there was a public hearing about the term ‘funktionelle lidelser’ (functional disorder), and M.Sc. H. Nielsen criticized the study exactly as you do; she also criticized the follow-up study. Nobody reacted.
    Now I have sent your critique to the Danish health board and to a working group on ‘funktionelle lidelser’. It is very important for me to have all this international scientific evidence to support me in the ‘fight’. Your blog is really needed, because there are no Danish doctors or scientists who stand up for the patients in public. I have also contacted Danish journalists, but they want me to have Danish doctors verify what I am saying, and nobody will do that publicly. Hopefully the journalists can use your blog.

