Talking back to “Talking Therapy Can Literally Rewire the Brain”

This edition of Mind the Brain was prompted by an article in Huffington Post, Talking Therapy Can Literally Rewire the Brain.

The title is lame on two counts: “literally” and any suggestion that psychotherapy does something distinctive to the brain, much less “rewiring” it.

I gave the journalist the benefit of the doubt and assumed that the editor applied the title to the article without the journalist’s permission. I know from talking to journalists that this is a source of enormous frustration when it happens. But in this instance, the odd title came directly from a press release from King’s College London (Study reveals for first time that talking therapy changes the brain’s wiring), which concerned an article published in the Nature Publishing Group journal Translational Psychiatry.

Hmm, authors from King’s College publishing in a Nature journal suggest this might be a serious piece of science worth a closer look. In the end, I was reminded not to make too much of authors’ affiliations and where they publish.

I poked fun on Twitter at the title of the Huffington Post article.

The retweets and likes drifted into a discussion in which neuroscientists said they really didn’t know much about the brain. Somebody threw in a link to an excellent short YouTube video by NeuroSkeptic on that topic that I highly recommend.

Anyway, I found serious problems with the Huffington Post article that should have been sufficient to stop there. Nonetheless, I proceeded, and the problems were compounded when I turned to the press release with its direct quotes from the author. I wasn’t long into the Translational Psychiatry article before I appreciated that its abstract was misleading in claiming that there were 22 patients in the study. That is a small number, but if the abstract had stated the actual number, which was 15 patients, readers would have been warned not to take too seriously the complicated multivariate statistics that were coming.

How did a prestigious journal like Translational Psychiatry allow authors to misrepresent their sample size? I would shortly be even more puzzled about why the article was even published in Translational Psychiatry, although I formed some unflattering hypotheses about that journal. I’ll end with those hypotheses.

Talking To A Therapist Can Literally Rewire Your Brain (Huffington Post)

The opening sentence would raise the skepticism of an informed reader:

If you can change the way you think, you can change your brain.

If I accept that statement, it’s going to be with a broad stretching of it to meaninglessness. “If you can change the way you think…” covers lots of territory. If the statement is going to remain correct, then the phrase “change your brain” is going to have to be similarly broad. If the journalist wants to make a big deal of this claim, she would have to concede that reading my blog changes her brain.

That’s the conclusion of a new study, which finds that challenging unhealthy thought patterns with the help of a therapist can lead to measurable changes in brain activity.

Okay, we now know that at least a specific study with brain measurements is being discussed.

But then

In the study, psychologists at King’s College London show that Cognitive Behavioral Therapy strengthens certain healthy brain connections in patients with psychosis. This heightened connectivity was associated with long-term reductions in psychotic symptoms and recovery eight years later, according to the findings, which were published online Tuesday in the journal Translational Psychiatry.

“Over six months of therapy, we found that connections between certain key brain regions became stronger,” Dr. Liam Mason, a clinical psychologist at King’s College and the study’s lead author, told The Huffington Post in an email. “What we are really excited about here is that these stronger connections lead to long-term improvements in people’s symptoms and overall recovery across the eight years that we followed them up.”

A lot of skepticism is being raised. The article seems to be claiming that changes in brain function observed in the short term with cognitive behavior therapy for psychosis [CBTp] were associated with long-term changes over an extraordinary eight years.

The problems with this? First, CBTp is not known to be particularly effective, even in the short term. Second, there is a lot of heterogeneity under the umbrella of “psychosis,” and over eight years, a person who has had that label appropriately applied will have a lot of experiences: recovery and relapse, and certainly other mental health treatments. In all that noise and confusion, how can a signal be detected showing that a psychotherapy that isn’t particularly effective explains any long-term improvement?

[Skeptical about my claim that CBTp is ineffective? See Effect of a missing clinical trial on what we think about cognitive behavior therapy and the slides about Cochrane reviews from a longer PowerPoint presentation.]

Any discussion of how CBT works and what long-term improvements it predicts has to get past considerable evidence that CBT doesn’t work any better than nonspecific supportive treatments. Without short-term effects, how can there be long-term effects?

[Slide: Cochrane review findings on CBT]

There is no acknowledgment in the Huffington Post article of the lack of efficacy of CBTp. Instead, we have a strong assumption that CBTp works and that the scientific paper under discussion is important because it shows CBTp working so strongly that it has observable long-term effects.

The journalist claims that the present scientific paper builds on an earlier one:

In the original study, patients with psychosis underwent brain imaging both before and after three months of CBT. The patients’ brains were scanned while they looked at images of faces expressing different emotions. After undergoing CBT, the patients showed marked increases in brain activity. Specifically, the brain scans showed heightened connections between the amygdala, the brain region involved in fear and threat processing, and the prefrontal cortex, which is responsible for reasoning and thinking rationally ― suggesting that the patients had an improved ability to accurately perceive social threats.

“We think that this change may be important in allowing people to consciously re-think immediate emotional reactions,” Mason said.

Readers can click back to my earlier blog post, Sex and the single amygdala: A tale almost saved by a peek at the data. The same experimental paradigm was used to study whether amygdala activity predicted changes in the number of sexual partners over time. In that particular study, p-hacking, significance chasing, and selective reporting were used by the authors to create the illusion of important findings. If you visit my blog post, check out the comments that ridiculed the study, including some from two very bright undergraduates.

We don’t need to detour into a technical discussion of functional magnetic resonance imaging (fMRI) data to make a couple of points. The authors of the present study used a rather standard experimental paradigm, and the focus on the amygdala concerned some quite nonspecific psychological processes.

The authors of the present study soon concede this:

There’s a good chance that similar brain changes also occur in CBT patients being treated for anxiety and depression, Mason said.

“There is research showing that some of the same connections may also be strengthened by CBT for anxiety disorders,” he explained.

But wait: isn’t the lead author also saying in the Huffington Post article, and in the title of the press release as well, that this is a first-ever study?

For the present purposes, we need only dispense with any notion that we’re talking about a rewiring of the brain known to be specifically associated with psychosis, or even that there is reason to expect that such “rewiring” could predict the long-term outcome of psychosis.

Reading further, we find that the study only involved following 15 patients from a larger study, unlike the misleading abstract, which claims 22.

Seriously, are we being asked to get worked up about an fMRI study with only 15 patients? Yup.

The researchers found that heightened connectivity between the amygdala and prefrontal cortex was associated with long-term recovery from psychosis. The exciting finding marks the first time scientists have been able to demonstrate that brain changes resulting from psychotherapy may be responsible for long-term recovery from mental illness.

What is going on here? The journalist next gives free rein to the lead author to climb up on a soapbox and proclaim his agenda behind all of these claims:

The findings challenge the “brain bias” in psychiatry, an institutional focus on physical brain differences over psychological factors in mental illness. Thanks to this common bias, many psychiatrists are prone to recommending medication to their clients rather than psychological treatments such as CBT.

But medication has been proven effective for psychosis; CBTp has not.

“Psychological therapy can lead to changes in the mechanics of the brain,” Mason said. “This is especially important for conditions like psychosis which have traditionally been viewed as ‘brain diseases’ that require medication or even surgery.”

“Mechanics of the brain”? Now we have escalated from ‘literally rewiring’ to “changes in the mechanics.” Dude, we are talking about an fMRI study. Do you think we have been transported to an auto repair shop?

“This research challenges the notion that the existence of physical brain differences in mental health disorders somehow makes psychological factors or treatments less important,” Mason added in a statement.

Clicking on the link takes one to a Science Daily article, which churnals (plagiarizes) a press release from King’s College London.

The Press Release: Study reveals for first time that talking therapy changes the brain’s wiring

There is not much in this press release that has not been regurgitated in the Huffington Post article, except for some more soapbox preaching:

“Unfortunately, previous research has shown that this ‘brain bias’ can make clinicians more likely to recommend medication but not psychological therapies. This is especially important in psychosis, where only one in ten people who could benefit from psychological therapies are offered them.”

But CBT, the most evaluated psychotherapy for psychosis, has not been shown to be effective by itself. Sure, patients suffering from psychosis need a lot of support, efforts to maintain positive expectations, and opportunities to talk about their experience. But in direct comparisons with such support provided by professionals or by peers, CBT has not been shown to be more effective.

The researchers now hope to confirm the results in a larger sample, and to identify the changes in the brain that differentiate people who experience improvements with CBT from those who do not. Ultimately, the results could lead to better, and more tailored, treatments for psychosis, by allowing researchers to understand what determines whether psychological therapies are effective.

Sure, we are to give a high priority to examining the mechanism by which CBT, which has not been proven effective, works its magic.

Translational Psychiatry: Brain connectivity changes occurring following cognitive behavioural therapy for psychosis predict long-term recovery

[This will be a quick tour, only highlighting some of the many problems that I found. I welcome readers probing the open access article and posting what they find.]

The Abstract misrepresents the study as having 22 patients, when it actually only had data from 15.

The Introduction largely focuses on previous work by the author group. If you bother to check, none of it involves randomized trials, despite claims of efficacy for CBTp. No reference is made to the large body of literature finding a lack of effectiveness for CBTp. In particular, there is no mention of the Cochrane reviews.

A close reading of the Methods indicates that what are claimed to be “objective clinical outcomes” are actually unblinded, retrospective ratings of case notes by two raters, including the first author. Unblinded ratings, particularly by an investigator, are an important source of bias in studies of CBTp and lead to exaggerated estimates of outcome.

An additional measure with inadequate validation was obtained at the 7- to 8-year follow-up:

Questionnaire about the Process of Recovery (QPR,31), a service-user led instrument that follows theoretical models of recovery and provides a measure of constructs such as hope, empowerment, confidence, connectedness to others.

All patients came from clinical studies conducted by the author group that did not involve randomization. Rather, assignment to CBTp was based on providers identifying patients “deemed as suitable for CBTp.” There is considerable risk of bias if such patient data are treated as if they arose in a randomized trial. I previously raised issues about the inadequacy of the routine care provided to psychotic patients, both in terms of its clinical adequacy and its meaningfulness as a control/comparison group, given its lack of nonspecific factors.

All patients assigned to CBTp were receiving medication and other services. A table revealed that receipt of other services was strongly correlated with recovery status. Yet the authors are attempting to attribute any recovery across the eight years to the brief course of CBTp at the beginning. Obviously, the study is hopelessly confounded and no valid inferences are possible. This alone should have gotten the study rejected.

There were data available from control subjects at follow-up, including fMRI data, but they were excluded from the present report. That is unfortunate, because these data would allow at least minimal evaluation of whether CBTp versus remaining in routine care had any difference in outcomes and – importantly – if the fMRI data similarly predicted the outcomes of patients not receiving CBTp.

The Data Analysis section indicates one-tailed, multivariate statistical tests that are quite inappropriate and essentially meaningless with such a small data set. Bonferroni corrections, which were inconsistently applied, offer no salvation.

With such small samples and multivariate statistics, a focus on p-values is inappropriate, but the authors do just that, reporting p < .04 and p < .06, the latter being treated as significant. The hypothesis that this might represent significance chasing is supported when the supplementary data tables are examined. When I showed them to a neuroscientist, his first response was that they were painful to look at.
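
To see why one-tailed tests across multiple measures in a sample of 15 are so forgiving of noise, consider a minimal simulation. This is an illustration of the general problem, not a reproduction of the authors’ analysis; the number of connectivity measures and the “recovery” outcome are hypothetical stand-ins:

```python
# Illustrative simulation: with n = 15 and a handful of connectivity
# measures each tested one-tailed, "significant" predictors of outcome
# emerge from pure noise in a large share of studies.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients, n_measures, n_sims = 15, 12, 5000

runs_with_hit = 0
for _ in range(n_sims):
    outcome = rng.standard_normal(n_patients)                 # null "recovery" score
    measures = rng.standard_normal((n_measures, n_patients))  # null connectivity measures
    for m in measures:
        r, p_two = pearsonr(m, outcome)
        p_one = p_two / 2 if r > 0 else 1 - p_two / 2         # one-tailed (positive) test
        if p_one < 0.05:
            runs_with_hit += 1
            break

print(f"Runs with at least one 'significant' measure: {runs_with_hit / n_sims:.0%}")
# Expect roughly 1 - 0.95**12, i.e., close to half of all runs, from noise alone.
```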

I could go on but…

Why did the authors bother with this study? Why did King’s College London publicize the study with a press release? Why was it published in Nature’s Translational Psychiatry without the editor or the reviewers catching obvious flaws?

The authors had some data lying around, selected out post hoc a subset of patients, and applied retrospective ratings and inappropriate statistics. There is no evidence of a protocol or of an a priori hypothesis being pursued, but there is strong circumstantial evidence of p-hacking, significance chasing, and selective reporting. This is not a valid study, not even an experimercial; it is a political, public relations effort.

Statements in the King’s College press release, echoed in the Huffington Post, indicate a clear ideological agenda. Anyone who knows anything about psychiatry, neuroscience, or cognitive behavior therapy for psychosis is unlikely to be persuaded. Anyone who examines the supplementary statistical tables armed with minimal statistical sophistication will be unimpressed, if not shocked. We can assume that as a group, these people would quickly leave the conversation about cognitive behavior therapy for psychosis literally rewiring the brain, if they ever got engaged.

The authors were not engaging relevant audiences in intelligent conversation. I can only presume that they were targeting naive, vulnerable patients and their families having to make difficult decisions about treatment for psychosis. And the authors were preaching to the anti-psychiatry crowd. One of the authors also appears as an author of Understanding Psychosis, a strongly non-evidence-based advocacy of cognitive behavior therapy for psychosis, delivered with hostility towards medication and psychiatrists (see my critique). I did not know that about this author until I read the materials I’ve been reviewing. It is an important bit of information and speaks to the author’s objectivity and credibility.

Obviously, the press office of King’s College London depends a lot, maybe almost entirely, on the credibility of authors associated with that institution. Maybe next time they should seek an independent evaluation. Or maybe they are just interested in publicity about research of any kind.

But why was this article published in the seemingly prestigious Nature journal Translational Psychiatry? It should be noted that this journal is open access, but with exceptionally pricey Article Processing Charges (APCs) of £2,400/$3,900/€2,800. Apparently adequate screening and appropriate peer review are not included in these costs. These authors have purchased a lot of prestige. Moreover, if you want to complain about their work in a letter to the editor, you have to pay $900. So the authors have effectively insulated themselves from critics. Of course, there is always blogging, PubMed Commons, and PubPeer for post-publication peer review.

I previously blogged about another underpowered, misreported study claiming to have identified a biomarker blood test for depression. The authors were explicitly advertising that they were seeking commercial backers for their blood test. They published in Translational Psychiatry. Maybe that’s the place to go for placing outlandish claims into open access – where anybody can be reached – with a false assurance of prestige protected by rigorous peer review.

 

Pay $1000 to criticize a bad ‘blood test for depression’ article?

No way, call for retraction.

Would you pay $1,000 for the right to criticize bad science in the journal in which it originally appeared? That is what it costs to participate in post-publication peer review at the online Nature Publishing Group (NPG) journal, Translational Psychiatry.

Damn, NPG is a high-fashion brand, but peer review is quite fallible, even at an NPG journal. Should we have to pay to point out the flawed science that even NPG inevitably delivers? You’d think we were doing them a favor in terms of quality control.

Put differently, should the self-correction on which scientific progress so thoroughly depends require that critics be willing to pay, presumably out of their own personal funds? Sure, granting agencies now reimburse publication costs for the research they fund, but a critique is unlikely to qualify.

Take another perspective: Suppose you have a small data set of patients for whom you have blood samples. The limited value of the data set was further compromised by substantial, nonrandom loss to follow-up. But you nonetheless want to use it to solicit industry funding for a “blood test for depression.” Would you be willing to pay a premium of $3,600-$3,900 to publish your results in a prestigious NPG journal, with the added knowledge that they would be insulated from critics?

I was curious just who would get so worked up about an article that they would pay $1,000 to complain.

So, I put Translational Psychiatry in PUBLICATION NAME at Web of Science. It yielded 379 entries. I then applied the restriction CORRESPONDENCE and that left only two entries.

Both presented original data and did not even cite another article in Translational Psychiatry. Maybe the authors were trying to get a publication into an NPG journal on the cheap, at a discount of $2,600.

It appears that nobody has ever published a letter to the editor in Translational Psychiatry. Does that mean that there has never ever been anything about which to complain? Is everything we find in Translational Psychiatry perfectly trustworthy?

I recently posted at Mind the Brain and elsewhere about a carefully-orchestrated media campaign promoting some bad science published in Translational Psychiatry. An extraordinary publicity effort disseminated a Northwestern University press release and video to numerous media outlets. There was an explicit appeal for industry funding for the development of what was supposedly a nearly clinic-ready inexpensive blood test for depression.

The Translational Psychiatry website where I learned of these publication costs displays the standard NPG message, which is made a mockery of by a paywall that effectively blocks critics:

“A key strength of NPG is its close relationship with the scientific community. Working closely with scientists, listening to what they say, and always placing emphasis on quality rather than quantity, has made NPG the leading scientific publisher at finding innovative solutions to scientists’ information needs.”

The website also contains the standard NPG assurances about authors’ disclosures of conflicts of interest:

“The statement must contain an explicit and unambiguous statement describing any potential conflict of interest, or lack thereof, for any of the authors as it relates to the subject of the report”

The authors of this particular paper declared:

“EER is named as an inventor on two pending patent applications, filed and owned by Northwestern University. The remaining authors declare no conflict of interest.”

Does this disclosure give readers much clarity concerning the authors’ potential financial conflict of interest? Check out this marketing effort exploiting the Translational Psychiatry article.

Northwestern Researchers Develop RT-qPCR Assay for Depression Biomarkers, Seek Industry Partners

I have also raised questions about a lack of disclosure of conflicts of interest from promoters of Triple P Parenting. The developers claimed earlier that their program was owned by the University of Queensland, so there was no conflict of interest to declare. Further investigation of the university website revealed that the promoters got a lucrative third of the proceeds. Once that was revealed, a flood of erratum notices disclosing the financial conflicts of interest of Triple P promoters followed – at least 10 so far. For instance:

[Image: Triple P erratum notice disclosing financial conflicts of interest]

How bad is the bad science?

You can find the full Translational Psychiatry article here. The abstract provides a technical but misleading summary of results:

“Abundance of the DGKA, KIAA1539 and RAPH1 transcripts remained significantly different between subjects with MDD and ND controls even after post-CBT remission (defined as PHQ-9 <5). The ROC area under the curve for these transcripts demonstrated high discriminative ability between MDD and ND participants, regardless of their current clinical status. Before CBT, significant co-expression network of specific transcripts existed in MDD subjects who subsequently remitted in response to CBT, but not in those who remained depressed. Thus, blood levels of different transcript panels may identify the depressed from the nondepressed among primary care patients, during a depressive episode or in remission, or follow and predict response to CBT in depressed individuals.”

This was simplified in a press release that echoed in shamelessly churnalized media coverage. For instance:

“If the levels of five specific RNA markers line up together, that suggests that the patient will probably respond well to cognitive behavioral therapy,” Redei said. “This is the first time that we can predict a response to psychotherapy,” she added.

The unacknowledged problems of the article began with the authors having only 32 depressed primary-care patients at baseline, whose diagnostic status had not been confirmed by gold-standard semi-structured interviews by professionals.

But the problems get worse. The critical comparison of patients who recovered with cognitive behavioral therapy versus those who did not occurred in the subsample of nine recovered versus 13 unrecovered patients remaining after a loss to follow-up of 10 patients. Baseline results for the 9 + 13 = 22 patients in the follow-up sample did not even generalize back to the original full sample. How, then, could the authors argue that the results apply to the 23 million or so depressed patients in the United States? Well, they apparently felt they could better generalize back to the original sample, if not the United States, by introducing an analysis of covariance that controlled for age, race, and sex. (For those of you who are tracking the more technical aspects of this discussion, contemplate the implications of controlling for three variables in a between-groups comparison of nine versus 13 patients. Apparently the authors believed that readers would accept the adjusted analyses in place of the unadjusted analyses, which had obvious problems of generalizability. The reviewers apparently accepted this.)
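
A small simulation makes the point concrete. This is a sketch of the analytic situation, not the authors’ code: the covariates are random stand-ins for age, race, and sex, and there is no true group effect by construction. What it shows is how much “flexibility” the choice between adjusted and unadjusted analyses buys in a 9-versus-13 comparison:

```python
# Illustrative sketch: with 9 vs 13 patients under a null group effect,
# being able to report whichever of the unadjusted or covariate-adjusted
# analysis looks better inflates the false-positive rate, and the two
# analyses frequently disagree.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_sims = 5000
group = np.array([1.0] * 9 + [0.0] * 13)    # 9 recovered vs 13 unrecovered
either_sig = disagree = 0

for _ in range(n_sims):
    covs = rng.standard_normal((22, 3))      # stand-ins for age, race, sex
    y = rng.standard_normal(22)              # null outcome: no true group effect
    p_unadj = sm.OLS(y, sm.add_constant(group)).fit().pvalues[1]
    X_adj = sm.add_constant(np.column_stack([group, covs]))
    p_adj = sm.OLS(y, X_adj).fit().pvalues[1]
    either_sig += (p_unadj < 0.05) or (p_adj < 0.05)
    disagree += (p_unadj < 0.05) != (p_adj < 0.05)

print(f"At least one analysis 'significant' under the null: {either_sig / n_sims:.1%}")
print(f"Adjusted and unadjusted analyses disagree: {disagree / n_sims:.1%}")
# Reporting whichever version looks better pushes the false-positive rate
# noticeably above the nominal 5% in a sample this small.
```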

Finally, treatment with cognitive behavior therapy was confounded with uncontrolled treatment with antidepressants.

I won’t discuss here the other problems of the study noted in my earlier blog posts. But I think you can see that these are analyses of a small data set truly unsuitable for publication in Translational Psychiatry and serving as a basis for seeking industry funding for a blood test for depression.

As I sometimes do, I tried to move from blog posts about what I considered problematic to a formal letter to the editor to which the authors would have an opportunity to reply. It was then that I discovered the publication costs.

So what are the alternatives to a letter to the editor?

Letters to the editor are a particularly weak form of post-publication peer review. There is little evidence that they serve as an effective self-correction mechanism for science. Letters to the editor seldom alter the patterns of citations of the articles about which they complain.

Even if I paid the $1,000 fee, I would only have been entitled to 700 words to make my case that this article is scientifically flawed and misleading. I’m not sure that a similar fee would be required from the authors to reply. Maybe responding to critics is part of the original package that they purchased from NPG. We cannot tell from what appears in the journal, because the necessity of responding to a critic has not yet arisen.

It is quite typical across journals, even those not charging for a discussion of published papers, to limit the exchanges to a single letter per correspondent and a single response from the authors. And the window for acceptance of letters is typically limited to a few weeks or months after an article has appeared. While letters to the editor are often peer-reviewed, replies from authors typically do not receive peer review.

A different outcome, maybe

I recently followed up my blogging about the serious flaws of a paper published in PNAS by Fredrickson and colleagues with a letter to the editor. They in turn responded. Compare the two letters and you will see why an uninformed reader might infer that the exchange generated only confusion. But stay tuned…

The two letters would have normally ended any exchange.

However, this time my co-authors and I thoroughly re-analyzed the Fredrickson et al. data, and PNAS allowed us to publish our results. This time, we did not mince words:

“Not only is Fredrickson et al.’s article conceptually deficient, but more crucially statistical analyses are fatally flawed, to the point that their claimed results are in fact essentially meaningless.”

In the supplementary materials, we provided in excruciating detail our analytic strategy and results. The authors’ response was again dismissive and confusing.

The authors next refused our offer of an adversarial collaboration, in which both parties would lay out responses to each other before a mediator in order to allow readers to reach some resolution. However, the strengths of our arguments and reanalysis – which included thousands of regression equations, some with randomly generated data – are such that others are now calling for a retraction of the original Fredrickson and Cole paper. If that occurs, it would be an extraordinarily rare event.
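
The logic of that “randomly generated data” strategy is easy to sketch. What follows is a hedged illustration, not the published reanalysis code, and the subject and gene counts are arbitrary stand-ins: rerun the same per-gene regressions with the psychological predictor replaced by random numbers, and see how well noise alone performs.

```python
# Illustrative sketch: if random predictors "work" about as often as the
# real one in the same regression setup, the claimed gene-expression
# pattern carries no evidential weight.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_subjects, n_genes, n_sims = 80, 50, 1000   # arbitrary illustrative sizes

expression = rng.standard_normal((n_subjects, n_genes))  # null gene expression

sig_counts = []
for _ in range(n_sims):
    fake_score = rng.standard_normal(n_subjects)  # random "well-being" predictor
    X = sm.add_constant(fake_score)
    hits = sum(
        sm.OLS(expression[:, g], X).fit().pvalues[1] < 0.05
        for g in range(n_genes)
    )
    sig_counts.append(hits)

sig_counts = np.array(sig_counts)
print(f"Mean 'significant' genes per random predictor: {sig_counts.mean():.1f}")
print(f"Share of random predictors yielding 5+ hits: {(sig_counts >= 5).mean():.0%}")
```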

The limits journals impose on post-publication commentary severely constrain the ability of science to self-correct.

The Reproducibility Project: Psychology is widely being hailed as a needed corrective for the crisis of credibility in science. But replications of studies such as this one, involving pre-post sampling of genomic expression from an intervention trial, are costly and unlikely to be undertaken. And why attempt a “replication” of findings that have no merit in the first place? After all, the authors’ results for baseline assessments did not replicate in the baseline results of the patients still available at follow-up. That suggests the findings are unstable, and that attempts at replication would be futile.

The PLOS journals have introduced the innovation of allowing comments to be placed directly on the journal article’s webpage, with their existence acknowledged on the article itself. Anyone can respond and participate in a post-publication peer review process that can go on for the life of interest in a particular article. The next stage in furthering post-publication peer review is for such comments to be indexed, citable, and counted in traditional metrics as well as altmetrics. This would recognize citizen scientists’ contributions to cleaning up what appears to be a high rate of false positives and outright nonsense in the current literature.

PubMed Commons offers the opportunity to post comments on any of the over 23 million entries in PubMed, expanding the PLOS initiative to all journals, even those of the Nature Publishing Group. Currently, the only restriction is that someone attempting to place a comment must have authored at least one of the 23,000,000+ entries in PubMed, even a letter to the editor. This represents progress.

But similar to the PLOS initiative, PubMed Commons will get more traction when it can provide conventional academic credit – countable citations – to contributors identifying and critiquing bad science. Currently, authors can get credit for putting bad science into the literature, but no one gets credit for helping get it recognized as such.

So, the authors of this particular article have made indefensibly bad claims about having made substantial progress toward developing an inexpensive blood test for depression. It’s not unreasonable to assume their motive is to cultivate financial support from industry for further development. What’s a critic to do?

In this case, the science is bad enough, and the damage to the public’s and professionals’ perception of the state of the science of a ‘blood test for depression’ sufficient, that a retraction is warranted. Stay tuned – unless Nature Publishing Group requires a $1,000 payment for investigating whether an article warrants retraction.

Postscript: As I was finishing this post, I discovered that the journals published by the Modern Language Society require payment of a $3,000 membership fee to publish a letter to the editor in one of their journals. I guess they need to keep the discussion within the club.

Views expressed in this blog post are entirely those of the author and not necessarily those of PLOS or its staff.

Special thanks to Skeptical Cat.