The PACE PLOS One data will not be released and the article won’t be retracted

PLOS One has bought into discredited arguments about patient consent forms not allowing sharing of anonymized data. PLOS One is no longer at the vanguard of open science through routine data sharing.


Two years have passed since I requested release of the PLOS One PACE data, eight months since the Expression of Concern was posted. What can we expect?


Solving the 9-dot problem involves paying attention and thinking outside the box.

If we spot some usually unrecognized connections, we can see that the PLOS One editors are biased toward the PACE investigators, favoring them over the other stakeholders in whether the data are released as promised.

Spoiler: The PLOS One Senior Editors completed the pre-specified process of deciding what to do about the data not being shared. They took no action. Months later, the Senior Editors reopened the process and invited one of PACE investigator Trudy Chalder’s outspoken co-authors to help them reconsider.

A lot of us weren’t cynical enough to notice.

International trends will continue toward making the uploading of data into publicly accessible repositories a requirement for publication. PLOS One has fallen behind by buying into discredited arguments about patient consent forms not allowing sharing of anonymized data.

PLOS One is no longer at the vanguard of open science through routine data sharing.

The expression of concern

Actual Expression of Concern on display on the PLOS One article.

The editors’ section of the Expression of Concern ends with:

In spite of requests to the authors and Queen Mary University of London, we have not yet received confirmation that an institutional process compatible with the existing PLOS data policy at the time has been developed or implemented for the independent evaluation of requests for data from this study. We conclude that the lack of resolution towards release of the dataset is not in line with the journal’s editorial policy and we are thus issuing this Expression of Concern to alert readers about the concerns raised about this article.

This is followed by the PACE investigators’ response:

Statement from the authors

We disagree with the Expression of Concern about our health economic paper that PLOS ONE has issued and do not accept that it is justified. We believe that data should be made available and have shared data from the PACE trial with other researchers previously, in line with our data sharing policy. This is consistent with the data sharing policies of Queen Mary University of London, and the Medical Research Council, which funded the trial. The policy allows for the sharing of data with other researchers, so long as safeguards are agreed regarding confidentiality of the data and consent as specified by the Research Ethics Committee (REC). We have also pointed out to PLOS ONE that our policy includes an independent appeal process, if a request is declined, so this policy is consistent with the journal’s policy when the paper was published.

During negotiations with the journal over these matters, we have sought further guidance from the PACE trial REC. They have advised that public release, even of anonymised data, is not appropriate. As a consequence, we are unable to publish the individual patient data requested by the journal. However, we have offered to provide key summarised data, sufficient to provide an independent re-analysis of our main findings, so long as it is consistent with the REC decision, on the PLOS ONE website. As such we are surprised by and question the decision by the journal to issue this Expression of Concern.

Check out my critique of their claim to have shared data from the PACE trial with other researchers:

Don’t bother to apply: PACE investigators issue guidance for researchers requesting access to data.

Conflict of interest: Nothing to declare?

 The PACE authors were thus given an extraordinary opportunity to undermine the editors’ Expression of Concern.

It is just as extraordinary that there is no disclosure of conflict of interest. After all, it is their paper that is receiving the Expression of Concern, because of their failure to provide data as promised.

In contrast, when the PLOS One editors placed a discreet Editors’ Note in the comment section of the article in 2015 about the data not being shared when requested, it carried a COI declaration:

Competing interests declared: PLOS ONE Staff

That COI aroused the curiosity of Retraction Watch, which asked PLOS One:

We weren’t sure what the last line was referring to, so contacted Executive Editor Veronique Kiermer. She told us that staff sometimes include their byline under “competing interests,” so the authorship is immediately clear to readers who may be scanning a series of comments.

Commentary from Retraction Watch

PLOS upgrades flag on controversial PACE chronic fatigue syndrome trial; authors “surprised”

Notable excerpts:

A spokesperson for PLOS told us this is the first time the journal has included a statement from the authors in an EOC:

This has been a complex case involving many stakeholders and we wanted to document the different aspects of the case in a fair manner.


We asked if the journal plans to retract the paper if the authors fail to provide what it’s asked for; the spokesperson explained:

At this time, PLOS stands by its Expression of Concern. For now, we have exhausted the options to make the data available in accordance with our policy at the time, but PLOS still seeks a positive outcome to this case for all parties. It is our intention to update this notice when a mechanism is established that allows concerns about the article’s analyses to be addressed while protecting patient privacy. PLOS has not given the authors a deadline.

Note: “PLOS has not given the authors a deadline.”

One of the readers who has requested the data is James Coyne, a psychologist at the University Medical Center, Groningen, who submitted his request 18 months ago (and wrote about it on the PLOS blog site). Although some of the data have been released (to one person under the Freedom of Information Act), it’s not nearly enough to conduct an analysis, Coyne told us:

This small data set does not allow recalculation of original primary outcomes but did allow recalculation of recovery data. Release of the PLOS data is crucial for a better understanding of what went on in that trial. That’s why the investigators are fighting so hard.

Eventually, Coyne began suggesting to PLOS that he would organize public protests and scientific meetings attended by journal representatives.

I think it is the most significant issue in psychotherapy today, in terms of data sharing. It’s a flagrant violation of international standards.

The Retraction Watch article cited a 2015 STAT article that was written by Retraction Watch co-founders Ivan Oransky and Adam Marcus. That article was sympathetic to my request:

If the information Coyne is seeking is harmful and distressing to the staff of the university — and that’s the university’s claim, not ours — that’s only because the information is in fact harmful and distressing. In other words, revealing that you have nothing to hide is much less embarrassing than revealing that you’re hiding something.

The STAT article also said:

To be clear, Coyne’s not asking for sex tapes or pictures of lab workers taking bong hits. He’s asking for raw data so that he can evaluate whether what a group of scientists reported in print is in fact what those data show. It’s called replication, and as Richard Smith, former editor of The BMJ (and a member of our board of directors), put it last week, the refusal goes “against basic scientific principles.” But, unfortunately, stubborn researchers and institutions have used legal roadblocks before to prevent scrutiny of science.

The PLOS One Editors’ blog post

The Expression of Concern was accompanied by a blog post from Iratxe Puebla, Managing Editor for PLOS ONE, and Joerg Heber, Editor-in-Chief, posted May 2, 2017:

Data sharing in clinical research: challenges and open opportunities

Since we feel we have exhausted the options to make the data available responsibly, and considering the questions that were raised about the validity of the article’s conclusions, we have decided to post an Expression of Concern [5] to alert readers that the data are not available in line with the journal’s editorial policy. It is our intention to update this notice when a mechanism is established that allows concerns about the article’s analyses to be addressed while protecting patient privacy.

This statement seems to suggest that the ball is in the PACE investigators’ court and that the PLOS One editors are prepared to wait. But reading the rest of the blog post, it becomes apparent that PLOS One is wavering on its data sharing policy.

Current challenges and opportunities ahead

During our follow up it became clear that there is little consensus of opinion on the sharing of this particular dataset. Experts from the Data Advisory Board whom we consulted expressed different views on the stringency of the journal reaction. Overall they agreed on the need to consider the risk to confidentiality of the trial participants and on the relevance of developing mechanisms for consideration of data requests by an independent body or committee. Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.

Fact checking the PLOS One Editors’ blog and a rebuttal

John Peter fact checked the PLOS One editors’ blog. It came up short on a number of points.

“Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.”

This line is misleading and reveals either ignorance or misunderstanding of the decision in Matthees.

The Information Tribunal (IT) is not a committee. It is part of the courts system of England and Wales.

…the IT’s decisions may be appealed to a higher court. As QMUL chose not to exercise this right but to opt instead to accept the decision, then clearly it considered there were no grounds for appeal. The decision stands in its entirety and applies without condition or caveat.


The court had two decisions to make:

First, could and should trial data be released and if so what test should apply to determine whether particular data should be made public? Second, when that test is applied to this particular set of data, do they meet that test?

The unanimous decision on the first question was very clear: there is no legal or ethical consideration which prevents release; release is permitted by the consent forms; there is a strong public interest in the release; making data available advances legitimate scientific debate; and the data should be released.

The test set by this unanimous decision was simple: whether data can be anonymized. Furthermore, again unanimously, the Tribunal stated that the test for anonymization is not absolute. It is whether the risk of identification is reasonably likely, not whether it is remote, and whether patients can be identified without prior knowledge, specialist knowledge or equipment, or resort to criminality.

It was on applying this test to the data requested, on whether they could be properly anonymized, that the IT reached a majority decision.

On the principles, on how these decisions should be made, on the test which should be applied and on the nature of that test, the court was unanimous.

It should also be noted that to share data which have not been anonymized would be in breach of the Data Protection Act. QMUL has shared these data with other researchers. QMUL should either report itself to the Information Commissioner’s Office or accept that the data can be anonymized. In which case, the unanimous decision of the IT is very clear: the data should be shared.

PLOS ONE should apply the IT decision and its own regulations and demand the data be shared or the paper retracted.

Data Advisory Board

The Editors’ blog referred to “Experts from the Data Advisory Board… express[ing] different views on the stringency of the journal reaction.”

That was a source of puzzlement for me. Established procedures make no provision for an advisory board as part of the process or any appeal.

A Google search clarified matters. I had been to this page a number of times before and did not remember seeing this statement. There is no date or other indication that it was added after the rest of the statement.

PLOS has formed an external board of advisors across many fields of research published in PLOS journals. This board will work with us to develop community standards for data sharing across various fields, provide input and advice on especially complex data-sharing situations submitted to the journals, define data-sharing compliance, and proactively work to refine our policy. If you have any questions or feedback, we welcome you to write to us at

The availability of data for reanalysis and independent probing has lots of stakeholders. Independent investigators, policymakers, and patients all have a stake. I don’t recognize the names on this list and see no indication that consumers affected by what is reported in clinical and health services papers have a role in making decisions about the release of data. But one name stands out.

Who is Malcolm Macleod and what is he doing in this decision-making process?

Malcolm Macleod is quoted in the Science Media Centre reaction to the PACEgate special issue:

 Expert reaction to Journal of Health Psychology’s Special Issue on The PACE Trial

Prof. Malcolm Macleod, Professor of Neurology and Translational Neuroscience, University of Edinburgh, said:

“The PACE trial, while not perfect, provides far and away the best evidence for the effectiveness of any intervention for chronic fatigue; and certainly is more robust than any of the other research cited. Reading the criticisms, I was struck by how little actual meat there is in them; and wondered where some of the authors came from. In fact, one of them lists as an institution a research centre (Soerabaja Research Center) which only seems to exist as an affiliation on papers he wrote criticising the PACE trial.

“Their main criticisms seem to revolve around the primary outcome was changed halfway through the trial: there are lots of reasons this can happen, some justifiable and others not; the main think is whether it was done without knowledge of the outcomes already accumulated in the trial and before data lock – which is what was done here.

“So I don’t think there is really a story here, apart from a group of authors, some of doubtful provenance, kicking up dust about a study which has a few minor wrinkles (as all do) but still provides information reliable enough to shape practice. If you substitute ‘CFS’ for ‘autism’ and ‘PACE trial’ for ‘vaccination’ you see a familiar pattern…”

The declaration of interest is revealing in what it says and what it does not say.

Prof. Macleod: “Prof Sharpe used to have an office next to my wife’s; and I sit on the PLoS Data board that considered what to do about one of their other studies.”

The declaration fails to reveal a recent publication co-authored by Macleod and Trudy Chalder.

Wu S, Mead G, Macleod M, Chalder T. Model of understanding fatigue after stroke. Stroke. 2015 Mar 1;46(3):893-8.

This press release comes from an organization strongly committed to protecting the PACE trial from independent scrutiny. The SMC even organized a letter-writing campaign headed by Peter White to petition Parliament to exempt universities from Freedom of Information Act requests. Of course, that would effectively block requests for data.

Why would the PLOS One editors involve such a person in reconsidering what had been a decision in favor of releasing the data?

Connect the dots.

Trends will continue toward making uploading data into publicly accessible repositories a requirement for publication. PLOS One has bought into discredited arguments about patient consent forms not allowing sharing of anonymized data. PLOS One is no longer at the vanguard of open science through routine data sharing.

Better days: When PLOS Blogs honored my post about fatal flaws in the PACE chronic fatigue syndrome follow-up study (2015)

The back story on my receiving this honor was that PLOS Blogs only days before had shut down the blog site because of complaints from someone associated with the PACE trial. I was asked to resign. I refused. PLOS Blogs relented when I said it would be a publicity disaster for PLOS Blogs.


A Facebook memory of what I was posting two years ago reminded me of better days when PLOS Blogs honored my post about the PACE trial.

Your Top 15 in ’15: Most popular on PLOS BLOGS Network

I was included in a list of the most popular blog posts in a network that received over 2.3 million visitors reading more than 600 new posts. [It is curious that the sixth and seventh most popular posts were omitted from this list, but that’s another story]

I was mentioned for number 11:

11) Uninterpretable: Fatal flaws in PACE Chronic Fatigue Syndrome follow-up study Mind the Brain 10/29/15

Investigating and sharing potential errors in scientific methods and findings, particularly involving psychological research, is the primary reason Clinical Health Psychologist (and PLOS ONE AE) Jim Coyne blogs on Mind the Brain and elsewhere. This closely followed post is one such example.

Earlier decisions by the investigator group preclude valid long-term follow-up evaluation of CBT for chronic fatigue syndrome (CFS). At the outset, let me say that I’m skeptical whether we can hold the PACE investigators responsible… Read more

The back story was that only days before, I had gotten complaints from readers of Mind the Brain who found they were blocked from leaving comments at my blog site. I checked and found that I couldn’t even access the blog as an author.

I immediately emailed Victoria Costello and asked her what had happened. We agreed to talk by telephone, even though it was already late night where I was in Philadelphia. She was in the San Francisco PLOS office.

In the telephone conversation, I was reminded that there were some topics about which I was not supposed to blog. Senior management at PLOS found me in violation of that prohibition and wanted me to stop blogging.

As is often the case with communication with the senior management of PLOS, no specifics had been given.  There was no formal notice or disclosure about what topics I couldn’t blog or who had complained. And there had been no warning when my access to the blog site was cut. Anything that I might say publicly could be met with a plausible denial.

I reminded Victoria that I had never received any formal specification of what I could not blog about, nor of from whom the complaint had come. There had been a vague communication from her about not blogging about certain topics. I knew that complaints from either Gabriele Oettingen or her family members had led to the request not to blog about the flaws in her book, Rethinking Positive Thinking. That was easy to do because I was not planning another post about that dreadful self-help book. Any other prohibition was left so vague that I had no idea that I couldn’t blog about the PACE trial. I had known that the authors of the British Psychological Society’s Understanding Psychosis were quite upset with what I had said in heavily accessed blog posts. Maybe that was the source of the other prohibition, but no one made that clear. And I wasn’t sure I wanted to honor it, anyway.

I pressed Victoria Costello for details. She said an editor had complained. When I asked if it was Richard Horton, she paused and mumbled something that I took as an affirmative. Victoria then suggested that it would be best for the blog network and me if we had a mutually agreed-upon parting of ways. I told her that I would probably publicly comment that the breakup was not mutual and that it would be a publicity disaster for the blog.

Why was I even blogging for PLOS Blogs? Victoria Costello had recruited me after I expressed discontent with the censorship I was receiving at Psychology Today. The PT editors there had complained that some of my blogging about antidepressants might discourage ads from pharmaceutical companies on which they depended for revenue. The editors had insisted on the right to approve my posts before I uploaded them. In inviting me to PLOS Blogs, Victoria told me that she too was a refugee from blogging at Psychology Today. I wouldn’t have to worry about restrictions on what I could say at Mind the Brain, beyond avoiding libel.

I ended the conversation accepting the prohibition on blogging about the PACE trial, despite disagreeing with the rationale that it would be a conflict of interest for me to blog about the trial after requesting the data from the PLOS One paper.

Since then, I repeatedly requested that the PLOS management acknowledge the prohibition on my blogging or at least put it in writing. My request was met with repeated refusals from Managing Editor Iratxe Puebla, who always cited my conflict of interest.

In early 2017, I began publicly tweeting about the issue, stimulating some curiosity in others about whether there was a prohibition. In July 2017, the entire Mind the Brain site, not just my blog, was shut down.

In early 2018, I will provide more backstory on that shutdown and dispute what was said in the blog post below. I will also say more about the collusion between PLOS One senior management and the PACE investigators in the data not being available two years after I requested them.

Message for Mind the Brain readers from PLOSBLOGS

This strange thumbnail is the default for when no preferred image is provided. It could indicate the haste with which this blog was posted.

Posted July 31, 2017 by Victoria Costello in Uncategorized

After five years and over a hundred posts, PLOSBLOGS is retiring its psychology blog, Mind the Brain, from our PLOS-hosted blog network. By mutual agreement with the primary Mind the Brain blogger, James Coyne, Professor Coyne will retain the name of this blog and will take his archive of posts for reuse on his independent website,

According to PLOSBLOGS’ policy for all our retired (inactive) blogs, any and all original posts published on Mind the Brain will retain their PLOS web addresses as intact urls, so links made previously from other sites will not be broken. In addition, PLOS will supply the archive of his posts directly to Prof Coyne so that he may repost them anywhere he may wish.

PLOS honors James Coyne’s voice as an important one in peer-to-peer scientific criticism. As discussed with Professor Coyne in recent days, after careful consideration PLOSBLOGS has concluded that it does not have the staff resources required to vet the sources, claims and tone contained in his posts, to assure they are aligned with our PLOSBLOGS Community Guidelines. This has lead us to the conclusion that Professor Coyne and his content would be better served on his own independent blog platform. We wish James Coyne the best with his future blogging.

—Victoria Costello, Senior Editor, PLOSBLOGS & Communities


“It’s certainly not bareknuckle:” Comments to a journalist about a critique of mindfulness research

We can’t assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results.


I recently had a Skype interview with science journalist Peter Hess concerning an article in Psychological Science.

Peter was exceptionally prepared, had a definite point of view, but was open to what I said. In the end, he seemed to be persuaded by me on a number of points. The resulting article in Inverse faithfully conveyed my perspective and juxtaposed quotes from me with those from an author of the Psych Science piece in a kind of debate.

My point of view

When evaluating an article about mindfulness in a peer-reviewed journal, we need to take into account that authors may not necessarily be striving to do the best science, but to maximally benefit their particular brand of mindfulness, their products, or the settings in which they operate. Many studies of mindfulness are little more than infomercials, weak research intended only to get mindfulness promoters’ advertisements of themselves into print or to allow the labeling of claims as “peer-reviewed”. Caveat lector.

We cannot assume authors of mindfulness studies are striving to do the best possible science, including being prepared for the possibility of being proven incorrect by their results. Rather, they may simply be trying to get the strongest possible claims through peer review, ignoring best research practices and best publication practices.

Psychologists Express Growing Concern With Mindfulness Meditation

“It’s not bare-knuckle, that’s for sure.”

There was much from the author of the Psych Science article with which I would agree:

“In my opinion, there are far too many organizations, companies, and therapists moving forward with the implementation of ‘mindfulness-based’ treatments, apps, et cetera before the research can actually tell us whether it actually works, and what the risk-reward ratio is,” corresponding author and University of Melbourne research fellow Nicholas Van Dam, Ph.D. tells Inverse.

Bravo! And

“People are spending a lot of money and time learning to meditate, listening to guest speakers about corporate integration of mindfulness, and watching TED talks about how mindfulness is going to supercharge their brain and help them live longer. Best case scenario, some of the advertising is true. Worst case scenario: very little to none of the advertising is true and people may actually get hurt (e.g., experience serious adverse effects).”

But there were some statements that renewed the discomfort and disappointment I experienced when I read the original article in Psychological Science:

 “I think the biggest concern among my co-authors and I is that people will give up on mindfulness and/or meditation because they try it and it doesn’t work as promised,” says Van Dam.

“There may really be something to mindfulness, but it will be hard for us to find out if everyone gives up before we’ve even started to explore its best potential uses.”

So, how long before we “give up” on thousands of studies pouring out of an industry? In the meantime, should consumers act on what seem to be extravagant claims?

The Inverse article segued into some quotes from me after delivering another statement from the author with which I could agree:

The authors of the study make their attitudes clear when it comes to the current state of the mindfulness industry: “Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed,” they write. And while this comes off as unequivocal, some think they don’t go far enough in calling out specific instances of quackery.

“It’s not bare-knuckle, that’s for sure. I’m sure it got watered down in the review process,” James Coyne, Ph.D., an outspoken psychologist who’s extensively criticized the mindfulness industry, tells Inverse.

Coyne agrees with the conceptual issues outlined in the paper, specifically the fact that many mindfulness therapies are based on science that doesn’t really prove their efficacy, as well as the fact that researchers with copyrights on mindfulness therapies have financial conflicts of interest that could influence their research. But he thinks the authors are too concerned with tone policing.

“I do appreciate that they acknowledged other views, but they kept out anybody who would have challenged their perspective,” he says.

Regarding Coyne’s criticism about calling out individuals, Van Dam says the authors avoided doing that so as not to alienate people and stifle dialogue.

“I honestly don’t think that my providing a list of ‘quacks’ would stop people from listening to them,” says Van Dam. “Moreover, I suspect my doing so would damage the possibility of having a real conversation with them and the people that have been charmed by them.” If you need any evidence of this, look at David “Avocado” Wolfe, whose notoriety as a quack seems to make him even more popular as a victim of “the establishment.” So yes, this paper may not go so far as some would like, but it is a first step toward drawing attention to the often flawed science underlying mindfulness therapies.

To whom is the dialogue directed about unwarranted claims from the mindfulness industry?

As one of the authors of an article claiming to be an authoritative review from a group of psychologists with diverse expertise, Van Dam says he is speaking to consumers. Why won’t he and his co-authors provide citations and name names so that readers can evaluate for themselves what they are being told? Is the risk of reputational damage and embarrassment to the psychologists so great as to cause Van Dam to protect them rather than protecting consumers from the exaggerated and even fraudulent claims of psychologists hawking their products branded as ‘peer-reviewed psychological and brain science’?

I use the term ‘quack’ sparingly outside of discussing unproven and unlikely-to-be-proven products supposed to promote physical health and well-being or to prevent or cure disease and distress.

I think Harvard psychologist Ellen Langer deserves the term “quack” for her selling of expensive trips to spas in Mexico to women with advanced cancer so that they can change their mind set to reverse the course of their disease. Strong evidence, please! Given that this self-proclaimed mother of mindfulness gets her claims promoted through the Association for Psychological Science website, I think it particularly appropriate for Van Dam and his coauthors to name her in their publication in an APS journal. Were they censored or only censoring themselves?

Let’s put aside psychologists who can be readily named as quacks. How about Van Dam and co-authors naming names of psychologists claiming to alter the brains and immune systems of cancer patients with mindfulness practices so that they improve their physical health and fight cancer, not just cope better with a life-altering disease?

I simply don’t buy Van Dam’s suggestion that to name names promotes quackery any more than I believe exposing anti-vaxxers promotes the anti-vaccine cause.

Is Van Dam only engaged in a polite discussion with fellow psychologists that needs to be strictly tone-policed to avoid offense or is he trying to reach, educate, and protect consumers as citizen scientists looking after their health and well-being? Maybe that is where we parted ways.

Power pose: I. Demonstrating that replication initiatives won’t salvage the trustworthiness of psychology

An ambitious multisite initiative showcases how inefficient and ineffective replication is in correcting bad science.



Bad publication practices keep good scientists unnecessarily busy, as in replicability projects. – Bjoern Brembs

An ambitious multisite initiative showcases how inefficient and ineffective replication is in correcting bad science. Psychologists need to reconsider pitfalls of an exclusive reliance on this strategy to improve lay persons’ trust in their field.

Despite the consistency of null findings across seven attempted replications of the original power pose study, editorial commentaries in Comprehensive Results in Social Psychology left some claims intact and called for further research.

Editorial commentaries on the seven null studies set the stage for continued marketing of self-help products, mainly to women, grounded in junk psychological pseudoscience.

Watch for repackaging and rebranding in next year’s new and improved model. Marketing campaigns will undoubtedly include direct quotes from the commentaries as endorsements.

We need to re-examine basic assumptions behind replication initiatives. Currently, these efforts suffer from prioritizing the reputations and egos of those misusing psychological science to market junk and quack claims over protecting the consumers whom these gurus target.

In the absence of a critical response from within the profession to these persons prominently identifying themselves as psychologists, it is inevitable that the void will be filled by those outside the field who have no investment in preserving the image of psychology research.

In the case of power posing, watchdog critics might be recruited from:

Consumer advocates concerned about just another effort to defraud consumers.

Science-based skeptics who see in the marketing of power posing familiar quackery in the same category as hawkers using pseudoscience to promote homeopathy, acupuncture, and detox supplements.

Feminists who decry the message that women need to get some balls (testosterone) if they want to compete with men and overcome gender disparities in pay. Feminists should be further outraged by the marketing of junk science to vulnerable women with an ugly message of self-blame: It is so easy to meet and overcome social inequalities that they have only themselves to blame if they do not do so by power posing.

As reported in Comprehensive Results in Social Psychology, a coordinated effort to examine the replicability of results reported in Psychological Science concerning power posing left the phenomenon a candidate for future research.

I will be blogging more about that later, but for now let’s look at a commentary from three of the over 20 authors that reveals an inherent limitation of such ambitious initiatives in tackling the untrustworthiness of psychology.

Cesario J, Jonas KJ, Carney DR. CRSP special issue on power poses: what was the point and what did we learn?.  Comprehensive Results in Social Psychology. 2017


Let’s start with the wrap up:

The very costly expense (in terms of time, money, and effort) required to chip away at published effects, needed to attain a “critical mass” of evidence given current publishing and statistical standards, is a highly inefficient use of resources in psychological science. Of course, science is to advance incrementally, but it should do so efficiently if possible. One cannot help but wonder whether the field would look different today had peer-reviewed preregistration been widely implemented a decade ago.

We should consider the first sentence with some recognition of just how much untrustworthy psychological science is out there. Must we mobilize similar resources in every instance, or can we develop some criteria to decide what is worthy of replication? As I have argued previously, there are excellent reasons for deciding that the original power pose study could not contribute a credible effect size to the literature. There is no there to replicate.

The authors assume preregistration of the power pose study would have solved problems. In clinical and health psychology, long-standing recommendations to preregister trials are acquiring new urgency. But the record is that motivated researchers routinely ignore requirements to preregister and ignore the primary outcomes and analytic plans to which they have committed themselves. Editors and journals let them get away with it.

What measures do the replicationados have to ensure the same things are not being said about bad psychological science a decade from now? Rather than urging uniform adoption and enforcement of preregistration, replicationados urged the gentle nudge of badges for studies which are preregistered.

Just prior to the last passage:

Moreover, it is obvious that the researchers contributing to this special issue framed their research as a productive and generative enterprise, not one designed to destroy or undermine past research. We are compelled to make this point given the tendency for researchers to react to failed replications by maligning the intentions or integrity of those researchers who fail to support past research, as though the desires of the researchers are fully responsible for the outcome of the research.

There are multiple reasons not to give the authors of the power pose paper such a break. There is abundant evidence of undeclared conflicts of interest in the huge financial rewards for publishing false and outrageous claims. Psychological Science allowed the abstract of the original paper to leave out any embarrassing details of the study design and results and to end with a marketing slogan:

That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.

 Then the Association for Psychological Science gave a boost to the marketing of this junk science with a Rising Star Award to two of the authors of this paper for having “already made great advancements in science.”

As seen in this special issue of Comprehensive Results in Social Psychology, the replicationados share responsibility with Psychological Science and APS for keeping this system of perverse incentives intact. At least they are guaranteeing plenty of junk science in the pipeline to replicate.

But in the next installment on power posing I will raise the question of whether early career researchers are hurting their prospects for advancement by getting involved in such efforts.

How many replicationados does it take to change a lightbulb? Who knows, but a multisite initiative can be combined with a Bayesian meta-analysis to give a tentative and unsatisfying answer.

Coyne JC. Replication initiatives will not salvage the trustworthiness of psychology. BMC Psychology. 2016 May 31;4(1):28.

The following can be interpreted as a declaration of financial interests or a sales pitch:

I will soon be offering e-books providing skeptical looks at positive psychology and mindfulness, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites.


“ACT: The best thing [for pain] since sliced bread or the Emperor’s new clothes?”

Reflections on the debate with David Gillanders about Acceptance and Commitment Therapy at the British Pain Society, Glasgow, September 15, 2017


David Gillanders and I held our debate “ACT: best thing since sliced bread or the Emperor’s new clothes?” at the British Pain Society meeting on September 15, 2017 in Glasgow. We will eventually make our slides and a digital recording of the debate available.

I enjoyed hanging out with David Gillanders. He is a great guy who talks the talk, but also walks the walk. He lives ACT as a life philosophy. He was an ACT trainer speaking before a sympathetic audience, many who had been trained by him.

Some reflections from a few days later.

I was surprised how much Acceptance and Commitment Therapy (along with #mindfulness) has taken over UK pain services. A pre-debate poll showed most of the audience came convinced that, indeed, ACT was the best thing since sliced bread.

I was confident that my skepticism was firmly rooted in the evidence. I don’t think there is debate about that. David Gillanders agreed that higher quality studies were needed.

But in the end, even if I did not convert many, I came away quite pleased with the debate.

Standards for evaluating the evidence for ACT for pain

 I recently wrote that ACT may have moved into a post-evidence phase, with its chief proponents switching from citing evidence to making claims about love, suffering, and the meaning of life. Seriously.

Steve Hayes prompted me on Twitter to take a closer look at the most recent evidence for ACT. As reported in an earlier blog, I took a close look. I was not impressed that proponents of ACT are making much progress in developing evidence anywhere near as strong as their claims. We need a lot less ACT research that adds no quality evidence despite being promoted enthusiastically as if it does. We need more sobriety from the promoters of ACT, particularly those in academia, like Steve Hayes and Kelly Wilson, who know something about how to evaluate evidence. They should not patronize workshop goers with fanciful claims.

David Gillanders talked a lot about the philosophy and values that are expressed in ACT, but he also made claims about its research base, echoing the claims made by Steve Hayes and other prominent ACT promoters.

Standards for evaluating research exist independent of any discussion of ACT

There are standards for interpreting clinical trials and integrating their results in meta-analyses that exist independent of the ACT literature. It is not a good idea to challenge these standards in the context of defending ACT against unfavorable evaluations, although that is exactly how Hayes and his colleagues often respond. I will get around to blogging about the most recent example of this.

Atkins PW, Ciarrochi J, Gaudiano BA, Bricker JB, Donald J, Rovner G, Smout M, Livheim F, Lundgren T, Hayes SC. Departing from the essential features of a high quality systematic review of psychotherapy: A response to Öst (2014) and recommendations for improvement. Behaviour Research and Therapy. 2017 May 29.

Within-group (pre-post) differences in outcome. David Gillanders echoed Hayes in using within-group effect sizes to describe the effectiveness of ACT. Results presented this way look better and may seem impressive, but they are exaggerated compared to results obtained between groups. I am not making that up. Changes within the group of patients who received ACT reflect the specific effects of ACT plus whatever nonspecific factors were operating. That is why we need an appropriate comparison-control group to examine between-group differences, which are always more modest than the within-group effects.
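To make that concrete, here is a minimal simulation sketch in Python, using purely hypothetical numbers (not from any ACT trial), showing how a small between-group effect gets dressed up as a large within-group effect once nonspecific improvement is added in:

```python
# Hypothetical illustration: within-group (pre-post) change bundles the
# treatment's specific effect with nonspecific improvement, so the pre-post
# effect size dwarfs the honest between-group comparison.
import numpy as np

rng = np.random.default_rng(0)
n = 5000            # large n so the estimates are stable
nonspecific = 0.5   # improvement everyone shows (placebo response, regression to the mean)
specific = 0.2      # the treatment's actual added effect, in SD units

pre = rng.normal(0.0, 1.0, n)
post_treated = pre + nonspecific + specific + rng.normal(0.0, 1.0, n)
post_control = pre + nonspecific + rng.normal(0.0, 1.0, n)

change = post_treated - pre
d_within = change.mean() / change.std(ddof=1)

pooled_sd = np.sqrt((post_treated.var(ddof=1) + post_control.var(ddof=1)) / 2)
d_between = (post_treated.mean() - post_control.mean()) / pooled_sd

print(f"within-group d:  {d_within:.2f}")   # ~0.7: looks impressive
print(f"between-group d: {d_between:.2f}")  # ~0.14: the treatment's real effect
```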

Compared to what? Most randomized trials of ACT involve a wait list, no-treatment, or ill-described standard care (which often represents no treatment). Such comparisons are methodologically weak, especially when patients and providers know what is going on (an unblinded trial) and when outcomes are subjective self-report measures.

A clever study in the New England Journal of Medicine showed that with such subjective self-report measures, one cannot distinguish between a proven effective inhaled medication for asthma, an inert substance simply inhaled, and sham acupuncture. In contrast, objective measures of breathing clearly distinguish the medication from the comparison-control conditions.

So, it is not an exaggeration to say that most evaluations of ACT are conducted under circumstances in which even sham acupuncture or homeopathy would look effective.

Not superior to other treatments. There are no trials comparing ACT to a credible active treatment in which ACT proves superior, either for pain or other clinical problems. So, we are left saying ACT is better than doing nothing, at least in trials where any nonspecific effects are concentrated among the patients receiving ACT.

Rampant investigator bias. A lot of trials of ACT are conducted by researchers having an investment in showing that ACT is effective. That is a conflict of interest. Sometimes it is called investigator allegiance, or a promoter or originator bias.

Regardless, when drugs are being evaluated in a clinical trial, it is recognized that there will be a bias toward the drug favored by the manufacturer conducting the trial. It is increasingly recognized that meta-analyses conducted by promoters should also be viewed with extra skepticism, and that trials conducted by researchers having such conflicts of interest should be considered separately to see if they produced exaggerated results.

ACT desperately needs randomized trials conducted by researchers who don’t have a dog in the fight, who lack the motivation to torture findings to give positive results when they are simply not present. There is a strong confirmation bias in current ACT trials, with promoter/researchers embarrassing themselves in their maneuvers to show strong, positive effects when only weak or null findings are available. I have documented [ 1, 2 ] how this trend started with Steve Hayes and Patricia Bach dropping two patients from their study of the effects of brief ACT on rehospitalization of inpatients. One patient had died by suicide and another was in jail, so they couldn’t be rehospitalized, and they were dropped from the analyses. The deed could only be noticed by comparing the published paper with Patricia Bach’s dissertation. It made an otherwise nonsignificant finding in a small trial significant.
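To see how fragile significance is in a trial this small, consider a sketch with purely invented counts (not the actual Bach and Hayes numbers): excluding just two participants from one arm moves the comparison across the conventional p < .05 line.

```python
# Invented counts for illustration only: dropping two participants from one
# arm of a small two-arm trial flips a rehospitalization comparison from
# nonsignificant to "significant" (Pearson chi-square, no continuity correction).
from scipy.stats import chi2_contingency

def rehosp_p(events_act, n_act, events_ctl, n_ctl):
    table = [[events_act, n_act - events_act],
             [events_ctl, n_ctl - events_ctl]]
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    return p

# Full sample: 6 of 15 events in the ACT arm vs 11 of 15 in the control arm
print(round(rehosp_p(6, 15, 11, 15), 3))  # ~0.065, nonsignificant
# Drop two ACT-arm patients who had been counted as treatment failures
print(round(rehosp_p(4, 13, 11, 15), 3))  # ~0.024, now "significant"
```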

Trials that are too small to matter. A lot of ACT trials have too few patients to produce a reliable, generalizable effect size. Lots of us in situations far removed from ACT trials have shown justification for the rule of thumb that we should not trust effect sizes from trials having fewer than 35 patients per treatment or comparison cell. Even this standard is quite liberal. Even if a moderate effect would be significant in a larger trial, there is less than a 50% probability it would be detected in a trial this small. To be significant with such a small sample size, differences between treatments have to be large, and they are probably either due to chance or to something dodgy that the investigators did.
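The arithmetic behind that rule of thumb is easy to check with standard power routines; here is a quick sketch using statsmodels:

```python
# Power of a two-sided, alpha = .05 two-sample t-test to detect a moderate
# effect (d = 0.5) at various per-arm sample sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_arm in (10, 20, 35, 50, 64):
    p = analysis.power(effect_size=0.5, nobs1=n_per_arm, ratio=1.0, alpha=0.05)
    print(f"n = {n_per_arm:2d} per arm -> power = {p:.2f}")
# With 35 per arm, power is only ~0.54; roughly 64 per arm are needed to reach
# the conventional 0.80.
```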

Many claims for the effectiveness of ACT for particular clinical problems come from trials too small to generate reliable effect sizes. I invite readers to undertake the simple exercise of looking at the sample sizes in any study cited as support for the effectiveness of ACT. If you exclude such small studies, there is not much research left to talk about.

Too much flexibility in what researchers report in publications. Many trials of ACT involve researchers administering a whole battery of outcome measures and then emphasizing those that make ACT look best and either downplaying or not mentioning further the rest. Similarly, many trials of ACT deemphasize whether the time × treatment interaction is significant, simply ignore it if it is not, and focus on the within-group differences. I know, we’re getting a bit technical here. But another way of saying this is that many trials of ACT give researchers too much latitude in choosing which variables to report and which statistics are used to evaluate them.

Under similar circumstances, Simmons and colleagues showed that listening to the Beatles song When I’m Sixty-Four left undergraduates 18 months younger than when they listened to the song Kalimba. Of course, the researchers knew damn well that the Beatles song didn’t have this effect, but they indicated they were doing what lots of investigators do to get significant results, what they call p-hacking.

Many randomized trials of ACT are conducted with the same researcher flexibility that would allow a demonstration that listening to a Beatles song drops the age of undergraduates 18 months.
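A toy simulation shows how far this flexibility alone can carry a “finding”. Assuming a genuinely ineffective treatment and ten independent outcome measures, reporting whichever outcome happens to “work” inflates the false-positive rate from the nominal 5% to roughly 40% (1 − 0.95^10):

```python
# Simulate many null trials, each with 10 outcome measures, and count how often
# at least one outcome comes out "significant" at p < .05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_trials, n_per_arm, n_outcomes = 2000, 30, 10
hits = 0
for _ in range(n_trials):
    treated = rng.normal(size=(n_per_arm, n_outcomes))  # null is true:
    control = rng.normal(size=(n_per_arm, n_outcomes))  # no real difference
    pvals = ttest_ind(treated, control).pvalue          # one p-value per outcome
    if pvals.min() < 0.05:  # "emphasize whichever outcome looks best"
        hits += 1
print(f"false-positive rate: {hits / n_trials:.2f}")    # ~0.40, not 0.05
```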

Many of the problems with ACT research could be avoided if researchers were required to publish ahead of time their primary outcome variables and plans for analyzing them. Such preregistration is increasingly recognized as a best research practice, including by NIMH. There is no excuse not to do it.

My take away message?

ACT gurus have been able to dodge the need to develop quality data to support their claims that their treatment is effective (and their sometime claim that it is more effective than other approaches). A number of them are university-based academics and have ample resources to develop better quality evidence.

Workshop and weekend retreat attendees are convinced that ACT works on the strength of experiential learning and a lot of theoretical mumbo jumbo.

But the ACT promoters also make a lot of dodgy claims that there is strong evidence that the specific ingredients of ACT, its techniques and values, account for the power of ACT. Some of the ACT gurus, Steve Hayes and Kelly Wilson at least, are academics and should limit their claims of being “evidence-based” to what is supported by strong, quality evidence. They don’t. I think they are being irresponsible in throwing “evidence-based” in with all the rest of the mumbo jumbo.

What should I do as an evidence-based skeptic wanting to improve the conversation about ACT?

Earlier in my career, I spent six years in live supervision with some world-renowned therapists behind the one-way mirror, including John Weakland, Paul Watzlawick, and Dick Fisch. I gave workshops worldwide on how to do brief strategic therapies with individuals, couples, and families. I chose not to continue because (1) I didn’t like the pressure for drama and exciting interventions when I interviewed patients in front of large groups; (2) even when there was a logic and appearance of effectiveness to what I did, I didn’t believe it could be manualized; and (3) my group didn’t have the resources to conduct proper outcome studies.

But I got it that workshop attendees like drama, exciting interventions, and emotional experiences. They go to trainings expecting to be entertained, as much as informed. I don’t think I can change that.

Many therapists have not had the training to evaluate claims about research, even if they accept that being backed by research findings is important. They depend on presenters to tell them about research and tend to trust what they say. Even therapists who know something about research can lose critical judgment when caught up in the emotionality provided by some training experiences. Experiential learning can be powerful, even when it is used to promote interventions that are not supported by evidence.

I can’t change the training of therapists nor the culture of workshops and training experiences. But I can reach out to therapists who want to develop skills to evaluate research for themselves. I think some of the things that I point out in this blog post are quite teachable as things to look for.

I hope I can connect with therapists who want to become citizen scientists who are skeptical about what they hear and want to become equipped to think for themselves and look for effective resources when they don’t know how to interpret claims.

This is certainly not all therapists and may only be a minority. But such opinion leaders can be champions for the others in facilitating intelligent discussions of research concerning the effectiveness of psychotherapies. And they can prepare their colleagues to appreciate that most change in psychotherapy is not as dramatic or immediate as seen in therapy workshops.

I will soon be offering e-books providing skeptical looks at positive psychology and mindfulness, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites.


Embargo broken: Bristol University Professor to discuss trial of quack chronic fatigue syndrome treatment.

An alternative press briefing to compare and contrast with what is being provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.


This blog post provides an alternative press briefing to compare and contrast with what was provided by the Science Media Centre for a press conference on Wednesday September 20, 2017.

The press release attached at the bottom of the post announces the publication of results of a highly controversial trial that many would argue should never have occurred. The trial exposed children to an untested treatment with a quack explanation delivered by unqualified persons. The promoters of the quack treatment earned a lot of money from the trial, beyond the boost in credibility it gave their treatment.

Note to journalists and the media: for further information email

This trial involved quackery delivered by unqualified practitioners who are otherwise untrained and insensitive to any harm to patients.

The UK Advertising Standards Authority had previously ruled that the Lightning Process could not be advertised as a treatment. [1]

The Lightning Process is billed as mixing elements from osteopathy, life coaching, and neuro-linguistic programming. That is far from having a mechanism of action based in science or evidence. [2] Neuro-linguistic programming (NLP) has been thoroughly debunked for its pseudoscientific references to brain science and has ceased to be discussed in the scientific literature. [3]

Many experts would consider the trial unethical. It involved exposing children and adolescents to an unproven treatment with no prior evidence of effectiveness or safety nor any scientific basis for the mechanism by which it is claimed to work.

As an American who has decades of experience serving on Committees for the Protection of Human Subjects and Data Safety and Monitoring Boards, I don’t understand how this trial was approved to recruit human subjects, particularly children and adolescents.

I don’t understand why a physician who cared about her patients would seek approval to conduct such a trial.

Participation in the trial violated patients’ trust that medical settings and personnel will protect them from such risks.

Participation in the trial is time-consuming and involves the loss of the opportunity to obtain less risky treatment, or simply to avoid the inconvenience and burden of a treatment that there is no scientific basis to expect would work.

Esther Crawley has said “If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research.”  I would like to see her try out that rationale in some of the patient safety and human subjects committee meetings I have attended. The response would not likely be very polite.

Patients and their parents should have been informed of an undisclosed conflict of interest.

This trial served as the basis for advertising the Lightning Process on the Web as being offered in NHS clinics and as being evaluated in a randomized controlled trial. [4]

Promoters of the Lightning Process received substantial payments from this trial. Although a promoter of the treatment was listed on the application for the project, she was not among the paper’s authors, so there will probably be no conflict of interest declared.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings.

It is expected that children who received the treatment as part of the trial would continue to receive it from providers who were trained and certified by promoters of the Lightning Process.

By analogy, think of a pharmaceutical trial in which the influence of the drug company, and the fact that it would profit from positive results, was not indicated in patient consent forms. There would be a public outcry and likely legal action.

Why might the SMILE trial create the illusion that the Lightning Process is effective for chronic fatigue syndrome?

There were multiple weaknesses in the trial design that would likely generate a false impression that the Lightning Process works. Under similar conditions, homeopathy and sham acupuncture appear effective [5]. Experts know to reject such results because (1) more rigorous designs are required to evaluate the efficacy of a treatment in order to rule out placebo effects; and (2) there must be a scientific basis for the mechanism of change claimed for how the treatment works.

Indoctrination of parents and patients with pseudoscientific information. Advertisements for the Lightning Process on the Internet, including YouTube videos, created a demand for this treatment among patients, but its cost (£620) is prohibitive for many.

Selection bias. Participation in the trial involved a 50% probability that the treatment would be received for free. (Promoters of the Lightning Process received £567 for each patient who received the treatment in the trial.) Parents who believed in the power of the Lightning Process would be motivated to enroll their children in the trial in order to obtain the treatment for free.

The trial was unblinded. Patients and treatment providers knew to which group patients were assigned. Not only would patients getting the Lightning Process be exposed to the providers’ positive expectations and encouragement; those assigned to the control group could register their disappointment when completing outcome measures.

The self-report subjective outcomes of this trial are susceptible to nonspecific factors (placebo effects). These include positive expectations, increased contact and support, and a rationale for what was being done, even if scientifically unsound. These nonspecific factors were concentrated in the group receiving the Lightning Process. This stacks the deck in any evaluation of the Lightning Process and inflates differences from the patients who did not get into that group.

There were no objective measures of outcome. The one measure with a semblance of objectivity, school attendance, was eliminated in a pilot study. Objective measures would have provided a check on the likely exaggerated effects obtained with subjective self-report measures.

The providers were not qualified medical personnel, but were working for an organization that would financially benefit from positive findings. The providers were highly motivated to obtain positive results.

During treatment, the Lightning Process further indoctrinates child and adolescent patients with pseudoscience [6] and coerces them into faking that they are getting well [7]. Such coercion can interfere with patients getting appropriate help when they need it, with their establishing appropriate expectations with parents and school authorities, and even with their responding honestly to outcome assessments.

It is not just patient activists and their family members who object to the trial. As professionals have become better informed, there has been increasing international concern about the ethics and safety of this trial.

The Science Media Centre has consistently portrayed critics of Esther Crawley’s work as a disturbed minority of patients and patients’ family members. The smearing and vilification of patients and parents who object to the trial is unprecedented.

Particularly since the international controversy over the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome, patients have been joined in their concerns by scientists and clinicians who are not patients.

Really, if you were a fully informed parent of a child who was being pressured to participate in the trial with false claims of the potential benefits, wouldn’t you object?

embargoed news briefing


[1] “To date, neither the ASA nor CAP [Committee of Advertising Practice] has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.”

[2] The respected Skeptics Dictionary offers a scathing critique of Phil Parker’s Lightning Process. The critique specifically cites concerns that Crawley’s SMILE trial switched outcomes to increase the likelihood of obtaining evidence of effectiveness.

[3] The entry for Neuro-linguistic programming (NLP) in Wikipedia states:

There is no scientific evidence supporting the claims made by NLP advocates and it has been discredited as a pseudoscience by experts.[1][12] Scientific reviews state that NLP is based on outdated metaphors of how the brain works that are inconsistent with current neurological theory and contain numerous factual errors.[13][14]

[4] NHS and LP: Phil Parker’s webpage announces the collaboration with Bristol University and provides a link to the official SMILE trial website.

[5] A provocative New England Journal of Medicine article, Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma, showed that sham acupuncture was as effective as an established medical treatment – an albuterol inhaler – for asthma when judged with subjective measures, but the established medical treatment was markedly superior on objective measures.

[6] Instructional materials that patients are required to read during treatment include:

LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues.

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

[7] Responsibility of patients:

Believe that Lightning Process will heal you. Tell everyone that you have been healed. Perform magic rituals like standing in circles drawn on paper with positive Keywords stated on them. Learn to render short rhyme when you feel symptoms, no matter where you are, as many times as required for the symptoms to disappear. Speak only in positive terms and think only positive thoughts. If symptoms or negative thoughts come, you must stretch forth your arms with palms facing outward and shout “Stop!” You are solely responsible for ME. You can choose to have ME. But you are free to choose a life without ME if you wish. If the method does not work, it is you who are doing something wrong.

Special thanks to the Skeptical Cat who provided me with an advance copy of the press release from the Science Media Centre.

Creating illusions of wondrous effects of yoga and meditation on health: A skeptic exposes tricks

The tour of the sausage factory is starting; here’s your brochure telling you what you’ll see.


A recent review has received a lot of attention, being used to claim that mind-body interventions have distinct molecular signatures that point to potentially dramatic health benefits for those who take up these practices.

What Is the Molecular Signature of Mind–Body Interventions? A Systematic Review of Gene Expression Changes Induced by Meditation and Related Practices.  Frontiers in Immunology. 2017;8.

Few of those tweeting about this review or its press coverage are likely to have read it, or to have understood it if they did. Most of the new-agey coverage in social media does nothing more than echo or amplify the message of the review’s press release. Lazy journalists and bloggers can simply pass on direct quotes from the lead author, or even just the press release’s title, ‘Meditation and yoga can ‘reverse’ DNA reactions which cause stress, new study suggests’:

“These activities are leaving what we call a molecular signature in our cells, which reverses the effect that stress or anxiety would have on the body by changing how our genes are expressed.”


“Millions of people around the world already enjoy the health benefits of mind-body interventions like yoga or meditation, but what they perhaps don’t realise is that these benefits begin at a molecular level and can change the way our genetic code goes about its business.”

[The authors of this review actually identified some serious shortcomings to the studies they reviewed. I’ll be getting to some excellent points at the end of this post that run quite counter to the hype. But the lead author’s press release emphasized unwarranted positive conclusions about the health benefits of these practices. That is what is most popular in media coverage, especially from those who have stuff to sell.]

Interpretation of the press release and review authors’ claims requires going back to the original studies, which most enthusiasts are unlikely to do. If readers do go back, they will have trouble interpreting some of the deceptive claims that are made.

Yet, a lot is at stake. This review is being used to recommend mind-body interventions for people having or who are at risk of serious health problems. In particular, unfounded claims that yoga and mindfulness can increase the survival of cancer patients are sometimes hinted at, but occasionally made outright.

This blog post is written with the intent of protecting consumers from such false claims and providing tools so they can spot pseudoscience for themselves.

Discussion of the review in the media speaks broadly of alternative and complementary interventions. The coverage is aimed at inspiring confidence in this broad range of treatments and at encouraging people who are facing health crises to invest time and money in outright quackery. Seemingly benign recommendations for yoga, tai chi, and mindfulness (after all, what’s the harm?) often become the entry point to more dubious and expensive treatments that substitute for established treatments. Once they are drawn to centers for integrative health care for classes, cancer patients are likely to spend hundreds or even thousands on other products and services that are unlikely to benefit them. One study reported:

More than 72 oral or topical, nutritional, botanical, fungal and bacterial-based medicines were prescribed to the cohort during their first year of IO care…Costs ranged from $1594/year for early-stage breast cancer to $6200/year for stage 4 breast cancer patients. Of the total amount billed for IO care for 1 year for breast cancer patients, 21% was out-of-pocket.

Coming up, I will take a skeptical look at the six randomized trials that were highlighted by this review.  But in this post, I will provide you with some tools and insights so that you do not have to make such an effort in order to make an informed decision.

Like many of the other studies cited in the review, these randomized trials were quite small and underpowered. But I will focus on the six because they are as good as it gets. Randomized trials are considered a higher form of evidence than simple observational studies or case reports. [It is too bad the authors of the review don’t even highlight which studies are randomized trials; they are lumped with the others as “longitudinal studies.”]

As a group, the six studies do not actually add any credibility to the claims that mind-body interventions – specifically yoga, tai chi, and mindfulness training or retreats – improve health by altering DNA. We can be no more confident with what the trials provide than we would be if they had never been done.

I found the task of probing and interpreting the studies quite labor-intensive and ultimately unrewarding.

I had to get past poor reporting of what was actually done in the trials, to which patients, and with what results. My task often involved seeing through cover-ups, with authors exercising considerable flexibility in reporting which measures they actually collected and which analyses they attempted before arriving at the best possible tale of the wondrous effects of these interventions.

Interpreting clinical trials should not be so hard, because they should be honestly and transparently reported, have a registered protocol, and stick to it. These reports of trials were sorely lacking. The full extent of the problems took some digging to uncover, but some things emerged before I even got to the methods and results.

The introductions of these studies consistently exaggerated the strength of existing evidence for the effects of these interventions on health, even while somehow concluding that this particular study was urgently needed and might even be the “first ever.” The introductions to the six papers typically cross-referenced each other, without giving any indication of how poor the quality of the evidence from the other papers was. What a mutual admiration society these authors are.

One giveaway is how the introductions referred to the biggest, most badass, comprehensive and well-done review, that of Goyal and colleagues.

That review clearly states that the evidence for the effects of mindfulness is of poor quality because of the lack of comparisons with credible active treatments. The typical randomized trial of mindfulness involves a comparison with no treatment, a waiting list, or patients remaining in routine care where the target problem is likely to be ignored. If we depend on the bulk of the existing literature, we cannot rule out the likelihood that any apparent benefits of mindfulness are due to having more positive expectations, attention, and support, versus simply getting nothing. Only a handful of the hundreds of trials of mindfulness include appropriate, active treatment comparison/control groups. The results of those studies are not encouraging.

One of the first things I do in probing the introduction of a study claiming health benefits for mindfulness is see how they deal with the Goyal et al review. Did the study cite it, and if so, how accurately? How did the authors deal with its message, which undermines claims of the uniqueness or specificity of any benefits to practicing mindfulness?

For yoga, we cannot yet rule out that it is no better than regular exercise – in groups or alone – combined with relaxing routines. The literature concerning tai chi is even smaller and of poorer quality, but there is the same need to show that practicing tai chi has any benefit over exercising in groups with comparable positive expectations and support.

Even more than mindfulness, yoga and tai chi attract a lot of pseudoscientific mumbo jumbo about integrating Eastern wisdom and Western science. We need to look past that and insist on evidence.

Like their introductions, the discussion sections of these articles are quite prone to exaggerating how strong and consistent the evidence from existing studies is. The discussion sections cherry-pick positive findings in the existing literature, sometimes recklessly distorting them. The authors then discuss how their own positively spun findings fit with what is already known, while minimizing or outright neglecting any of their negative findings. I was not surprised to see one trial of mindfulness for cancer patients obtain no effects on depressive symptoms or perceived stress, but then go on to explain how mindfulness might powerfully affect the expression of DNA.

If you want to dig into the details of these studies, the going can get rough and the yield for doing a lot of mental labor is low. For instance, these studies involved drawing blood and analyzing gene expression. Readers will inevitably encounter passages like:

In response to KKM treatment, 68 genes were found to be differentially expressed (19 up-regulated, 49 down-regulated) after adjusting for potentially confounded differences in sex, illness burden, and BMI. Up-regulated genes included immunoglobulin-related transcripts. Down-regulated transcripts included pro-inflammatory cytokines and activation-related immediate-early genes. Transcript origin analyses identified plasmacytoid dendritic cells and B lymphocytes as the primary cellular context of these transcriptional alterations (both p < .001). Promoter-based bioinformatic analysis implicated reduced NF-κB signaling and increased activity of IRF1 in structuring those effects (both p < .05).

Intimidated? Before you defer to the “experts” doing these studies, I will show you some things I noticed in the six studies and how you can debunk the relevance of these studies for promoting health and dealing with illness. Actually, I will show that even if these six studies had gotten the results the authors claimed – and they did not – the effects would at best be trivial and lost among the other things going on in patients’ lives.

Fortunately, there are lots of signs that you can dismiss such studies and go on to something more useful, if you know what to look for.

Some general rules:

  1. Don’t accept claims of efficacy/effectiveness based on underpowered randomized trials. Dismiss them. A reliable rule of thumb is to dismiss trials that have fewer than 35 patients in the smallest group. Such trials will miss true moderate-sized effects more than half the time, even when the effects are actually there.

Due to publication bias, most of the positive effects published from trials of this size will be false positives that won’t hold up in well-designed, larger trials.

When significant positive effects from such trials are reported in published papers, they had to be large to reach significance. If not outright false, these effect sizes won’t be matched in larger trials: they are likely exaggerated and probably won’t replicate. For that reason, we can consider small studies to be pilot or feasibility studies, but not as providing estimates of how large an effect to expect from a larger study. Investigators do it all the time, but they should not: they run power calculations estimating how many patients they need for a larger trial from the results of such small studies. No, no, no!

Having spent decades examining clinical trials, I am generally comfortable dismissing effect sizes that come from trials with fewer than 35 patients in the smaller group. I agree with the suggestion that if two larger trials are available in a given literature, go with those and ignore the smaller studies. If there are not at least two larger studies, keep the jury out on whether there is a significant effect. (The simulation sketch at the end of this item illustrates both problems: missed true effects and inflated “significant” ones.)

Applying the Rule of 35, 5 of the 6 trials can be dismissed and the sixth is ambiguous because of loss of patients to follow up.  If promoters of mind-body interventions want to convince us that they have beneficial effects on physical health by conducting trials like these, they have to do better. None of the individual trials should increase our confidence in their claims. Collectively, the trials collapse in a mess without providing a single credible estimate of effect size. This attests to the poor quality of evidence and disrespect for methodology that characterizes this literature.
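To make the Rule of 35 concrete, here is a minimal Monte Carlo sketch in Python. The true effect size (d = 0.5) and group size (20 per arm) are my illustrative assumptions, not numbers taken from the six trials; the point is that such a trial usually misses a true moderate effect, and the “significant” results it does produce overstate it.

```python
# Minimal sketch: small trials miss true effects and inflate the ones they "find".
# The effect size and sample size below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d = 0.5        # assume a true moderate standardized effect
n_per_group = 20    # below the Rule of 35 threshold
n_sims = 10_000

sig_effects = []
for _ in range(n_sims):
    treated = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        # observed Cohen's d in this "significant" trial
        pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
        sig_effects.append((treated.mean() - control.mean()) / pooled_sd)

print(f"Power: {len(sig_effects) / n_sims:.2f}")                       # about 0.33
print(f"Mean d among significant trials: {np.mean(sig_effects):.2f}")  # well above the true 0.5
```

The two numbers tell the whole story: the trial detects the real effect only about a third of the time, and the trials that do reach significance report an effect size inflated by roughly half again over the truth.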

  2. Don’t be taken in by titles of peer-reviewed articles that are themselves an announcement that these interventions work. Titles may not be telling the truth.

What I found extraordinary is that five of the six randomized trials had a title indicating that a positive effect was found. I suspect that most people encountering the title will not actually go on to read the study, so they will be left with the false impression that positive results were indeed obtained. It is quite a clever trick to make the title of an article – the thing by which most people will remember it – into a false advertisement for what was actually found.

For a start, we can simply remind ourselves that with these underpowered studies, investigators should not even be making claims about efficacy/effectiveness. So, one trick of the developing skeptic is to check whether the claims made in the title fit the size of the study. Going on to the results section, one can find further evidence of discrepancies between what was found and what is being claimed.

I think it is a general rule of thumb that we should be careful with titles of reports of randomized trials that declare results. Even when what is claimed in the title fits the actual results, it often creates the illusion of a greater consistency with what already exists in the literature. Furthermore, even when future studies inevitably fail to replicate what is claimed in the title, the false claim lives on, because failing to replicate key findings is almost never grounds for retracting a paper.

  3. Check the institutional affiliations of the authors. These six trials serve as a depressing reminder that we can’t rely on researchers’ institutional affiliations, or on their having federal grants, to reassure us of the validity of their claims. These authors are not from Quack-Quack University, and they get funding for their research.

In all cases, the investigators had excellent university affiliations, mostly in California. Most studies were conducted with some form of funding, often federal grants. A quick check of Google would reveal that at least one of the authors on each study, usually more, had federal funding.

  4. Check the conflicts of interest, but don’t expect the declarations to be informative, and be skeptical of what you find. It is disappointing that a check of the conflict of interest statements for these articles would be unlikely to arouse the suspicion that the claimed results might have been influenced by financial interests. One cannot readily see that the studies were generally done in settings promoting alternative, unproven treatments that would benefit from the publicity generated by the studies. One cannot see that some of the authors have lucrative book contracts and speaking tours that require making claims for dramatic effects of mind-body treatments – claims that could not possibly be supported by transparent reporting of the results of these studies. As we will see, one of the studies was actually conducted in collaboration with Deepak Chopra and with money from his institution. That would definitely raise flags in the skeptic community, but the dubious tie might be missed by patients and their families who are vulnerable to unwarranted claims and unrealistic expectations of what can be obtained outside of conventional medicine – chemotherapy, surgery, and pharmaceuticals.

Based on what I found probing these six trials, I can suggest some further rules of thumb. (1) Don’t assume that articles about the health effects of alternative treatments disclose all relevant conflicts of interest. Check the setting in which the study was conducted and whether an integrative [complementary and alternative, meaning mostly unproven] care setting was used for recruiting or running the trial. Not only would this represent potential bias on the part of the authors; it would represent selection bias in the recruitment of patients and in their responsiveness to placebo effects consistent with the marketing themes of these settings. (2) Google the authors and see if they have lucrative pop psychology book contracts, TED talks, or speaking gigs at positive psychology or complementary and alternative medicine gatherings. None of these lucrative activities are typically expected to be disclosed as conflicts of interest, but all require making strong claims that are not supported by available data. Such rewards are perverse incentives for authors to distort and exaggerate positive findings and to suppress negative findings in peer-reviewed reports of clinical trials. (3) Check whether known quacks have prepared recruitment videos for the study, informing patients what will be found. (Seriously, I was tipped off to look, and I found exactly that.)

  5. Look for the usual suspects. A surprisingly small, tight, interconnected group is generating this research. You can look the authors up on Google or Google Scholar, or browse through my previous blog posts and see what I have said about them. As I will point out in my next blog post, one got withering criticism for her claim that drinking carbonated sodas, but not sweetened fruit drinks, shortened your telomeres, so that drinking soda was worse than smoking. My colleagues and I re-analyzed the data of another of the authors. We found that, contrary to what he claimed, pursuing meaning rather than pleasure in your life did not affect gene expression related to immune function. We also showed that substituting randomly generated data worked as well as what he got from blood samples in replicating his original results. I don’t think it is ad hominem to point out that both of these authors have a history of making implausible claims. It speaks to source credibility.
  6. Check and see if there is a trial registration for a study, but don’t stop there. You can quickly check with PubMed whether a report of a randomized trial is registered. Trial registration is intended to ensure that investigators commit themselves in advance to one or maybe two primary outcomes; you can then check whether that is what they emphasized in their paper, and whether what is said in the report of the trial fits with what was promised in the protocol. Unfortunately, I could find a registration for only one of these trials. That registration was vague on which outcome variables would be assessed and did not mention the outcome emphasized in the published paper (!). The registration also said the sample would be larger than what was reported in the published study. When researchers have difficulty with recruitment, their study is often compromised in other ways. I’ll show how this study was compromised.

Well, it looks like applying these generally useful rules of thumb is not always so easy with these studies. I think the small sample sizes across all of the studies would be enough to decide that this research has yet to yield meaningful results and certainly does not support the claims being made.

But readers who are motivated to put in the time to probe deeper will come up with strong signs of p-hacking and questionable research practices.

  7. Check the report of the randomized trial and see if you can find a declaration of one or two primary outcomes and a limited number of secondary outcomes. What you will find instead is that these studies always have more outcome variables than patients receiving the interventions. The opportunities for cherry-picking positive findings and discarding the rest are huge, especially because it is so hard to assess what data were collected but not reported. (The arithmetic sketch after this list shows just how huge.)
  8. Check and see if you can find tables of unadjusted primary and secondary outcomes. Honest and transparent reporting involves giving readers a look at simple statistics so they can decide whether results are meaningful. For instance, if effects on stress and depressive symptoms are claimed, are the results impressive and clinically relevant? In almost all cases, no peeking is allowed. Instead, the authors provide analyses and statistics with lots of adjustments made. They break lots of rules in doing so, especially with such small samples. These authors are virtually assured of getting results to crow about.
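Why does a surplus of outcome variables matter so much? Because even a completely inert intervention will look “effective” on something if enough things are measured. Here is a minimal sketch of the arithmetic; the outcome counts are illustrative assumptions rather than tallies from these particular trials, and the calculation assumes independent outcomes (correlated outcomes soften it only somewhat).

```python
# Sketch: with k independent outcomes and alpha = 0.05, the chance that a
# do-nothing intervention yields at least one "significant" result is
# 1 - 0.95**k. The outcome counts below are illustrative.
for k in (1, 5, 10, 20, 40):
    p_any = 1 - 0.95 ** k
    print(f"{k:>2} outcomes -> {p_any:.0%} chance of at least one false positive")
```

With 20 outcomes, the chance of at least one spurious “significant” finding is about 64%; with 40, about 87%. A null intervention is more likely than not to hand its investigators something to crow about.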

Famously, Joe Simmons and Leif Nelson hilariously published claims that briefly listening to the Beatles’ “When I’m Sixty-Four” left students a year and a half younger than if they had been assigned to listen to “Kalimba.” Simmons and Nelson knew this was nonsense, but their intent was to show what researchers can do if they have free rein with how they analyze their data and what they report. They revealed the tricks they used, but those tricks were minor league and amateurish compared to what the authors of these trials consistently did in claiming that yoga, tai chi, and mindfulness modified the expression of DNA.
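One of the tricks Simmons and Nelson exposed – peeking at the data and stopping as soon as p < .05 – is easy to simulate. The sketch below applies it to data with no true effect at all; the batch size and the ceiling on sample size are my illustrative choices, not parameters from their paper or from these trials.

```python
# Sketch of one "researcher degree of freedom": optional stopping.
# Both groups are drawn from the SAME null distribution, yet testing after
# every batch of subjects and stopping at the first p < 0.05 inflates the
# false-positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, batch, max_n = 5_000, 10, 100
false_positives = 0

for _ in range(n_sims):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0, 1, batch))
        b.extend(rng.normal(0, 1, batch))
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:                # "significant" -- stop and write it up
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / n_sims:.1%}")
```

Instead of the advertised 5%, repeated peeking yields a false-positive rate in the high teens – and that is just one degree of freedom. Combine it with outcome switching and post hoc adjustments, and “significant” results are nearly guaranteed.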

Stay tuned for my next blog post, where I go through the six studies. But consider this if you or a loved one have to make an immediate decision about whether to plunge into the world of woo-woo, unproven medicine in hopes of altering DNA expression: I will show that the authors of these studies did not get the results they claimed. But who should care if they had? The effects were laughably trivial. As the authors of the review about which I have been complaining noted:

One other problem to consider are the various environmental and lifestyle factors that may change gene expression in similar ways to MBIs [Mind-Body Interventions]. For example, similar differences can be observed when analyzing gene expression from peripheral blood mononuclear cells (PBMCs) after exercise. Although at first there is an increase in the expression of pro-inflammatory genes due to regeneration of muscles after exercise, the long-term effects show a decrease in the expression of pro-inflammatory genes (55). In fact, 44% of interventions in this systematic review included a physical component, thus making it very difficult, if not impossible, to discern between the effects of MBIs from the effects of exercise. Similarly, food can contribute to inflammation. Diets rich in saturated fats are associated with pro-inflammatory gene expression profile, which is commonly observed in obese people (56). On the other hand, consuming some foods might reduce inflammatory gene expression, e.g., drinking 1 l of blueberry and grape juice daily for 4 weeks changes the expression of the genes related to apoptosis, immune response, cell adhesion, and lipid metabolism (57). Similarly, a diet rich in vegetables, fruits, fish, and unsaturated fats is associated with anti-inflammatory gene profile, while the opposite has been found for Western diet consisting of saturated fats, sugars, and refined food products (58). Similar changes have been observed in older adults after just one Mediterranean diet meal (59) or in healthy adults after consuming 250 ml of red wine (60) or 50 ml of olive oil (61). However, in spite of this literature, only two of the studies we reviewed tested if the MBIs had any influence on lifestyle (e.g., sleep, diet, and exercise) that may have explained gene expression changes.

How about taking tango lessons instead? You would at least learn dance steps, get exercise, and decrease any social isolation. And so what if there were no benefits beyond those of taking up these other activities?