The PACE PLOS One data will not be released and the article won’t be retracted

PLOS One has bought into discredited arguments about patient consent forms not allowing sharing of anonymized data. PLOS One is no longer at the vanguard of open science through routine data sharing.


Two years have passed since I requested release of the PLOS One PACE data, eight months since the Expression of Concern was posted. What can we expect?

expression of concern-page-0

Solving the 9-dot problem involves paying attention and thinking outside the box.

If we connect some usually unrecognized dots, we can see that the PLOS One editors are biased toward the PACE investigators, favoring them over the other stakeholders in whether the data are released as promised.

Spoiler: The PLOS One Senior Editors completed the pre-specified process of deciding what to do about the data not being shared. They took no action. Months later the Senior Editors reopened the process and invited an outspoken co-author of PACE investigator Trudy Chalder to help them reconsider.

A lot of us weren’t cynical enough to notice.

The international trend toward making the uploading of data into publicly accessible repositories a requirement for publication will continue. PLOS One has fallen behind by buying into discredited arguments about patient consent forms not allowing the sharing of anonymized data.

PLOS One is no longer at the vanguard of open science through routine data sharing.

The expression of concern

Actual Expression of Concern as displayed on the PLOS One article.

The editors’ section of the Expression of Concern ends with:

In spite of requests to the authors and Queen Mary University of London, we have not yet received confirmation that an institutional process compatible with the existing PLOS data policy at the time has been developed or implemented for the independent evaluation of requests for data from this study. We conclude that the lack of resolution towards release of the dataset is not in line with the journal’s editorial policy and we are thus issuing this Expression of Concern to alert readers about the concerns raised about this article.

This is followed by the PACE investigators’ response:

Statement from the authors

We disagree with the Expression of Concern about our health economic paper that PLOS ONE has issued and do not accept that it is justified. We believe that data should be made available and have shared data from the PACE trial with other researchers previously, in line with our data sharing policy. This is consistent with the data sharing policies of Queen Mary University of London, and the Medical Research Council, which funded the trial. The policy allows for the sharing of data with other researchers, so long as safeguards are agreed regarding confidentiality of the data and consent as specified by the Research Ethics Committee (REC). We have also pointed out to PLOS ONE that our policy includes an independent appeal process, if a request is declined, so this policy is consistent with the journal’s policy when the paper was published.

During negotiations with the journal over these matters, we have sought further guidance from the PACE trial REC. They have advised that public release, even of anonymised data, is not appropriate. As a consequence, we are unable to publish the individual patient data requested by the journal. However, we have offered to provide key summarised data, sufficient to provide an independent re-analysis of our main findings, so long as it is consistent with the REC decision, on the PLOS ONE website. As such we are surprised by and question the decision by the journal to issue this Expression of Concern.

Check out my critique of their claim to have shared data from the PACE trial with other researchers:

Don’t bother to apply: PACE investigators issue guidance for researchers requesting access to data.

Conflict of interest: Nothing to declare?

 The PACE authors were thus given an extraordinary opportunity to undermine the editors’ Expression of Concern.

It is just as extraordinary that there is no disclosure of conflict of interest. After all, it is their paper that is receiving an Expression of Concern because of their failure to provide data as promised.

In contrast, when the PLOS One editors placed a discreet Editors' Note about the data not being shared in the comment section of the article in 2015, it carried a COI declaration:

Competing interests declared: PLOS ONE Staff

That COI aroused the curiosity of Retraction Watch, who asked PLOS One:

We weren’t sure what the last line was referring to, so contacted Executive Editor Veronique Kiermer. She told us that staff sometimes include their byline under “competing interests,” so the authorship is immediately clear to readers who may be scanning a series of comments.

Commentary from Retraction Watch

PLOS upgrades flag on controversial PACE chronic fatigue syndrome trial; authors “surprised”

Notable excerpts:

A spokesperson for PLOS told us this is the first time the journal has included a statement from the authors in an EOC:

This has been a complex case involving many stakeholders and we wanted to document the different aspects of the case in a fair manner.

And

We asked if the journal plans to retract the paper if the authors fail to provide what it’s asked for; the spokesperson explained:

At this time, PLOS stands by its Expression of Concern. For now, we have exhausted the options to make the data available in accordance with our policy at the time, but PLOS still seeks a positive outcome to this case for all parties. It is our intention to update this notice when a mechanism is established that allows concerns about the article’s analyses to be addressed while protecting patient privacy. PLOS has not given the authors a deadline.

Note: “PLOS has not given the authors a deadline.”

One of the readers who has requested the data is James Coyne, a psychologist at the University Medical Center, Groningen, who submitted his request 18 months ago (and wrote about it on the PLOS blog site). Although some of the data have been released (to one person under the Freedom of Information Act), it’s not nearly enough to conduct an analysis, Coyne told us:

This small data set does not allow recalculation of original primary outcomes but did allow recalculation of recovery data. Release of the PLOS data is crucial for a better understanding of what went on in that trial. That’s why the investigators are fighting so hard.

Eventually, Coyne began suggesting to PLOS that he would organize public protests and scientific meetings attended by journal representatives.

I think it is the most significant issue in psychotherapy today, in terms of data sharing. It’s a flagrant violation of international standards.

The Retraction Watch article cited a 2015 STAT article that was written by Retraction Watch co-founders Ivan Oransky and Adam Marcus. That article was sympathetic to my request:

If the information Coyne is seeking is harmful and distressing to the staff of the university — and that’s the university’s claim, not ours — that’s only because the information is in fact harmful and distressing. In other words, revealing that you have nothing to hide is much less embarrassing than revealing that you’re hiding something.

The STAT article also said:

To be clear, Coyne’s not asking for sex tapes or pictures of lab workers taking bong hits. He’s asking for raw data so that he can evaluate whether what a group of scientists reported in print is in fact what those data show. It’s called replication, and as Richard Smith, former editor of The BMJ (and a member of our board of directors), put it last week, the refusal goes “against basic scientific principles.” But, unfortunately, stubborn researchers and institutions have used legal roadblocks before to prevent scrutiny of science.

The PLOS One Editors’ blog post

The Expression of Concern was accompanied by a May 2, 2017 blog post from Iratxe Puebla, Managing Editor for PLOS ONE, and Joerg Heber, Editor-in-Chief:

Data sharing in clinical research: challenges and open opportunities

Since we feel we have exhausted the options to make the data available responsibly, and considering the questions that were raised about the validity of the article’s conclusions, we have decided to post an Expression of Concern [5] to alert readers that the data are not available in line with the journal’s editorial policy. It is our intention to update this notice when a mechanism is established that allows concerns about the article’s analyses to be addressed while protecting patient privacy.

This statement seems to suggest that the ball is in the PACE investigators’ court and that the PLOS One editors are prepared to wait. But reading the rest of the blog post, it becomes apparent that PLOS One is wavering on its data sharing policy.

Current challenges and opportunities ahead

During our follow up it became clear that there is little consensus of opinion on the sharing of this particular dataset. Experts from the Data Advisory Board whom we consulted expressed different views on the stringency of the journal reaction. Overall they agreed on the need to consider the risk to confidentiality of the trial participants and on the relevance of developing mechanisms for consideration of data requests by an independent body or committee. Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.

Fact checking the PLOS One Editors’ blog, and a rebuttal

John Peter fact checked the PLOS One editors’ blog. It came up short on a number of points.

“Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.”

This line is misleading and reveals either ignorance or misunderstanding of the decision in Matthees.

The Information Tribunal (IT) is not a committee. It is part of the courts system of England and Wales.

…the IT’s decisions may be appealed to a higher court. As QMUL chose not to exercise this right but to opt instead to accept the decision, then clearly it considered there were no grounds for appeal. The decision stands in its entirety and applies without condition or caveat.

And

The court had two decisions to make:

First, could and should trial data be released and if so what test should apply to determine whether particular data should be made public? Second, when that test is applied to this particular set of data, do they meet that test?

The unanimous decision on the first question was very clear: there is no legal or ethical consideration which prevents release; release is permitted by the consent forms; there is a strong public interest in the release; making data available advances legitimate scientific debate; and the data should be released.

The test set by this unanimous decision was simple: whether data can be anonymized. Furthermore, again unanimously, the Tribunal stated that the test for anonymization is not absolute. It is whether the risk of identification is reasonably likely, not whether it is remote, and whether patients can be identified without prior knowledge, specialist knowledge or equipment, or resort to criminality.

It was on applying this test to the data requested, on whether they could be properly anonymized, that the IT reached a majority decision.

On the principles, on how these decisions should be made, on the test which should be applied and on the nature of that test, the court was unanimous.

It should also be noted that to share data which have not been anonymized would be in breach of the Data Protection Act. QMUL has shared these data with other researchers. QMUL should either report itself to the Information Commissioner’s Office or accept that the data can be anonymized. In which case, the unanimous decision of the IT is very clear: the data should be shared.

PLOS ONE should apply the IT decision and its own regulations and demand the data be shared or the paper retracted.
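The Tribunal's anonymization test is legal rather than computational, but the underlying question, whether anyone can be picked out of the data without prior or specialist knowledge, is one researchers check routinely. As a loose illustration only (the columns are hypothetical, not PACE variables), a minimal k-anonymity check on a table of trial records might look like this:

```python
import pandas as pd

def smallest_cell(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Size of the smallest group sharing one combination of quasi-identifiers.
    A table is k-anonymous on these columns when this value is at least k;
    a value of 1 means someone is unique on these fields alone."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical records; illustrative columns, not PACE variables
records = pd.DataFrame({
    "age_band": ["30-39", "30-39", "40-49", "40-49", "40-49"],
    "sex":      ["F", "F", "M", "M", "F"],
    "centre":   ["A", "A", "B", "B", "B"],
})

print(smallest_cell(records, ["age_band", "sex", "centre"]))  # 1: one row is unique
```

Coarsening the quasi-identifiers (wider age bands, pooled centres) raises the smallest cell size, which is the routine fix when a proposed release fails this kind of check.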

Data Advisory Board

The Editors’ blog referred to “Experts from the Data Advisory Board… express[ing] different views on the stringency of the journal reaction.”

That was a source of puzzlement for me. Established procedures make no provision for an advisory board as part of the process, nor for any appeal.

A Google search clarified matters. I had been to this page a number of times before and did not remember seeing this statement. There is no date, nor any indication that it was added after the rest of the page.

PLOS has formed an external board of advisors across many fields of research published in PLOS journals. This board will work with us to develop community standards for data sharing across various fields, provide input and advice on especially complex data-sharing situations submitted to the journals, define data-sharing compliance, and proactively work to refine our policy. If you have any questions or feedback, we welcome you to write to us at data@plos.org.

The availability of data for reanalysis and independent probing has lots of stakeholders. Independent investigators, policymakers, and patients all have a stake. I don’t recognize the names on this list and see no indication that consumers affected by what is reported in clinical and health services papers have a role in making decisions about the release of data. But one name stands out.

Who is Malcolm Macleod and what is he doing in this decision-making process?

Malcolm Macleod is quoted in the Science Media Centre reaction to the PACEgate special issue:

 Expert reaction to Journal of Health Psychology’s Special Issue on The PACE Trial

Prof. Malcolm Macleod, Professor of Neurology and Translational Neuroscience, University of Edinburgh, said:

“The PACE trial, while not perfect, provides far and away the best evidence for the effectiveness of any intervention for chronic fatigue; and certainly is more robust than any of the other research cited. Reading the criticisms, I was struck by how little actual meat there is in them; and wondered where some of the authors came from. In fact, one of them lists as an institution a research centre (Soerabaja Research Center) which only seems to exist as an affiliation on papers he wrote criticising the PACE trial.

“Their main criticisms seem to revolve around the primary outcome was changed halfway through the trial: there are lots of reasons this can happen, some justifiable and others not; the main think is whether it was done without knowledge of the outcomes already accumulated in the trial and before data lock – which is what was done here.

“So I don’t think there is really a story here, apart from a group of authors, some of doubtful provenance, kicking up dust about a study which has a few minor wrinkles (as all do) but still provides information reliable enough to shape practice. If you substitute ‘CFS’ for ‘autism’ and ‘PACE trial’ for ‘vaccination’ you see a familiar pattern…”

The declaration of interest is revealing in what it says and what it does not say.

Prof. Macleod: “Prof Sharpe used to have an office next to my wife’s; and I sit on the PLoS Data board that considered what to do about one of their other studies.”

The declaration fails to reveal a recent publication co-authored by Macleod and Trudy Chalder:

Wu S, Mead G, Macleod M, Chalder T. Model of understanding fatigue after stroke. Stroke. 2015 Mar 1;46(3):893-8.

This press release comes from an organization strongly committed to protecting the PACE trial from independent scrutiny. The SMC even organized a letter-writing campaign headed by Peter White to petition Parliament to exclude universities from Freedom of Information Act requests. Of course, that would effectively block requests for data.

Why would the PLOS One editors involve such a person in reconsidering what had been a decision in favor of releasing the data?

Connect the dots.

The trend toward making the uploading of data into publicly accessible repositories a requirement for publication will continue. PLOS One has bought into discredited arguments about patient consent forms not allowing the sharing of anonymized data. PLOS One is no longer at the vanguard of open science through routine data sharing.

Sordid tale of a study of cognitive behavioral therapy for schizophrenia gone bad

What motivates someone to publish that paper without checking it? Laziness? Naivety? Greed? Now that’s one to ponder. – Neuroskeptic, Science needs vigilantes.

We need to:

  • Make the world safe for post-publication peer review (PPR) commentary.
  • Ensure appropriate rewards for those who do it.
  • Take action against those who try to make life unpleasant for the people who toil hard for a scientific literature that is more trustworthy.

In this issue of Mind the Brain, I set the stage for my teaming up with Magneto to bring some bullies to justice.

The background tale of a modest study of cognitive behavior therapy (CBT) for patients with schizophrenia has been told in bits and pieces elsewhere.

The story at first looked like it was heading for a positive outcome more worthy of a blog post than the shortcomings of a study in an obscure journal. The tale would go:

A group organized on the internet called attention to serious flaws in the reporting of a study. We then witnessed the self-correcting of science in action.

If only this story were complete and accurately described scientific publishing today.

Daniel Lakens’ blog post, How a Twitter HIBAR [Had I Been A Reviewer] ends up as a published letter to the editor, recounts the story, beginning with expressions of puzzlement and skepticism on Twitter.

Gross errors were made in a table and a figure. These were bad enough in themselves, but they also seemed to point to the reported results not supporting the claims made in the article.

A Swedish lecturer blogged Through the looking glass into an oddly analyzed clinical paper.

Some of those involved in the Twitter exchange banded together in writing a letter to the editor.

Smits, T., Lakens, D., Ritchie, S. J., & Laws, K. R. (2014). Statistical errors and omissions in a trial of cognitive behavior techniques for psychosis: commentary on Turkington et al. The Journal of Nervous and Mental Disease, 202(7), 566.

Lakens explained in his blog:

Now I understand that getting criticism on your work is never fun. In my personal experience, it very often takes a dinner conversation with my wife before I’m convinced that if people took the effort to criticize my work, there must be something that can be improved. What I like about this commentary is that is shows how Twitter is making post-publication reviews possible. It’s easy to get in contact with other researchers to discuss any concerns you might have (as Keith did in his first Tweet). Note that I have never met any of my co-authors in real life, demonstrating how Twitter can greatly extend your network and allows you to meet interesting and smart people who share your interests. Twitter provides a first test bed for your criticisms to see if they hold up (or if the problem lies in your own interpretation), and if a criticism is widely shared, can make it fun to actually take the effort to do something about a paper that contains errors.

Furthermore,

It might be slightly weird that Tim, Stuart, and myself publish a comment in the Journal of Nervous and Mental Disease, a journal I guess none of us has ever read before. It also shows how Twitter extends the boundaries between scientific disciplines. This can bring new insights about reporting standards  from one discipline to the next. Perhaps our comment has made researchers, reviewers, and editors who do research on cognitive behavioral therapy aware of the need to make sure they raise the bar on how they report statistics (if only so pesky researchers on Twitter leave you alone!). I think this would be great, and I can’t wait until researchers from another discipline point out statistical errors in my own articles that I and my closer peers did not recognize, because anything that improves the way we do science (such as Twitter!) is a good thing.

Hindsight: If the internet group had been the original reviewers of the article…

The letter was low key and calmly pointed out obvious errors. You can see it here. Tim Smits’ blog Don’t get all psychotic on this paper: Had I (or we) Been A Reviewer (HIBAR) describes what had to be left out to keep within the word limit.

The original Table 2 had lots of problems (a quick sanity check follows the list):

  • The confidence intervals were suspiciously wide.
  • The effect sizes seemed too large for what the modest sample size should yield.
  • The table was inconsistent with information in the abstract.
  • Neither the table nor the accompanying text reported any test of significance, or any means and standard deviations.
  • The confidence intervals for two different outcomes were identical, yet one of them had an effect size equal to its own lower bound.
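Several of these red flags can be checked with nothing more than arithmetic. As a rough sketch (the sample sizes and effect size below are illustrative placeholders, not values from the paper), the usual large-sample standard error for Cohen's d tells you approximately how wide a 95% confidence interval should be:

```python
import math

def cohens_d_ci(d: float, n1: int, n2: int, z: float = 1.96):
    """Approximate 95% CI for Cohen's d, using the standard
    large-sample standard error (Hedges & Olkin)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# With ~30 participants per arm (illustrative, not the study's n),
# a d of 0.5 should carry a CI roughly 1.0 wide:
lo, hi = cohens_d_ci(0.5, 30, 30)
print(f"[{lo:.2f}, {hi:.2f}]")
# A published CI much wider than this, or one whose lower bound equals
# the effect size itself, signals that the reported numbers cannot all be right.
```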


Figure 5 was missing labels and definitions on both axes, rendering it uninterpretable. Duh?

The authors of the letter were behaving like a blue helmeted international peacekeeping force, not warriors attacking bad science.

But you don’t send peacekeeping troops into an active war zone.

In making recommendations, the Internet group did politely introduce the R word:

We believe the above concerns mandate either an extensive correction, or perhaps a retraction, of the article by Turkington et al. (2014). At the very least, the authors should reanalyze their data and report the findings in a transparent and accurate manner.

Fair enough, but I doubt the authors of the letter appreciated how upsetting this reasonable advice was or anticipated what reaction would be coming.

A response from an author of the article and a late night challenge to debate

The first author of the article published a reply:

Turkington, D. (2014). The reporting of confidence intervals in exploratory clinical trials and professional insecurity: a response to Ritchie et al. The Journal of Nervous and Mental Disease, 202(7), 567.

He seemed to claim to have re-examined the study data, and asserted that:

  • The findings were accurately reported.
  • A table of means and standard deviations was unnecessary because of the comprehensive reporting of confidence intervals and p-values in the article.
  • The missing details from the figure were self-evident.

The group who had assembled on the internet was not satisfied. An email exchange with Turkington and the editor of the journal confirmed that Turkington had not actually re-examined the raw data file, but only a summary with statistical tables.

The group requested the raw data. In a subsequent letter to the editor, they would describe Turkington as providing the data in a timely fashion, but the exchange between them was anything but cordial. Turkington at first balked, saying that the data were not readily available because the statistician had retired. He nonetheless eventually provided the data, but not before first sending off a snotty email challenging Tim Smits to come to Newcastle University to be “slaughtered” in a debate.


Tim Smits declined:

Dear Douglas,

Thanks for providing the available data as quick as possible. Based on this and the tables in the article, we will try to reconstruct the analysis and evaluate our concerns with it.

With regard to your recent invitation to “slaughter” me at Newcastle University, I politely want to decline that invitation. I did not have any personal issue in mind when initiating the comment on your article, so a personal attack is the least of my priorities. It is just from a scientific perspective (but an outsider to the research topic) that I was very confused/astonished about the lack of reporting precision and what appears to be statistical errors. So, if our re-analysis confirms that first perception, then I am of course willing to accept your invitation at Newcastle university to elaborate on proper methodology in intervention studies, since science ranks among the highest of my priorities.

Best regards,

Tim Smits

When I later learned of this email exchange, I wrote to Turkington and offered to go to Newcastle to debate, either as Tim Smits’ second or alone. Turkington asked me to submit my CV to show that I wasn’t a crank. I complied, but he has yet to accept my offer.

A reanalysis of the data and a new table

Smits, T., Lakens, D., Ritchie, S. J., & Laws, K. R. (2015). Correcting errors in Turkington et al. (2014): Taking criticism seriously. The Journal of Nervous and Mental Disease, 203(4), 302-303.

The group reanalyzed the data and the title of their report leaked some frustration.

We confirmed that all the errors identified by Smits et al. (2014) were indeed errors. In addition, we observed that the reported effect sizes in Turkington et al. (2014) were incorrect by a considerable margin. To correct these errors, Table 2 and all the figures in Turkington et al. (2014) need to be changed.

The sentence in the Abstract where effect sizes are specified needs to be rewritten.
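Recomputing a between-group standardized effect size from summary statistics is mechanical, which is what makes errors of this magnitude hard to excuse. A minimal sketch of the calculation, assuming Cohen's d with a pooled standard deviation was the intended metric, and using placeholder numbers rather than the trial's data:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Between-group Cohen's d using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Placeholder summary statistics, not values from Turkington et al.
print(round(cohens_d(m1=12.3, sd1=4.1, n1=30, m2=10.1, sd2=4.4, n2=30), 2))  # ~0.52
```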

A revised table based on their reanalyses was included.

Given that the recommendation of their first letter had apparently been dismissed, they now wrote:

To conclude, our recommendation for the Journal and the authors would now be to acknowledge that there are clear errors in the original Turkington et al. (2014) article and either accept our corrections or publish their own corrigendum. Moreover, we urge authors, editors, and reviewers to be rigorous in their research and reviewing, while at the same time being eager to reflect on and scrutinize their own research when colleagues point out potential errors. It is clear that the authors and editors should have taken more care when checking the validity of our criticisms. The fact that a rejoinder with the title “A Response to Ritchie et al. [sic]” was accepted for publication in reply to a letter by Smits et al. (2014) gives the impression that our commentary did not receive the attention it deserved. If we want science to be self-correcting, it is important that we follow ethical guidelines when substantial errors in the published literature are identified.

Sound and fury signifying nothing

Publication of their letter was accompanied by a blustery commentary from the journal’s statistical editor, full of innuendo and pomposity.

“A harmless hilarity and a buoyant cheerfulness are not infrequent concomitants of genius.” – Charles Caleb Colton

Cicchetti, D. V. (2015). Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective. The Journal of Nervous and Mental Disease, 203(4), 304-305.

He suggested that the team assembled on the internet

reanalyzed the data of Turkington et al. on the basis that it contained some serious errors that needed to be corrected. They also reported that the statistic that Turkington et al. had used to assess effect sizes (ESs) was an inappropriate metric.

Well, did Turkington’s table contain errors, and was the metric inappropriate? If so, was a formal correction or even retraction needed? Cicchetti reproduced the internet group’s table, but did not immediately offer his opinion. So the uncorrected article stands as published. Interested persons downloading it from behind the journal’s paywall won’t be alerted to the controversy.

Instead of dealing with the issues at hand, Cicchetti launched into an irrelevant lecture about Jacob Cohen’s arbitrary designation of effect sizes as small, medium, or large. Anything he said had already appeared, clearer and more accurate, in an article by Daniel Lakens, one of the internet group’s authors. Cicchetti cited that article, but only as a basis for libeling the open access journal in which it appeared.

To be perfectly candid, the reader needs to be informed that the journal that published the Lakens (2013) article, Frontiers in Psychology, is one of an increasing number of journals that charge exorbitant publication fees in exchange for free open access to published articles. Some of the author costs are used to pay reviewers, causing one to question whether the process is always unbiased, as is the desideratum. For further information, the reader is referred to the following Web site: http://www.frontiersin.org/Psychology/fees.

Cicchetti further chastised the internet group for disrespecting the saints of power analysis.

As an additional comment, the stellar contributions of Helena Kraemer and Sue Thiemann (1987) were noticeable by their very absence in the Smits et al. critique. The authors, although genuinely acknowledging the lasting contributions of Jacob Cohen to our understanding of ES and power analysis, sought to simplify the entire enterprise

Jacob Cohen is dead and cannot speak. But good Queen Mother Helena is very much alive and would surely object to being drawn into this nonsense. I encourage Cicchetti to ask what she thinks.

Ah, but what about the table based on the re-analyses of the internet group that Cicchetti had reproduced?

The reader should also be advised that this comment rests upon the assumption that the revised data analyses are indeed accurate because I was not privy to the original data.

Actually, when Turkington sent the internet group the study data, he included Cicchetti in the email.

The internet group experienced one more indignity from the journal that they had politely tried to correct. They had reproduced Turkington’s original table in their letter. The journal sent them an invoice for 106 euros because the table was copyrighted. It took a long email exchange before this billing was rescinded.

Science Needs Vigilantes

Imagine a world where we no longer depend on a few cronies of an editor to decide once and forever the value of a paper. This would replace the present order in which much of the scientific literature is untrustworthy, where novelty and sheer outrageousness of claims are valued over robustness.

Imagine we have constructed a world where post publication commentary is welcomed and valued. Data are freely available for reanalysis and the rewards are there for performing those re-analyses.

We clearly are not there yet, and certainly not with this flawed article. The sequence of events that I have described has so far not produced a correction of the paper. As it stands, the paper concludes that nurses can and should be given brief training that will allow them to effectively treat patients with severe and chronic mental disorders. This paper encourages actions that may put such patients and society at risk of ineffectual and neglectful treatment.

The authors of the original paper and the editor responded with dismissal of the criticisms, with ridicule, and, in the editor’s case at least, with libel of open access journals. Obviously, we have not reached the point at which those willing to re-examine, and if necessary re-analyze, data are appropriately respected and protected from unfair criticism. The current system of publishing gives authors whose work has been questioned, and editors who are defensive of that work, no matter how incompetent and inept it may be, the last word. But there is always the force of social media: tweets and blogs.

The critics were actually much too kind and restrained in a critique narrowly based on re-analyses. They ignored so much:

  • The target paper being an underpowered feasibility study passed off as a source of estimates of what a sufficiently sized randomized trial would yield.
  • The continuity between the mischief done in this article and the tricks and spin in the past work of the author, Turkington.
  • The laughably inaccurate lecture of the editor.
  • The lowlife journal in which the article was published.

These problems deserve a more unrestrained and thorough trashing. Journals may not yet be self-correcting, but blogs can do a reasonable job of exposing bad science.

Science needs vigilantes, because of the intransigence of those pumping crap into the literature.

Coming up next

In my next issue of Mind the Brain I’m going to team up with Magneto. You may recall I previously collaborated with him and Neurocritic to scrutinize some junk science that Jim Coan and Susan Johnson had published in PLOS One. Their article crassly promoted to clinicians what they claimed was a brain-soothing couples therapy. We obtained an apology and a correction in the journal for undeclared conflict of interest.

But that incident left Magneto upset with me. He felt I did not give sufficient attention to the continuity between how Coan had slipped post hoc statistical manipulations into the PLOS article to get positive results and what he had done in a past paper with Richard Davidson. Worse, I had tipped off Jim Coan about our checking his work. Coan launched a pre-emptive tirade against post-publication scrutiny, his now infamous Negative Psychology rant. He focused his rage on Neuroskeptic, not Neurocritic or me, but the timing was not a coincidence. He then followed up by denouncing me on Facebook as the Chopra Deepak of skepticism.

I still have not unpacked that oxymoronic statement or decided whether it was a compliment.

OK, Magneto, I will be less naïve and more thorough this round. I will pass on whatever you uncover.

Check back if you want to augment your critical appraisal skills with some unconventional ones, or if you just enjoy a spectacle. If you want to arrive at your own opinions ahead of time, email Douglas Turkington (douglas.turkington@ntw.nhs.uk) for a PDF of his paywalled article. Tell him I said hello. The offer of a debate still stands.