Sordid tale of a study of cognitive behavioral therapy for schizophrenia gone bad

What motivates someone to publish that paper without checking it? Laziness? Naivety? Greed? Now that’s one to ponder. – Neuroskeptic, Science needs vigilantes.

We need to

  • Make the world safe for post-publication peer review (PPR) commentary.
  • Ensure appropriate rewards for those who do it.
  • Take action against those who try to make life unpleasant for those who toil hard for a more trustworthy scientific literature.

In this issue of Mind the Brain, I set the stage for my teaming up with Magneto to bring some bullies to justice.

The background tale of a modest study of cognitive behavior therapy (CBT) for patients with schizophrenia has been told in bits and pieces elsewhere.

The story at first looked like it was heading for a positive outcome more worthy of a blog post than the shortcomings of a study in an obscure journal. The tale would go:

A group organized on the internet called attention to serious flaws in the reporting of a study. We then witnessed the self-correcting of science in action.

If only this story were complete and accurately described scientific publishing today.

Daniel Lakens’ blog post, How a Twitter HIBAR [Had I Been A Reviewer] ends up as a published letter to the editor, recounts the story, beginning with expressions of puzzlement and skepticism on Twitter.

Gross errors were made in a table and a figure. These were bad enough in themselves, but they also seemed to point to reported results not supporting the claims made in the article.

A Swedish lecturer blogged Through the looking glass into an oddly analyzed clinical paper.

Some of those involved in the Twitter exchange banded together in writing a letter to the editor.

Smits, T., Lakens, D., Ritchie, S. J., & Laws, K. R. (2014). Statistical errors and omissions in a trial of cognitive behavior techniques for psychosis: commentary on Turkington et al. The Journal of Nervous and Mental Disease, 202(7), 566.

Lakens explained in his blog

Now I understand that getting criticism on your work is never fun. In my personal experience, it very often takes a dinner conversation with my wife before I’m convinced that if people took the effort to criticize my work, there must be something that can be improved. What I like about this commentary is that is shows how Twitter is making post-publication reviews possible. It’s easy to get in contact with other researchers to discuss any concerns you might have (as Keith did in his first Tweet). Note that I have never met any of my co-authors in real life, demonstrating how Twitter can greatly extend your network and allows you to meet interesting and smart people who share your interests. Twitter provides a first test bed for your criticisms to see if they hold up (or if the problem lies in your own interpretation), and if a criticism is widely shared, can make it fun to actually take the effort to do something about a paper that contains errors.

Furthermore,

It might be slightly weird that Tim, Stuart, and myself publish a comment in the Journal of Nervous and Mental Disease, a journal I guess none of us has ever read before. It also shows how Twitter extends the boundaries between scientific disciplines. This can bring new insights about reporting standards  from one discipline to the next. Perhaps our comment has made researchers, reviewers, and editors who do research on cognitive behavioral therapy aware of the need to make sure they raise the bar on how they report statistics (if only so pesky researchers on Twitter leave you alone!). I think this would be great, and I can’t wait until researchers from another discipline point out statistical errors in my own articles that I and my closer peers did not recognize, because anything that improves the way we do science (such as Twitter!) is a good thing.

Hindsight: If the internet group had been the original reviewers of the article…

The letter was low key and calmly pointed out obvious errors. You can see it here. Tim Smits’ blog Don’t get all psychotic on this paper: Had I (or we) Been A Reviewer (HIBAR) describes what had to be left out to keep within the word limit.

The original Table 2 had lots of problems:

  • The confidence intervals were suspiciously wide.
  • The effect sizes seemed too large for what the modest sample size should yield (a rough sanity check on these first two points is sketched just after this list).
  • The table was inconsistent with information in the abstract.
  • Neither the table nor the accompanying text reported any test of significance, or any means and standard deviations.
  • Confidence intervals for two different outcomes were identical, yet one had the same value for its effect size as its lower bound.
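To give a sense of the arithmetic behind those first two red flags, here is a minimal sketch of the kind of sanity check anyone can run. The sample sizes and effect size below are invented for illustration; this is not the critics' actual reanalysis, just the sort of back-of-the-envelope calculation that makes a reported confidence interval look plausible or implausible for a trial of modest size.

```python
# Minimal sketch, using made-up numbers: how wide should a 95% CI around
# Cohen's d be for a modestly sized two-arm trial? (Not the critics' code.)
import math

def d_ci(d, n1, n2, z=1.96):
    """Approximate 95% confidence interval for Cohen's d (independent groups)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# With roughly 30 patients per arm, even a large effect (d = 0.8) carries a CI
# of about +/- 0.5. Intervals several times wider than that, or a lower bound
# identical to the point estimate, are exactly the red flags listed above.
print(d_ci(0.8, 30, 30))   # approximately (0.27, 1.33)
```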


Figure 5 was missing labels and definitions on both axes, rendering it uninterpretable. Duh?

The authors of the letter were behaving like a blue helmeted international peacekeeping force, not warriors attacking bad science.

But you don’t send peacekeeping troops into an active war zone.

In making recommendations, the Internet group did politely introduce the R word:

We believe the above concerns mandate either an extensive correction, or perhaps a retraction, of the article by Turkington et al. (2014). At the very least, the authors should reanalyze their data and report the findings in a transparent and accurate manner.

Fair enough, but I doubt the authors of the letter appreciated how upsetting this reasonable advice would be, or anticipated the reaction that was coming.

A response from an author of the article and a late night challenge to debate

The first author of the article published a reply

Turkington, D. (2014). The reporting of confidence intervals in exploratory clinical trials and professional insecurity: a response to Ritchie et al. The Journal of Nervous and Mental Disease, 202(7), 567.

He seemed to claim to have re-examined the study data and to have found that:

  • The findings were accurately reported.
  • A table of means and standard deviations was unnecessary because of the comprehensive reporting of confidence intervals and p-values in the article.
  • The missing details from the figure were self-evident.

The group who had assembled on the internet was not satisfied. An email exchange with Turkington and the editor of the journal confirmed that Turkington had not actually re-examined the raw data, but only a summary with statistical tables.

The group requested the raw data. In a subsequent letter to the editor, they would describe Turkington as providing the data in a timely fashion, but the exchange between them was anything but cordial. Turkington at first balked, saying that the data were not readily available because the statistician had retired. He nonetheless eventually provided the data, but not before sending off a snotty email challenging Tim Smits to come to Newcastle to be “slaughtered” in a debate.


Tim Smits declined:

Dear Douglas,

Thanks for providing the available data as quick as possible. Based on this and the tables in the article, we will try to reconstruct the analysis and evaluate our concerns with it.

With regard to your recent invitation to “slaughter” me at Newcastle University, I politely want to decline that invitation. I did not have any personal issue in mind when initiating the comment on your article, so a personal attack is the least of my priorities. It is just from a scientific perspective (but an outsider to the research topic) that I was very confused/astonished about the lack of reporting precision and what appears to be statistical errors. So, if our re-analysis confirms that first perception, then I am of course willing to accept your invitation at Newcastle university to elaborate on proper methodology in intervention studies, since science ranks among the highest of my priorities.

Best regards,

Tim Smits

When I later learned of this email exchange, I wrote to Turkington and offered to go to Newcastle to debate either as Tim Smits’ second or to come alone. Turkington asked me to submit my CV to show that I wasn’t a crank. I complied, but he has yet to accept my offer.

A reanalysis of the data and a new table

Smits, T., Lakens, D., Ritchie, S. J., & Laws, K. R. (2015). Correcting Errors in Turkington et al. (2014): Taking Criticism Seriously. The Journal of Nervous and Mental Disease, 203(4), 302-303.

The group reanalyzed the data and the title of their report leaked some frustration.

We confirmed that all the errors identified by Smits et al. (2014) were indeed errors. In addition, we observed that the reported effect sizes in Turkington et al. (2014) were incorrect by a considerable margin. To correct these errors, Table 2 and all the figures in Turkington et al. (2014) need to be changed.

The sentence in the Abstract where effect sizes are specified needs to be rewritten.

A revised table based on their reanalyses was included:

Given that the recommendation in their first letter had apparently been dismissed, they now wrote:

To conclude, our recommendation for the Journal and the authors would now be to acknowledge that there are clear errors in the original Turkington et al. (2014) article and either accept our corrections or publish their own corrigendum. Moreover, we urge authors, editors, and reviewers to be rigorous in their research and reviewing, while at the same time being eager to reflect on and scrutinize their own research when colleagues point out potential errors. It is clear that the authors and editors should have taken more care when checking the validity of our criticisms. The fact that a rejoinder with the title “A Response to Ritchie et al. [sic]” was accepted for publication in reply to a letter by Smits et al. (2014) gives the impression that our commentary did not receive the attention it deserved. If we want science to be self-correcting, it is important that we follow ethical guidelines when substantial errors in the published literature are identified.

Sound and fury signifying nothing

Publication of their letter was accompanied by a blustery commentary from the statistical editor for the journal full of innuendo and pomposity.


Cicchetti, D. V. (2015). Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective. The Journal of Nervous and Mental Disease, 203(4), 304-305.

He suggested that the team assembled on the internet

reanalyzed the data of Turkington et al. on the basis that it contained some serious errors that needed to be corrected. They also reported that the statistic that Turkington et al. had used to assess effect sizes (ESs) was an inappropriate metric.

Well, did Turkington’s table contain errors, and was the metric inappropriate? If so, was a formal correction or even retraction needed? Cicchetti reproduced the internet group’s table, but did not immediately offer his opinion. So, the uncorrected article stands as published. Interested persons downloading it from behind the journal’s paywall won’t be alerted to the controversy.

Instead of dealing with the issues at hand, Cicchetti launched into an irrelevant lecture about Jacob Cohen’s arbitrary designation of effect sizes as small, medium, or large. Anything he said had already appeared, more clearly and accurately, in an article by Daniel Lakens, one of the internet group authors. Cicchetti cited that article, but only as a basis for libeling the open access journal in which it appeared.

To be perfectly candid, the reader needs to be informed that the journal that published the Lakens (2013) article, Frontiers in Psychology, is one of an increasing number of journals that charge exorbitant publication fees in exchange for free open access to published articles. Some of the author costs are used to pay reviewers, causing one to question whether the process is always unbiased, as is the desideratum. For further information, the reader is referred to the following Web site: http://www.frontiersin.org/Psychology/fees.

Cicchetti further chastised the internet group for disrespecting the saints of power analysis.

As an additional comment, the stellar contributions of Helena Kraemer and Sue Thiemann (1987) were noticeable by their very absence in the Smits et al. critique. The authors, although genuinely acknowledging the lasting contributions of Jacob Cohen to our understanding of ES and power analysis, sought to simplify the entire enterprise

Jacob Cohen is dead and cannot speak. But good Queen Mother Helena is very much alive and would surely object to being drawn into this nonsense. I encourage Cicchetti to ask what she thinks.

Ah, but what about the table based on the re-analyses of the internet group that Cicchetti had reproduced?

The reader should also be advised that this comment rests upon the assumption that the revised data analyses are indeed accurate because I was not privy to the original data.

Actually, when Turkington sent the internet group the study data, he included Cicchetti in the email.

The internet group experienced one more indignity from the journal that they had politely tried to correct. They had reproduced Turkington’s original table in their letter. The journal sent them an invoice for 106 euros because the table was copyrighted. It took a long email exchange before this billing was rescinded.

Science Needs Vigilantes

Imagine a world where we no longer depend on a few cronies of an editor to decide once and forever the value of a paper. This would replace the present order in which much of the scientific literature is untrustworthy, where novelty and sheer outrageousness of claims are valued over robustness.

Imagine we have constructed a world where post-publication commentary is welcomed and valued. Data are freely available for reanalysis and the rewards are there for performing those re-analyses.

We clearly are not there yet, and certainly not with this flawed article. The sequence of events that I have described has so far not produced a correction of the paper. As it stands, the paper concludes that nurses can and should be given a brief training that will allow them to effectively treat patients with severe and chronic mental disorders. This paper encourages actions that may put such patients and society at risk through ineffectual and neglectful treatment.

The authors of the original paper and the editor responded with dismissal of the criticisms, with ridicule, and, in the editor’s case at least, with libel of open access journals. Obviously, we have not reached the point at which those willing to re-examine and, if necessary, re-analyze data are appropriately respected and protected from unfair criticism. The current system of publishing gives authors whose work has been questioned, and editors who are defensive of that work, no matter how incompetent and inept it may be, the last word. But there is always the force of social media – tweets and blogs.

The critics were actually much too kind and restrained in a critique narrowly based on re-analyses. They ignored so much:

  • The target paper being an underpowered feasibility study passed off as a source of estimates of what a sufficiently sized randomized trial would yield.
  • The continuity between the mischief done in this article and the tricks and spin in author Turkington’s past work.
  • The laughably inaccurate lecture of the editor.
  • The lowlife journal in which the article was published.

These problems deserve a more unrestrained and thorough trashing. Journals may not yet be self-correcting, but blogs can do a reasonable job of exposing bad science.

Science needs vigilantes, because of the intransigence of those pumping crap into the literature.

Coming up next

In my next issue of Mind the Brain I’m going to team up with Magneto. You may recall I previously collaborated with him and Neurocritic to scrutinize some junk science that Jim Coan and Susan Johnson had published in PLOS One. Their article crassly promoted to clinicians what they claimed was a brain-soothing couples therapy. We obtained an apology and a correction in the journal for undeclared conflict of interest.

But that incident left Magneto upset with me. He felt I did not give sufficient attention to the continuity between how Coan had slipped post hoc statistical manipulations into the PLOS article to get positive results and what he had done in a past paper with Richard Davidson. Worse, I had tipped off Jim Coan about our checking his work. Coan launched a pre-emptive tirade against post-publication scrutiny, his now infamous Negative Psychology rant. He focused his rage on Neuroskeptic, not Neurocritic or me, but the timing was not a coincidence. He then followed up by denouncing me on Facebook as the Deepak Chopra of skepticism.

I still have not unpacked that oxymoronic statement and decided if it was a compliment.

OK, Magneto, I will be less naïve and more thorough this round. I will pass on whatever you uncover.

Check back if you just want to augment your critical appraisal skills with some unconventional ones or if you just enjoy a spectacle. If you want to arrive at your own opinions ahead of time, email Douglas Turkington (douglas.turkington@ntw.nhs.uk) for a PDF of his paywalled article. Tell him I said hello. The offer of a debate still stands.

 

Pay $1000 to criticize a bad ‘blood test for depression’ article?

No way, call for retraction.

Would you pay $1,000 for the right to criticize bad science in the journal in which it originally appeared? That is what it costs to participate in post-publication peer review at the online Nature Publishing Group (NPG) journal, Translational Psychiatry.

Damn, NPG is a high-fashion brand, but peer review is quite fallible, even at an NPG journal. Should we have to pay to point out the flawed science that even NPG inevitably delivers? You’d think we were doing them a favor in terms of quality control.

Put differently, should the self-correction on which scientific progress so thoroughly depends require critics be willing to pay, presumably out of their own personal funds? Sure, granting agencies now reimburse publication costs for the research they fund, but a critique is unlikely to qualify.

Take another perspective: Suppose you have a small data set of patients for whom you have blood samples. The limited value of the data set was further compromised by substantial, nonrandom loss to follow-up. But you nonetheless want to use it to solicit industry funding for a “blood test for depression.” Would you be willing to pay a premium of $3,600-$3,900 to publish your results in a prestigious NPG journal, with the added knowledge that it would be insulated from critics?

I was curious just who would get so worked up about an article that they would pay $1,000 to complain.

So, I put Translational Psychiatry in PUBLICATION NAME at Web of Science. It yielded 379 entries. I then applied the restriction CORRESPONDENCE and that left only two entries.

Both presented original data and did not even cite another article in Translational Psychiatry. Maybe the authors were trying to get a publication into an NPG journal on the cheap, at a discount of $2,600.

It appears that nobody has ever published a letter to the editor in Translational Psychiatry. Does that mean that there has never ever been anything about which to complain? Is everything we find in Translational Psychiatry perfectly trustworthy?

I recently posted at Mind the Brain and elsewhere about a carefully-orchestrated media campaign promoting some bad science published in Translational Psychiatry. An extraordinary publicity effort disseminated a Northwestern University press release and video to numerous media outlets. There was an explicit appeal for industry funding for the development of what was supposedly a nearly clinic-ready inexpensive blood test for depression.

The Translational Psychiatry website where I learned of these publication costs displays the standard NPG message, which becomes mocking when paired with a paywall that effectively blocks critics:

“A key strength of NPG is its close relationship with the scientific community. Working closely with scientists, listening to what they say, and always placing emphasis on quality rather than quantity, has made NPG the leading scientific publisher at finding innovative solutions to scientists’ information needs.”

The website also contains the standard NPG assurances about authors’ disclosures of conflicts of interest:

“The statement must contain an explicit and unambiguous statement describing any potential conflict of interest, or lack thereof, for any of the authors as it relates to the subject of the report”

The authors of this particular paper declared:

“EER is named as an inventor on two pending patent applications, filed and owned by Northwestern University. The remaining authors declare no conflict of interest.”

Does this disclosure give readers much clarity concerning the authors’ potential financial conflict of interest? Check out this marketing effort exploiting the Translational Psychiatry article.

Northwestern Researchers Develop RT-qPCR Assay for Depression Biomarkers, Seek Industry Partners

I have also raised questions about a lack of disclosures of conflicts of interest from promoters of Triple P Parenting. The developers claimed earlier that their program was owned by the University of Queensland, so there was no conflict of interest to declare. Further investigation of the university website revealed that the promoters got a lucrative third of proceeds. Once that was revealed, a flood of erratum notices disclosing the financial conflicts of interest of Triple P promoters followed – at least 10 so far. For instance:

[Triple P erratum notice disclosing the promoters’ financial conflicts of interest]

How bad is the bad science?

You can find the full Translational Psychiatry article here. The abstract provides a technical but misleading summary of results:

“Abundance of the DGKA, KIAA1539 and RAPH1 transcripts remained significantly different between subjects with MDD and ND controls even after post-CBT remission (defined as PHQ-9 <5). The ROC area under the curve for these transcripts demonstrated high discriminative ability between MDD and ND participants, regardless of their current clinical status. Before CBT, significant co-expression network of specific transcripts existed in MDD subjects who subsequently remitted in response to CBT, but not in those who remained depressed. Thus, blood levels of different transcript panels may identify the depressed from the nondepressed among primary care patients, during a depressive episode or in remission, or follow and predict response to CBT in depressed individuals.”

This was simplified in a press release that echoed in shamelessly churnalized media coverage. For instance:

“If the levels of five specific RNA markers line up together, that suggests that the patient will probably respond well to cognitive behavioral therapy, Redei said. “This is the first time that we can predict a response to psychotherapy,” she added.”

The unacknowledged problems of the article begin with the authors having only 32 depressed primary-care patients at baseline, whose diagnostic status was not confirmed by gold-standard semi-structured interviews administered by professionals.

But the problems get worse. The critical comparison of patients who recovered with cognitive behavioral therapy versus those who did not occurred in a subsample of nine recovered versus 13 unrecovered patients, the remainder after a loss to follow-up of 10 patients. Baseline results for the 9 + 13 = 22 patients in the follow-up sample did not even generalize back to the original full sample. How, then, could the authors argue that the results apply to the 23 million or so depressed patients in the United States? Well, they apparently felt they could better generalize back to the original sample, if not the United States, by introducing an analysis of covariance that controlled for age, race, and sex. (For those of you tracking the more technical aspects of this discussion, contemplate the implications of controlling for three variables in a between-groups comparison of nine versus 13 patients. Apparently the authors believed that readers would accept the adjusted analyses in place of the unadjusted analyses, which had obvious problems of generalizability. The reviewers apparently accepted this.)
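To make that worry concrete, here is a minimal simulation sketch. The data are entirely random and have nothing to do with the actual study; the point is only how unstable an “adjusted” group difference is when nine cases are compared with 13 while three covariates are controlled.

```python
# Minimal sketch with purely simulated data: the adjusted group difference in a
# 9-vs-13 comparison controlling for three covariates, under the null hypothesis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
estimates = []
for _ in range(2000):
    group = np.array([1] * 9 + [0] * 13)      # 9 "remitters" vs 13 "non-remitters"
    covariates = rng.normal(size=(22, 3))     # stand-ins for age, race, sex
    outcome = rng.normal(size=22)             # unrelated to group or covariates
    X = sm.add_constant(np.column_stack([group, covariates]))
    estimates.append(sm.OLS(outcome, X).fit().params[1])

# Even with no true effect, the adjusted difference routinely swings by close to
# a full standard deviation of the outcome in either direction.
print(np.percentile(estimates, [2.5, 97.5]))
```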

Finally, treatment with cognitive behavior therapy was confounded with uncontrolled treatment with antidepressants.

I won’t discuss here the other problems of the study noted in my earlier blog posts. But I think you can see that these are analyses of a small data set truly unsuitable for publication in Translational Psychiatry, let alone for serving as the basis for seeking industry funding for a blood test for depression.

As I sometimes do, I tried to move from blog posts about what I considered problematic to a formal letter to the editor to which the authors would have an opportunity to reply. It was then that I discovered the publication costs.

So what are the alternatives to a letter to the editor?

Letters to the editor are a particularly weak form of post-publication peer review. There is little evidence that they serve as an effective self-correction mechanism for science. Letters to the editor seldom alter the patterns of citations of the articles about which they complain.

Even if I paid the $1,000 fee, I would only have been entitled to 700 words to make my case that this article is scientifically flawed and misleading. I’m not sure that a similar fee would be required from the authors to reply. Maybe responding to critics is part of the original package that they purchased from NPG. We cannot tell from what appears in the journal because the necessity of responding to a critic has not yet occurred.

It is quite typical across journals, even those not charging for a discussion of published papers, to limit the exchanges to a single letter per correspondent and a single response from the authors. And the window for acceptance of letters is typically limited to a few weeks or months after an article has appeared. While letters to the editor are often peer-reviewed, replies from authors typically do not receive peer review.

A different outcome, maybe

I recently followed up my blogging about the serious flaws of a paper published in PNAS by Fredrickson and colleagues with a letter to the editor. They in turn responded. Compare the two letters and you will see why an uninformed reader might infer that nothing but confusion had been generated. But stay tuned…

The two letters would have normally ended any exchange.

However, this time my co-authors and I thoroughly re-analyzed the Fredrickson et al data and PNAS allowed us to publish our results. This time, we did not mince words:

“Not only is Fredrickson et al.’s article conceptually deficient, but more crucially statistical analyses are fatally flawed, to the point that their claimed results are in fact essentially meaningless.”

In the supplementary materials, we provided in excruciating detail our analytic strategy and results. The authors’ response was again dismissive and confusing.

The authors next refused our offer of an adversarial collaboration, in which both parties would lay out responses to each other, with a mediator, in order to allow readers to reach some resolution. However, the strengths of our arguments and reanalysis – which included thousands of regression equations, some with randomly generated data – are such that others are now calling for a retraction of the original Fredrickson and Cole paper. If that occurs, it would be an extraordinarily rare event.

The limits journals impose on post-publication commentary severely constrain the ability of science to self-correct.

The Reproducibility Project: Psychology is widely being hailed as a needed corrective for the crisis of credibility in science. But replications of studies such as this one, involving pre-post sampling of genomic expression from an intervention trial, are costly and unlikely to be undertaken. And why attempt a “replication” of findings that have no merit in the first place? After all, the authors’ results for baseline assessments did not replicate in the baseline results of the patients still available at follow-up. That suggests a more fundamental problem, and that attempts at replication would be futile.

The PLOS journals have introduced the innovation of allowing comments to be placed directly at the journal article’s webpage, with their existence acknowledged on the article itself. Anyone can respond and participate in a post-publication peer review process that can go on for the life of the interest in a particular article. The next stage in furthering post-publication peer review is that such comments be indexed and citable and counted in traditional metrics, as well as altmetrics. This would recognize citizen scientists’ contributions to cleaning up what appears to be a high rate of false positives and outright nonsense in the current literature.

PubMed Commons offers the opportunity to post comments on any of the over 23 million entries in PubMed, expanding the PLOS initiative to all journals, even those of the Nature Publishing Group. Currently, the only restriction is that someone attempting to place a comment must have authored any of the 23,000,000+ entries in PubMed, even a letter to the editor. This represents progress.

But similar to the PLOS initiative, PubMed Commons will get more traction when it can provide conventional academic credit – countable citations – to contributors identifying and critiquing bad science. Currently, authors get credit for putting bad science into the literature, but no one gets credit for helping to get it recognized as such.

So, the authors of this particular article have made indefensibly bad claims about having made substantial progress toward developing an inexpensive blood test for depression. It’s not unreasonable to assume their motive is to cultivate financial support from industry for further development. What’s a critic to do?

In this case, the science is bad enough, and the damage to the public’s and professionals’ perception of the state of the science of a ‘blood test for depression’ great enough, that a retraction is warranted. Stay tuned – unless Nature Publishing Group requires a $1,000 payment for investigating whether an article warrants retraction.

Postscript: As I was finishing this post, I discovered that the journals published by the Modern Language Society require payment of a $3,000 membership fee to publish a letter to the editor in one of their journals. I guess they need to keep the discussion within the club.

Views expressed in this blog post are entirely those of the author and not necessarily those of PLOS or its staff.

Special thanks to Skeptical Cat.

Keeping zombie ideas about personality and health awalkin’: A teaching example

Reverse engineer my criticisms of this article and you will discover a strategy to turn your own null findings into a publishable paper.

Here’s a modest little study with null findings, at least before it got all gussied up for publication. It has no clear-cut clinical or public health implications. Yet, it is valuable as a teaching example showing how such studies get published. That’s why I found it interesting enough to blog about it at length.

 

van de Ven, M. O., Witteman, C. L., & Tiggelman, D. (2013). Effect of Type D personality on medication adherence in early adolescents with asthma. Journal of Psychosomatic Research, 75(6), 572-576. Abstract available here and full text here.

As I often do, I am going to get quite critical in this blog post, maybe even making some readers wince. But if you hang in there, you will see some strategies for publishing negative results as if they were positive that are widely used throughout personality, health, and positive psychology. Your critical skills will be sharpened, but you will also be able to reverse engineer my criticisms to get papers with null findings published.

Read on and you’ll see things that the reviewers at Journal of Psychosomatic Research, and the editors, apparently did not see but should have. I have emailed the editors inviting them to join in this discussion, and I am expecting them to respond. I have had lots of dealings with them and actually find them to be quite reasonable fellows. But peer review is imperfect, and one of the good things about blogging is that I can get the space to call out when it fails us.

The study examined whether some measures of negative emotion predicted adherence in early adolescents with asthma. A measure of negative affectivity (sample item: “I often make a fuss about unimportant things”) and what was termed social inhibition (sample item “I would rather keep other people at a distance”) were examined separately and when combined in a categorical measure of Type D personality (the D in Type D stands for distress).

Type D personality studies were once flourishing, even getting coverage in Time and Newsweek and discussion by Dr. Oz. The claim was that a Type D personality predicted death among congestive heart failure patients so well that clinicians should begin screening for it. Type D was supposed to be a stable personality trait, so it was not clear what clinicians could do with the information from screening. But I will be discussing in a later blog post why the whole area of research can itself be declared dead because of fundamental, inescapable problems in the conception and measurement of Type D. When I do that, I will draw on an article co-authored with Niels de Voogd, “Are we witnessing the decline effect in the Type D personality literature?”

John Ioannidis provided an approving commentary on my paper with Niels, with the provocative title of “Scientific inbreeding and same-team replication: Type D personality as an example.” Among the ideas attributable to Ioannidis are that most positive findings are false, as well as that most “discoveries” are subsequently proven to be false or at least exaggerated. He calls for a greater value being given to replication, rather than discovery.

Yet in his commentary on our paper, he uses the Type D personality literature as a case example of how the replication process can go awry. A false credibility for a hypothesis is created by false replications. He documented significant inbreeding among investigators of Type D personality: a quite small number of connected investigators are associated with studies with statistically improbable positive findings. And then he introduced some concepts that can be used to understand processes by which the small group could have undue influence on replication attempts by others:

… Obedient replication, where investigators feel that the prevailing school of thought is so dominant that finding consistent results is perceived as a sign of being a good scientist and there is no room for dissenting results and objections; or obliged replication, where the proponents of the original theory are so strong in shaping the literature and controlling the publication venues that they can largely select and mold the results, wording, and interpretation of studies eventually published.

Ioannidis’ commentary also predicted that regardless of any merits, our arguments would be studiously ignored and even suppressed by proponents of Type D personality. Vested interests use the review process to do that with articles that are inconvenient and embarrassing. Reviewing manuscripts has its advantages in terms of controlling the content of what is ultimately published.

Don’t get me wrong. Niels and I really did not expect everyone to immediately stop doing Type D research just because we published this article. After all, a lot of data have already been collected. In Europe, where most Type D personality data get collected, PhD students are waiting to publish their Type D articles in order to complete their dissertations.

We were very open to having Type D personality researchers point out why we were wrong, very wrong, and even stupidly wrong. But that is not what we are seeing. Instead, it is as if our article never appeared, with little trace of it in terms of citations, even in, ah, Journal of Psychosomatic Research, where our article and Ioannidis’ commentary appeared. According to ISI Web of Science, our article has been cited a whopping 6 times overall as of April 2014. And there have been lots more Type D studies published since our article first appeared.

Anyway, the authors of the study under discussion adopted what has become known as the “standardized method” (that means that they don’t have to justify it) for identifying “categorical” Type D personality. They took their two continuous measures of negative affectivity and social inhibition and split (dichotomized) them. They then crossed them, creating a four cell, 2 x 2 matrix.


Next, they selected out the high/high quadrant for comparison with the other three groups combined as one.


So, the authors made the “standardized” assumption that only the difference between a high/high group and everyone else was interesting. That means that persons who are low/low will be treated just the same as persons who are high in negative affectivity and low in social inhibition. Those who were low in negative affectivity but high in social inhibition are simply treated the same as those who are low on both variables. The authors apparently did not even bother to check – no one usually does – whether some of the people who were high in negative affectivity and low in social inhibition actually had higher scores on negative affectivity than those assigned to the high/high group.
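For readers who want the procedure spelled out, here is a minimal sketch of the “standardized” classification as I understand it. The cut-off of 10 is the value conventionally used for the DS14 subscales; treat it, and the function name, as illustrative assumptions rather than the authors’ exact scoring code.

```python
# Minimal sketch of the "standardized" Type D classification described above.
# The cut-off of 10 is the conventional DS14 threshold, used here as an assumption.
def is_type_d(negative_affectivity: float, social_inhibition: float,
              cutoff: float = 10.0) -> bool:
    """True only for the high/high quadrant; the other three cells are all
    lumped together as 'not Type D', however extreme either single score is."""
    return negative_affectivity >= cutoff and social_inhibition >= cutoff

# A person scoring 24 on negative affectivity but 9 on social inhibition lands
# in the same "not Type D" bin as someone scoring 0 on both, while a 10/10
# scorer counts as Type D despite a lower negative affectivity score.
print(is_type_d(24, 9), is_type_d(10, 10))   # False True
```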

I have been doing my own studies and reviews of personality and abnormal behavior for decades. I am not aware of any other example where personality types are created in which the high/high group is compared to everybody else lumped together. As we will see in a later blog, there are lots of reasons not to do this, but for Type D personality, it is the “standardized” method.

Adherence was measured twice in this study. At one point we readers are told that negative emotion variables were also assessed twice, but the second assessment never comes up again.

The abstract concludes that

categorical Type D personality predicts medication adherence of adolescents with asthma over time, [but] dimensional analyses suggest this is due to negative affectivity only, and not to the combination of negative affectivity and social inhibition.

Let’s see how Type D personality was made to look like a predictor and what was done wrong to achieve this.


Some interesting things about Table 2 that reviewers apparently missed:

  • At time T1, adherence was not related to negative affectivity, social inhibition, or Type D personality. There is not much prediction going on here.
  • At time T2, adherence was related to the earlier measured negative affectivity, but not to social inhibition or Type D personality.

Okay, if the authors were searching for significant associations, we have one, only one, here. But why should we ignore the failure of personality variables to predict adherence measured at the same time and concentrate on the prediction of later adherence? Basically, the authors have examined 2×3=6 associations, and seem to be getting ready to make a fuss about the one that proved significant, even though it was never predicted to stand alone.

Most likely this statistical significance is due to chance – it certainly was not replicated in the same-time assessments of negative affectivity and adherence at T1. But this association seems to be the only basis for claiming that any of these negative emotion variables is actually a predictor.
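A back-of-the-envelope calculation shows why a single “hit” among six correlations is unimpressive. Assume, purely for illustration, six independent tests each run at p < .05 (the real tests are correlated, so this is only a rough guide):

```python
# Chance of at least one "significant" correlation among six independent tests
# at p < .05, when every null hypothesis is true.
p_any = 1 - (1 - 0.05) ** 6
print(round(p_any, 2))   # about 0.26, i.e. roughly one time in four
```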

  • Adherence at time T2 is strongly predicted by adherence at time T1.

The authors apparently don’t consider this particularly interesting, but it is the strongest association in the data set. They want instead to predict change in adherence from T1 to T2 from trait negative emotion. But why should change in the relatively stable adherence be predicted by negative emotion when negative emotion does not predict adherence measured at the same time?

We need to keep in mind that these adolescents have been diagnosed with asthma for a while. They are being assessed for adherence at two arbitrary time points. There is no indication that something happened in between those points that might strongly affect their adherence. So, we are trying to predict fluctuations in relatively stable adherence from a trait, not any upward or downward spiral.

Next, some things we are not told that might further change our opinions about what the authors say is going on in their study.

Like pulling a rabbit out of a hat, the authors suddenly tell us that they measured self-reported depressive symptoms. The introduction explains that this article is about negative affectivity, social inhibition, and Type D personality, but only mentions depression in passing. So, depression was never given the explanatory status that the authors give to these other three variables. Why not?

Readers should have been shown the correlation of depression with the other three negative emotion variables. We could expect from a large literature that the correlation is quite high, probably as high as their respective reliabilities allow—as good, or as bad as it gets.

There is no particular reason why this study could not have focused on depressive symptoms as predictors of later adherence, but maybe that story would not have been so interesting, in terms of results.

Actually, most of the explanations offered in the introduction as to why measures of negative emotion should be related to adherence would seem to apply to depression. Just go back to the explanations and substitute depression for whatever variable is being discussed. See, doesn’t depression work as well?

One of the problems in using measures of negative emotion to predict other things is that these measures are related so much to each other that we can’t count on them to measure only the variable we are trying to emphasize and not something else.

Proponents of Type D personality like these authors want to assert that their favored variable does something that depression does not do in terms of predictions. But in actual data sets, it may prove tough to draw such distinctions because depressive symptoms are so highly correlated with components of Type D.

Some previous investigators of negative emotion have thrown up their hands in despair, complaining about the “crud factor” or “big mess” of intercorrelated measures of negative emotion ruining their ability to test their seemingly elegant ideas about supposedly distinctly different negative emotion variables. When one of the first Type D papers was published, an insightful commentary complained that the concept was entering an already crowded field of negative emotion variables and asked whether we really needed another one.

In this study, the authors assessed depressive symptoms with the self-report Hospital Anxiety and Depression Scale (HADS). The name of the scale suggests that it separately measures anxiety and depression. Who can argue with the authority of a scale’s name? But using a variety of simple and complicated statistical techniques, like different variants of factor analysis, investigators have not been able to show consistently that the separate subscales for anxiety and depression actually measure something different from each other – or that the two scales should not be combined into a general measure of negative emotion/distress.

So talk about measuring “depressive symptoms” with the HADS is wrong, or at least inaccurate. But there are a lot of HADS data sets out there, and so it would be inconvenient to acknowledge what we said in the title of another Journal of Psychosomatic Research article,

The Hospital Anxiety and Depression Scale (HADS) is dead, but like Elvis, there will still be citings.

Back to this article, if readers had gotten to see the basic correlations of depression with the other variables in Table 2, we might have seen how high the correlation of depression was with negative affectivity. This would have sent us off in a very different direction than the authors took.

To put my concerns in simple form, data that are available to the authors but hidden from the readers’ view probably do not allow making the clean kind of distinctions that the authors would need to make if they are going to pursue their intended storyline.

But, uh, measures of depressive symptoms show up all the time in studies of Type D personality. Think of such studies as if they were rigged American wrestling matches. Depressive symptoms are the heel (or rudo in lucha libre), who always shows up looking like a mean and threatening contender but almost always loses to the face, Type D personality. Read on and find out how supposedly head-to-head comparisons are rigged so this dependably happens.

The authors eventually tell us that they assessed (1) asthma duration, (2) control, and (3) severity. But we were not allowed to examine whether any of these variables were related to the other variables in Table 2. So, we cannot see whether it is appropriate to consider them as “control variables” or, more accurately, confounds.

There is good reason to doubt that these asthma variables are suitable “control variables” or candidates for a confounding variable in predicting adherence.

First, for asthma control to serve as a “control variable” we must assume that it is not an effect of adherence. If it is, it makes no sense to try to eliminate asthma control’s influence on adherence with statistics. It sure seems logical that if these teenagers adhere well to what they are supposed to do to deal with their asthma, asthma control will be better.

Simply put, if we can reasonably suspect that asthma control is a daughter of adherence, we cannot keep treating it as if it is the mother that needs to be controlled in order to figure out what is going on. So there is first a theoretical or simple logical objection to treating asthma control as a “control” variable.

Second, authors are not free to simply designate whatever variables they would like as control variables and throw them into multiple regression equations to control a confound. This is done all the time in the published literature, but it is WRONG!

Rather, authors are supposed to check first and determine whether two conditions are met. The candidate variable should be significantly related to the predictor variables: in the case of this study, asthma control should be shown to be associated with one or all of the negative emotion variables. And the authors would also have to show that it was related to subsequent adherence. If both conditions are not met, the variable should not be included as a control variable.
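As a rough sketch of the screen reviewers should have demanded, something like the following would do. The column names are hypothetical, the correlation threshold is arbitrary, and a crude cut-off like this is no substitute for the causal reasoning above about whether asthma control is a consequence of adherence; it only operationalizes the two conditions just described.

```python
# Rough sketch of the two-condition screen described above: a candidate
# "control" variable should be related both to the predictor and to the
# outcome before it earns a place in the regression. Column names are
# hypothetical; this is not the authors' analysis.
import pandas as pd

def confounder_screen(df: pd.DataFrame, candidate: str, predictor: str,
                      outcome: str, min_r: float = 0.1) -> dict:
    r_with_predictor = df[candidate].corr(df[predictor])
    r_with_outcome = df[candidate].corr(df[outcome])
    return {
        "r_with_predictor": r_with_predictor,
        "r_with_outcome": r_with_outcome,
        "include": abs(r_with_predictor) >= min_r and abs(r_with_outcome) >= min_r,
    }

# e.g. confounder_screen(data, "asthma_control", "negative_affectivity", "adherence_t2")
```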

Reviewers should have insisted on seeing these associations among asthma duration, control, severity, and adherence. While the reviewers were at it, they should have required that the correlations be available to other readers, if the article is to be published.

We need to move on. I am already taxing readers’ patience with what is becoming a longread. But if I have really got you hooked into thinking about the appropriateness of controlling for particular confounds, you can digress to a wonderful slide show telling more.

So far, we have examined a table of basic correlations, not finding some things that we really need in order to decide what is going on here, and we seem to be getting into trouble. But multivariate analyses will be brought in to save this effort.

The magic of misapplied multivariate regression.

The authors deftly save their storyline and get a publishable paper with “significant” findings in two brief paragraphs:

The decrease in adherence between T1 and T2 was predicted by categorical Type D personality (Table 3), and this remained significant after controlling for demographic and clinical information and for depressive symptoms. Adolescents with a Type D personality showed a larger decrease in adherence rates fromT1 to T2 than adolescents without a Type D personality.

And

The results of testing the dimensions NA and SI separately as well as their interaction showed that there was a main effect of NA on changes in adherence over time (Table 4), and this remained significant after controlling for demographic and clinical information and for depressive symptoms. Higher scores on NA at T1 predicted a stronger decrease in adherence over time. Neither SI nor the interaction between NA and SI predicted changes in adherence.

Wow! But before we congratulate the authors and join in the celebration, we should note a few things. From now on in the article, they are going to be discussing their multivariate regressions, not the basically null findings obtained with the simple bivariate correlations. But these regression equations do not undo the basic findings with the bivariate correlations. Type D personality did not predict adherence; it only appears to do so in the context of some arbitrary and ill-chosen covariates. But now they can claim that Type D won the match fair and square, without cheating.

But don’t get down on these authors. They probably even believe in their results. They were merely following the strong precedent of what almost everybody else seems to do in the published literature. They did not get caught by the reviewers or editors of Journal of Psychosomatic Research.

Whatever happened to depressive symptoms as a contender for predicting adherence? They were not let into the ring until after Type D personality and its components had secured the match. These other variables got to do all the predicting they could do, and only then were depressive symptoms entered into the ring. That is what happens when you have highly correlated variables and manipulate the match by picking one to go first.
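A small simulation makes the order-of-entry trick concrete. The data below are simulated, not the authors’; the two predictors are stand-ins for Type D scores and HADS depression that mostly measure the same underlying distress.

```python
# Simulated illustration of the order-of-entry trick with two highly correlated
# predictors. Whichever variable enters the hierarchical regression first claims
# the shared variance; the one entered second appears to add almost nothing.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
distress = rng.normal(size=n)                      # shared "crud factor"
type_d = distress + 0.5 * rng.normal(size=n)       # stand-in for Type D score
depression = distress + 0.5 * rng.normal(size=n)   # stand-in for HADS depression
adherence = 0.4 * distress + rng.normal(size=n)

def r_squared(*predictors):
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(adherence, X).fit().rsquared

print("Type D alone     :", round(r_squared(type_d), 3))
print("  then depression:", round(r_squared(type_d, depression), 3))
print("Depression alone :", round(r_squared(depression), 3))
print("  then Type D    :", round(r_squared(depression, type_d), 3))
# The final model is identical either way; only the story about which variable
# "independently predicts" adherence changes with the order of entry.
```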

And there is a second trick guaranteeing that Type D will win over depressive symptoms. Recall that to be classified as Type D, research subjects had to be high on negative affectivity and high on social inhibition. Scoring high on two (imperfectly reliable) measures of negative emotion usually bests scoring high on only one (imperfectly reliable) measure. But if the authors had used two measures of depressive symptoms, they could have had a more even match.

The big question: so what?

Type D personality is not so much a theory as a tried-and-true method for getting flawed analyses published. Look at what the authors of this paper said about it in their introduction and discussion. They really did not present a theory, but rather cited precedent and made some unsubstantiated speculations about why past results may have been obtained.

Any theory about Type D personality and adherence really does not make predictions with substantial clinical and public health implications. Think about it: if this study had worked out as the authors intended, what difference would it have made? Type D personality is supposedly a stable trait, and so the authors could not have proposed psychological interventions to change it. That has been done and does not work in other contexts.

What, then, could authors have proposed, other than that more research is needed? Should the mothers of these teenagers be warned that their adolescents had Type D personality and so might have trouble with their adherence? Why not just focus on the adherence problems, if they are actually there, and not get caught up in blaming the teens’ personality?

But Type D has been thung.

Because the authors have been saying in lots of articles that they have been studying Type D, it is tough to get heard saying “No, pal, you have been studying statistical mischief. Type D does not exist except as statistical mischief.” Type D has been thung, and who can undo that?

Thing (v). to thing, thinging.   1. To create an object by defining a boundary around some portion of reality separating it from everything else and then labeling that portion of reality with a name.

One of the greatest human skills is the ability to thing. We are thinging beings. We thing all the time.

And

Yes, yes, you might think, but we are not really “thinging.” After all trees, branches and leaves already existed before we named them. We are not creating things we are just labeling things that already exist. Ahhh…but that is the question. Did the things that we named exist before they were named? Or more precisely, in what sense did they exist before they were named, and how did their existence change after they were named?

…And confused part-whole relationships become science and publishable.

Once we have convincingly thung Type D personality, we can fool ourselves and convince others about there being a sharp distinction with the similarly thung “depressive symptoms.”

Boundaries between concepts are real because we make them so, just like the boundary between Canada and the United States, even if particular items are arbitrarily assigned to one or the other questionnaire. Without our thinging, we would not so easily forget that the various items come from the same “crud factor” or “big mess” and could have been lumped or split in other ways.

Part-whole relationships become entities interacting with entities in the most sciencey and publishable ways. See for instance

Romppel, et al. (2012). Type D personality and persistence of depressive symptoms in a German cohort of cardiac patients. Journal of Affective Disorders, 136(3), 1183-1187.

Which compares the effectiveness of Type D as a screening tool against established measures of depressive symptoms, measured with the (ugh) HADS, for predicting subsequent HADS depression.

Lo and behold, Type D personality works and we have a screening measure on our hands! Aside from the other advantages that I noted for Type D as a predictor, the negative affectivity items going into the Type D categorization are phrased as if they refer to enduring characteristics, whereas items on the HADS are phrased to refer to the last week.

Let us get out of the mesmerizing realm of psychological assessment. Suppose we ask whether someone ate meatballs last week or whether they generally eat meatballs. Which question would you guess better predicts meatball consumption over the next year?

And then there is

Michal, et al. (2011). Type D personality is independently associated with major psychosocial stressors and increased health care utilization in the general population. Journal of Affective Disorders, 134(1), 396-403.

Which finds in a sample of 2495 subjects that

Individuals with Type D had an increased risk for clinically significant depression, panic disorder, somatization and alcohol abuse. After adjustment for these mental disorders Type D was still robustly associated with all major psychosocial stressors. The strongest associations emerged for feelings of social isolation and for traumatic events. After comprehensive adjustment Type D still remained associated with increased help seeking behavior and utilization of health care, especially of mental health care.

The main limitation is the reliance on self-report measures and the lack of information about the medical history and clinical diagnosis of the participants.

Yup, they relied on self-report questionnaires in multivariate analyses, not interview-based diagnoses, and the measure of “depression” or “depressive symptoms” asked only about the last 2 weeks.

Keeping zombie ideas awalkin’

How did the study of negative emotion and adherence get published with basically null findings? With chutzpah, and by the authors following the formulaic Type D personality strategy for getting published. This study did not really obtain significant findings, but the precedent of many studies of Type D personality was available to support claims that they achieved a conceptual replication, even if not an empirical one. And these claims were very likely evaluated by members of the Type D community making similar claims. In his commentary, Ioannidis pointed to how null Type D findings are gussied up as having “approached significance” or, better, as being “independently related to blah, blah, when x, y, and z are controlled.”

Strong precedents are often confused with validity, and the availability of past claims relaxes the standards for making subsequent claims.

The authors were only doing what authors try to do: their damnedest to get their article published. Maybe the reviewers, who are from the Type D community and can cite the authority of hundreds of studies, were only doing what that community tries to do – keep the cheering going for the power of Type D personality and add another study to the hundreds. But where were the editors of Journal of Psychosomatic Research?

Just because the journal published our paper, for which we remain grateful, I do not assume that they will require authors who submit new papers to agree with us. But you would think, if the editors are committed to the advancement of science, they would request that authors of manuscripts at least relate their findings to the existing conversation, particularly in the Journal of Psychosomatic Research. Authors should dispute our paper before going about their business. If it does not happen in this journal, how can we expect it to happen elsewhere?