Busting foes of post-publication peer review of a psychotherapy study

As described in the last issue of Mind the Brain, peaceful post-publication peer reviewers (PPPRs) were ambushed by an author and an editor. The pair exploited the usual home-team advantage that journals enjoy: they got the last word in an exchange that was not peer reviewed.

As also promised, I will team up in this issue with Magneto to bust them.

Attacks on PPPRs threaten a desperately needed effort to clean up the integrity of the published literature.

The attacks are getting more common and sometimes vicious. Vague threats of legal action caused an open access journal to remove an article delivering fair and balanced criticism.

In a later issue of Mind the Brain, I will describe an incident in which authors of a published paper had uploaded their data set, but then modified it without notice after PPPRs used the data for re-analyses. The authors then used the modified data for new analyses and claimed the PPPRs were grossly mistaken. Fortunately, the PPPRs retained time-stamped copies of both data sets. You may like to think that such precautions are unnecessary, but just imagine what critics of PPPR would be saying if they had not saved this evidence.

Until journals get more supportive of post-publication peer review, we need repeated vigilante actions, striking from Twitter, Facebook pages, and blogs. Unless readers acquire basic critical appraisal skills and take the time to apply them, they will have to keep turning to social media for credible filters of all the crap that is flooding the scientific literature.

I’ve enlisted Magneto because he is a mutant. He does not have any extraordinary powers of critical appraisal. To the contrary, he unflinchingly applies the skills we should all acquire. As a mutant, he can apply his critical appraisal skills without the mental anguish and physiological damage that could beset humans appreciating just how bad the literature really is. He doesn’t need to maintain his faith in the scientific literature or the dubious assumption that what he is seeing is just a matter of repeat-offender authors, editors, and journals making innocent mistakes.

Humans with critical appraisal skills risk demoralization and too often shrink from the task of telling it like it is. Some who used their skills too often were devastated by what they found and fled academia. More than a few are now working in California in espresso bars and escort services.

Thank you, Magneto. And yes, I again apologize for having tipped off Jim Coan about our analyses of his spinning and statistical manipulation of his work to get newsworthy findings. Sure, it was an accomplishment to get a published apology and correction from him and Susan Johnson. I am so proud of Coan’s subsequent condemnation of me on Facebook as the Deepak Chopra of Skepticism that I will display it as an endorsement on my webpage. But it was unfortunate that PPPRs had to endure his nonsensical Negative Psychology rant, especially without readers knowing what precipitated it.

The following commentary on the exchange in Journal of Nervous and Mental Disease makes direct use of your critique. I have interspersed gratuitous insults generated by Literary Genius’ Shakespearean insult generator and Reocities’ Random Insult Generator.

How could I maintain the pretense of scholarly discourse when I am dealing with an author who repeatedly violates basic conventions like ensuring tables and figures correspond to what is claimed in the abstract? Or an arrogant editor who responds so nastily when his slipups are gently brought to his attention and won’t fix the mess he is presenting to his readership?

As a mere human, I needed all the help I could get in keeping my bearings amidst such overwhelming evidence of authorial and editorial ineptness. A little Shakespeare and Monty Python helped.

The statistical editor for this journal is a saucy full-gorged apple-john.

 

Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective

Domenic V. Cicchetti, PhD, quintessential biostatistician

Domenic V. Cicchetti, you may be, as your website claims,

 A psychological methodologist and research collaborator who has made numerous biostatistical contributions to the development of major clinical instruments in behavioral science and medicine, as well as the application of state-of-the-art techniques for assessing their psychometric properties.

But you must have been out of “the quintessential role of the research biostatistician” when you drafted your editorial. Please reread it. Anyone armed with an undergraduate education in psychology and Google Scholar can readily cut through your ridiculous pomposity, you undisciplined sliver of wild belly-button fluff.

You make it sound like the Internet PPPRs misunderstood Jacob Cohen’s designation of effect sizes as small, medium, and large. But if you read a much-accessed article that one of them wrote, you will find a clear exposition of the problems with these arbitrary distinctions. I know, it is in an open access journal, but what you say about it paying reviewers is sheer bollocks. Do you get paid by Journal of Nervous and Mental Disease? Why otherwise would you be a statistical editor for a journal with such low standards? Surely, someone who has made “numerous biostatistical contributions” has better things to do, thou dissembling swag-bellied pignut.

More importantly, you ignore that Jacob Cohen himself said:

The terms ‘small’, ‘medium’, and ‘large’ are relative . . . to each other . . . the definitions are arbitrary . . . these proposed conventions were set forth throughout with much diffidence, qualifications, and invitations not to employ them if possible.

Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988. p. 532.

Could it be any clearer, Dommie?


You suggest that the internet PPPRs were disrespectful of Queen Mother Kraemer in not citing her work. Have you recently read it? Ask her yourself, but she seems quite upset about the practice of using effects generated from feasibility studies to estimate what would be obtained in an adequately powered randomized trial.

Pilot studies cannot estimate the effect size with sufficient accuracy to serve as a basis of decision making as to whether a subsequent study should or should not be funded or as a basis of power computation for that study.

Okay, you missed that, but how about this:

A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study.

Dommie, although you never mention it, surely you must appreciate the difference between a within-group effect size and a between-group effect size.

  1. Interventions do not have meaningful effect sizes; between-group comparisons do.
  2. As I have previously pointed out:

 When you calculate a conventional between-group effect size, it takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So, you focus on what change went on in a particular therapy, relative to what occurred in patients who didn’t receive it.
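To make the distinction concrete, here is a minimal sketch in Python with entirely hypothetical numbers. It shows how a within-group (pre-post) effect size can look substantial even when the between-group effect size, which is what randomization actually licenses, is essentially zero. Anything that makes patients improve regardless of treatment, including simply receiving attention, inflates the first number but not the second.

```python
# Minimal sketch (hypothetical numbers) contrasting a within-group effect size
# (pre-post change in the treated group alone) with a between-group effect size
# (difference in change between treated and control groups), using Cohen's d.
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Simulated symptom scores: both groups improve over time (nonspecific effects,
# regression to the mean, extra attention), but treatment adds nothing extra.
control_pre  = rng.normal(50, 10, n)
control_post = control_pre - 8 + rng.normal(0, 5, n)   # improves by ~8 points
treated_pre  = rng.normal(50, 10, n)
treated_post = treated_pre - 8 + rng.normal(0, 5, n)   # also improves by ~8 points

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Within-group d looks impressive even though the treatment is inert...
d_within = cohens_d(treated_pre, treated_post)

# ...while the between-group d (change vs. change) hovers around zero.
d_between = cohens_d(treated_pre - treated_post, control_pre - control_post)

print(f"within-group d:  {d_within:.2f}")
print(f"between-group d: {d_between:.2f}")
```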

Turkington recruited a small, convenience sample of older patients from community care who averaged over 20 years of treatment. It is likely that they were not getting much support and attention anymore, whether or not they ever were. The intervention in Turkington’s study provided that attention. Maybe some or all of any effects were due to simply compensating for what was missing from inadequate routine care. So, aside from all the other problems, anything going on in Turkington’s study could have been nonspecific.

Recall that in promoting his ideas that antidepressants are no better than acupuncture for depression, Irving Kirsch tried to pass off within-group effect sizes as equivalent to between-group effect sizes, despite repeated criticisms. Similarly, long-term psychodynamic psychotherapists tried to use effect sizes from wretched case series for comparison with those obtained in well-conducted studies of other psychotherapies. Perhaps you should send such folks a call for papers so that they can find an outlet in Journal of Nervous and Mental Disease with you as a Special Editor in your quintessential role as biostatistician.

Douglas Turkington’s call for a debate

Professor Douglas Turkington: “The effect size that got away was this big.”

Doug, as you requested, I sent you a link to my Google Scholar list of publications. But you still did not respond to my offer to come to Newcastle and debate you. Maybe you were not impressed. Nor did you respond to Keith Laws’ repeated requests to debate. Yet you insulted internet PPPR Tim Smits with a taunt.


You congealed accumulation of fresh cooking fat.

I recommend that you review the recording of the Maudsley debate. Note how the moderator Sir Robin Murray boldly announced at the beginning that the vote on the debate was rigged by your cronies.

Do you really think Laws and McKenna got their asses whipped? Then why didn’t you accept Laws’ offer to debate you at a British Psychological Society event, after he offered to pay your travel expenses?

High-Yield Cognitive Behavioral Techniques for Psychosis Delivered by Case Managers…

Dougie, we were alerted that bollocks would follow by the “high yield” of the title. Just what distinguishes this CBT approach from any other intervention to justify “high yield,” except your marketing effort? Certainly not the results you have obtained from an earlier trial, which we will get to.

Where do I begin? Can you dispute what I said to Dommie about the folly of estimating effect sizes for an adequately powered randomized trial from a pathetically small feasibility study?
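For anyone who doubts the arithmetic, here is a rough sketch in Python of the precision problem. The standard-error formula is the usual large-sample approximation for Cohen's d; the pilot numbers are hypothetical, chosen only to be in the ballpark of a small feasibility study.

```python
# Rough sketch (standard large-sample approximation for the standard error of
# Cohen's d; pilot numbers are hypothetical) of how imprecise an effect size
# estimate from a small feasibility study really is.
import math

def se_of_d(d, n1, n2):
    """Approximate standard error of Cohen's d for two independent groups."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

d_observed = 0.40      # hypothetical effect size observed in the pilot
n1 = n2 = 10           # hypothetical arm sizes

se = se_of_d(d_observed, n1, n2)
lo, hi = d_observed - 1.96 * se, d_observed + 1.96 * se
print(f"d = {d_observed:.2f}, 95% CI approximately ({lo:.2f}, {hi:.2f})")
# The interval runs from a negative effect to a very large one -- far too wide
# to power a subsequent trial or to justify claims of "high yield."
```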

I know you were looking for a convenience sample, but how did you get from Newcastle, England to rural Ohio and recruit such an unrepresentative sample of 40-year-olds with 20 years of experience with mental health services? You don’t tell us much about them, not even a breakdown of their diagnoses. But would you really expect that the routine care they were currently receiving was even adequate? Sure, why wouldn’t you expect to improve upon that with your nurses? But what would you be demonstrating?


The PPPR boys from the internet made noise about Table 2, made passing reference to the totally nude Figure 5, and pointed out how claims in the abstract had no apparent relationship to what was presented in the results section. And how nowhere did you provide means or standard deviations. But they did not get to Figure 2. Notice anything strange?

Despite what you claim in the abstract, none of the outcomes appear significant. Did you really mean standard errors of the mean (SEMs), not standard deviations (SDs)? The people to whom I showed the figure did not think so.


And I found this advice on the internet:

If you want to create persuasive propaganda:

If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEMs, and hope that your readers think they are SDs.

If your goal is to cover up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors.
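The trick works because of a simple relationship, sketched below with made-up numbers: the standard error of the mean is the standard deviation divided by the square root of the sample size, so SEM bars are always tighter than SD bars.

```python
# Why SEM bars look so much tighter than SD bars: SEM = SD / sqrt(n).
# The numbers here are invented, roughly in the range of a small trial.
import math

sd = 12.0   # hypothetical standard deviation of an outcome score
n = 18      # hypothetical per-group sample size

sem = sd / math.sqrt(n)
print(f"SD = {sd:.1f}, SEM = {sem:.1f}")   # SEM comes out near 2.8, under a quarter of the SD
```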

Why did you expect to be able to talk about effect sizes of the kind you claim you were seeking? The best meta-analysis suggests an effect size of only .17 with blind assessment of outcome. Did you expect that unblinding assessors would lead to that much more improvement? Oh yeah, you cited your own previous work in support:

That intervention improved overall symptoms, insight, and depression and had a significant benefit on negative symptoms at follow-up (Turkington et al., 2006).

Let’s look at Table 1 from Turkington et al., 2006.

A consistent spinning of results


Don’t you just love those three-digit significance levels that allow us to see that p = .099 for overall symptoms meets the apparent criterion of p < .10 in this large sample? Clever, but it doesn’t work for depression with p = .128. But you have a track record of being sloppy with tables. Maybe we should give you the benefit of a doubt and ignore the table.

But Dougie, this is not some social priming experiment with college students getting course credit. This is a study that took up the time of patients with serious mental disorder. You left some of them in the squalor of inadequate routine care after gaining their consent with the prospect that they might get more attention from nurses. And then, with great carelessness, you put the data into tables that had no relationship to the claims you were making in the abstract, or in your attempts to get more funding for future such ineptitude. If you drove your car like you write up clinical trials, you’d lose your license, if not go to jail.


The 2014 Lancet study of cognitive therapy for patients with psychosis

Forgive me that I missed, until Magneto reminded me, that you were an author on the, ah, controversial paper

Morrison, A. P., Turkington, D., Pyle, M., Spencer, H., Brabban, A., Dunn, G., … & Hutton, P. (2014). Cognitive therapy for people with schizophrenia spectrum disorders not taking antipsychotic drugs: a single-blind randomised controlled trial. The Lancet, 383(9926), 1395-1403.

But with more authors than patients remaining in the intervention group at follow up, it is easy to lose track.

You and your co-authors made some wildly inaccurate claims about having shown that cognitive therapy was as effective as antipsychotics. Why, by the end of the trial, most of the patients remaining in follow-up were on antipsychotic medication. Is that how you obtained your effectiveness?

In our exchange of letters in The Lancet, you finally had to admit:

We claimed the trial showed that cognitive therapy was safe and acceptable, not safe and effective.

Maybe you should similarly be retreating from your claims in the Journal of Nervous and Mental Disease article? Or just take refuge in the figures and tables being uninterpretable.

No wonder you don’t want to debate Keith Laws or me.


A retraction for High-Yield Cognitive Behavioral Techniques for Psychosis…?

The Turkington article meets the Committee on Publication Ethics (COPE) guidelines for an immediate retraction (http://publicationethics.org/files/retraction%20guidelines.pdf).

But neither a retraction nor even a formal expression of concern has appeared.

Maybe matters can be left as they now are. In social media, we can point to the many problems of the article, like a clogged-toilet warning that Journal of Nervous and Mental Disease is not a fit place to publish – unless you are seeking exceedingly inept or nonexistent editing and peer review.

 

 

 

Vigilantes can periodically tweet Tripadvisor-style warnings, like

toilets still not working

 

 

Now, Dommie and Dougie, before you again set upon some PPPRs just trying to do their jobs for little respect or incentive, consider what happened this time.

Special thanks are due to Magneto, but Jim Coyne has sole responsibility for the final content. It does not necessarily represent the views of PLOS blogs or other individuals or entities, human or mutant.

Whomp! Using invited editorial commentary to neutralize negative findings

William Hollingworth and his colleagues must have been pleased when they were notified that their manuscript had been accepted for publication in the prestigious (journal impact factor = 18!) Journal of Clinical Oncology. Their study examined whether screening for distress increased cancer patients’ uptake of services and improved their mood. The study also examined a neglected topic: how much did screening cost, and was it cost-effective?

These authors presented their negative findings in a straightforward and transparent fashion: screening didn’t have a significant effect on patient mood. Patients were not particularly interested in specialized psychosocial services. And at $28 per patient screened, screening did not lower healthcare costs, nor did it prove cost-effective.

This finding has significant implications for clinical and public policy. But the manuscript risked rejection because it violated screening proponents’ strictly enforced confirmation bias and the obligatory conclusion that screening is cheap and effective.

Whomp!

Hollingworth and colleagues were surely disappointed to discover that their article was accompanied by a negative editorial commentary. They had not been alerted or given an opportunity to offer a rebuttal. Their manuscript had made it through peer review, only to get whomped by a major proponent of screening, Linda Carlson.

After some faint praise, Carlson tried to neutralize the negative finding:

despite several strengths, major study design limitations may explain this result, temper interpretations, and inform further clinical implementation of screening for distress programs.

And if anyone tries to access Hollingworth’s article through Google Scholar or the Journal of Clinical Oncology website, they run smack into a paywall. Yet they can get through to Carlson’s commentary without obstruction and download a PDF for free. So it is easier to access the trashing of the article than the article itself. Doubly unfair!

Pigs must wear lipstick to win acceptance

Advocates from professional organizations insist, as conditions for publication, on conclusions supporting screening and on negative findings being dressed up to appear supportive of their views. Reflecting these pressures, I have described the sandbagging of a paper I had been invited to submit, with reviewers insisting I not be so critical of the promotion of screening.

Try this experiment: ignore what is said in abstracts of screening studies and instead check the results sections carefully. You will see that there are actually lots of negative studies out there, but they have been spun into positive studies. This can easily be accomplished by authors ignoring results obtained for primary outcomes at pre-specified follow-up periods. They can hedge their bets and assess outcome with a full battery of measures at multiple timepoints and then choose the findings that make screening look best. Or they can just ignore their actual results when writing abstracts and discussion sections.

Especially in their abstracts, articles report only the strongest results at the particular time point that makes the study look best. They emphasize unplanned subgroup analyses. Thus, they report that breast cancer patients did particularly well at 6 months, and ignore that this was not true at the 3- or 12-month follow-up. Clever authors interested in getting published ignore other groups of cancer patients who did not benefit, even when their actual hypothesis had been that all patients would show an improvement and breast cancer patients had not been singled out ahead of time. With lots of opportunities to lump, split, and selectively report the data, such results can be obtained by chance, not fraud, but won’t replicate.
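A quick simulation makes the point. The sketch below, in Python and under assumptions of my own (independent outcomes, no true effect anywhere), shows how often a trial testing a battery of measures at several timepoints can report at least one “significant” finding purely by chance.

```python
# Simulation (my assumptions: independent outcomes, zero true effect) of how a
# battery of outcomes measured at several timepoints yields at least one
# p < .05 "finding" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_outcomes, n_timepoints, n_trials = 50, 5, 3, 2000

hits = 0
for _ in range(n_trials):
    found_positive = False
    for _ in range(n_outcomes * n_timepoints):   # 15 looks at the data per trial
        treated = rng.normal(0, 1, n_per_group)  # intervention has zero true effect
        control = rng.normal(0, 1, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            found_positive = True
            break
    hits += found_positive

print(f"Null trials with at least one 'significant' result: {hits / n_trials:.0%}")
# With 15 independent looks, roughly half of null trials can be written up as positive.
```

Real outcomes are correlated, which lowers the figure somewhat, but the incentive for selective reporting is exactly the same.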

When my colleagues and I undertook a systematic review of the screening literature, we were unable to identify a single study that demonstrated that screening improved cancer patient outcomes in comparison to patients having access to the same discussions and services without having to be screened. But there are four other reviews out there, all done by proponents of screening, that gloss over this lack of evidence for screening. The strong confirmatory bias extends to reviews.

Doing wrong by following recommended guidelines.

Hollingworth and colleagues implemented procedures that followed published guidelines for screening. They trained screening staff with audiovisual aids and role-playing. They developed guides to referral sources. They tracked numbers of discussions with distressed patients and referrals. Mirroring clinical realities, many other screening studies involve similar levels of training and resources. Unless cancer centers have special grants or gifts from donors, they probably cannot afford much more than this. And besides, advocates of screening have always emphasized that it is a no- or low-cost procedure to implement.

The invited commentary.

Carlson’s title seems to revise what implementation of screening requires, and may demand more than many settings can afford.

Screening Alone Is Not Enough: The Importance of Appropriate Triage, Referral, and Evidence-Based Treatment of Distress and Common Problems

Perhaps these more expensive requirements will prompt a closer look at whether screening actually improves patient outcomes, and at acceptable cost. A refocusing on the evidence of whether screening actually benefits patients is overdue.

The 16 references of Carlson’s invited commentary include eight citations of her own work and that of her close colleague Barry Bultz. She is fending off a negative finding and collecting self-citations too.

Like many such commentaries, Carlson’s creates false authority by selective and inaccurate citation. If you take the trouble to actually look at the work that is cited, you will find much of it does not present original data and that the citations are not accurate or relevant, although this is not obvious from the commentary.

For instance, at the outset, Carlson claims that “psychosocial interventions tend to pay for themselves, many times over, in subsequent cost offsets” and cites two of her own papers with strikingly similar abstracts. Neither of these papers presents original data; instead they rely on Nick Cummings’ claims made when he was promoting his efforts to earn millions organizing behavioral health carveout companies. These claims are now considered as dubious as Cummings’ claims that he and his staff “cured” thousands of gays and lesbians of their sexual orientation.

Carlson seems particularly upset that the efforts of Hollingworth and colleagues resulted in so few referrals to psychologists. She claims that this

represents a substantial departure from evidence-based treatment for distress, a significant failure of screening and triage.

Actually, these rates are quite consistent with other studies, including Carlson’s own. Most cancer patients who are found to be distressed by screening are not interested in intensive specialty psychosocial or mental health services. Rather, they are more interested in getting informal support and information and, among specialized services, nutritional guidance and physical therapy. Much of the advocacy for screening has simply assumed that the services cancer patients want are primarily psychosocial or mental health services, and particularly formal psychotherapy. This can lead to misallocating scarce resources.

Our various studies in the Netherlands find that the proportion of cancer patients seeking specialty mental health services after diagnosis is about the same as the proportion who were getting those services beforehand. We find it takes about 28 hours of screening to produce one referral to specialty psychotherapy. Not very efficient.

The big picture.

Invited commentaries represent one form of privileged-access publishing by which articles come to be found in prestigious, high-impact journals with no or only minimal peer review. When they are listed on PubMed or other electronic bibliographic resources, there is typically no indication that commentaries evaded peer review, nor is any indication usually provided in the article itself. One has to learn to be skeptical and to look for evidence, like gratuitous inaccurate citations.

Invited commentaries come about when reviewers indicate a wish to comment on an article that seems likely to be accepted. Most typically, there is a certain cronyism in lavishing praise on articles done by colleagues doing similar work. Carlson’s commentary is less common in that it is intended to neutralize the impact of a manuscript that was apparently going to be accepted.

We need to better understand such distortions in the process by which “peer review” controls which papers get published and what they are required to say to get published. Articles published in high-impact journals are not necessarily the best papers. They do not necessarily represent an adequate sampling of the available data.

The Hollingworth study is only one example of a transparently negative study that made it through the editorial process at a high-impact journal. But it is also an example of a study successfully defying confirmation bias and getting whomped. It remains to be seen whether this study suffers a subsequent selective ignoring in citations, like some other negative studies in psycho-oncology.

A black swan, a member of the species Cygnus atratus (from Wikipedia)

We don’t know how many such studies don’t get through, or how many, in order to get through, had to get a makeover with selective reporting, perhaps at the insistence of reviewers. It is thus impossible to quantify the distorting impact of confirmatory bias on the published literature. But sightings of black swans like this one clearly indicate that not all swans are white. We need to be skeptical about whether published studies represent all of the available evidence.

I recommend skeptical readers look for other commentaries, particularly in Journal of Clinical Oncology. I have documented that this high-impact journal does not have the best or most accurately reported psychosocial studies of cancer patients. It is no coincidence that many of the flawed studies about which I’ve complained were accompanied by laudatory commentaries. Check and you will find that commentators have often published similarly flawed studies with a positive spin.

What’s a reader to do?

Readers can write letters to the editor, but Journal of Clinical Oncology has a policy of allowing authors to veto publication of letters critical of their work. Letters to the editor are usually an impotent form of protest anyway. They are seldom read by anyone except the authors who are being criticized. And when authors do agree to be criticized, they get the last word, often even ignoring what is said in a critical letter to the editor.

But fortunately, there is now the option of continued post-publication peer review through PubMed Commons. Once you register, you can go to PubMed and leave comments about both the Hollingworth study and the unfairness of the commentary by Carlson. And others can express approval of what you write or add their own comments. Look for mine already there, challenging the editorial commentary and expressing concern about the unfair treatment of the paper by Hollingworth and colleagues. You can come and dispute or agree with what I say.

The journals no longer control the post-publication review process. Linda Carlson can get involved in the discussion at PubMed of Hollingworth’s article in JCO, but she cannot have the last word.


This work is (CC-BY-NC-SA).

Junior researchers face a choice: a high or low road to success?

November 8, 2013.

 

This is a presentation from the International Psycho-Oncology Society Conference in Rotterdam, November 8, 2013, invited by the Early Career Professionals Special Interest Group.* I am grateful for such a relaxed opportunity to speak my mind about some issues that junior researchers in psycho-oncology, like those in many fields, are facing. Senior members of the field have failed you. We need you to undo some of the damage that is being done.

As you enter the field, recognize that you are different from cohorts of researchers who have come before you. On the one hand, you are more methodologically and statistically sophisticated. You are also more digitally savvy, although I am sometimes bewildered by how little you as yet take advantage of the resources of the Internet and social media.

On the other hand, you face new accountability and pressures in terms of the monitoring of the impact factor of the journals in which you publish, as well as having to adhere to reporting standards and preregister your clinical trials before you even run the first patient. Researchers who came before you had it easier in these respects.

I’ve done this kind of talk before, and I recognize there is an expected obsolescence to what I present. I recall, way back when I was a junior person in the field, senior faculty warned me not to start using email because it was a total waste of time and inferior to communicating by snail mail. I am sure that much of the advice being offered to you is just as valuable and soon to be obsolete. And, similarly, many of the tools and strategies you will need to acquire will at first seem a waste of time.

Five years ago, I would have encouraged you to get more comfortable communicating about your work and even self-promoting. I would have suggested you use the now obsolete means of listservs to do so. I would have encouraged you to challenge the gross inadequacies of peer review by writing letters to the editor, which also have the advantage of cultivating critical skills better than journal clubs do. Both listservs and letters to the editor are now obsolete, but the ideas behind these recommendations still hold, maybe even more. You just have to pursue these goals differently and certainly with different tools.

As for myself, I’ve undergone a lot of changes in the past five years. Some of my best recent papers have been written with authors gathered from the Internet, often without my first meeting all of them. I was honored that one of these papers won the Cochrane Collaboration’s Bill Silverman Prize, which I guess makes my co-authors and myself certified disruptive innovators.

I now tweet, blog, use Facebook, and champion open access publishing. Later in this talk, I will provide the exciting details of the launch of a trial of PubMed Commons. I had been afraid of having to observe an embargo on discussing this. But fortunately the shutdown of the US federal government ended, and PubMed Commons was launched just in time for me to talk about it in this presentation.

Tweeting and blogging are not distractions or alternatives to writing peer-reviewed papers, they can become the means of doing so. Tweets may grow into blog posts, then a series of blog posts, and eventually even a peer-reviewed journal article. No guarantees, but looking back, that’s how a number of my peer-reviewed papers have developed.

On the other hand, the process can work in reverse. Blogging and tweeting about recent and forthcoming papers is a very important part of how to be a scholar and how to promote yourself in the current digital moment.

Here are some examples of me self-consciously and experimentally promoting recent papers with blogging.

My first bit of advice to junior investigators is to figure out where such action is occurring. The form and format it takes is constantly shifting. Observe, experiment, and get involved, consistent with your own comfort level. Remain lurking if you’d like, reading blogs and occasionally expressing approval by clicking “like” or “favorite,” until you are ready to get more involved.

I invite all of you to join me in participating in disruptive innovation. On the other hand, I realize this is not for everyone, and so I will spell out an alternative low road.

The state of the field, being what it is, offers clear opportunities for you to conform and play the game according to the rules that work. Many of you will do so, and some of you can rise to the top of a mediocrity.

My second bit of advice is that if everyone likes your work, you can be certain that you are not doing anything important. I got that sage advice from Andrew Oswald.

The behavioral and social sciences are a mess. We have four or five times the rate of positive findings relative to some of the hard sciences, and I don’t think it is because our theories and methods are more advanced.

The field of psycho-oncology is a particular mess, as seen in rampant confirmation bias and in many of our widely acclaimed papers presenting evidence that is interpreted in ways that are exaggerated or outright false. The bulk of intervention studies in psycho-oncology are underpowered, and the flaws in their designs create a high risk of bias. Studies consistently obtain significant results at an impressive, but statistically improbable, rate.

At the heart of the special problems of the field is the consistent subordination of a commitment to evidence-based science to the vested interests of those who want to promote and secure opportunities for the clinical services of their professions, regardless of what the evidence suggests. This is most notably seen in the relentless promotion of screening for distress in the absence of evidence that actually improves patient outcomes.

Ultimately, data will provide the basis for deciding whether screening is a cost-effective way of improving patient outcomes and whether it represents the best use of scarce resources. I think that the evidence will be negative. But I am more worried about the lasting effects on the credibility and integrity of a field in which editing and peer review have been so distorted by the felt need to demonstrate that screening has benefit.

Many celebrated findings in the field of psycho-oncology are really null findings, if you carefully look at them.

These include

  • Spiegel, D., Kraemer, H., Bloom, J., & Gottheil, E. (1989). Effect of psychosocial treatment on survival of patients with metastatic breast cancer. The Lancet, 334(8668), 888-891.
  • Fawzy, F. I., Fawzy, N. W., Hyun, C. S., Elashoff, R., Guthrie, D., Fahey, J. L., & Morton, D. L. (1993). Malignant melanoma: effects of an early structured psychiatric intervention, coping, and affective state on recurrence and survival 6 years later. Archives of General Psychiatry, 50(9), 681.
  • Antoni, M. H., Lehman, J. M., Klibourn, K. M., Boyers, A. E., Culver, J. L., Alferi, S. M., … & Carver, C. S. (2001). Cognitive-behavioral stress management intervention decreases the prevalence of depression and enhances benefit finding among women under treatment for early-stage breast cancer. Health Psychology, 20(1), 20.
  • Andersen, B. L., Yang, H. C., Farrar, W. B., Golden‐Kreutz, D. M., Emery, C. F., Thornton, L. M., … & Carson, W. E. (2008). Psychologic intervention improves survival for breast cancer patients. Cancer, 113(12), 3450-3458.

There are important negative trials of supportive expressive therapy and expressive writing being kept hidden in file drawers. Just search clinicaltrials.gov.

Zombie ideas and tooth fairy science still hold sway in the literature and win media attention. I have in mind the notion that psychological interventions can extend the lives of cancer patients and exaggerated ideas about the mind holding sway over the body and defeating cancer.

You are entering a system of publication and awards that is not working fairly. Papers appear in ostensibly peer reviewed journals without adequate review. There’s rampant cronyism in opportunities to publish and widespread sweetheart deals as to whether authors have to address concerns raised by reviewers. There is sandbagging of critics and negative findings. It is an embarrassment to the field that authors of flawed ideas are able to suppress commentary on their work and censor criticism.

The respected, high-impact Journal of Clinical Oncology is particularly bad when it comes to psychosocial studies. It shows consistently flawed peer review, sloppy editorial oversight, and serious restrictions on commenting on its miscarriages of the review process. Feeble post-publication peer review is continually handicapped and silenced. I believe that journal has an ethical responsibility to identify to its readers which articles have evaded peer review and to announce that authors of published papers can exercise veto power over any criticism or negative commentary.

If you want to take the low road, you have lots of opportunities to succeed.

  • Pick a trendy topic.
  • Don’t be critical of the dominant views, even if you see through the hype and hokum.
  • Use biological measures, particularly ones that can be derived from saliva, even if they have no or unknown clinical significance.
  • Report positive findings, even if you have to spin and torture and suppress data.
  • No matter what your results, in your discussion section claim they confirm the dominant view and reaffirm that view, even if it is irrelevant or contradicted by your findings.

When you design studies, have lots of endpoints that you can always ignore later. Pick the one to report that makes your study look best. A lot of the positive findings in the literature cannot really be replicated, but you can always appear to do so by pushing aside the results of primary analyses and favoring unplanned secondary and subgroup analyses. If necessary, construct some post hoc new outcome measures you didn’t even envision when you originally designed your study. Prominent examples of these strategies can readily be found in the published literature.

Many of you will do all this, wittingly or unwittingly following the advice and example of your advisors, but you can become more proficient in pursuing this low road.

Alternatively, I invite at least some of you to take the high road and join me and participate in disruptive innovation. Again, it’s not for everyone.

Blog, and if you’re not ready to consistently post your own, join in a group blog. I highly recommend groups like Mental Elf, where you can take turns offering critical commentary on recently published papers.

If you are not ready to blog, you can tweet. You can selectively follow those on Twitter who show they can offer you both fresh new ideas with which you would not otherwise come into contact, as well as a filtering out of much that is hype, hokum and sheer nonsense.

Now I can announce that the PubMed Commons revolution is upon us. Here are some links that explain what it is and how it works.

As long as you have a paper published in PubMed, even a letter to the editor, you can secure an invitation to comment on any article that has appeared in PubMed. You can have others “like” or add a response to your comment, and you to theirs as part of a continuing process of post publication peer review. With PubMed Commons, we’re taking post publication peer review out of the hands of editors who so often have aggressively and vainly taken control of a process that should be left with readers.

I’m asking you to join with me in pursuing a larger goal of creating a literature that is an honest and reliable guide for other researchers, clinicians, patients, the media, and policymakers as to the best evidence. Let’s work together to create a system where the review process is transparent and persists for the useful life of a work. For this last point, I give thanks to Michael Eisen, co-founder of PLOS and disruptive innovator extraordinaire.

*Special thanks to

Claire Wakefield and Michelle Peate, University of NSW, Sydney, Australia

Kirsten Douma and Inge Henselmans, Academic Medical Center Amsterdam, The Netherlands

Wendy Lichtenthal, Memorial Sloan-Kettering Cancer Center, New York City

(CC-BY-NC-SA)

Revised Ethical Principles Have Profound Implications for Psychological Research

The World Medical Association has just released the latest revision of the Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. You can find the full document open access here.

Released on the fiftieth anniversary of the original declaration, the seventh revision will serve as the basis for regulating research involving human subjects. It will also provide the principles for evaluating research protocols submitted to what are called Institutional Review Boards (IRBs) in the United States and to similar bodies elsewhere, and serve as the basis for judging the ethics of investigator behavior.

Manuscripts submitted for publication will have to declare that any research being reported has been reviewed by such bodies and is consistent with the revised Declaration of Helsinki.

The second paragraph of the document notes that it is addressed primarily to physicians, but that others who are involved in medical research are encouraged to adopt the same principles. Based on the reception of past versions of the Declaration of Helsinki, it can be expected that the review of psychological research, whether or not it is conducted in medical settings or with medical patients, will be held to the same standards.

The revised standards thus have profound implications for the conduct of psychological research.

One provision is that the design of every research study be preregistered in a publicly accessible place before the first research participant is even enrolled. The requirement is that investigators will have to commit themselves publicly to basic features of their designs, including the number of research participants to be enrolled and the primary outcomes on which the efficacy of the intervention will be evaluated. They are allowed to make subsequent revisions, but any revisions have to be recorded, along with a preservation of what was originally proposed.

Preregistration of randomized clinical trials (RCTs) has already been a requirement for subsequent publication of results in many journals. If there is no preregistration of a clinical trial, there will be no consideration for publication. But the requirement has been inconsistently adopted as a condition for publication in psychology journals and inconsistently enforced.

Granting agencies increasingly require that the research they fund involving RCTs be preregistered, but many psychological intervention studies are simply noncompliant. Checking published randomized clinical trials of psychological interventions, one finds that more recent ones are registered, but that the outcomes reported in the published papers often differ from what is reported in the registration. Alternatively, the registration designates a primary outcome that could be assessed by a full range of measures, without stating which measure will be used. Researchers thus assess psychological distress with the BDI, the CES-D, the distress thermometer, adjective checklists, and a battery of self-reported anxiety measures. They then pick the measures that make the intervention look most effective. This is the source of rampant selective reporting of outcomes and confirmatory bias. The proportion of clinical trials that report negative outcomes continues to decline, and there’s little doubt that this stems from selective reporting, not improvement in the design and evaluation of interventions.
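To illustrate the loophole, here is a hypothetical, entirely invented contrast between a registration entry that names only a construct and one that pins down the instrument, timepoint, and analysis. Only the second leaves no room for picking the best-looking measure after the fact.

```python
# Hypothetical registration entries (all field names and values invented for
# illustration). The vague version permits outcome switching; the specific
# version does not.
vague_registration = {
    "primary_outcome": "psychological distress",      # could be BDI, CES-D, thermometer...
    "timepoint": "post-treatment and follow-up",      # which follow-up?
}

specific_registration = {
    "primary_outcome": "CES-D total score",           # one named instrument
    "timepoint": "12 months post-randomization",      # one pre-specified timepoint
    "analysis": "intention-to-treat ANCOVA, adjusted for baseline CES-D",
}
```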

What is radical is that the revised Declaration of Helsinki extends the ethical requirement that trials be registered to cover all research, not just randomized clinical trials.

Does that mean that all social psychology lab studies and even correlational observational studies must be registered? The revised declaration seems to require that, but it would be difficult to imagine the requirement taking hold anytime soon.

It remains to be seen just in what form this expansion of registration will be accommodated, but it has profound implications for transparency and replicability of psychological research. Even if the revised declaration is not enforced to the letter in psychological research, it will set new standards for this research’s evaluation.

The demand for replicability of psychological research will now be facilitated by investigators having to preregister the details of their designs ahead of time, at least for an expanded range of designs. We can expect investigators to still try to find wiggle room, but it will be harder to simplify designs after research has been initiated or completed by dropping whole cells or experimental conditions when their inclusion embarrasses the interpretation that investigators are trying to promote.

Another expansion of the Declaration of Helsinki that is relevant to psychological research is the elaboration of the ethical requirements for dissemination, i.e., publication, of research.

Paragraph 36 states

Researchers, authors, sponsors, editors and publishers all have ethical obligations with regard to the publication and dissemination of the results of research. Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. All parties should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results must be published or otherwise made publicly available. Sources of funding, institutional affiliations and conflicts of interest must be declared in the publication. Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.

Again, it is unclear how long it will take and in what form these requirements will be implemented.

Authors, editors, and publishers of journals have a lot invested in the current practices of suppressing and spinning negative findings. But now those of us who oppose such practices can raise ethical issues when authors spin findings or journals continue to protect their impact factor by publishing only positive studies.

In recent months, draft versions of the revised declaration have already been made available for comment. There has already been some debate, and the recent release of the final document in JAMA was accompanied by some critical commentary. We can be confident that once professional psychological associations become aware of the new requirements for psychological research, there will be intense controversy over whether psychologists should adopt the standards or even whether they have any choice.

Let’s have no illusions. There are a lot of vested interests in current editorial practices, but now we are better armed to challenge them, with an ethical code that declares that what we want is not only best practice, it is what is ethical. That in itself is a game changer.