I am holding my revised manuscript hostage until the editor forwards my complaint to a rogue reviewer.

This blog post started as a reply to an editor who had rendered a revise-and-resubmit decision on my invited article based on a biased review. I realized the dilemma I faced was a common one, but unlike many authors, I am sufficiently advanced in my career to take the risk of responding publicly, rather than simply cursing to myself and making the changes requested by the rogue reviewer. The issues I identify will resonate with many readers, even if they do not yet feel safe enough to make such a fuss. Readers interested in the politics and professional intrigue of promoting screening cancer patients for distress might also like reading my specific responses to the reviewer. I end with an interesting analogy, which is probably the best part of the blog.

Dear Editor,

I appreciate the opportunity to receive reviews and revise and resubmit my manuscript. However, my manuscript is now being held hostage in a safe place. It will be released to you when you assure me that my complaint has been forwarded to a misbehaving reviewer with a request for a response.

Unscrupulous reviewers commonly abuse their anonymity and gatekeeping function by unfairly controlling what appears in peer reviewed journals. They do so to further their own selfish and professional guild interests.

They usually succeed in imposing their views, coercing authors to bring their manuscripts into compliance with their wishes or risk rejection. Effects on the literature include gratuitous citations of the reviewer’s work, authors distorting the reporting of findings in ways that flatter the reviewer’s work, and the suppression of negative findings. More fundamentally, however, such unscrupulous tactics corrupt what science does best: confronting ideas with evidence and allowing the larger scientific community to decide how and in what form, if any, the idea survives the confrontation.

With their identities masked, unscrupulous reviewers bludgeon authors in unlit alleyways and slip away. Victimized authors are reluctant to complain because they do not want to threaten future prospects with the journal, and so simply give in.

This time, however, I am announcing the crime in the bright daylight of the Internet. I have not yet unmasked the reviewer with 100% certainty (although I can say he has a bit of an accent that is different than the other members of his department), but I can ask you to forward my communication to him and extend an offer to debate me at a symposium or other professional gathering.

My manuscript was invited as one of two sides of a conference debate concerning screening cancer patients for distress. Although held at 8:30 AM on the last day of the conference, the debate was packed, and as one of the organizers of the conference said afterwards, we woke the crowd up. The other speaker and I had substantial disagreements, but we found common ground in engaging each other in good humor. Some people that I talked with afterwards said that I had persuaded them, but more importantly, others said that I had forced them to think.

Discussions of whether we should screen cancer patients for distress have rapidly moved from considering the evidence to agitation for international recommendations and even mandates for screening. Pharma has poured millions of dollars into the development of quality indicators that can be used to monitor whether oncologists ask patients about psychological distress and, if the patients indicate that they are experiencing distress, what action was taken. Mandated screening is of great benefit to Pharma because these quality indicators can be met by oncologists casually offering antidepressants to distressed patients, without formal diagnosis or follow-up.

As I’ve shown in my research, breast cancer patients are already receiving antidepressant prescriptions at an extraordinary rate, often without ever having had a two-week mood disturbance in their lives. Receiving a prescription for an antidepressant has become an alternative to allowing patients unhurried time with cancer care professionals to discuss why they are distressed and the preferred way of addressing their distress.

This reviewer’s comments are just another effort at suppressing discussion of the lack of evidence that screening cancer patients for distress will improve their outcomes. You’re well aware of other such efforts. Numerous professional advocacy groups have gained privileged access to ostensibly peer-reviewed journals for articles promoting screening with the argument that it would take too long to accumulate evidence of whether screening really benefits patients. The flabbiness of their arguments and the poor quality of some of these papers attest to their not having been adequately tempered by peer review.

A phony consensus group has been organized and claims to have done a systematic review of the evidence concerning screening. When I contacted the authors, they conceded that there was no formal process for organizing the group or arriving at consensus. Rather, it was a convenience group of persons already known to have strong positive opinions of screening, and there were strict controls on what would go into the paper. I’ve taken a close look at that paper and found serious flaws in the identification, classification, and integration of studies. The paper would be ripe for one of the withering point-by-point deconstructions that my colleagues and I are notorious for. Unfortunately, the paper is published in a journal that does not allow post-publication commentary, and so, at least in the journal in which it was published, it will evade critique.

This reviewer abused both the role of gatekeeper for my manuscript and the anonymity of reviewers by demanding changes that were based not on the weight of evidence, but on an insistence that I fall in line with party lines and professional politics requiring the promotion of screening cancer patients for distress, despite the utter lack of evidence. The reviewer insists that I not call attention to the lack of evidence that screening benefits patients, and instead praise screening for its benefits to professionals.

Below I have italicized some of the reviewer’s comments, with my responses interspersed:

My review is in many ways unusual and for the sake of clarity and fairness requires a substantial preamble.
The manuscript represents a transcript of one speaker’s portion (Coyne) of a 2-sided debate and both contributions are meant to be published side by side…
I will try to provide this review in an unbiased fashion but that will be a mighty challenge because I was never a swing voter, I had a position prior to this debate and this position is leaning ‘pro’-screening, as long as some key foundational conditions are in place.

Okay, this reviewer declared loyalties ahead of time and provided a strong warning of bias. But forewarning is not an excuse for the reviewer’s failure to take on my manuscript on its merits.

I see an urgent need to remove the untenable categorical opinion (i.e., claim that there is no supporting research on screening (see opening line in abstract and page 10)) when the other paper clearly shows the (imperfect) opposite based on his systematic review.

Why the urgency? I make the argument that before we implement routine screening of cancer patients for distress, we need evidence that it will lead to improved patient outcomes. In that sense, screening for distress is no different than any other change in clinical procedures that is potentially harmful or costly and disruptive of existing efforts to meet patient needs.

Evidence would consist of a demonstration in a randomized trial that screening for distress and feedback to clinicians and patients leads to better patient outcomes than simply giving patients opportunities to talk to clinicians without regard to their scores on screening instruments, along with the same access to services that screened patients have. The other side conceded in the live debate that there was as yet no such evidence, although I do not get the sense that the reviewer attended the debate.

The author needs to tone down what comes across as an almost personal attack of psycho-oncology researchers from the Calgary group, and needs to remove polemic language around the ‘6th Vital Sign’ (“sloganeering..”) ; 6th Vital sign is a concept created as a marketing strategy rather than a substantive issue.

I appreciate that the reviewer at least concedes that calling distress the “sixth vital sign” is a marketing strategy, but the phrase has increasingly made it into the titles of peer-reviewed articles and is offered as a rationale for recommending and even mandating screening in the absence of data. And let’s look at the vacuousness of this “marketing strategy.” It capitalizes on the well-established four vital signs: temperature, pulse or heart rate, blood pressure, and respiratory rate. These are all objective measures that do not depend on patient self-report. Pain has been proposed as the fifth vital sign, although it is controversial, in part because it is not objective. The “Calgary group,” as the reviewer refers to them, has championed making distress the sixth sign, but distress is neither objective nor vital, and assessment depends on self-report. Temperature is measured with a thermometer, and distress is measured with a pencil-and-paper or touchscreen thermometer. But there the analogy ceases.

Coyne raises a number of tangential issues that don’t belong here; I think they merely distract:
[a] do we really need new pejorative lingo: “Anglo-American Linguistic Imperialism”  ?? I think not,  because the real point is that some terms translate better than others and ‘distress’ does not translate well.

How are these issues tangential? Proponents of screening call for international guidelines mandating routine screening, but distress is not a word that translates into many languages. I attended a symposium recently in which a French presenter described the bewilderment of cancer patients when they were asked to mark their level of distress on a picture of a thermometer. In many languages, it is not a matter of finding a direct translation of “distress”, because no direct translation exists and there is no unitary corresponding concept. And the linguistic problems are compounded when advocates stretch distress to include every psychological discomfort, spiritual issue, and side effect of cancer. One word cannot serve so many functions in other languages.

So, I think it is a big issue to impose this Anglo-American term on other cultures and to insist patients respond even when there is no coherent concept being assessed in their language. For what purpose, international solidarity? The reviewer defends the “sixth vital sign” from the “Calgary group,” which I don’t think we need, but he disallows me my “Anglo-American linguistic imperialism,” for which I provide an adequate rationale.

Another [partially] straw-man argument is that routine use of screening and follow through are expensive.  There are numerous settings where screening is done via touch-screen computer that autoscore and spit out summary sheets with ‘red-flagged’ results.  This is cheap and I don’t see how anyone could argue otherwise.

Screening is much more than getting patients to tap a touchscreen if the intention is to improve their well-being. Unfortunately, in some American settings that have implemented screening, patients tap a touchscreen thermometer to indicate their level of distress and results are whisked to an electronic medical record where the information is ignored. Is that what the reviewer wants?

Results of screening, particularly with a distress thermometer, are highly ambiguous and need to be followed up with an interview by a professional. I cited research in my manuscript showing that most distressed patients are not seeking a referral, variously because their needs are already addressed elsewhere, they don’t see the cancer care setting as an appropriate place to get services for their needs, they want services that are not available at the cancer center, or they are simply not convinced that they need services.

Many screening instruments have items referring to “being better off dead” or other indications of suicidal ideation or intention to self-harm. Although cancer patients endorsing such items have a small likelihood of attempting suicide, the issue needs to be addressed in an interview with a trained professional. In some cancer care settings, this could cost a patient $200, and most endorsements of such items turn out to be false positives. To not do follow-up assessments is unconscionable, unethical and could be the basis of a malpractice suit in many settings. To adopt a clinical policy of “don’t ask, don’t tell,” is equally unconscionable, unethical, and could conceivably be the basis of a malpractice suit.

Coyne reports (based on three studies) that almost half of the samples identified as depressed/anxious were already in psychological/psychiatric treatment when diagnosed with cancer.  While Coyne’s numbers were derived from good quality studies these numbers don’t jibe with population estimates.

If the reviewer doesn’t like my “good quality studies,” he should propose some others. As for the wild estimates of half or more of all the people in the community walking around with untreated mental illness, I don’t think we can take seriously the results of studies based on lay interviewers administering structured interview schedules to community-residing persons as estimates of unmet needs for psychiatric services.

Coyne posits that screening should improve patient outcomes and offers a detailed section showing that we don’t yet have convincing evidence that it does; this is where the debate between [the opposing side] and Coyne is particularly interesting and valuable.  However, for reasons that are not explicated, Coyne and a number of individuals with whom he shares the ‘anti’ position never allow the argument that systematic screening has two other valuable functions, namely to offer a degree of social justice inherent in equal access to care, and it helps psychological service providers to use clinical population-derived data for clarifying resource needs and tracking system efficiency. I do wonder whether or not these latter two issues are affected by context, namely that they might be more naturally attractive to Psycho-Oncology clinicians in countries with universal health care.

What is this focusing first on the “Calgary group” and now on “Coyne and a number of individuals”? Is the reviewer talking about rival gangs or ideas?

I fail to see how screening can “offer a degree of social justice inherent in equal access to care” if it does not improve patient outcomes. We know from lots of studies of screening for depression that persons with low income and other social disparities have a difficult time completing referrals, even when some of the obvious barriers like costs are removed. Studies find that persons with social disparities may need 25 efforts at contact by telephone, with up to eight completed, in order to get them to the first session of mental health treatment. Many of them will not return.

So, where is the social justice in referring low income and other disadvantaged patients to services they won’t get to, and especially when there is no assurance that the services are effective? The reviewer should visit an American community mental health setting where Medicaid patients are sent because psychiatrists prefer to treat patients who pay out of pocket. Or visit the bewildered primary care physicians who get sent cancer patients from Danish or Dutch cancer centers screening for distress.

Routine screening risks compounding  social disparities in receipt of services. Persons with higher income or other social resources are much more likely to complete the referrals that are offered. Even when services are free, people with social disparities are much less likely to show up than people who have the resources to get there.

I don’t know what the reviewer intends by saying screening should be implemented because it gives providers clinical population-derived data for clarifying resource needs. I see this argument as a transparent effort to exploit patients who are not getting any benefit from screening in order to bolster support for hiring professionals to be available to provide services. Think of it: would we provide mammograms to women simply to document a need for more oncologists, if the women do not get any benefit from the mammograms?

Imagine this scenario: attorneys push for screening the general population for unmet legal needs. With short checklists, pissed-off thermometers, and web-based surveys, they identify people having unresolved disputes with their relatives and neighbors that the attorneys could help them settle by suing each other. They thereby uncover what they consider unmet need for litigation. Now, some people may have misgivings about suing family and supposed friends. The attorneys could then argue that this is just due to a sense of stigma, and launch anti-stigma campaigns to break down their resistance to accepting services.

The attorneys’ denial that their primary interest was to generate business for themselves would be more easily dismissed than that of mental health professionals calling for screening for cancer patients for distress. But the conflict of interest is just as great.


10 thoughts on “I am holding my revised manuscript hostage until the editor forwards my complaint to a rogue reviewer.”

  1. The fact that you were asked to submit a piece representing one side of a debate, and then you were beaten and harangued for having a viewpoint, is unbelievably dirty pool. Debaters are supposed to have viewpoints. In fact I’m not even sure why you so carefully support your claims with evidence in your post. Although scientists are generally supposed to do that, one is allowed to have an opinion that is….well, an opinion, when engaging in a debate.

    Thanks for outing the lowest of the low. We may not know his/her name, but that nightmarish image will remain firmly in mind.


    1. Many thanks for your support. I suspect, however, that this experience is common; what is different about mine is that I am protesting in public, not caving in.


  2. I’m a relatively new author, so I thought it was a problem with my articles, and that this problem was unique to me. One psychology journal said ‘we see that you are not an attorney, so we aren’t even going to consider an article from you on this topic’ (despite the fact that only psychologists have written on that topic, not attorneys). A psychiatry journal rejected two articles because, among other strange comments, ‘the author made up his own acronym’ and ‘this is a psychology article’. I see that there are a fair number of complaints lodged against OA journals for a lack of peer review, but no one is documenting the problems with print journal peer review – until now! Thanks….


    1. Lack of peer review is not limited to open access journals. The study that sent a bad article for review to open access journals lacked a comparison group in which the same article was sent to conventional for-profit journals. See http://blogs.lse.ac.uk/impactofsocialsciences/2013/10/07/whos-afraid-of-open-access/

      If anything, traditional for-profit journals, including those associated with professional organizations, are more susceptible to cronyism and privileged access publishing with no or relaxed peer review. See http://www.psychologytoday.com/blog/the-skeptical-sleuth/201204/authors-don-t-call-us-we-ll-call-you

      Psychologists have to deal with this all the time with APA journals, especially the American Psychologist.


  3. As a victimized author I have often expressed my complaint, but no editor has ever deigned to reply. As a reviewer, I would like very much to read the authors’ complaints or comments, as I consider it a way to improve the reviewer’s work. Unfortunately, in most cases editors do not forward such feedback to the reviewers. My personal impression is that all the power is exclusively in the hands of the editors. However, in your case, since you are a well-known author, certainly you’ll get a reply …..


    1. I think that junior people definitely have a harder time eliciting even a polite response from an editor who has rejected their work. But as a seasoned, senior author, I still experience my fair share of getting dismissed with a refusal to discuss whether an outcome is fair or based on an accurate assessment of the work or the larger literature. International standards require that journals establish appeal processes, but some of these processes do not represent a serious effort.


  4. I thought you might be interested in some reviewer comments from the Journal of the American Academy of Psychiatry and Law (www.jaapl.org):

    Use of terms such as client are psychology-oriented (although there are 268 articles in JAAPL that use the term).

    applying those arguments to the practrice (sic) of psychiatrists (it is an article to the general forensic mental health field, including psychiatry).

    jargon typical of psychologists but often unused by psychiatrist.

    JAAPL primarily targets forensic psychiatrists.

    9 of the 52 references are from psychiatric sources, which makes references to psychological sources very difficult to follow.

    written from a psychologist’s perspective targeting psychologists.

