No author left behind: Getting authors published who cannot afford article processing charges

Efforts to promote open access publishing ignore the many scholars who cannot afford the article processing charges of quality open access journals. Their situation may be about to get worse.



Open access has turned out to be a misnomer. Of course, free access to research findings is good for science and society. However, open access is clearly not freely open to the scholars who are required to pay exorbitant fees to publish their results, often out of their own pockets.


  • Current proposals for accelerating a transition to full open access for all scholarly articles focus primarily on readers who cannot obtain paywalled articles that require a subscription or privileges at a library with subscriptions.
  • Much less attention is paid to the many prospective authors who cannot pay article processing charges (APCs), but who fall outside a narrow range of eligibility for APC waivers and discounts.
  • This bias perpetuates global and local social inequalities in who gets to publish in quality open access journals and who does not.
  • Many open access journals provide explicit guidelines for how authors from particular countries can obtain waivers and discounts, but are deliberately vague about policies and procedures for other classes of authors.
  • Many prospective authors lack resources for publishing in an open access journal without paying out of their own pockets. They also lack awareness of how to obtain waivers. If they apply at all, they may be disappointed.
  • As an immediate solution, I encourage authors to query journals about waiver policies and to share their experiences of whether and how they obtained waivers with others in their social networks.
  • For a short while, it is also possible to provide feedback concerning implementation of Plan S, an ambitious initiative to encourage and require publication in open access journals. Read on and provide feedback while you can, but hurry.
  • In the absence of corrective action, a group of funding agencies is about to strengthen a model of open access publishing in which the costs of publishing are shifted to authors, most of whom are not receiving or applying for grants. Yet they will effectively be excluded from publishing in quality open access journals unless some compensatory mechanism is introduced.

Open access improves health care, especially in less resourced environments.

Open Access involves providing unrestricted free online access to scholarly publications. Among its many benefits, open access enables clinicians, policymakers, patients, and their caretakers to obtain information for decision-making when they lack subscriptions to paywalled journals or privileges at a library that subscribes.

The transition from the originally paywalled electronic bibliographic resource Medline to the freely searchable PubMed and Google Scholar means that such stakeholders can obtain titles and abstracts even without open access, but making decisions on that information alone can prove risky.

A PLoS Medicine article noted:

Arthur Amman, President of Global Strategies for HIV Prevention, tells this story: “I recently met a physician from southern Africa, engaged in perinatal HIV prevention, whose primary access to information was abstracts posted on the Internet. Based on a single abstract, they had altered their perinatal HIV prevention program from an effective therapy to one with lesser efficacy. Had they read the full text article they would have undoubtedly realized that the study results were based on short-term follow-up, a small pivotal group, incomplete data, and unlikely to be applicable to their country situation. Their decision to alter treatment based solely on the abstract’s conclusions may have resulted in increased perinatal HIV transmission.”

Advancing open access for readers, but not for authors

Currently, initiatives are underway to accelerate the transition to full and immediate open access to scientific and biomedical publications:

“After 1 January 2020 scientific publications on the results from research funded by public grants provided by national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.”

Among the proposed guiding principles are:

“Where applicable, Open Access publication fees are covered by the Funders or universities, not by individual researchers; it is acknowledged that all scientists should be able to publish their work Open Access even if their institutions have limited means.”

And

“The journal/platform must provide automatic APC waivers for authors from low-income countries and discounts for authors in middle-income countries.”

Stop and think: what about authors who do not and cannot compete for external funding? The first 15 funders [there are currently 16] to back Plan S accounted for only 3.5% of global research articles in 2017, but their initiative is about to be implemented, mandating open access publishing far more broadly.

Enforcing author‐pay models will strengthen the hand of those who have resources and weaken the hand of those who do not have, magnifying the north‐south academic divide, creating another structural bias, and further narrowing the knowledge‐production system (Medie & Kang 2018; Nagendra et al. 2018). People with limited access to resources will find it increasingly difficult to publish in the best journals. The European mandate will amplify the advantages of some scientists working in developed countries over their less affluent counterparts.

The author‐pays inequality may also affect equity of access within countries, including those considered developed, where there can be major differences between different research groups in their ability to pay (Openjuru et al. 2015). It is harder for disadvantaged groups from these jurisdictions to appeal for waivers (Lawson 2015), deepening the divide between those who can pay and those who cannot.

What exists now for authors who cannot afford article processing charges

What happens to authors who have no such coverage of APCs: clinicians in community settings, public health professionals, independent scholars, patients and their advocates, or other persons without the necessary affiliations or credentials who are nonetheless capable of contributing to the betterment of science and health care? That is a huge group. If they can't pay, they won't be able to play the publishing game, or will do so in obscurity.

Too much confidence is being placed in solutions that are too narrow in focus or that simply do not work for this large and diverse group.

Solutions that are assumed to work, but that are inadequate

  1. Find a high-quality open access journal using the DOAJ (Directory of Open Access Journals). Many of the journals indexed in this directory have free or low APCs.

The Directory of Open Access Journals is a service that indexes high quality, peer reviewed Open Access research journals, periodicals and their articles’ metadata. The Directory aims to be comprehensive and cover all open access academic journals that use an appropriate quality control system (see below for definitions) and is not limited to particular languages, geographical region, or subject areas. The Directory aims to increase the visibility and ease of use of open access academic journals—regardless of size and country of origin—thereby promoting their visibility, usage and impact.

DOAJ currently lists over 12,000 journals from 129 countries. It is growing rapidly, with 2018 being the best year to date: over 1,700 journals were added. Reflecting its level of quality control, DOAJ in the same period rejected over 2,000 poorly completed applications without review, removing them from the system so that they never reached the editorial teams.

Impressive? Sadly, a considerable proportion of DOAJ-listed journals are obscure, narrow in specialization, and often not even listed in PubMed or Web of Knowledge/Web of Science. This is particularly true of the DOAJ journals without fees. Eigenfactor.com analyzed over 400 open access journals without APCs and found that only the top 31 had a JIF greater than 1.00. Only the top 104 had an impact factor above 0.500. The bottom quarter of journals had JIFs of less than 0.16.

A low-impact journal can still be valuable in some contexts, especially if it serves a highly specialized field or contains information relevant to stakeholders who do not read English. However, even in modestly resourced settings that do not cover authors' APCs, there are commonly pressures to publish in journals with JIFs above 1.0, and stigma and even penalties for publishing in lower-impact journals.

  2. Apply for waivers or reductions in APCs through a global participation initiative. Current proposals call for all journals to establish such programs. Most current programs are restricted to countries on the United Nations Least Developed Country List or countries with the lowest Healthy Life Expectancy (HALE). The PLOS website's description of its program is particularly clear.

PLOS GLOBAL PARTICIPATION INITIATIVE

The PLOS Global Participation Initiative (GPI) aims to lower barriers to publication based on cost for researchers around the world who may be unable, or have limited ability, to publish in Open Access journals.

Authors’ research funded primarily (50% or more of the work contained within the article) by an institution or organization from eligible low- and middle-income countries is automatically eligible for assistance. If the author’s research funder is based in a Group 1 country, PLOS will cover the entire publication fee and there will be no charge. For authors whose research funder is part of Group 2, PLOS will cover all but part of the publication fee — the remaining publication fee will be $500 USD.

Stop and think: For scholars in Group 2 countries [click and see which countries these are and which are excluded from any such relief; you may be surprised], how many can come up with $500 per paper? To get concrete, consider a recent PhD in a Group 2 country who is forced to work in the service sector for lack of academic opportunities, and who needs two quality publications to improve her chances of receiving a postdoctoral opportunity in a better-resourced setting.
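
For readers who think in code, the GPI fee schedule quoted above reduces to a simple lookup. Here is a minimal sketch in Python; the country sets are hypothetical placeholders, since the actual Group 1 and Group 2 lists live on the PLOS website and change over time.

```python
# Minimal sketch of the PLOS GPI fee schedule described above.
# The country sets below are hypothetical placeholders; consult the
# PLOS website for the actual, periodically updated Group 1/2 lists.

GROUP_1 = {"CountryA", "CountryB"}   # lowest-income tier: fee fully covered
GROUP_2 = {"CountryC", "CountryD"}   # middle tier: flat $500 remainder

def gpi_fee_due(funder_country: str, full_apc: float) -> float:
    """Return the APC an author owes under the Global Participation Initiative."""
    if funder_country in GROUP_1:
        return 0.0       # PLOS covers the entire publication fee
    if funder_country in GROUP_2:
        return 500.0     # author still owes the remaining $500 USD
    return full_apc      # no automatic GPI relief; individual PFA is the fallback

# A Group 2 author submitting to PLOS One (APC $1,595) still owes $500:
print(gpi_fee_due("CountryC", 1595.0))  # -> 500.0
```

The point of the sketch is how little discretion the schedule leaves: eligibility turns entirely on the funder's country, not on an individual author's ability to pay.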

  3. Apply for a waiver based on demonstration of individual need and inability to pay. Some journals only provide waivers and discounts to authors in Group 1 or Group 2 countries. Other journals are more flexible. Authors have to ask, and sometimes this must occur before they begin uploading their manuscript. Here too, PLOS is more explicit than most and seemingly more generous in granting waivers or discounts.

PLOS PUBLICATION FEE ASSISTANCE PROGRAM

The PLOS Publication Fee Assistance (PFA) program was created for authors unable to pay all or part of their publication fees and who can demonstrate financial need.

An author can apply for PFA when submitting an article for publication. A decision is usually sent to the author within 10 business days. PLOS considers applications on a case-by-case basis.

PLOS publication decisions are based solely on editorial criteria. Information about applications for fee assistance are not disclosed to journal editors or reviewers.

  • Authors should exhaust all alternative funding sources before applying for PFA. The application form includes questions on the availability of alternative funding sources such as the authors’ or co-authors’ institution, institutional library, government agencies and research funders. Funding disclosure information provided by authors will be used as part of the PFA application review.

  • Assistance must be formally applied for at submission. Requests made during the review process or after acceptance will not be considered. Authors cannot apply for the fee assistance by email or through direct request to journal editors.

The PLOS website states:

In 2017 PLOS provided $2.1 million in individual fee support to its authors, through the PLOS Global Participation Initiative (GPI) and Publication Fee Assistance Program.

That sounds like a generous sum of money, but it does not distinguish between payments made through the PLOS Global Participation Initiative (GPI) and the fee assistance program requiring individual application. Consider some math.

APCs for PLOS One are currently $1,595 USD; for PLOS Biology and PLOS Medicine, $3,000 USD.

In 2017, PLOS published ~23,000 articles, maybe 80% in PLOS One.

So, a lower estimate would be that PLOS took in $35,000,000 in APCs in 2017.
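
For transparency, here is that back-of-the-envelope calculation as a sketch, using only the approximate figures quoted above; actual per-journal fees and article counts vary.

```python
# Back-of-the-envelope lower bound on PLOS's 2017 APC revenue,
# using only the rough figures quoted above.

PLOS_ONE_APC = 1595       # USD per article (2017-era fee)
TOTAL_ARTICLES = 23_000   # approximate PLOS output in 2017

# Lower bound: price every article at the cheapest quoted fee,
# even though PLOS Biology and PLOS Medicine charged $3,000.
gross_lower_bound = TOTAL_ARTICLES * PLOS_ONE_APC
print(f"Gross lower bound: ${gross_lower_bound:,}")               # -> $36,685,000

# Netting out the $2.1 million in fee support PLOS reports granting
# in 2017 lands close to the ~$35 million estimate above.
print(f"Net of fee support: ${gross_lower_bound - 2_100_000:,}")  # -> $34,585,000
```

On these numbers, the $2.1 million in fee support amounts to roughly 6% of a conservative estimate of APC revenue.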

The Scholarly Kitchen reports that 2017 was not a good financial year for the Public Library of Science (PLOS). Largely as a result of a continued decline in submissions to PLOS One, which peaked at over 32,000 in 2013, revenue was down by $2 million. The Scholarly Kitchen quotes the PLOS’ 2017 Financial Overview:

“All our decisions in 2017 (and 2018) have been driven by the need to be fiscally responsible and remain a sustainable non-profit organization.”

In response, PLOS is increasing APCs by US$100 for 2019.

PLOS is a non-profit, not a charitable organization. It should be no surprise that PLOS did not respond to my request that they more widely publicize details of their program to waive or discount APCs for authors outside the Global Participation Initiative. Presumably, at least some authors who cannot pay full APCs find ways of getting reimbursed. A procedure that made waivers and discounts too easy to get would encourage gaming, and would discourage authors from utilizing resources in their own settings that involve more effort, take more time, or are less certain to provide reimbursement.

PLOS provides insufficient details of the criteria for receiving a waiver. There is no readily available information about what proportion of waiver requests are granted or the average size of discounts.

My modest efforts to promote publishing in quality open access journals by authors who are less likely to do so

I work with a range of authors who sometimes need assistance getting published in the open access journals that will best reach the readership they want to influence. For instance, much probing of published papers for errors and bad science is done by people on the fringe of academia who do not currently have affiliations. We downloaded and reanalyzed data from a PNAS article, and the authors responded by altering the data without acknowledging they had done so, reanalyzing it, and ridiculing us in a PLOS One article. We had to request a waiver of APCs formally before it was granted; I had to provide evidence of my retirement. Open access journals, like those of PLOS or Springer Nature, do not grant waivers automatically for substantive criticism of published articles, even when serious problems are being identified.

As another example, patient citizen scientists have had a crucial role in reanalyzing data from the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome. These activists have faced strong resistance from the PACE investigators and their supporters when they attempt to publish. It is nonetheless important for these activists to reach clinicians and policymakers outside of their own community. Journal of Health Psychology organized a special issue around an article by patient scientist activist Keith Geraghty, 'PACE-Gate': When clinical trial evidence meets open data access. A last-minute decision by the editorial board (which included me) was crucial in the issue's rapid distribution within the patient community, but also among policymakers.

A large group of authors who are disadvantaged by current open access publishing policies are early career academics (ECAs) in Eastern European and Latin American countries, whom I reach in face-to-face and web-based writing workshops. Their countries do not typically fall into Group 1 or Group 2, although they share some of the same disadvantages in terms of resources. These ECAs often lack mentorship because the older generation of academics and administrators did not have to publish anything of quality, if they had to publish at all. This older cohort nonetheless holds the ECAs responsible for improving their institutions' reputation and visibility, with expectations that would be much more appropriate for properly mentored ECAs in well-resourced settings. I have heard these unrealistic expectations referred to as the "field of dreams" administrative philosophy.

It is important for these ECAs to publish in open access journals in their own language, which uniformly have low JIFs and often are not listed in international electronic bibliographic sources. Yet they also must publish in English-language journals with at least a minimal JIF. When I discussed these ECAs with colleagues in better-resourced settings, I was criticized for falling into the common logical fallacy of "affirming the consequent" by assuming 1) that JIF is a true measure of "goodness" and 2) that publishing in smaller, non-English journals is a penalty. My reply is: please don't shoot the messenger, or blame the victims of irrational and unrealistic expectations.

In brief trainings, I can provide an overview of the process of getting published in quality journals in a rapidly changing time of digitalization and quick obsolescence of the old ways of doing things. Often these ECAs are struggling without a map. I can show them how to use resources like JANE (Journal/Author Name Estimator) to select a range of possible journals; how to avoid the trap of predatory journals, which are increasingly sophisticated and appealing to naïve authors; creative ways of using Google Scholar to be strategic about titles and abstracts; and the more general use of publisher and journal websites to access the resources that are increasingly available there. But ultimately, it is important for ECAs to gain and curate their own experiences and share them, as a substitute for the mentorship and accumulated knowledge about publishing in the most appropriate journals that they do not have.

In many of these settings, there is an ongoing crucial transition, with retirements opening new opportunities. Just as these ECAs struggle to gain the achievements and credentials that success in their careers requires, it could become more difficult for them to publish in the most appropriate open access journals. Implementation of Plan S as currently envisioned may mean that some major funding agencies and well-resourced institutions will assume more of the burden of absorbing the costs of publishing open access.

Scholars with access to international funding and coverage of the APCs required by the dominant model of open access publishing have a huge advantage over the many scholars without such resources: scholars outing and correcting bad science; patient citizen scientists; and the large group of scholars disadvantaged by being in the Global South or simply in one of the many other settings incapable of providing relief from APCs. It may not be possible to close gaps in the opportunity to publish in quality open access journals if the dominant business model continues to be author-paid APCs or subsidies by publishers and journals. The gap may widen with implementation of Plan S.


A closing window in which to attempt to influence implementation of Plan S…

If you are concerned about inequalities in the opportunities to publish in quality open access journals, there is a small window in which you can express your concerns and potentially influence the implementation of a broad plan to transform publishing in open access journals, Plan S of cOALition S.


cOALition S is a group of national research funding organizations that, with the support of the European Commission and the European Research Council (ERC), has launched an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles. Other funders from across the world are signing on, with support expressed from China in December 2018. Nonetheless, Plan S is decidedly focused on issues arising in Western Europe, where well-resourced universities have access to supportive funding organizations.

The 10 principles are no longer up for debate, but there is an opportunity to influence how they will be implemented. Until February 1, 2019, feedback can be left concerning two key questions:

  1. Is there anything unclear or are there any issues that have not been addressed by the guidance document?
  2. Are there other mechanisms or requirements funders should consider to foster full and immediate Open Access of research outputs?

Please click and provide feedback now, before it is too late.

Deep Brain Stimulation: Unproven treatment promoted with a conflict of interest in JAMA Psychiatry [again]

“Even with our noisy ways and cattle prods in the brain, we have to take care of sick people, now,” – Helen Mayberg

“All of us—researchers, journalists, patients and their loved ones–are desperate for genuine progress in treatments for severe mental illness. But if the history of such treatments teaches us anything, it is that we must view claims of dramatic progress with skepticism, or we will fall prey to false hopes.” – John Horgan

An email alert announced the early release of an article in JAMA Psychiatry reporting the effects of deep brain stimulation (DBS) for depression. The article was accompanied by an editorial commentary.

Oh no! Is an unproven treatment once again being promoted by one of the most prestigious psychiatry journals with an editorial commentary reeking of vested interests?

Indeed it is, but we can use the article and commentary as a way of honing our skepticism about such editorial practices and to learn better where to look to confirm or dispel our suspicions when they arise.

Like many readers of this blog, there was a time when I would turn to a trusted, prestigious source like JAMA Psychiatry with great expectations. Not being an expert in a particular area like DBS, I would be inclined to accept uncritically what I read. But then I noticed how much of what I read conflicted with what I already knew about research design and basic statistics. Time and time again, this knowledge proved sufficient to detect serious hype, exaggeration, and simply false claims.

The problem was no longer simply one of authors adopting questionable research practices. It expanded to journals and professional organizations adopting questionable publication practices that fit financial, political, and other not strictly scientific agendas.

What is found in the most prestigious biomedical journals is not necessarily the most robust and trustworthy of scientific findings. Rather, content is picked in terms of its ability to be portrayed as innovative, breakthrough medicine. Beyond that, the content is consistent with prevailing campaigns to promote particular viewpoints and themes. There is apparently no restriction on recruiting those who might most personally profit to write accompanying commentaries.

We need to recognize that editorial commentaries often receive weak or no peer review. Invitations from editors to provide commentaries are often a matter of shared nonscientific agendas and simple cronyism.

Having come to these conclusions, I have been on a mission to learn how better to detect hype and hokum, and I have invited readers of my blog posts to come along.

This installment builds on my recent discussion of an article claiming remission of suicidal ideation with magnetic seizure therapy. Like the editorial commentary accompanying that previous JAMA Psychiatry article, the commentary discussed here had an impressive conflict of interest disclosure. The disclosure alone probably would not have prompted me to search the Internet for other material about one of the authors. Yet a search revealed information that is quite relevant to our interpretation of the new article and its commentary. We can ponder whether this information should have been withheld. I think it should have been disclosed.

The lesson I learned is that a higher level of vigilance is needed to interpret highly touted article-commentary combos in prestigious journals, unless we are going to simply dismiss them as advertisements or propaganda rather than a highlighting of solid biomedical science.

Sadly, though, this exercise convinced me that efforts to scrutinize claims by turning to seemingly trustworthy supplementary sources can provide a misleading picture.

The article under discussion is:

Bergfeld IO, Mantione M, Hoogendoorn MC, et al. Deep Brain Stimulation of the Ventral Anterior Limb of the Internal Capsule for Treatment-Resistant Depression: A Randomized Clinical Trial. JAMA Psychiatry. Published online April 06, 2016. doi:10.1001/jamapsychiatry.2016.0152.

The commentary is:

Mayberg HS, Riva-Posse P, Crowell AL. Deep Brain Stimulation for Depression: Keeping an Eye on a Moving Target. JAMA Psychiatry. Published online April 06, 2016. doi:10.1001/jamapsychiatry.2016.0173.

The trial registration is

Deep Brain Stimulation in Treatment-refractory patients with Major Depressive Disorder.

Pursuing my skepticism by searching on the Internet, I immediately discovered a series of earlier blog posts about DBS by Neurocritic [1] [2] [3] that saved me a lot of time and directed me to still other useful sources. I refer to what I learned from Neurocritic in this blog post. But as always, all opinions are entirely my responsibility, along with misstatements and any inaccuracies.

What I learned immediately from Neurocritic is that DBS is a hot area of research, even if it continues to produce disappointing outcomes.

DBS had a commitment of $70 million from President Obama's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Premised on the causes of psychopathology lying in precise, isolated neural circuitry, it is the poster child of the Research Domain Criteria (RDoC) of former NIMH director Thomas Insel. Neurocritic points to Insel's promotion of "electroceuticals" like DBS in his NIMH Director's Blog 10 Best of 2013:

The key concept: if mental disorders are brain circuit disorders, then successful treatments need to tune circuits with precision. Chemicals may be less precise than electrical or cognitive interventions that target specific circuits.

The randomized trial of deep brain stimulation for depression.

The objective of the trial was:

To assess the efficacy of DBS of the ventral anterior limb of the internal capsule (vALIC), controlling for placebo effects with active and sham stimulation phases.

Inclusion criteria were a diagnosis of major depressive disorder designated as treatment-resistant depression (TRD) on the basis of

A failure of at least 2 different classes of second-generation antidepressants (eg, selective serotonin reuptake inhibitor), 1 trial of a tricyclic antidepressant, 1 trial of a tricyclic antidepressant with lithium augmentation, 1 trial of a monoamine oxidase inhibitor, and 6 or more sessions of bilateral electroconvulsive therapy.

Twenty-five patients with TRD from 2 Dutch hospitals first received surgery that implanted four contact electrodes deep within their brains. The electrodes were attached to tiny wires leading to a battery-powered pulse generator implanted under their collar bones.

The standardized DBS treatment started after a three-week recovery from the surgery. Brain stimulation was continuous one week after surgery, but at three weeks, patients began visits with psychiatrists or psychologists, at first on a biweekly basis and later less frequently.

At the visits, level of depression was assessed and adjustments were made to various parameters of the DBS, such as the specific site targeted in the brain, voltage, and pulse frequency and amplitude. Treatment continued until optimization: either four weeks of sustained improvement on depression rating scales or the end of the 52-week period. In the original protocol, this phase of the study was limited to six months, but it was extended after experience with a few patients. Six patients went even longer than the 52 weeks to achieve optimization.

Once optimization was achieved, patients were randomized to a crossover phase in which they received two blocks of six weeks of either continued active or sham treatment that involved simply turning off the stimulation. Outcomes were classified in terms of investigator-rated changes in the 17-item Hamilton Depression Rating Scale.

The outcome of the open-label phase of the study was the change of the investigator-rated HAM-D-17 score (range, 0-52) from baseline to T2. In addition, we classified patients as responders (≥50% reduction of HAM-D-17 score at T2 compared with baseline) or nonresponders (<50% reduction of HAM-D-17 score at T2 compared with baseline). Remission was defined as a HAM-D-17 score of 7 or less at T2. The primary outcome measure of the randomized, double-blind crossover trial was the difference in HAM-D-17 scores between the active and sham stimulation phases. In a post hoc analysis, we tested whether a subset of nonresponders showed a partial response (≥25% but <50% reduction of HAM-D-17 score at T2 compared with baseline).
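
The outcome definitions quoted above are easy to operationalize. Here is a minimal sketch; the function name and the order of the checks are my own choices, not the investigators'.

```python
# Minimal sketch of the HAM-D-17 outcome definitions quoted above.
# The function name and ordering of checks are mine, not the authors'.

def classify_outcome(baseline: float, t2: float) -> str:
    """Classify a patient from HAM-D-17 scores at baseline and T2."""
    if t2 <= 7:
        return "remission"          # HAM-D-17 score of 7 or less at T2
    reduction = (baseline - t2) / baseline
    if reduction >= 0.50:
        return "responder"          # >=50% reduction from baseline
    if reduction >= 0.25:
        return "partial response"   # post hoc category: >=25% but <50%
    return "nonresponder"

# The mean scores reported below (22.2 at baseline, 15.9 at T2) amount
# to only a ~28% reduction, i.e., a "partial response" on average:
print(classify_outcome(22.2, 15.9))  # -> partial response
```

Note that on the trial's own averages, the typical change falls short of the 50% response threshold, which is worth keeping in mind when reading the Discussion's claims.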

Results

Clinical outcomes. The mean time to first response in responders was 53.6 (50.6) days (range, 6-154 days) after the start of treatment optimization. The mean HAM-D-17 scores decreased from 22.2 (95%CI, 20.3-24.1) at baseline to 15.9 (95% CI, 12.3-19.5) at T2.

An already small sample shrank further from initial assessment of eligibility to retention at the end of the crossover study. Of the 52 patients assessed for eligibility, 23 were ineligible and four refused. Once the optimization phase of the trial started, four patients withdrew for lack of effect. Another five could not be randomized in the crossover phase: three because of an unstable psychiatric status, one because of fear of worsening symptoms, and one because of physical health. So the randomized phase of the trial consisted of nine patients randomized to active treatment and then sham, and another seven randomized to sham and then active treatment.

The crossover to sham treatment did not go as planned. Of the nine patients (three responders and six nonresponders) randomized to the active-then-sham condition, all had to be crossed over early: one because the patient requested a crossover, two because of a gradual increase in symptoms, and three because of logistics. Of the seven patients assigned to sham-first (four responders and three nonresponders), all had to be crossed over within a day because of increases in symptoms.

I don’t want to get lost in the details here. But we are getting into small numbers with nonrandom attrition, imbalanced assignment of responders versus nonresponders in the randomization, and the breakdown of the planned sham treatment. From what I’ve read elsewhere about DBS, I don’t think that providers or patients were blinded to the sham treatment. Patients should be able to feel the shutting off of the stimulator.

Adverse events. DBS has safety issues. Serious adverse events included severe nausea during surgery (1 patient), suicide attempt (4 patients), and suicidal ideation (2 patients). Two nonresponders died several weeks after they withdrew from the study and DBS had been stopped (1 suicide, 1 euthanasia). Two patients developed full-blown mania during treatment and another patient became hypomanic.

The article’s Discussion claims

We found a significant reduction of depressive symptoms following vALIC DBS, resulting in response in 10 patients (40%) and partial response in 6 (24%) patients with TRD.

Remission was achieved in 5 (20%) patients. The randomized active-sham phase study design indicates that reduction of depressive symptoms cannot be attributed to placebo effects…

Conclusions

This trial shows efficacy of DBS in patients with TRD and supports the possible benefits of DBS despite a previous disappointing randomized clinical trial. Further specification of targets and the most accurate setting optimization as well as larger randomized clinical trials are necessary.

A clinical trial starting with 25 patients does not have much potential to shift our confidence in the efficacy of DBS. Any hope of doing so was further dashed when the sample was reduced to the 16 patients who remained for the investigators' attempted randomization to an active treatment versus sham comparison (seven responders and nine nonresponders). Then the sham condition could not be maintained as planned in the protocol for any patients.

The authors interpreted the immediate effects of shifting to sham treatment as ruling out any placebo effect. However, it is likely that shutting off the stimulator was noticeable to the patients, and the immediacy of effects speaks to the likelihood of an effect due to the strong expectations of patients with intolerable depression having their hope taken away. Some of the immediate response could have been a nocebo response.

Helen Mayberg and colleagues’ invited commentary

The commentary attempted to discourage a pessimistic assessment of DBS based on the difficulties implementing the original plans for the study as described in the protocol.

A cynical reading of the study by Bergfeld et al1 might lead to the conclusion that the labor-intensive and expert-driven tuning of the DBS device required for treatment response makes this a nonviable clinical intervention for TRD. On the contrary, we see a tremendous opportunity to retrospectively characterize the various features that best define patients who responded well to this treatment. New studies could test these variables prospectively.

The substantial deviation from protocol that occurred after only two patients were entered into the trial was praised in terms of the authors’ “tenacious attempts to establish a stable response”:

We appreciate the reality of planning a protocol with seemingly conservative time points based on the initial patients, only to find these time points ultimately to be insufficient. The authors’ tenacious attempts to establish a stable response by extending the optimization period from the initial protocol using 3 to 6 months to a full year is commendable and provides critical information for future trials.

Maybe, but I think the need for this important change, along with the other difficulties encountered in implementing the study, speaks to a randomized controlled trial of DBS being premature.

Conflict of Interest Disclosures: Dr Mayberg has a paid consulting agreement with St Jude Medical Inc, which licensed her intellectual property to develop deep brain stimulation for the treatment of severe depression (US 2005/0033379A1). The terms of this agreement have been reviewed and approved by Emory University in accordance with their conflict of interest policies. No other disclosures were reported.

Helen Mayberg's declaration of interest clearly identifies her as someone who is not a detached observer, but who would benefit financially and professionally from any strengthening of the claims for the efficacy of DBS. We are alerted by this declaration, but I think there were some things not mentioned in the article or editorial about Helen Mayberg's work that would bear on her credibility even more if they were known.

Helen Mayberg’s anecdotes and statistics about the success of DBS

Mayberg has been attracting attention for over a decade with her contagious exuberance for DBS. A 2006 article in the New York Times by David Dobbs started with a compelling anecdote of one of Mayberg's patients being able to resume a normal life after previously ineffective treatments for severe depression. The story reported success with 8 of 12 patients treated with DBS:

They’ve re-engaged their families, resumed jobs and friendships, started businesses, taken up hobbies old and new, replanted dying gardens. They’ve regained the resilience that distinguishes the healthy from the depressed.

Director of NIMH Tom Insel chimed in:

“People often ask me about the significance of small first studies like this,” says Dr. Thomas Insel, who as director of the National Institute of Mental Health enjoys an unparalleled view of the discipline. “I usually tell them: ‘Don’t bother. We don’t know enough.’ But this is different. Here we know enough to say this is something significant. I really do believe this is the beginning of a new way of understanding depression.”

A 2015 press release from Emory University, Targeting depression with deep brain stimulation, gives another anecdote of a dramatic treatment success.

Okay, we know to be skeptical about university press releases, but then there are the dramatic anecdotes and even numbers in a news article in Science, Short-Circuiting Depression, that borders on an infomercial for Mayberg's work.


Since 2003, Mayberg and others have used DBS in area 25 to treat depression in more than 100 patients. Between 30% and 40% of patients do “extremely well”—getting married, going back to work, and reclaiming their lives, says Sidney Kennedy, a psychiatrist at Toronto General Hospital in Canada who is now running a DBS study sponsored by the medical device company St. Jude Medical. Another 30% show modest improvement but still experience residual depression. Between 20% and 25% do not experience any benefit, he says. People contemplating brain surgery might want better odds, but patients with extreme, relentless depression often feel they have little to lose. “For me, it was a last resort,” Patterson says.

By making minute adjustments in the positions of the electrodes, Mayberg says, her team has gradually raised its long-term response rates to 75% to 80% in 24 patients now being treated at Emory University.

A chronically depressed person, or someone who cares for someone who is depressed, might be motivated to go on the Internet and try to find more information about Mayberg's trial. A website for Mayberg's BROADEN (BROdmann Area 25 DEep brain Neuromodulation) study once provided a description of the study, answers to frequently asked questions, and an opportunity to register for screening. However, it is no longer accessible through Google or other search engines. You can reach an archived version through a link provided by Neurocritic, but the links within it are no longer functional.

Neurocritic’s blog posts about Mayberg and DBS

If you are lucky, a Google search for Mayberg deep brain stimulation might bring you to any of three blog posts by Neurocritic [1] [2] [3] that have rich links and provide a very different story of Mayberg and DBS.

One link takes you to the trial registration for Mayberg's BROADEN study: A Clinical Evaluation of Subcallosal Cingulate Gyrus Deep Brain Stimulation for Treatment-Resistant Depression. The updated registration indicates that the study will end in September 2017, and that the study is ongoing but not recruiting participants.

This information should have been updated, as should other publicity about Mayberg's BROADEN study. Namely, as Neurocritic documents, the company attempting to commercialize DBS by funding the study, St. Jude Medical, terminated it after futility analyses indicated that further enrollment of patients had only a 17% probability of achieving a significant effect. At the point the trial was terminated, 125 patients had been enrolled.

Neurocritic also provides a link to an excellent, open access review paper:

Morishita T, Fayad SM, Higuchi MA, Nestor KA, Foote KD. Deep brain stimulation for treatment-resistant depression: systematic review of clinical outcomes. Neurotherapeutics. 2014 Jul 1;11(3):475-84.

The article reveals that although there are 22 published studies of DBS for treatment-resistant depression, only three are randomized trials, one of which was completed with null results. Two, including Mayberg's BROADEN trial, were discontinued because futility analyses indicated that a finding of efficacy for the treatment was unlikely.

Finally, Neurocritic also provides a link to a Neurotech Business Report, Depressing Innovation:

The news that St. Jude Medical failed a futility analysis of its BROADEN trial of DBS for treatment of depression cast a pall over an otherwise upbeat attendance at the 2013 NANS meeting [see Conference Report, p7]. Once again, the industry is left to pick up the pieces as a promising new technology gets set back by what could be many years.

It’s too early to assess blame for this failure. It’s tempting to wonder if St. Jude management was too eager to commence this trial, since that has been a culprit in other trial failures. But there’s clearly more involved here, not least the complexity of specifying the precise brain circuits involved with major depression. Indeed, Helen Mayberg’s own thinking on DBS targeting has evolved over the years since the seminal paper she and colleague Andres Lozano published in Neuron in 2005, which implicated Cg25 as a lucrative target for depression. Mayberg now believes that neuronal tracts emanating from Cg25 toward medial frontal areas may be more relevant [NBR Nov13 p1]. Research that she, Cameron McIntyre, and others are conducting on probabilistic tractography to identify the patient-specific brain regions most relevant to the particular form of depression the patient is suffering from will likely prove to be very fruitful in the years ahead.

So, we have a heavily hyped, unproven treatment for which the only clinical trials have either been null or terminated following a futility analysis. Helen Mayberg, a patent holder associated with one of these trials, was an inappropriate choice to be recruited for commentary on another, more modestly sized trial that also ran into numerous difficulties suggesting it was premature. However, I find it outrageous that so little effort has been made to correct the record concerning her BROADEN trial, or even to acknowledge its closing in the JAMA Psychiatry commentary.

Untold numbers of depressed patients who don’t get expected benefits from available treatments are being misled with false hope from anecdotes and statistics from a trial that was ultimately terminated.

I find troubling what my exercise showed might happen when someone who is motivated by skepticism goes to the Internet and tries to get additional information about the JAMA Psychiatry paper. They could be careful to rely only on seemingly credible sources: a trial registration and a Science article. The Science article is not peer-reviewed, but nonetheless has the credibility conveyed by appearing in the premier and respected Science. The trial registration has not been updated with valuable information, and the Science article gives no indication of how it is contradicted by better-quality evidence. So, they would be misled.

More sciencey than the rest? The competitive edge of positive psychology coaching

Is positive psychology coaching better than what its competitors offer? Is positive psychology coaching the science-oriented brand, or does it just look sciencey? How do we judge?

In Mind the Brain, we have been showing that critical appraisal tools, like risk of bias assessment for studies evaluating interventions, and a vigilance for signs of confirmatory bias, p-hacking, and significance chasing, are crucial in interpreting often untrustworthy scientific claims. Yet these alone are not enough.

We have also been seeing the need to pay attention to the institutional context: how journals decide what is publishable, and how universities require that professors prove their worth by publishing lots of papers and tell them where they should publish.

We need to look at the incentives for individual researchers. Do they get rewarded for telling it like it is, publishing the fairest interpretation of all their studies, or rather for claiming breakthrough, newsworthy findings even when the data don't show that? We need to consider what is suppressed or radically distorted because of these powerful filtering processes. Or else we place our faith in the fairness and thoroughness of the peer review process: it must be good science because it got through peer review.

We can’t understand what passes for science in positive psychology unless we grasp the larger context of the positive psychology community, the multimillion dollar industry associated with positive psychology, and incentives that the community and its industry offer to those claiming to provide the science of positive psychology.

Shaping what passes for science are the needs of thousands of positive psychology coaches competitively marketing their services. These coaches are themselves a market for positive psychology "science," and they promote their "science-based" products and services to individual clients and corporations. At both levels, claims of being more sciencey than competitors who do not share the positive psychology brand become important.

A recent interview with a designated "positive psychology expert," Lisa Sansom, provides some fascinating insights into the sciencey branding of positive psychology coaching. Positive Psychology Coaching: 12 Urgent Questions Answered is available at PositivePsychologyProgram: Your One-Stop Positive Psychology Resource. At the outset, the interview promises to answer the questions:

What is positive psychology coaching? How does it differ from regular coaching? When can I call myself a positive psychologist?

And more. In this blog post, I’m going to probe this interview to understand the distinctiveness of the positive psychology brand of coaching and its implications for what passes for science and evidence in positive psychology.

We will encounter some tensions. Calling oneself a coach does not require any background in psychology or research methodology. Yet coaches claim they interpret and apply scientific findings, and promise that this makes their brand better than the rest.

If coaches don't have a background in psychology and the critical skills to interpret new findings, how are they going to do this? They depend on the eminence of those whom they consider scientists, not the actual evidence their research provides. Researchers may become gurus to an audience that cannot appreciate either whether the researchers' authoritative statements are faithful to their actual findings or whether any evidence is actually relevant to the pronouncements being made.

What a temptation! An audience that cannot tell the difference between reasonable and unreasonable interpretations of the evidence, but will pay more for interpretations that help them sell more of their products and services.

Being an authoritative source has rich rewards in terms of opportunities for lucrative trainings, corporate talks, and direct-to-consumer marketing of their "science." But success in this market rewards claiming even stronger findings than the spin and confirmatory bias required for publication already produce.

Positive psychology research comes out of social and personality psychology, which already has rampant problems with hype, hokum, and unreproducible findings. Do the temptations of the positive psychology market increase pressures on psychologists doing relevant research to produce simplistic but seemingly unambiguous answers? Think of having to match reporting of findings to the wonder, drama, and magic of advertisements for positive psychology products.

Positive psychology articles rarely if ever have declarations of conflict of interest. Yet we know investigators' financial stakes in obtaining particular outcomes lead to exaggerated and simply false claims. Do investigators seeking a share of the positive psychology market further contaminate troubled areas of personality and social psychology with undisclosed conflicts of interest? In other areas of social science, there is growing appreciation of the need for routine declarations of conflicts of interest. Some areas have seen dozens of errata and correction notices to articles that previously did not have declarations.

The interview quotes a chapter by Carol Kauffman, Ilona Boniwell, and Jordan Silberman in giving a definition of positive psychology coaching:

“Positive Psychology Coaching (PPC) is a scientifically-rooted approach to helping clients increase well-being, enhance and apply strengths, improve performance, and achieve valued goals. At the core of PPC is a belief in the power of science to elucidate the best.”

The interview keeps emphasizing that it is being rooted in science that distinguishes positive psychology coaching from its competitors.

So how does it differ from regular coaching?

On the surface, it might not look or feel much different to a client. However, what is different is that the PP coach continues his or her life-long learning in the field of positive psychology by staying engaged with the research, the literature, the researchers and other PP professionals.

The PP coach also adjusts his or her coaching techniques, methodologies, etc, accordingly when new findings are discovered. “Regular” coaches may not be as tied to the empirical evidence and research findings, and so their techniques and methodologies may change only as a function of their own experiences, or attending conferences where they learn from other coaches’ anecdotal experiences, or they may not change substantially at all.

And

Perhaps the one thing that is different, as I alluded to above, is that the PP coach also believes in staying close to the science and adjusting his or her approach (etc) accordingly. Coaches that are getting their PP from mass media books only are not getting the full richness and subtleties that are inherent in positive psychology research.

Yet no background in psychology is required to do this:

Overall, to be an effective PP coach or practitioner, one does not need a strong background in traditional psychology and one does not need to be a certified, qualified psychologist.

Even without a coach having a background in psychology,

the benefits to working with a PP coach who is well-trained and qualified are potentially that you will be drawing on a valid body of research (as opposed to just intuition and that individual’s personal coaching experience) and that your coach will know the why and wherefore of the practices, rather than just guessing that things might work for you.

Surfing around the PositivePsychologyProgram website, I encountered the free resource 27 of Positive Psychology's Most Fascinating Facts, which advertised:

To the point and easy to read (37 pages)

Written by academics, 100% science-based

More free PDF’s, Downloads, Videos…

Of course, I clicked on the "Yes, send me" button, and opened to

Fascinating Fact #4: Positive psychology interventions have the power to reduce depressive symptoms.

Sin and Lyubomirsky's meta-analysis is the single source cited. It is described as revealing that

positive psychology really does increase wellbeing and sooth depression. Furthermore, the status of depression, the age of the participants and the intervention all had an impact on the effectiveness of the interventions. Because of this, clinicians are strongly encouraged to begin incorporating positive psychology techniques into their work.

You can find the specifics of my evaluation of Sin and Lyubomirsky here. I used the same standards I would apply to any other meta-analysis. I found it to be substandard work:

Sin and Lyubomirsky provides a biased and seriously flawed assessment of positive psychology interventions. Uncritical citation of this paper suggests that subsequent authors are naïve, careless, or bent on presenting a positive evaluation of positive psychology interventions in defiance of the available evidence.

But on to

Fascinating Fact #6: The principles and practice of positive psychology are relevant to brain injury rehabilitation.

 Positive Psychology actually has the ability to foster posttraumatic growth, meaning it can make injury sufferers over-all happier (even more so than they were before). Positive psychology allows individuals to re-assess what is important in life, live more in the moment, identify what they are grateful for and to develop personal and intra­personal goals for recovery. All this makes individuals with brain injuries more appreciative of all aspects of life and allows them to return to their social and physical lives faster.

These are patently ridiculous claims. They leave me thinking that we should all put in our advance directives that if we ever suffer traumatic brain injury, we must be protected from positive psychologists and coaches trying to help us to grow from the experience. And just what the hell do these coaches think they are doing in caring for persons with traumatic brain injury?

In the context of a great debate about positive psychology in cancer care, Howard Tennen and I concluded

We are at a loss to explain why positive psychology investigators continue to endorse the flawed conceptualization and measurement of personal growth following adversity. Despite [Chris] Peterson’s warning that the credibility of positive psychology’s claim to science demands close attention to the evidence, post-traumatic growth—a construct that has now generated hundreds of articles—continues to be studied with flawed methods and a disregard for the evidence generated by psychological science.

More recently, Patricia Frazier, Howard Tennen, and I published a commentary on Jayawickreme and Blackie's updated Posttraumatic Growth as Positive Personality Change: Evidence, Controversies and Future Directions. We concluded that a lot of research had accumulated, but that it did not change our skeptical assessment. We suggested that a lot less, but better, research was needed.

Anyone who assumes that psychological science will produce a set of 27 fascinating proven facts ready for application in interventions seriously misunderstands both science and psychological interventions.

Just look at any other area of psychological interventions. Research does not produce fascinating facts, but tentative findings, graded in terms of strength of evidence. That evidence is likely to be limited in quality and quantity and will probably have to be modified with new findings.

Taking a larger overview, we can expect that psychological interventions that are credible and structured will have modest differences among themselves and modest advantages over interventions that are simply supportive and delivered with positive expectations. And psychological interventions are most reliably effective when they are delivered to persons who are sufficiently distressed to register benefit.

The large literature concerning psychological interventions will be very disappointing to anyone seeking ways to produce dramatic change with simple interventions. Anyone or anything that guarantees this should be treated with great skepticism.

Look at the personality and social psychology research from which the positive psychology community draws. Findings are not robustly durable. Newsworthy dramatic breakthroughs typically prove to be false positives or simply nonsense. The shelf life of spectacular claims is increasingly shortened by critics waiting to show the tricks by which such magic was produced.

The positive psychology community may be collectively engaging in wishful thinking, but it attracts and richly rewards those who promise to fulfill its great hunger and pressing marketing need for sciencey findings. And few in the community will notice the difference in what they get.

If the positive psychology community is serious about making a credible claim for the distinctiveness of their approach, I suggest that everybody drop the vague references to “science” and substitute “evidence-based.”

The "evidence-based" brand is subject to lots of abuse, but the label at least invites application of some well-specified principles for deciding the extent to which claims are indeed evidence-based, and grading of the evidence by noncontroversial, established criteria. And to keep a grounding in being evidence-based, interventions need to adhere to the procedures that were validated. This is not a matter of jumping from a correlational study with college students to claims of dramatic effects being achieved in everyday life, as so much of the positive psychology literature does. It is a matter of being faithful: having fidelity to the manualized procedures of the original study.

Or is all of this analysis for naught because the claims of positive psychology being more sciencey than the rest are just vapid advertising slogans, not to be taken seriously? Some researchers notably pitch their work to this waiting audience that lacks the critical skills to evaluate it. Should we treat their scholarship as less serious, or should we scrutinize it more for bias because of their undeclared conflicts of interest?

DISCLAIMER: I am grateful for PLOS blogs providing me the space for free expression. However, the views I present here are not necessarily those of PLOS nor of any of my institutional affiliations.

Busting foes of post-publication peer review of a psychotherapy study

As described in the last issue of Mind the Brain, peaceful post-publication peer reviewers (PPPRs) were ambushed by an author and an editor. They used the usual home-team advantage that journals have: they had the last word in an exchange that was not peer-reviewed.

As also promised, I will team up in this issue with Magneto to bust them.

Attacks on PPPRs threaten a desperately needed effort to clean up the integrity of the published literature.

The attacks are getting more common and sometimes vicious. Vague threats of legal action caused an open access journal to remove an article delivering fair and balanced criticism.

In a later issue of Mind the Brain, I will describe an incident in which authors of a published paper had uploaded their data set, but then modified it without notice after PPPRs used the data for re-analyses. The authors then used the modified data for new analyses and claimed the PPPRs were grossly mistaken. Fortunately, the PPPRs retained time-stamped copies of both data sets. You may like to think that such precautions are unnecessary, but just imagine what critics of PPPR would be saying if they had not saved this evidence.

Until journals get more supportive of post-publication peer review, we need repeated vigilante actions, striking from Twitter, Facebook pages, and blogs. Unless readers acquire basic critical appraisal skills and take the time to apply them, they will have to keep turning to social media for credible filters of all the crap that is flooding the scientific literature.

I’ve enlisted Magneto because he is a mutant. He does not have any extraordinary powers of critical appraisal. To the contrary, he unflinchingly applies what we should all acquire. As a mutant, he can apply his critical appraisal skills without the mental anguish and physiological damage that could beset humans appreciating just how bad the literature really is. He doesn’t need to maintain his faith in the scientific literature or the dubious assumption that what he is seeing is just a matter of repeat offender authors, editors, and journals making innocent mistakes.

Humans with critical appraisal skills risk demoralization and too often shirk the task of telling it like it is. Some who used their skills too often were devastated by what they found and fled academia. More than a few are now working in California in espresso bars and escort services.

Thank you, Magneto. And yes, I again apologize for having tipped off Jim Coan about our analyses of his spinning and statistical manipulation of his work to get newsworthy findings. Sure, it was an accomplishment to get a published apology and correction from him and Susan Johnson. I am so proud of Coan’s subsequent condemnation of me on Facebook as the Deepak Chopra of Skepticism that I will display it as an endorsement on my webpage. But it was unfortunate that PPPRs had to endure his nonsensical Negative Psychology rant, especially without readers knowing what precipitated it.

The following commentary on the exchange in Journal of Nervous and Mental Disease makes direct use of your critique. I have interspersed gratuitous insults generated by Literary Genius’ Shakespearean insult generator and Reocities’ Random Insult Generator.

How could I maintain the pretense of scholarly discourse when I am dealing with an author who repeatedly violates basic conventions like ensuring tables and figures correspond to what is claimed in the abstract? Or an arrogant editor who responds so nastily when his slipups are gently brought to his attention and won’t fix the mess he is presenting to his readership?

As a mere human, I needed all the help I could get in keeping my bearings amidst such overwhelming evidence of authorial and editorial ineptness. A little Shakespeare and Monty Python helped.

The statistical editor for this journal is a saucy full-gorged apple-john.


Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective

Domenic V. Cicchetti, PhD, quintessential biostatistician

Domenic V. Cicchetti, you may be, as your website claims,

 A psychological methodologist and research collaborator who has made numerous biostatistical contributions to the development of major clinical instruments in behavioral science and medicine, as well as the application of state-of-the-art techniques for assessing their psychometric properties.

But you must have been out of “the quintessential role of the research biostatistician” when you drafted your editorial. Please reread it. Anyone armed with an undergraduate education in psychology and Google Scholar can readily cut through your ridiculous pomposity, you undisciplined sliver of wild belly-button fluff.

You make it sound like the Internet PPPRs misunderstood Jacob Cohen’s designation of effect sizes as small, medium, and large. But if you read a much-accessed article that one of them wrote, you will find a clear exposition of the problems with these arbitrary distinctions. I know, it is in an open access journal, but what you say about it paying reviewers is sheer bollocks. Do you get paid by Journal of Nervous and Mental Disease? Why otherwise would you be a statistical editor for a journal with such low standards? Surely, someone who has made “numerous biostatistical contributions” has better things to do, thou dissembling swag-bellied pignut.

More importantly, you ignore that Jacob Cohen himself said:

The terms ‘small’, ‘medium’, and ‘large’ are relative . . . to each other . . . the definitions are arbitrary . . . these proposed conventions were set forth throughout with much diffidence, qualifications, and invitations not to employ them if possible.

Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988. p. 532.

Could it be any clearer, Dommie?
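Since the conventions keep getting invoked as if they carried authority, here is a minimal sketch in Python (all data hypothetical) of what they actually amount to: a threshold lookup bolted onto ordinary arithmetic, knowing nothing about the outcome measured or what a difference of that size would mean for patients.

```python
import statistics

def cohens_d(treated, control):
    """Between-group Cohen's d: mean difference over pooled SD."""
    pooled_var = (statistics.variance(treated) + statistics.variance(control)) / 2
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_var ** 0.5

def cohen_label(d):
    """Cohen's 1988 conventions, offered, in his words, 'with much diffidence'."""
    d = abs(d)
    if d < 0.2:
        return "below small"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical symptom scores from two arms of a trial.
treated = [42, 38, 45, 40, 37, 44, 39, 41]
control = [45, 43, 47, 44, 42, 48, 41, 46]

d = cohens_d(treated, control)
print(f"d = {d:.2f} ({cohen_label(d)})")
```

The label changes nothing about the data; it only invites the lazy reading Cohen explicitly warned against.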


You suggest that the internet PPPRs were disrespectful of Queen Mother Kraemer in not citing her work. Have you recently read it? Ask her yourself, but she seems quite upset about the practice of using effects generated from feasibility studies to estimate what would be obtained in an adequately powered randomized trial.

Pilot studies cannot estimate the effect size with sufficient accuracy to serve as a basis of decision making as to whether a subsequent study should or should not be funded or as a basis of power computation for that study.

Okay, you missed that, but how about:

A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study.
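Kraemer’s point is easy to demonstrate. Below is a minimal simulation sketch in Python (sample sizes are hypothetical, and I set the true effect to the blinded meta-analytic estimate of .17 that comes up later in this post): it runs thousands of small two-arm “pilots” against a fixed true effect and shows how wildly the observed effect size swings, then, for contrast, how large an adequately powered trial would actually need to be.

```python
import random
import statistics

random.seed(2006)

TRUE_D = 0.17    # hypothetical true effect, set to the blinded meta-analytic estimate
N_PER_ARM = 20   # a pilot-sized arm
N_SIMS = 2000    # number of simulated pilot studies

def one_pilot_d():
    """Simulate one two-arm pilot and return its observed Cohen's d."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_D, 1.0) for _ in range(N_PER_ARM)]
    pooled_var = (statistics.variance(control) + statistics.variance(treated)) / 2
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_var ** 0.5

ds = sorted(one_pilot_d() for _ in range(N_SIMS))
low, high = ds[int(0.025 * N_SIMS)], ds[int(0.975 * N_SIMS)]
print(f"true d = {TRUE_D}; middle 95% of pilot estimates: {low:.2f} to {high:.2f}")
# Roughly -0.45 to 0.80: the same intervention can look anywhere from
# harmful to "large" on the strength of a single pilot.

# For contrast, what an adequately powered trial needs
# (Lehr's rule of thumb: n per arm of about 16 / d^2 for 80% power, two-sided alpha = .05):
print(f"n per arm for 80% power at d = {TRUE_D}: {16 / TRUE_D ** 2:.0f}")  # about 554
```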

Dommie, although you never mention it, surely you must appreciate the difference between a within-group effect size and a between-group effect size.

  1. Interventions do not have meaningful effect sizes; between-group comparisons do.
  2. As I have previously pointed out:

 When you calculate a conventional between-group effect size, it takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So, you focus on what change went on in a particular therapy, relative to what occurred in patients who didn’t receive it.
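To make the distinction concrete, here is a minimal sketch in Python (all numbers hypothetical): both arms improve for nonspecific reasons, the intervention adds only a sliver of specific benefit, and the within-group “effect size” looks spectacular while the between-group effect size stays modest.

```python
import random
import statistics

random.seed(7)

# Hypothetical trial: BOTH arms improve (attention, regression to the mean,
# other nonspecific effects); the intervention adds only a small specific benefit.
N = 30
control_change = [random.gauss(5.0, 3.0) for _ in range(N)]  # improvement under routine care
treated_change = [random.gauss(5.5, 3.0) for _ in range(N)]  # slightly larger improvement

# Within-group "effect size": mean change over SD of change, treated arm alone.
# This is what uncontrolled case series report; it credits the therapy with
# everything that happened, nonspecific effects included.
within_d = statistics.mean(treated_change) / statistics.stdev(treated_change)

# Between-group effect size: difference in change between arms over pooled SD.
# Randomization lets this subtract out everything the two arms share.
pooled_var = (statistics.variance(treated_change) + statistics.variance(control_change)) / 2
between_d = (statistics.mean(treated_change) - statistics.mean(control_change)) / pooled_var ** 0.5

print(f"within-group d:  {within_d:.2f}")   # around 1.8, which looks spectacular
print(f"between-group d: {between_d:.2f}")  # expectation about 0.17; any single small run bounces widely
```

Only the between-group number isolates what the therapy itself added; the within-group number rewards whatever happened to everyone.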

Turkington recruited a small convenience sample of older patients from community care who averaged over 20 years of treatment. It is likely that they were not getting much support and attention anymore, whether or not they ever were. The intervention in Turkington’s study provided that attention. Maybe some or all of any effects were due simply to compensating for what was missing from inadequate routine care. So, aside from all the other problems, anything going on in Turkington’s study could have been nonspecific.

Recall that in promoting his idea that antidepressants are no better than acupuncture for depression, Irving Kirsch tried to pass off within-group effect sizes as equivalent to between-group effect sizes, despite repeated criticism. Similarly, long-term psychodynamic psychotherapists tried to use effect sizes from wretched case series for comparison with those obtained in well-conducted studies of other psychotherapies. Perhaps you should send such folks a call for papers so that they can find an outlet in Journal of Nervous and Mental Disease with you as a Special Editor in your quintessential role as biostatistician.

Douglas Turkington’s call for a debate

Professor Douglas Turkington: "The effect size that got away was this big."
Professor Douglas Turkington: “The effect size that got away was this big.”

Doug, as you requested, I sent you a link to my Google Scholar list of publications. But you still did not respond to my offer to come to Newcastle and debate you. Maybe you were not impressed. Nor did you respond to Keith Laws’ repeated requests to debate. Yet you insulted internet PPPR Tim Smits with the taunt,

[screenshot of the taunt]

You congealed accumulation of fresh cooking fat.

I recommend that you review the recording of the Maudsley debate. Note how the moderator Sir Robin Murray boldly announced at the beginning that the vote on the debate was rigged by your cronies.

Do you really think Laws and McKenna got their asses whipped? Then why didn’t you accept Laws’ offer to debate you at a British Psychological Society event, after he offered to pay your travel expenses?

High-Yield Cognitive Behavioral Techniques for Psychosis Delivered by Case Managers…

Dougie, we were alerted that bollocks would follow by the “high yield” of the title. Just what distinguishes this CBT approach from any other intervention to justify “high yield,” except your marketing effort? Certainly not the results you obtained from an earlier trial, which we will get to.

Where do I begin? Can you dispute what I said to Dommie about the folly of estimating effect sizes for an adequately powered randomized trial from a pathetically small feasibility study?

I know you were looking for a convenience sample, but how did you get from Newcastle, England to rural Ohio and recruit such an unrepresentative sample of 40-year-olds with 20 years of experience with mental health services? You don’t tell us much about them, not even a breakdown of their diagnoses. But would you really expect that the routine care they were currently receiving was even adequate? Sure, why wouldn’t you expect to improve upon it with your nurses? But what would you be demonstrating?


The PPPR boys from the internet made noise about Table 2, with passing reference to the totally nude Figure 5, and about how claims in the abstract had no apparent relationship to what was presented in the results section. And how nowhere did you provide means or standard deviations. But they did not get to Figure 2. Notice anything strange?

Despite what you claim in the abstract, none of the outcomes appear significant. Did you really mean standard errors of the mean (SEMs), not standard deviations (SDs)? The people to whom I showed the figure did not think so.


And I found this advice on the internet:

If you want to create persuasive propaganda:

If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEMs, and hope that your readers think they are SDs.

If your goal is to cover up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors.
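The arithmetic behind the trick is a single square root, as a minimal sketch (scores hypothetical) makes plain:

```python
import math
import statistics

# Hypothetical outcome scores for one arm of a trial (n = 12).
scores = [62, 55, 71, 48, 66, 59, 74, 52, 68, 57, 63, 49]

sd = statistics.stdev(scores)       # spread of the patients' scores themselves
sem = sd / math.sqrt(len(scores))   # spread of the mean: always sqrt(n) times smaller

print(f"SD  = {sd:.1f}")   # wide bars: even a large group difference looks unimpressive
print(f"SEM = {sem:.1f}")  # same data, bars about 3.5x narrower: trivial gaps look convincing
```

Unless a figure states which quantity its error bars show, and Figure 2 apparently does not, readers cannot tell a real effect from typographic sleight of hand.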

Why did you expect to be able to talk about effect sizes of the kind you claim you were seeking? The best meta-analysis suggests an effect size of only .17 with blind assessment of outcome. Did you expect that unblinding assessors would lead to that much more improvement? Oh yeah, you cited your own previous work in support:

That intervention improved overall symptoms, insight, and depression and had a significant benefit on negative symptoms at follow-up (Turkington et al., 2006).

Let’s look at Table 1 from Turkington et al., 2006.

A consistent spinning of results

[Table 1 from Turkington et al., 2006]

Don’t you just love those three-digit significance levels that allow us to see that p = .099 for overall symptoms meets the apparent criterion of p < .10 in this large sample? Clever, but it doesn’t work for depression, with p = .128. But you have a track record of being sloppy with tables. Maybe we should give you the benefit of the doubt and ignore the table.

But Dougie, this is not some social priming experiment with college students getting course credit. This is a study that took up the time of patients with serious mental disorders. You left some of them in the squalor of inadequate routine care after gaining their consent with the prospect that they might get more attention from nurses. And then, with great carelessness, you put the data into tables that had no relationship to the claims you were making in the abstract, or in your attempts to get more funding for more such ineptitude. If you drove your car like you write up clinical trials, you’d lose your license, if not go to jail.


The 2014 Lancet study of cognitive therapy for patients with psychosis

Forgive me that I missed, until Magneto reminded me, that you were an author on the, ah, controversial paper

Morrison, A. P., Turkington, D., Pyle, M., Spencer, H., Brabban, A., Dunn, G., … & Hutton, P. (2014). Cognitive therapy for people with schizophrenia spectrum disorders not taking antipsychotic drugs: a single-blind randomised controlled trial. The Lancet, 383(9926), 1395-1403.

But with more authors than patients remaining in the intervention group at follow-up, it is easy to lose track.

You and your co-authors made some wildly inaccurate claims about having shown that cognitive therapy was as effective as antipsychotics. Why, by the end of the trial, most of the patients remaining in follow-up were on antipsychotic medication. Is that how you obtained your effectiveness?

In our exchange of letters in The Lancet, you finally had to admit

We claimed the trial showed that cognitive therapy was safe and acceptable, not safe and effective.

Maybe you should similarly be retreating from your claims in the Journal of Nervous and Mental Disease article? Or just take refuge in the figures and tables being uninterpretable.

No wonder you don’t want to debate Keith Laws or me.


A retraction for High-Yield Cognitive Behavioral Techniques for Psychosis…?

The Turkington article meets the Committee on Publication Ethics (COPE) guidelines for an immediate retraction (http://publicationethics.org/files/retraction%20guidelines.pdf).

But neither a retraction nor even a formal expression of concern has appeared.

Maybe matters can be left as they now are. In social media, we can point to the article’s many problems like an out-of-order sign on a clogged toilet, warning that Journal of Nervous and Mental Disease is not a fit place to publish, unless you are seeking exceedingly inept or nonexistent editing and peer review.


Vigilantes can periodically tweet TripAdvisor-style warnings, like:

“Toilets still not working.”


Now, Dommie and Dougie, before you again set upon some PPPRs just trying to do their jobs for little respect or incentive, consider what happened this time.

Special thanks are due to Magneto, but Jim Coyne has sole responsibility for the final content. It does not necessarily represent the views of PLOS blogs or of any other individuals or entities, human or mutant.