Busting foes of post-publication peer review of a psychotherapy study

As described in the last issue of Mind the Brain, peaceful post-publication peer reviewers (PPPRs) were ambushed by an author and an editor. They used the usual home team advantages that journals have – they had the last word in an exchange that was not peer-reviewed.

As also promised, I will team up in this issue with Magneto to bust them.

Attacks on PPPRs threaten a desperately needed effort to clean up the integrity of the published literature.

The attacks are getting more common and sometimes vicious. Vague threats of legal action caused an open access journal to remove an article delivering fair and balanced criticism.

In a later issue of Mind the Brain, I will describe an incident in which authors of a published paper had uploaded their data set, but then modified it without notice after PPPRs used the data for re-analyses. The authors then used the modified data for new analyses and claimed the PPPRs were grossly mistaken. Fortunately, the PPPRs retained time-stamped copies of both data sets. You may like to think that such precautions are unnecessary, but just imagine what critics of PPPR would be saying if they had not saved this evidence.

Until journals get more supportive of post-publication peer review, we need repeated vigilante actions, striking from Twitter, Facebook pages, and blogs. Unless readers acquire basic critical appraisal skills and take the time to apply them, they will have to keep turning to social media for credible filters of all the crap that is flooding the scientific literature.

I've enlisted Magneto because he is a mutant. He does not have any extraordinary powers of critical appraisal. To the contrary, he unflinchingly applies what we should all acquire. As a mutant, he can apply his critical appraisal skills without the mental anguish and physiological damage that could beset humans appreciating just how bad the literature really is. He doesn't need to maintain his faith in the scientific literature or the dubious assumption that what he is seeing is just a matter of repeat offender authors, editors, and journals making innocent mistakes.

Humans with critical appraisal skills risk demoralization and too often shirk the task of telling it like it is. Some who used their skills too often were devastated by what they found and fled academia. More than a few are now working in California in espresso bars and escort services.

Thank you, Magneto. And yes, I again apologize for having tipped off Jim Coan about our analyses of his spinning and statistical manipulations of his work to get newsworthy findings. Sure, it was an accomplishment to get a published apology and correction from him and Susan Johnson. I am so proud of Coan's subsequent condemnation of me on Facebook as the Deepak Chopra of Skepticism that I will display it as an endorsement on my webpage. But it was unfortunate that PPPRs had to endure his nonsensical Negative Psychology rant, especially without readers knowing what precipitated it.

The following commentary on the exchange in Journal of Nervous and Mental Disease makes direct use of your critique. I have interspersed gratuitous insults generated by Literary Genius' Shakespearean insult generator and Reocities' Random Insult Generator.

How could I maintain the pretense of scholarly discourse when I am dealing with an author who repeatedly violates basic conventions like ensuring tables and figures correspond to what is claimed in the abstract? Or an arrogant editor who responds so nastily when his slipups are gently brought to his attention and won’t fix the mess he is presenting to his readership?

As a mere human, I needed all the help I could get in keeping my bearings amidst such overwhelming evidence of authorial and editorial ineptness. A little Shakespeare and Monty Python helped.

The statistical editor for this journal is a saucy full-gorged apple-john.


Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective

Domenic V. Cicchetti, PhD, quintessential biostatistician

Domenic V. Cicchetti, you may be, as your website claims

 A psychological methodologist and research collaborator who has made numerous biostatistical contributions to the development of major clinical instruments in behavioral science and medicine, as well as the application of state-of-the-art techniques for assessing their psychometric properties.

But you must have been out of “the quintessential role of the research biostatistician” when you drafted your editorial. Please reread it. Anyone armed with an undergraduate education in psychology and Google Scholar can readily cut through your ridiculous pomposity, you undisciplined sliver of wild belly-button fluff.

You make it sound like the Internet PPPRs misunderstood Jacob Cohen's designation of effect sizes as small, medium, and large. But if you read a much-accessed article that one of them wrote, you will find a clear exposition of the problems with these arbitrary distinctions. I know, it is in an open access journal, but what you say about it paying reviewers is sheer bollocks. Do you get paid by Journal of Nervous and Mental Disease? Why otherwise would you be a statistical editor for a journal with such low standards? Surely, someone who has made "numerous biostatistical contributions" has better things to do, thou dissembling swag-bellied pignut.

More importantly, you ignore that Jacob Cohen himself said

The terms ‘small’, ‘medium’, and ‘large’ are relative . . . to each other . . . the definitions are arbitrary . . . these proposed conventions were set forth throughout with much diffidence, qualifications, and invitations not to employ them if possible.

Cohen J. Statistical power analysis for the behavioral sciences. Second edition, 1988. Hillsdale, NJ: Lawrence Erlbaum Associates. p. 532.

Could it be any clearer, Dommie?


You suggest that the internet PPPRs were disrespectful of Queen Mother Kraemer in not citing her work. Have you recently read it? Ask her yourself, but she seems quite upset about the practice of using effects generated from feasibility studies to estimate what would be obtained in an adequately powered randomized trial.

Pilot studies cannot estimate the effect size with sufficient accuracy to serve as a basis of decision making as to whether a subsequent study should or should not be funded or as a basis of power computation for that study.

Okay, you missed that, but how about:

A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study.
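To make that imprecision concrete, here is a minimal sketch (my own toy calculation with hypothetical numbers, not anything from Turkington's data) of the 95% confidence interval around an effect size observed in a small two-arm pilot:

    import math

    def d_95ci(d, n1, n2):
        # Approximate large-sample standard error of Cohen's d (Hedges & Olkin)
        se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
        return d - 1.96 * se, d + 1.96 * se

    # Hypothetical pilot with 20 patients per arm and an observed d of 0.5:
    print(d_95ci(0.5, 20, 20))  # roughly (-0.13, 1.13)

An interval running from a small negative effect to a very large positive one is useless as a basis for powering a definitive trial, which is exactly Kraemer's point.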

Dommie, although you never mention it, surely you must appreciate the difference between a within-group effect size and a between-group effect size.

  1. Interventions do not have meaningful effect sizes; between-group comparisons do.
  2. As I have previously pointed out

 When you calculate a conventional between-group effect size, it takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So, you focus on what change went on in a particular therapy, relative to what occurred in patients who didn’t receive it.
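A toy illustration (hypothetical change scores, not Turkington's data) of how far apart the two calculations can land when the control condition also improves:

    import math
    import statistics

    # Hypothetical symptom change scores (baseline minus follow-up; higher = better)
    treatment = [8, 6, 9, 7, 5, 8, 6, 7]
    control = [5, 4, 6, 5, 3, 6, 4, 5]  # routine care improves too (nonspecific effects)

    def within_group_d(changes):
        # Change relative to its own variability; ignores the control group entirely
        return statistics.mean(changes) / statistics.stdev(changes)

    def between_group_d(tx, ctl):
        # Difference between arms over the pooled SD; this is what randomization
        # buys you, netting out placebo and other nonspecific effects
        n1, n2 = len(tx), len(ctl)
        pooled_sd = math.sqrt(((n1 - 1) * statistics.variance(tx) +
                               (n2 - 1) * statistics.variance(ctl)) / (n1 + n2 - 2))
        return (statistics.mean(tx) - statistics.mean(ctl)) / pooled_sd

    print(within_group_d(treatment))            # ~5.3: looks spectacular
    print(between_group_d(treatment, control))  # ~1.9: far smaller once control is subtracted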

Turkington recruited a small, convenience sample of older patients from community care who averaged over 20 years of treatment. It is likely that they were not getting much support and attention anymore, whether or not they ever were. The intervention in Turkington's study provided that attention. Maybe some or all of any effects were due simply to compensating for what was missing from inadequate routine care. So, aside from all the other problems, anything going on in Turkington's study could have been nonspecific.

Recall that in promoting his ideas that antidepressants are no better than acupuncture for depression, Irving Kirsch tried to pass off within-group as equivalent to between-group effect sizes, despite repeated criticisms. Similarly, long-term psychodynamic psychotherapists tried to use effect sizes from wretched case series for comparison with those obtained in well-conducted studies of other psychotherapies. Perhaps you should send such folks a call for papers so that they can find an outlet in Journal of Nervous and Mental Disease with you as a Special Editor in your quintessential role as biostatistician.

Douglas Turkington’s call for a debate

Professor Douglas Turkington: "The effect size that got away was this big."

Doug, as you requested, I sent you a link to my Google Scholar list of publications. But you still did not respond to my offer to come to Newcastle and debate you. Maybe you were not impressed. Nor did you respond to Keith Laws' repeated request to debate. Yet you insulted internet PPPR Tim Smits with the taunt,

[Screenshot of Turkington's taunt]

You congealed accumulation of fresh cooking fat.

I recommend that you review the recording of the Maudsley debate. Note how the moderator Sir Robin Murray boldly announced at the beginning that the vote on the debate was rigged by your cronies.

Do you really think Laws and McKenna got their asses whipped? Then why didn’t you accept Laws’ offer to debate you at a British Psychological Society event, after he offered to pay your travel expenses?

High-Yield Cognitive Behavioral Techniques for Psychosis Delivered by Case Managers…

Dougie, we were alerted that bollocks would follow with the "high yield" of the title. Just what distinguishes this CBT approach from any other intervention to justify "high yield" except your marketing effort? Certainly not the results you have obtained from an earlier trial, which we will get to.

Where do I begin? Can you dispute what I said to Dommie about the folly of estimating effect sizes for an adequately powered randomized trial from a pathetically small feasibility study?

I know you were looking for a convenience sample, but how did you get from Newcastle, England to rural Ohio and recruit such an unrepresentative sample of 40-year-olds with 20 years of experience with mental health services? You don't tell us much about them, not even a breakdown of their diagnoses. But would you really expect that the routine care they were currently receiving was even adequate? Sure, why wouldn't you expect to improve upon that with your nurses? But what would you be demonstrating?


The PPPR boys from the internet made noise about Table 2, made passing reference to the totally nude Figure 5, and noted how claims in the abstract had no apparent relationship to what was presented in the results section. And how nowhere did you provide means or standard deviations. But they did not get to Figure 2. Notice anything strange?

Despite what you claim in the abstract, none of the outcomes appear significant. Did you really mean standard errors of the mean (SEMs), not standard deviations (SDs)? The people to whom I showed the figure did not think so.

[Screenshot of a comment from Mike Miller]

And I found this advice on the internet:

If you want to create persuasive propaganda:

If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEM, and hope that your readers think they are SD.

If your goal is to cover up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors.
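To see how much mileage that choice buys, here is a minimal sketch with made-up scores (mine, not Turkington's). SEM shrinks with the square root of the sample size, so on the very same data the bars come out roughly a third as tall:

    import math
    import statistics

    scores = [22, 25, 19, 28, 24, 21, 26, 23, 20, 27]  # hypothetical outcome scores

    sd = statistics.stdev(scores)       # spread of individual patients: ~3.0
    sem = sd / math.sqrt(len(scores))   # precision of the mean: ~0.96

    print(f"SD = {sd:.2f}, SEM = {sem:.2f}")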

Why did you expect to be able to talk about effect sizes of the kind you claim you were seeking? The best meta-analysis suggests an effect size of only .17 with blind assessment of outcome. Did you expect that unblinding assessors would lead to that much more improvement? Oh yeah, you cited your own previous work in support:

That intervention improved overall symptoms, insight, and depression and had a significant benefit on negative symptoms at follow-up (Turkington et al., 2006).

Let’s look at Table 1 from Turkington et al., 2006.

A consistent spinning of results

[Table 1 from Turkington et al., 2006]

Don't you just love those three-digit significance levels that allow us to see that p = .099 for overall symptoms meets the apparent criterion of p < .10 in this large sample? Clever, but it doesn't work for depression with p = .128. But you have a track record of being sloppy with tables. Maybe we should give you the benefit of the doubt and ignore the table.

But Dougie, this is not some social priming experiment with college students getting course credit. This is a study that took up the time of patients with serious mental disorder. You left some of them in the squalor of inadequate routine care after gaining their consent with the prospect that they might get more attention from nurses. And then with great carelessness, you put the data into tables that had no relationship to the claims you were making in the abstract. Or in your attempts to get more funding for future such ineptitude. If you drove your car like you write up clinical trials, you’d lose your license, if not go to jail.


The 2014 Lancet study of cognitive therapy for patients with psychosis

Forgive me that I missed, until Magneto reminded me, that you were an author on the, ah, controversial paper

Morrison, A. P., Turkington, D., Pyle, M., Spencer, H., Brabban, A., Dunn, G., … & Hutton, P. (2014). Cognitive therapy for people with schizophrenia spectrum disorders not taking antipsychotic drugs: a single-blind randomised controlled trial. The Lancet, 383(9926), 1395-1403.

But with more authors than patients remaining in the intervention group at follow-up, it is easy to lose track.

You and your co-authors made some wildly inaccurate claims about having shown that cognitive therapy was as effective as antipsychotics. Why, by the end of the trial, most of the patients remaining in follow-up were on antipsychotic medication. Is that how you obtained your effectiveness?

In our exchange of letters in The Lancet, you finally had to admit

We claimed the trial showed that cognitive therapy was safe and acceptable, not safe and effective.

Maybe you should similarly be retreating from your claims in the Journal of Nervous and Mental Disease article? Or just take refuge in the figures and tables being uninterpretable.

No wonder you don’t want to debate Keith Laws or me.


A retraction for High-Yield Cognitive Behavioral Techniques for Psychosis…?

The Turkington article meets the Committee on Publication Ethics (COPE) guidelines for an immediate retraction (http://publicationethics.org/files/retraction%20guidelines.pdf).

But neither a retraction nor even a formal expression of concern has appeared.

Maybe matters can be left as they now are. In the social media, we can point to the many problems of the article like a clogged toilet warning that Journal of Nervous and Mental Disease is not a fit place to publish – unless you are seeking exceedingly inept or nonexistent editing and peer review.


Vigilantes can periodically tweet Tripadvisor style warnings, like

toilets still not working


Now, Dommie and Dougie, before you again set upon some PPPRs just trying to do their jobs for little respect or incentive, consider what happened this time.

Special thanks are due to Magneto, but Jim Coyne has sole responsibility for the final content. It does not necessarily represent the views of PLOS blogs or other individuals or entities, human or mutant.

Evolution and engineering of the megajournal – Interview with Pete Binfield

Image courtesy of PeerJ

Peter Binfield wrote a nice analysis of megajournals over at Creative Commons Aotearoa New Zealand, an organisation on which I serve. Megajournals are a recent phenomenon that has changed the face of scientific publishing.

I am an academic editor at PeerJ, as well as at PLOS ONE (PONE), "the" megajournal of the Public Library of Science. I entered PONE first as an author (submitted before PONE had begun publishing), and then joined as an Academic Editor under the rule of Pete Binfield. I saw PONE grow into the publishing giant it is today, feeling proud of being a small part of it. Not long ago, I saw Pete leave to join Jason Hoyt (formerly of Mendeley, another venture I had signed up to in its very early days) in search of their new adventure that would eventually become PeerJ. It wouldn't be long before I would become an academic editor and find myself, again, under Pete's rule. It has been about a year since that invitation, and Open Access Week gave me an opportunity to reflect on my experience.

Who is Pete Binfield?

PB: Before PLOS ONE I spent about 14 years in the subscription publishing world. I worked for Institute of Physics Publishing (doing books), then moved to Holland to work for Kluwer Academic Publishers for 8 years (and Kluwer then merged with Springer), and finally I moved to the US to work for SAGE Publications (the largest social science publisher). It was during my time at Kluwer and then SAGE that the Open Access movement was really taking off, and it quickly became apparent to me that this was the way the industry was (or at least should be!) going. I wanted to be at the leading edge of this movement, not looking in at it from outside, trying to play catch up, so when the opportunity came up to move to PLOS and run PLOS ONE, I jumped at it.

I am a biology teacher (broadly speaking), mainly in the medical school. As such, I can't escape talking about evolved and engineered systems. Animals' bodies are evolved – the changes in structure and function happen against a backdrop of conserved structures. You can't really understand "why" an organ looks the way it looks and works the way it does without thinking about what building blocks were available to start with. Engineers have it easier in a sense. They don't have a preset structure they need to hack to get the best they can; they can start from scratch. Building an artificial kidney that works on dry land has fewer constraints than evolving one from that of a water-dwelling ancestor. So if you are a journal, how do you go from print to online?

Building a journal from scratch, too, is not the same as evolving one. When PLOS came to life over a decade ago, it was able to invent its journals from scratch. And boy, did they do that well (and still do). They changed the nature of formal scientific communication and sent traditional publishers chasing their tails. Traditional publishers have been slow to adapt – trying to hack the 17th-century publishing model. When PLOS ONE was born it was unique, exploiting what PLOS had achieved so well as an Open Access online publication, but also seeking to change the rules of how papers were accepted. This, in the whole evolution analogy, was a structural change with a very large downstream effect.

PB: I think some of my prior colleagues might have thought that it was a strange transition – at SAGE I had been responsible for over 200 journal titles in a vibrant program, and now I was moving to PLOS to run a single title (PLOS ONE) in an organization that only had 7 titles. However, even at that time I could see the tremendous potential that PLOS ONE had and how it could bring about rapid change. It was the unique editorial criteria (peer-reviewing only for scientific validity); the innovative functionality; the potential for limitless growth; and the backing of a ‘mover and shaker’ organization which really excited me. I joined PLOS with the hope that we could make PLOS ONE the largest journal in the world, and to use that position to bring about real change in the industry – I think most people would agree we achieved that.

Until last year, you could pretty much put journals into two broad bags: those that were evolving from "print" standards and those that were evolving from "online" standards, which also included the 'megajournals' like PLOS ONE. Yet over 10 years after the launch of PLOS, and given the accelerated changes in "online" media, there was an opportunity for a fresh engineering approach.

PB: When I left, the journal was receiving about 3,000 submissions a month, and publishing around 2,000 – so to change anything about PLOS ONE was like trying to change the engines of a jet, in mid-flight. We had an amazingly successful and innovative product (and, to be clear, it still is) but it was increasingly difficult to introduce significant new innovations (such as new business models; new software; a new mindset).

In addition, Jason and I wanted to attempt an entirely new business model which would make the act of publishing significantly cheaper for the author. I think it would have been very hard for PLOS to attempt this within the PLOS ONE structure which, in many ways, was already supporting a lot of legacy systems and financial commitments.

When Jason approached me with the original idea for PeerJ it quickly became clear that by partnering together we would be able to do things that we wouldn’t have been able to achieve in our previous roles (he at Mendeley, and me at PLOS). By breaking out and starting something new, from scratch, it was possible to try to take the lessons we had both learned and move everything one or two steps forwards with an entirely new mindset and product suite. That is an exciting challenge of course, but already I think you can see that we are succeeding!

PeerJ had from the start a lot of what we (authors) were looking for. We had all been struggling for a while with knowing that the imperative to publish in Open Access was growing, either from personal motivation (as in my case) or because of funders' or institutional mandates. We were also struggling with the perceived cost of Open Access, especially within the traditional journals. There is too much at stake in individuals' careers not to choose carefully how we "brand" our articles, because we know too well that at some point or another someone will value our work more on the brand than on the quality, and that someone has the power to decide if we get hired, promoted, or granted tenure. PLOS ONE had two things in its favour: it was part of the already respected PLOS brand, and it was significantly cheaper than the other PLOS journals. Then, over a year ago, Pete and Jason came out of the closet with one of the best catch-phrases I've seen:

If we can set a goal to sequence the Human Genome for $99, then why shouldn’t we demand the same goal for the publication of research?

They had a full package: Pete’s credibility in the publishing industry, Jason’s insights on how to help readers and papers connect, and a cheap price, not just affordable, cheap. I bought my full membership out of my own pocket as soon as I could. I gave them my money because I had met and learned to trust both Pete’s and Jason’s insights and abilities.

PB: [The process from development to launch day] was very exciting, although clearly nail-biting! One of the things which was very important to us was to build our own submission, peer review and publication software entirely from scratch – something which many people thought would not be possible in a reasonable time frame. And yet our engineering team, recruited and led by Jason, were able to complete the entire product suite in just 6 months of development time. First we built the submission and peer review system, and as soon as submissions started moving through that system we switched to building the publication platform. Everything is hosted on the cloud and implemented using GitHub, and so we were able to keep our development infrastructure extremely 'light' and flexible.

But even that does not guarantee buy-in. Truth be told, even if PeerJ turned out to be just an interesting experiment, I think mine was money well spent. (All in the name of progress.) What tipped the balance for me was the addition of Tim O'Reilly to the mix. Here is someone who understands the web (heck, he popularised that famous Web 2.0 meme), publishing, and innovation. O'Reilly brought in what, from my point of view, was missing in the original mix and was crucial to attract authors: a sense of sustainability.

by @McDawg on twitter

PeerJ looked different to me in a unique way – while other journals screamed out "brand" or "papers", PeerJ was screaming out "authors". This may be a bias of mine, because of my perception of the founders or the life-membership model, but to me this was a different kind of journal. It wouldn't be long until I was invited to join the editorial board, and then got to see who my partners in crime would be.

PB: Simultaneously, we were building up the 'editorial' side of the journal. We started with a journal with no reputation, brand, or recognized name and managed to recruit an Editorial Board of over 800 world-class academics (including yourself, and 5 Nobel Laureates); we created the editorial criteria and detailed author guidelines; we defined a comprehensive subject taxonomy; we established ourselves with all the third-party services which support this infrastructure (such as CrossRef, CLOCKSS, COPE, OASPA etc.); we contracted with a production vendor and so on.

Everything was completed in perfect time, and worked flawlessly from the very start – it really is a testament to the talented staff we have and I think we have proven to other players that this approach is more than possible.

But to launch a journal you need articles, and you also need to make sure your system does not crash. Academic Editors were invited to submit manuscripts free of charge in exchange for participating in the beta testing. I had an article that was ready to submit, and since by now I had pretty much no funding, the free deal was worth any bug-reporting nuisance. I had been producing digital files for submission for ages and doing online submissions for long enough that I set a full day aside to go through the process (especially since this was a bug-reporting exercise). And then came the surprise. Yes, there were a few bugs, as expected, but the submission system was easier and more user-friendly than I had anticipated. (Remember when I said above that PeerJ screamed "authors"?) For the first time I experienced a submission system that was truly "user friendly".

PB: I am constantly amazed that you can start from nothing, and provided you have staff who know what they are doing, and that you have a model which people can get behind, then it is entirely possible to build a world-class publishing operation from a standing start and create something which can compete with, and beat out, the more established players. As a testament to this, we have been named one of the Top 10 “Educational Technology Innovators of 2013” by the Chronicle of Higher Education; and as the “Publishing Innovation of 2013” by the Association of Learned and Professional Scholarly Publishers.

Then came the reviews of the paper – and that is when I found the benefit of knowing who the reviewers were. Many times I encounter odd reviewers' comments that leave me reading puzzled, going "uh?". In this case, because I knew who the reviewer was, I could understand where they were coming from. It made the whole process a lot easier. Apparently, the myth that people won't review papers if their names are revealed is, well, a myth.

PB: One particularly pleasant surprise has been the community reaction to our 'optional open peer review'. At the time of writing, pretty much 100% of our authors are choosing to reproduce their peer-review history alongside their published articles (for example, every paper we are publishing in OA week is taking this option). We believe that making the peer review process as open as possible is one of the most important things that anyone can do to preserve the valuable comments of peer reviewers (time-consuming comments which are normally lost to the world) and to prove the rigour of their published work.

I am not alone in being satisfied as an author. Not too long ago, PeerJ did their first author survey. Even as an editor I was biting my nails to see the results; I can only imagine the stress and anticipation at PeerJ headquarters.

PB: Yes, we conducted our first author survey earlier this year and we were extremely pleased to learn, for example, that 92% of responding authors rated their overall PeerJ experience as either “one of the best publishing experiences I have ever had” (42%) or “a good experience” (49%). In addition, 86% of our authors reported that their time to first decision was either “extremely fast” (29%) or “fast” (57%). Any publisher, no matter how well resourced or established, would be proud to be able to report results like these!

Perhaps the biggest surprise was how engaged our authors were, and how much feedback they were willing to provide. We quite literally received reams of free text feedback which we are still going through – so be careful what you ask for!

I am not surprised at this – I myself provided quite a bit of feedback. Perhaps these comments from Pete emphasise the sense of community that some of us feel is the point of difference with PeerJ.

PB: We are creating a publishing operation, not a ‘facebook for scientists’, however with that said our membership model does mean that we tend to develop functionality which supports and engages our members at every touch point. So although it is early days, I think a real community is already starting to form and as a result you can start to see how our broader vision is taking shape.

Unlike most publishers (who have a very 'article centric' mentality), our membership model means that we are quite 'person centric'. Where a typical publisher might not know (or care) who the co-authors are on a paper, for us they are all Members, and need to be treated well or they will not come back or recommend us to their peers. With this mindset, you can see that we have an intimate knowledge of all the interactions (and who performed them) that happen on a paper. Therefore when you come to our site you can navigate through the contributions of an individual (for example, see the links that are building up at this profile) and see exactly how everyone has contributed to the community (through our system of 'Academic Contribution' points).

Another example of our tendency towards 'community building' is our newly launched Q&A functionality. With this functionality, anyone can ask a question (on a specific part of a specific article; on an entire article; or on any aspect of science that we cover) and anyone in the community can answer that question. People who ask or answer questions can be 'voted' up or down, and as a result we hope to build up a system of 'reputation recognition' in any given field. Again – this is a great way to build communities of practice, and the barrier to entry is very low.

Image courtesy of PeerJ

It is early days – this is new functionality and it will be some time before we can see if it takes off. PLOS ONE also offers commenting, but that seems to be an under-used feature. I cannot help but wonder whether the experience of PeerJ might be different because the relationship with authors and editors is also different. Will feeling that we, the authors (and not our articles), are the centre of attention make a difference?

PB: This is extremely important to us, so thank you for noticing! One of the mistakes that subscription publishers are making is that they have historically focussed on the librarian as the customer (causing them to develop features and functionalities focussed on those people) when in an Open Access world, the customer is the academic (in their roles as author, editor and reviewer). Open Access publishers are obviously much more attuned to the principle of the 'academic as customer', but even they are not as focussed on this aspect as we (with our Membership model) are.

It is very important that authors feel loved; that people receive prompt and effective responses to their queries; that we listen to complaints and react rapidly and so on. One way we are going to scale this is with more automation – for example, if we proactively inform people of the status of their manuscript then they don’t need to email us. On another level, publishing is still a ‘human’ business based on networks of interaction and trust, and so we need to remember that when we resource our organisation going forwards.

This is what I find exciting about PeerJ – there is a new attitude, if not a new concept, that seems to come through. I will not even try to count the number of email and Twitter exchanges that I have had with Pete and PeerJ staff (I would not be surprised if eyes roll at the other end as they see the "from" field in their email inbox). But they have always responded, with graceful and helpful emails. Whether they "love" me or not (as Pete says above) is irrelevant when one is treated with respect and due diligence. I can see similar interactions at least on Twitter – PeerJ responsive to suggestions and requests, and, at least from where I am standing, seemingly having innovation at the top of the list.

PB: I think that everyone at PeerJ came here (myself and Jason included) because we enjoy innovating and we aren’t afraid to try new things. Innovation is quite literally written into our corporate beliefs (“#1. Keep Innovating – We are developing a scholarly communication venue for the 21st Century. We are committed to improving scholarly communications in every way possible”) and so yes, it is part of our DNA and a core part of our competitive advantage.

I must admit, it wasn’t necessarily our intention to use twitter as our bug tracker (!), but it is definitely a very good way to get real time feedback on new features or functionality. Because of our flexible architecture, and ‘can do’ attitude, we can often fix or improve functionality in hours or days (compared to months or years at most other publishers who do not control their own software). For an example of this in action, check out this blog post from a satisfied ‘feature requestor’.

I want PeerJ to succeed not only because I like and admire the people involved with it but because it offers something different, including the PrePrint service to which I hope to contribute soon. So I had to ask Pete: how is the journal doing?

PB: Extremely well! But don’t forget that we are more than just a journal, we are actually a publishing ecosystem that aims to support authors throughout their publication cycles. PeerJ, the peer-reviewed journal has published 200 articles now, but we also have PeerJ PrePrints (our pre-print server) which has published over 80 articles. Considering we have only been publishing since February, this is a very strong output (90% of established journals don’t publish at this level). Meanwhile, our brand new Q&A functionality is already generating great engagement between readers and authors.

We have published a ton of great science, some of which has received over 20,000 views (!) already. We are getting first decisions back to authors in a median of 24 days, and we are going from submission to final publication (including revisions and production time) in just 51 days. Our institutional members, such as UC Berkeley, University of Cambridge, and Trinity, as well as our Editorial Board of >800 and our Advisory Board of 20, have kicked the tires and clearly support the model. We have saved the academic community almost $1m already, and we now have a significant cadre of members who are able to publish freely, for life, for no additional cost. Ever.

by @stephenjjohnson on twitter

I was thrilled when I got the invitation to become an academic editor at PeerJ, as I was when the offer came from PLOS ONE. I blog in this space primarily because it is part of PLOS; I am not sure I would have added that kind of stress for any other brand. PLOS has been and continues to be a key player in the Open Access movement, and I am proud to be one of their editors.

What the future of PeerJ might be, who knows. I will continue to support the venture because I believe it offers something of real value to science that is somewhat different from what we've had so far. Can't wait to see what else they will pull out of the hat.


Join PubMed’s Revolution in Post Publication Peer Review

At 11 AM on October 22, 2013, the embargo was lifted and so now it can be said: PubMed Commons has been implemented on a trial basis. It could change the process of peer assessment of scientific articles forever.

Some researchers can now comment on any article indexed at PubMed and read the comments of others. It is a closed and closely watched pilot test. Bugs may become apparent that will need to be fixed. And NIH could always pull the plug. But so many people have invested so much at this point and spent so much time thinking through all the pros and cons that this is hopefully unlikely.

The implementation could prove truly revolutionary. PubMed Commons is effectively taking post-publication peer review out of the hands of editors and putting control firmly in the hands of the consumers of the scientific literature—where it belongs.

PubMed Commons allows us to abandon a thoroughly antiquated and inadequate reliance on letters to the editor as a means of addressing the many shortcomings of pre-publication peer review.

PubMed Commons is

  • A forum for open and constructive criticism and discussion of scientific issues.
  • That will thrive with high quality interchange from the scientific community.

You can read more about the fascinating history of PubMed here. PubMed is a free database of references and abstracts from life sciences and biomedical journals. It primarily draws on the MEDLINE database and is maintained by the US National Library of Medicine (NLM). For 16 years ending in 1997, MEDLINE had to be accessed primarily through institutional facilities like university libraries. That excluded many who draw on PubMed today from using it.

But then in 1997, in a revolutionary move similar to the launching of PubMed Commons, PubMed made its electronic bibliographic resources free to the public. Everyone was quite nervous at the time about being shut down. Lawyers of the for-profit publishers predictably descended on NIH to try to block the free access to abstracts, arguing, among a number of other things, copyright infringement that cut into their ability to make money. But fortunately NIH held its ground, and Vice President Al Gore demonstrated PubMed's capacity in a public ceremony.

So, in the first revolutionary move, the for-profit journals lost their control over access to abstracts. In the second move, they are losing control over post publication commentary on articles– unless they succeed in squashing PubMed Commons.

Who can participate in PubMed Commons at this time?

  • Recipients of NIH (US) or Wellcome Trust (UK) grants can go to the NCBI website and register. You need a MyNCBI account, but they are available to the general public.
  • If you are not a NIH or Wellcome Trust grant recipient, you are still eligible to participate if you are listed as an author on any publication listed in PubMed, even a letter to the editor. But you will need to be invited by somebody already signed up for participation in PubMed Commons. So, if you have a qualifying publication, you can simply get a colleague with the grant to sign up and then invite you.

Inadequacies of letters to the editor as post-publication commentary

Up until now, the main option for post publication commentary has been in later reviews of the literature, although there was a misplaced confidence in the more immediate letters to the editor.

I regret my blog post last year recommending writing conventional letters to the editor. Letters remain better than journal clubs for junior investigators eager to develop critical appraisal skills. But it could be a waste of time to send the letters off because letters are simply not effective contributions to post-publication commentary. Letters never worked reliably well, and for a number of reasons, they are now obsolete.

In the not-so-good-old days of exclusively print journals, there was a rationale for these journals putting limits on letters to the editor.

  • With delays in availability due to the scheduling of print journals, letters to the editor were seldom available in a timely fashion. Readers usually would have long forgotten the article being critiqued when the letter finally came out.
  • With limits on the number of pages allowed per issue, letters to the editor consumed a scarce resource without contributing to the impact factor of the journal. So, journals typically had strict restrictions on the length of letters (usually 400 to 800 words) and a tight deadline for submitting a letter after publication of the print article, usually three months or less.

Editorial review of letters to the editor has seldom been fair.

  • There is a prejudice against accepting anything but the most vitally relevant commentary. Yet editors are averse to accepting critical letters that reflect badly on their own review processes. Get past the significance criterion and you still risk offending the editor's sense of privilege.
  • While letters to the editor are subject to peer review, responses from authors generally are not. Authors are free to dismiss or distort any criticism of their work, sometimes with the most absurd of statements going unchallenged.
  • Electronic bibliographic resources have become the principal means of accessing articles, but often no links are provided between a letter to the editor and the target article. So, even if the credibility of the published article is thoroughly demolished in a letter to the editor, readers accessing that article through an electronic bibliographic source are not informed.
  • Many journals allow authors to veto publication of any criticism of their work, but the journals do not state this in their instructions to authors. You can submit a letter to the editor, only to have it rejected because the author objects to what you said. But you are told nothing except that your letter is rejected.
  • Many journals allow authors of the target articles the last word in responding to critical letters. Publishing a single letter and a response typically completes discussion of a target article. And the letter writer never gets to see the author’s response until after it is published. So, you can put incredible effort into carefully expressing your points within the limits of 400 to 800 words, only to be made to look ridiculous with mischaracterizations you can do nothing about.

Letters to the editor are thus usually untimely, overly short, and inadequately referenced. And they elicit inadequate and even hostile responses from authors, but are generally ignored by everybody else.

Letters to the editor are seldom cited and this is just one reflection of their failing to play a major role in moving the scientific discussion forward.

The advent of web-based publishing made restrictions on letters to the editor less justifiable. Once a basic structure for processing and posting letters to the editor is set up, processing and posting cost little.

Print journals can reduce costs by maintaining a separate web-based place for letters to the editor, but restrictions on length and time to respond have nonetheless continued, even if their economic justification has been lost.

BMJ Rapid Responses provides an exceptional model for post-publication peer commentary. BMJ accepts electronic responses that can be accessed by readers within 72 hours, as long as the responses are not grossly irrelevant or libelous. Readers can register "likes" of Rapid Responses, and threads of comments often develop. Then, a few comments are selected each week for editing and publication in the print edition. Unfortunately, the rapid responses that remain only electronic are not indexed at PubMed and can only be found by going to the BMJ website, which is behind a paywall for most articles.

Other journals are scrambling to copy and improve upon the BMJ model, but it can take some serious modification of software and that takes time. "Like turning the Titanic around," an editor of one of the largest open access journals told me.

Until such options become widely available, a reluctance to write letters to the editor remains thoroughly justifiable. Few letters will be submitted and fewer will be published or result in a genuine scientific exchange. And the goal remains elusive of readily accessible, indexed, citable letters to the editor and comments for which writers can gain academic credit.

PLOS co-founder Michael Eisen. Photo by Andy Reynolds from Mother Jones.

That is, unless PubMed Commons catches on. It provides the potential of realizing PLOS co-founder and disruptive innovator Michael Eisen's goal of continuous peer assessment and reassessment, not stopping with two or three people making an unreliable, but largely irreversible, judgment that something should have been published and should eternally be accepted as peer-reviewed.

PubMed Commons is only a rung on the ladder towards overthrowing the now firm line between publication and peer assessment. It's not a place to stop, but an important step. Please join in and help make it work. If you've ever published an article listed in PubMed, find a way to get invited. If you are not ready to post your own comments, lurk, offer others encouragement with "likes", and then, when the spirit moves you, jump in!

I expect that someday soon you'll be able to say to more junior colleagues, "I was active in the field when authors could prevent you from commenting on their work and editors could prevent you from embarrassing them with demonstrations of the stupidity of some of their decisions." And your junior colleagues can respond, "Wow, was that in the days before email? Just how did you participate in the dialogue that is at the heart of scientific communication back then? Did you have to get up and challenge speakers at conferences?"