Gold Open Access Journals: From scientists’ “publish or perish” to publishers’ “publish to get rich”


I’ve lost track of how many papers I have published open access, how many manuscripts I have reviewed for open access journals, and how many times I have recommended the advantages of publishing open access to participants in my writing workshops. But of late, I’ve been sensitized to how privileged I’ve been never to have had to pay publication fees, thanks to grant funding, the support of a well-resourced university, or a waiver. I have become worried about the contribution of open access publishing to gross inequalities in who gets to publish in quality open access journals without having to pay out of their own pockets. We need to work for a different model.

Gold Open Access publishing is not sustainable.

Gold OA is a business model under which scholarly publications are available free to anyone with an Internet connection, financed by Article Processing Charges (APCs) paid by or on behalf of authors rather than by journal subscriptions.

In the last edition of Mind the Brain, No Author Left Behind, I raised issues concerning the many authors who cannot receive waivers or affordable discounts on their article processing charges (APCs), at least not from the kind of quality open access journals that would allow them to get the credit their work deserves and reach the audiences they should reach.

I ended up questioning whether Gold Open Access is a suitable model for ensuring that authors, as well as readers, benefit from the accelerating pace at which open access publication is being implemented, and even mandated, in some settings.

I follow up the last edition with a guest blog from Professor Ferran Martinez-Garcia, a senior Spanish cell biologist who has witnessed the rapid transition from conventional bound-volume subscription journals to open access under a variety of business models.

Professor Martinez-Garcia also expresses concerns about the sustainability of gold open access and proposes an alternative solution: scientific organizations or scholarly societies stepping in to finance free or low-fee open access publishing. I think this is part of the answer to sustainable open access publishing, if not the whole of it. He makes many astute observations in a well-written, thoughtful article.

Guest Author: Ferran Martinez-Garcia is  Professor of Cell Biology and Histology and head of the Lab of Functional Neuroanatomy (NeuroFun) at Universitat Jaume I.

Special thanks to Mapping Ignorance* for permission to reprint this article. Mapping Ignorance is an initiative of the Chair of Scientific Culture of the University of the Basque Country under the Project Campus of International Excellence – Euskampus.

I’m a man slowly sliding into old age. Being a scientist (a simple science worker), this means that for decades I’ve been familiar with the uncomfortable feeling of struggling to adapt to constant, rapid change in everything. At the very beginning of my career, still an undergrad, I joined a lab where my first duty was to leaf through the weekly issue of Current Contents® Science Edition, in which the Professor had marked some papers according to his interests. I had to write postcards to the corresponding authors of these papers to request a reprint. With luck, a couple of weeks later we (the Professor and, in a way, myself) received a large brown envelope containing an original reprint of the paper. Reprint is an old-fashioned term for a high-quality printed copy of a paper, separated from the rest of the issue of the journal (in Spanish we used to call this a “separata”).

In those old times, the early 1980s, Spain was still a developing country and our university libraries subscribed to very few journals of interest to us. We regularly visited several libraries to get Xerox copies of the few papers available, but we had to request reprints of many papers directly from their authors. A lot of them. When I started my PhD I was already requesting reprints myself, and in about a decade I amassed a collection of nearly 5,000 reprints. Now I don’t know what to do with all that stuff. I will probably destroy it and recycle many kilograms of paper. If I ever need one of those old papers, I’m sure I’ll be able to find it on the journal’s webpage (some journals are scanning and uploading papers from the pre-PDF era; may the nonexistent God bless them!). Alternatively (I confess it), I will look for it on Sci-Hub (a Nobel Peace Prize awaits Alexandra Elbakyan; here is my proposal).

I’m not prone to longing for the past. The old times were definitely not good times. During the early 1990s the Web grew up, and the first scientific journals started composing PDF files of their papers and launched electronic subscriptions. I immediately understood that this was the beginning of a new, fantastic era. The libraries of several public universities in Spain (including mine) formed a consortium and negotiated agreements with Elsevier, Springer, Nature… And, suddenly, we had free online access to thousands and thousands of interesting papers. In the beginning, I printed out the papers I was interested in and added them to my old-fashioned reprint collection. But soon I realised how stupid I was being.

I first heard the term “open access” at the end of the 1990s. The idea looked quite utopian and even revolutionary: scientific papers available through the web to everyone, for free. This would allow free access to scientific information even for labs in developing countries with little funding (I was very sensitive to that; you may understand why). The trade-off was that someone had to pay for the system to be sustainable. And we, the scientists, were the chosen ones, thus leading to another new concept: the publication fee. Once your paper is accepted, after a hard peer-review process, you receive an invoice that you have to pay if you want your paper published open access. By this time I had become a senior PI and I understood what all this meant: I had to get money not only for salaries, equipment, reagents, glassware, registration and attendance at meetings… but also for publication fees. In the ensuing years, new Open Access journals1 appeared and were very successful. Their Impact Factors rose and they became Q1 in the JCR (the journals where it is worth publishing), to the detriment of the old, traditional journals, which mostly became Q2 (where you prefer not to publish if you want to get projects and promotions). Frontiers, BMC, PLOS and so on became the target journals for many scientists.

I played the game as soon as I had money. In 2011, I started in a big way: I edited a special topic issue for Frontiers in Neuroanatomy, in which one of the papers was by our group. At that time, the publication fee for Frontiers journals was 900 euros, but I received a discount for being an associate editor and my invoice came to 750 euros. I found it quite expensive, but it was worth it. I kept publishing in OA journals while still trying to publish in high-IF traditional paper journals (I couldn’t afford to publish only OA; indeed, I still can’t). And I received dozens of papers to review from many different journals, most of them also OA. I kept playing the game and did my job again and again.

Last year (2017) I received an invoice from Frontiers for the publication fee of another paper by our group. The retail price index had clearly gone up: the invoice amounted to 2,116.50 USD. I suspect this had to do with a press release that appeared in February 2013: “Nature Publishing Group and Frontiers form alliance to further open science”2. And I realised that Open Access publishing had become a big, a huge, business. A business with high profits made on scientists’ work, our work. We look for funds (mainly from public funding agencies), we do the research, we write the papers, we work for free in the peer-review process and, finally, we pay ultra-expensive publication fees. All for the high profit of private publishing companies.

That Open Access journals are a big business is quite evident, in spite of some very respectable journals claiming the contrary3. An interesting paper on the history and nature of OA, published some time ago in one of the leading OA journals (PLOS One)4, closes its Introduction with a straightforward sentence: “Open Access is a new technology-enabled business model, which is gaining increasing acceptance”. Crystal clear.

An OA journal usually has an attractive, indexed webpage with all the information about the journal, where the published papers are directly available to everyone. There is an editor-in-chief and a small crew that run the journal, plus many associate editors. The journal needs a good submission platform. When an OA journal is running stably, once a manuscript is submitted, the editor or associate editors assign reviewers and a bot starts sending messages and reminders to the associate editors, reviewers and authors to pressure them to do their jobs on time: referees should send their reviews, authors should respond to the queries. The editors or associate editors oversee this process, which repeats again and again until the editors make a final decision and the paper is either accepted (commonly) or rejected. Therefore, the associate editors (scientists who usually work for free), the referees (scientists who work hard but are not paid at all) and the authors (scientists who work very hard and pay a lot of money) do most of the job using the submission platform, with the annoying help of the insistent bot.

I’ve done my calculations. An OA journal publishing 100 papers per year (about 2 papers per week) has an annual direct income of about €175,000–200,000 (in some cases even €300,000). Most OA journals belong to groups that publish several journals focused on different aspects of a given branch of science. This way, the group and its team may run 10–20 or more journals, thus reducing costs and greatly increasing profits5. Since OA ensures general access to all published papers, the impact factor of OA journals rises, and this boosts the interest of scientists, always looking for Q1 journals, in publishing their work in these OA journals. In addition, once a journal is running at a regular pace, its production costs are relatively stable, so it earns more if it publishes more papers. The strategy for achieving this is to publish special issues on very specific subjects, provided there is a scientist willing to act as guest or associate editor (in effect doing the editor-in-chief’s job for that special issue). That’s why we receive daily spam from different publishing companies offering their journals for special issues, scientific meeting proceedings, and so on. The more papers an OA journal publishes, the higher the profits. From scientists’ “publish or perish” to publishers’ “publish to get rich”.
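The arithmetic is easy to check. Here is a minimal sketch in Python; the per-article fees are assumptions chosen only to bracket the range quoted above, not any journal’s actual price list:

```python
# Back-of-the-envelope check of the income figures quoted above.
# The APC values are illustrative assumptions, not actual price lists.

def annual_apc_income(papers_per_year: int, apc_eur: float) -> float:
    """Direct income from article processing charges alone."""
    return papers_per_year * apc_eur

for apc in (1750, 2000, 3000):  # assumed per-article fee in euros
    income = annual_apc_income(100, apc)
    print(f"100 papers/year at {apc} EUR each -> {income:,.0f} EUR/year")
# 100 papers/year at 1750 EUR each -> 175,000 EUR/year
# 100 papers/year at 2000 EUR each -> 200,000 EUR/year
# 100 papers/year at 3000 EUR each -> 300,000 EUR/year
```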

This situation is clearly not sustainable. Scientists, the workers of science, are under pressure from multiple directions. On the one hand, we need to publish (publish or perish is still a valid leitmotif for us). But now we need to publish in Q1 journals if we want to be promoted and to get funds to keep doing research. And we should do this not just for ourselves (I’m at the end of my career; I cannot be promoted beyond full professor with six sexenios6), but especially to keep our labs alive for the future of our people, PhD students and junior associate professors. And now we are also pressured to publish open access. This is promoted and, indeed, required by the national 2011 “Ley de la Ciencia, la Tecnología y la Innovación” (Science, Technology and Innovation Act) if your research has been produced with public funding. And Spain is not an isolated case; this is happening everywhere. The open access philosophy apparently promotes a democratic, solidary and transparent science system, so governments and public funding agencies are demanding that researchers commit to publishing OA as a sine qua non requisite just to apply for funds. We honour this commitment thanks to Green Open Access, publishing our preprints in the public, free-access repositories of our institutions. I wonder why we don’t skip the journal and just publish our manuscripts in the repository without journal submission and peer review. I sincerely think that the quality of my papers would be more or less the same (I’m a perfectionist and know how to do my job after 30 years of experience), and the publication time would be substantially reduced7.

Meanwhile, a few private publishing companies are rubbing their hands in glee at such a succulent prospect of future profits. And we, the scientists, are watching an important part of our budgets go to those companies instead of nourishing our labs. In addition, we are working hard for the benefit of these companies by doing research, writing papers for their journals and reviewing manuscripts for them.

Some governments and funding agencies have negotiated to pay OA journals directly for the publication fees of papers authored by the researchers they fund, a measure taken to guarantee open access science (this is not yet the case for Spain’s public funding agencies). If you are funded by one of those agencies, once you have a paper accepted in an OA journal you can indicate your funding agency and be spared the publication costs (or at least part of them); the agency pays them for you (I suppose these agencies are charged advantageous fees). This might give you the false impression that you are saving money, since you don’t have to dedicate part of your budget to publication fees in OA journals. But obviously the agencies have to include these expenses in their budgets, thus necessarily reducing direct funding to researchers. There is no trick.
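A toy illustration of that “no trick” point, with invented figures:

```python
# Toy illustration: whether APCs are paid from a lab's grant or paid
# centrally by the agency, the total available for research shrinks
# by the same amount. All figures are invented.
agency_budget = 1_000_000          # euros available for a funding call
papers_funded = 100                # papers expected from funded projects
apc = 2_000                        # assumed fee per paper

# Option A: researchers pay APCs out of their grants.
direct_to_labs_a = agency_budget
research_money_a = direct_to_labs_a - papers_funded * apc

# Option B: the agency pays APCs centrally.
direct_to_labs_b = agency_budget - papers_funded * apc
research_money_b = direct_to_labs_b

print(research_money_a == research_money_b)  # True: there is no trick
```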

In this context, I’m not surprised by the recent news8 of a disagreement between the Bibsam Consortium (a Swedish governmental agency) and one of the main (oligopolistic) multinational publishing companies, Elsevier. In the words of Astrid Söderbergh Widding, President of Stockholm University, Chairman of the Bibsam consortium steering committee and Head of the negotiation team:

Increasing costs of scientific information are straining university budgets on a global scale while publishers operate on high-profit margins. An alternative to the current publishing and pricing model is ‘open access,’ where institutions pay to publish their articles and the articles become open for everyone to read, immediately upon publication. We need to monitor the total cost of publication as we see a tendency towards a rapid increase of costs for both reading and publishing. The current system for scholarly communication must change and our only option is to cancel deals when they don’t meet our demands for a sustainable transition to open access.

As a consequence of this, the Bibsam Consortium has, after 20 years, decided not to renew the agreement with the scientific publisher Elsevier, as the publisher was not able to present a model that met the demands of the Consortium.

This is the problem. Is there a solution? I think the answer is YES. Scientists logically have a leading role in scientific publication, and the solution to this unbearable situation is in our hands. We cannot keep working for the benefit of private companies. Moreover, the measures governments and funding agencies have designed to promote open access (forcing researchers to publish in OA journals; reaching multi-million-euro agreements with oligopolistic publishing companies) have failed because they were inadequate. The solution is for science workers (scientists) to once again lead and control the publication of our results. Scientific societies, national and international, were the promoters of the classical journals. For instance, in my field, neuroscience, the International Brain Research Organisation established Neuroscience as its official journal. The journal is currently published by Elsevier. A couple of years ago, IBRO announced the launch of a new OA journal, IBRO Reports. Guess who’s running it: Elsevier. The Federation of European Neuroscience Societies, FENS, also has an official journal, the European Journal of Neuroscience, published in association with another private publisher, Wiley-Blackwell. The official journal of the Society for Neuroscience is the Journal of Neuroscience. And Behavioral Neuroscience is published directly by the American Psychological Association (APA). It seems that American scientific societies are doing their job, whereas European ones are neglectful and prefer to rely on private publishers. This is harmful to their researchers and to the branch of science they are supposed to defend and promote. A change in their policy is urgently needed.

And here is a solution to the problems I have discussed above. Scientific societies, both European and American, must start running open access journals themselves. They could apply sensible publication fees, lower than 1,000 euros or dollars, and give special discounts to researchers who act as reviewers for the journal. And they might, even then, make moderate profits that would help the corresponding society promote its scientific or academic speciality. For their part, funding agencies could subsidize scientific societies that apply these OA policies, to boost the growth of fair OA journals, instead of paying astronomical amounts to OA journals for the sole benefit of private, oligopolistic publishing companies.

This is my proposal. We, science workers, should get rid of private publishers and take a step forward to control our own publication systems.

4 Laakso M, Welling P, Bukvova H, Nyman L, Björk BC, Hedlund T. The development of open access journal publishing from 1993 to 2009. PLoS One. 2011;6(6):e20961. doi: 10.1371/journal.pone.0020961.

5 The Frontiers in Neuroscience journal series, with 34 different journals, is a successful example of such a business. Frontiers also runs series of journals on other science topics, “500 academic specialities” according to their marketing campaign.

 

6 In Spain, professors and researchers undergo an evaluation of their research quality every six years. A positive evaluation, a sexenio (literally, a six-year period), is rewarded with a salary supplement. In addition, sexenios are key for professional promotion.

 

7 Santiago Ramón y Cajal had so many results to publish that he decided to save time and effort by founding his own journal. It was there that most of his work was published. I’m not comparing my ridiculum vitae with Cajal’s magnificent work, but I often think of his solution to the shortage of time.

 

8 https://openaccess.blogg.kb.se/2018/05/16/sweden-stands-up-for-open-access-cancels-agreement-with-elsevier/

*Special Note of Thanks.

Reprinted with permission from Mapping Ignorance, an initiative of the Chair of Scientific Culture of the University of the Basque Country under the Project Campus of International Excellence – Euskampus.

Do check out the other posts at Mapping Ignorance. The About page of the blog states:

Every time we make a new scientific discovery we sense where the limit of knowledge is, we feel where ignorance begins. Science is, for certain, what we think we know, but more precisely, it is being aware of the boundaries of the unknown.

In this blog we try to translate cutting edge scientific research into an educated lay-person language; consequently, as we do this, we will be Mapping Ignorance. Our goal is very simple: to spread both the latest developments in science and technology and a scientific worldview facilitating the access to it. To achieve this Mapping Ignorance is written by specialists in each field of expertise coordinated by a dedicated editor; the aim of them all is to make sometimes abstruse but otherwise wonderful scientific and technical information enjoyable by the interested general reader.

Busting foes of post-publication peer review of a psychotherapy study

As described in the last issue of Mind the Brain, peaceful post-publication peer reviewers (PPPRs) were ambushed by an author and an editor. They used the usual home team advantages that journals have – they had the last word in an exchange that was not peer-reviewed.

As also promised, I will team up in this issue with Magneto to bust them.

Attacks on PPPRs threaten a desperately needed effort to clean up the integrity of the published literature.

The attacks are getting more common and sometimes vicious. Vague threats of legal action caused an open access journal to remove an article delivering fair and balanced criticism.

In a later issue of Mind the Brain, I will describe an incident in which the authors of a published paper had uploaded their data set but then modified it without notice after PPPRs used the data for re-analyses. The authors then used the modified data for new analyses and claimed the PPPRs were grossly mistaken. Fortunately, the PPPRs retained time-stamped copies of both data sets. You may like to think that such precautions are unnecessary, but just imagine what critics of PPPR would be saying if they had not saved this evidence.
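For the curious, here is a minimal sketch of one way such evidence can be preserved: fingerprinting a downloaded data file and recording a timestamp. The file name is hypothetical, and this is an illustration, not a description of what the PPPRs in question actually did:

```python
# Minimal sketch: fingerprint a downloaded data set so that later,
# silent edits by the authors can be detected. File name is hypothetical.
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    path = "uploaded_trial_data.csv"          # hypothetical file name
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp}  {sha256_of_file(path)}  {path}")
    # Keep this line somewhere you cannot quietly edit later
    # (email it to yourself, post it, or commit it to a public repository).
```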

Until journals get more supportive of post-publication peer review, we need repeated vigilante actions, striking from Twitter, Facebook pages, and blogs. Unless readers acquire basic critical appraisal skills and take the time to apply them, they will have to keep turning to social media for credible filters of all the crap that is flooding the scientific literature.

I’ve enlisted Magneto because he is a mutant. He does not have any extraordinary powers of critical appraisal. To the contrary, he unflinchingly applies what we should all acquire. As a mutant, he can apply his critical appraisal skills without the mental anguish and physiological damage that could beset humans appreciating just how bad the literature really is. He doesn’t need to maintain his faith in the scientific literature or the dubious assumption that what he is seeing is just a matter of repeat offender authors, editors, and journals making innocent mistakes.

Humans with critical appraisal skills risk demoralization and too often shirk the task of telling it like it is. Some who used their skills too often were devastated by what they found and fled academia. More than a few are now working in California in espresso bars and escort services.

Thank you, Magneto. And yes, I again apologize for having tipped off Jim Coan about our analyses of his spinning and statistical manipulation of his work to get newsworthy findings. Sure, it was an accomplishment to get a published apology and correction from him and Susan Johnson. I am so proud of Coan’s subsequent condemnation of me on Facebook as the Deepak Chopra of Skepticism that I will display it as an endorsement on my webpage. But it was unfortunate that PPPRs had to endure his nonsensical Negative Psychology rant, especially without readers knowing what precipitated it.

The following commentary on the exchange in the Journal of Nervous and Mental Disease makes direct use of your critique. I have interspersed gratuitous insults generated by Literary Genius’ Shakespearean insult generator and Reocities’ Random Insult Generator.

How could I maintain the pretense of scholarly discourse when I am dealing with an author who repeatedly violates basic conventions like ensuring tables and figures correspond to what is claimed in the abstract? Or an arrogant editor who responds so nastily when his slipups are gently brought to his attention and won’t fix the mess he is presenting to his readership?

As a mere human, I needed all the help I could get in keeping my bearings amidst such overwhelming evidence of authorial and editorial ineptness. A little Shakespeare and Monty Python helped.

The statistical editor for this journal is a saucy full-gorged apple-john.

 

Cognitive Behavioral Techniques for Psychosis: A Biostatistician’s Perspective

Domenic V. Cicchetti, PhD, quintessential biostatistician

Domenic V. Cicchetti, you may be, as your website claims,

 A psychological methodologist and research collaborator who has made numerous biostatistical contributions to the development of major clinical instruments in behavioral science and medicine, as well as the application of state-of-the-art techniques for assessing their psychometric properties.

But you must have been out of “the quintessential role of the research biostatistician” when you drafted your editorial. Please reread it. Anyone armed with an undergraduate education in psychology and Google Scholar can readily cut through your ridiculous pomposity, you undisciplined sliver of wild belly-button fluff.

You make it sound like the Internet PPPRs misunderstood Jacob Cohen’s designation of effect sizes as small, medium, and large. But if you read a much-accessed article that one of them wrote, you will find a clear exposition of the problems with these arbitrary distinctions. I know, it is in an open access journal, but what you say about it paying reviewers is sheer bollocks. Do you get paid by the Journal of Nervous and Mental Disease? Why otherwise would you be a statistical editor for a journal with such low standards? Surely someone who has made “numerous biostatistical contributions” has better things to do, thou dissembling swag-bellied pignut.

More importantly, you ignore that Jacob Cohen himself said

The terms ‘small’, ‘medium’, and ‘large’ are relative . . . to each other . . . the definitions are arbitrary . . . these proposed conventions were set forth throughout with much diffidence, qualifications, and invitations not to employ them if possible.

Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988. p. 532.

Could it be any clearer, Dommie?


You suggest that the internet PPPRs were disrespectful of Queen Mother Kraemer in not citing her work. Have you recently read it? Ask her yourself, but she seems quite upset about the practice of using effects generated from feasibility studies to estimate what would be obtained in an adequately powered randomized trial.

Pilot studies cannot estimate the effect size with sufficient accuracy to serve as a basis of decision making as to whether a subsequent study should or should not be funded or as a basis of power computation for that study.

Okay you missed that, but how about:

A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study.

Dommie, although you never mention it, surely you must appreciate the difference between a within-group effect size and a between-group effect size.

  1. Interventions do not have meaningful effect sizes, between-group comparisons do.
  2. As I have previously pointed out

 When you calculate a conventional between-group effect size, it takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So, you focus on what change went on in a particular therapy, relative to what occurred in patients who didn’t receive it.

Turkington recruited a small convenience sample of older patients from community care who averaged over 20 years of treatment. It is likely that they were not getting much support and attention anymore, whether or not they ever had. The intervention in Turkington’s study provided that attention. Maybe some or all of any effects were due simply to compensating for what was missing from inadequate routine care. So, aside from all the other problems, anything going on in Turkington’s study could have been nonspecific.

Recall that in promoting his idea that antidepressants are no better than acupuncture for depression, Irving Kirsch tried to pass off within-group effect sizes as equivalent to between-group effect sizes, despite repeated criticisms. Similarly, long-term psychodynamic psychotherapists tried to use effect sizes from wretched case series for comparison with those obtained in well-conducted studies of other psychotherapies. Perhaps you should send such folks a call for papers so that they can find an outlet in the Journal of Nervous and Mental Disease with you as a Special Editor in your quintessential role as biostatistician.
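For readers keeping score at home, here is a minimal sketch of the two calculations, with invented numbers (an illustration only, not a re-analysis of anyone’s data):

```python
# Within-group vs between-group effect size, with made-up numbers.
# Lower symptom scores are better in this toy example.
from math import sqrt

def d_within(mean_pre: float, mean_post: float, sd_pre: float) -> float:
    """Pre-post change in a single arm; absorbs attention, placebo,
    regression to the mean, and every other nonspecific effect."""
    return (mean_pre - mean_post) / sd_pre

def d_between(mean_tx: float, mean_ctrl: float, sd_tx: float, sd_ctrl: float,
              n_tx: int, n_ctrl: int) -> float:
    """Treatment vs control at follow-up, using the pooled SD.
    Randomization lets this estimate the effect of the therapy itself."""
    sd_pooled = sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                     / (n_tx + n_ctrl - 2))
    return (mean_ctrl - mean_tx) / sd_pooled

# Both arms improve (extra attention, time, chance):
print(round(d_within(mean_pre=50, mean_post=44, sd_pre=10), 2))   # 0.6, looks big
print(round(d_between(44, 46, 10, 10, n_tx=20, n_ctrl=20), 2))    # 0.2, the honest number
```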

Douglas Turkington’s call for a debate

Professor Douglas Turkington: “The effect size that got away was this big.”

Doug, as you requested, I sent you a link to my Google Scholar list of publications. But you still have not responded to my offer to come to Newcastle and debate you. Maybe you were not impressed. Nor did you respond to Keith Laws’ repeated requests to debate. Yet you insulted internet PPPR Tim Smits with the taunt,


 

You congealed accumulation of fresh cooking fat.

I recommend that you review the recording of the Maudsley debate. Note how the moderator Sir Robin Murray boldly announced at the beginning that the vote on the debate was rigged by your cronies.

Do you really think Laws and McKenna got their asses whipped? Then why didn’t you accept Laws’ offer to debate you at a British Psychological Society event, after he offered to pay your travel expenses?

High-Yield Cognitive Behavioral Techniques for Psychosis Delivered by Case Managers…

Dougie, we were alerted that bollocks would follow by the “high yield” of the title. Just what distinguishes this CBT approach from any other intervention to justify “high yield”, except your marketing effort? Certainly not the results you obtained from an earlier trial, which we will get to.

Where do I begin? Can you dispute what I said to Dommie about the folly of estimating effect sizes for an adequately powered randomized trial from a pathetically small feasibility study?

I know you were looking for a convenience sample, but how did you get from Newcastle, England to rural Ohio and recruit such an unrepresentative sample of 40-year-olds with 20 years of experience with mental health services? You don’t tell us much about them, not even a breakdown of their diagnoses. But would you really expect that the routine care they were currently receiving was even adequate? Sure, why wouldn’t you expect to improve upon that with your nurses? But what would you be demonstrating?


 

The PPPR boys from the internet made noise about Table 2, and passing reference to the totally nude Figure 5, and how claims in the abstract had no apparent relationship to what was presented in the results section. And how nowhere did you provide means or standard deviations. But they did not get to Figure 2. Notice anything strange?

Despite what you claim in the abstract, none of the outcomes appear significant. Did you really mean standard errors of the mean (SEMs), not standard deviations (SDs)? The people to whom I showed the figure did not think so.


 

And I found this advice on the internet:

If you want to create persuasive propaganda:

If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEM, and hope that your readers think they are SD.

If your goal is to cover up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors.
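A minimal sketch of how much narrower SEM bars look than SD bars, with invented numbers:

```python
# How the same spread looks, depending on whether the error bars
# show the SD or the SEM. Numbers are invented for illustration.
from math import sqrt

sd = 10.0        # standard deviation of individual scores
n = 20           # patients per group
sem = sd / sqrt(n)

print(f"SD  bars: +/- {sd:.2f}")    # +/- 10.00
print(f"SEM bars: +/- {sem:.2f}")   # +/- 2.24, roughly 4.5x narrower
```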

Why did you expect to be able to talk about effect sizes of the kind you claim you were seeking? The best meta-analysis suggests an effect size of only .17 with blind assessment of outcome. Did you expect that unblinding assessors would lead to that much more improvement? Oh yeah, you cited your own previous work in support:

That intervention improved overall symptoms, insight, and depression and had a significant benefit on negative symptoms at follow-up (Turkington et al., 2006).

Let’s look at Table 1 from Turkington et al., 2006.

A consistent spinning of results


Don’t you just love those three-digit significance levels that allow us to see that p = .099 for overall symptoms meets the apparent criterion of p < .10 in this large sample? Clever, but it doesn’t work for depression, with p = .128. But you have a track record of being sloppy with tables. Maybe we should give you the benefit of the doubt and ignore the table.

But Dougie, this is not some social priming experiment with college students getting course credit. This is a study that took up the time of patients with serious mental disorder. You left some of them in the squalor of inadequate routine care after gaining their consent with the prospect that they might get more attention from nurses. And then, with great carelessness, you put the data into tables that had no relationship to the claims you were making in the abstract, or in your attempts to get more funding for future such ineptitude. If you drove your car the way you write up clinical trials, you’d lose your license, if not go to jail.


 

 

The 2014 Lancet study of cognitive therapy for patients with psychosis

Forgive me that I missed, until Magneto reminded me, that you were an author on the, ah, controversial paper

Morrison, A. P., Turkington, D., Pyle, M., Spencer, H., Brabban, A., Dunn, G., … & Hutton, P. (2014). Cognitive therapy for people with schizophrenia spectrum disorders not taking antipsychotic drugs: a single-blind randomised controlled trial. The Lancet, 383(9926), 1395-1403.

But with more authors than patients remaining in the intervention group at follow up, it is easy to lose track.

You and your co-authors made some wildly inaccurate claims about having shown that cognitive therapy was as effective as antipsychotics. Why, by the end of the trial, most of the patients remaining in follow up were on antipsychotic medication. Is that how you obtained your effectiveness?

In our exchange of letters in The Lancet, you finally had to admit

We claimed the trial showed that cognitive therapy was safe and acceptable, not safe and effective.

Maybe you should similarly be retreating from your claims in the Journal of Nervous and Mental Disease article? Or just take refuge in the figures and tables being uninterpretable.

No wonder you don’t want to debate Keith Laws or me.


 

 

A retraction for High-Yield Cognitive Behavioral Techniques for Psychosis…?

The Turkington article meets the Committee on Publication Ethics (COPE) guidelines for an immediate retraction (http://publicationethics.org/files/retraction%20guidelines.pdf).

But neither a retraction nor even a formal expression of concern has appeared.

Maybe matters can be left as they now are. In the social media, we can point to the many problems of the article, like a clogged toilet warning that the Journal of Nervous and Mental Disease is not a fit place to publish – unless you are seeking exceedingly inept or nonexistent editing and peer review.

 

 

 

Vigilantes can periodically tweet Tripadvisor style warnings, like

toilets still not working

 

 

Now, Dommie and Dougie, before you again set upon some PPPRs just trying to do their jobs for little respect or incentive, consider what happened this time.

Special thanks are due to Magneto, but Jim Coyne has sole responsibility for the final content. It does not necessarily represent the views of PLOS blogs or of other individuals or entities, human or mutant.

Evolution and engineering of the megajournal – Interview with Pete Binfield

Image courtesy of PeerJ

Peter Binfield wrote a nice analysis of megajournals over at Creative Commons Aotearoa New Zealand, an organisation on which I serve. Megajournals are a recent phenomenon that has changed the face of scientific publishing.

I am an academic editor at PeerJ, as well as at PLOS ONE (PONE), “the” megajournal of the Public Library of Science. I came to PONE first as an author (I submitted before PONE had begun publishing), and then joined as an Academic Editor under the rule of Pete Binfield. I saw PONE grow into the publishing giant it is today, feeling proud of being a small part of it. Not long ago, I saw Pete leave to join Jason Hoyt (formerly of Mendeley, another venture I had signed up to in its very early days) in search of a new adventure that would eventually become PeerJ. It wouldn’t be long before I would become an academic editor there and find myself, again, under Pete’s rule. It has been about a year since that invitation, and Open Access Week gave me an opportunity to reflect on my experience.

Who is Pete Binfield?

PB: Before PLOS ONE I spent about 14 years in the subscription publishing world. I worked for Institute of Physics Publishing (doing books), then moved to Holland to work for Kluwer Academic Publishers for 8 years (and Kluwer then merged with Springer), and finally I moved to the US to work for SAGE Publications (the largest social science publisher). It was during my time at Kluwer and then SAGE that the Open Access movement was really taking off, and it quickly became apparent to me that this was the way the industry was (or at least should be!) going. I wanted to be at the leading edge of this movement, not looking in at it from outside, trying to play catch up, so when the opportunity came up to move to PLOS and run PLOS ONE, I jumped at it.

I am a biology teacher (broadly speaking), mainly in the medical school. As such, I can’t escape talking about evolved and engineered systems. Animals’ bodies are evolved – changes in structure and function happen against a backdrop of conserved structures. You can’t really understand “why” an organ looks the way it looks and works the way it does without thinking about what building blocks were available to start with. Engineers have it easier in a sense: they don’t have a preset structure they need to hack to get the best they can; they can start from scratch. Building an artificial kidney that works on dry land has fewer constraints than evolving one from that of a water-dwelling ancestor. So if you are a journal, how do you go from print to online?

Building a journal from scratch, too, is not the same as evolving one. When PLOS came to life in the early 2000s, over a decade ago, they were able to invent their journals from scratch. And boy, did they do that well (and still do). They changed the nature of formal scientific communication and sent traditional publishers chasing their tails. Traditional publishers have been slow to adapt, trying to hack the 17th-century publishing model. When PLOS ONE was born it was unique, exploiting what PLOS had achieved so well as an Open Access online publication, but also seeking to change the rules of how papers were to be accepted. This, in the whole evolution analogy, was a structural change with a very large downstream effect.

PB: I think some of my prior colleagues might have thought that it was a strange transition – at SAGE I had been responsible for over 200 journal titles in a vibrant program, and now I was moving to PLOS to run a single title (PLOS ONE) in an organization that only had 7 titles. However, even at that time I could see the tremendous potential that PLOS ONE had and how it could bring about rapid change. It was the unique editorial criteria (peer-reviewing only for scientific validity); the innovative functionality; the potential for limitless growth; and the backing of a ‘mover and shaker’ organization which really excited me. I joined PLOS with the hope that we could make PLOS ONE the largest journal in the world, and to use that position to bring about real change in the industry – I think most people would agree we achieved that.

Until last year, you could pretty much put journals into two broad bags: those that were evolving from “print” standards and those that were evolving from “online” standards, which also included the “megajournals” like PLOS ONE. Yet more than 10 years after the launch of PLOS, and given the accelerated changes in “online” media, there was an opportunity for a fresh engineering approach.

PB: When I left, the journal was receiving about 3,000 submissions a month, and publishing around 2,000 – so to change anything about PLOS ONE was like trying to change the engines of a jet, in mid-flight. We had an amazingly successful and innovative product (and, to be clear, it still is) but it was increasingly difficult to introduce significant new innovations (such as new business models; new software; a new mindset).

In addition, myself and Jason wanted to attempt an entirely new business model which would make the act of publishing significantly cheaper for the author. I think it would have been very hard for PLOS to attempt this within the PLOS ONE structure which, in many ways, was already supporting a lot of legacy systems and financial commitments.

When Jason approached me with the original idea for PeerJ it quickly became clear that by partnering together we would be able to do things that we wouldn’t have been able to achieve in our previous roles (he at Mendeley, and me at PLOS). By breaking out and starting something new, from scratch, it was possible to try to take the lessons we had both learned and move everything one or two steps forwards with an entirely new mindset and product suite. That is an exciting challenge of course, but already I think you can see that we are succeeding!

PeerJ had, from the start, a lot of what we (authors) were looking for. We had all been struggling for a while with the knowledge that the imperative to publish open access was growing, either from personal motivation (as in my case) or because of funders’ or institutional mandates. We were also struggling with the perceived cost of Open Access, especially within the traditional journals. There is too much at stake in individuals’ careers not to choose carefully how to “brand” our articles, because we know too well that at some point or another someone will value our work more on the brand than on the quality, and that someone has the power to decide whether we get hired, promoted, or granted tenure. PLOS ONE had two things in its favour: it was part of the already respected PLOS brand, and it was significantly cheaper than the other PLOS journals. Then, over a year ago, Pete and Jason came out of the closet with one of the best catch-phrases I’ve seen:

If we can set a goal to sequence the Human Genome for $99, then why shouldn’t we demand the same goal for the publication of research?

They had the full package: Pete’s credibility in the publishing industry, Jason’s insights into how to help readers and papers connect, and a cheap price – not just affordable, cheap. I bought my full membership out of my own pocket as soon as I could. I gave them my money because I had met and learned to trust both Pete’s and Jason’s insights and abilities.

PB: [The process from development to launch day] was very exciting, although clearly nail-biting! One of the things which was very important to us was to build our own submission, peer review and publication software entirely from scratch – something which many people thought would not be possible in a reasonable time frame. And yet our engineering team, recruited and led by Jason, was able to complete the entire product suite in just 6 months of development time. First we built the submission and peer review system, and as soon as submissions started moving through that system we switched to building the publication platform. Everything is hosted on the cloud and implemented using GitHub, and so we were able to keep our development infrastructure extremely ‘light’ and flexible.

But even that does not guarantee buy-in. Truth be told, even if PeerJ had turned out to be just an interesting experiment, I think mine was money well spent. (All in the name of progress.) What tipped the balance for me was the addition of Tim O’Reilly to the mix. Here is someone who understands the web (heck, he popularised that famous Web 2.0 meme), publishing and innovation. O’Reilly brought in what, from my point of view, was missing from the original mix and was crucial to attract authors: a sense of sustainability.

by @McDawg on twitter

PeerJ looked different to me in a very particular way – while other journals screamed out “brand” or “papers”, PeerJ was screaming out “authors”. Whether this is a bias of mine, because of my perception of the founders or the life-membership model, to me this was a different kind of journal. It wouldn’t be long until I got invited to join the editorial board, and then got to see who my partners in crime would be.

PB: Simultaneously, we were building up the ‘editorial’ side of the journal. We started with a journal with no reputation, brand, or recognized name and managed to recruit an Editorial Board of over 800 world class academics (including yourself, and 5 Nobel Laureates); we created the editorial criteria and detailed author guidelines; we defined a comprehensive subject taxonomy; we established ourselves with all the third party services which support this infrastructure (such as CrossRef, CLOCKSS, COPE, OASPA etc); we contracted with a production vendor and so on.

Everything was completed in perfect time, and worked flawlessly from the very start – it really is a testament to the talented staff we have and I think we have proven to other players that this approach is more than possible.

But to launch a journal you need articles, and you also need to make sure your system does not crash. Academic Editors were invited to submit manuscripts free of charge in exchange for participating in the beta testing. I had an article that was ready to submit, and since by then I had pretty much no funding, the free deal was worth any bug-reporting nuisance. I had been producing digital files for submission for ages, and doing online submissions for long enough, that I set a full day aside to go through the process (especially since this was a bug-reporting exercise). And then came the surprise. Yes, there were a few bugs, as expected, but the submission system was easy and more user-friendly than I had anticipated. (Remember when I said above that PeerJ screamed “authors”?) For the first time, I experienced a submission system that really was “user friendly”.

PB: I am constantly amazed that you can start from nothing, and provided you have staff who know what they are doing, and that you have a model which people can get behind, then it is entirely possible to build a world-class publishing operation from a standing start and create something which can compete with, and beat out, the more established players. As a testament to this, we have been named one of the Top 10 “Educational Technology Innovators of 2013” by the Chronicle of Higher Education; and as the “Publishing Innovation of 2013” by the Association of Learned and Professional Scholarly Publishers.

Then came the reviews of the paper – and that is when I found the benefit of knowing who the reviewers were. Many times I encounter odd reviewers’ comments that leave me puzzled, going “uh?”. In this case, because I knew who the reviewer was, I could understand where they were coming from. It made the whole process a lot easier. Apparently, the myth that people won’t review papers if their names are revealed is, well, a myth.

PB: One particularly pleasant surprise has been the community reaction to our ‘optional open peer review’. At the time of writing, pretty much 100% of our authors are choosing to reproduce their peer-review history alongside their published articles (for example, every paper we are publishing in OA week is taking this option). We believe that making the peer review process as open as possible is one of the most important things that anyone can do to preserve the valuable comments of their peer-reviewers (time consuming comments which are normally lost to the world) and to prove the rigour of their published work .

I am not alone in being satisfied as an author. Not too long ago, PeerJ did their first author survey. Even as an editor I was biting my nails waiting to see the results; I can only imagine the stress and anticipation at PeerJ headquarters.

PB: Yes, we conducted our first author survey earlier this year and we were extremely pleased to learn, for example, that 92% of responding authors rated their overall PeerJ experience as either “one of the best publishing experiences I have ever had” (42%) or “a good experience” (49%). In addition, 86% of our authors reported that their time to first decision was either “extremely fast” (29%) or “fast” (57%). Any publisher, no matter how well resourced or established, would be proud to be able to report results like these!

Perhaps the biggest surprise was how engaged our authors were, and how much feedback they were willing to provide. We quite literally received reams of free text feedback which we are still going through – so be careful what you ask for!

I am not surprised at this – I myself provided quite a bit of feedback. Perhaps seeing these comments from Pete emphasises the sense of community that some of us feel is the point of difference with PeerJ.

PB: We are creating a publishing operation, not a ‘facebook for scientists’, however with that said our membership model does mean that we tend to develop functionality which supports and engages our members at every touch point. So although it is early days, I think a real community is already starting to form and as a result you can start to see how our broader vision is taking shape.

Unlike most publishers (who have a very ‘article centric’ mentality), our membership model means that we are quite ‘person centric’. Where a typical publisher might not know (or care) who the co-authors are on a paper, for us they are all Members, and need to be treated well or they will not come back or recommend us to their peers. With this mindset, you can see that we have an intimate knowledge of all the interactions (and who performed them) that happen on a paper. Therefore when you come to our site you can navigate through the contributions of an individual (for example, see the links that are building up at this profile) and see exactly how everyone has contributed to the community (through our system of ‘Academic Contribution’ points).

Another example of our tendency towards ‘community building’ is our newly launched Q&A Functionality. With this functionality, anyone can ask a question (on a specific part of a specific article; on an entire article; or on any aspect of science that we cover) and anyone in the community can answer that question. People who ask or answer questions can be ‘voted’ up or down, and as a result we hope to build up a system of ‘reputation recognition’ in any given field. Again – this is a great way to build communities of practise, and the barrier to entry is very low.

Image courtesy of PeerJ

It is early days – this is new functionality and it will be some time before we can see whether it takes off. PLOS ONE also offers commenting, but that seems to be an under-used feature. I can’t help but wonder whether the experience of PeerJ might be different because the relationship with authors and editors is also different. Will feeling that we, the authors (and not our articles), are the centre of attention make a difference?

PB: This is extremely important to us, so thank you for noticing! One of the mistakes that subscription publishers are making is that they have historically focussed on the librarian as the customer (causing them to develop features and functionalities focussed on those people) when in an Open Access world, the customer is the academic (in their roles as authors, editors and reviewers). Open Access publishers are obviously much more attuned to the principle of the ‘academic as customer’ but even they are not as focussed on this aspect as we (with our Membership model) are .

It is very important that authors feel loved; that people receive prompt and effective responses to their queries; that we listen to complaints and react rapidly and so on. One way we are going to scale this is with more automation – for example, if we proactively inform people of the status of their manuscript then they don’t need to email us. On another level, publishing is still a ‘human’ business based on networks of interaction and trust, and so we need to remember that when we resource our organisation going forwards.

This is what I find exciting about PeerJ – there is a new attitude, if not a new concept, that seems to come through. I will not even try to count the number of email and Twitter exchanges that I have had with Pete and PeerJ staff (I would not be surprised if eyes roll at the other end when they see the “from” field in their email inbox). But they have always responded, with graceful and helpful emails. Whether they “love” me or not (as Pete says above) is irrelevant when one is treated with respect and due diligence. I can see similar interactions, at least on Twitter – PeerJ is responsive to suggestions and requests and, at least from where I am standing, seems to have innovation at the top of its list.

PB: I think that everyone at PeerJ came here (myself and Jason included) because we enjoy innovating and we aren’t afraid to try new things. Innovation is quite literally written into our corporate beliefs (“#1. Keep Innovating – We are developing a scholarly communication venue for the 21st Century. We are committed to improving scholarly communications in every way possible”) and so yes, it is part of our DNA and a core part of our competitive advantage.

I must admit, it wasn’t necessarily our intention to use twitter as our bug tracker (!), but it is definitely a very good way to get real time feedback on new features or functionality. Because of our flexible architecture, and ‘can do’ attitude, we can often fix or improve functionality in hours or days (compared to months or years at most other publishers who do not control their own software). For an example of this in action, check out this blog post from a satisfied ‘feature requestor’.

I want PeerJ to succeed not only because I like and admire the people involved with it but because it offers something different, including the PrePrint service to which I hope to contribute soon. So I had to ask Pete: how is the journal doing?

PB: Extremely well! But don’t forget that we are more than just a journal, we are actually a publishing ecosystem that aims to support authors throughout their publication cycles. PeerJ, the peer-reviewed journal has published 200 articles now, but we also have PeerJ PrePrints (our pre-print server) which has published over 80 articles. Considering we have only been publishing since February, this is a very strong output (90% of established journals don’t publish at this level). Meanwhile, our brand new Q&A functionality is already generating great engagement between readers and authors.

We have published a ton of great science, some of which has received over 20,000 views (!) already. We are getting first decisions back to authors in a median of 24 days, and we are going from submission to final publication (including revisions and production time) in just 51 days. Our institutional members such as UC Berkeley, University of Cambridge, and Trinity as well as our Editorial Board of >800 and our Advisory Board of 20, have kicked the tires and clearly support the model. We have saved the academic community almost $1m already, and we now have a significant cadre of members who are able to publish freely, for life, for no additional cost. Ever.

by @stephenjjohnson on twitter

I was thrilled when I got the invitation to become an academic editor at PeerJ, as I was when the offer came from PLOS ONE. I blog in this space primarily because it is part of PLOS; I am not sure I would have added that kind of stress for any other brand. PLOS has been and continues to be a key player in the Open Access movement, and I am proud to be one of their editors.

Who knows what the future of PeerJ might be. I will continue to support the venture because I believe it offers something of real value to science, something somewhat different from what we’ve had so far. Can’t wait to see what else they will pull out of the hat.

 

ASAP awards – Interview with Mark Costello

Mark Costello, a researcher at the Institute of Marine Science and Leigh Marine Laboratory (University of Auckland, New Zealand), was nominated for his work with WoRMS, the World Register of Marine Species, of which he was founding chair. The site provides a database of scientific names for all marine species. Species are sometimes described under different scientific names, and the site helps disambiguate these names and also provides, or links to, information about each species.

Q: How did the project come about?

M. Costello. Courtesy of ASAP awards

MC: When I was in Ireland in the 1990s I was involved in workshops developing policies for biodiversity – the main barrier was the lack of coordination of species names. This meant we couldn’t merge datasets easily enough. In 1996 I put in a proposal, funded by the European Commission, to create an inventory of scientific species names. Since 2004, the Flemish Government has funded the hosting of the database. Once the infrastructure was secure and professionally managed, getting the information into it became possible. People were motivated because this was a permanent website with permanent support from the Flanders Marine Institute (VLIZ). It started as a clean-up exercise.

Q: What is special about the site?

MC: By providing naming information about species, it helps people navigate the scientific literature where alternative names may be used, but it also links to information about the species.
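To make the name-resolution idea concrete, here is a minimal sketch of how a lookup against WoRMS might be scripted in Python. It is my illustration, not something Mark described; the endpoint and field names are assumptions based on the publicly documented Aphia REST service, so check the current documentation before relying on them.

import requests

# Minimal sketch (illustrative only): resolve a species name against the
# WoRMS/Aphia REST service and print the currently accepted name.
# Endpoint and field names are assumptions based on the public API docs.
def resolve_name(name):
    url = "https://www.marinespecies.org/rest/AphiaRecordsByName/" + name
    records = requests.get(url, params={"like": "false", "marine_only": "true"}).json()
    for rec in records:
        # 'valid_name' points to the accepted name, even when the queried
        # name is an unaccepted synonym.
        print(rec["scientificname"], "->", rec["valid_name"],
              "(AphiaID", str(rec["valid_AphiaID"]) + ")")

resolve_name("Orcinus orca")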

Q: What did you learn from working on WoRMS?

Courtesy of Mark Costello

MC: Unexpected patterns emerged from the data. We discovered that the number of species being described over time has been increasing at a linear rate. When you look at the authors, there are now about 3–5 times more people discovering species than ever before – so taxonomists are not really disappearing, as many people have claimed. The number of species discovered per author is, however, declining. That it is taking more people to discover species than it did before suggests that we have discovered most species on Earth (at least half, perhaps two thirds), not only a small fraction as some have speculated. We also found that science is doing better and that conservation is working.

Q: What was people’s response?

MC: Word of mouth helped – there was an element of trust. We only know the people we know – but when you look globally you start to get a different picture than when you look at your own community. The taxonomy is curated by specialists, and people are now more trusting about online collaboration than when we started. But it was important to have a long-term commitment to supporting these databases, to make the system sustainable so that the databases are shareable.

According to their stats page, in 2007 the site received 37,221 unique visitors; by 2012 this had risen to 817,335 unique visitors and 30,423,583 page views. The material is provided under a CC-BY licence, although permission needs to be sought to redistribute the entire database and, it seems, to download the entire database as well. I asked Mark about that.

MC: I don’t think that CC-BY is a hindrance to sharing or reusing the data. We provide a clear citation for the data. We want the source to be cited because we consider it a scholarly publication. And users concerned about the quality assurance of their sources can then cite it as an ‘authoritative’ rather than an anonymous resource. When you combine the data into a new set, people who want to use that new set, or want to replicate the work, need to know where the original data came from. Otherwise they would have to start from scratch. The citation solves this problem.

Courtesy of Mark Costello

MC: The request was put there originally because databases change over time and we were worried that there would be multiple copies, which could create confusion as to which is the best source. It was also a way of avoiding data-flow problems if too many people downloaded the entire database at the same time, and of safeguarding against attacks that bombard the database with constant queries. But it is also a good way of knowing and tracking who your users are, so we can provide the list of organisations that use the database when we are out looking for funding and support.

Q: What would you like to see next?

MC: I would love to have all species on Earth in a quality approved database and see what we could then discover about the species. We learned a lot from querying this database, and we could learn a lot more if we had all species in there.

Even if you are not interested in digging into the data, the site is a great place to get to know our underwater neighbours. I encourage you to visit it.

ASAP awards – Interview with Daniel Mietchen

The names of the six finalists for the ASAP awards are now out, and I was pleased to see Daniel Mietchen’s name on the list. Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp have been working on a really valuable project: exploiting the open access literature to illustrate articles in Wikipedia.

D. Mietchen. Courtesy of ASAP awards

 

Many scientific articles have a “supplementary” materials section, which can be rich in multimedia, but these artifacts may not be as easy to find as those that make up the main body of scientific manuscripts. What Daniel, Raphael and Nils did was maximise the impact of those research outputs by putting them in a place where they can be found, explored and reused by scientists and non-scientists alike.

They developed a tool called the Open Access Media Importer (OAMI) that searches Open Access articles in PubMed Central for multimedia files and uploads them to Wikimedia Commons. This tool exemplifies the added value of papers published under open access with a libre copyright licence such as CC-BY: not only are the articles available to read, but they can also be repurposed in other contexts. The files that the OAMI bot uploaded now illustrate more than 200 English Wikipedia pages, and many more in other languages.

Uploads by the OAMI bot to Wikimedia Commons between 7/12 and 9/13 (CC0)

Q: How did you get started with this project?

DM: My PhD was on Magnetic Resonance Imaging, which primed me to work with videos, and my first postdoc was on music perception, which naturally involved a lot of audio. Both made me aware of all the audiovisual material that was hidden in the supplements of scholarly articles, and I found that the exposure of that part of the literature left much to be desired. For instance, every video site on the Web provides thumbnails or other forms of preview of video content, but back then, no scholarly publisher exposed video content this way. Wikimedia Commons did. I also noticed that Wikipedia articles on scientific topics were rarely illustrated with multimedia. So the two fit well together. Nils, Raphael and I met online, and then sent our first funding proposal in 2011 in order to automate the import of supplementary audio and video files from scholarly articles into Wikimedia Commons.

Q: How did you decide where to start?

DM: We chose to start with PubMed Central. It is one of the largest repositories of scholarly publications, many of which have supplementary materials, and it has an API we could use.
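To give a flavour of the plumbing such a bot needs, here is a minimal sketch in Python of how one might query PubMed Central and locate an article’s downloadable package of files, which is where supplementary audio and video live. This is my illustration, not OAMI’s actual code; the endpoints are NCBI’s public E-utilities and PMC Open Access services, and the search term and filter syntax are illustrative assumptions.

import requests

# Minimal sketch (not the OAMI code itself): find articles in PubMed Central
# and ask the PMC Open Access service where their file packages live.
# The search term and the 'open access' filter are illustrative assumptions;
# check NCBI's documentation for the exact syntax.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
OA_SERVICE = "https://www.ncbi.nlm.nih.gov/pmc/utils/oa/oa.fcgi"

hits = requests.get(ESEARCH, params={
    "db": "pmc",
    "term": "malaria AND open access[filter]",
    "retmax": 5,
    "retmode": "json",
}).json()["esearchresult"]["idlist"]

for pmcid in hits:
    # The OA service returns XML listing download links for the article's
    # package (full text plus supplementary files, including any media).
    record = requests.get(OA_SERVICE, params={"id": "PMC" + pmcid})
    print(record.text)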

Q: How far have you come?

DM: We have now imported basically all audio and video materials from suitably licensed articles available from PubMed, save a few where there were technical difficulties with file conversion or upload. Initially, we did not know how many files this would be, and had roughly estimated (there is no easy way to search for supplementary video or audio files) the number at somewhere between 5,000 and 10,000 back in 2011. The bot now adds several hundred files from newly published articles every month and passed 14,000 uploads to Wikimedia Commons earlier this week. So if you are going to publish multimedia with a suitably licensed paper in a journal indexed in PubMed Central, you – and anyone else – can find it on Commons shortly thereafter.

Q: How does that compare to other Wikimedia content?

DM: Most of the uploaded files are videos, and given that there are about 36,000 video files on Commons in total, about one third of them now have scientific content. That is a much higher proportion than, say, the proportion of scientific articles among all articles on any Wikipedia. However, the number would be even higher if more authors (or journals) decided (or funders mandated) to put their materials under a Wikimedia-compatible license. If materials from their papers cannot be reused on Wikimedia Commons, they are not Open Access.

Q: Were there any hurdles along the way?

DM: Sure. The project actually evolved more slowly than we had anticipated because we had underestimated the extent to which the standards for machine readability of manuscripts deposited in PubMed Central are being ignored by publishers, or interpreted in a rather inconsistent fashion. We put forward a number of suggestions to PubMed Central – who are very cooperative – in order to monitor standard compliance and to facilitate reuse by us and others, and we’ll present a paper on that at a conference during Open Access Week.

Q: What else can OAMI do, and how can people access it?

DM: The software is available on GitHub and was built to be both reusable and extendable, so if someone wants to write a plugin to export the videos from PubMed Central to places like YouTube, they can start doing that right now (in fact, work on a YouTube pipeline has already started). Or we could think about harvesting in places other than PMC, or materials other than audiovisuals. If anyone has ideas in this regard, they would be most welcome.

Q: What comes next for you?

DM: This was and is a spare-time project and will likely continue as such for some time. While it was a perfect fit with my Wikimedian in Residence project at the Open Knowledge Foundation Germany, which ended this summer, I am continuing to work at the interface between research, openness and the public. I am now at the Natural History Museum in Berlin, working on the pro iBiosphere project, which aims to lay the groundwork for integrating biodiversity research with the Web. That will require a greater degree of openness than we are used to now, as well as better machine readability of the relevant information, a topic I am currently focusing on.

I met Daniel online a few years ago, and he has been a source of motivation and inspiration for a lot of us. It makes me very happy to see that his work has not gone unnoticed, and I look forward to seeing the outcome of his next projects.

ASAP awards – Interview with Mat Todd

The names of the six finalists for the ASAP awards are out.

Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.”

One of the finalists is Mat Todd for his participation in the Global Collaboration to Fight Malaria.

Few research projects reflect this spirit of Open Science as well as the Open Source Malaria Project, which is trying to find molecules that can help fight this terrible disease. Unlike other drug discovery projects, they are building on compounds that have been put in the public domain and making the discovery process open not only for anyone to look at but also for anyone who wants to participate – whatever that contribution might be.

M. Todd. Courtesy of ASAP awards

I had a chance to talk to Mat Todd the other night and he was gracious enough to answer some of my questions.

Q: What made you get engaged in Open Research?

MT: I kept looking around me and finding problems that were not being solved efficiently because people were not exploiting the power of the Internet to work together. Putting your work on the web helps you get greater interaction and find the best people to work with you. The psychological barrier, however, is that in the process you lose control of your project and failures are clearly revealed.

Q: How do you get people to overcome those barriers?

MT: I don’t know. You need to have the attitude that something needs to be done and done really well, even if it is not ultimately done by you. We should assume that the next generation will adopt approaches to solving problems that are more fluid than the ones we have today.

Q: How did the Open Source Malaria project get started?

MT: We built on an earlier project that solved how to make a drug in an improved way, something that was needed by the World Health Organisation. (here and here) We thought, “how about extending this to drug discovery?” That’s interesting because there you have the issue of whether you need patent protection, which is seemingly at odds with a totally open approach. We were able to start with data that GSK had put in the public domain in 2010. This move by GSK was pretty incredible, but they had so many compounds that were active against malaria that they regarded putting the data into the public domain as a sound way to increase their interactions with other scientists. Open data stimulates research activity by others.

Q: What do you think this project means to the Open Science movement?

MT: The project lets people see the process and that might get people more interested in what science is: there’s nothing mysterious about it, just people doing work. The Open Source Malaria project also eschews patents, and that means you need to think about whether new medicines can be taken all the way through to the public without that kind of protection – that’s actually what the session I’m running at OKCon at the moment is all about. How will we cover downstream costs of making the project’s discoveries available to people? Generally though, there is a fair amount of pressure on the project – we need to get it right because we don’t want the project to become the example of open science not working!

Q: Do you think this open source model can be exploited for other diseases?

MT: Diseases vary in their risk and complexity, so it will depend on the disease. Phase III clinical trials are typically the cripplingly expensive bit, and drugs can often fail there after lots of investment. In the case of malaria the full set of clinical trials may not be so costly. There is something to be said for the open approach de-risking the whole process, because you ought to be more confident in the quality of the drugs you’re trialling. I think the answer to your question is “yes”, in short. More generally, though, we need to think beyond financial profit and start thinking that healthy people are more productive – that changes the reasons why public funds might be used to cover these huge costs.

Q: Where is the project at?

MT: We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial, but it is not the be-all and end-all. The process has been reversed: we first share the data and all the details of the project as it goes, and then, when we have finished, we move to publishing. The project itself has just started looking at a new series of very nice compounds that have also come from the private sector and have been put in the public domain by MMV.

Q: What have you come to enjoy about participating in the project?

MT: What I love about it is working with really smart people wherever they are, from students to professors, Australia through Europe to the US.

 Q: And where do you think Open Science is at?

MT: Very early days. If everyone in the world did open science then it would just be science and I could stop talking about it…

I came across Mat online several years ago, and he, like most others that participated in the Open Science discussions, helped shape my thinking and strengthen my commitment to a better way of doing science. We talked a bit about those “good old days”, and he ended the conversation with a quote from Charles Dickens:

“We are all sailing away to the sea, and have a pleasure in thinking of the river we are upon, when it was very narrow and little.” (From Dickens, C. (2012). The Selected Letters of Charles Dickens. Oxford University Press)

ASAP Awards Finalists announced

Earlier this year, nominations opened for the Accelerating Science Awards Program (ASAP). Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.” From their website:

The Accelerating Science Award Program (ASAP) recognizes individuals who have applied scientific research – published through Open Access – to innovate in any field and benefit society.

The list of finalists is impressive, as is the work they have been doing taking advantage of Open Access research results. I am sure the judges did not have an easy job. How does one choose the winners?

In the end, this has been the promise of Open Access: that once the information is put out there, it will be used beyond its original purpose, in innovative ways. From cell phone apps that help diagnose HIV in low-income communities, to mobile phones used as microscopes in education, to efforts to help cure malaria, the finalists are a group of people that the Open Access movement should feel proud of. They represent everything we believed could be achieved when the barriers to accessing scientific information were lowered to nothing more than access to the internet.

The finalists have exploited Open Access in a variety of ways, and I was pleased to see a few familiar names in the finalists list. I spoke to three of the finalists, and you can read what Mat Todd, Daniel Mietchen and Mark Costello had to say elsewhere.

One of the finalists is Mat Todd from the University of Sydney, whose work I have stalked for a while now. Mat has been working on an open source approach to drug discovery for malaria. His approach goes against everything we are always told: that unless one patents one’s discovery, there is no chance that the findings will be commercialised into a marketable pharmaceutical product. For the naysayers out there, take a second look here.

 

A different approach to fighting disease was taken by Nikita Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar, who tackled HIV. They developed a smartphone app to help circumvent the need to go to a clinic to get an HIV test, avoiding the possible discrimination that may come with it. But with home testing for HIV, what was then needed was a way to provide people with the information and support that would normally be given face to face. Smartphones are increasingly becoming a tool that healthcare is exploring and exploiting. The hope is that HIV infection rates could be reduced by diminishing the number of infected people who are unaware of their condition.

 

What happens when different researchers from different parts of the world use different names for the same species? This is an issue that Mark Costello came across – and decided to do something about. What he did was become part of the WoRMS project, a database that collects knowledge about individual marine species. The site receives about 90,000 visitors per month. The data in the WoRMS database is curated and available under a CC-BY licence. You can read more about Mark Costello here.

 

We’ve all heard about ecotourism. For it to work, it needs to go hand in hand with conservation. But how do you calculate the value (in terms of revenue) that ecotourism puts on a species? This is what Ralf Buckley, Guy Castley, Clare Morrison, Alexa Mossaz, Fernanda de Vasconcellos Pegas, Clay Alan Simpkins and Rochelle Steven decided to find out. Using freely available data, they were able to calculate to what extent the populations of threatened species depend on money that comes from ecotourism. This gives local organisations the information they need to meet their conservation targets within a viable revenue model.

 

Many research papers are rich in multimedia – but these multimedia files are often published in the “supplementary” section of the article (yes, that part we don’t tend to pay much attention to!). When published under open access, these files offer the opportunity to be exploited in broader contexts, such as illustrating Wikipedia pages. That is what Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp set out to do. They created a bot called the Open Access Media Importer (OAMI) that harvests multimedia files from articles in PubMed Central and uploads them to Wikimedia Commons, where they now illustrate more than 135 Wikipedia pages. You can read more about it here.

 

Saber Iftekhar Khan, Eva Schmid and Oliver Hoeller were nominated for developing a lightweight microscope that uses the camera of a smartphone. The microscope is relatively small, and many of its parts are printed on a 3D printer. For teaching purposes it has two advantages. Firstly, it is mobile, which means that you can go hiking with your class and discover the world that lives beyond your eyesight. Secondly, because the image of the specimen is seen through the camera of your phone or iPod, several students can look at an image at the same time, which, as anyone who teaches knows, is a major plus. To do this with standard microscopes would cost a lot of money in specialised cameras and monitors. Being able to do it at a relatively low cost can give students a way of engaging with science that may be completely different from anything they were offered before.

Three top awards will be announced at the beginning of Open Access Week on October 21st. Good luck to all!