Evolution and engineering of the megajournal – Interview with Pete Binfield

Image courtesy of PeerJ

Peter Binfield wrote a nice analysis of megajournals over at Creative Commons Aotearoa New Zealand, an organisation on which I serve. Megajournals are a recent phenomenon that has changed the face of scientific publishing.

I am an academic editor at PeerJ, as well as at PLOS ONE (PONE), “the” megajournal of the Public Library of Science. I entered PONE first as an author (I submitted before PONE had begun publishing), and then joined as an Academic Editor under the rule of Pete Binfield. I saw PONE grow into the publishing giant it is today, feeling proud of being a small part of it. Not long ago, I saw Pete leave to join Jason Hoyt (formerly of Mendeley, another venture I had signed up to in its very early days) in search of a new adventure that would eventually become PeerJ. It wouldn’t be long before I would become an academic editor and find myself, again, under Pete’s rule. It has been about a year since that invitation, and Open Access Week gave me an opportunity to reflect on my experience.

Who is Pete Binfield?

PB: Before PLOS ONE I spent about 14 years in the subscription publishing world. I worked for Institute of Physics Publishing (doing books), then moved to Holland to work for Kluwer Academic Publishers for 8 years (Kluwer then merged with Springer), and finally I moved to the US to work for SAGE Publications (the largest social science publisher). It was during my time at Kluwer and then SAGE that the Open Access movement was really taking off, and it quickly became apparent to me that this was the way the industry was (or at least should be!) going. I wanted to be at the leading edge of this movement, not looking in at it from outside, trying to play catch-up, so when the opportunity came up to move to PLOS and run PLOS ONE, I jumped at it.

I am a biology teacher (broadly speaking), mainly in the medical school. As such, I can’t escape talking about evolved and engineered systems. Animals’ bodies are evolved – the changes in structure and function happen against a backdrop of conserved structures. You can’t really understand “why” an organ looks the way it looks and works the way it does without thinking about what building blocks were available to start with. Engineers have it easier in a sense. They don’t have a preset structure they need to hack to get the best they can; they can start from scratch. Building an artificial kidney that works on dry land has fewer constraints than evolving one from that of a water-dwelling ancestor. So if you are a journal, how do you go from print to online?

Building a journal from scratch, too, is not the same as evolving one. When PLOS came to life over a decade ago, they were able to invent their journals from scratch. And boy, did they do that well (and still do). They changed the nature of formal scientific communication and sent traditional publishers chasing their tails. Traditional publishers have been slow to adapt – trying to hack the 17th-century publishing model. When PLOS ONE was born it was unique, exploiting what PLOS had achieved so well as an Open Access online publication, but also seeking to change the rules of how papers were to be accepted. This, in the whole evolution analogy, was a structural change with a very large downstream effect.

PB: I think some of my prior colleagues might have thought that it was a strange transition – at SAGE I had been responsible for over 200 journal titles in a vibrant program, and now I was moving to PLOS to run a single title (PLOS ONE) in an organization that only had 7 titles. However, even at that time I could see the tremendous potential that PLOS ONE had and how it could bring about rapid change. It was the unique editorial criteria (peer-reviewing only for scientific validity); the innovative functionality; the potential for limitless growth; and the backing of a ‘mover and shaker’ organization which really excited me. I joined PLOS with the hope that we could make PLOS ONE the largest journal in the world, and to use that position to bring about real change in the industry – I think most people would agree we achieved that.

Until last year, you could pretty much put journals into two broad bags: those that were evolving from “print” standards and those that were evolving from “online” standards, which also included the ‘megajournals’ like PLOS ONE. Yet over 10 years after the launch of PLOS, and given the accelerated changes in “online” media, there was an opportunity for a fresh engineering approach.

PB: When I left, the journal was receiving about 3,000 submissions a month, and publishing around 2,000 – so to change anything about PLOS ONE was like trying to change the engines of a jet, in mid-flight. We had an amazingly successful and innovative product (and, to be clear, it still is) but it was increasingly difficult to introduce significant new innovations (such as new business models, new software, a new mindset).

In addition, Jason and I wanted to attempt an entirely new business model which would make the act of publishing significantly cheaper for the author. I think it would have been very hard for PLOS to attempt this within the PLOS ONE structure, which, in many ways, was already supporting a lot of legacy systems and financial commitments.

When Jason approached me with the original idea for PeerJ it quickly became clear that by partnering together we would be able to do things that we wouldn’t have been able to achieve in our previous roles (he at Mendeley, and me at PLOS). By breaking out and starting something new, from scratch, it was possible to try to take the lessons we had both learned and move everything one or two steps forwards with an entirely new mindset and product suite. That is an exciting challenge of course, but already I think you can see that we are succeeding!

PeerJ had from the start a lot of what we (authors) were looking for. We had all been struggling for a while with knowing that the imperative to publish in Open Access was growing, either from personal motivation (as in my case) or because of funders’ or institutional mandates. We were also struggling with the perceived cost of Open Access, especially within the traditional journals. There is too much at stake in individuals’ careers not to choose carefully how to “brand” our articles, because we know too well that at some point or another someone will value our work more on the brand than on the quality, and that someone has the power to decide if we get hired, promoted, or granted tenure. PLOS ONE had two things in its favour: it was part of the already respected PLOS brand, and it was significantly cheaper than the other PLOS journals. Then, over a year ago, Pete and Jason emerged with one of the best catchphrases I’ve seen:

If we can set a goal to sequence the Human Genome for $99, then why shouldn’t we demand the same goal for the publication of research?

They had a full package: Pete’s credibility in the publishing industry, Jason’s insights on how to help readers and papers connect, and a cheap price: not just affordable, cheap. I bought my full membership out of my own pocket as soon as I could. I gave them my money because I had met and learned to trust both Pete’s and Jason’s insights and abilities.

PB: [The process from development to launch day] was very exciting, although clearly nail-biting! One of the things which was very important to us was to build our own submission, peer review and publication software entirely from scratch – something which many people thought would not be possible in a reasonable time frame. And yet our engineering team, recruited and led by Jason, was able to complete the entire product suite in just 6 months of development time. First we built the submission and peer review system, and as soon as submissions started moving through that system we switched to building the publication platform. Everything is hosted on the cloud and implemented using GitHub, and so we were able to keep our development infrastructure extremely ‘light’ and flexible.

But even that does not guarantee buy-in. Truth be told, even if PeerJ turned out to be just an interesting experiment, I think mine was money well spent. (All in the name of progress.) What tipped the balance for me was the addition of Tim O’Reilly to the mix. Here is someone who understands the web (heck, he popularised that famous Web 2.0 meme), publishing, and innovation. O’Reilly brought in what, from my point of view, was missing from the original mix and was crucial to attract authors: a sense of sustainability.

by @McDawg on twitter

PeerJ looked different to me in a unique way – while other journals screamed out “brand” or “papers”, PeerJ was screaming out “authors”. Whether this is a bias of mine, born of my perception of the founders or of the life-membership model, to me this was a different kind of journal. It wouldn’t be long until I was invited to join the editorial board, and then got to see who my partners in crime would be.

PB: Simultaneously, we were building up the ‘editorial’ side of the journal. We started with a journal with no reputation, brand, or recognized name and managed to recruit an Editorial Board of over 800 world-class academics (including yourself, and 5 Nobel Laureates); we created the editorial criteria and detailed author guidelines; we defined a comprehensive subject taxonomy; we established ourselves with all the third-party services which support this infrastructure (such as CrossRef, CLOCKSS, COPE, OASPA etc.); we contracted with a production vendor; and so on.

Everything was completed in perfect time, and worked flawlessly from the very start – it really is a testament to the talented staff we have and I think we have proven to other players that this approach is more than possible.

But to launch a journal you need articles, and you also need to make sure your system does not crash. Academic Editors were invited to submit manuscripts free of charge in exchange for participating in the beta testing. I had an article that was ready to submit, and since by now I had pretty much no funding, the free deal was worth any bug-reporting nuisance. I had been producing digital files for submission for ages and doing online submissions for long enough that I set a full day aside to go through the process (especially since this was a bug-reporting exercise). And then came the surprise. Yes, there were a few bugs, as expected, but the submission system was easier and more user friendly than I had anticipated. (Remember when I said above that PeerJ screamed “authors”?) For the first time I experienced a submission system that was genuinely “user friendly”.

PB: I am constantly amazed that you can start from nothing, and provided you have staff who know what they are doing, and that you have a model which people can get behind, then it is entirely possible to build a world-class publishing operation from a standing start and create something which can compete with, and beat out, the more established players. As a testament to this, we have been named one of the Top 10 “Educational Technology Innovators of 2013” by the Chronicle of Higher Education; and as the “Publishing Innovation of 2013” by the Association of Learned and Professional Scholarly Publishers.

Then came the reviews of the paper – and that is when I found the benefit of knowing who the reviewers were. Many times I encounter odd reviewers’ comments that leave me puzzled, going “uh?”. In this case, because I knew who the reviewer was, I could understand where they were coming from. It made the whole process a lot easier. Apparently, the myth that people won’t review papers if their names are revealed is, well, a myth.

PB: One particularly pleasant surprise has been the community reaction to our ‘optional open peer review’. At the time of writing, pretty much 100% of our authors are choosing to reproduce their peer-review history alongside their published articles (for example, every paper we are publishing in OA week is taking this option). We believe that making the peer review process as open as possible is one of the most important things that anyone can do to preserve the valuable comments of their peer-reviewers (time-consuming comments which are normally lost to the world) and to prove the rigour of their published work.

I am not alone in being satisfied as an author. Not too long ago, PeerJ ran their first author survey. Even as an editor I was biting my nails to see the results; I can only imagine the stress and anticipation at PeerJ headquarters.

PB: Yes, we conducted our first author survey earlier this year and we were extremely pleased to learn, for example, that 92% of responding authors rated their overall PeerJ experience as either “one of the best publishing experiences I have ever had” (42%) or “a good experience” (49%). In addition, 86% of our authors reported that their time to first decision was either “extremely fast” (29%) or “fast” (57%). Any publisher, no matter how well resourced or established, would be proud to be able to report results like these!

Perhaps the biggest surprise was how engaged our authors were, and how much feedback they were willing to provide. We quite literally received reams of free text feedback which we are still going through – so be careful what you ask for!

I am not surprised at this – I myself provided quite a bit of feedback. Perhaps these comments from Pete emphasise the sense of community that some of us feel is the point of difference with PeerJ.

PB: We are creating a publishing operation, not a ‘Facebook for scientists’. That said, our membership model does mean that we tend to develop functionality which supports and engages our members at every touch point. So although it is early days, I think a real community is already starting to form, and as a result you can start to see how our broader vision is taking shape.

Unlike most publishers (who have a very ‘article centric’ mentality), our membership model means that we are quite ‘person centric’. Where a typical publisher might not know (or care) who the co-authors are on a paper, for us they are all Members, and need to be treated well or they will not come back or recommend us to their peers. With this mindset, you can see that we have an intimate knowledge of all the interactions (and who performed them) that happen on a paper. Therefore when you come to our site you can navigate through the contributions of an individual (for example, see the links that are building up at this profile) and see exactly how everyone has contributed to the community (through our system of ‘Academic Contribution’ points).

Another example of our tendency towards ‘community building’ is our newly launched Q&A functionality. With this functionality, anyone can ask a question (on a specific part of a specific article; on an entire article; or on any aspect of science that we cover) and anyone in the community can answer that question. People who ask or answer questions can be ‘voted’ up or down, and as a result we hope to build up a system of ‘reputation recognition’ in any given field. Again – this is a great way to build communities of practice, and the barrier to entry is very low.

Image courtesy of PeerJ

It is early days – this is new functionality and it will be some time before we can see if it takes off. PLOS ONE also offers commenting, but that seems to be an under-used feature. I can’t help but wonder whether the experience at PeerJ might be different because the relationship with authors and editors is also different. Will feeling that we, the authors (and not our articles), are the centre of attention make a difference?

PB: This is extremely important to us, so thank you for noticing! One of the mistakes that subscription publishers are making is that they have historically focussed on the librarian as the customer (causing them to develop features and functionalities focussed on those people) when in an Open Access world, the customer is the academic (in their roles as author, editor and reviewer). Open Access publishers are obviously much more attuned to the principle of the ‘academic as customer’, but even they are not as focussed on this aspect as we (with our Membership model) are.

It is very important that authors feel loved; that people receive prompt and effective responses to their queries; that we listen to complaints and react rapidly and so on. One way we are going to scale this is with more automation – for example, if we proactively inform people of the status of their manuscript then they don’t need to email us. On another level, publishing is still a ‘human’ business based on networks of interaction and trust, and so we need to remember that when we resource our organisation going forwards.

This is what I find exciting about PeerJ – there is a new attitude, if not a new concept, that seems to come through. I will not even try to count the number of email and twitter exchanges that I have had with Pete and PeerJ staff (I would not be surprised if eyes roll at the other end when they see the “from” field in their email inbox). But they have always responded, with graceful and helpful emails. Whether they “love” me or not (as Pete says above) is irrelevant when one is treated with respect and due diligence. I can see similar interactions at least on twitter – PeerJ is responsive to suggestions and requests and, at least from where I am standing, seems to keep innovation at the top of the list.

PB: I think that everyone at PeerJ came here (myself and Jason included) because we enjoy innovating and we aren’t afraid to try new things. Innovation is quite literally written into our corporate beliefs (“#1. Keep Innovating – We are developing a scholarly communication venue for the 21st Century. We are committed to improving scholarly communications in every way possible”) and so yes, it is part of our DNA and a core part of our competitive advantage.

I must admit, it wasn’t necessarily our intention to use twitter as our bug tracker (!), but it is definitely a very good way to get real time feedback on new features or functionality. Because of our flexible architecture, and ‘can do’ attitude, we can often fix or improve functionality in hours or days (compared to months or years at most other publishers who do not control their own software). For an example of this in action, check out this blog post from a satisfied ‘feature requestor’.

I want PeerJ to succeed not only because I like and admire the people involved with it but because it offers something different, including the PrePrint service to which I hope to contribute soon. So I had to ask Pete: how is the journal doing?

PB: Extremely well! But don’t forget that we are more than just a journal; we are actually a publishing ecosystem that aims to support authors throughout their publication cycles. PeerJ, the peer-reviewed journal, has published 200 articles now, but we also have PeerJ PrePrints (our preprint server), which has published over 80 articles. Considering we have only been publishing since February, this is a very strong output (90% of established journals don’t publish at this level). Meanwhile, our brand new Q&A functionality is already generating great engagement between readers and authors.

We have published a ton of great science, some of which has received over 20,000 views (!) already. We are getting first decisions back to authors in a median of 24 days, and we are going from submission to final publication (including revisions and production time) in just 51 days. Our institutional members such as UC Berkeley, University of Cambridge, and Trinity, as well as our Editorial Board of >800 and our Advisory Board of 20, have kicked the tires and clearly support the model. We have saved the academic community almost $1m already, and we now have a significant cadre of members who are able to publish freely, for life, for no additional cost. Ever.

by @stephenjjohnson on twitter

I was thrilled when I got the invitation to become an academic editor at PeerJ, as I was when the offer came from PLOS ONE. I blog in this space primarily because it is part of PLOS; I am not sure I would have added that kind of stress for any other brand. PLOS has been and continues to be a key player in the Open Access movement, and I am proud to be one of their editors.

What the future of PeerJ might be, who knows. I will continue to support the venture because I believe it offers something of real value to science that is somewhat different from what we’ve had so far. Can’t wait to see what else they will pull out of the hat.


ASAP awards – Interview with Mark Costello

Mark Costello, a researcher at the Institute of Marine Science and Leigh Marine Laboratory (University of Auckland, New Zealand), was nominated for his work with WoRMS, of which he was founding chair. The site provides a database of scientific names for all marine species. Species are sometimes described under different scientific names, and the site helps disambiguate these names and also provides, or links to, information about each species.

Q: How did the project come about?

Courtesy of ASAP awards

MC: When I was in Ireland in the 1990s I was involved in workshops developing policies for biodiversity – the main barrier was the lack of coordination of species names. This meant we couldn’t merge datasets easily enough. In 1996 I put in a proposal to create an inventory of species names, which was funded by the European Commission. Since 2004, the Flemish Government has funded the hosting of the database. Once the infrastructure was secure and professionally managed, getting the information into it became possible. People were motivated because this was a permanent website with permanent support from the Flanders Marine Institute (VLIZ). It started as a clean-up exercise.

Q: What is special about the site?

MC: By providing naming information about species, it helps people navigate the scientific literature where alternative names may be used, but it also links to information about the species.

Q: What did you learn from working on WoRMS?

Courtesy of Mark Costello

MC: There were unexpected patterns discovered in the data. We found that the number of species being described over time has been increasing at a linear rate. When you look at the authors, there are now about 3-5 times more people discovering species than ever before – so taxonomists are not really disappearing, as many people have said. The number of species discovered per author is, however, declining. That it is taking more people to discover species than it did before suggests that we have discovered most species on Earth (at least half, perhaps two thirds), not only a small fraction as some have speculated. We also found that science is doing better and that conservation is working.

Q: What was people’s response?

MC: Word of mouth helped – there was an element of trust. We only know the people we know – but when you look globally you start to get a different picture than when you look at your own community. The taxonomy is curated by specialists, and people are now more trusting of online collaboration than when we started. But it was important to have a long-term commitment to supporting these databases, to make the system sustainable so that the databases are shareable.

According to their stats page, in 2007 the site received 37,221 unique visitors; by 2012 this had risen to 817,335 unique visitors and 30,423,583 page views. The material is provided under a CC BY licence, although permission needs to be sought to redistribute the entire database and, it seems, to download the entire database as well. I asked Mark about that.

MC: I don’t think that the CC BY licence is a hindrance to sharing or reusing the data. We provide a clear citation for the data. We want the source to be cited because we consider it a scholarly publication. And users concerned about quality assurance of their sources can then cite it as an ‘authoritative’ rather than an anonymous resource. When you combine the data into a new set, people who want to use this new group or want to replicate it need to know where the original data came from. Otherwise they would have to start from scratch. The citation solves this problem.

Courtesy of Mark Costello

MC: The request was put there originally because databases change over time and we were worried that multiple copies would create confusion as to which is the best source. It was also a way of not having to deal with data-flow issues if too many people downloaded the entire database at the same time. We also needed safeguards against attacks that send constant queries to the database. But it is also a good way of knowing and tracking who your users are, so we can provide the list of organisations that use the database when we are out looking for funding and support.

Q: What would you like to see next?

MC: I would love to have all species on Earth in a quality approved database and see what we could then discover about the species. We learned a lot from querying this database, and we could learn a lot more if we had all species in there.

Even if you are not interested in digging into the data, the site is a great place to get to know our underwater neighbours. I encourage you to visit the site.

ASAP awards – Interview with Daniel Mietchen

The names of the six finalists for the ASAP awards are now out, and I was pleased to see Daniel Mietchen’s name on the list. Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp have been working on a really valuable project: they saw an opportunity to exploit open access literature to illustrate articles in Wikipedia.

Courtesy of ASAP awards


Many scientific articles have a “supplementary” materials section, which can be rich in multimedia, but these artifacts may not be as easy to find as those that make up the main body of scientific manuscripts. What Daniel, Raphael and Nils did was maximise the impact of those research outputs by putting them in a place where they can be found, explored and reused by scientists and non-scientists alike.

They developed a tool called the Open Access Media Importer (OAMI) that searches for multimedia files in Open Access articles in PubMed Central and uploads them to Wikimedia Commons. This tool exemplifies the added value of papers published under open access using a libre copyright licence such as CC BY. Not only are the articles available to read, but they can also be repurposed in other contexts. The files that the OAMI bot uploaded now illustrate more than 200 English Wikipedia pages, and many more in other languages.

CC0- Uploads by OAMI bot to Wikimedia Commons between 7/12 and 9/13

Q: How did you get started with this project?

DM: My PhD was on Magnetic Resonance Imaging, which primed me to work with videos, and my first postdoc was on music perception, which naturally involved a lot of audio. Both made me aware of all the audiovisual material that was hidden in the supplements of scholarly articles, and I found that the exposure of that part of the literature left much to be desired. For instance, every video site on the Web provides thumbnails or other forms of preview of video content, but back then, no scholarly publisher exposed video content this way. Wikimedia Commons did. I also noticed that Wikipedia articles on scientific topics were rarely illustrated with multimedia. So the two fit well together. Nils, Raphael and I met online, and then sent our first funding proposal in 2011 in order to automate the import of supplementary audio and video files from scholarly articles into Wikimedia Commons.

Q: Where did you start?

DM: We chose to start with PubMed Central. It is one of the largest repositories of scholarly publications, many of which have supplementary materials, and it has an API we could use.
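The API Daniel mentions is NCBI's public E-utilities service. As a rough, hedged sketch of what the first step of such a harvester might look like (the endpoint and the `open access[filter]` clause come from NCBI's public documentation, not from the OAMI code itself):

```python
from urllib.parse import urlencode

# Base URL for NCBI's public E-utilities search endpoint.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pmc_search_url(term, retmax=20):
    """Build an esearch URL against the PubMed Central ('pmc') database.

    The returned URL asks for up to `retmax` matching article IDs as JSON;
    a harvester would then fetch each record and look for supplementary
    audio/video files. The filter syntax is an assumption drawn from
    NCBI's documented search fields.
    """
    params = {"db": "pmc", "term": term, "retmax": retmax, "retmode": "json"}
    return EUTILS_ESEARCH + "?" + urlencode(params)

# Example: search for suitably licensed malaria articles in the OA subset.
url = build_pmc_search_url("malaria AND open access[filter]", retmax=5)
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns the matching article IDs, which is only the first stage; the real importer also has to parse each article's XML, extract the supplementary media, and verify its licence before uploading.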

Q: How far have you come?

DM: We have now imported basically all audio and video materials from suitably licensed articles available from PubMed, save a few where there were technical difficulties with file conversion or upload. Initially, we did not know how many files this would be, and had roughly estimated (there is no easy way to search for supplementary video or audio files) the number at somewhere between 5,000 and 10,000 back in 2011. The bot now adds several hundred files from newly published articles every month and passed 14,000 uploads to Wikimedia Commons earlier this week. So if you are going to publish multimedia with a suitably licensed paper in a journal indexed in PubMed Central, you – and anyone else – can find it on Commons shortly thereafter.

Q: How does that compare to other Wikimedia content?

DM: Most of the uploaded files are videos, and given that there are about 36,000 video files on Commons in total, about one third of them now have scientific content. That is a much higher proportion than, say, that of scientific articles out of all articles on any Wikipedia. However, the number would be even higher if more authors or journals decided (or funders mandated) to put their materials under a Wikimedia-compatible license. If materials from their papers cannot be reused on Wikimedia Commons, they are not Open Access.

Q: Were there any hurdles along the way?

DM: Sure. The project actually evolved more slowly than we had anticipated because we had underestimated the extent to which the standards for machine readability of manuscripts deposited in PubMed Central are being ignored by publishers, or interpreted in a rather inconsistent fashion. We put forward a number of suggestions to PubMed Central – who are very cooperative – in order to monitor standard compliance and to facilitate reuse by us and others, and we’ll present a paper on that at a conference during Open Access Week.

Q: What else can OAMI do, and how can people have access to it?

DM: The software is available on GitHub and was built to be both reusable and extendable, so if someone wants to write a plugin to export the videos from PubMed Central to places like YouTube, they can start doing that right now (in fact, work on a YouTube pipeline has already started). Or we could think about harvesting in places other than PMC, or materials other than audiovisuals. If anyone has ideas in this regard, they would be most welcome.

Q: What comes next for you?

DM: This was and is a spare-time project and will likely continue as such for some time. While it was a perfect fit with my Wikimedian in Residence project at the Open Knowledge Foundation Germany, which ended this summer, I am continuing to work at the interface between research, openness and the public. I am now at the Natural History Museum in Berlin, working on the pro-iBiosphere project, which aims to lay the ground for integrating biodiversity research with the Web. That will require a greater degree of openness than we are used to now, as well as better machine readability of the relevant information, a topic that I am currently focusing on.

I met Daniel online a few years ago, and he has been a source of motivation and inspiration for a lot of us. It makes me very happy to see that his work has not gone unnoticed, and I look forward to seeing the outcome of his next projects.

ASAP awards – Interview with Mat Todd

The names of the six finalists for the ASAP awards are out.

Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.”

One of the finalists is Mat Todd for his participation in the Global Collaboration to Fight Malaria.

Few research projects reflect this spirit of Open Science as well as the Open Source Malaria Project, which is trying to find molecules that can help fight this terrible disease. Unlike other drug discovery projects, they are building on compounds that have been put in the public domain and making the discovery process not only open for anyone to look at but also open for anyone who wants to participate to do so – whatever that contribution might be.

Courtesy of ASAP awards

I had a chance to talk to Mat Todd the other night and he was gracious enough to answer some of my questions.

Q: What made you get engaged in Open Research?

MT: I kept looking around me and finding problems that were not being solved efficiently because people are not exploiting the power of the Internet to work together. Putting your work on the web helps to get greater interaction and find the best people to work with you. The psychological barrier, however, is that in the process you lose control of your project and failures are clearly revealed.

Q: How do you get people to overcome those barriers?

MT: I don’t know.  You need to have the attitude that something needs to be done and done really well, even if it is not ultimately done by you. We should assume that the next generation will adopt approaches to solving problems that are more fluid than how they are today.

Q: How did the Open Source Malaria project get started?

MT: We built on an earlier project that solved how to make a drug in an improved way, something that was needed by the World Health Organisation. (here and here) We thought: how about extending this to drug discovery? That’s interesting because there you have the issue of whether you need patent protection, which is seemingly at odds with a totally open approach. We were able to start with data that GSK had put in the public domain in 2010. This move by GSK was pretty incredible; they had so many compounds that were active against malaria that putting the data into the public domain seemed a sound way to increase their interactions with other scientists. Open data stimulates research activity by others.

Q: What do you think this project means to the Open Science movement?

MT: The project lets people see the process and that might get people more interested in what science is: there’s nothing mysterious about it, just people doing work. The Open Source Malaria project also eschews patents, and that means you need to think about whether new medicines can be taken all the way through to the public without that kind of protection – that’s actually what the session I’m running at OKCon at the moment is all about. How will we cover downstream costs of making the project’s discoveries available to people? Generally though, there is a fair amount of pressure on the project – we need to get it right because we don’t want the project to become the example of open science not working!

Q: Do you think this open source model can be exploited for other diseases?

MT: Diseases vary in their risk and complexity, so it will depend on the disease. Phase III clinical trials are typically the cripplingly expensive bit, and drugs can often fail there after lots of investment. In the case of malaria the full set of clinical trials may not be so costly. There is something to be said for the open approach de-risking the whole process, because you ought to be more confident in the quality of the drugs you’re trialling. So I think the answer to your question is, in short, “yes”. More generally, though, we need to think beyond financial profit and start thinking that healthy people are more productive – that changes the reasons why public funds might be used to cover these huge costs.

Q: Where is the project at?

MT: We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial, but it is not the be-all and end-all. The process has been reversed: we first share the data and all the details of the project as it goes, and then, when we have finished the project, we move to publishing. The project itself has just started looking at a new series of very nice compounds that have also come from the private sector and have been put in the public domain by MMV.

Q: What have you come to enjoy about participating in the project?

MT: What I love about it is working with really smart people wherever they are, from students to professors, Australia through Europe to the US.

Q: And where do you think Open Science is at?

MT: Very early days. If everyone in the world did open science then it would just be science and I could stop talking about it…

I came across Mat online several years ago, and he, like most others that participated in the Open Science discussions, helped shape my thinking and strengthen my commitment to a better way of doing science. We talked a bit about those “good old days”, and he ended the conversation with a quote from Charles Dickens:

“We are all sailing away to the sea, and have a pleasure in thinking of the river we are upon, when it was very narrow and little.” (From Dickens, C. (2012). The Selected Letters of Charles Dickens. Oxford University Press)

ASAP Awards Finalists announced

Earlier this year, nominations opened for the Accelerating Science Awards Program (ASAP). Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.” From their website:

The Accelerating Science Award Program (ASAP) recognizes individuals who have applied scientific research – published through Open Access – to innovate in any field and benefit society.

The list of finalists is impressive, as is the work they have been doing taking advantage of Open Access research results. I am sure the judges did not have an easy job. How does one choose the winners?

In the end, this has been the promise of Open Access: that once the information is put out there, it will be used beyond its original purpose, in innovative ways. From cell phone apps that help diagnose HIV in low-income communities, to mobile phones used as microscopes in education, to helping cure malaria, the finalists are a group of people the Open Access movement should feel proud of. They represent everything we believed could be achieved when the barriers to scientific information were lowered to just having access to the internet.

The finalists have exploited Open Access in a variety of ways, and I was pleased to see a few familiar names in the finalists list. I spoke to three of the finalists, and you can read what Mat Todd, Daniel Mietchen and Mark Costello had to say elsewhere.

One of the finalists is Mat Todd from the University of Sydney, whose work I have stalked for a while now. Mat has been working on an open source approach to drug discovery for malaria. His approach goes against everything we are always told: that unless one patents one’s discovery, there is no chance the findings will be commercialised as a pharmaceutical product. For those naysayers out there, take a second look here.


A different approach to fighting disease was led by Nikita Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar, who tackled HIV. They developed a smartphone app that circumvents the need to go to a clinic for an HIV test, avoiding the possible discrimination that may come with it. But with home testing available, what was needed was a way to provide people with the information and support that would normally be given face to face. Smartphones are increasingly becoming a tool that healthcare is exploring and exploiting. The hope is that HIV infection rates could be reduced by diminishing the number of infected people who are unaware of their condition.


What happens when researchers in different parts of the world use different names for the same species? This is an issue that Mark Costello came across – and decided to do something about. He became part of the WoRMS project – a database that consolidates knowledge about individual marine species. The site receives about 90,000 visitors per month. The data in the WoRMS database is curated and available under CC-BY. You can read more about Mark Costello here.


We’ve all heard about ecotourism. For it to work, it needs to go hand in hand with conservation. But how do you calculate the value (in terms of revenue) that you can put on a species based on ecotourism? This is what Ralf Buckley, Guy Castley, Clare Morrison, Alexa Mossaz, Fernanda de Vasconcellos Pegas, Clay Alan Simpkins and Rochelle Steven set out to quantify. Using freely available data, they were able to calculate to what extent the populations of threatened species depend on money that comes from ecotourism. This gives local organisations the information they need to meet their conservation targets within a viable revenue model.


Many research papers are rich in multimedia – but these multimedia files are often published in the “supplementary” section of the article (yes – that part that we don’t tend to pay much attention to!). When published under open access, these files offer the opportunity to be used in broader contexts, such as illustrating Wikipedia pages. That is what Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp set out to do. They created a bot called the Open Access Media Importer (OAMI) that harvests multimedia files from articles in PubMed Central. The bot also uploads these files to Wikimedia Commons, where they now illustrate more than 135 Wikipedia pages. You can read more about it here.


Saber Iftekhar Khan, Eva Schmid and Oliver Hoeller were nominated for developing a lightweight microscope that uses the camera of a smartphone. The microscope is relatively small, and many of its parts are printed on a 3D printer. For teaching purposes it has two advantages. Firstly, it is mobile, which means you can go hiking with your class and discover the world that lives beyond your eyesight. Secondly, because the image of the specimen is seen through the camera of a phone or iPod, several students can look at it at the same time, which, as anyone who teaches knows, is a major plus. To do this with standard microscopes would cost a lot of money in specialised cameras and monitors. Being able to do it at relatively low cost can give students a way of engaging with science that may be completely different from what they were offered before.

Three top awards will be announced at the beginning of Open Access Week on October 21st. Good luck to all!