Chance discovery of forgotten 1960s ‘preprint’ experiment

“For years, scientists have complained that it can take months or even years for a scientific discovery to be published, because of the slowness of peer review. To cut through this problem, researchers in physics and mathematics have long used “preprints” – preliminary versions of their scientific findings published on internet servers for anyone to read. In 2013, similar services were launched for biology, and many scientists now use them. This is traditionally viewed as an example of biology finally catching up with physics, but following a chance discovery in the archives of Cold Spring Harbor Laboratory, Matthew Cobb, a scientist and historian at the University of Manchester, has unearthed a long-forgotten experiment in biology preprints that took place in the 1960s, and has written about it in a study published 16 November in the open access journal PLOS Biology.”

ScienceOpen launches new Open Access journal hosting services – ScienceOpen Blog

Recently, we announced a new Open Access hosting partnership with UCL Press. But just what does this mean, exactly?

Our customised hosting services are designed to help publishers showcase and distribute the Open Access journals they publish to maximum effect. These services are a natural extension of our marketing and indexing services, built on years of experience in content management architecture layered with advanced discovery technologies. By working with a range of publishers and content types, we have developed a flexible platform that interconnects scholarly articles at the level of their metadata and establishes a forum for user interaction around them. For Open Access journals, however, we are able to offer further advantages by embedding the full-text articles within our discovery environment.

If funders and libraries subscribed to open access: The case of eLife, PLOS, and BioOne [PeerJ Preprints]

“Following on recent initiatives in which funders and libraries directly fund open access publishing, this study works out the economics of systematically applying this approach to three biomedical and biology publishing entities by determining the publishing costs for the funders that sponsored the research, while assigning the costs for unsponsored articles to the libraries. The study draws its data from the non-profit biomedical publishers eLife and PLOS, and the nonprofit journal aggregator BioOne, with this sample representing a mix of publishing revenue models, including funder sponsorship, article processing charges (APC), and subscription fees. This funder-library open access subscription model is proposed as an alternative to both the closed-subscription model, which funders and libraries no longer favor, and the APC open access model, which has limited scalability across scholarly publishing domains. Utilizing PubMed filtering and manual-sampling strategies, as well as publicly available publisher revenue data, the study demonstrates that in 2015, 86 percent of the articles in eLife and PLOS acknowledged funder support, as did 76 percent of the articles in the largely subscription journals of BioOne. Twelve percent of the articles identified the NIH as a funder, and 8 percent identified other U.S. government agencies. Approximately half of the articles were funded by non-U.S. government agencies, including 1 percent by Wellcome Trust and 0.5 percent by Howard Hughes Medical Institute. For the 17 percent of articles that lacked a funder, the study demonstrates how a collection of research libraries, similar to the one currently subscribing to BioOne, could cover publishing costs.
The goal of the study is to inform stakeholder considerations of open access models that can work across the disciplines by (a) providing a cost breakdown for direct funder and library support for open access publishing; (b) positing the use of publishing data-management organizations (such as Crossref and ORCID) to facilitate per article open access support; and (c) proposing ways in which such a model offers a more efficient, equitable, and scalable approach to open access than the prevailing APC model, which originated with biomedical publishing.”
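The funder-library model described above reduces to simple cost-splitting arithmetic: funders cover the publishing costs of the articles that acknowledge their support, and libraries collectively cover the remainder. A minimal sketch follows; the article counts and per-article cost are illustrative placeholders, not figures from the study.

```python
def allocate_costs(total_articles, cost_per_article, funder_share):
    """Split total publishing costs between funders (sponsored articles)
    and libraries (unsponsored articles), per the funder-library model.

    All inputs here are hypothetical examples, not the study's data.
    """
    funded = round(total_articles * funder_share)
    unfunded = total_articles - funded
    return {
        "funders_pay": funded * cost_per_article,
        "libraries_pay": unfunded * cost_per_article,
    }

# Example: 1,000 articles at $1,500 each, with 83% acknowledging funder
# support, leaving 17% to be covered by a library collective.
split = allocate_costs(1000, 1500, 0.83)
print(split)  # {'funders_pay': 1245000, 'libraries_pay': 255000}
```

In practice, identifying which funder pays for which article is the hard part; this is why the study posits publishing data-management organizations such as Crossref and ORCID as the plumbing for per-article support.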

Peer review: the end of an error?

“It is not easy to have a paper published in the Lancet, so Wakefield’s paper presumably underwent a stringent process of peer review. As a result, it received a very strong endorsement from the scientific community. This gave a huge impetus to anti-vaccination campaigners and may well have led to hundreds of preventable deaths. By contrast, the two mathematics preprints were not peer reviewed, but that did not stop the correctness or otherwise of their claims being satisfactorily established.

An obvious objection to that last sentence is that the mathematics preprints were in fact peer-reviewed. They may not have been sent to referees by the editor of a journal, but they certainly were carefully scrutinized by peers of the authors. So to avoid any confusion, let me use the phrase “formal peer review” for the kind that is organized by a journal and “informal peer review” for the less official scrutiny that is carried out whenever an academic reads an article and comes to some sort of judgement on it. My aim here is to question whether we need formal peer review. It goes without saying that peer review in some form is essential, but it is much less obvious that it needs to be organized in the way it usually is today, or even that it needs to be organized at all.

What would the world be like without formal peer review? One can get some idea by looking at what the world is already like for many mathematicians. These days, the arXiv is how we disseminate our work, and the arXiv is how we establish priority. A typical pattern is to post a preprint to the arXiv, wait for feedback from other mathematicians who might be interested, post a revised version of the preprint, and send the revised version to a journal. The time between submitting a paper to a journal and its appearing is often a year or two, so by the time it appears in print, it has already been thoroughly assimilated. Furthermore, looking a paper up on the arXiv is much simpler than grappling with most journal websites, so even after publication it is often the arXiv preprint that is read and not the journal’s formatted version. Thus, in mathematics at least, journals have become almost irrelevant: their main purpose is to provide a stamp of approval, and even then one that gives only an imprecise and unreliable indication of how good a paper actually is….

An alternative system would almost certainly not be perfect, but to insist on perfection, given the imperfections of the current system, is nothing but status quo bias. To guard against this, imagine that an alternative system were fully established and see whether you can mount a convincing argument for switching to what we have now, where all the valuable commentary would be hidden away and we would have to pay large sums of money to read each other’s writings. You would be laughed out of court.”

Sen. Rand Paul Introduces Bill to Overhaul Federal Research Grant System | American Institute of Physics

“On Oct. 18, Paul introduced the “BASIC Research Act,” which would make several changes to peer review processes and would broaden public access requirements for grant applications and research results….In addition, the bill incorporates almost all of the “Fair Access to Science and Technology Research (FASTR) Act” …”

IIT – KSHIP [Knowledge Sharing in Publishing]

“KSHIP is an Open Access Publisher of peer reviewed open access books, journals and other forms of academic publishing. We are a part of Ubiquity Press’s partner network of university open access publishing. KSHIP is a lot of things – it means ‘inspired’ in Sanskrit, our institute IIT Indore is on the banks of the Kshipra river and it expands to Knowledge Sharing in Publishing.”

Impact of Social Sciences – The next stage of SocArXiv’s development: bringing greater transparency and efficiency to the peer review process

“Almost 1,500 papers have been uploaded to SocArXiv since its launch last year. Up to now the platform has operated alongside the peer-review journal system rather than seriously disrupting it. Looking ahead to the next stage of its development, Philip Cohen considers how SocArXiv might challenge the peer review system to be more efficient and transparent, firstly by confronting the bias that leads many who benefit from the status quo to characterise mooted alternatives as extreme. The value and implications of openness at the various decision points in the system must be debated, as should potentially more disruptive innovations such as non-exclusive review and publication or crowdsourcing reviews.”

Understanding Open Science: Definitions and framework

  1. Understanding Open Science: Definitions and framework. Dr. Nancy Pontika, Open Access Aggregation Officer, CORE (Twitter: @nancypontika)
  2. What is Open Science
  3. Research Lifecycle, as simple as it gets: Idea → Methodology → Data Collection → Analysis → Publish
  4. Research Lifecycle, focus on the steps: Methodology (experiments, interviews, observations, etc.); Data Collection (numbers, code, text, images, sound recordings, etc.); Analysis (statistics, processes, analysis, documentation, etc.); Publish (journal article, dissertation, book, source code, etc.)

The onward march of open science | The Horizons Tracker

“The increasingly open and transparent nature of academic research is something I’ve touched upon many times on this blog in recent years. Further evidence of this general trend has emerged via the launch of MNI Open Research, a new platform for the publication of neuroscience research.

The platform aims to facilitate open and transparent peer-review, with all of the data used in the studies published, including null results, so that other researchers can avoid duplication, and also test the replicability of research.”