“On June 1st, 2011, Peter Binfield, then publisher of PLOS ONE, made a bold and shocking prediction at the Society for Scholarly Publishing annual meeting: “I believe we have entered the era of the OA mega journal,” adding, “Some basic modeling predicts that in 2016, almost 50% of the STM literature could be published in approximately 100 mega journals…Content will rapidly concentrate into a small number of very large titles. Filtering based solely on journal name will disappear and will be replaced with new metrics. The content currently being published in the universe of 25,000 journals will presumably start to dry up.” If you were not present for that pre-meeting workshop, you likely heard it repeated throughout the conference. The open access (OA) megajournal was taking over STM publishing, and Binfield had data to prove it. PLOS ONE, which had received its first Impact Factor (4.351, for 2010) the previous summer, was exploding with new submissions. In a few weeks, the journal would receive its second Impact Factor (4.411), a confirmation that its model was both wildly successful and dangerously competitive. PLOS had discovered the future of STM publishing, and others had better get on board or get out of the way….”
Abstract: With the advent of the Internet and online publishing, the notion has arisen that the world’s research publications could be made freely available to one and all, presumably by shifting the costs to other places in the value chain and disintermediating publishers, a circumstance called Open Access (OA) publishing. While there are many hopes embedded in this view (lower costs, wider access, etc.), it appears more likely that Open Access will come about not through a revolution in the world of legacy publishing, but through upstart media built with the innate characteristics of the Internet in mind. An unanticipated outcome of this situation will be that the overall cost of research publications will rise, though the costs will be borne by different players, primarily authors and their proxies.
Whilst preprints have been around for some time, there have been a number of significant developments over the last few years. In this short talk, Graham will take you on a journey through time, touching upon the history, the recent developments, and what the future may hold for preprints.
“Now a new study has found that nearly half of all academic articles that users want to read are already freely available. These articles may or may not have been published in an open-access journal, but a legally free version is available for readers to download.
To arrive at this conclusion, researcher Heather Piwowar and her colleagues used data from a web-browser extension they had developed called Unpaywall. When users of the extension land on an academic article, it trawls the web to find free versions to download from places such as preprint servers or copies uploaded to university websites.
In an analysis of 100,000 papers queried through Unpaywall, Piwowar and her colleagues found that as many as 47% of the articles users searched for had a free-to-read version available. The study has yet to be peer-reviewed, but Ludo Waltman of Leiden University told Nature that it is ‘careful and extensive.’”
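The lookup the extension performs is also exposed as Unpaywall’s public REST API, which returns a JSON record per DOI. A minimal sketch of that lookup, assuming the v2 endpoint and its documented `is_oa` / `best_oa_location` fields (the DOI and contact email below are placeholders):

```python
# Sketch: ask the Unpaywall v2 API whether a DOI has a free-to-read copy.
# Field names follow the Unpaywall API documentation; the email parameter
# is required by the service to identify callers.
import json
import urllib.request

UNPAYWALL = "https://api.unpaywall.org/v2/{doi}?email={email}"

def best_oa_url(record):
    """Return the URL of the best open-access copy, or None if paywalled."""
    if not record.get("is_oa"):
        return None
    loc = record.get("best_oa_location") or {}
    # Prefer a direct PDF link when one is recorded.
    return loc.get("url_for_pdf") or loc.get("url")

def lookup(doi, email="you@example.org"):
    """Fetch the Unpaywall record for a DOI and extract a free URL, if any."""
    with urllib.request.urlopen(UNPAYWALL.format(doi=doi, email=email)) as resp:
        return best_oa_url(json.load(resp))
```

`best_oa_url` is kept separate from the network call so the response-parsing logic can be exercised on sample records without hitting the API.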
“As we move remorselessly into a world where no individual or team can hope either to read or keep track of the published research in any defined field without machine learning or AI support, primary publishing becomes less important than getting into the dataflow, and thus into the workflow, of scholarship. It still helps to be published in Nature or Cell, but that could take place after visibility on figshare or F1000. Get the metadata right and ensure the visibility, and reputation management can commence. So the first question about the post-journal world is ‘Who keeps score, and how is worth measured?’ And then we come to the next question. If the article is simply a waystage data report, and all the other materials of scholarly communication (blogs, presentations, etc.) can be tracked, and the data from an experimental sequence can be as important for reproducibility as the article, and reports of successfully repeated experiments are in some instances as important as innovation, then the scheme of notification, communication, and cross-referencing must be open, community-owned, and universally available. So how does it get established?”
“There is no doubt that Sci-Hub, the infamous—and, according to a U.S. court, illegal—online repository of pirated research papers, is enormously popular. (See Science’s investigation last year of who is downloading papers from Sci-Hub.) But just how enormous is its repository? That is the question biodata scientist Daniel Himmelstein at the University of Pennsylvania and colleagues recently set out to answer, after an assist from Sci-Hub.
Their findings, published in a preprint on the PeerJ journal site on 20 July, indicate that Sci-Hub can instantly provide access to more than two-thirds of all scholarly articles, an amount that Himmelstein says is “even higher” than he anticipated. For research papers protected by a paywall, the study found Sci-Hub’s reach is greater still, with instant access to 85% of all papers published in subscription journals. For some major publishers, such as Elsevier, more than 97% of their catalog of journal articles is being stored on Sci-Hub’s servers—meaning they can be accessed there for free.
Given that Sci-Hub has access to almost every paper a scientist would ever want to read, and can quickly obtain requested papers it doesn’t have, could the website truly topple traditional publishing? In a chat with ScienceInsider, Himmelstein concludes that the results of his study could mark “the beginning of the end” for paywalled research. This interview has been edited for clarity and brevity. …”
“Digital scholarly book files should be open and flexible. This is as much a design question as it is a business question for publishers and libraries. The working group returned several times to the importance of scholarly book files being available in nonproprietary formats that allow for a variety of uses and re-uses….

Another pointed out that the backlist corpus of scholarly books in the humanities and social sciences is an invaluable resource for text-mining, but the ability to carry out that research at scale means that the underlying text of the books has to be easy to extract. “It’s so important to be able to ‘scrape’ the text,” one participant said, using a common term for gathering machine-readable characters from a human-readable artifact (for example, a scanned page image)….

Whether a wider group of publishers and technology vendors will feel that they can enable these more expansive uses of a book file without upending the sustainability of the scholarly publishing system is a larger question than this project sought to answer….

Our working group also pointed to other challenges for the future of the monograph that have little to do with its visual representation in a user interface: for example, what might be a viable long-term business model for monographs, and whether a greater share of the publishing of monographs in a free-to-read, open-access model can be made sustainable….

As interest continues to grow in extending the open-access publishing model from journals to scholarly books, publishers and librarians are working to understand better the upfront costs that must be covered in order to operate a self-sustaining open-access monograph publishing program—costs that have been complicated to pin down because the production of any given scholarly book depends on partial allocations of staff time from many different staff members at a press, and different presses have different cost bases, as well….”
“‘What might peer review look like in 2030’ examines how peer review can be improved for future generations of academics and offers key recommendations to the academic community. The report is based on the lively and progressive sessions at the SpotOn London conference, held at the Wellcome Collection conference centre in November 2016.
It includes a collection of reflections on the history of peer review and on current issues such as sustainability and ethics, while also casting a look into the future, including advances such as preprint servers and AI applications. The contributions cover perspectives from researchers, librarians, publishers and others. …”
“In September 2016, 1,564 life science preprints were posted to eight of the largest preprint servers available to life scientists. This is a five-fold increase from September 2011, when only 300 preprints were posted and only three of the platforms examined hosted any life science preprints (the others did not yet exist or hosted none). The increase in submissions has prompted journal policy changes and attitude changes amongst funders and research institutions.
But what does this mean for medicine? Biomedical sciences still constitute only approximately 22% of preprints submitted to bioRxiv, with genetics research accounting for almost half of this figure. Clinical trials in particular are rarely posted, and account for less than 1%. The restrictions placed upon medical researchers by journals have been a leading cause of this. Some prominent medical publishers still abide by conservative policies: the American Heart Association, for instance, stated in correspondence as recently as September 2016 that it will not review preprinted submissions, and a similar policy was reported in communications from the American Association for Cancer Research in November 2015.
Concerns also exist surrounding the sharing of medical research before peer review. This is understandable, as poorly conducted research, particularly in medicine, can certainly be damaging. For this reason pharmaceutical companies, major funders of medical research, have been cautious about using preprint platforms. This is particularly true for clinical trial results, and stems from fears that sharing research publicly ahead of peer review could violate regulations regarding off-label or direct-to-consumer promotion.
The meeting concluded by reviewing the benefits of preprints seen so far and by encouraging wider uptake in all fields of the life sciences. The figures presented make clear that many significant shortcomings of journal publishing can be ameliorated by hosting preprints upon submission. Engagement with research is facilitated: 10% of the preprints posted on bioRxiv receive comments from other users on the site, 90% of submissions to the site are made publicly available, and all that pass the basic editorial check are shared within 24 hours of submission. Whether preprint platforms become widely adopted in biomedical research remains to be seen, but they have great potential if author behaviour and funder attitudes continue on their present trajectory.”