PeerJ Preprints Succumbs

“The number and range of preprint initiatives have been expanding for a few years now, with bioRxiv, medRxiv, ChemRxiv, and SocArXiv among a much longer list, some quite obscure.

The recent announcement that PeerJ Preprints won’t be posting any more preprints after the end of this month may represent the beginning of “preprint deflation,” the first obvious retreat in the preprint realm, a world that has been haunted by questions of financial viability since Day 1.

Even long-standing preprint servers like arXiv have wrestled with the expense and work involved in posting free drafts of papers. The systems, people, and bandwidth needed to support technology platforms long-term aren’t cheap. Preprint platforms are no exception. This year, arXiv moved from one part of Cornell to another, in what looked like an attempt to shuffle overheads out of budgetary approval scrutiny for a time — after all, as I’ve calculated, if you include these, arXiv is hemorrhaging money every year, and nobody seems to want to confront that possibility.

Other indications of preprint deflation are observable in the analyses I’ve done around bioRxiv and SocArXiv. The goals of these platforms — to encourage collaboration and pre-publication review — aren’t shared by most users, with authors increasingly using the platforms as marketing adjuncts or to meet Green OA requirements after successful submission to a journal….”

How journals are using overlay publishing models to facilitate equitable OA

“Preprint repositories have traditionally served as platforms to share copies of working papers prior to publication. But today they are being used for so much more, like posting datasets, archiving final versions of articles to make them Green Open Access, and another major development — publishing academic journals. Over the past 20 years, the concept of overlay publishing, or layering journals on top of existing repository platforms, has developed from a pilot project idea to a recognized and growing publishing model.

In the overlay publishing model, a journal performs refereeing services, but it doesn’t publish articles on its website. Rather, the journal’s website links to final article versions hosted on an online repository….”
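As a loose sketch of the mechanics described above (illustrative only: the class and field names, and the example arXiv link, are invented for this note and are not any journal's actual software), an overlay journal amounts to editorial metadata plus links back to repository-hosted articles:

```python
# A minimal sketch of the overlay idea: the "journal" stores only editorial
# metadata and a pointer to the version hosted on an existing repository.
from dataclasses import dataclass

@dataclass
class OverlayArticle:
    title: str
    authors: list[str]
    repository_id: str      # e.g. an arXiv identifier; the repository hosts the full text
    repository_url: str     # where the journal's website points readers
    accepted: bool = False  # set after the overlay journal's refereeing completes

def publish_issue(articles: list[OverlayArticle]) -> list[str]:
    """Return the links an overlay journal would list for one issue:
    only refereed articles, and only as links back to the repository."""
    return [a.repository_url for a in articles if a.accepted]

issue = [
    OverlayArticle("Example paper", ["A. Author"],
                   "2101.00001", "https://arxiv.org/abs/2101.00001", accepted=True),
]
print(publish_issue(issue))  # ['https://arxiv.org/abs/2101.00001']
```

The point of the structure is simply that acceptance and hosting are decoupled: refereeing lives with the journal, the files stay on the repository.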

A conceptual peer review model for arXiv and other preprint databases – Wang – 2019 – Learned Publishing – Wiley Online Library

Abstract:  A global survey conducted by arXiv in 2016 showed that 58% of arXiv users thought arXiv should have a peer review system. The current opinion is that arXiv should adopt the Community Peer Review model. This paper evaluates and identifies two weak points of Community Peer Review and proposes a new peer review model – Self-Organizing Peer Review. We propose a model in which automated methods of matching reviewers to articles and ranking both users and articles can be implemented. In addition, we suggest a strategic plan to increase recognition of articles in preprint databases within academic circles so that second-generation preprint databases can achieve faster and cheaper publication.
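The abstract does not spell out its matching algorithm, so the following is only a generic illustration, under the assumption that “automated matching of reviewers to articles” is done by text similarity; the function and reviewer names are invented for the example:

```python
# Illustrative only: one common way to frame automated reviewer-article matching
# is cosine similarity between an article and each reviewer's past abstracts.
import math
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_reviewers(article_abstract: str, reviewer_profiles: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidate reviewers by textual similarity to the submitted article."""
    art = tokens(article_abstract)
    scores = {name: cosine(art, tokens(profile)) for name, profile in reviewer_profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_reviewers(
    "self-organizing peer review for preprint databases",
    {"Reviewer A": "peer review and preprint servers",
     "Reviewer B": "galaxy formation and dark matter"},
))
```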

‘Broken access’ publishing corrodes quality

“I’m passionately in favour of everyone having open access to the results of the scientific research that their taxes pay for. But I think there are deep problems with one of the current modes for delivering it. The author-pays model (which I call broken access) means journals increase their profits when they accept more papers and reject fewer. That makes it all too tempting to subordinate stringent acceptance criteria to the balance sheet. This conflict of interest has allowed the proliferation of predatory journals, which charge authors to publish papers but do not provide the expected services and offer no quality control.

The problem is not addressed, in my view, by the Plan S updates announced in May …

But I know of a fix, and I have seen it in operation. I propose a model in which journals compete not for libraries’ or authors’ money, but for funds allocated by public-research agencies. The major agencies should call for proposals, similar to research-grant applications. Any publisher could apply with its strategic plans and multi-year budgets; applications would be reviewed by panels of scientists and specialists in scientific publishing.

The number of papers published would then become one of a journal’s qualities that could be assessed rather than the clearest route to economic viability. Other assessable factors could include turnaround times, quality of searchable databases, durability of archiving, procedures to deal with fraud and retractions, innovations in cooperative peer review, and the option of post-publication review. Although the updated Plan S calls for many such factors to be reported openly, it does not provide any clear mechanism to reward their implementation.

I call my proposed approach Public Service Open Access (PSOA). It uncouples the publisher’s revenues from the number of papers published, removing incentives to publish low-quality or bogus science. Crucially, scientists would decide how to allocate resources to journals….

The journal that I have directed for the past four years, Swiss Medical Weekly, has functioned in this way since 2001. Readers don’t pay for access, authors don’t pay for publication and reviewers are paid 50 Swiss francs (US$50) for each report. The journal’s costs (roughly 1,900 Swiss francs for each published paper, although automated systems might lower costs in the future) are covered by a consortium of Swiss university and cantonal hospitals, the Swiss Medical Association, the Swiss Academy of Medical Sciences and charities — which have evaluated our model and prioritized it over those of other publishers….

In the past, journals were only economically viable if their value was deemed worth their subscription fees, thereby collimating the publisher’s and the readers’ interests. A mechanism must be restored to align the financial interests of publishers with the research enterprise’s need for high-quality (rather than high-quantity) publications.”

How big are preprints? – Adam Day – Medium

“Current estimates put the total number of peer-reviewed research articles at around 100m. Growth is around 3.5–4m articles per annum and accelerating. ArXiv, the largest preprint server in the world, has published only 1.5m preprints and is currently putting out around 100k preprints per annum. ArXiv (and preprint servers generally) are accelerating too, but there’s still a lot of catching up to be done….”
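A back-of-the-envelope check using only the figures quoted above makes the gap concrete (the midpoint of the quoted 3.5–4m annual growth figure is an assumption):

```python
# Back-of-the-envelope check using only the figures quoted above.
total_articles = 100_000_000     # peer-reviewed articles to date (approx.)
articles_per_year = 3_750_000    # assumed midpoint of the quoted 3.5-4m per annum
arxiv_total = 1_500_000          # preprints posted by arXiv to date
arxiv_per_year = 100_000         # current arXiv output per annum

print(f"arXiv holds ~{arxiv_total / total_articles:.1%} of the article corpus")          # ~1.5%
print(f"arXiv's annual output is ~{arxiv_per_year / articles_per_year:.1%} of new articles")  # ~2.7%
```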

arXiv and the Symbiosis of Physics Preprints and Journal Review Articles: A Model

Abstract:  This paper recommends a publishing model that can help achieve the goal of reforming physics publishing. It distinguishes two complementary needs in scholarly communication. Preprints, increasingly important in science, are properly the vehicle for claiming priority of discovery and for eliciting feedback that will help with versioning. Traditional journal publishing, however, should focus on providing synthesis in the form of overlay journals that play the same role as review articles.

‘I can understand anger against publishers’ | Research Information

“I presently see a lot of anger against the big publishers, and think this anger is the biggest challenge right now. Scientists are the publishers’ main customers, and they’re very dissatisfied with what is going on. There are repeated calls to boycott this or that publisher, which I find somewhat ridiculous because publishers are doing a lot of things that scientists normally don’t acknowledge. For example, the whole issue of data storage and indexing and retrieval. This is a lot of work, and scientists seem to think it just comes from nowhere.

Some scientists are trying to do their own things, and in most cases I don’t think that’s particularly useful. I’m a theoretical physicist, so in my area almost all of the papers are on the arXiv. There are now a few arXiv overlay journals that basically use the data that is stored already on the arXiv, and that means they don’t have to worry about how to store the data, and how to make sure that it will remain accessible for the foreseeable future. But we’re doing science here that we hope will still be used in 100 to 300 years’ time, and someone has to think about how to make sure that this data will remain accessible. …”

ORCID at Cornell University | LYRASIS NOW

“Cornell is a founding member of ORCID, committed to the ORCID vision “where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions across disciplines, borders, and time.”

Cornell operates arXiv.org, where authors can authenticate and link their ORCID iD to their author page. Before ORCID was created, arXiv.org used local identifiers, and 15,000 local author identifiers were created and claimed in the first six years of platform use. In early 2015, arXiv.org introduced the ability to connect an ORCID iD instead and has seen rapid adoption ever since, with over 63,000 connections to author ORCID iDs within a few years. This accelerated uptake of ORCID shows understanding among the research community that a universal identifier is better than a local one. Local identifiers in arXiv.org are now deprecated, and only ORCID iDs are used….
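The excerpt does not say how arXiv implements the link, but ORCID’s public API exposes a standard OAuth authorization-code flow. The sketch below follows ORCID’s documented endpoints as I understand them, with placeholder client credentials and redirect URI; treat it as an assumption-laden illustration, not arXiv’s actual integration:

```python
# Sketch of ORCID's OAuth authorization-code flow (the generic mechanism behind
# "authenticate and link your ORCID iD"). NOT arXiv's code; client_id,
# client_secret, and redirect_uri are placeholders you would register with ORCID.
import requests

CLIENT_ID = "APP-XXXXXXXXXXXXXXXX"                    # placeholder
CLIENT_SECRET = "your-client-secret"                  # placeholder
REDIRECT_URI = "https://example.org/orcid/callback"   # placeholder

# Step 1: send the user to ORCID to sign in and authorize the connection.
authorize_url = (
    "https://orcid.org/oauth/authorize"
    f"?client_id={CLIENT_ID}&response_type=code&scope=/authenticate"
    f"&redirect_uri={REDIRECT_URI}"
)
# (in a web app, you would redirect the user's browser to authorize_url)

# Step 2: after authorization, ORCID redirects back with ?code=...;
# exchange that code for a token whose payload includes the verified iD.
def exchange_code(code: str) -> str:
    resp = requests.post(
        "https://orcid.org/oauth/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        },
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    # Per ORCID's public API docs, the token response carries the authenticated iD.
    return resp.json()["orcid"]
```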

Of the thousands of ORCID iDs claiming an affiliation with Cornell, only 672 have so far been connected with a Cornell NetID through the ORCID authentication process. This means more promotion and outreach are needed to get more people to connect, which is a common theme among research institutions adopting ORCID in the US….”

Editorial board mutinies: are they what’s needed or are they part of the problem?

“However, what I find striking is that the combined number of articles published by Lingua and Glossa has doubled since 2014, far outpacing the annual 4% growth in scholarly articles.

Does this mean linguistics is a burgeoning field? Or that these journals have won share from others? Or are we, perhaps, observing induced demand in action?

(Induced demand is a phenomenon where adding supply capacity prompts increased demand. A common example is new roads increasing traffic levels.) …

Twenty years ago, the authors of the Budapest Open Access Declaration thought that new, digital forms of publishing would cost less than the traditional analogue methods. Unfortunately, as the financial travails at PLoS illustrate, we now know that digital publishing is far from low-cost. Worse, despite two decades of investment, costs are increasing.

This latter point was brought home to me when I saw a tweet about arXiv’s costs. In 2010, arXiv had 4 staff and total expenses of $420,000. For 2019, arXiv has budgeted 10 staff and $2,070,000 in expenses. So, expenses have grown five-fold over the past decade, a period which saw postings double. To put it another way, the cost per posting has risen from $5.80 to $14.40 over the past decade, to roughly 250% of its former level….
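A quick reproduction of that arithmetic from the quoted figures (the posting counts are derived, not quoted):

```python
# Reproducing the per-posting arithmetic from the figures quoted above.
expenses_2010, expenses_2019 = 420_000, 2_070_000
cost_per_posting_2010, cost_per_posting_2019 = 5.80, 14.40

postings_2010 = expenses_2010 / cost_per_posting_2010   # ~72,000 postings
postings_2019 = expenses_2019 / cost_per_posting_2019   # ~144,000 postings (roughly double)

print(f"postings: {postings_2010:,.0f} -> {postings_2019:,.0f}")
print(f"expense growth: {expenses_2019 / expenses_2010:.1f}x")                              # ~4.9x ("five-fold")
print(f"per-posting cost: {cost_per_posting_2019 / cost_per_posting_2010:.0%} of 2010")     # ~248%
```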

One reason costs continue to climb is that digital makes possible desirable things that were impossible before. For example, digital makes it possible to publish associated datasets and to disambiguate authors, funders, and institutions, and it has led to new, complex standards for things like content capture and metadata to improve discoverability and machine readability.

Many of these new digital things have become standard fixtures in any quality scholcom solution, setting expectations for the future. cOAlition S’ Plan S doesn’t just seek to flip journals to open access; it sets mandatory standards on how they should be published, standards with which many researchers agree. It’s hardly a surprise that the original 60 things publishers did in 2012 had grown to 102 by 2018, with many of the additions being digital….”