Abstract: For many decades, the hyperinflation of subscription prices for scholarly journals has concerned scholarly institutions. After years of fruitless efforts to solve this “serials crisis”, open access has been proposed as the latest potential solution. However, the prices for open access publishing are also high and are rising well beyond inflation. What has been missing from the public discussion so far is a quantitative approach to determining the actual costs of efficiently publishing a scholarly article using state-of-the-art technologies, so that informed decisions can be made about appropriate price levels. Here we provide a granular, step-by-step calculation of the costs associated with publishing primary research articles, from submission, through peer review, to publication, indexing and archiving. We find that these costs range from less than US$200 per article on modern, large-scale publishing platforms using post-publication peer review, to about US$1,000 per article in prestigious journals with rejection rates exceeding 90%. The publication costs for a representative scholarly article today come to around US$400. We discuss the additional non-publication items that make up the difference between publication costs and final price.
“Scholarly publishing is a world of maddening inefficiencies. It’s also an unavoidable part of scientific discussion, and it remains one of the only features of academic life that offers some semblance of a meritocratic measure of a scholar’s contributions to the field. “Publish or perish,” as the adage goes, and publishing means dealing with publishers.
Yet every step of the typical academic publication process is fraught with practices that would quickly drive away the customer base of almost any other industry….”
The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as publication counts, citation counts, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”
In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation count and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.
Academic publishing has changed considerably; now we need to reconsider how we measure success.
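The h-index mentioned above is simple to state but easy to misread: it is the largest h such that a researcher has at least h papers with at least h citations each, which is why it is directly inflated by self-citations and surging paper counts. A minimal sketch of the computation (the sample citation counts are illustrative, not from the study):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each.

    Note: raw citation counts are used as-is, so self-citations inflate
    the result exactly as the study observes.
    """
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4:
# four papers have at least 4 citations each, but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The same pool of papers with a handful of self-citations added can cross the next threshold, which is one concrete way the metric “ceases to be a good measure.”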
“It has become a fact of academic life, that when researchers publish papers in academic journals, they sign away the copyright to their research, or licence it for distribution. However, from a historical perspective this practice is a relatively recent phenomenon. In this post, Aileen Fyfe explores how copyright has become intertwined with scholarly publishing and presents three insights from the history of the Royal Society that inform ongoing debates around openness in research and scholarly communication….”
“Scientific progress is anchored in the way science is communicated to other scientists. Research papers are published through an antiquated system: scientific journals. This system, enforced by the scientific journals’ lobby, enormously slows down the progress of our society. This article analyzes the limitations of the current scientific publishing system, focusing on journals’ interests, their consequences on science and possible solutions to overcome the problem….”
“At PLOS ONE we like to speed up the publication process wherever we can. We like science to be out in the open, and publication of peer-reviewed research to take place without undue delays, so that others can use and build upon the findings. Aligned with our founding mission, we aim to be as fast as we can while remaining true to our publication criteria and without compromising the quality of the peer review process. To ensure common editorial standards across the journal we have also increased desk rejects of submissions that fail our editorial criteria. This rate now stands at around 23%.
In the past few months we have seen a few exciting improvements in the speed of manuscript handling at PLOS ONE. During April our median time to first editorial decision after peer review dropped to 42 days. It was at 53 days a year ago. And our median time from submission to publication online has also dropped, to 165 days in April, down from 183 days earlier in 2018. This means that the manuscripts we publish now move 18 days faster through the full peer review process than a year ago, and the first decision after peer review is reached 11 days earlier. A more comprehensive list of long-term metrics is appended below and on our web page. We are very grateful to the members of our Editorial Board and our reviewers who have facilitated fast peer review at the journal….”
Abstract: The changing world of scholarly communication and the emerging new wave of ‘Open Science’ or ‘Open Research’ has brought to light a number of controversial and hotly debated topics. Evidence-based rational debate is regularly drowned out by misinformed or exaggerated rhetoric, which does not benefit the evolving system of scholarly communication. This article aims to provide a baseline evidence framework for ten of the most contested topics, in order to help frame and move forward discussions, practices, and policies. We address issues around preprints and scooping, the practice of copyright transfer, the function of peer review, predatory publishers, and the legitimacy of ‘global’ databases. These arguments and data will be a powerful tool against misinformation across wider academic research, policy and practice, and will inform changes within the rapidly evolving scholarly publishing system.
“Some publications charge up to $3,900 (Rs 2.7 lakh) as APCs, which leaves researchers from low- and middle-income countries such as India much poorer. And if academic publication is skewed in favour of high-income countries, science becomes skewed in their favour too.
Explaining real-world phenomena objectively has always been touted as the “white man’s burden” and has been the backbone of the colonising mission. Often only researchers and academics from certain privileged pockets have the resources to conduct and publish cutting-edge research. After all, they enjoy superior infrastructure and funding opportunities.
This disparity is exacerbated when they have sufficient resources to publish their work, often allowing knowledge to be created by only a certain kind of individual. Further, their blinkers and biases may continue to play a role in what they propose is a universal phenomenon – a form of neo-colonialism. Therefore, making science open access from both the production and the consumption perspectives is essential to make knowledge more democratic….”
“The chart shows the same measures taken (using the same methods and data sources) over successive years. The lines should match, but in more recent years, they diverge. The data varies depending on when the readings were taken.
Notice how, for example, the number of articles published in 2016 varies by 14% depending on when the index was consulted. The data suggest that articles continued to be published after the year they were published in. The trends suggest a catastrophic fall-off in output.
Clearly something is wrong. If publication output had dropped by 90%+ since 2016, every scholarly publishing stakeholder would be both aware and on high alert!…
The divergence illustrated in the chart occurs because it takes time for the major indexes to count publication outputs. Our industry lacks common infrastructure for gathering basic measures, leaving it instead to the thousands of publishers to deposit information. Even where infrastructure exists – such as CrossRef – publishers are not consistent about how quickly, how much, or even if they deposit information about their outputs. Additionally, the formats and standards they use do not always include the most effective metadata for characterizing publications (case in point: clearly and consistently specifying open access articles in hybrid journals)….
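The year-over-year divergence described above can be observed directly: the public Crossref REST API lets anyone ask for the total count of works published in a given year, and repeating the identical query months apart returns different totals as late deposits trickle in. A minimal sketch, assuming the public `api.crossref.org/works` endpoint and its `from-pub-date`/`until-pub-date` filters (the helper names are my own):

```python
import json
from urllib.parse import urlencode

def yearly_count_url(year):
    """Build a Crossref works query that returns only the total count
    of works published in `year` (rows=0 suppresses the records)."""
    params = {
        "filter": f"from-pub-date:{year}-01-01,until-pub-date:{year}-12-31",
        "rows": 0,
    }
    return "https://api.crossref.org/works?" + urlencode(params)

def extract_total(response_json):
    """Pull total-results out of a Crossref API response body."""
    return json.loads(response_json)["message"]["total-results"]

print(yearly_count_url(2016))
# Fetching this URL at two different dates and comparing the two
# extract_total() values reproduces the chart's divergence.
```

This is only a measurement sketch; it does nothing about the underlying inconsistency in what and when publishers deposit, which is the article’s actual complaint.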
One might be tempted to think that the state of our data in scholarly publishing is “par for the course” – surely all industries are like this. However, that is not the case….
Basic metadata in our information industry should be like basic hygiene in healthcare. Boring but necessary. If scholarly publishers are stewards of the world’s evidence base, then surely, we need to get our own evidence in order?”