Open access publishing is the ethical choice | Wonkhe

“I had a stroke half a decade ago and found I couldn’t access the medical literature on my extremely rare vascular condition.

I’m a capable reader, but I couldn’t get past the paywalls – which seemed absurd, given that most research is publicly funded. I had already long been an open access advocate by that point, but this strengthened my resolve.

The public is often underestimated. Keeping research locked behind paywalls under the assumption that most people won’t be interested in, or capable of, reading academic research is patronising….

While this moral quandary should not be passed to young researchers, there may be benefits to them in taking a firm stance. Compared to their senior colleagues, early career researchers are less likely to have grants to pay the article processing charges that make their work open access. Early career researchers are also the ones inadvertently paying the extortionate subscription fees to publishers. According to data from the Higher Education Statistics Agency (HESA), the amount of money UK universities fork out each year to access paywalled content from Elsevier – the largest academic publisher in the world – could pay 1,028 academic researchers a salary of £45,000 per year.

We know for-profit publishers, such as Elsevier, hold all the cards with respect to those prestigious titles. What we need are systematic “read and publish” deals that allow people to publish where they want without having to find funding for open access….

The current outlook for prospective researchers to secure an academic position at a university is compromised because so much money is spent propping up for-profit, commercial publishers. Rather than focusing on career damage to those who can’t publish with an Elsevier title, we should focus on the opportunity cost in hundreds of lost careers in academia….”

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous bureaucratic practice of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, its editor might decide to make it obligatory for authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality by several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references. Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to it that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good one (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable; it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive. I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature. In publishing that note, the editors of Nature removed some references – from the very paper that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a reference, perhaps more relevant, to a paper that we had not even read at that point! …

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”

Guest Post by Jean-Claude Guédon: Scholarly Communication and Scholarly Publishing – OASPA

“In December, I responded to an “Open Post” signed by a diverse group of scholarly publishers: commercial, learned societies, and university presses. Despite differing perspectives and objectives, all the signatories opposed “immediate green OA”. Their unanimity apparently rested on one concept: the “version of record”. 

Invited to contribute something further to this discussion (and I thank OASPA for this opportunity), I propose exploring how scholarly publishing should relate to scholarly communication. Ostensibly aligned, publishing and communication have diverged. Journals and the concept of “version of record” are not only a legacy from print, but their roles have shifted to the point where some processes involved in scholarly publishing are getting in the way of optimal scholarly communication, as the present pandemic amply reveals. Taking full advantage of digital affordances requires moving in different directions. This is an opportunity, not a challenge. Platforms and “record of versions” will eventually supersede journals and their articles, and now is the time to make some fundamental choices….”

Requiem for impact factors and high publication charges: Accountability in Research: Vol 0, No ja

Abstract:  Journal impact factors, publication charges and assessment of quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings, to demonstrate how important their journals are, and researchers strive to publish in perceived top journals, despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impacts are accurate and whether high publication charges borne by the research community are justified, bearing in mind that they also collectively provide free peer-review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact with over 30,000 open access articles becoming available and accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the way in which they are used by researchers, managers, employers and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals and we support open access publishing at a modest, affordable price to benefit research producers and consumers.

Openness Profile: Modelling research evaluation for open scholarship | Zenodo

“Knowledge Exchange (KE) has published the report ‘Openness Profile: Modelling research evaluation for open scholarship’.

The report describes mechanisms and approaches to improve and incentivize the recording, the evaluation and the recognition of contributions to Open Scholarship practice. The report presents the Openness Profile and how it can help address existing gaps in the assessment of open science.

The Openness Profile is a digital resource: a portfolio of a research contributor’s outputs and activities, accessible in a single place. Academic and non-academic open scholarship activities become visible and more easily recognised. The Openness Profile models how research evaluation in an open science context can be improved. Expected benefits are highlighted and requirements listed, and recommendations are provided to various stakeholders on how to establish the Openness Profile as a routine part of research evaluation.

Over 80 individual stakeholders from 48 different organisations provided input to this report on research assessment and open scholarship. The work and writing were done by consultants Fiona Murphy and Phill Jones of MoreBrains Cooperative, together with the KE Open Scholarship Research Evaluation task & finish group….”

Incentivization Blueprint — Open Research Funders Group

“A growing number of funders are eager to encourage grantees to share their research outputs – articles, code and materials, and data. To accelerate the adoption of open norms, deploying the right incentives is of paramount importance. Specifically, the incentive structure needs to both reduce its reliance on publication in high-impact journals as a primary metric, and properly value and reward a range of research outputs.

This Incentivization Blueprint seeks to provide funders with a stepwise approach to adjusting their incentivization schemes to more closely align with open access, open data, open science, and open research. Developed by the Open Research Funders Group, the Blueprint provides organizations with guidance for developing, implementing, and overseeing incentive structures that maximize the visibility and usability of the research they fund.

A number of prominent funders have committed to taking steps to implement the Incentivization Blueprint. Among them are the following: …”

Open Research Funders Group (ORFG) | DORA

“The ORFG released guidance for funders called Incentivizing the sharing of research outputs through research assessment: a funder implementation blueprint. The group created the document to help funders encourage researchers to maximize the impact of their work by openly sharing research outputs. The blueprint identifies three goals for success:

change the perception that publication in high-impact journals is the only metric that counts;
provide demonstrable evidence that, while journal articles are important, we value and reward all types of research outputs; and
ensure that indicators like the venue of publication or journal impact factor are not used as surrogate measures of quality in researcher assessment.

To do this, the blueprint provides three steps with concrete actions for funders: 1) policy development and declarations, 2) implementation, and 3) engagement.  Template language for funders is included in the document to promote easy uptake….”

Recognition and rewards in the Open Era: Turning thoughts into actions | Open Working

“The TU Delft Open Science programme held its very first thematic session on the Recognition and Rewards cross-cutting theme on October 5, 2020. The Open Science Programme currently has 5 projects and 3 cross-cutting themes, from FAIR software to Open Education. This means that the programme core team is composed of members from many different departments (not only within the Library), bringing in their diverse perspectives and skills! But this also poses a challenge on teamwork – we need a way for us to all stay in touch, be able to see and learn from each other’s work, and contribute and provide feedback – hence the idea of the thematic sessions.

Ingrid Vos, the leader of the Recognition and Rewards theme, has kindly volunteered to lead this first thematic session. Since this theme relates to everyone’s work within the Open Science Programme, Ingrid wanted to make sure everyone can be effectively engaged in the session and their voices can be heard – more on this below.

Key takeaways:

A re-examination of rewards and recognition is needed to further fuel the cultural and behavioural changes towards open science.
TU Delft’s work in this area builds upon VSNU’s “Room for everyone’s talent” position paper.
Every university in the Netherlands has a committee on Recognition & Rewards. The TU Delft committee is led by Ena Voûte.
The Open Science Programme team had fruitful discussions around open research and education behaviours and “products”, how to evaluate, appreciate and reward these, as well as emerging career paths.

We’d love to hear your ideas and thoughts, both on rewards and recognition and on how you’d like to contribute and participate in these discussions – please use the comment section of this post! …”

Indonesia should stop pushing its academics to chase empty indicators – Nikkei Asia

“An assessment system that predominantly evaluates research performance based on journal output and citations is steering academics from developing countries like mine to chasing quantity over quality. And being exploited while doing so.

Researchers in Indonesia are the second most likely in the world to publish in dubious journals that print articles for a fee without proper scientific peer review, a process where several experts in the field review the merit of the research, according to a new study by economists Vit Machacek and Martin Srholec.

These predatory journals prey on academics whose career progressions, and therefore salary increase, are determined by credit points. They exploit the processing fees that authors pay to make articles open to the public. They pocket the payment, an average of $178, an amount close to the basic salary of an entry-level lecturer in a state university in Indonesia, without facilitating proper peer review. The papers published by predatory journals are often low-quality, with typographical and grammatical errors….

In addition to the predatory journal problem, the metric also discourages scientific collaboration. Because the metric values article counts, academics who want to turn out several journal articles from a data set have an incentive to hold on to the data rather than share it for other scientists to analyze….”